801

"Fusão de imagens médicas para aplicação em sistemas de planejamento de tratamentos em radioterapia" / MEDICAL IMAGES FUSION FOR APPLICATION IN TREATMENT PLANNING SYSTEMS IN RADIOTHERAPY

Ros, Renato Assenci 29 June 2006 (has links)
Software for medical image fusion was developed for use in the CAT3D radiotherapy and MNPS radiosurgery treatment planning systems. A mutual-information-maximization method registers images of different modalities by measuring the statistical dependence between voxel pairs. Alignment by reference points provides an initial approximation for the nonlinear optimization, carried out with the downhill simplex method on the joint histogram. The coordinate transformation uses trilinear interpolation and searches for the global maximum in a six-dimensional space, with three degrees of freedom for translation and three for rotation, under a rigid-body model. The method was evaluated with CT, MR, and PET images from the Vanderbilt University database, verifying its accuracy by comparing the transformation coordinates of each image fusion with gold-standard values. The median alignment error was 1.6 mm for CT-MR fusion and 3.5 mm for PET-MR fusion, with the gold-standard accuracy estimated at 0.4 mm for CT-MR and 1.7 mm for PET-MR. The maximum errors were 5.3 mm for CT-MR and 7.4 mm for PET-MR, and 99.1% of the alignment errors were smaller than the image voxel size. The mean computing time per fusion was 24 s. The software was successfully completed and introduced into the routine of 59 radiotherapy services, of which 42 are in Brazil and 17 elsewhere in Latin America. The method imposes no restrictions on differing image resolutions, pixel sizes, or slice thicknesses, and alignment can be performed with axial, coronal, or sagittal images.
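The registration pipeline described above (mutual information computed from a joint histogram, maximized by downhill simplex over a rigid transform) can be sketched as follows. This is an illustrative 2-D, translation-only toy version, not the thesis software; all function names are hypothetical, and SciPy's Nelder-Mead method is its implementation of downhill simplex.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def mutual_information(a, b, bins=32):
    # Joint histogram of intensities -> statistical dependence between voxel pairs.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0) on empty histogram cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def register_translation(fixed, moving, x0=(0.0, 0.0)):
    # Translation-only stand-in for the 6-DOF rigid-body search;
    # Nelder-Mead is SciPy's downhill simplex.
    def cost(t):
        shifted = affine_transform(moving, np.eye(2), offset=t, order=1)
        return -mutual_information(fixed, shifted)  # maximize MI
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x
```

The full method replaces the 2-D translation with a 6-parameter rigid transform and trilinear (rather than bilinear) interpolation, but the cost structure is the same.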
802

Binocular geometry and camera motion directly from normal flows. / CUHK electronic theses & dissertations collection

January 2009 (has links)
Active vision systems are mobile platforms equipped with one or more cameras, perceiving what happens around them from the image streams the cameras grab. Such systems have a few fundamental tasks to tackle: they need to determine from time to time what their motion in space is, and, should they have multiple cameras, they need to know how the cameras are positioned relative to one another so that visual information collected by the respective cameras can be related. In the simplest form, the tasks are about finding the motion of a camera, and finding the relative geometry of every two cameras, from the image streams the cameras collect. / On determining the ego-motion of a camera, there have been many previous works as well. Again, however, most of them require tracking distinct features in the image stream or inferring the full optical flow field from the normal flow field. Different from the traditional works, and using neither motion correspondences nor the epipolar geometry, a new method is developed that again operates on the normal flow data directly. The method has a number of features. It can employ every normal flow data point, thus requiring less texture in the imaged scene. A novel formulation of what the normal flow direction at an image position has to offer about the camera motion is given, and this formulation allows a locus of the possible camera motion to be outlined from every data point. With enough data points, or normal flows, over the image domain, a simple voting scheme lets the various loci intersect and pinpoint the camera motion. / On determining the relative geometry of two cameras, a number of calibration techniques already exist in the literature. They are based on the presence of either specific calibration objects in the imaged scene, or a portion of the scene that is observable by both cameras.
However, in active vision, because of the "active" nature of the cameras, it can happen that a camera pair does not share much, or anything, in their visual fields. In the first part of this thesis, we propose a new solution method for the problem. The method demands image data under a rigid motion of the camera pair, but unlike the existing motion-correspondence-based calibration methods it does not estimate optical flows or motion correspondences explicitly. Instead it estimates the inter-camera geometry from the monocular normal flows. Moreover, we propose a strategy for selecting optimal groups of normal flow vectors to improve the accuracy and efficiency of the estimation. / The relative motion between a camera and the imaged environment generally induces a flow field in the image stream captured by the camera. This flow field, which describes the motion correspondences of the various image positions over the image frames, is referred to as the optical flow in the literature. If the optical flow field of every camera could be made available, the motion of a camera could readily be determined, and so could the relative geometry of two cameras. However, due to the well-known aperture problem, what is directly observable at any image position is generally not the full optical flow, but only the component of it normal to the iso-brightness contour of the intensity profile at that position. This component is widely referred to as the normal flow. Inferring the full flow field from the normal flow field is not impossible, but it requires specific assumptions about the imaged scene, such as that it is smooth almost everywhere. / This thesis aims at exploring how the above two fundamental tasks can be tackled by operating on the normal flow field directly. The objective is that, with no full flow inferred in the process, and in turn no specific assumption made about the imaged scene, the developed methods are applicable to a wider set of scenes.
The thesis consists of two parts. The first part is about how the inter-camera geometry of two cameras can be determined from the two monocular normal flow fields. The second part is about how a camera's ego-motion can be determined by examining only the normal flows the camera observes. / We have tested the methods on both synthetic image data and real image sequences. Experimental results show that the developed methods are effective in determining inter-camera geometry and camera motion from normal flow fields. / Yuan, Ding. / Adviser: Ronald Chung. / Source: Dissertation Abstracts International, Volume: 70-09, Section: B, page: . / Thesis submitted in: October 2008. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 121-131). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
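The "locus plus voting" idea above — each normal flow outlines a locus of feasible camera motions, and enough votes pinpoint the motion — can be illustrated for the special case of pure forward translation, where the unknown reduces to the focus of expansion (FOE) in the image. This is a sketch under that simplifying assumption, not the thesis's full formulation; all names are hypothetical:

```python
import numpy as np

def vote_foe(points, normals, signed_flows, candidates):
    # Each normal flow outlines a half-plane locus of feasible FOEs:
    # under pure translation the flow at image point x points away from
    # the FOE e, so its component along the unit normal n must share the
    # sign of (x - e) . n. Candidates consistent with a datum get a vote.
    votes = np.zeros(len(candidates))
    for x, n, m in zip(points, normals, signed_flows):
        votes += (np.sign((x - candidates) @ n) == np.sign(m))
    return candidates[np.argmax(votes)], votes
```

With enough normal flows spread over the image, only candidates near the true FOE stay consistent with every half-plane constraint, so the vote peak pinpoints it.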
803

Artificial intelligence system for continuous affect estimation from naturalistic human expressions

Abd Gaus, Yona Falinie January 2018 (has links)
The analysis and automatic estimation of affect from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings, and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously (along a continuum, e.g. from -1 to +1) estimate affective behaviour in terms of latent dimensions (e.g. arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals, and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar Wavelet Transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, at the convolutional and fully connected layers. Convolutional Neural Networks such as AlexNet, VGGFace, and ResNet were selected as deep learning architectures for feature extraction on facial expression images. A multimodal fusion scheme is then applied, fusing deep learning features and hand-crafted features together to improve performance. In the machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per time step.
In the second stage, a subsequent model such as a Time Delay Neural Network, Long Short-Term Memory, or Kalman Filter models the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion, and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research presented in this thesis develops a systematic approach to automatically and continuously estimate affect values from naturalistic human expressions. The proposed system, consisting of feature smoothing, deep learning features, a two-stage regression framework, and fusion of modalities by mathematical equations, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation, and more broadly for building real-time emotion recognition systems for human-computer interaction.
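The two-stage idea — a frame-wise regressor followed by a temporal model that smooths its outputs — can be sketched as below. Ridge regression stands in for SVR, and a scalar Kalman filter stands in for the subsequent temporal models (TDNN/LSTM/Kalman); all names and parameters are illustrative, not the thesis code:

```python
import numpy as np

def stage1_fit_predict(X_train, y_train, X_test, lam=1.0):
    # Stage 1: frame-wise regressor (ridge here as a lightweight stand-in for SVR).
    d = X_train.shape[1]
    w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d), X_train.T @ y_train)
    return X_test @ w

def stage2_kalman(z, q=1e-3, r=1e-1):
    # Stage 2: scalar random-walk Kalman filter that smooths the per-frame
    # affect estimates, so the temporal model is not biased by the high
    # variability between consecutive frames.
    x, p = z[0], 1.0
    out = np.empty_like(z)
    out[0] = x
    for k in range(1, len(z)):
        p = p + q                   # predict (random-walk state)
        K = p / (p + r)             # Kalman gain
        x = x + K * (z[k] - x)      # update with the noisy stage-1 estimate
        p = (1 - K) * p
        out[k] = x
    return out
```

The ratio q/r controls how strongly the second stage trusts the slowly changing emotional dynamics over the frame-to-frame estimates.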
804

Estudo experimental dos regimes de operação da densidade do plasma no tokamak start e sistemas de diagnósticos / Experimental study of the operation regimes of the plasma density in Tokamak Start and diagnostic systems

Celso Ribeiro 12 December 2003 (has links)
Low aspect ratio tokamaks (LARTs), also called spherical tokamaks (STs), are devices theoretically conceived to have the high toroidal beta (beta_T) typical of a reversed-field pinch (RFP) or spheromak (beta_T -> infinity, but with geometry similar to an ultra-LART), while retaining the confinement of a conventional tokamak together with improved stability. Here beta_T = (1/V_p) Int p dV / (B_T0^2 / 2 mu_0), B_T0 = B_T^vac(R_0), and p is the plasma pressure. START was the first LART to demonstrate these features at temperatures of T_i ≈ T_e ≈ 10^2 eV. The work described in this thesis aimed at developing and using diagnostic systems for studying the plasma density operating regimes in START. A double-pass interferometer along the horizontal midplane of the vessel was developed to measure the line-averaged plasma density. It consisted of an HCN laser (337 μm), an optical table, and Schottky-diode detectors, and measured densities in the range n̄_e ≈ 10^18 to 2x10^20 m^-3. A pre-ionization system based on electron cyclotron resonance waves, built around a 6 GHz klystron generator with P_ECR-P <= 3 kW, was developed to assist gas breakdown in the tokamak regime, reducing the magnetic flux needed to break down and sustain the discharge and increasing its reproducibility. At filling pressures of 1.1-1.5x10^-5 mbar, greater discharge reproducibility and a shorter gas breakdown time were observed with increasing P_ECR-P. A cryogenic (8-9 K) deuterium (D2) pellet injector was also commissioned, with the objective of raising the plasma density. It was a single-pellet gas-gun type, with pellet masses of about 3x10^19 atoms and velocities of 50-400 m/s, propelled by H2 gas (0-4 bar). Pellets were injected from the top of START, almost vertically, into the high-magnetic-field region of the vessel, in ohmic and neutral-beam-heated (NBI) regimes of high and low beta_T, substantially increasing the density. With these diagnostics, the high- and low-density regimes were studied. With gas puffing, n̄_e^max = 1.0x10^20 m^-3 was obtained (Greenwald density fraction N_G^max ≈ 0.9), and contraction of the plasma column and collapse of the discharges in two phases were observed, similar to what occurs in conventional tokamaks. The density increased with NBI, but the Greenwald fraction did not. Via pellet injection, high densities (N_G ≈ 1.0-1.1), high particle fuelling efficiency (≈ 100%), and improved confinement were obtained simultaneously in discharges with high beta_T (≈ 23-27%). The pressure limit was probably reached.
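For reference, the line-averaged density follows from the interferometer phase shift through the standard plasma-interferometry relation delta_phi = (number of passes) x r_e x lambda x n̄_e x L, valid when the laser frequency far exceeds the plasma frequency. The sketch below uses this textbook relation with illustrative numbers; the 0.6 m path length is a placeholder, not the START value:

```python
R_E = 2.8179403262e-15  # classical electron radius (m)

def line_avg_density(delta_phi, wavelength, path_length, passes=2):
    # Invert delta_phi = passes * r_e * wavelength * n_bar * L;
    # passes=2 corresponds to a double-pass interferometer like the one above.
    return delta_phi / (passes * R_E * wavelength * path_length)
```

For an HCN laser at 337 μm, a density of 10^19 m^-3 over a placeholder 0.6 m double-pass path corresponds to a phase shift of roughly 11 rad, which is why far-infrared wavelengths suit this density range.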
806

Fusão nuclear e processos periféricos nos sistemas 16,18O + 58,60,64Ni / Nuclear fusion and peripheral processes in 16,18O + 58,60,64Ni systems

Silva, Cely Paula da 16 August 1996 (has links)
With the objective of investigating heavy-ion fusion and elastic scattering processes, we measured fusion cross sections for the 16,18O + 58,60,64Ni systems in the bombarding energy range 38.0 <= E_LAB <= 72.0 MeV, and elastic scattering cross sections for the 18O + 58,60,64Ni systems in the interval 35.1 <= E_LAB <= 55.1 MeV. The fusion angular distributions were obtained at angles between 2.0 and 18.0 degrees (lab), whereas for elastic scattering the angles varied between 17.5 and 170.0 degrees (c.m.). Our results for the evaporation-residue excitation functions indicate that, at energies just below the Coulomb barrier, the 18O + 58Ni system presents a significant enhancement of the fusion cross section when compared to the systematics for the even nickel isotopes obtained from our data and from the literature. The standard deviation of the interaction radius, extracted from the fusion data for the 16,18O + 58,60,64Ni systems at sub-barrier energies, is compared to those associated with surface vibration modes of nuclei in states of low excitation energy and with pairing. Non-local effects were also investigated for these measurements. We also performed an optical-model analysis of the elastic scattering data for twenty-seven angular distributions. Finally, a connection between the fusion enhancement and the threshold anomaly at energies close to the barrier is presented.
807

Multiple Target Tracking in Realistic Environments Using Recursive-RANSAC in a Data Fusion Framework

Millard, Jeffrey Dyke 01 December 2017 (has links)
Reliable track continuity is an important characteristic of multiple target tracking (MTT) algorithms. In the specific case of visually tracking multiple ground targets from an aerial platform, challenges arise due to realistic operating environments such as video compression artifacts, unmodeled camera vibration, and general imperfections in the target detection algorithm. Some popular visual detection techniques include Kanade-Lucas-Tomasi (KLT)-based motion detection, difference imaging, and object feature matching. Each of these algorithmic detectors has fundamental limitations in regard to providing consistent measurements. In this thesis we present a scalable detection framework that simultaneously leverages multiple measurement sources. We present the recursive random sample consensus (R-RANSAC) algorithm in a data fusion architecture that accommodates multiple measurement sources. Robust track continuity and real-time performance are demonstrated with post-processed flight data and a hardware demonstration in which the aircraft performs automated target following. Applications involving autonomous tracking of ground targets occasionally encounter situations where semantic information about targets would improve performance. This thesis also presents an autonomous target labeling framework that leverages cloud-based image classification services to classify targets that are tracked by the R-RANSAC MTT algorithm. The communication is managed by a Python robot operating system (ROS) node that accounts for latency and filters the results over time. This thesis articulates the feasibility of this approach and suggests hardware improvements that would yield reliable results. Finally, this thesis presents a framework for image-based target recognition to address the problem of tracking targets that become occluded for extended periods of time. This is done by collecting descriptors of targets tracked by R-RANSAC. 
Before new tracks are assigned an ID, an attempt to match visual information with historical tracks is triggered. The concept is demonstrated in a simulation environment with a single target, using template-based target descriptors. This contribution provides a framework for improving track reliability when faced with target occlusions.
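One way to picture the data-fusion side described above — several algorithmic detectors feeding one tracker — is a time-ordered merge of asynchronous measurements, each tagged with its source and noise covariance so the tracker can weight them. This is a generic sketch, not the thesis's R-RANSAC or ROS implementation; all names are hypothetical:

```python
import heapq

class MeasurementFusionQueue:
    """Merges asynchronous detections from several sources (e.g. KLT-based
    motion detection, difference imaging, feature matching) into one
    time-ordered stream, tagging each with its source and covariance so a
    downstream tracker can weight measurements by reliability."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal timestamps never compare payloads
    def push(self, t, source, z, cov):
        heapq.heappush(self._heap, (t, self._seq, source, z, cov))
        self._seq += 1
    def pop_batch(self, t_until):
        # Drain all measurements up to time t_until, oldest first.
        batch = []
        while self._heap and self._heap[0][0] <= t_until:
            t, _, s, z, cov = heapq.heappop(self._heap)
            batch.append((t, s, z, cov))
        return batch
```

A scalable framework of this shape lets new detectors be added without touching the tracker: each just pushes timestamped measurements with its own covariance.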
808

Joint recovery of high-dimensional signals from noisy and under-sampled measurements using fusion penalties

Poddar, Sunrita 01 December 2018 (has links)
The presence of missing entries poses a hindrance to data analysis and interpretation. The missing entries may occur for a variety of reasons, such as sensor malfunction, limited acquisition time, or unavailability of information. In this thesis, we present algorithms to analyze and complete data containing many missing entries. We consider the recovery of a group of signals, given a few under-sampled and noisy measurements of each signal. This involves solving ill-posed inverse problems, since the number of available measurements is considerably smaller than the dimensionality of the signal that we aim to recover. In this work, we consider different data models to enable joint recovery of the signals from their measurements, as opposed to independent recovery of each signal. This prior knowledge makes the inverse problems well-posed. While compressive sensing techniques have been proposed for low-rank or sparse models, such techniques have not been studied to the same extent for other models, such as data appearing in clusters or lying on a low-dimensional manifold. In this work, we consider several data models arising in different applications, and present theoretical guarantees for the joint reconstruction of the signals from few measurements. Our proposed techniques make use of fusion penalties, which are regularizers that promote solutions with similarity between certain pairs of signals. The first model that we consider is that of points lying on a low-dimensional manifold, embedded in a high-dimensional ambient space. This model is apt for describing a collection of signals, each of which is a function of only a few parameters; the manifold dimension equals the number of parameters. We propose a technique to recover a series of such signals, given a few measurements for each signal.
We demonstrate this in the context of dynamic Magnetic Resonance Imaging (MRI) reconstruction, where only a few Fourier measurements are available for each time frame. A novel acquisition scheme enables us to detect the neighbours of each frame on the manifold. We then recover each frame by enforcing similarity with its neighbours. The proposed scheme is used to enable fast free-breathing cardiac and speech MRI scans. Next, we consider the recovery of curves/surfaces from a few sampled points. We model the curves as the zero level set of a trigonometric polynomial, whose bandwidth controls the complexity of the curve. We present theoretical results for the minimum number of samples required to uniquely identify the curve. We show that the null-space vectors of high-dimensional feature maps of these points can be used to recover the curve. The method is demonstrated on the recovery of the structure of DNA filaments from a few clicked points. This idea is then extended to recover data lying on a high-dimensional surface from few measurements. The resulting algorithm has similarities to our algorithm for recovering points on a manifold. Hence, we apply the above ideas to the cardiac MRI reconstruction problem, and are able to show better image quality with reduced computational complexity. Finally, we consider the case where the data is organized into clusters. The goal is to recover the true clustering of the data, even when a few features of each data point are unknown. We propose a fusion-penalty-based optimization problem to cluster data reliably in the presence of missing entries, and present theoretical guarantees for successful recovery of the correct clusters. We then propose a computationally efficient algorithm to solve a relaxation of this problem. We demonstrate that our algorithm reliably recovers the true clusters in the presence of large fractions of missing entries on simulated and real datasets.
This work thus results in several theoretical insights and solutions to different practical problems which involve reconstructing and analyzing data with missing entries. The fusion penalties that are used in each of the above models are obtained directly as a result of model assumptions. The proposed algorithms show very promising results on several real datasets, and we believe that they are general enough to be easily extended to several other practical applications.
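A minimal instance of a fusion penalty as described above is the quadratic one on a chain graph: complete a signal from a subset of its entries while penalizing differences between neighbouring samples. The sketch below is illustrative only (the thesis's penalties, models, and guarantees are much richer); all names are hypothetical:

```python
import numpy as np

def fusion_recover(y, mask, lam=1.0):
    """Completes a signal from its observed entries by solving
        min_x  sum_{i observed} (x_i - y_i)^2 + lam * sum_i (x_i - x_{i+1})^2,
    a quadratic fusion penalty whose 'similar pairs' are chain neighbours."""
    n = len(y)
    # Chain-graph Laplacian encoding the neighbour pairs.
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1
    M = np.diag(mask.astype(float))  # selects the observed entries
    # Normal equations of the objective: (M + lam * L) x = M y.
    return np.linalg.solve(M + lam * L, M @ y)
```

The joint recovery is what makes the problem well-posed: missing entries borrow information from their neighbours through the penalty, which is exactly the role fusion penalties play in the more general models above.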
809

Background subtraction using ensembles of classifiers with an extended feature set

Klare, Brendan F 30 June 2008 (has links)
The limitations of foreground segmentation in difficult environments using standard color-space features often result in poor performance during autonomous tracking. This work presents a new approach to classifying foreground and background pixels in image sequences by employing an ensemble of classifiers, each operating on a different feature type: the three RGB features, gradient magnitude and orientation features, and eight Haar features. These thirteen features are used in an ensemble classifier in which each classifier operates on a single image feature and implements a Mixture of Gaussians-based unsupervised background classification algorithm. The non-thresholded classification decision scores of the classifiers are fused by averaging their outputs to create a single hypothesis. The results of using the ensemble classifier on three separate and distinct data sets are compared, via ROC graphs, to using only the RGB features. The extended feature vector outperforms the RGB features on all three data sets, and shows a large improvement on two of them. The two data sets with the greatest improvement are both outdoor sequences; one contains global illumination changes and the other many local illumination changes. When using the entire feature set, to operate at a 90% true positive rate, the per-pixel false alarm rate is reduced five-fold on one data set and six-fold on the other.
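The per-feature classification and score-averaging fusion described above can be sketched as follows. A single running Gaussian per pixel stands in for the paper's Mixture-of-Gaussians model, and all names are hypothetical:

```python
import numpy as np

class FeatureBackgroundModel:
    """Per-pixel running Gaussian for one feature channel -- a single-Gaussian
    stand-in for the Mixture-of-Gaussians classifier described above."""
    def __init__(self, shape, alpha=0.05, eps=1e-6):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.alpha = alpha  # learning rate of the background model
        self.eps = eps      # guards against division by a vanishing variance
    def score(self, frame):
        # Non-thresholded decision score: squared deviation in variance units.
        return (frame - self.mean) ** 2 / (self.var + self.eps)
    def update(self, frame):
        d = frame - self.mean
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d ** 2

def fuse(scores):
    # Average the ensemble's raw scores into a single foreground hypothesis.
    return np.mean(np.stack(scores), axis=0)
```

One model per feature channel (thirteen in the paper) runs independently, and only the averaged score map is thresholded into the final foreground mask.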
810

Trajectory generation and data fusion for control-oriented advanced driver assistance systems / Génération de trajectoires et fusion de données pour des systèmes de commande d'aide à la conduite avancés

Daniel, Jérémie 01 December 2010 (has links)
Since the invention of the automobile at the end of the 19th century, both the size of the vehicle fleet and the volume of road traffic have grown continuously, unfortunately accompanied by a constant rise in the number of road accidents. Numerous studies, notably a report published by the World Health Organization, paint an alarming picture of the number of injuries and deaths linked to road accidents. One way to reduce these figures is to develop driver assistance systems whose purpose is to support the driver in the driving task. Research on driver assistance has been very dynamic and productive over the last twenty years: systems such as the Anti-lock Braking System (ABS), the Electronic Stability Program (ESP), Adaptive Cruise Control (ACC), the Parking Manoeuvre Assistant (PMA), Dynamic Bending Lights (DBL), etc. are now commercialized and accepted by the majority of drivers. However, while these systems have improved driver safety, many avenues remain to be explored. Existing driver assistance systems behave microscopically; in other words, each focuses solely on the task it has to perform. On the premise that collaboration among all these assistance systems is more effective than their use in parallel, a global driver-assistance approach becomes necessary. This calls for the development of a new generation of driver assistance that takes into account more information and constraints related to the vehicle, the driver and the environment. [...] / Since the invention of the automobile at the end of the 19th century, road traffic has increased constantly and, unfortunately, so has the number of road accidents.
Research studies, such as the one performed by the World Health Organization, show alarming figures for the number of injuries and fatalities due to these accidents. To reduce these figures, a solution lies in the development of Advanced Driver Assistance Systems (ADAS), whose purpose is to assist the driver in the driving task. This research topic has proven very dynamic and productive during the last decades. Indeed, several systems such as the Anti-lock Braking System (ABS), Electronic Stability Program (ESP), Adaptive Cruise Control (ACC), Parking Manoeuvre Assistant (PMA), Dynamic Bending Light (DBL), etc. are already available on the market, and their benefits are now recognized by most drivers. This first generation of ADAS is usually designed to perform a specific task within the Controller/Vehicle/Environment framework and thus requires only microscopic information, i.e. sensors that provide local information about one element of the vehicle or its environment. In contrast, the next generation of ADAS will have to consider more aspects, i.e. information and constraints about the vehicle and its environment as a whole. As these systems are designed to perform more complex tasks, they need a global view of the road context and the vehicle configuration. For example, longitudinal control requires information about the road configuration (straight line, bend, etc.) and about the possible presence of other road users (cars, trucks, etc.) to determine the best reference speed. [...]
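The last point above — fusing road-geometry and obstacle information into a single reference speed for longitudinal control — can be sketched as taking the most restrictive of several candidate speeds. The candidate rules, function name and default parameters below are illustrative assumptions, not the control design described in the thesis.

```python
import math

def reference_speed(legal_limit, curve_radius=None, lead_gap=None,
                    lat_acc_max=2.0, time_gap=2.0):
    """Pick the most restrictive admissible speed (m/s) among:
    the legal limit, a lateral-acceleration comfort limit in a bend
    (v^2 / R <= a_max), and a constant time-gap rule behind a leader.
    All names and defaults here are hypothetical."""
    candidates = [legal_limit]
    if curve_radius is not None:
        # Comfort speed in a bend of radius R: v = sqrt(a_max * R).
        candidates.append(math.sqrt(lat_acc_max * curve_radius))
    if lead_gap is not None:
        # Speed at which the current gap equals the desired time gap.
        candidates.append(lead_gap / time_gap)
    return max(0.0, min(candidates))
```

Under these assumed defaults, a 100 m bend under a 130 km/h (36.1 m/s) limit is dominated by the curve rule, which returns about 14.1 m/s; on a straight road with no leader the legal limit passes through unchanged.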
