31

Nonlinear Dimensionality Reduction with Side Information

Ghodsi Boushehri, Ali January 2006 (has links)
In this thesis, I look at three problems with important applications in data processing. Incorporating side information, provided by the user or derived from data, is a main theme of each of these problems. This thesis makes a number of contributions. The first is a technique for combining different embedding objectives, which is then exploited to incorporate side information expressed in terms of transformation invariants known to hold in the data. It also introduces two different ways of incorporating transformation invariants in order to make new similarity measures. Two algorithms are proposed which learn metrics based on different types of side information. These learned metrics can then be used in subsequent embedding methods. Finally, it introduces a manifold learning algorithm that is useful when applied to sequential decision problems. In this case we are given action labels in addition to data points. Actions in the manifold learned by this algorithm have meaningful representations in that they are represented as simple transformations.
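A hedged illustration of the general pipeline described above (learn a metric from user-supplied side information, then use it in a subsequent embedding method). The similar-pair construction, the diagonal metric, and the choice of Isomap are assumptions made for this sketch, not the author's algorithms.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
X, t = make_swiss_roll(n_samples=500, random_state=0)

# Side information: pairs assumed "similar" (here, close in the latent
# parameter t; in practice this would come from the user or the data).
idx = rng.integers(0, len(X), size=(200, 2))
similar = idx[np.abs(t[idx[:, 0]] - t[idx[:, 1]]) < 1.0]

# Toy diagonal metric: down-weight directions that vary a lot within
# similar pairs, so the learned distance respects the side information.
diffs = X[similar[:, 0]] - X[similar[:, 1]]
weights = 1.0 / np.sqrt(diffs.var(axis=0) + 1e-8)  # diagonal Mahalanobis weights
X_metric = X * weights                              # rescale data with the learned metric

# The learned metric is then used by a subsequent embedding method.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X_metric)
print(embedding.shape)   # (500, 2)
```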
32

Statistical atlases of cardiac motion and deformation for the characterization of CRT responders

Duchateau, Nicolas Guillem 28 February 2012 (has links)
The definition of optimal selection criteria for maximizing the response rate to Cardiac Resynchronization Therapy (CRT) is still an issue under active debate. Recent clinical approaches propose a classification of patients into classes of mechanisms that could lead to heart failure and study their response to the therapy. In this line of research, the computation of a metric between the motion and deformation patterns of a given subject and well-identified classes of CRT responders is considered in this thesis as the basis of a new strategy for computing patient selection indexes. The thesis first proposes an improved design for the construction of statistical atlases of myocardial motion and deformation, and applies it to the characterization of populations of patients involved in CRT. The added value of our approach is highlighted in a clinical study, applying the methodology to a large population of patients with a given pattern of dyssynchrony (septal flash) and understanding the link between its correction and CRT response. Finally, we propose a method to extend the analysis to the comparison of individuals to reference populations, either healthy or pathological, using manifold learning techniques to model a disease as progressive deviations from normality along a manifold structure, and demonstrate the potential of our method for inter-subject comparison in CRT patients.
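A minimal sketch of the idea of scoring an individual by its deviation from a "normality" model learned from a reference population. It uses PCA as a linear stand-in for the manifold learning techniques mentioned above, and entirely synthetic motion descriptors; it is not the thesis pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic reference population: 100 "healthy" motion patterns lying near
# a 2-D subspace of a 50-D feature space (e.g. sampled strain curves).
latent = rng.normal(size=(100, 2))
basis = rng.normal(size=(2, 50))
healthy = latent @ basis + 0.05 * rng.normal(size=(100, 50))

# Learn the low-dimensional structure of the healthy class.
manifold = PCA(n_components=2).fit(healthy)

def deviation_from_normality(x):
    """Reconstruction error w.r.t. the healthy model (larger = more abnormal)."""
    x_hat = manifold.inverse_transform(manifold.transform(x[None, :]))
    return float(np.linalg.norm(x - x_hat[0]))

healthy_subject = healthy[0]
abnormal_subject = healthy[0] + 2.0 * rng.normal(size=50)
print(deviation_from_normality(healthy_subject))   # small
print(deviation_from_normality(abnormal_subject))  # large
```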
33

Méthodes de démélange non-linéaires pour l'imagerie hyperspectrale / Non-linear unmixing methods for hyperspectral imaging

Nguyen Hoang, Nguyen 03 December 2013 (has links)
In this thesis, we present several aspects of hyperspectral imaging technology while focusing on the problem of non-linear unmixing, for which we propose three solutions. The first integrates the advantages of manifold learning into classical unmixing methods in order to design their non-linear versions. Results with data generated on a well-known manifold, the "Swissroll", are promising: the methods perform much better than their linear counterparts as the non-linearity increases. However, the absence of a non-negativity constraint in these methods remains an open question for future improvement. The second proposal uses the pre-image method to estimate an inverse transformation from the pixel space to the abundance space. Spatial information in the form of a total-variation term is also introduced to make the algorithm more robust to noise. Nevertheless, the need for ground-truth data in the learning step limits the applicability of this type of algorithm.
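For context, a small sketch of the classical linear unmixing baseline (non-negative least squares of a pixel onto endmember spectra) that the thesis extends with manifold learning. The spectra and abundances below are synthetic, and the sum-to-one handling is a crude assumption.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

n_bands, n_endmembers = 50, 3
E = np.abs(rng.normal(size=(n_bands, n_endmembers)))   # endmember spectra (columns)

true_abundances = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abundances + 0.01 * rng.normal(size=n_bands)

# Non-negative least squares enforces abundances >= 0; sum-to-one is only
# imposed afterwards by a crude renormalization.
abundances, residual = nnls(E, pixel)
abundances /= abundances.sum()
print(np.round(abundances, 3))          # approximately [0.6, 0.3, 0.1]
```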
34

Learning Techniques For Information Retrieval And Mining In High-dimensional Databases

Cheng, Hao 01 January 2009 (has links)
The main focus of my research is to design effective learning techniques for information retrieval and mining in high-dimensional databases. There are two main aspects in retrieval and mining research: accuracy and efficiency. The accuracy problem is how to return results which better match the ground truth, and the efficiency problem is how to evaluate users' requests and execute learning algorithms as fast as possible. These problems are non-trivial because of the complexity of the high-level semantic concepts, the heterogeneous nature of the feature space, the high dimensionality of the data representations and the size of the databases. My dissertation is dedicated to addressing these issues. Specifically, my work makes five main contributions, as follows. The first contribution is a novel manifold learning algorithm, Local and Global Structures Preserving Projection (LGSPP), which defines salient low-dimensional representations for high-dimensional data. A small number of projection directions are sought in order to properly preserve the local and global structures of the original data. Specifically, two groups of points are extracted for each individual point in the dataset: the first group contains the nearest neighbors of the point, and the other contains a few sampled points far away from it. These two point sets respectively characterize the local and global structures with regard to the data point. The objective of the embedding is to minimize the distances of the points in each local neighborhood and also to disperse the points far away from their respective remote points in the original space. In this way, the relationships between the data in the original space are well preserved with little distortion. The second contribution is a new constrained clustering algorithm. Conventionally, clustering is an unsupervised learning problem, which systematically partitions a dataset into a small set of clusters such that data in each cluster appear similar to each other compared with those in other clusters. In this proposal, partial human knowledge is exploited to find better clustering results. Two kinds of constraints are integrated into the clustering algorithm. One is the must-link constraint, indicating that the two points involved belong to the same cluster; the cannot-link constraint, on the other hand, denotes that two points are not within the same cluster. Given the input constraints, data points are arranged into small groups and a graph is constructed to preserve the semantic relations between these groups. The assignment procedure makes a best effort to assign each group to a feasible cluster without violating the constraints. The theoretical analysis reveals that the probability of data points being assigned to the true clusters is much higher under the new proposal than under conventional methods. In general, the new scheme can produce clusters which better match the ground truth and respect the semantic relations between points inferred from the constraints. The third contribution is a unified framework for partition-based dimension reduction techniques, which allows efficient similarity retrieval in the high-dimensional data space. Recent similarity search techniques, such as Piecewise Aggregate Approximation (PAA), Segmented Means (SMEAN) and Mean-Standard deviation (MS), prove to be very effective in reducing data dimensionality by partitioning dimensions into subsets and extracting aggregate values from each dimension subset. These partition-based techniques have many advantages, including very efficient multi-phased pruning, while being simple to implement. They are, however, not adaptive to the different characteristics of data in diverse applications. In this study, a unified framework for these partition-based techniques is proposed and the issue of dimension partitioning is examined within this framework. An investigation of the relationship between query selectivity and the dimension partition scheme discovers indicators which can predict the performance of a partitioning setting. Accordingly, a greedy algorithm is designed to effectively determine a good partitioning of the data dimensions so that the performance of the reduction technique is robust with regard to different datasets. The fourth contribution is an effective similarity search technique for databases of point sets. In the conventional model, an object corresponds to a single vector. In the proposed study, an object is represented by a set of points. In general, this new representation can be used in many real-world applications and carries much more local information, but the retrieval and learning problems become very challenging. The Hausdorff distance is the common distance function for measuring the similarity between two point sets; however, this metric is sensitive to outliers in the data. To address this issue, a novel similarity function is defined to better capture the proximity of two objects, in which a one-to-one mapping is established between the vectors of the two objects. The optimal mapping minimizes the sum of distances between paired points. The overall distance of the optimal matching is robust and yields high retrieval accuracy. The computation of the new distance function is formulated as the classical assignment problem. Lower-bounding techniques and an early-stop mechanism are also proposed to significantly accelerate the expensive similarity search process. The classification problem over point-set data is called Multiple Instance Learning (MIL) in the machine learning community, in which a vector is an instance and an object is a bag of instances. The fifth contribution is to convert the MIL problem into standard supervised learning in the conventional vector space. Specifically, feature vectors of bags are grouped into clusters. Each object is then denoted as a bag of cluster labels, and common patterns of each category are discovered, each of which is further reconstructed into a bag of features. Accordingly, a bag is effectively mapped into a feature space defined by the distances from this bag to all the derived patterns. Standard supervised learning algorithms can then be applied to classify objects into pre-defined categories. The results demonstrate that the proposal has better classification accuracy than other state-of-the-art techniques. In the future, I will continue to explore my research in large-scale data analysis algorithms, applications and system development. I am especially interested in applications that analyze the massive volume of online data.
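A minimal sketch of an assignment-based point-set distance of the kind described above: a one-to-one matching that minimizes the summed pairwise distances, computed with the Hungarian algorithm. The data are synthetic and this is not the dissertation's implementation (which also includes lower bounds and early stopping).

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def matching_distance(A, B):
    """Sum of distances of the optimal one-to-one matching between two
    equally sized point sets A and B (rows are points)."""
    cost = cdist(A, B)                        # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # optimal assignment
    return cost[rows, cols].sum()

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 8))
B = A + 0.1 * rng.normal(size=(20, 8))        # a slightly perturbed copy of A
C = rng.normal(size=(20, 8))                  # an unrelated point set

print(matching_distance(A, B))                # small: the sets nearly coincide
print(matching_distance(A, C))                # much larger
```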
35

Probabilistic Graphical Models for Prognosis and Diagnosis of Breast Cancer

KHADEMI, MAHMOUD 04 1900 (has links)
One in nine women is expected to be diagnosed with breast cancer during her life. In 2013, an estimated 23,800 Canadian women will be diagnosed with breast cancer and 5,000 will die of it. Making decisions about the treatment for a patient is difficult since it depends on various clinical features, genomic factors, and the pathological and cellular classification of the tumor. In this research, we propose a probabilistic graphical model for prognosis and diagnosis of breast cancer that can help medical doctors make better decisions about the best treatment for a patient. Probabilistic graphical models are suitable for making decisions under uncertainty from big data with missing attributes and noisy evidence. Using the proposed model, we may enter the results of different tests (e.g. the estrogen and progesterone receptor tests and the HER2/neu test), microarray data, and clinical traits (e.g. the woman's age, general health, menopausal status, stage of cancer, and size of the tumor) into the model and answer the following questions. How likely is it that the cancer will spread in the body (distant metastasis)? What is the chance of survival? How likely is it that the cancer comes back (local or regional recurrence)? How promising is a treatment? For example, how likely are metastasis and recurrence for a new patient if a certain treatment, e.g. surgical removal, radiation therapy, hormone therapy, or chemotherapy, is applied? We can also classify various types of breast cancer using this model. Previous work mostly relied on clinical data. In our opinion, since cancer is a genetic disease, the integration of genomic (microarray) and clinical data can improve the accuracy of the model for prognosis and diagnosis. However, increasing the number of variables may lead to poor results due to the curse of dimensionality and the small-sample-size problem. The microarray data is high-dimensional, consisting of around 25,000 variables per patient. Moreover, structure learning and parameter learning for probabilistic graphical models require a significant amount of computation. The number of possible structures is also super-exponential with respect to the number of variables; for instance, there are more than 10^18 possible structures with just 10 variables. We address these problems by applying manifold learning and dimensionality reduction techniques to improve the accuracy of the model. Extensive experiments using real-world data sets such as METRIC and NKI show the accuracy of the proposed method for classification and for predicting certain events, like recurrence and metastasis. / Master of Science (MSc)
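A hedged sketch of the "reduce dimensionality first, then fit a probabilistic model" idea discussed above, using PCA and a naive Bayes classifier as stand-ins for the manifold learning step and the graphical model. The patient data are synthetic, not the cohorts named in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Small-sample, high-dimensional setting: 200 patients, 5000 expression values.
X = rng.normal(size=(200, 5000))
# Toy outcome (e.g. recurrence yes/no) driven by a few informative genes.
y = (X[:, :10].sum(axis=1) + 0.5 * rng.normal(size=200) > 0).astype(int)

# Reduce dimensionality first, then fit a simple probabilistic model.
model = make_pipeline(PCA(n_components=20), GaussianNB())
print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated accuracy
```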
36

Smart Additive Manufacturing Using Advanced Data Analytics and Closed Loop Control

Liu, Chenang 19 July 2019 (has links)
Additive manufacturing (AM) is a powerful emerging technology for the fabrication of components with complex geometries using a variety of materials. However, despite its promising potential, and due to the complexity of the process dynamics, how to ensure product quality and consistency of AM parts efficiently during the process remains challenging. Therefore, the objective of this dissertation is to develop effective methodologies for online automatic quality monitoring and improvement, i.e., to build a basis for smart additive manufacturing. Fast-growing sensor technology can easily generate a massive amount of real-time process data, which provides excellent opportunities to address the barriers to online quality assurance in AM from a data-driven perspective. Although this direction is very promising, the online sensing data typically have high dimensionality and a complex inherent structure, which makes real-time data-driven analytics and decision-making very challenging. To address these challenges, multiple data-driven approaches have been developed in this dissertation to achieve effective feature extraction, process modeling, and closed-loop quality control. These methods are successfully validated on a typical AM process, namely fused filament fabrication (FFF). Specifically, four new methodologies are proposed and developed, as listed below. (1) To capture the variation of hidden patterns in sensor signals, a feature extraction approach based on spectral graph theory is developed for defect detection in online quality monitoring of AM. The most informative feature is extracted and integrated with a statistical control chart, which can effectively detect anomalies caused by cyber-physical attacks. (2) To understand the underlying structure of high-dimensional sensor data, an effective dimension reduction method based on an integrated manifold learning approach termed multi-kernel metric learning embedded isometric feature mapping (MKML-ISOMAP) is proposed for online process monitoring and defect diagnosis of AM. Based on the proposed method, process defects can be accurately identified by supervised classification algorithms. (3) To quantify the layer-wise quality correlation in AM while taking reheating effects into consideration, a novel bilateral time series modeling approach termed the extended autoregressive (EAR) model is proposed, which successfully correlates the quality characteristics of the current layer with not only past but also future layers. The resulting model is able to predict defects online in a layer-wise manner. (4) To achieve online defect mitigation for the AM process, a closed-loop quality control system is implemented using an image-analysis-based proportional-integral-derivative (PID) controller, which can mitigate defects by adaptively adjusting machine parameters during the printing process in a timely manner. By fully utilizing the online sensor data with innovative data analytics and closed-loop control approaches, the proposed methodologies are expected to achieve excellent performance in online quality assurance for AM. In addition, these methodologies are integrated into a generic framework, so they can be easily adapted for applications in other advanced manufacturing processes. / Doctor of Philosophy / Additive manufacturing (AM) technology is rapidly changing the industry, and online sensor-based data analytics is one of the most effective enabling techniques for further improving AM product quality. The objective of this dissertation is to develop methodologies for online quality assurance of AM processes using sensor technology, advanced data analytics, and closed-loop control. It aims to build a basis for the implementation of smart additive manufacturing. The new methodologies proposed in this dissertation address quality issues in AM through effective feature extraction, advanced statistical modeling, and closed-loop control. To validate their effectiveness and efficiency, a widely used AM process, namely fused filament fabrication (FFF), is selected as the experimental platform for testing and validation. The results demonstrate that the proposed methods are very promising for detecting and mitigating quality defects during AM operations. Consequently, with the research outcomes in this dissertation, our capability for online defect detection, diagnosis, and mitigation in the AM process is significantly improved. However, the future applications of the work accomplished in this dissertation are not limited to AM; the developed generic methodological framework can be further extended to many other types of advanced manufacturing processes.
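A minimal sketch of a discrete PID loop of the kind used in methodology (4) above; the gains, the 0.48 mm width target, and the toy process response are assumptions for illustration, not the dissertation's controller.

```python
class PID:
    """Minimal discrete proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Simulated closed loop: the controller nudges the flow-rate multiplier so
# the measured extrusion width approaches its 0.48 mm target layer by layer.
pid = PID(kp=0.8, ki=0.1, kd=0.05, setpoint=0.48)
flow = 1.0
for layer in range(10):
    width = 0.4 * flow                  # toy process: width scales with flow rate
    flow += pid.update(width)           # adjust the machine parameter
    print(f"layer {layer}: measured width {width:.3f} mm, flow {flow:.3f}")
```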
37

Oculométrie Numérique Economique : modèle d'apparence et apprentissage par variétés / Eye Tracking system : appearance based model and manifold learning

Liang, Ke 13 May 2015 (has links)
Gaze tracking offers a powerful tool for diverse fields of study, in particular eye movement analysis. In this thesis, we present a new appearance-based real-time gaze tracking system that requires only a remote webcam and no infra-red illumination. Our proposed gaze tracking model has four components: eye localization, eye feature extraction, eye manifold learning and gaze estimation. Our research focuses on the development of methods for each component of the system. Firstly, we propose a hybrid method to localize the eye region in real time in the frames captured by the webcam: the eye is detected by an Active Shape Model and EyeMap in the first frame in which it appears, and is then tracked with a stochastic method, the particle filter. Secondly, we employ Center-Symmetric Local Binary Patterns on the detected eye region, which has been divided into blocks, in order to obtain the eye features. Thirdly, we introduce a manifold learning technique, Laplacian Eigenmaps, to learn different eye movements from a collected set of eye images. This unsupervised learning helps to construct an automatic and correct calibration phase. Finally, for gaze estimation, we propose two models: a semi-supervised Gaussian Process Regression model to estimate the coordinates of the eye direction, and a spectral clustering model to classify different eye movements. Our system, with a 5-point calibration, not only reduces the run-time cost but also estimates the gaze accurately. Our experimental results show that our gaze tracking model places fewer constraints on the hardware setup and can be applied efficiently in different real-time applications.
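A hedged sketch of the final estimation stage: regressing on-screen gaze coordinates from eye-appearance features with Gaussian Process Regression. The 32-dimensional features and calibration targets are synthetic assumptions; the thesis's semi-supervised variant and the upstream detection and feature-extraction steps are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Calibration set: 25 samples of a 32-D eye descriptor (e.g. an LBP
# histogram) paired with known on-screen gaze positions (x, y) in pixels.
features = rng.normal(size=(25, 32))
W = rng.normal(size=(32, 2))
gaze = features @ W * 40 + 500 + 5 * rng.normal(size=(25, 2))

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(features, gaze)

# Predict the gaze point for a new frame's feature vector.
new_frame = rng.normal(size=(1, 32))
print(gpr.predict(new_frame))           # estimated (x, y) screen coordinates
```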
38

New statistical modeling of multi-sensor images with application to change detection / Nouvelle modélisation statistique des images multi-capteurs et son application à la détection des changements

Prendes, Jorge 22 October 2015 (has links)
Remote sensing images are images of the Earth's surface acquired from satellites or airborne equipment. These images are becoming widely available and their sensor technology is evolving fast. Classical sensors are improving in terms of resolution and noise level, while new kinds of sensors are proving to be useful. Multispectral image sensors are standard nowadays and synthetic aperture radar (SAR) images are very popular. The availability of different kinds of sensors is very advantageous since it allows us to capture a wide variety of properties of the objects contained in a scene. These properties can be exploited to extract richer information about these objects. One of the main applications of remote sensing images is the detection of changes in multitemporal datasets (images of the same area acquired at different times). Change detection for images acquired by homogeneous sensors has been of interest for a long time; however, the wide range of sensors found in remote sensing makes the detection of changes in images acquired by heterogeneous sensors an interesting challenge. Accurate change detectors adapted to heterogeneous sensors are needed for the management of natural disasters. Databases of optical images are readily available for an extensive catalog of locations, but good climate conditions and daylight are required to capture them. On the other hand, SAR images can be captured quickly, regardless of the weather conditions or the time of day. For these reasons, optical and SAR images are of specific interest for tracking natural disasters, by detecting the changes before and after the event. The main interest of this thesis is to study statistical approaches to detect changes in images acquired by heterogeneous sensors. Chapter 1 presents an introduction to remote sensing images. It also briefly reviews the different change detection methods proposed in the literature. Additionally, this chapter presents the motivation for detecting changes between heterogeneous sensors and its difficulties. Chapter 2 studies the statistical properties of co-registered images in the absence of change, in particular for optical and SAR images. In this chapter a finite mixture model is proposed to describe the statistics of these images. The performance of classical statistical change detection methods is also studied by taking into account the proposed statistical model. In several situations it is found that these classical methods fail for change detection. Chapter 3 studies the properties of the parameters associated with the proposed statistical mixture model. We assume that the model parameters belong to a manifold in the absence of change, which is then used to construct a new similarity measure overcoming the limitations of classic statistical approaches. Furthermore, an approach to estimate the proposed similarity measure is described. Finally, the proposed change detection strategy is validated on synthetic images and compared with previous strategies. Chapter 4 studies a Bayesian nonparametric algorithm to improve the estimation of the proposed similarity measure. This algorithm is based on a Chinese restaurant process and a Markov random field taking advantage of the spatial correlations between adjacent pixels of the image. This chapter also defines a new Jeffreys prior for the concentration parameter of the Chinese restaurant process. The estimation of the different model parameters is conducted using a collapsed Gibbs sampler. The proposed strategy is validated on synthetic images and compared with the previously proposed strategy. Finally, Chapter 5 is dedicated to the validation of the proposed change detection framework on real datasets, where encouraging results are obtained in all cases. Including the Bayesian nonparametric model in the change detection strategy improves change detection performance at the expense of an increased computational cost.
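A simplified sketch of the Chapter 2 idea: fit a finite mixture to joint (optical, SAR) pixel values from change-free areas and flag changes by low likelihood under that model. The data, the Gaussian mixture, and the log-likelihood thresholding are assumptions for illustration, not the manifold-based similarity measure or the Bayesian nonparametric model of the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# "No change" training pairs: the SAR value is a noisy function of the
# optical value, so joint samples concentrate near a curve (a manifold).
optical = rng.uniform(0, 1, size=2000)
sar = 0.3 + 0.5 * optical**2 + 0.05 * rng.normal(size=2000)
no_change = np.column_stack([optical, sar])

mixture = GaussianMixture(n_components=5, random_state=0).fit(no_change)

# Score new pixel pairs: a low log-likelihood under the no-change model
# suggests the two acquisitions disagree, i.e. a change.
unchanged_pixel = np.array([[0.6, 0.3 + 0.5 * 0.36]])
changed_pixel = np.array([[0.6, 0.9]])
print(mixture.score_samples(unchanged_pixel))   # relatively high
print(mixture.score_samples(changed_pixel))     # much lower
```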
39

Reconstruction tomographique d'objets déformables pour la cryo-microscopie électronique à particules isolées / Tomographic reconstruction of deformable objects applied to single particle cryo-electron microscopy

Michels, Yves 26 September 2018 (has links)
Single particle cryo-electron microscopy is an imaging technique that allows the 3D structure of biological complexes to be estimated. The 3D volume is obtained by tomographic reconstruction algorithms applied to a set of projection images of the observed object acquired with a transmission electron microscope. Existing tomographic reconstruction methods can determine molecular structures with resolutions close to one angstrom. However, the resolution is degraded when the observed molecules are deformable. The work carried out in this thesis contributes to the development of computational methods for processing the data (projections) in order to take the deformations of the observed object into account in the ab initio tomographic reconstruction process. The main topics addressed in this thesis are the estimation of projection parameters based on non-linear dimension reduction, the detection of false edges in neighborhood graphs to improve the noise robustness of dimension reduction methods, and tomographic reconstruction based on a parametric model of the volume.
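A generic sketch of the "clean the neighborhood graph, then embed" idea: prune unusually long k-NN edges as candidate false edges before computing an Isomap-style embedding (geodesic distances plus MDS). The pruning rule is a simple heuristic assumed for illustration, not the detection method developed in the thesis.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import shortest_path
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import MDS
from sklearn.neighbors import kneighbors_graph

X, _ = make_swiss_roll(n_samples=400, noise=0.5, random_state=0)

# k-NN graph with Euclidean edge lengths.
knn = kneighbors_graph(X, n_neighbors=10, mode='distance').tocoo()

# Flag unusually long edges as candidate false edges and drop them.
threshold = 2.0 * np.median(knn.data)
keep = knn.data <= threshold
pruned = coo_matrix((knn.data[keep], (knn.row[keep], knn.col[keep])),
                    shape=knn.shape).tocsr()
print(f"removed {np.sum(~keep)} of {knn.nnz} edges")

# Isomap-style embedding: geodesic distances on the cleaned graph, then MDS.
geodesic = shortest_path(pruned, directed=False)
if np.isfinite(geodesic).all():
    embedding = MDS(n_components=2, dissimilarity='precomputed',
                    random_state=0).fit_transform(geodesic)
    print(embedding.shape)          # (400, 2)
else:
    print("pruning disconnected the graph; relax the threshold")
```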
40

Statistical methods with application to machine learning and artificial intelligence

Lu, Yibiao 11 May 2012 (has links)
This thesis consists of four chapters. Chapter 1 focuses on theoretical results for high-order Laplacian-based regularization in function estimation. We studied iterated Laplacian regularization in the context of supervised learning in order to achieve both nice theoretical properties (like thin-plate splines) and good performance over complex regions (like the soap-film smoother). In Chapter 2, we propose an innovative static path-planning algorithm called m-A* for environments full of obstacles. Theoretically, we show that m-A* reduces the number of vertices. In the simulation study, our approach outperforms A* armed with the standard L1 heuristic and stronger ones such as True-Distance Heuristics (TDH), yielding faster query times, adequate memory usage and reasonable preprocessing time. Chapter 3 proposes the m-LPA* algorithm, which extends m-A* to the context of dynamic path-planning and achieves better performance than the benchmark, Lifelong Planning A* (LPA*), in terms of robustness and worst-case computational complexity. Employing the same beamlet graphical structure as m-A*, m-LPA* encodes the information of the environment in a hierarchical, multiscale fashion, and therefore produces a more robust dynamic path-planning algorithm. Chapter 4 focuses on an approach for the prediction of spot electricity spikes via a combination of boosting and wavelet analysis. Extensive numerical experiments show that our approach improves prediction accuracy compared with the results of support vector machines, thanks to the fact that the gradient boosting trees method inherits the good properties of decision trees, such as robustness to irrelevant covariates, fast computation and good interpretability.
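A small sketch of graph-Laplacian-regularized function estimation with an iterated penalty f^T L^m f, in the spirit of Chapter 1; the toy 1-D data, the k-NN graph construction, and the hyperparameters are assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(7)

# Points on a line, with noisy labels observed only at a few of them.
x = np.sort(rng.uniform(0, 1, 200))
y_true = np.sin(4 * np.pi * x)
labeled = rng.choice(200, size=20, replace=False)
y_obs = np.zeros(200)
y_obs[labeled] = y_true[labeled] + 0.1 * rng.normal(size=20)

# Graph Laplacian from a k-NN graph, iterated to order m for extra smoothness.
W = kneighbors_graph(x[:, None], n_neighbors=5, mode='connectivity')
W = 0.5 * (W + W.T).toarray()
L = np.diag(W.sum(axis=1)) - W
m, lam = 2, 1e-3
L_m = np.linalg.matrix_power(L, m)

# Minimize ||J(f - y)||^2 + lam * f^T L^m f, where J selects labeled points,
# by solving the normal equations (J + lam * L^m) f = J y.
J = np.zeros((200, 200))
J[labeled, labeled] = 1.0
f = np.linalg.solve(J + lam * L_m, J @ y_obs)
print(np.abs(f - y_true).mean())   # mean error of the estimate at all 200 points
```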
