About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Esparsidade estruturada em reconstrução de fontes de EEG / Structured Sparsity in EEG Source Reconstruction

Francisco, André Biasin Segalla 27 March 2018 (has links)
Functional neuroimaging is an area of neuroscience that aims at developing techniques to map the activity of the nervous system, and it has been under constant development in recent decades due to its importance for clinical applications and research. Commonly applied techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have excellent spatial resolution (~ mm) but limited temporal resolution (~ s), which poses a great challenge to our understanding of the dynamics of higher cognitive functions, whose oscillations can occur on much finer temporal scales (~ ms). This limitation arises because these techniques measure slow biological responses that are only indirectly correlated with the actual electrical brain activity. The two main techniques that overcome this shortcoming are electro- and magnetoencephalography (EEG/MEG), non-invasive techniques that measure, respectively, the electric and magnetic fields generated on the scalp by the electrical brain sources. Both have millisecond temporal resolution but typically low spatial resolution (~ cm) due to the highly ill-posed nature of the electromagnetic inverse problem. A huge effort has been made in recent decades to improve their spatial resolution by incorporating relevant information from other imaging modalities and/or biologically inspired constraints, together with the development of sophisticated mathematical methods and algorithms. In this work we focus on EEG, although all techniques presented here apply equally to MEG because of their identical mathematical form. In particular, we explore sparsity as a useful mathematical constraint in a Bayesian framework called Sparse Bayesian Learning (SBL), which makes it possible to obtain meaningful unique solutions to the source reconstruction problem. Moreover, we investigate how to incorporate different structures as degrees of freedom into this framework, an application of structured sparsity, and show that it is a promising way to improve the source reconstruction accuracy of electromagnetic imaging methods.
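The SBL approach referred to above is commonly formulated as evidence maximization over per-source variance hyperparameters. The sketch below is a generic, minimal illustration of that idea, not the thesis's implementation: it assumes a known (hypothetical) lead-field matrix L, a Gaussian likelihood with known noise variance, and independent zero-mean Gaussian priors on the sources whose variances gamma are updated by EM; sources whose gamma shrinks toward zero are effectively pruned, yielding a sparse reconstruction.

```python
import numpy as np

def sbl_em(L, y, sigma2=1e-2, n_iter=200, prune=1e-8):
    """Minimal EM-style Sparse Bayesian Learning for y = L @ s + noise.

    Sketch only: L is a hypothetical forward (lead-field) matrix of shape
    (n_sensors, n_sources), sigma2 is assumed known, and each source has an
    independent zero-mean Gaussian prior s_i ~ N(0, gamma_i).
    """
    n_sensors, n_sources = L.shape
    gamma = np.ones(n_sources)
    for _ in range(n_iter):
        G = L * gamma                                   # L @ diag(gamma)
        Sigma_y = sigma2 * np.eye(n_sensors) + G @ L.T  # marginal covariance of y
        W = np.linalg.solve(Sigma_y, G)                 # Sigma_y^{-1} @ L @ diag(gamma)
        mu = W.T @ y                                    # posterior mean of the sources
        post_var = gamma - np.sum(G * W, axis=0)        # diagonal of posterior covariance
        gamma = mu**2 + post_var                        # EM update of the hyperparameters
        gamma[gamma < prune] = prune                    # keep numerics stable
    return mu, gamma

# Toy example: 32 sensors, 200 candidate sources, 3 truly active.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 200))
s_true = np.zeros(200)
s_true[[20, 90, 150]] = [1.0, -0.8, 0.6]
y = L @ s_true + 0.05 * rng.standard_normal(32)
s_hat, gamma = sbl_em(L, y, sigma2=0.05**2)
print(np.sort(np.argsort(gamma)[-3:]))   # indices with the largest learned variances
```

With low noise and only a few active sources, the learned variances concentrate on the true support, which is the behaviour the abstract exploits for source reconstruction.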
2

Process Monitoring and Control of Advanced Manufacturing based on Physics-Assisted Machine Learning

Chung, Jihoon 05 July 2023 (has links)
With the advancement of equipment and the development of technology, manufacturing processes are becoming more and more advanced. This appears as advanced manufacturing that uses innovative technology, including robotics, artificial intelligence, and autonomous systems. Additive manufacturing (AM), also known as 3D printing, is a representative advanced manufacturing technology that creates 3D geometries in a layer-by-layer fashion with various types of materials. However, as these processes advance, quality assurance becomes increasingly demanding. Therefore, the objective of this dissertation is to propose innovative methodologies for process monitoring and control to achieve quality assurance in advanced manufacturing. The development of sensor technologies and computational power offers rich process data, providing opportunities to achieve effective quality assurance through machine learning. Hence, exploring the connections between sensor data and process quality using machine learning methodologies is advantageous. Although this direction is promising, constraints and complex process dynamics in the actual process hinder existing machine learning methods from achieving quality assurance. To address these challenges, several machine learning approaches assisted by physics knowledge obtained from the process are proposed in this dissertation. These approaches are validated on various manufacturing processes, including AM and multistage assembly processes. Specifically, three new methodologies are proposed and developed, as listed below.
- To detect process anomalies with imbalanced process data, caused by different ratios of occurrence between process states, a new Generative Adversarial Network (GAN)-based method is proposed. The proposed method jointly optimizes the GAN and a classifier to augment realistic and state-distinguishable images and provide balanced data. Specifically, the method utilizes the knowledge and features of normal process data to generate effective abnormal process data. The benefits of the proposed approach have been confirmed in both polymer AM and metal AM processes.
- To diagnose process faults with a limited number of sensors, caused by physical constraints in the multistage assembly process, a novel sparse Bayesian learning method is proposed (a minimal illustration appears after this entry). The method is based on the practical assumption that only a few process faults are likely to occur at a time (sparsity). In addition, the temporal correlation of process faults and prior knowledge of process faults are incorporated through the Bayesian framework. Based on the proposed method, process faults can be accurately identified with limited sensors.
- To achieve online mitigation of new defects that occur during printing, due to the complex process dynamics of the AM process, a novel Reinforcement Learning (RL)-based algorithm is proposed. The proposed method learns machine-parameter adjustments that mitigate new defects during printing. The method transfers knowledge learned from various sources in the AM process to RL. Therefore, with a theoretical guarantee, the proposed method learns the mitigation strategy with fewer training samples than traditional RL.
By overcoming these challenges, the proposed methodologies successfully achieve quality assurance in the advanced manufacturing process. Furthermore, the methods are not tied to these specific processes; they can easily be applied to other domains, such as healthcare systems. / Doctor of Philosophy / The development of equipment and technologies has led to advanced manufacturing processes. Along with that, quality assurance in manufacturing has become a very important issue. Therefore, the objective of this dissertation is to accomplish quality assurance by developing advanced machine learning approaches. In this dissertation, several machine learning methodologies that use physics knowledge from the process are proposed. These methods overcome constraints and complex process dynamics of the actual process that degrade the performance of existing machine learning methodologies in achieving quality assurance. To validate the effectiveness of the proposed methodologies, various advanced manufacturing processes, including additive manufacturing and multistage assembly processes, are used. The proposed methodologies provide superior results for achieving quality assurance in various scenarios compared to existing state-of-the-art machine learning methods. The applications of the achievements in this dissertation are not limited to the manufacturing process; the proposed machine learning approaches can be further extended to other application areas, such as healthcare systems.
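As a rough illustration of the sparse-fault assumption in the second method above (not the dissertation's algorithm, which additionally models temporal correlation and prior fault knowledge), the sketch below assumes a hypothetical linear fault-to-sensor model y = Phi @ f + noise with far more candidate fault modes than sensors, and uses scikit-learn's ARDRegression, a standard sparse Bayesian learner, to recover the few active faults.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)
n_sensors, n_faults = 15, 60                        # fewer sensors than candidate fault modes
Phi = rng.standard_normal((n_sensors, n_faults))    # hypothetical fault-signature matrix
f_true = np.zeros(n_faults)
f_true[[7, 33]] = [1.5, -2.0]                       # only two faults are actually present
y = Phi @ f_true + 0.1 * rng.standard_normal(n_sensors)

ard = ARDRegression(fit_intercept=False)            # sparse Bayesian (ARD) regression
ard.fit(Phi, y)
active = np.flatnonzero(np.abs(ard.coef_) > 0.5)    # faults with non-negligible magnitude
print("identified faults:", active, "estimates:", ard.coef_[active].round(2))
```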
3

High-Dimensional Generative Models for 3D Perception

Chen, Cong 21 June 2021 (has links)
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure from visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse-learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of the data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers and completes missing data, using a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next part, a novel multi-level generative chaotic Recurrent Neural Network (RNN) using a sparse tensor structure is proposed for image restoration. In the last part of the dissertation, we discuss detection followed by localization, extracting features from sparse tensors for data retrieval. / Doctor of Philosophy / The development of automation systems and robotics has brought the modern world unrivaled affluence and convenience. However, current automated tasks are mainly simple repetitive motions; tasks that require more advanced visual cognition remain an unsolved problem for automation. Many high-level cognition-based tasks require accurate visual perception of the environment and of dynamic objects from the data received by optical sensors. The capability to represent, identify, and interpret complex visual data in order to understand the geometric structure of the world is 3D perception. To better tackle existing 3D perception challenges, this dissertation proposes a set of generative, learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data are relevant for many applications, such as environmental monitoring and geological exploration. The data collected with sonar sensors, however, are subject to different types of defects, including holes, noisy measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework is introduced for recovering a sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimensional tensor feature extraction for underwater vehicle localization is proposed.
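The data-recovery theme above can be illustrated with a far simpler stand-in than the dissertation's Bayesian tensor framework: the sketch below assumes a hypothetical depth map stored on a regular grid with missing entries and fills the holes by iterative singular-value soft-thresholding, a common low-rank completion prior; the grid size, rank, and threshold are arbitrary.

```python
import numpy as np

def soft_impute(X, mask, lam=0.5, n_iter=100):
    """Fill missing entries of X (where mask is False) via iterative
    singular-value soft-thresholding -- a simple low-rank completion prior."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z_low = (U * np.maximum(s - lam, 0.0)) @ Vt    # shrink singular values
        Z = np.where(mask, X, Z_low)                   # keep observed entries fixed
    return Z

# Toy "depth map": a low-rank surface with 40% of the pixels missing.
rng = np.random.default_rng(2)
u, v = rng.standard_normal((50, 3)), rng.standard_normal((3, 50))
depth = u @ v
mask = rng.random(depth.shape) > 0.4
recovered = soft_impute(depth, mask)
print("RMSE on missing pixels:",
      np.sqrt(np.mean((recovered[~mask] - depth[~mask]) ** 2)).round(3))
```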
4

Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing

January 2019 (has links)
abstract: Signals compressed using classical compression methods can be acquired by brute force (i.e., searching for non-zero entries component-wise). However, such sparse solutions require combinatorial searches with high computational cost. In this thesis, two Bayesian approaches are instead considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is, therefore, a different (i.e., misspecified) model. To estimate the posterior distribution for the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal within a Bayesian framework is one class of algorithms for solving the sparse problem; all classes of algorithms aim to get around the high computational cost associated with combinatorial searches. Compressive sensing (CS) is the widely used terminology for this sparse optimization problem and its applications, such as magnetic resonance imaging (MRI), radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation, or by recovering a distribution of the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution that is directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution. In the simulation study, a general Bayesian Cramér-Rao lower bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated using the mean square error (MSE) performance of the aforementioned algorithms. Also, a quantification of the performance in terms of gains versus losses is introduced as one main finding of this report. / Dissertation/Thesis / Masters Thesis Computer Engineering 2019
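As a small numerical illustration of the measurement model behind this thesis (not its GAMP/SBL algorithms or its BCRB/MBCRB derivations), the sketch below draws a sparse vector from a Bernoulli-Gaussian prior, takes underdetermined noisy measurements y = A @ x + n, and compares the empirical MSE of the linear MMSE estimator, which uses only the prior's second-order statistics, against its analytic error expression; the problem sizes and parameters are arbitrary, and the two printed numbers should be close.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 80                        # signal length, number of measurements (m < n)
p_on, sig_x, sig_n = 0.05, 1.0, 0.1   # activity probability, amplitude std, noise std

A = rng.standard_normal((m, n)) / np.sqrt(m)
prior_var = p_on * sig_x**2           # marginal variance of a Bernoulli-Gaussian entry
trials, err = 500, 0.0
for _ in range(trials):
    x = (rng.random(n) < p_on) * rng.normal(0.0, sig_x, n)   # Bernoulli-Gaussian draw
    y = A @ x + rng.normal(0.0, sig_n, m)
    # LMMSE estimate: uses only the prior mean/covariance, not the exact BG posterior.
    C_y = prior_var * (A @ A.T) + sig_n**2 * np.eye(m)
    x_hat = prior_var * A.T @ np.linalg.solve(C_y, y)
    err += np.sum((x_hat - x) ** 2)
mse_empirical = err / (trials * n)

# Analytic LMMSE error: trace((A^T A / sig_n^2 + I / prior_var)^{-1}) / n.
J = A.T @ A / sig_n**2 + np.eye(n) / prior_var
mse_analytic = np.trace(np.linalg.inv(J)) / n
print(round(mse_empirical, 5), round(mse_analytic, 5))
```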
5

Apprentissage statistique pour la personnalisation de modèles cardiaques à partir de données d’imagerie / Statistical learning for image-based personalization of cardiac models

Le Folgoc, Loïc 27 November 2015 (has links)
This thesis focuses on the calibration of an electromechanical model of the heart from patient-specific, image-based data, and on the related task of extracting the cardiac motion from 4D images. Long-term perspectives for personalized computer simulation of cardiac function include aiding diagnosis, therapy planning, and risk prevention. To this end, we explore tools and possibilities offered by statistical learning. To personalize cardiac mechanics, we introduce an efficient framework coupling machine learning with an original statistical representation of shape and motion based on 3D+t currents. The method relies on a reduced mapping between the space of mechanical parameters and the space of cardiac motion. The second focus of the thesis is on cardiac motion tracking, a key processing step in the calibration pipeline, with an emphasis on quantification of uncertainty. We develop a generic sparse Bayesian model of image registration with three main contributions: an extended image similarity term, the automated tuning of registration parameters, and uncertainty quantification. We propose an approximate inference scheme that is tractable on 4D clinical data. Finally, we evaluate the quality of the uncertainty estimates returned by the approximate inference scheme. We compare the predictions of the approximate scheme with those of an inference scheme developed on the grounds of reversible jump MCMC, and we provide more insight into the theoretical properties of the sparse structured Bayesian model and into the empirical behaviour of both inference schemes.
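The "reduced mapping between the space of mechanical parameters and the space of cardiac motion" can be illustrated, in a much simplified form, by learning a regression from low-dimensional motion descriptors back to mechanical parameters on simulated data. The sketch below is not the thesis's method: it assumes a toy stand-in simulator simulate_motion with two hypothetical parameters, and uses PCA plus kernel ridge regression as generic components of such a statistical mapping.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

def simulate_motion(theta, n_features=300):
    """Toy stand-in for an electromechanical simulator: maps two mechanical
    parameters (e.g. stiffness, contractility) to a motion descriptor vector."""
    t = np.linspace(0.0, 1.0, n_features)
    return (theta[0] * np.sin(2 * np.pi * t)            # component driven by parameter 1
            + theta[1] * np.exp(-5.0 * t)               # component driven by parameter 2
            + 0.01 * rng.standard_normal(n_features))   # observation noise

# Training set of (motion descriptor -> mechanical parameters) pairs.
thetas = rng.uniform(0.5, 2.0, size=(200, 2))
motions = np.stack([simulate_motion(th) for th in thetas])

# Reduced statistical mapping: PCA on the motion, kernel ridge back to the parameters.
model = make_pipeline(PCA(n_components=5),
                      KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.02))
model.fit(motions, thetas)

# "Personalization": estimate the parameters behind a new, unseen motion trace.
theta_true = np.array([1.3, 0.9])
theta_hat = model.predict(simulate_motion(theta_true)[None, :])[0]
print("true:", theta_true, "estimated:", theta_hat.round(2))
```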
6

Sparse Bayesian Learning For Joint Channel Estimation And Data Detection In OFDM Systems

Prasad, Ranjitha January 2015 (has links) (PDF)
Bayesian approaches for sparse signal recovery have enjoyed a long-standing history in the signal processing and machine learning literature. Among the Bayesian techniques, the expectation-maximization based Sparse Bayesian Learning (SBL) approach is an iterative procedure with a global convergence guarantee to a local optimum, which uses a parameterized prior that encourages sparsity under an evidence maximization framework. SBL has been successfully employed in a wide range of applications ranging from image processing to communications. In this thesis, we propose novel, efficient and low-complexity SBL-based algorithms that exploit structured sparsity in the presence of fully/partially known measurement matrices. We apply the proposed algorithms to the problem of channel estimation and data detection in Orthogonal Frequency Division Multiplexing (OFDM) systems. Further, we derive Cramér-Rao type lower bounds (CRB) for the single and multiple measurement vector SBL problem of estimating compressible vectors and their prior distribution parameters. The main contributions of the thesis are as follows:
• We derive hybrid, Bayesian and marginalized Cramér-Rao lower bounds for the problem of estimating compressible vectors drawn from a Student-t prior distribution. We derive CRBs that encompass the deterministic or random nature of the unknown parameters of the prior distribution and the regression noise variance. We use the derived bounds to uncover the relationship between compressibility and the Mean Square Error (MSE) of the estimates. Through simulations, we demonstrate the dependence of the MSE performance of SBL based estimators on the compressibility of the vector.
• OFDM is a well-known multi-carrier modulation technique that provides high spectral efficiency and resilience to multi-path distortion of the wireless channel. It is well known that the impulse response of a wideband wireless channel is approximately sparse, in the sense that it has a small number of significant components relative to the channel delay spread. In this thesis, we consider the estimation of the unknown channel coefficients and their support in SISO-OFDM systems using an SBL framework. We propose novel pilot-only and joint channel estimation and data detection algorithms in block-fading and time-varying scenarios. In the latter case, we use a first-order auto-regressive model for the time variations, and propose recursive, low-complexity Kalman filtering based algorithms for channel estimation. Monte Carlo simulations illustrate the efficacy of the proposed techniques in terms of MSE and coded bit error rate performance.
• Multiple Input Multiple Output (MIMO) combined with OFDM harnesses the inherent advantages of OFDM along with the diversity and multiplexing advantages of a MIMO system. The impulse responses of the wireless channels between the Nt transmit and Nr receive antennas of a MIMO-OFDM system are group approximately sparse (ga-sparse), i.e., the Nt Nr channels have a small number of significant paths relative to the channel delay spread, and the time-lags of the significant paths between transmit and receive antenna pairs coincide. Often, wireless channels are also group approximately-cluster sparse (ga-csparse), i.e., every ga-sparse channel consists of clusters, where a few clusters have all strong components while most clusters have all weak components. In this thesis, we cast the problem of estimating the ga-sparse and ga-csparse block-fading and time-varying channels in a multiple measurement vector SBL framework.
We propose a bouquet of novel algorithms for MIMO-OFDM systems that generalize the algorithms proposed in the context of SISO-OFDM systems. The efficacy of the proposed techniques is demonstrated in terms of MSE and coded bit error rate performance.
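To make the pilot-based setup concrete, the sketch below builds the standard sparse delay-domain model used in this kind of work, y = diag(x_p) F h + n, where x_p are known pilot symbols, F is the partial DFT matrix mapping channel taps to pilot subcarriers, and h is an approximately sparse impulse response, and recovers h with a generic complex-valued EM-SBL loop. This is an illustrative reconstruction of the measurement model with arbitrary system parameters, not the thesis's algorithms, which additionally handle joint data detection and time-varying channels.

```python
import numpy as np

rng = np.random.default_rng(5)
N, L, n_pilots = 256, 32, 64            # FFT size, max delay spread (taps), pilot count
sigma2 = 1e-3                           # noise variance

# Sparse delay-domain channel: a few significant taps within the delay spread.
h = np.zeros(L, dtype=complex)
taps = rng.choice(L, size=4, replace=False)
h[taps] = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)

# Equispaced pilot subcarriers and known unit-modulus (QPSK) pilot symbols.
pilots = np.arange(0, N, N // n_pilots)
x_p = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n_pilots) + 1j * np.pi / 4)
F = np.exp(-2j * np.pi * np.outer(pilots, np.arange(L)) / N)   # partial DFT matrix
Phi = x_p[:, None] * F                                         # diag(x_p) @ F
y = Phi @ h + np.sqrt(sigma2 / 2) * (rng.standard_normal(n_pilots)
                                     + 1j * rng.standard_normal(n_pilots))

# Generic complex-valued EM-SBL: one prior variance (gamma) per channel tap.
gamma = np.ones(L)
for _ in range(100):
    G = Phi * gamma                                     # Phi @ diag(gamma)
    Sigma_y = sigma2 * np.eye(n_pilots) + G @ Phi.conj().T
    W = np.linalg.solve(Sigma_y, G)
    mu = W.conj().T @ y                                 # posterior mean of the taps
    post_var = gamma - np.real(np.sum(G.conj() * W, axis=0))
    gamma = np.abs(mu) ** 2 + post_var                  # EM hyperparameter update

print("true taps:", np.sort(taps),
      "largest learned variances at:", np.sort(np.argsort(gamma)[-4:]))
```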
