  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Contribution à la détection et à l'analyse des signaux EEG épileptiques : débruitage et séparation de sources / Contribution to the detection and analysis of epileptic EEG signals: denoising and source separation

Romo Vazquez, Rebeca del Carmen 24 February 2010 (has links)
L'objectif principal de cette thèse est le pré-traitement des signaux d'électroencéphalographie (EEG). En particulier, elle vise à développer une méthodologie pour obtenir un EEG dit "propre" à travers l'identification et l'élimination des artéfacts extra-cérébraux (mouvements oculaires, clignements, activité cardiaque et musculaire) et du bruit. Après identification, les artéfacts et le bruit doivent être éliminés avec une perte minimale d'information, car dans le cas d'EEG, il est de grande importance de ne pas perdre d'information potentiellement utile à l'analyse (visuelle ou automatique) et donc au diagnostic médical. Plusieurs étapes sont nécessaires pour atteindre cet objectif : séparation et identification des sources d'artéfacts, élimination du bruit de mesure et reconstruction de l'EEG "propre". A travers une approche de type séparation aveugle de sources (SAS), la première partie vise donc à séparer les signaux EEG dans des sources informatives cérébrales et des sources d'artéfacts extra-cérébraux à éliminer. Une deuxième partie vise à classifier et éliminer les sources d'artéfacts et elle consiste en une étape de classification supervisée. Le bruit de mesure, quant à lui, est éliminé par une approche de type débruitage par ondelettes. La mise en place d'une méthodologie intégrant d'une manière optimale ces trois techniques (séparation de sources, classification supervisée et débruitage par ondelettes) constitue l'apport principal de cette thèse. La méthodologie développée, ainsi que les résultats obtenus sur une base de signaux d'EEG réels (critiques et inter-critiques) importante, sont soumis à une expertise médicale approfondie, qui valide l'approche proposée / The goal of this research is the preprocessing of electroencephalographic (EEG) signals. 
More precisely, we aim to develop a methodology to obtain a "clean" EEG through the identification and elimination of extra-cerebral artefacts (ocular movements, eye blinks, cardiac and muscular activity) and noise. After identification, the artefacts and noise must be eliminated with minimal loss of cerebral activity information, as this information is potentially useful to the analysis (visual or automatic) and therefore to the medical diagnosis. To accomplish this objective, several preprocessing steps are needed: separation and identification of the artefact sources, noise elimination, and "clean" EEG reconstruction. Through a blind source separation (BSS) approach, the first step aims to separate the EEG signals into informative and artefact sources. Once the sources are separated, the second step is to classify and eliminate the identified artefact sources; this step relies on supervised classification. The EEG is reconstructed from the informative sources only. The noise is finally eliminated using a wavelet denoising approach. A methodology ensuring an optimal interaction of these three techniques (BSS, classification and wavelet denoising) is the main contribution of this thesis. The methodology developed here, as well as the results obtained on a large database of real EEG signals (ictal and inter-ictal), is subjected to a detailed medical expert review, which validates the proposed approach.
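The wavelet-denoising stage of the pipeline above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual method: a single-level Haar transform with soft thresholding, applied to a made-up sinusoidal "EEG" corrupted by white noise.

```python
import numpy as np

def haar_dwt(x):
    # single-level orthonormal Haar transform: approximation + detail
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(x, k=3.0):
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745                    # robust noise estimate
    d = np.sign(d) * np.maximum(np.abs(d) - k * sigma, 0.0)  # soft threshold
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                  # slow "cerebral" rhythm
noisy = clean + 0.3 * rng.standard_normal(t.size)  # measurement noise
denoised = wavelet_denoise(noisy)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

In the thesis the denoising runs after BSS and artefact classification; here it stands alone just to show the thresholding mechanics.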
52

Sobre a desconvolução multiusuário e a separação de fontes. / On multiuser deconvolution and source separation.

Pavan, Flávio Renê Miranda 22 July 2016 (has links)
Os problemas de separação cega de fontes e desconvolução cega multiusuário vêm sendo intensamente estudados nas últimas décadas, principalmente devido às inúmeras possibilidades de aplicações práticas. A desconvolução multiusuário pode ser compreendida como um problema particular de separação de fontes em que o sistema misturador é convolutivo, e as estatísticas das fontes, que possuem alfabeto finito, são bem conhecidas. Dentre os desafios atuais nessa área, cabe destacar que a obtenção de soluções adaptativas para o problema de separação cega de fontes com misturas convolutivas não é trivial, pois envolve ferramentas matemáticas avançadas e uma compreensão aprofundada das técnicas estatísticas a serem utilizadas. No caso em que não se conhece o tipo de mistura ou as estatísticas das fontes, o problema é ainda mais desafiador. Na área de Processamento Estatístico de Sinais, soluções vêm sendo propostas para resolver casos específicos. A obtenção de algoritmos adaptativos eficientes e numericamente robustos para realizar separação cega de fontes, tanto envolvendo misturas instantâneas quanto convolutivas, ainda é um desafio. Por sua vez, a desconvolução cega de canais de comunicação vem sendo estudada desde os anos 1960 e 1970. A partir de então, várias soluções adaptativas eficientes foram propostas nessa área. O bom entendimento dessas soluções pode sugerir um caminho para a compreensão aprofundada das soluções existentes para o problema mais amplo de separação cega de fontes e para a obtenção de algoritmos eficientes nesse contexto. 
Sendo assim, neste trabalho (i) revisitam-se a formulação dos problemas de separação cega de fontes e desconvolução cega multiusuário, bem como as relações existentes entre esses problemas, (ii) abordam-se as soluções existentes para a desconvolução cega multiusuário, verificando-se suas limitações e propondo-se modificações, resultando na obtenção de algoritmos com boa capacidade de separação e robustez numérica, e (iii) relacionam-se os critérios de desconvolução cega multiusuário baseados em curtose com os critérios de separação cega de fontes. / Blind source separation and blind deconvolution of multiuser systems have been intensively studied over the last decades, mainly due to the countless possibilities of practical applications. Blind deconvolution in the multiuser case can be understood as a particular case of blind source separation in which the mixing system is convolutive, and the sources, which exhibit a finite alphabet, have well-known statistics. Among the current challenges in this area, it is worth noting that obtaining adaptive solutions for the blind source separation problem with convolutive mixtures is not trivial, as it requires advanced mathematical tools and a thorough comprehension of the statistical techniques to be used. When the kind of mixture or the source statistics are unknown, the problem is even more challenging. In the field of statistical signal processing, solutions aimed at specific cases have been proposed. The development of efficient and numerically robust adaptive algorithms for blind source separation, for either instantaneous or convolutive mixtures, remains an open challenge. On the other hand, blind deconvolution of communication channels has been studied since the 1960s and 1970s. Since then, various types of efficient adaptive solutions have been proposed in this field. 
A proper understanding of these solutions can suggest a path toward a deeper understanding of the existing solutions for the broader problem of blind source separation and toward efficient algorithms in this context. Consequently, in this work we (i) revisit the formulation of the blind source separation and blind multiuser deconvolution problems, and the existing relations between them, (ii) address the existing solutions for blind deconvolution in the multiuser case, verify their limitations and propose modifications, resulting in algorithms with good separation capability and numerical robustness, and (iii) relate the kurtosis-based criteria of blind multiuser deconvolution to those of blind source separation.
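The kurtosis-based criteria in (iii) rest on a simple fact: convolution by a channel pushes the excess kurtosis of a finite-alphabet source toward zero (the output looks "more Gaussian"), so driving an equalizer output back to the source's kurtosis is a workable blind criterion. A small numerical sketch with an invented channel:

```python
import numpy as np

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=20000)   # BPSK-like finite-alphabet source
h = np.array([1.0, 0.6, -0.3, 0.2])       # hypothetical channel impulse response
x = np.convolve(s, h, mode="same")        # convolved (received) signal

k_source = excess_kurtosis(s)    # -2 for an antipodal constellation
k_received = excess_kurtosis(x)  # closer to 0: the channel "Gaussianizes"
```

A blind deconvolution criterion can therefore score candidate equalizers by how far their output kurtosis is from the known source value.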
54

Análise de componentes esparsos locais com aplicações em ressonância magnética funcional / Local sparse component analysis: an application to functional magnetic resonance imaging

Vieira, Gilson 13 October 2015 (has links)
Esta tese apresenta um novo método para analisar dados de ressonância magnética funcional (FMRI) durante o estado de repouso denominado Análise de Componentes Esparsos Locais (LSCA). A LSCA é uma especialização da Análise de Componentes Esparsos (SCA) que leva em consideração a informação espacial dos dados para reconstruir a informação temporal de fontes bem localizadas, ou seja, fontes que representam a atividade de regiões corticais conectadas. Este estudo contém dados de simulação e dados reais. Os dados simulados foram preparados para avaliar a LSCA em diferentes cenários. Em um primeiro cenário, a LSCA é comparada com a Análise de Componentes Principais (PCA) em relação a capacidade de detectar fontes locais sob ruído branco e gaussiano. Em seguida, a LSCA é comparada com o algoritmo de Maximização da Expectativa (EM) no quesito detecção de fontes dinâmicas locais. Os dados reais foram coletados para fins comparativos e ilustrativos. Imagens de FMRI de onze voluntários sadios foram adquiridas utilizando um equipamento de ressonância magnética de 3T durante um protocolo de estado de repouso. As imagens foram pré-processadas e analisadas por dois métodos: a LSCA e a Análise de Componentes Independentes (ICA). Os componentes identificados pela LSCA foram comparados com componentes comumente reportados na literatura utilizando a ICA. Além da comparação direta com a ICA, a LSCA foi aplicada com o propósito único de caracterizar a dinâmica das redes de estado de repouso. Resultados simulados mostram que a LSCA é apropriada para identificar fontes esparsas locais. Em dados de FMRI no estado de repouso, a LSCA é capaz de identificar as mesmas fontes que são identificadas pela ICA, permitindo uma análise mais detalhada das relações entre regiões dentro de e entre componentes e sugerindo que muitos componentes identificados pela ICA em FMRI durante o estado de repouso representam um conjunto de componentes esparsos locais. 
Utilizando a LSCA, grande parte das fontes identificadas pela ICA podem ser decompostas em um conjunto de fontes esparsas locais que não são necessariamente independentes entre si. Além disso, as fontes identificadas pela LSCA aproximam muito melhor o sinal temporal observado nas regiões representadas por seus componentes do que as fontes identificadas pela ICA. Finalmente, uma análise mais elaborada utilizando a LSCA permite estimar também relações dinâmicas entre os componentes previamente identificados. Assim, a LSCA permite identificar relações clássicas bem como relações causais entre componentes do estado de repouso. As principais implicações desse resultado são que diferentes premissas permitem decomposições aproximadamente equivalentes, entretanto, critérios menos restritivos tais como esparsidade e localização permitem construir modelos mais compactos e biologicamente mais plausíveis. / This thesis presents Local Sparse Component Analysis (LSCA), a new method for analyzing resting-state functional magnetic resonance imaging (fMRI) datasets. LSCA, an extension of Sparse Component Analysis (SCA), takes spatial information into account to reconstruct temporal sources representing connected regions of significant activity. This study contains simulated and real data. The simulated data were prepared to evaluate LSCA in different scenarios. In the first scenario, LSCA is compared with Principal Component Analysis (PCA) for detecting local sources under Gaussian white noise. Then, LSCA is compared with the Expectation-Maximization (EM) algorithm for detecting the dynamics of local sources. Real data were collected for comparative and illustrative purposes. FMRI images from eleven healthy volunteers were acquired using a 3T MRI scanner during a resting-state protocol. Images were preprocessed and analyzed using LSCA and Independent Component Analysis (ICA). LSCA components were compared with commonly reported ICA components. 
In addition, LSCA was applied to characterize the dynamics of resting-state networks. Simulated results show that LSCA is suitable for identifying local sparse sources. For real resting-state FMRI data, LSCA is able to identify the same sources that are identified using ICA, allowing detailed functional connectivity analysis of the identified regions within and between components. This suggests that ICA resting-state networks can be further decomposed into local sparse components that are not necessarily independent from each other. Moreover, LSCA sources represent local FMRI signal oscillations better than ICA sources do. Finally, brain connectivity analysis shows that LSCA can identify both instantaneous and causal relationships between resting-state components. The main implication of this study is that different assumptions can yield approximately equivalent decompositions; however, less restrictive criteria such as sparsity and source localization allow building much more compact and biologically plausible brain connectivity models.
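The sparse-component idea that LSCA builds on has a simple geometric core: when sources are rarely active at the same time, mixture samples cluster along the columns of the mixing matrix, which can then be read off by clustering sample directions. A toy two-channel sketch of this generic SCA principle (the mixing matrix and activity rates are made-up; this is not the LSCA algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
# sparse sources: each sample active with only 5% probability
s = rng.standard_normal((2, n)) * (rng.random((2, n)) < 0.05)
A = np.array([[1.0, 0.7],
              [0.4, 1.0]])              # hypothetical mixing matrix
x = A @ s

# keep high-energy samples, where (almost surely) one source dominates
active = x[:, np.linalg.norm(x, axis=0) > 0.5]
# each sample's direction, folded to [0, pi) so +/- signs coincide
angles = np.mod(np.arctan2(active[1], active[0]), np.pi)

# 1-D 2-means on the angles -> the two mixing-column directions
centers = np.quantile(angles, [0.25, 0.75])
for _ in range(20):
    labels = np.abs(angles[:, None] - centers).argmin(axis=1)
    centers = np.array([angles[labels == k].mean() for k in (0, 1)])
A_est = np.stack([np.cos(centers), np.sin(centers)])  # estimated unit columns
```

Each estimated direction should align with a normalized column of A; with the columns known, the sparse sources can then be recovered sample by sample.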
55

Méthodes avancées de séparation de sources applicables aux mélanges linéaires-quadratiques / Advanced methods of source separation applicable to linear-quadratic mixtures

Jarboui, Lina 18 November 2017 (has links)
Dans cette thèse, nous nous sommes intéressés à proposer de nouvelles méthodes de Séparation Aveugle de Sources (SAS) adaptées aux modèles de mélange non-linéaires. La SAS consiste à estimer les signaux sources inconnus à partir de leurs mélanges observés lorsqu'il existe très peu d'informations disponibles sur le modèle de mélange. La contribution méthodologique de cette thèse consiste à prendre en considération les interactions non-linéaires qui peuvent se produire entre les sources en utilisant le modèle linéaire-quadratique (LQ). A cet effet, nous avons développé trois nouvelles méthodes de SAS. La première méthode vise à résoudre le problème du démélange hyperspectral en utilisant un modèle linéaire-quadratique. Celle-ci se repose sur la méthode d'Analyse en Composantes Parcimonieuses (ACPa) et nécessite l'existence des pixels purs dans la scène observée. Dans le même but, nous proposons une deuxième méthode du démélange hyperspectral adaptée au modèle linéaire-quadratique. Elle correspond à une méthode de Factorisation en Matrices Non-négatives (FMN) se basant sur l'estimateur du Maximum A Posteriori (MAP) qui permet de prendre en compte les informations a priori sur les distributions des inconnus du problème afin de mieux les estimer. Enfin, nous proposons une troisième méthode de SAS basée sur l'analyse en composantes indépendantes (ACI) en exploitant les Statistiques de Second Ordre (SSO) pour traiter un cas particulier du mélange linéaire-quadratique qui correspond au mélange bilinéaire. / In this thesis, we propose new Blind Source Separation (BSS) methods adapted to nonlinear mixing models. BSS consists in estimating the unknown source signals from their observed mixtures when very little information about the mixing model is available. The methodological contribution of this thesis is to take into account the non-linear interactions that can occur between sources by using the linear-quadratic (LQ) model. 
To this end, we developed three new BSS methods. The first aims at solving the hyperspectral unmixing problem using a linear-quadratic model; it is based on the Sparse Component Analysis (SCA) method and requires the existence of pure pixels in the observed scene. For the same purpose, we propose a second hyperspectral unmixing method adapted to the linear-quadratic model. It is a Non-negative Matrix Factorization (NMF) method based on the Maximum A Posteriori (MAP) estimator, which takes the available prior information about the unknown parameters into account to estimate them better. Finally, we propose a third BSS method, based on Independent Component Analysis (ICA), that exploits Second-Order Statistics (SOS) to handle a particular case of the linear-quadratic mixture: the bilinear one.
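The linear-quadratic model underlying all three methods adds a bilinear cross-term to the usual linear mixture: x = A s + b (s1 · s2). A tiny numerical sketch (A and b are made-up) of why a purely linear unmixing is not enough for such mixtures:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
s = rng.standard_normal((2, n))               # two sources
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])                    # hypothetical linear part
b = np.array([[0.2], [0.1]])                  # gains of the quadratic cross-term

x_linear = A @ s                              # ordinary linear mixture
x_lq = x_linear + b * (s[0] * s[1])           # linear-quadratic (LQ) mixture

# inverting A recovers s exactly from the linear mixture...
err_linear = np.linalg.inv(A) @ x_linear - s
# ...but leaves a residual proportional to s1*s2 on the LQ mixture
err_lq = np.linalg.inv(A) @ x_lq - s
```

The residual on the LQ mixture is exactly inv(A) b · (s1 s2), which is why LQ-aware methods must model the cross-term explicitly rather than rely on a linear demixing matrix.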
56

Experimental Modal Analysis using Blind Source Separation Techniques / Analyse modale expérimentale basée sur les techniques de séparation de sources aveugle

Poncelet, Fabien 08 July 2010 (has links)
This dissertation deals with the dynamics of engineering structures and principally discusses the identification of modal parameters (i.e., natural frequencies, damping ratios and vibration modes) using output-only information, the excitation sources being considered unknown and unmeasurable. To solve this kind of problem, a fairly large selection of techniques is available in the scientific literature, each possessing its own features, advantages and limitations. One common limitation of most methods concerns the post-processing procedures, which have proved delicate and time-consuming in some cases, and usually require good user expertise. The constant concern of this work is thus the simplification of result interpretation in order to minimize the influence of this hard-to-control factor. A new modal parameter estimation approach is developed in this work. The proposed methodology is based on the so-called Blind Source Separation techniques, which aim at reducing large data sets to reveal their essential structure. The theoretical developments demonstrate a one-to-one relationship between the so-called mixing matrix and the vibration modes. Two separation algorithms, namely Independent Component Analysis and Second-Order Blind Identification, are considered. Their performances are compared and, owing to its intrinsic features, one of them is finally identified as more suitable for modal identification problems. For the purpose of comparison, numerous academic case studies are considered to evaluate the influence of parameters such as damping, noise and nondeterministic excitations. Finally, realistic examples dealing with a large number of active modes, typical impact-hammer modal testing and operational testing conditions are studied to demonstrate the applicability of the proposed methodology in practical applications.
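The second-order route can be sketched with the simplest relative of Second-Order Blind Identification, the AMUSE algorithm: whiten the measured responses, then eigen-decompose one time-lagged covariance. Sources with distinct spectra — here two synthetic "modal" waveforms with an invented mixing matrix, not the dissertation's test structures — come out separated.

```python
import numpy as np

def amuse(x, lag=1):
    """Second-order separation (AMUSE): whiten, then diagonalise
    a single symmetrised time-lagged covariance matrix."""
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(x @ x.T / x.shape[1])
    z = (e @ np.diag(1.0 / np.sqrt(d)) @ e.T) @ x    # whitened data
    c = z[:, lag:] @ z[:, :-lag].T / (z.shape[1] - lag)
    c = (c + c.T) / 2.0                              # symmetrise
    _, v = np.linalg.eigh(c)
    return v.T @ z                                   # recovered sources

t = np.arange(10000) / 1000.0                         # 10 s at 1 kHz
s = np.vstack([np.sin(2 * np.pi * 3.0 * t),           # "mode" 1
               np.sign(np.sin(2 * np.pi * 7.1 * t))]) # "mode" 2 (square wave)
A = np.array([[0.9, 0.5],
              [0.3, 1.0]])                            # hypothetical mixing matrix
y = amuse(A @ s)
```

In the modal-analysis setting, the estimated mixing matrix (rather than the recovered time series) is the quantity of interest, since its columns play the role of vibration modes.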
57

Perturbation analysis and performance evaluation of a distance based localisation for wireless sensor networks.

Adewumi, Omotayo Ganiyu. January 2013 (has links)
M. Tech. Electrical Engineering. / Discusses node localisation as a major problem in several application areas based on wireless sensor networks (WSN). Many localisation algorithms have been proposed in the literature to solve the problem of locating sensor nodes in a WSN. However, most of these algorithms suffer from poor localisation accuracy and high computational cost. Given these limitations, this research study considers the modelling of an efficient and robust localisation scheme to determine the location of individual sensor nodes in a WSN, focusing on improving their positioning accuracy. The study adopts a distance-based cooperative localisation algorithm called Curvilinear Component Analysis Mapping (CCA-MAP), chosen because it delivers improved position accuracy and computational efficiency.
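For context on what any distance-based scheme must solve: given ranges to a few anchors with known positions, a node's position follows from linearised least squares (subtracting one range equation removes the quadratic term). The anchor layout and node position below are made-up, and CCA-MAP itself goes further, cooperatively mapping inter-node distances; this is only the basic building block.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares node position from distances to known anchors.
    Subtracting anchor 0's range equation removes the ||p||^2 term,
    leaving the linear system 2(a_i - a_0) . p = b_i."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)   # ideal range measurements
est = trilaterate(anchors, dists)
```

With noisy ranges, the same least-squares system simply yields the best linear fit instead of the exact position.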
58

Κατασκευή συστήματος αναγνώρισης προτύπων ηχητικών σημάτων ανθρώπου που κοιμάται / Design of a pattern recognition system to estimate sleep sounds

Βερτεούρη, Ελένη 03 April 2012 (has links)
Το θέμα της κατασκευής ενός συστήματος αναγνώρισης προτύπων για τα ηχητικά σήματα ενός ανθρώπου που κοιμάται είναι ένα από τα ανοιχτά ζητήματα της Βιοιατρικής. Στην παρούσα διπλωματική εξετάζουμε την εξαγωγή ερμηνεύσιμων σημάτων που αντιστοιχούν στον καρδιακό ρυθμό, την αναπνοή και το ροχαλητό. Χρησιμοποιούμε μεθόδους Ανάλυσης σε Ανεξάρτητες Συνιστώσες και μεθόδους Τυφλού Διαχωρισμού που εκμεταλλεύονται Στατιστικές Δεύτερης Τάξης. Συμπεραίνουμε ότι οι δεύτερες είναι οι πλέον κατάλληλες όταν συνοδεύονται από ένα στάδιο προεπεξεργασίας που αφορά ανάλυση σε ζώνες συχνοτήτων. / The design of a non-intrusive pattern recognition system to estimate sleep sounds is an open problem in biomedical engineering. We use recordings from body sensors to estimate the heartbeat, breathing and snoring. In this thesis we examine the effectiveness of Independent Component Analysis for this blind source separation problem and compare it with methods that perform source separation using second-order statistics. We take into account the temporal structure of the sources as well as the presence of noise. Our system is greatly improved by a preprocessing stage of targeted subband decomposition, which uses a priori knowledge about the sources. We propose an efficient solution to this problem, validated on medical data.
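The subband-decomposition preprocessing mentioned above can be sketched with FFT masking: split the recording into frequency bands so that slow components (heartbeat, breathing) and faster ones (snoring) end up in separate bands before separation. The band count and test frequencies below are invented:

```python
import numpy as np

def subband_split(x, n_bands=4):
    """Split a real signal into equal-width frequency bands by zeroing
    FFT bins outside each band; the bands sum back to the original."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return np.array(bands)

sr = 1000                                    # 1 s at 1 kHz
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 310 * t)  # slow + fast
bands = subband_split(x, n_bands=4)
```

Here band 0 (roughly 0-125 Hz) isolates the 30 Hz component and band 2 the 310 Hz one; a second-order separation method can then run independently in each band.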
59

Κατασκευή συστήματος ταυτόχρονης αναγνώρισης ομιλίας

Χαντζιάρα, Μαρία 08 January 2013 (has links)
Σκοπός της παρούσας διπλωματικής εργασίας είναι η δημιουργία ενός συστήματος μίξης ηχητικών σημάτων και προσπάθεια διαχωρισμού τους με βάση τις μεθόδους τυφλού διαχωρισμού σημάτων. Έχοντας ως δεδομένα τα αρχικά σήματα των πηγών γίνεται προσπάθεια, αρχικά μέσω της εφαρμογής της μεθόδου Ανάλυσης Ανεξάρτητων Συνιστωσών (ICA) για την περίπτωση της στιγμιαίας μίξης και στη συνέχεια μέσω της χρήσης αλγορίθμων που στηρίζονται στο μοντέλο παράλληλου παράγοντα (PARAFAC) για την περίπτωση της συνελικτικής μίξης, να προσδιοριστούν τα σήματα των πηγών από τα σήματα μίξης. Επιπλέον, τροποποιώντας τις παραμέτρους του συστήματος που μελετάμε σε κάθε περίπτωση, προσπαθούμε να πετύχουμε τη βέλτιστη απόδοση του διαχωρισμού. / The subject of this diploma thesis is the creation of a system that mixes speech signals and the attempt to separate them using blind source separation (BSS) methods. With the original source signals known, we attempt to recover them from the mixtures, first by applying Independent Component Analysis (ICA) in the instantaneous-mixture case and then by using algorithms based on the parallel factor (PARAFAC) model in the convolutive-mixture case. Moreover, by modifying the parameters of the system under study in each case, we seek to achieve the best separation performance.
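For the instantaneous-mixture case, the ICA step can be sketched with a minimal symmetric FastICA (tanh nonlinearity). The two sources and the mixing matrix are invented, and this is a bare-bones illustration rather than a robust implementation:

```python
import numpy as np

def fastica(x, n_iter=200, seed=0):
    """Symmetric FastICA with tanh nonlinearity for a square mixture."""
    rng = np.random.default_rng(seed)
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(x @ x.T / x.shape[1])
    z = (e @ np.diag(1.0 / np.sqrt(d)) @ e.T) @ x        # whitened data
    w = rng.standard_normal((z.shape[0], z.shape[0]))
    for _ in range(n_iter):
        g = np.tanh(w @ z)
        # fixed-point update: E[z g(w z)] - E[g'(w z)] w
        w = (g @ z.T) / z.shape[1] - np.diag((1.0 - g**2).mean(axis=1)) @ w
        u, _, vt = np.linalg.svd(w)                      # symmetric decorrelation
        w = u @ vt
    return w @ z

rng = np.random.default_rng(5)
n = 20000
s = np.vstack([rng.choice([-1.0, 1.0], size=n),          # sub-Gaussian source
               rng.laplace(size=n)])                     # super-Gaussian source
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                               # hypothetical mixing matrix
y = fastica(A @ s)
```

As with any ICA, the outputs are recovered only up to permutation, sign and scale, which is why evaluation below matches each source to its best-correlated output.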
60

Separação cega de sinais de fala utilizando detectores de voz. / Blind separation of speech signals using voice detectors.

Ronaldo Alencar da Rocha 28 January 2014 (has links)
Neste trabalho contemplamos o emprego de detectores de voz como uma etapa de pré-processamento de uma técnica de separação cega de sinais implementada no domínio do tempo, que emprega estatísticas de segunda ordem para a separação de misturas convolutivas e determinadas. Seu algoritmo foi adaptado para realizar a separação tanto em banda cheia quanto em sub-bandas, considerando a presença e a ausência de instantes de silêncio em misturas de sinais de voz. A ideia principal consiste em detectar trechos das misturas que contenham atividade de voz, evitando que o algoritmo de separação seja acionado na ausência de voz, promovendo ganho de desempenho e redução do custo computacional. / In this work we consider the use of voice detectors as a preprocessing step for a time-domain blind source separation technique that employs second-order statistics to separate convolutive, determined mixtures. The algorithm is adapted to perform the separation both in fullband and in subbands, considering the presence and absence of moments of silence in the voice-signal mixtures. The main idea is to detect portions of the mixtures that contain voice activity, preventing the separation algorithm from running in the absence of voice, which improves performance and reduces computational cost.
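The voice-detection gating described above can be sketched with a simple frame-energy detector (real VADs are more elaborate; the frame length, threshold and synthetic "speech" burst are made-up):

```python
import numpy as np

def vad(x, frame=160, thresh_db=-30.0):
    """Frame-energy voice activity detector: flag frames whose energy
    is within thresh_db of the loudest frame."""
    nf = len(x) // frame
    energy = np.mean(x[:nf * frame].reshape(nf, frame) ** 2, axis=1)
    db = 10.0 * np.log10(energy / energy.max() + 1e-12)
    return db > thresh_db

rng = np.random.default_rng(4)
sr = 8000
sig = 1e-3 * rng.standard_normal(sr)           # 1 s of low-level background
t = np.arange(2000) / sr
sig[2000:4000] += np.sin(2 * np.pi * 220 * t)  # a "speech" burst at 0.25-0.5 s
active = vad(sig)
# the separation algorithm would then run only on frames where `active` is True
```

Gating this way means the (comparatively expensive) separation updates are skipped on silent frames, which is exactly the performance and cost argument made in the abstract.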
