  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

A Comparative Study of Signal Processing Methods for Fetal Phonocardiography Analysis

Vadali, Venkata Akshay Bhargav Krishna 17 July 2018 (has links)
More than one million fetal deaths occur in the United States every year [1]. Long-term heart rate variability carries a great amount of information about fetal health, but extracting it requires continuous monitoring of the fetal heart rate. Existing technologies involve complex instrumentation, require a trained professional at all times, or both, and have proven impractical for continuous monitoring [2]. Hence, there is growing interest in noninvasive, continuous, and inexpensive technologies such as fetal phonocardiography. The fetal phonocardiography (FPCG) signal is obtained by placing an acoustic transducer on the mother's abdomen. FPCG is rich in physiological bio-signals and can monitor the fetal heart rate continuously and non-invasively. Despite its high diagnostic potential, it remains only a secondary point of care, for two reasons: challenges in the data acquisition system and challenges in the signal processing methodologies. The data acquisition challenges include, but are not limited to, sensor placement, maternal obesity, and multiple heart rates, while the signal processing challenges are the dynamic nature of the FPCG signal, multiple known and unknown signal components, and the low SNR of the signal. Hence, to improve FPCG-based care, this study addresses the challenges in FPCG signal processing methodologies. A comparative evaluation is presented of advanced signal processing techniques for extracting the bio-signals with fidelity, namely empirical mode decomposition, spectral subtraction, wavelet decomposition, and adaptive filtering.
Extracting these bio-signals with fidelity is challenging in the context of FPCG because the bio-signals and the unwanted artifacts overlap in both time and frequency. Additionally, the signal is corrupted by noise induced by fetal and maternal movements as well as by the background and the sensor. The empirical mode decomposition algorithm denoised and extracted the maternal and fetal heart sounds in a single step, whereas spectral subtraction was used to denoise the signal, which was then subjected to wavelet decomposition to extract the signal of interest. Adaptive filtering, in turn, was used to estimate the fetal heart sound from a noisy FPCG with the maternal heart sound as the reference input. The extracted signals were validated against the frequency ranges computed by the Short-Time Fourier Transform (STFT): the bandwidths of the extracted fetal and maternal heart sounds were consistent with existing gold standards. As an additional validation, the heart rates were calculated. Finally, the results of all these methods were compared and contrasted qualitatively and quantitatively.
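The adaptive-filtering route described above can be sketched in a few lines. This is a generic LMS noise canceller, not the thesis's implementation; the filter order, step size, and the sinusoidal heart-sound surrogates are illustrative assumptions:

```python
import numpy as np

def lms_cancel(reference, primary, order=8, mu=0.005):
    """Adaptive noise cancellation with plain LMS: estimate the component
    of `primary` correlated with `reference` (the maternal heart sound)
    and return the residual (the fetal heart sound estimate)."""
    w = np.zeros(order)
    residual = np.zeros(len(primary))
    for i in range(order, len(primary)):
        x = reference[i - order:i][::-1]  # most recent reference samples first
        e = primary[i] - w @ x            # residual after cancelling the reference
        w += 2 * mu * e * x               # LMS weight update
        residual[i] = e
    return residual

# toy surrogates: 40 Hz "maternal" and 90 Hz "fetal" tones at fs = 1 kHz
t = np.arange(4000) / 1000.0
maternal = np.sin(2 * np.pi * 40 * t)
fetal = 0.2 * np.sin(2 * np.pi * 90 * t)
primary = 0.8 * maternal + fetal          # abdominal mixture picked up by the sensor
residual = lms_cancel(maternal, primary)  # should track the fetal component
```

After convergence the residual follows the weak fetal tone much more closely than the raw mixture does.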
72

Iterative issues of ICA, quality of separation and number of sources: a study for biosignal applications

Naik, Ganesh Ramachandra, ganesh.naik@rmit.edu.au January 2009 (has links)
This thesis evaluated the use of Independent Component Analysis (ICA) on surface electromyography (sEMG), focusing on biosignal applications. The research identified and addressed four issues related to the use of ICA for biosignals:
• the iterative nature of ICA
• the order and magnitude ambiguity of ICA
• estimation of the number of sources based on the dependence or independence of the signals
• source separation for non-quadratic ICA (undercomplete and overcomplete)
The research first establishes the applicability of ICA for sEMG and identifies the shortcomings related to order and magnitude ambiguity. It then develops a mitigation strategy for these issues using a single unmixing matrix and a neural network weight matrix corresponding to the specific user. The research reports experimental verification of the technique and an investigation of the impact of inter-subject and inter-experiment variation. The results demonstrate that while sEMG without separation gives only 60% accuracy, and sEMG separated using traditional ICA gives 65%, this approach gives 99% accuracy on the same experimental data. Besides the marked improvement in accuracy, the system is suitable for real-time operation and is easy for a lay user to train. The second part of this thesis evaluates the use of ICA for the separation of bioelectric signals when the number of active sources may not be known. The work proposes using the determinant of the global matrix generated by sparse sub-band ICA to identify the number of active sources. The results indicate that the technique successfully identifies the number of active muscles for complex hand gestures, supporting applications such as human-computer interfaces.
This thesis also developed a method of determining the number of independent sources in a given mixture and demonstrated that, using this information, it is possible to separate the signals in an undercomplete situation and reduce the redundancy in the data using standard ICA methods. Experimental verification demonstrated that the quality of separation using this method is better than that of other techniques such as Principal Component Analysis (PCA) and selective PCA. This has a number of applications, such as audio separation and sensor networks.
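As a concrete illustration of both the separation and the ambiguity issues the thesis tackles, here is a minimal FastICA run on synthetic mixtures (using scikit-learn rather than the author's code; the sources and mixing matrix are invented):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 2000
s1 = np.sign(np.sin(np.linspace(0, 6 * np.pi, n)))  # sub-Gaussian square wave
s2 = rng.laplace(size=n)                            # super-Gaussian source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])              # mixing matrix
X = S @ A.T                                         # two observed mixtures

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

# The order and sign of the recovered components are arbitrary -- exactly
# the ambiguity problem discussed above -- so match estimated components
# to true sources by absolute correlation.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
```

Each true source should correlate strongly with exactly one estimated component, in no particular order.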
73

Knowledge-based speech enhancement

Srinivasan, Sriram January 2005 (has links)
Speech is a fundamental means of human communication. In the last several decades, much effort has been devoted to the efficient transmission and storage of speech signals. With advances in technology making mobile communication ubiquitous, communication anywhere has become a reality. The freedom and flexibility offered by mobile technology bring with them new challenges, one of which is robustness to acoustic background noise. Speech enhancement systems form a vital front-end for mobile telephony in noisy environments such as cars, cafeterias, and subway stations, for hearing aids, and for improving the performance of speech recognition systems. In this thesis, which consists of four research articles, we discuss both single- and multi-microphone approaches to speech enhancement. The main contribution of this thesis is a framework to exploit available prior knowledge about both speech and noise. The physiology of speech production places a constraint on the possible shapes of the speech spectral envelope, and this information is captured using codebooks of speech linear predictive (LP) coefficients obtained from a large training database. Similarly, information about commonly occurring noise types is captured using a set of noise codebooks, which can be combined with sound environment classification to treat different environments differently. In paper A, we introduce maximum-likelihood estimation of the speech and noise LP parameters using the codebooks. The codebooks capture only the spectral shape; the speech and noise gain factors are obtained through a frame-by-frame optimization, providing good performance in practical nonstationary noise environments. The estimated parameters are subsequently used in a Wiener filter. Paper B describes Bayesian minimum mean squared error estimation of the speech and noise LP parameters and functions thereof, while retaining the instantaneous gain computation. Both memoryless and memory-based estimators are derived.
While papers A and B describe single-channel techniques, paper C describes a multi-channel Bayesian speech enhancement approach where, in addition to temporal processing, the spatial diversity provided by multiple microphones is also exploited. In paper D, we introduce a multi-channel noise reduction technique motivated by blind source separation (BSS) concepts. In contrast to standard BSS approaches, we use the knowledge that one of the signals is speech and the other is noise, and exploit their different characteristics. / QC 20100929
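The final filtering step of paper A can be sketched as follows. The codebook search itself is omitted; the gain floor is an assumption added here to illustrate a common practical safeguard, not a value from the thesis:

```python
import numpy as np

def wiener_gain(speech_psd, noise_psd, floor=0.1):
    """Per-bin Wiener gain H(f) = P_s(f) / (P_s(f) + P_n(f)), computed
    from (codebook-)estimated speech and noise power spectra. The floor
    limits over-suppression in noise-dominated bins."""
    gain = speech_psd / (speech_psd + noise_psd)
    return np.maximum(gain, floor)

# toy frame: one speech-dominated bin, one noise-dominated bin
g = wiener_gain(np.array([10.0, 0.1]), np.array([1.0, 1.0]))
```

The speech-dominated bin passes nearly unattenuated, while the noise-dominated bin is clamped at the floor.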
74

Improving the quality of speech in noisy environments

Parikh, Devangi Nikunj 06 November 2012 (has links)
In this thesis, we are interested in processing noisy speech signals that are meant to be heard by humans, and hence we approach the noise-suppression problem from a perceptual perspective. We develop a noise-suppression paradigm based on a model of the human auditory system, processing signals in a way that is natural to the human ear. Under this paradigm, we transform an audio signal into a perceptual domain and process it there. This approach allows us to reduce both the background noise and the audible artifacts seen in traditional noise-suppression algorithms, while preserving the quality of the processed speech. We develop single- and dual-microphone algorithms based on this perceptual paradigm, and conduct subjective tests showing that this approach outperforms traditional noise-suppression techniques. Moreover, we investigate the cause of the audible artifacts generated by suppressing noise in noisy signals, and introduce constraints on the noise-suppression gain such that these artifacts are reduced.
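One simple way to realize the kind of gain constraint mentioned in the last sentence is to bound how fast the suppression gain may change from frame to frame; the step limit below is an invented illustration, not the thesis's perceptually derived constraint:

```python
import numpy as np

def constrain_gain(gains, max_step=0.2):
    """Clamp frame-to-frame changes of a suppression-gain trajectory;
    abrupt gain fluctuations are a classic cause of 'musical noise'."""
    out = np.empty_like(gains)
    out[0] = gains[0]
    for i in range(1, len(gains)):
        step = np.clip(gains[i] - out[i - 1], -max_step, max_step)
        out[i] = out[i - 1] + step
    return out

raw = np.array([1.0, 0.0, 1.0, 0.0])  # wildly fluctuating per-frame gains
smooth = constrain_gain(raw)          # changes limited to 0.2 per frame
```

The constrained trajectory trades some suppression depth for a smoother, less artifact-prone gain.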
75

Why only two ears? Some indicators from the study of source separation using two sensors

Joseph, Joby 08 1900 (has links)
In this thesis we develop algorithms for estimating broadband source signals from a mixture using only two sensors. This is motivated by what is known in the literature as the cocktail party effect: the ability of human beings to listen to a desired source in a mixture of sources with at most two ears. Such a study lets us achieve a better understanding of the auditory pathway in the brain and confirm results from physiology and psychoacoustics; search for an equivalent structure in the brain corresponding to each modification that improves the algorithm; build a benchmark system to automate the evaluation of systems like 'surround sound'; and perform speech recognition in noisy environments. Moreover, what we learn about replicating the functional units in the brain may help us replace those units with signal processing for patients suffering from defects in them. There are two parts to the thesis. In the first part we assume the source signals to be broadband with strong spectral overlap, and the channel to have a few strong multipaths. We propose an algorithm to estimate all the strong multipaths from each source to the sensors, for more than two sources, from measurements at two sensors. Because the channel matrix is not invertible when the number of sources exceeds the number of sensors, we use the estimates of the multipath delays for each source to improve the SIR of the sources. In the second part we look at a specific scenario of colored signals and a channel with a prominent direct path. Speech signals as the sources in a weakly reverberant room and a pair of microphones as the sensors satisfy these conditions. We consider the cases with and without a head-like structure between the microphones; the head-like structure we used was a cubical block of wood. We propose an algorithm for separating sources under such a scenario.
We identify the features of speech and of the channel that make it possible for the human auditory system to solve the cocktail party problem; these properties are the same as those satisfied by our model. The algorithm works well in a partly acoustically treated room (with three persons speaking, two microphones, and data acquired using a standard PC setup), but not so well in a heavily reverberant scenario. We see similarities between the processing steps of the algorithm and what we know of the way our auditory system works, especially in the regions before the auditory cortex in the auditory pathway. Based on the above experiments we give reasons supporting the hypothesis of why all known organisms need only two ears and not more, yet may have more than two eyes to their advantage. Our results also indicate that part of pitch estimation for individual sources might occur in the brain after the individual source components are separated, which would resolve the dilemma of having to do multi-pitch estimation. Recent work suggests that there are parallel pathways in the brain up to the primary auditory cortex dealing with temporal-cue-based and spatial-cue-based processing; our model seems to mimic the pathway that makes use of the spatial cues.
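The inter-sensor time-difference cue central to this line of work can be estimated with the classic GCC-PHAT correlator. This is a generic textbook sketch, not the author's algorithm, and the test signal is invented:

```python
import numpy as np

def gcc_phat_delay(x, y, max_delay):
    """Estimate, in samples, the delay of y relative to x using the
    phase-transform (PHAT) weighted cross-correlation -- the spatial cue
    a two-sensor system gets from inter-sensor time differences."""
    n = len(x) + len(y)
    R = np.conj(np.fft.rfft(x, n)) * np.fft.rfft(y, n)
    R /= np.abs(R) + 1e-12                # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(R, n)
    # reorder so index 0 corresponds to lag -max_delay
    cc = np.concatenate((cc[-max_delay:], cc[:max_delay + 1]))
    return int(np.argmax(np.abs(cc))) - max_delay

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.concatenate((np.zeros(5), x[:-5]))  # y is x delayed by 5 samples
d = gcc_phat_delay(x, y, max_delay=20)
```

With the two microphones a known distance apart, such a delay estimate maps directly to a direction of arrival.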
76

Αξιολόγηση αυτόματων μεθόδων διαχωρισμού ακουστικών βιοσημάτων τα οποία λαμβάνονται από συστοιχία πιεζοηλεκτρικών αισθητήρων σε χαμηλές συχνότητες / Evaluation of automatic separation methods for acoustic biosignals acquired by an array of piezoelectric sensors at low frequencies

Μακρυγιώργου, Δήμητρα 24 November 2014 (has links)
Στην παρούσα εργασία θα αξιολογηθούν κάποιες αυτόματες μέθοδοι διαχωρισμού ακουστικών βιοσημάτων τα οποία λαμβάνονται από συστοιχία πιεζοηλεκτρικών αισθητήρων σε χαμηλές συχνότητες. Πιο συγκεκριμένα αρχικά θα οριστεί το πρόβλημα το οποίο μας ζητείται να επιλύσουμε και θα γίνουν αναφορές στη διαδρομή των δύο σημαντικότερων μεθόδων διαχωρισμού, της PCA και της ICA. Εν συνεχεία θα γίνει αναφορά στα βιοσήματα τόσο ως προς την προέλευση όσο και ως προς τα σημαντικότερα χαρακτηριστικά τους, η γνώση των οποίων διευκολύνει κατά πολύ τόσο τη διαδικασία του διαχωρισμού όσο και την αξιολόγηση της τελευταίας. Σε επόμενο κεφάλαιο θα γίνει εκτενής αναφορά στους πιεζοηλεκτρικούς αισθητήρες και τον τρόπο με τον οποίο κωδικοποιούν τα βιοσήματα με στόχο την περαιτέρω επεξεργασία τους. Στο μεγαλύτερο τμήμα της εργασίας αυτής ωστόσο θα αναλυθούν οι δύο τεχνικές διαχωρισμού, PCA και ICA και θα γίνει νύξη στους σημαντικότερους αλγορίθμους των παραπάνω (FastICA). Τέλος, θα γίνει εφαρμογή των μεθόδων αυτών τόσο σε τεχνητά όσο και σε πραγματικά σήματα και ανάλυση των αποτελεσμάτων που θα εξαχθούν. / In this diploma thesis, several automatic techniques for separating acoustic bio-signals are evaluated. The signals are taken from an array of piezoelectric sensors at low frequencies. More specifically, we first set out the problem and briefly review the history of the two main separation methods, PCA and ICA. We then analyze both the origin and the most significant characteristics of bio-signals; this knowledge makes the separation procedure much easier and its evaluation more robust. A subsequent chapter covers piezoelectric sensors and the way they encode bio-signals for further processing. The largest part of this work, however, analyzes the two separation techniques, PCA and ICA, and discusses their main algorithms (FastICA). Finally, these methods are applied to both artificial and real data, and the results are analyzed.
77

Dictionary learning methods for single-channel source separation

Lefèvre, Augustin 03 October 2012 (has links) (PDF)
In this thesis we provide three main contributions to blind source separation methods based on NMF. Our first contribution is a group-sparsity-inducing penalty specifically tailored for Itakura-Saito NMF. In many music tracks there are whole intervals where only one source is active. The group-sparsity penalty we propose makes it possible to blindly identify these intervals and learn source-specific dictionaries. The learned dictionaries can then be used for source separation in other parts of the track where several sources are active; the two tasks of identification and separation are performed simultaneously in one run of group-sparsity Itakura-Saito NMF. Our second contribution is an online algorithm for Itakura-Saito NMF that allows dictionaries to be learned on very large audio tracks. Indeed, the memory complexity of a batch implementation of NMF grows linearly with the length of the recording and becomes prohibitive for signals longer than an hour. In contrast, our online algorithm can learn NMF on arbitrarily long signals with limited memory usage. Our third contribution deals with user-informed NMF. For short mixed signals, blind learning becomes very hard and sparsity penalties do not retrieve interpretable dictionaries. This contribution is very similar in spirit to inpainting. It relies on the empirical fact that, when observing the spectrogram of a mixture signal, an overwhelming proportion of it consists of regions where only one source is active. We describe an extension of NMF that takes into account time-frequency-localized information on the absence or presence of each source, and we also investigate inferring this information with tools from machine learning.
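The multiplicative-update rules for plain Itakura-Saito NMF, the building block all three contributions extend, can be sketched as below. This is a minimal baseline only; the group-sparsity penalty and online updates are not included, and the test matrix is synthetic:

```python
import numpy as np

def is_nmf(V, rank, n_iter=500, seed=0):
    """Batch NMF V ~ W @ H under the Itakura-Saito divergence, using the
    standard multiplicative updates. Sketch of the baseline only."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(n_iter):
        WH = W @ H
        W *= ((WH ** -2 * V) @ H.T) / (WH ** -1 @ H.T)   # update bases
        WH = W @ H
        H *= (W.T @ (WH ** -2 * V)) / (W.T @ WH ** -1)   # update activations
    return W, H

# exact rank-2 nonnegative data, so a near-perfect fit is achievable
rng = np.random.default_rng(1)
V = (rng.random((8, 2)) + 0.1) @ (rng.random((2, 20)) + 0.1)
W, H = is_nmf(V, rank=2)
```

The multiplicative form keeps the factors nonnegative throughout, which is why no projection step is needed.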
78

Traitement d’antenne tensoriel / Tensor array processing

Raimondi, Francesca 22 September 2017 (has links)
L’estimation et la localisation de sources sont des problèmes centraux en traitement d’antenne, en particulier en télécommunication, sismologie, acoustique, ingénierie médicale ou astronomie. Une antenne de capteurs est un système d’acquisition composé de multiples capteurs qui reçoivent des ondes en provenance de sources de directions différentes : elle échantillonne les champs incidents en espace et en temps. Pour cette raison, des techniques haute résolution comme MUSIC utilisent ces deux éléments de diversité, l’espace et le temps, afin d’estimer l’espace signal engendré par les sources incidentes, ainsi que leur direction d’arrivée. Ceci est généralement atteint par une estimation préalable de statistiques de deuxième ordre ou d’ordre supérieur, comme la covariance spatiale de l’antenne, qui nécessitent donc des temps d’observation suffisamment longs. Seulement récemment, l’analyse tensorielle a été appliquée au traitement d’antenne, grâce à l’introduction, comme troisième modalité (ou diversité), de la translation en espace d’une sous-antenne de référence, sans faire appel à l’estimation préalable de quantités statistiques. Les décompositions tensorielles consistent en l’analyse de cubes de données multidimensionnelles, au travers de leur décomposition en somme d’éléments constitutifs plus simples, grâce à la multilinéarité et à la structure de rang faible du modèle sous-jacent. Ainsi, les mêmes techniques tensorielles nous fournissent une estimée des signaux eux-mêmes, ainsi que de leur direction d’arrivée, de façon déterministe. Ceci peut se faire en vertu du modèle séparable et de rang faible vérifié par des sources en bande étroite et en champ lointain. Cette thèse étudie l’estimation et la localisation de sources par des méthodes tensorielles de traitement d’antenne. Le premier chapitre présente le modèle physique de source en bande étroite et en champ lointain, ainsi que les définitions et hypothèses fondamentales.
Le deuxième chapitre passe en revue l’état de l’art sur l’estimation des directions d’arrivée, en mettant l’accent sur les méthodes haute résolution à sous-espace. Le troisième chapitre introduit la notation tensorielle, à savoir la définition des tableaux de coordonnées multidimensionnels, les opérations et décompositions principales. Le quatrième chapitre présente le sujet du traitement tensoriel d’antenne au moyen de l’invariance par translation. Le cinquième chapitre introduit un modèle tensoriel général pour traiter de multiples diversités à la fois, comme l’espace, le temps, la translation en espace, les profils de gain spatial et la polarisation des ondes élastiques en bande étroite. Par la suite, les sixième et huitième chapitres établissent un modèle tensoriel pour un traitement d’antenne bande large cohérent. Nous proposons une opération de focalisation cohérente et séparable par une transformée bilinéaire et par un ré-échantillonnage spatial, respectivement, afin d’assurer la multilinéarité des données interpolées. Nous montrons par des simulations numériques que l’estimation proposée des paramètres des signaux s’améliore considérablement, par rapport au traitement tensoriel classique en bande étroite, ainsi qu’à MUSIC cohérent bande large. Également, tout au long de la thèse, nous comparons les performances de l’estimation tensorielle avec la borne de Cramér-Rao du modèle multilinéaire associé, que nous développons, dans sa forme la plus générale, dans le septième chapitre. En outre, dans le neuvième chapitre nous illustrons une application à des données sismiques réelles issues d’une campagne de mesure sur un glacier alpin, grâce à la diversité de vitesse de propagation. Enfin, le dixième et dernier chapitre de cette thèse traite le sujet parallèle de la factorisation spectrale multidimensionnelle d’ondes sismiques, et présente une application à l’estimation de la réponse impulsionnelle du soleil pour l’héliosismologie.
/ Source estimation and localization are central problems in array signal processing, and in particular in telecommunications, seismology, acoustics, biomedical engineering, and astronomy. Sensor arrays, i.e. acquisition systems composed of multiple sensors that receive source signals from different directions, sample the impinging wavefields in space and time. Hence, high resolution techniques such as MUSIC make use of these two elements of diversity: space and time, in order to estimate the signal subspace generated by impinging sources, as well as their directions of arrival. This is generally done through the estimation of second or higher order statistics, such as the array spatial covariance matrix, thus requiring sufficiently large data samples. Only recently, tensor analysis has been applied to array processing using as a third mode (or diversity) the space shift translation of a reference subarray, with no need for the estimation of statistical quantities. Tensor decompositions consist in the analysis of multidimensional data cubes of at least three dimensions through their decomposition into a sum of simpler constituents, thanks to the multilinearity and low rank structure of the underlying model. Thus, tensor methods provide us with an estimate of source signatures, together with directions of arrival, in a deterministic way. This can be achieved by virtue of the separable and low rank model followed by narrowband sources in the far field. This thesis deals with the estimation and localization of multiple sources via these tensor methods for array processing. Chapter 1 presents the physical model of narrowband elastic sources in the far field, as well as the main definitions and assumptions. Chapter 2 reviews the state of the art on direction of arrival estimation, with a particular emphasis on high resolution signal subspace methods.
Chapter 3 introduces the tensor formalism, namely the definition of multi-way arrays of coordinates, the main operations and multilinear decompositions. Chapter 4 presents the subject of tensor array processing via rotational invariance. Chapter 5 introduces a general tensor model to deal with multiple physical diversities, such as space, time, space shift, polarization, and gain patterns of narrowband elastic waves. Subsequently, Chapter 6 and Chapter 8 establish a tensor model for wideband coherent array processing. We propose a separable coherent focusing operation through a bilinear transform and through spatial resampling, respectively, in order to ensure the multilinearity of the interpolated data. We show via computer simulations that the proposed estimation of signal parameters considerably improves, compared to existing narrowband tensor processing and wideband MUSIC. Throughout the chapters we also compare the performance of tensor estimation to the Cramér-Rao bounds of the multilinear model, which we derive in its general formulation in Chapter 7. Moreover, in Chapter 9 we propose a tensor model via the diversity of propagation speed for seismic waves and illustrate an application to real seismic data from an Alpine glacier. Finally, the last part of this thesis in Chapter 10 moves to the parallel subject of multidimensional spectral factorization of seismic waves, and illustrates an application to the estimation of the impulse response of the Sun for helioseismology.
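For reference, the covariance-based MUSIC baseline that the tensor methods are compared against looks like this. It is a generic textbook sketch for a half-wavelength-spaced uniform linear array, not code from the thesis, and the simulation parameters are invented:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=361):
    """Narrowband MUSIC for a half-wavelength-spaced uniform linear
    array. X has shape (sensors, snapshots); returns angles in degrees."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample spatial covariance
    _, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]          # noise subspace
    grid = np.linspace(-90.0, 90.0, n_grid)
    spec = np.empty(n_grid)
    for i, theta in enumerate(grid):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(theta)))
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    # keep the n_sources strongest local maxima of the pseudo-spectrum
    peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
    best = peaks[np.argsort(spec[peaks])[-n_sources:]]
    return np.sort(grid[best])

rng = np.random.default_rng(2)
m, snapshots = 8, 400
angles = np.array([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(np.deg2rad(angles))))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A @ S + 0.1 * (rng.standard_normal((m, snapshots))
                   + 1j * rng.standard_normal((m, snapshots)))
est = music_doa(X, 2)
```

Note the covariance estimate R is exactly the statistical quantity the deterministic tensor approach avoids.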
79

Modélisation cyclostationnaire et séparation de sources des signaux électromyographiques / Cyclostationary modeling and blind source separation of electromyographic signals

Roussel, Julien 08 December 2014 (has links)
L’objectif de cette thèse est de développer des méthodes de décomposition des signaux électromyographiques (EMG) en signaux élémentaires, les trains de potentiels d’action d’unité motrice (TPAUM). Nous avons proposé deux modèles de génération des signaux et nous avons mis en évidence la propriété de cyclostationnarité et de cyclostationnarité floue de ces deux modèles. Dans l’objectif de la décomposition, nous avons enfin proposé une méthode de décomposition aveugle à partir de signaux EMG multi-capteurs en utilisant cette propriété. Nous présentons les limitations théoriques de la méthode, notamment par un seuil limite de la fréquence de décharge. Nous avons effectué une évaluation des performances de la méthode proposée avec comparaison à une méthode classique de séparation à l’ordre 2. Il a été montré que l’exploitation de la propriété de cyclostationnarité apportait de meilleures performances de séparation dans le cas bruité et non bruité, sur le modèle cyclostationnaire et sur le modèle cyclostationnaire flou. Les performances se trouvent dégradées lorsque la fréquence de décharge dépasse le seuil théorique. Cette évaluation a été réalisée au moyen de simulations de Monte-Carlo construites sur des observations réelles. Enfin, la méthode appliquée sur des données réelles a montré de bons résultats sur des signaux EMG intramusculaires. / The aim of this thesis is to develop methods for decomposing electromyographic (EMG) signals into elementary signals, called motor unit action potential trains (MUAPT). We proposed two signal generation models and demonstrated their cyclostationary and fuzzy cyclostationary properties. Building on these properties, we then proposed a blind decomposition method for multi-sensor EMG signals. We present the theoretical limitations of the method, in particular the existence of a limiting threshold on the discharge frequency.
We evaluated the performance of the proposed method against a conventional second-order separation method. Exploiting the cyclostationarity property was shown to bring better separation performance in both the noisy and noiseless cases, for both the cyclostationary and fuzzy cyclostationary models. Performance degrades when the discharge frequency exceeds the theoretical threshold. This evaluation was performed via Monte Carlo simulations built on real observations. Finally, applied to real data, the method showed good results on intramuscular EMG signals.
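The second-order statistic underlying a cyclostationary approach is the cyclic autocorrelation; a minimal zero-lag estimator (with an invented test tone, not EMG data) might look like:

```python
import numpy as np

def cyclic_autocorr(x, alpha, fs):
    """Zero-lag cyclic autocorrelation <x(t)^2 * exp(-j*2*pi*alpha*t)>:
    it is non-negligible only at the signal's cycle frequencies."""
    t = np.arange(len(x)) / fs
    return np.mean(x ** 2 * np.exp(-2j * np.pi * alpha * t))

fs = 1000.0
x = np.cos(2 * np.pi * 50 * np.arange(10000) / fs)  # toy cyclostationary signal
strong = abs(cyclic_autocorr(x, 100.0, fs))  # at the cycle frequency 2 * f0
weak = abs(cyclic_autocorr(x, 137.0, fs))    # at an unrelated frequency
```

The sharp contrast between the two values is what lets a separation criterion lock onto a source's cycle (here, discharge) frequency.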
80

Separação cega de fontes em tempo real utilizando FPGA / Real-time blind source separation using FPGA

Fratini Filho, Oswaldo January 2017 (has links)
Orientador: Prof. Dr. Ricardo Suyama / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2017. / O método estatístico de Independent Component Analysis (ICA) é um dos mais amplamente utilizados para solucionar o problema de Blind Source Separation (BSS) que, junto a outros métodos de processamento de sinais, são colocados à prova com o aumento do número das fontes de sinais e amostras disponíveis para processamento, e são a base de aplicações com requisitos de desempenho cada vez maiores. O objetivo deste trabalho é realizar o estudo do método ICA e a análise dos algoritmos FastICA e Joint Approximate Diagonalization of Eigen-matrices (JADE) implementados em Field-Programmable Gate Array (FPGA) e seu comportamento quando variamos o número de amostras das misturas e os números de iterações ou updates. Outros trabalhos de pesquisa já foram realizados com o objetivo de demonstrar a viabilidade da implementação de tais algoritmos em FPGA, mas pouco apresentam sobre o método utilizado para definir detalhes de implementação como o número de amostras utilizadas, a razão da representação numérica escolhida e o throughput alcançado. A análise que este trabalho propôs realizar, num primeiro momento, passa por demonstrar o comportamento do core dos algoritmos quando implementados utilizando diferentes representações numéricas de ponto flutuante com precisão simples (32 bits) e ponto fixo com diferentes números de amostras e fontes a serem estimadas, por meio de simulações. Foi verificada a viabilidade desses serem utilizados para atender aplicações que precisam resolver o problema de BSS com boa acurácia, quando comparados com implementações dos mesmos algoritmos que se utilizaram de uma representação numérica de ponto flutuante com precisão dupla (64 bits).
Utilizando o Simulink® e a biblioteca DSP Builder® da Altera® para implementar os modelos de cada algoritmo, foi possível analisar outros aspectos importantes, em busca de demonstrar a possibilidade da utilização de tais implementações em aplicações com requisitos de tempo real, que necessitam de alto desempenho, utilizando FPGA de baixo custo, como: a quantidade de recursos de FPGA necessários na implementação de cada algoritmo, principalmente buscando minimizar a utilização de blocos DSP, a latência, e maximizar o throughput de processamento. / Independent Component Analysis (ICA) is one of the most widely used statistical methods for solving the problem of Blind Source Separation (BSS), which, along with other signal processing methods, faces new challenges with the increasing number of signal sources and samples available for processing, being the basis of applications with ever-greater performance requirements. The aim of this work is to study the FastICA and Joint Approximate Diagonalization of Eigen-matrices (JADE) algorithms and implement them in a Field-Programmable Gate Array (FPGA). Other studies have already been carried out to demonstrate the feasibility of implementing such algorithms in FPGAs, but they say little about the methodology used and implementation details such as the number of samples used, why the numerical representation was chosen, and the obtained throughput. The analysis carried out in this work demonstrates the behavior of the core of the algorithms when implemented using different representations, such as single-precision floating point (32 bits) and fixed point, with different numbers of samples and sources to be estimated. It was verified that these implementations are able to solve the BSS problem with good accuracy when compared with implementations of the same algorithms that employ a double-precision floating-point representation (64 bits).
Using Simulink® and Altera's DSP Builder® library to implement the models of each algorithm, it was possible to analyze other important aspects, in order to demonstrate the possibility of using such implementations in applications with real-time, high-performance requirements on low-cost FPGAs: the FPGA resources needed to implement each algorithm (mainly seeking to minimize the use of DSP blocks), the latency, and the maximization of processing throughput.
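The word-length trade-off studied here can be mimicked in software by rounding values to a fixed-point grid; the bit widths below are arbitrary examples, not the dissertation's chosen formats:

```python
import numpy as np

def to_fixed_point(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits
    (saturation and overflow handling omitted for brevity)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 1000)
err8 = np.max(np.abs(to_fixed_point(x, 7) - x))    # Q1.7-style grid
err16 = np.max(np.abs(to_fixed_point(x, 15) - x))  # Q1.15-style grid
```

The worst-case rounding error is half an LSB, i.e. 2^-(frac_bits+1), which is the kind of bound one weighs against DSP-block usage when choosing a word length.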
