101 |
Modeling spatial and temporal variabilities in hyperspectral image unmixing / Modélisation de la variabilité spectrale pour le démélange d’images hyperspectrales. Thouvenin, Pierre-Antoine, 17 October 2017 (has links)
Acquises dans plusieurs centaines de bandes spectrales contiguës, les images hyperspectrales permettent d'analyser finement la composition d'une scène observée. En raison de la résolution spatiale limitée des capteurs utilisés, le spectre d'un pixel d'une image hyperspectrale résulte de la composition de plusieurs signatures associées à des matériaux distincts. À ce titre, le démélange d'images hyperspectrales vise à estimer les signatures des différents matériaux observés ainsi que leur proportion dans chacun des pixels de l'image. Pour cette analyse, il est d'usage de considérer qu'une signature spectrale unique permet de décrire un matériau donné, ce qui est généralement intrinsèque au modèle de mélange choisi. Toutefois, la signature d'un matériau présente en pratique une variabilité spectrale qui peut être significative d'une image à une autre, voire au sein d'une même image. De nombreux paramètres peuvent en être cause, tels que les conditions d'acquisitions (e.g., conditions d'illumination locales), la déclivité de la scène observée ou des interactions complexes entre la lumière incidente et les éléments observés. À défaut d'être prises en compte, ces sources de variabilité perturbent fortement les signatures extraites, tant en termes d'amplitude que de forme. De ce fait, des erreurs d'estimation peuvent apparaître, qui sont d'autant plus importantes dans le cas de procédures de démélange non-supervisées. Le but de cette thèse consiste ainsi à proposer de nouvelles méthodes de démélange pour prendre en compte efficacement ce phénomène. Nous introduisons dans un premier temps un modèle de démélange original visant à prendre explicitement en compte la variabilité spatiale des spectres purs. Les paramètres de ce modèle sont estimés à l'aide d'un algorithme d'optimisation sous contraintes. 
Toutefois, ce modèle s'avère sensible à la présence de variations spectrales abruptes, telles que causées par la présence de données aberrantes ou l'apparition d'un nouveau matériau lors de l'analyse d'images hyperspectrales multi-temporelles. Pour pallier ce problème, nous introduisons une procédure de démélange robuste adaptée à l'analyse d'images multi-temporelles de taille modérée. Compte tenu de la dimension importante des données étudiées, notamment dans le cas d'images multi-temporelles, nous avons par ailleurs étudié une stratégie d'estimation en ligne des différents paramètres du modèle de mélange proposé. Enfin, ce travail se conclut par l'étude d'une procédure d'estimation distribuée asynchrone, adaptée au démélange d'un grand nombre d'images hyperspectrales acquises sur une même scène à différents instants. / Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing is aimed at identifying the reference spectral signatures composing the data -- referred to as endmembers -- and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption reveals a first limitation, since endmembers may vary locally within a single image, or from one image to another due to varying acquisition conditions, such as declivity and possibly complex interactions between the incident light and the observed materials. 
Unless properly accounted for, spectral variability can have a significant impact on the shape and the amplitude of the acquired signatures, thus inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent when characterizing endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis consists in introducing new models and unmixing procedures to account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, later extended to account for time-varying signatures. The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.
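The linear mixing model underlying this unmixing problem — each pixel spectrum as an abundance-weighted sum of endmember signatures, before any variability term is added — can be sketched in a few lines of numpy. This is an illustrative toy with made-up sizes and random signatures, not the thesis's PLMM algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

L, R, N = 50, 3, 100                      # bands, endmembers, pixels (arbitrary)
M = rng.uniform(0.1, 0.9, (L, R))         # endmember signatures, assumed known here
A = rng.dirichlet(np.ones(R), size=N).T   # abundances: nonnegative, sum to one
Y = M @ A + 0.001 * rng.standard_normal((L, N))   # noisy linear mixtures

# Unconstrained least-squares abundance estimate -- for illustration only;
# the thesis estimates endmembers, abundances and variability jointly,
# under nonnegativity and sum-to-one constraints.
A_hat = np.linalg.lstsq(M, Y, rcond=None)[0]
err = np.abs(A_hat - A).max()
```

With endmember variability the model becomes pixel-dependent, e.g. y_n = (M + dM_n) a_n + noise, which is what makes the joint estimation problem nonconvex and motivates the constrained-optimization, ADMM and online procedures described above.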
|
102 |
Calibração cega de receptores cinco-portas baseada em separação cega de fontes. Vidal, Francisco José Targino, 24 May 2013 (has links)
The exponential growth in the applications of radio frequency (RF) is accompanied
by great challenges, both in the more efficient use of the spectrum and in the design of new architectures for multi-standard receivers, or software-defined radio (SDR).
The key challenge in designing a software-defined radio architecture is the implementation of a wide-band receiver that is reconfigurable and offers low cost, low power consumption, a higher level of integration and flexibility.
As a new solution for SDR design, a direct demodulator architecture based on five-port technology, or multi-port demodulator, has been proposed. However, the use of the five-port as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal.
In this work, we propose to evaluate the performance of a blind calibration technique for the I/Q regeneration of the five-port downconversion, based on independent component analysis and requiring no knowledge of training or pilot sequences of the transmitted signal, by exploiting the statistical properties of the three output signals. / Estudos recentes apontam que o aumento nas aplicações de rádio frequência (RF) vem
acompanhado por grandes desafios tanto no uso eficiente do espectro eletromagnético quanto no projeto de novas arquiteturas para receptores multi-padrão, ou rádios definidos por software (RDS). O principal desafio da arquitetura física de um RDS é a implementação de um receptor banda-larga com características de baixo custo, baixo consumo, maior grau de integração e flexibilidade.
A arquitetura homodina, baseada na tecnologia cinco-portas, surge como uma alternativa para aplicações em rádios definidos por software. No entanto, a regeneração das componentes em fase e quadratura, no receptor cinco-portas, comumente denominada de calibração, constitui um dos maiores desafios na aplicação dessa tecnologia.
Os métodos de calibração propostos na literatura normalmente baseiam-se no conhecimento do modelo matemático do circuito, em que o mesmo é calibrado previamente (off-line), para um tipo de sinal com características específicas, ou em tempo real, com base no conhecimento da sequência de aprendizagem e do tipo de modulação. Nesse trabalho, é apresentada uma proposta de regeneração cega dessas componentes, para um receptor homodino cinco-portas, utilizando a abordagem denominada Separação Cega de Fontes (análise de componentes independentes - ICA), que explora as características estatísticas dos três sinais de saída do receptor cinco-portas. A validação dessa abordagem é realizada por meio de simulação e de resultados experimentais obtidos para o receptor cinco-portas implementado em tecnologia de microfita.
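To illustrate the I/Q-regeneration idea with ICA, here is a deliberately simplified numpy sketch: the three five-port outputs are modeled as unknown *linear* combinations of independent I and Q symbol streams (real five-port outputs come from power detectors, so this is only a stand-in for the statistical argument), and a kurtosis-based FastICA-style iteration recovers the components up to sign and permutation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical baseband I/Q components (independent, non-Gaussian):
# +/-1 symbol streams stand in for a real modulated signal.
I = rng.choice([-1.0, 1.0], n)
Q = rng.choice([-1.0, 1.0], n)
S = np.vstack([I, Q])

# Three "five-port outputs" as made-up linear combinations of I and Q
A = np.array([[1.0, 0.3],
              [0.4, 1.0],
              [0.7, -0.6]])
X = A @ S

# Whitening: project the 3 observations onto a 2-D white subspace
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
keep = np.argsort(d)[-2:]
Z = np.diag(d[keep] ** -0.5) @ E[:, keep].T @ X

# Symmetric FastICA with the cubic nonlinearity
W = rng.standard_normal((2, 2))
for _ in range(100):
    Y = W @ Z
    W_new = (Y ** 3) @ Z.T / n - 3 * np.diag((Y ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)   # symmetric orthogonalization
    W = U @ Vt
S_hat = W @ Z                          # recovered I/Q, up to sign/permutation
```

The sign and ordering of the recovered components remain undetermined — the usual ICA indeterminacies, which a real receiver must resolve separately.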
|
103 |
Separação cega de fontes aplicada no sensoriamento do espectro em rádio cognitivo / Blind source separation applied to spectrum sensing in cognitive radio. Rocha, Gustavo Nozella, 01 June 2012 (has links)
Cognitive radio technology has been an important area of research in telecommunications for solving the problem of spectrum scarcity. That's because, in addition to allowing dynamic allocation of the electromagnetic spectrum, cognitive radios must be able to identify the non-cognitive user's transmission on the channel. This operation is only possible through continuous sensing of the electromagnetic spectrum. In this context, this work presents a detailed study of spectrum sensing, an important stage in cognitive radio technology.
For the presentation of this work, a detailed study of software-defined radio (SDR) was carried out, without which it would be impossible to work with cognitive radios, since they are implemented by means of SDR technology. The GNU Radio and USRP tools, which together form an SDR solution, were also presented through the implementation of AM receivers.
The theoretical foundations of spectrum sensing and blind source separation (BSS) are presented, followed by a detailed study of the use of BSS for spectrum sensing. From the study of BSS, it is possible to use new metrics for deciding on the presence or absence of a primary user in the channel.
Throughout the study, simulations and implementations were conducted in MATLAB in order to reproduce various situations, and, finally, the outcomes and conclusions reached during the work are presented. / A tecnologia de rádio cognitivo tem sido uma importante área de pesquisa
em telecomunicações para a solução do problema da escassez espectral. Isto
porque, além de permitirem a alocação dinâmica do espectro eletromagnético,
os rádios cognitivos devem ser capazes de identificar as transmissões de
usuários não cognitivos no canal. Esta operação só é possível por meio do
sensoriamento contínuo do espectro eletromagnético. Neste contexto, este
trabalho apresenta um estudo detalhado sobre o sensoriamento de espectro,
uma importante etapa da tecnologia de rádios cognitivos. Para a apresentação deste trabalho foi realizado um estudo detalhado a respeito de rádio definido por software (SDR), sem o qual não seria possível o trabalho com rádios cognitivos, uma vez que este é implementado por meio da tecnologia de SDR. Também foram apresentadas as ferramentas GNU Radio e USRP, que, juntas, formam uma solução de SDR, por meio de implementações de receptores AM.
Os fundamentos teóricos de sensoriamento de espectro e separação cega
de fontes (BSS) são apresentados e, em seguida, é realizado um estudo aprofundado do uso de BSS para o sensoriamento espectral. A partir do estudo
de BSS, é possível utilizar novas métricas de decisão a respeito da presença
ou não de um usuário primário no canal.
Durante todo este trabalho foram realizadas implementações e simulações
no MATLAB com a finalidade de executar diversas situações e, finalmente,
são apresentados resultados verificados e conclusões obtidas neste trabalho. / Mestre em Ciências
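As a concrete example of a blind decision metric for spectrum sensing — a classical eigenvalue-ratio statistic, not necessarily the BSS-derived metrics studied in this work — the ratio of the largest to smallest eigenvalue of the sample covariance across antennas separates an idle channel from an occupied one without knowing the primary user's waveform. All parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

def max_min_eig_ratio(X):
    """Ratio of largest to smallest eigenvalue of the sample covariance
    across sensing antennas: close to 1 for pure white noise, large when
    a correlated primary-user signal occupies the band."""
    w = np.linalg.eigvalsh(np.cov(X))
    return w[-1] / w[0]

n_ant, n_samp = 4, 5000
noise = rng.standard_normal((n_ant, n_samp))           # idle channel

s = np.sin(2 * np.pi * 0.1 * np.arange(n_samp))        # hypothetical primary signal
h = np.array([[1.0], [0.8], [-0.6], [0.5]])            # fixed channel gains (toy)
occupied = h * s + 0.5 * rng.standard_normal((n_ant, n_samp))

r_idle = max_min_eig_ratio(noise)
r_busy = max_min_eig_ratio(occupied)
```

The decision is then a simple threshold on the ratio; because both eigenvalues scale with the noise power, the statistic needs no estimate of the noise floor.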
|
104 |
Análise de componentes independentes aplicada à separação de sinais de áudio. / Independent component analysis applied to separation of audio signals. Fernando Alves de Lima Moreto, 19 March 2008 (has links)
Este trabalho estuda o modelo de análise em componentes independentes (ICA) para misturas instantâneas, aplicado na separação de sinais de áudio. Três algoritmos de separação de misturas instantâneas são avaliados: FastICA, PP (Projection Pursuit) e PearsonICA, que possuem dois princípios básicos em comum: as fontes devem ser independentes estatisticamente e não-Gaussianas. Para analisar a capacidade de separação dos algoritmos foram realizados dois grupos de experimentos. No primeiro grupo foram geradas misturas instantâneas, sinteticamente, a partir de sinais de áudio pré-definidos. Além disso, foram geradas misturas instantâneas a partir de sinais com características específicas, também geradas sinteticamente, para avaliar o comportamento dos algoritmos em situações específicas. Para o segundo grupo foram geradas misturas convolutivas no laboratório de acústica do LPS. Foi proposto o algoritmo PP, baseado no método de Busca de Projeções comumente usado em sistemas de exploração e classificação, para separação de múltiplas fontes como alternativa ao modelo ICA. Embora o método PP proposto possa ser utilizado para separação de fontes, ele não pode ser considerado um método ICA e não é garantida a extração das fontes. Finalmente, os experimentos validam os algoritmos estudados. / This work studies Independent Component Analysis (ICA) for instantaneous mixtures, applied to audio signal (source) separation. Three instantaneous-mixture separation algorithms are considered: FastICA, PP (Projection Pursuit) and PearsonICA, sharing two basic principles: sources must be statistically independent and non-Gaussian. In order to analyze each algorithm's separation capability, two groups of experiments were carried out. In the first group, instantaneous mixtures were generated synthetically from predefined audio signals. 
Moreover, instantaneous mixtures were generated from synthetically produced signals with specific features, enabling analysis of the algorithms' behavior in specific situations. In the second group, convolutive mixtures were recorded in the acoustics laboratory of LPS at EPUSP. The PP algorithm is proposed, based on the Projection Pursuit technique usually applied in exploratory and clustering settings, for separation of multiple sources as an alternative to conventional ICA. Although the proposed PP algorithm can be applied to separate sources, it cannot be considered an ICA method, and source extraction is not guaranteed. Finally, experiments validate the studied algorithms.
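The kurtosis-maximizing projection at the core of a Projection-Pursuit-style separator can be sketched as follows. This is a toy with a sine tone and Laplacian noise standing in for audio sources; the actual PP algorithm proposed in this work may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
t = np.arange(n)

# Two synthetic sources with opposite kurtosis signs:
s1 = np.sin(2 * np.pi * 0.01 * t)       # sub-Gaussian (tone)
s2 = rng.laplace(size=n)                # super-Gaussian (speech-like)
S = np.vstack([s1, s2])

A = np.array([[0.8, 0.6], [0.5, -0.9]]) # instantaneous mixing (made up)
X = A @ S

# Whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X

# One-unit fixed-point iteration on the kurtosis contrast: the projection
# w converges to the direction of one of the original sources.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = w @ Z
    w = (Z * y ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)
y = w @ Z                                # one extracted component

c1 = abs(np.corrcoef(y, s1)[0, 1])
c2 = abs(np.corrcoef(y, s2)[0, 1])
```

Extracting the remaining source would repeat the iteration with deflation (orthogonalization against the directions already found).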
|
105 |
Chaînes de Markov cachées et séparation non supervisée de sources / Hidden Markov chains and unsupervised source separation. Rafi, Selwa, 11 June 2012 (has links)
Le problème de la restauration est rencontré dans des domaines très variés, notamment en traitement du signal et de l'image. Il correspond à la récupération des données originales à partir de données observées. Dans le cas de données multidimensionnelles, la résolution de ce problème peut se faire par différentes approches selon la nature des données, l'opérateur de transformation et la présence ou non de bruit. Dans ce travail, nous avons traité ce problème, d'une part, dans le cas des données discrètes en présence de bruit. Dans ce cas, le problème de restauration est analogue à celui de la segmentation. Nous avons alors exploité les modélisations dites chaînes de Markov couples et triplets qui généralisent les chaînes de Markov cachées. L'intérêt de ces modèles réside en la possibilité de généraliser la méthode de calcul de la probabilité a posteriori, ce qui permet une segmentation bayésienne. Nous avons considéré ces méthodes pour des observations bi-dimensionnelles et nous avons appliqué les algorithmes pour une séparation sur des documents issus de manuscrits scannés dans lesquels les textes des deux faces d'une feuille se mélangeaient. D'autre part, nous avons attaqué le problème de la restauration dans un contexte de séparation aveugle de sources. Une méthode classique en séparation aveugle de sources, connue sous l'appellation "Analyse en Composantes Indépendantes" (ACI), nécessite l'hypothèse d'indépendance statistique des sources. Dans des situations réelles, cette hypothèse n'est pas toujours vérifiée. Par conséquent, nous avons étudié une extension du modèle ACI dans le cas où les sources peuvent être statistiquement dépendantes. Pour ce faire, nous avons introduit un processus latent qui gouverne la dépendance et/ou l'indépendance des sources. Le modèle que nous proposons combine un modèle de mélange linéaire instantané tel que celui donné par ACI et un modèle probabiliste sur les sources avec variables cachées. 
Dans ce cadre, nous montrons comment la technique d'Estimation Conditionnelle Itérative permet d'affaiblir l'hypothèse usuelle d'indépendance en une hypothèse d'indépendance conditionnelle. / The restoration problem is usually encountered in various domains, in particular in signal and image processing. It consists in retrieving original data from a set of observed ones. For multidimensional data, the problem can be solved using different approaches depending on the data structure, the transformation system and the noise. In this work, we have first tackled the problem in the case of discrete data with a noisy model. In this context, the problem is similar to a segmentation problem. We have exploited Pairwise and Triplet Markov chain models, which generalize Hidden Markov chain models. The interest of these models consists in the possibility of generalizing the computation of the posterior probability, allowing one to perform Bayesian segmentation. We have considered these methods for two-dimensional signals and have applied the algorithms to the restoration of old hand-written documents which have been scanned and are subject to show-through effects. In the second part of this work, we have considered the restoration problem as a blind source separation problem. The well-known "Independent Component Analysis" (ICA) method requires the assumption that the sources be statistically independent. In practice, this condition is not always verified. Consequently, we have studied an extension of the ICA model in the case where the sources are not necessarily independent. We have introduced a latent process which controls the dependence and/or independence of the sources. The model that we propose combines a linear instantaneous mixing model similar to that of ICA and a probabilistic model on the sources with hidden variables. 
In this context, we show how the usual independence assumption can be weakened to a conditional independence assumption using the technique of Iterative Conditional Estimation.
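The Bayesian segmentation enabled by hidden Markov chains rests on computing marginal posteriors with the forward-backward recursion. A minimal sketch for a plain (not pairwise or triplet) chain with Gaussian emissions, using made-up parameters:

```python
import numpy as np

def hmc_posterior(obs_loglik, trans, prior):
    """Scaled forward-backward: marginal posteriors p(x_t | y_1..T) for a
    hidden Markov chain -- the quantity that MPM Bayesian segmentation
    thresholds at each site."""
    T, K = obs_loglik.shape
    B = np.exp(obs_loglik)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = prior * B[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass (normalized)
        alpha[t] = B[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # backward pass (normalized)
        beta[t] = trans @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Toy two-state chain with well-separated Gaussian emissions
rng = np.random.default_rng(4)
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
means = np.array([0.0, 3.0])
T = 500
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(2, p=trans[x[t - 1]])
y = means[x] + rng.standard_normal(T)

ll = -0.5 * (y[:, None] - means[None, :]) ** 2  # Gaussian log-likelihood, unit variance
post = hmc_posterior(ll, trans, np.array([0.5, 0.5]))
seg = post.argmax(axis=1)                        # MPM segmentation
acc = (seg == x).mean()
```

Pairwise and triplet chains generalize exactly this recursion, replacing p(x_t, x_{t+1}) and p(y_t | x_t) with richer joint laws while keeping the same linear-in-T cost.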
|
106 |
Porovnání úspěšnosti vícekanálových metod separace řečových signálů / Comparison of success rate of multi-channel methods of speech signal separation. Přikryl, Petr, January 2008
The separation of independent sources from mixed observed data is a fundamental problem in many practical situations. A typical example is speech recordings made in an acoustic environment in the presence of background noise or other speakers. Problems of signal separation are explored by a group of methods called Blind Source Separation. Blind Source Separation (BSS) consists in estimating a set of N unknown sources from P observations resulting from the mixture of these sources with an unknown background. Some existing solutions for instantaneous mixtures, namely Independent Component Analysis (ICA) and Time-Frequency Analysis (TF), are reviewed and implemented in Matlab. The acoustic signals recorded in a real environment are not instantaneous but convolutive mixtures. For this case, an ICA algorithm for the separation of convolutive mixtures in the frequency domain is introduced and implemented in Matlab. This diploma thesis examines the usability of the proposed separation algorithms and compares them.
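The frequency-domain trick behind convolutive ICA is that a long-window STFT turns convolution into per-bin multiplication, X(f,t) ≈ H(f)·S(f,t), so instantaneous ICA can be run bin by bin. A numpy check of that approximation, with toy filter and framing parameters chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(5)
n, nfft = 16384, 1024
s = rng.standard_normal(n)
h = rng.standard_normal(32) * np.hanning(32)   # short room-like impulse response
x = np.convolve(s, h)[:n]                      # convolutive "mixture" (one channel)

# Frame both signals; in each frequency bin the convolutive relation
# becomes (approximately) an instantaneous one: X(f,t) ~= H(f) * S(f,t)
win = np.hanning(nfft)
starts = range(0, n - nfft, nfft // 2)
S = np.array([np.fft.rfft(win * s[i:i + nfft]) for i in starts]).T
X = np.array([np.fft.rfft(win * x[i:i + nfft]) for i in starts]).T
H = np.fft.rfft(h, nfft)

rel_err = np.linalg.norm(X - H[:, None] * S) / np.linalg.norm(X)
```

The approximation holds because the filter (32 taps) is much shorter than the analysis window (1024 samples); the residual error is exactly the narrowband-approximation error that frequency-domain BSS must tolerate, and per-bin separation additionally introduces the permutation problem across bins.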
|
107 |
Blind Source Separation for the Processing of Contact-Less Biosignals. Wedekind, Daniel, 08 July 2021
(Spatio-temporale) Blind Source Separation (BSS) eignet sich für die Verarbeitung von Multikanal-Messungen im Bereich der kontaktlosen Biosignalerfassung. Ziel der BSS ist dabei die Trennung von (z.B. kardialen) Nutzsignalen und Störsignalen, die typisch für die kontaktlosen Messtechniken sind. Das Potential der BSS kann praktisch nur ausgeschöpft werden, wenn (1) ein geeignetes BSS-Modell verwendet wird, welches der Komplexität der Multikanal-Messung gerecht wird und (2) die unbestimmte Permutation unter den BSS-Ausgangssignalen gelöst wird, d.h. das Nutzsignal praktisch automatisiert identifiziert werden kann. Die vorliegende Arbeit entwirft ein Framework, mit dessen Hilfe die Effizienz von BSS-Algorithmen im Kontext des kamera-basierten Photoplethysmogramms bewertet werden kann. Empfehlungen zur Auswahl bestimmter Algorithmen im Zusammenhang mit spezifischen Signal-Charakteristiken werden abgeleitet. Außerdem werden im Rahmen der Arbeit Konzepte für die automatisierte Kanalauswahl nach BSS im Bereich der kontaktlosen Messung des Elektrokardiogramms entwickelt und bewertet. Neuartige Algorithmen basierend auf Sparse Coding erwiesen sich dabei als besonders effizient im Vergleich zu Standard-Methoden. / (Spatio-temporal) Blind Source Separation (BSS) provides a large potential to process distorted multichannel biosignal measurements in the context of novel contact-less recording techniques, separating distortions from the cardiac signal of interest. This potential can only be practically utilized (1) if a BSS model is applied that matches the complexity of the measurement, i.e. the signal mixture, and (2) if permutation indeterminacy is solved among the BSS output components, i.e. the component of interest can be practically selected. The present work, first, designs a framework to assess the efficacy of BSS algorithms in the context of the camera-based photoplethysmogram (cbPPG) and characterizes multiple BSS algorithms accordingly. 
Algorithm selection recommendations for certain mixture characteristics are derived. Second, the present work develops and evaluates concepts to solve permutation indeterminacy for BSS outputs of contact-less electrocardiogram (ECG) recordings. The novel approach based on sparse coding is shown to outperform the existing concepts of higher-order moments and frequency-domain features.
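A simple instance of solving the permutation problem by component selection — picking the BSS output with the most concentrated spectral peak in a plausible heart-rate band — can be sketched as below. This periodicity criterion is only illustrative and is not the sparse-coding selector developed in the thesis; all signals and rates are made up:

```python
import numpy as np

rng = np.random.default_rng(6)
fs, dur = 25.0, 60.0                  # hypothetical camera frame rate [Hz], duration [s]
t = np.arange(0, dur, 1 / fs)

# Toy BSS outputs: one pulse-like component (1.2 Hz ~ 72 bpm) and two distortions
components = np.vstack([
    rng.standard_normal(t.size),                              # broadband noise
    np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size),
    np.sin(2 * np.pi * 0.1 * t),                              # slow illumination drift
])

def select_pulse_component(Y, fs, band=(0.7, 4.0)):
    """Return the index of the component whose power spectrum is most
    concentrated at a single peak inside the plausible heart-rate band."""
    freqs = np.fft.rfftfreq(Y.shape[1], 1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    scores = []
    for y in Y:
        p = np.abs(np.fft.rfft(y - y.mean())) ** 2
        scores.append(p[in_band].max() / p.sum())   # peak-to-total power ratio
    return int(np.argmax(scores))

idx = select_pulse_component(components, fs)
```

The drift component is rejected because its peak falls outside the band, and the noise component because its spectrum has no dominant peak; a sparse-coding selector replaces this hand-crafted score with a learned dictionary criterion.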
|
108 |
Independent component analysis and slow feature analysis. Blaschke, Tobias, 25 May 2005 (has links)
Der Fokus dieser Dissertation liegt auf den Verbindungen zwischen ICA (Independent Component Analysis - Unabhängige Komponenten Analyse) und SFA (Slow Feature Analysis - Langsame Eigenschaften Analyse). Um einen Vergleich zwischen beiden Methoden zu ermöglichen wird CuBICA2, ein ICA-Algorithmus basierend nur auf Statistik zweiter Ordnung, d.h. Kreuzkorrelationen, vorgestellt. Dieses Verfahren minimiert zeitverzögerte Korrelationen zwischen Signalkomponenten, um die statistische Abhängigkeit zwischen denselben zu reduzieren. Zusätzlich wird eine alternative SFA-Formulierung vorgestellt, die mit CuBICA2 verglichen werden kann. Im Falle linearer Gemische sind beide Methoden äquivalent, falls nur eine einzige Zeitverzögerung berücksichtigt wird. Dieser Vergleich kann allerdings nicht auf mehrere Zeitverzögerungen erweitert werden. Für ICA lässt sich zwar eine einfache Erweiterung herleiten, aber eine ähnliche SFA-Erweiterung kann nicht im originären SFA-Sinne (SFA extrahiert die am langsamsten variierenden Signalkomponenten aus einem gegebenen Eingangssignal) interpretiert werden. Allerdings kann eine im SFA-Sinne sinnvolle Erweiterung hergeleitet werden, welche die enge Verbindung zwischen der Langsamkeit eines Signals (SFA) und der zeitlichen Vorhersehbarkeit desselben verdeutlicht. Im Weiteren werden CuBICA2 und SFA kombiniert. Das Resultat kann aus zwei Perspektiven interpretiert werden. Vom ICA-Standpunkt aus führt die Kombination von CuBICA2 und SFA zu einem Algorithmus, der das Problem der nichtlinearen blinden Signalquellentrennung löst. Vom SFA-Standpunkt aus ist die Kombination eine Erweiterung der Standard-SFA. Die Standard-SFA extrahiert langsam variierende Signalkomponenten, die untereinander unkorreliert sind, das heißt statistisch unabhängig bis zur zweiten Ordnung. Die Integration von ICA führt nun zu Signalkomponenten, die mehr oder weniger statistisch unabhängig sind. 
/ Within this thesis, we focus on the relation between independent component analysis (ICA) and slow feature analysis (SFA). To allow a comparison between both methods we introduce CuBICA2, an ICA algorithm based on second-order statistics only, i.e. cross-correlations. In contrast to algorithms based on higher-order statistics, not only instantaneous cross-correlations but also time-delayed cross-correlations are considered for minimization. CuBICA2 requires signal components with auto-correlation, as in SFA, and has the ability to separate source signal components that have a Gaussian distribution. Furthermore, we derive an alternative formulation of the SFA objective function and compare it with that of CuBICA2. In the case of a linear mixture the two methods are equivalent if a single time delay is taken into account. The comparison cannot be extended to the case of several time delays. For ICA a straightforward extension can be derived, but a similar extension to SFA yields an objective function that cannot be interpreted in the sense of SFA. However, a useful extension in the sense of SFA to more than one time delay can be derived. This extended SFA reveals the close connection between the slowness objective of SFA and temporal predictability. Furthermore, we combine CuBICA2 and SFA. The result can be interpreted from two perspectives. From the ICA point of view the combination leads to an algorithm that solves the nonlinear blind source separation problem. From the SFA point of view the combination of ICA and SFA is an extension to SFA in terms of statistical independence. Standard SFA extracts slowly varying signal components that are uncorrelated, meaning they are statistically independent up to second order. The integration of ICA leads to signal components that are more or less statistically independent.
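The single-time-delay case in which the two methods coincide corresponds to an AMUSE-style algorithm: whiten the data, then diagonalize one time-delayed correlation matrix. A numpy sketch with two toy sources whose autocorrelations differ (all parameters made up; this is not the CuBICA2 implementation itself):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
t = np.arange(n)

# Two sources with distinct temporal structure -- what second-order
# methods need; higher-order ICA would need non-Gaussianity instead.
s1 = np.sin(2 * np.pi * 0.01 * t)                                  # slow tone
s2 = np.convolve(rng.standard_normal(n), np.ones(20) / 20, "same") # smoothed noise
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.5], [0.3, 1.0]])     # made-up mixing matrix
X = A @ S

# Step 1: whitening
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X

# Step 2: eigendecomposition of a single time-delayed correlation matrix;
# its eigenvectors give the remaining rotation when the sources'
# delayed autocorrelations are distinct.
tau = 5
C_tau = Z[:, :-tau] @ Z[:, tau:].T / (n - tau)
C_tau = (C_tau + C_tau.T) / 2              # symmetrize the estimate
_, V = np.linalg.eigh(C_tau)
S_hat = V.T @ Z                            # sources up to sign/permutation/scale
```

Minimizing correlations at *several* delays — the CuBICA2 setting — replaces this single eigendecomposition with a joint approximate diagonalization, which is exactly where the equivalence with SFA breaks down.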
|
109 |
Estimation and separation of linear frequency-modulated signals in wireless communications using time-frequency signal processing. Nguyen, Linh-Trung, January 2004 (has links)
Signal processing has been playing a key role in providing solutions to key problems encountered in communications, in general, and in wireless communications, in particular. Time-Frequency Signal Processing (TFSP) provides effective tools for analyzing nonstationary signals, where the frequency content of signals varies in time, as well as for analyzing linear time-varying systems. This research aimed at exploiting the advantages of TFSP in dealing with nonstationary signals for the fundamental issues of signal processing, namely signal estimation and signal separation. In particular, it has investigated the problems of (i) the Instantaneous Frequency (IF) estimation of Linear Frequency-Modulated (LFM) signals corrupted by complex-valued zero-mean Multiplicative Noise (MN), and (ii) the Underdetermined Blind Source Separation (UBSS) of LFM signals, while focusing on the fast-growing area of Wireless Communications (WCom). A common problem in signal estimation is the estimation of the frequency of Frequency-Modulated signals, which are seen in many engineering and real-life applications. Accurate frequency estimation leads to accurate recovery of the true information. In some applications, random amplitude modulation shows up when the medium is dispersive and/or when the assumption of a point target is not valid; the original signal is considered to be corrupted by an MN process, seriously affecting the recovery of the information-bearing frequency. The IF estimation of nonstationary signals corrupted by complex-valued zero-mean MN was investigated in this research. We have proposed a Second-Order Statistics approach, rather than a Higher-Order Statistics approach, for IF estimation using Time-Frequency Distributions (TFDs). The main assumption was that the autocorrelation function of the MN is real-valued but not necessarily positive (i.e. the spectrum of the MN is symmetric but does not necessarily have its highest peak at zero frequency). 
The estimation performance was analyzed in terms of bias and variance, and compared across four different TFDs: the Wigner-Ville Distribution, Spectrogram, Choi-Williams Distribution and Modified B Distribution. To further improve the estimation, we proposed to use the Multiple Signal Classification algorithm and showed its better performance. It was shown that the Modified B Distribution performed best for Signal-to-Noise Ratios below 10 dB. In the issue of signal separation, a new research direction called Blind Source Separation (BSS) has emerged over the last decade. BSS is a fundamental technique in array signal processing aiming at recovering unobserved signals or sources from observed mixtures, exploiting only the assumption of mutual independence between the signals. The term "blind" indicates that neither the structure of the mixtures nor the source signals are known to the receivers. Applications of BSS are seen in, for example, radar and sonar, communications, speech processing, and biomedical signal processing. In the case of nonstationary signals, a TF structure forcing approach was introduced by Belouchrani and Amin by defining the Spatial Time-Frequency Distribution (STFD), which combines both TF diversity and spatial diversity. The benefit of STFD in an environment of nonstationary signals is the direct exploitation of the information brought by the nonstationarity of the signals. A drawback of most BSS algorithms is that they fail to separate sources in situations where there are more sources than sensors, referred to as UBSS. The UBSS of nonstationary signals was investigated in this research. We have presented a new approach for blind separation of nonstationary sources using their TFDs. The separation algorithm is based on a vector clustering procedure that estimates the source TFDs by grouping together the TF points corresponding to "closely spaced" spatial directions. 
Simulations illustrate the performance of the proposed method for the underdetermined blind separation of FM signals. The method developed in this research represents a new research direction for solving the UBSS problem. The successful results obtained on the above two problems have led to the conclusion that TFSP is useful for WCom. Future research directions were also proposed.
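The simplest TFD-based IF estimator — the peak of the spectrogram along time — can be sketched for a clean LFM signal as below. The thesis analyzes better-performing TFDs such as the Wigner-Ville and Modified B distributions, and the multiplicative-noise case; none of that is attempted here, and all parameters are made up:

```python
import numpy as np

fs = 1000.0                                # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
f0, k = 50.0, 200.0                        # start frequency [Hz], chirp rate [Hz/s]
x = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))   # LFM signal
# true instantaneous frequency: f(t) = f0 + k * t

# Spectrogram-peak IF estimator: short windows, argmax over frequency
nwin, hop = 128, 16
freqs = np.fft.rfftfreq(nwin, 1 / fs)
est_t, est_f = [], []
for i in range(0, t.size - nwin, hop):
    seg = x[i:i + nwin] * np.hanning(nwin)
    est_f.append(freqs[np.abs(np.fft.rfft(seg)).argmax()])
    est_t.append(t[i + nwin // 2])         # time stamp at window center
est_t = np.array(est_t)
est_f = np.array(est_f)

err = np.abs(est_f - (f0 + k * est_t)).max()   # worst-case IF error [Hz]
```

The error here is dominated by the window's frequency resolution (fs/nwin ≈ 7.8 Hz) and by the chirp sweeping within each window — precisely the bias/variance trade-off that motivates comparing different TFDs.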
|
110 |
Nedourčená slepá separace zvukových signálů / Underdetermined Blind Audio Signal Separation. Čermák, Jan, January 2008 (has links)
We often have to face the fact that several signals are mixed together in an unknown environment. The signals must first be extracted from the mixture in order to interpret them correctly. In the signal processing community, this problem is called blind source separation. This dissertation thesis deals with multi-channel separation of audio signals in a real environment, when the source signals outnumber the sensors. An introduction to blind source separation is presented in the first part of the thesis. The present state of separation methods is then analyzed. Based on this knowledge, separation systems implementing a fuzzy time-frequency mask are introduced. However, these methods still introduce nonlinear changes in the signal spectra, which can result in musical noise. In order to reduce musical noise, novel methods combining time-frequency binary masking and beamforming are introduced. The new separation system performs linear spatial filtering even if the source signals outnumber the sensors. Finally, the separation systems are evaluated by objective and subjective tests in the last part of the thesis.
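Time-frequency masking, the device the separation systems above build on, assigns each STFT point to one source; it works even in the underdetermined case because audio sources are sparse and rarely overlap at the same TF point. A minimal single-channel sketch with an oracle mask (a real system must estimate the mask blindly, e.g. from spatial cues; all signals and parameters are made up):

```python
import numpy as np

fs, n = 8000, 8000
t = np.arange(n) / fs

# Two sources that overlap in time but are disjoint in frequency --
# an idealized version of the sparsity assumption behind TF masking
s1 = np.sin(2 * np.pi * 440 * t)
s2 = np.sin(2 * np.pi * 1200 * t)
x = s1 + s2                            # single-channel mixture (underdetermined)

nfft, hop = 512, 256
win = np.hanning(nfft)
Xf = np.array([np.fft.rfft(win * x[i:i + nfft])
               for i in range(0, n - nfft, hop)])   # frames x bins
freqs = np.fft.rfftfreq(nfft, 1 / fs)

# Oracle binary mask: keep TF points below 800 Hz for source 1
mask = freqs < 800.0
S1f = Xf * mask

# Energy check: the masked spectrogram keeps source 1, rejects source 2
e_in = np.abs(Xf[:, freqs > 1000]).sum()     # source-2 energy before masking
e_out = np.abs(S1f[:, freqs > 1000]).sum()   # ... and after masking
```

The hard zero/one decision is what produces musical noise when the mask is estimated imperfectly — the motivation for the fuzzy masks and the masking-plus-beamforming combination described above.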
|