1. Convex geometry-based blind separation of quasi-stationary sources / CUHK electronic theses & dissertations collection. January 2014.
Fu, Xiao. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 145-154). / Abstracts also in Chinese. / Title from PDF title page (viewed on 15 September 2016).
2. Generative rhythmic models / Rae, Alexander. January 2009.
Thesis (M. S.)--Music, Georgia Institute of Technology, 2009. / Committee Chair: Chordia, Parag; Committee Member: Freeman, Jason; Committee Member: Weinberg, Gil.
3. Blind signal separation / Lu, Jun; Luo, Zhi-Quan. January 1900.
Thesis (Ph.D.)--McMaster University, 2004. / Advisor: Zhi-Quan (Tom) Luo. Includes bibliographical references (leaves 90-97). Also available via World Wide Web.
4. Fetal ECG Extraction Using Nonlinear Noise Reduction and Blind Source Separation / Yuki, Shingo. 08 1900.
The fetal electrocardiogram carries information about the health of the fetus. Currently, the fetal ECG is recorded directly from the scalp of the baby during labour. However, it has been shown that the fetal ECG can also be measured using surface electrodes attached to a pregnant mother's abdomen. The advantage of this method is that the fetal ECG can be measured noninvasively, before the onset of labour. The difficulty lies in isolating the fetal ECG from the extraneous signals that are recorded simultaneously with it. Several signal processing methodologies have been put forth to extract the fetal ECG component from the mixture of signals. Two recent techniques are a scheme previously used for nonlinear noise reduction of deterministically chaotic signals, and a blind source separation technique called independent component analysis. In this thesis, we describe the significance of the fetal electrocardiogram as a diagnostic tool in medicine, give a brief overview of the theory behind the nonlinear noise reduction technique and blind source separation, and present results from processing synthetic and real data with both techniques. We find that although the noise reduction technique performs adequately, blind source separation is faster and more robust on the same data. The two techniques can be used in tandem to arrive at an approximate fetal ECG signal, which can then be analyzed further by calculating, for example, the fetal heart rate. / Thesis / Master of Engineering (ME)
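As a hedged illustration of the blind source separation step described above (not the thesis's actual pipeline), the following sketch applies FastICA from scikit-learn to a synthetic multichannel abdominal mixture; the channel count, sampling rate, toy waveforms, and the heuristic used to pick the fetal component are all illustrative assumptions.

# Illustrative sketch: ICA-based separation of a synthetic abdominal ECG mixture.
# All signals below are toy stand-ins, not real recordings.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 250                                            # assumed sampling rate (Hz)
t = np.arange(10 * fs) / fs                         # 10 s of data
maternal = np.sin(2 * np.pi * 1.2 * t)              # ~72 bpm maternal rhythm (toy)
fetal = 0.3 * np.sin(2 * np.pi * 2.3 * t)           # ~138 bpm fetal rhythm (toy)
mixing = rng.standard_normal((8, 2))                # 8 assumed abdominal channels
X = mixing @ np.vstack([maternal, fetal]) + 0.1 * rng.standard_normal((8, t.size))

ica = FastICA(n_components=4, random_state=0)       # FastICA expects samples x channels
sources = ica.fit_transform(X.T).T                  # estimated sources x samples

# Heuristic component selection: pick the source whose dominant rate is near 2.3 Hz.
spectra = np.abs(np.fft.rfft(sources, axis=1))
freqs = np.fft.rfftfreq(sources.shape[1], 1 / fs)
peak_rates = freqs[spectra.argmax(axis=1)]
fetal_idx = int(np.argmin(np.abs(peak_rates - 2.3)))
print("dominant rates (Hz):", np.round(peak_rates, 2), "-> fetal candidate:", fetal_idx)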
5. Unsupervised Signal Deconvolution for Multiscale Characterization of Tissue Heterogeneity / Wang, Niya. 29 June 2015.
Characterizing complex tissues requires precise identification of distinctive cell types, cell-specific signatures, and subpopulation proportions. Tissue heterogeneity, arising from multiple cell types, is a major confounding factor in studying individual subpopulations and repopulation dynamics, and it cannot be resolved directly by most global molecular and genomic profiling methods. While signal deconvolution has widespread applications in many real-world problems, existing methods have significant limitations, mainly unrealistic assumptions and heuristics, leading to inaccurate or incorrect results. In this study, we formulate the signal deconvolution task as a blind source separation problem and develop novel unsupervised deconvolution methods within the Convex Analysis of Mixtures (CAM) framework for characterizing multi-scale tissue heterogeneity. We also exploratorily test the application of the Significant Intercellular Genomic Heterogeneity (SIGH) method.
Unlike existing deconvolution methods, CAM can identify tissue-specific markers directly from mixed signals, a critical task, without relying on any prior knowledge. Fundamental to the success of our approach is the geometric exploitation of tissue-specific markers and signal non-negativity. Using a well-grounded mathematical framework, we have proved new theorems showing that the scatter simplex of the mixed signals is a rotated and compressed version of the scatter simplex of the pure signals, and that the markers resident at the vertices of the scatter simplex are the tissue-specific markers. The algorithm works by geometrically locating the vertices of the scatter simplex of the measured signals and their resident markers. The minimum description length (MDL) criterion is applied to determine the number of tissue populations in the sample. Based on the CAM principle, we integrated the nonnegative independent component analysis (nICA) and convex matrix factorization (CMF) methods, developed the CAM-nICA/CMF algorithms, and applied them to multiple gene expression, methylation and protein datasets, achieving very promising results validated against the ground truth or by gene enrichment analysis. We also integrated CAM with compartment modeling (CM) to develop the multi-tissue compartment modeling (MTCM) algorithm, which we tested on real DCE-MRI data derived from mouse models, with consistent and plausible results. In addition, we developed an open-source R-Java software package that implements the various CAM-based algorithms, including an R package approved by Bioconductor specifically for tumor-stroma deconvolution.
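As a rough sketch of the geometric intuition described above, and not the actual CAM algorithm (which adds clustering, noise modelling, and MDL-based selection of the number of tissue populations), the code below normalizes a synthetic mixed expression matrix onto its scatter simplex and picks approximate vertices with a simple successive projection heuristic; the data dimensions and planted markers are illustrative assumptions.

# Hypothetical sketch of vertex finding on the scatter simplex of mixed signals.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples, n_tissues = 500, 12, 3
pure = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_tissues))   # synthetic pure profiles
pure[:n_tissues, :] = 10.0 * np.eye(n_tissues)                      # plant one ideal marker per tissue
props = rng.dirichlet(np.ones(n_tissues), size=n_samples)           # mixing proportions (rows sum to 1)
X = pure @ props.T                                                  # mixed expression: genes x samples

# Perspective projection: scale each gene's profile to unit sum, collapsing mixtures onto a simplex.
R = X / X.sum(axis=1, keepdims=True)

def successive_projection(R, k):
    """Greedy vertex search: repeatedly take the farthest point (largest norm),
    then project the cloud onto the orthogonal complement of that vertex."""
    residual = R.copy()
    vertices = []
    for _ in range(k):
        idx = int(np.argmax(np.linalg.norm(residual, axis=1)))
        vertices.append(idx)
        v = residual[idx] / np.linalg.norm(residual[idx])
        residual = residual - np.outer(residual @ v, v)
    return vertices

marker_idx = successive_projection(R, n_tissues)
print("genes located at the simplex vertices (candidate tissue-specific markers):", marker_idx)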
While intercellular heterogeneity is often manifested by multiple clones with distinct sequences, systematic efforts to characterize intercellular genomic heterogeneity must effectively distinguish significant genuine clonal sequences from probabilistic fake derivatives. Based on the preliminary studies originally targeting immune T-cells, we tested and applied the SIGH algorithm to characterize intercellular heterogeneity directly from mixed sequencing reads. SIGH works by exploiting the statistical differences in both the sequencing error rates at different nucleobases and the read counts of fake sequences in relation to genuine clones of variable abundance. / Ph. D.
6. Bayesian methods for sparse data decomposition and blind source separation / Roussos, Evangelos. January 2012.
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or 'sources' via a generally unknown mapping. Reconstructing sources from their mixtures is, in general, an extremely ill-posed problem. However, solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner.

This thesis proposes the use of sparse statistical decomposition methods for exploratory analysis of datasets. We make use of the fact that many natural signals have a sparse representation in appropriate signal dictionaries. The work described in this thesis is mainly driven by problems in the analysis of large datasets, such as those from functional magnetic resonance imaging of the brain, with the neuroscientific goal of extracting relevant 'maps' from the data.

We first propose Bayesian Iterative Thresholding, a general method for solving blind linear inverse problems under sparsity constraints, and we apply it to the problem of blind source separation. The algorithm is derived by maximizing a variational lower bound on the likelihood and generalizes the recently proposed method of Iterative Thresholding. The probabilistic view enables us to automatically estimate various hyperparameters, such as those that control the shape of the prior and the threshold, in a principled manner.

We then derive an efficient fully Bayesian sparse matrix factorization model for exploratory analysis and modelling of spatio-temporal data such as fMRI. We view sparse representation as a problem in Bayesian inference, following a machine learning approach, and construct a structured generative latent-variable model employing adaptive sparsity-inducing priors. The construction allows for automatic complexity control and regularization as well as denoising.

The performance and utility of the proposed algorithms are demonstrated on a variety of experiments using both simulated and real datasets. Experimental results with benchmark datasets show that the proposed algorithms outperform state-of-the-art tools for model-free decompositions such as independent component analysis.
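For context, the non-Bayesian ancestor of the method proposed above can be sketched in a few lines: plain iterative soft-thresholding (ISTA) for a sparse linear inverse problem y = Ax + noise, with a fixed threshold chosen by hand. The Bayesian Iterative Thresholding of this thesis differs in that such hyperparameters are estimated automatically; the dictionary, dimensions, and regularization weight below are illustrative assumptions.

# Illustrative sketch of plain iterative soft-thresholding (ISTA) for y = A x + noise.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 60, 200, 8                                # measurements, atoms, true sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)        # toy dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam=0.02, n_iter=500):
    """Minimize 0.5 * ||y - A x||^2 + lam * ||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L               # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step
    return x

x_hat = ista(A, y)
print("recovered support:", sorted(np.flatnonzero(np.abs(x_hat) > 1e-3)))
print("true support:     ", sorted(np.flatnonzero(x_true)))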
7. Analysis of free radical characteristics in biological systems based on EPR spectroscopy, employing blind source separation techniques / Ren, Jiyun. January 2006.
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
8. Independent component analysis applications in CDMA systems / Kalkan, Olcay; Altınkaya, Mustafa Aziz. January 2004.
Thesis (Master)--İzmir Institute of Technology, İzmir, 2004. / Includes bibliographical references (leaves 56).
9. Robust binaural noise-reduction strategies with binaural-hearing-aid constraints: design, analysis and practical considerations / Marin, Jorge I. 22 May 2012.
The objective of this dissertation research is to investigate noise reduction methods for binaural hearing aids based on array and statistical signal processing and inspired by a human auditory model. In digital hearing aids, wide dynamic range compression (WDRC) is the most successful technique for dealing with monaural hearing losses, and this WDRC processing is usually performed after a monaural noise reduction algorithm. When hearing losses are present in both ears, i.e., a binaural hearing loss, independent monaural hearing aids have been shown to be uncomfortable for most users, who prefer processing that synchronizes the two hearing devices. In addition, psychoacoustic studies have found that in hostile environments, e.g., babble noise at very low SNR, users prefer linear amplification to WDRC. In this sense, the noise reduction algorithm becomes an important component of a digital hearing aid, providing improvements in speech intelligibility and user comfort. Including a wireless link between the two hearing aids offers new ways to implement more efficient methods for reducing background noise and coordinating processing for the two ears. This approach, called a binaural hearing aid, has recently been introduced in some commercial products, but with very simple processing strategies. This research analyzes existing binaural noise-reduction techniques, proposes novel perceptually inspired methods based on blind source separation (BSS) and the multichannel Wiener filter (MWF), and identifies strategies for the real-time implementation of these methods. The proposed methods perform efficient spatial filtering, improve SNR and speech intelligibility, minimize block-processing artifacts, and can be implemented on low-power architectures.
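A minimal, hedged sketch of one core ingredient mentioned above: a speech-distortion-weighted multichannel Wiener filter computed for a single frequency bin from (here synthetic) speech and noise covariance estimates. The four-microphone setup, the rank-one speech model, and the trade-off parameter are assumptions for illustration only; the binaural constraints and perceptual weighting studied in the dissertation are not modelled.

# Hypothetical single-bin multichannel Wiener filter (SDW-MWF) sketch.
import numpy as np

rng = np.random.default_rng(3)
M = 4                                               # assumed microphones (two per ear)

d = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # toy speech steering vector
Rss = 1.0 * np.outer(d, d.conj())                   # rank-one speech covariance at this bin
Rvv = 0.2 * np.eye(M, dtype=complex)                # noise covariance estimate
mu = 1.0                                            # noise-reduction / speech-distortion trade-off
e_ref = np.zeros(M, dtype=complex)
e_ref[0] = 1.0                                      # reference microphone

# SDW-MWF filter: w = (Rss + mu * Rvv)^(-1) Rss e_ref
w = np.linalg.solve(Rss + mu * Rvv, Rss @ e_ref)

# Apply the filter to one toy noisy snapshot y = d * s + v at this bin.
s = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
v = np.sqrt(0.1) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = d * s + v
print("reference mic:", y[0], " filtered output:", w.conj() @ y)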
10. System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis / Wada, Ted S. 28 June 2012.
We live in a dynamic world full of noises and interferences. The conventional acoustic echo cancellation (AEC) framework based on the least mean square (LMS) algorithm by itself lacks the ability to handle many secondary signals that interfere with the adaptive filtering process, e.g., local speech and background noise. In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the AEC problem.
We first propose the residual echo enhancement (REE) technique, which utilizes the error recovery nonlinearity (ERN) to "enhance" the filter estimation error prior to the filter adaptation. The single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. SBSS optimized via independent component analysis (ICA) leads to the system combination of the LMS algorithm with the ERN, which allows for continuous and stable adaptation even during double talk. Second, we extend the system perspective to the decorrelation problem for AEC, where we show that the REE procedure can be applied effectively in a multi-channel AEC (MCAEC) setting to indirectly assist the recovery of AEC performance lost to inter-channel correlation, known generally as the "non-uniqueness" problem. We develop a novel, computationally efficient technique of frequency-domain resampling (FDR) that alleviates the non-uniqueness problem directly while introducing minimal distortion to signal quality and statistics. We also apply the system approach to the multi-delay filter (MDF), which suffers from the inter-block correlation problem. Finally, we generalize the MCAEC problem in the SBSS framework and discuss many issues related to the implementation of an SBSS system. We propose a constrained batch-online implementation of SBSS that stabilizes the convergence behavior even in the worst-case scenario of a single far-end talker together with the non-uniqueness condition on the far-end mixing system.
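To make the idea concrete, here is a rough sketch, under synthetic assumptions, of a time-domain NLMS echo canceller whose update passes the residual through a simple soft clip standing in for the error recovery nonlinearity; the toy echo path, the double-talk burst, and the clipping threshold are illustrative choices, not the thesis's actual design, and the SBSS/ICA machinery is not reproduced.

# Hypothetical sketch: NLMS echo cancellation with a robustifying error nonlinearity.
import numpy as np

rng = np.random.default_rng(4)
N, L = 20000, 128                                   # samples, adaptive filter length
x = rng.standard_normal(N)                          # far-end (loudspeaker) signal
h = 0.5 * rng.standard_normal(L) * np.exp(-np.arange(L) / 20.0)   # toy echo path
echo = np.convolve(x, h)[:N]
near = np.zeros(N)
near[8000:12000] = 0.5 * rng.standard_normal(4000)  # near-end burst ("double talk")
d = echo + near + 0.01 * rng.standard_normal(N)     # microphone signal

def soft_clip(e, c=0.5):
    """Stand-in for an error recovery nonlinearity: bound large residuals
    (e.g., during double talk) so they do not destabilize the adaptation."""
    return np.clip(e, -c, c)

w = np.zeros(L)
mu, eps = 0.5, 1e-6
for n in range(L, N):
    x_vec = x[n - L + 1:n + 1][::-1]                # most recent L far-end samples
    e = d[n] - w @ x_vec                            # residual = near end + residual echo
    w += mu * soft_clip(e) * x_vec / (x_vec @ x_vec + eps)   # NLMS step on the shaped error

echo_hat = np.convolve(x, w)[:N]
erle = 10 * np.log10(np.mean(echo[-2000:] ** 2) / np.mean((echo - echo_hat)[-2000:] ** 2))
print(f"approximate echo return loss enhancement over the final 2000 samples: {erle:.1f} dB")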
The proposed techniques are developed from a pragmatic standpoint, motivated by real-world problems in acoustic and audio signal processing. Generalizing the orthogonality principle to the system level of an AEC problem allows us to relate AEC to source separation, which seeks to maximize the independence, and hence implicitly the orthogonality, not only between the error signal and the far-end signal, but among all signals involved. The system approach, of which the REE paradigm is just one realization, enables many traditional signal enhancement techniques to be encompassed in an analytically consistent yet practically effective manner for solving the enhancement problem in very noisy and disruptive acoustic mixing environments.