About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Blind signal separation /

Lu, Jun. 2004 (has links)
Thesis (Ph.D.)--McMaster University, 2004. / Advisor: Zhi-Quan (Tom) Luo. Includes bibliographical references (leaves 90-97). Also available via World Wide Web.
2

A computer model of auditory stream segregation

Beauvois, Michael W. January 1991 (has links)
A simple computer model is described that takes a novel approach to the problem of accounting for perceptual coherence among successive pure tones of changing frequency by using simple physiological principles that operate at a peripheral, rather than a central, level. The model is able to reproduce a number of streaming phenomena found in the literature using the same parameter values. These are: (1) the build-up of streaming over time; (2) the temporal coherence and fission boundaries of human listeners; (3) the ambiguous region; and (4) the trill threshold. In addition, the principle of excitation integration used in the model can account for auditory grouping on the basis of the Gestalt perceptual principles of closure, proximity, continuity, and good continuation, as well as for the pulsation threshold. The examples of Gestalt auditory grouping accounted for by the excitation integration principle indicate that the predictive power of the model would be considerably enhanced by the addition of a cross-channel grouping mechanism that worked on the basis of common onsets and offsets, as more complex stimuli could then be processed by the model.
3

Audio motif spotting for guided source separation: application to movie soundtracks

Souviraà-Labastie, Nathan 23 November 2015 (has links)
In audio signal processing, source separation consists in recovering the different audio sources that compose a given observed audio mixture. Many techniques exist to estimate these sources, and the more information about them is taken into account, the more likely the separation is to succeed. One way to incorporate information about a source is to use a reference signal that gives a first approximation of that source. This thesis explores the theoretical and applied aspects of reference-guided audio source separation. The proposed approach, called SPOtted REference based Separation (SPORES), examines the particular case where the references are obtained automatically by motif spotting, i.e., by searching for similar content. Such an approach is useful for content with a certain redundancy, or when a large database is available. Fortunately, the current context often puts us in one of these two situations, so similar motifs can frequently be found elsewhere. The primary objective of this work is to provide a broad theoretical framework that, once established, will facilitate the efficient development of processing tools for various audio content. The second objective is the specific application of this approach to movie soundtracks, for example their conversion to the 5.1 surround format used by home cinema systems.
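The abstract does not spell out the separation step itself, so the following Python sketch only illustrates the general idea of reference-guided separation: a spotted reference (a time-aligned, similar-sounding occurrence of the target) provides a rough spectral model of the target source, which is turned into a soft Wiener-style mask over the mixture. The function name and parameters here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.signal import stft, istft

def reference_guided_separation(mixture, reference, fs, nperseg=1024):
    """Rough sketch of reference-guided source separation.

    The spotted reference is assumed to be a time-aligned, similar-sounding
    approximation of the target source (e.g. another occurrence of the same
    musical motif). Its magnitude spectrogram serves as a crude source model
    for a soft Wiener-style mask; the residual is treated as the background.
    """
    _, _, X = stft(mixture, fs, nperseg=nperseg)    # mixture spectrogram
    _, _, R = stft(reference, fs, nperseg=nperseg)  # reference spectrogram

    # Align lengths (the reference may be shorter or longer than the mixture).
    frames = min(X.shape[1], R.shape[1])
    X, R = X[:, :frames], R[:, :frames]

    target_power = np.abs(R) ** 2
    backgr_power = np.maximum(np.abs(X) ** 2 - target_power, 1e-12)

    mask = target_power / (target_power + backgr_power)   # soft Wiener mask
    _, target = istft(mask * X, fs, nperseg=nperseg)
    _, background = istft((1.0 - mask) * X, fs, nperseg=nperseg)
    return target, background
```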
4

Convex geometry-based blind separation of quasi-stationary sources / CUHK electronic theses & dissertations collection

January 2014 (has links)
Fu, Xiao. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 145-154). / Abstracts also in Chinese. / Title from PDF title page (viewed on 15 September 2016).
5

A multi-commodity recyclables collection model using partitioned vehicles /

Mohanty, Natali. January 2005 (has links)
Thesis (Ph. D.)--University of Rhode Island, 2005. / Typescript. Includes bibliographical references (leaves 122-126).
6

Mixture of beamformers for speech separation and extraction

Dmour, Mohammad A. January 2010 (has links)
In many audio applications, the signal of interest is corrupted by acoustic background noise, interference, and reverberation. The presence of these contaminations can significantly degrade the quality and intelligibility of the audio signal. This makes it important to develop signal processing methods that can separate the competing sources and extract a source of interest. The estimated signals may then be directly listened to, transmitted, or further processed, giving rise to a wide range of applications such as hearing aids, noise-cancelling headphones, human-computer interaction, surveillance, and hands-free telephony. Many of the existing approaches to speech separation/extraction rely on beamforming techniques. These techniques approach the problem from a spatial point of view; a microphone array is used to form a spatial filter which can extract a signal from a specific direction and reduce the contamination of signals from other directions. However, when there are fewer microphones than sources (the underdetermined case), perfect attenuation of all interferers becomes impossible and only partial interference attenuation can be achieved. In this thesis, we present a framework which extends the use of beamforming techniques to underdetermined speech mixtures. We describe a frequency-domain non-linear mixture of beamformers that can extract a speech source from a known direction. Our approach models the data in each frequency bin via Gaussian mixture distributions, which can be learned using the expectation-maximization algorithm. The model learning is performed using the observed mixture signals only, and no prior training is required. The signal estimator comprises a set of minimum mean square error (MMSE), minimum variance distortionless response (MVDR), or minimum power distortionless response (MPDR) beamformers. In order to estimate the signal, all beamformers are concurrently applied to the observed signal, and the weighted sum of the beamformers' outputs is used as the signal estimator, where the weights are the estimated posterior probabilities of the Gaussian mixture states. These weights are specific to each time-frequency point. The resulting non-linear beamformers do not need to know or estimate the number of sources, and can be applied to microphone arrays with two or more microphones in arbitrary array configurations. We test and evaluate the described methods on underdetermined speech mixtures. Experimental results for the non-linear beamformers in underdetermined mixtures with room reverberation confirm their capability to successfully extract speech sources.
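As a concrete illustration of the posterior-weighted combination described above, the sketch below assembles a mixture of MVDR beamformers for a single frequency bin. It is a simplified reading of the approach, assuming the target steering vector is known and using a Gaussian mixture fitted on stacked real/imaginary parts to obtain state posteriors; the thesis's actual model (complex-domain mixtures, MMSE/MPDR variants) differs in detail, and all names below are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_of_mvdr_beamformers(X_bin, steering, n_states=4):
    """Posterior-weighted mixture of MVDR beamformers for one frequency bin.

    X_bin    : (frames, mics) complex STFT vectors observed in this bin
    steering : (mics,) complex steering vector towards the known target direction
    """
    frames, mics = X_bin.shape

    # Fit a GMM on real/imaginary parts to obtain per-frame state posteriors.
    features = np.hstack([X_bin.real, X_bin.imag])
    gmm = GaussianMixture(n_components=n_states, covariance_type='full',
                          random_state=0)
    posteriors = gmm.fit(features).predict_proba(features)   # (frames, states)

    # One MVDR beamformer per state, using that state's spatial covariance.
    outputs = np.zeros((frames, n_states), dtype=complex)
    for k in range(n_states):
        w_k = posteriors[:, k]
        R_k = (X_bin.T * w_k) @ X_bin.conj() / max(w_k.sum(), 1e-12)
        R_k += 1e-6 * np.eye(mics)                 # diagonal loading for stability
        num = np.linalg.solve(R_k, steering)
        w_mvdr = num / (steering.conj() @ num)     # distortionless towards target
        outputs[:, k] = X_bin @ w_mvdr.conj()      # beamformer output per frame

    # Posterior-weighted sum of the beamformer outputs at each frame.
    return np.sum(posteriors * outputs, axis=1)
```

In practice the same routine would be applied independently to every frequency bin of the multichannel STFT, and the per-bin outputs assembled back into a time-domain estimate of the target speech.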
7

Generative rhythmic models

Rae, Alexander. January 2009 (has links)
Thesis (M. S.)--Music, Georgia Institute of Technology, 2009. / Committee Chair: Chordia, Parag; Committee Member: Freeman, Jason; Committee Member: Weinberg, Gil.
8

Unsupervised Signal Deconvolution for Multiscale Characterization of Tissue Heterogeneity

Wang, Niya 29 June 2015 (has links)
Characterizing complex tissues requires precise identification of distinctive cell types, cell-specific signatures, and subpopulation proportions. Tissue heterogeneity, arising from multiple cell types, is a major confounding factor in studying individual subpopulations and repopulation dynamics. Tissue heterogeneity cannot be resolved directly by most global molecular and genomic profiling methods. While signal deconvolution has widespread applications in many real-world problems, existing methods have significant limitations, mainly unrealistic assumptions and heuristics, leading to inaccurate or incorrect results. In this study, we formulate the signal deconvolution task as a blind source separation problem and develop novel unsupervised deconvolution methods within the Convex Analysis of Mixtures (CAM) framework for characterizing multi-scale tissue heterogeneity. We also exploratorily test the application of the Significant Intercellular Genomic Heterogeneity (SIGH) method. Unlike existing deconvolution methods, CAM can identify tissue-specific markers directly from mixed signals, a critical task, without relying on any prior knowledge. Fundamental to the success of our approach is a geometric exploitation of tissue-specific markers and signal non-negativity. Using a well-grounded mathematical framework, we have proved new theorems showing that the scatter simplex of mixed signals is a rotated and compressed version of the scatter simplex of pure signals, and that the resident markers at the vertices of the scatter simplex are the tissue-specific markers. The algorithm works by geometrically locating the vertices of the scatter simplex of measured signals and their resident markers. The minimum description length (MDL) criterion is applied to determine the number of tissue populations in the sample. Based on the CAM principle, we integrated nonnegative independent component analysis (nICA) and convex matrix factorization (CMF) methods, developed the CAM-nICA/CMF algorithm, and applied it to multiple gene expression, methylation, and protein datasets, achieving very promising results validated by ground truth or gene enrichment analysis. We integrated CAM with compartment modeling (CM) and developed the multi-tissue compartment modeling (MTCM) algorithm, tested on real DCE-MRI data derived from mouse models with consistent and plausible results. We also developed an open-source R-Java software package that implements various CAM-based algorithms, including an R package approved by Bioconductor specifically for tumor-stroma deconvolution. While intercellular heterogeneity is often manifested by multiple clones with distinct sequences, systematic efforts to characterize intercellular genomic heterogeneity must effectively distinguish significant genuine clonal sequences from probabilistic fake derivatives. Building on preliminary studies originally targeting immune T-cells, we tested and applied the SIGH algorithm to characterize intercellular heterogeneity directly from mixed sequencing reads. SIGH works by exploiting the statistical differences in both the sequencing error rates at different nucleobases and the read counts of fake sequences in relation to genuine clones of variable abundance. / Ph. D.
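The geometric core of the approach, locating the vertices of the scatter simplex of the mixed signals, can be sketched in a few lines. The helper below is a hypothetical illustration: it projects each gene onto the simplex by row normalization and then uses a successive-projection heuristic to pick the most extreme genes as candidate tissue-specific markers. CAM's actual vertex search (clustering plus convex-analysis criteria and MDL model selection) is more involved.

```python
import numpy as np

def find_simplex_vertex_markers(X, n_tissues):
    """Sketch of the geometric idea behind CAM-style deconvolution.

    X : (genes, samples) non-negative mixed expression matrix.
    Each gene is scaled to unit row sum, so genes live on a scatter simplex
    whose vertices correspond to tissue-specific markers. A successive
    projection heuristic (a stand-in for CAM's own vertex search) picks the
    most extreme genes as candidate markers, one vertex per tissue.
    """
    R = X / np.maximum(X.sum(axis=1, keepdims=True), 1e-12)  # simplex projection
    residual = R.copy()
    vertices = []
    for _ in range(n_tissues):
        idx = int(np.argmax(np.linalg.norm(residual, axis=1)))  # most extreme gene
        vertices.append(idx)
        v = residual[idx] / (np.linalg.norm(residual[idx]) + 1e-12)
        residual = residual - np.outer(residual @ v, v)          # project out vertex
    return vertices  # row indices of candidate tissue-specific marker genes
```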
9

Feasibility of Passive Acoustic Detection of Coronary Artery Disease Utilizing Source Separation

Cooper, Daniel Boyd 19 January 2011 (has links)
Coronary artery disease (CAD) remains the leading cause of death in both the United States and the world at large. This is primarily due to the extreme difficulty associated with preemptive diagnosis of CAD. Currently, only about 20% of all patients are diagnosed with CAD prior to the occurrence of a heart attack. This is the result of limitations in current techniques, which are either invasive, extremely expensive, or poorly correlated with the actual disease state of the patient. Phonoangiography is an alternative approach to the diagnosis of CAD that relies upon detection of the sound generated by turbulent flow downstream from occlusions. Although the technique is commonly used for the carotid arteries, in the case of the coronary arteries it is hampered by signal-to-noise problems as well as uncertainty regarding the spectral characteristics associated with CAD. To date, these signal processing difficulties have prevented the clinical use of the technique. This research introduces an alternative approach to the processing of phonoangiographic data based upon knowledge of the acoustic transfer within the chest. The validity of the proposed approach was examined using transfer functions calculated for 14 physiologically relevant locations within the chest using a 2-D Finite Element Model (FEM) generated from physiologic data. These transfer functions were then used to demonstrate the technique on test cases generated with the FEM. Finally, the vulnerability of the technique to noise was quantified by calculating matrix condition numbers for the chest acoustic transfer at each frequency. These results show that, while the technique is in general susceptible to noise, noise tolerance is greatly improved within the frequency range most likely to correspond to an occlusion. Taken together, these results suggest that the proposed technique has the potential to make phonoangiography viable as a screening technique for CAD. Such a technique would greatly reduce the cost of CAD, measured both in financial terms and in lives. / Master of Science
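Numerically, the noise-sensitivity analysis described above reduces to computing a condition number per frequency. A minimal sketch, assuming the FEM-derived transfer functions have been collected into one matrix per frequency (the array layout is an assumption for illustration, not the thesis's data format):

```python
import numpy as np

def noise_sensitivity_by_frequency(H):
    """2-norm condition number of the acoustic transfer matrix at each frequency.

    H : (freqs, sensors, sources) array of transfer matrices, e.g. extracted
        from a finite element model of the chest. Inversion of the acoustic
        transfer is most noise-tolerant at frequencies where the condition
        number is small.
    """
    return np.array([np.linalg.cond(H_f) for H_f in H])
```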
10

Fetal ECG Extraction Using Nonlinear Noise Reduction and Blind Source Separation

Yuki, Shingo 08 1900 (has links)
The fetal electrocardiogram contains information regarding the health of the fetus. Currently, the fetal ECG is recorded directly from the scalp of the baby during labour. However, it has been shown that the fetal ECG can also be measured using surface electrodes attached to a pregnant mother's abdomen. The advantage of this method lies in the fact that the fetal ECG can be measured noninvasively before the onset of labour. The difficulty lies in isolating the fetal ECG from the extraneous signals that are recorded simultaneously with it. Several signal processing methodologies have been put forth to extract the fetal ECG component from a mixture of signals. Two recent techniques are considered here: one is a scheme that has previously been used for nonlinear noise reduction of deterministically chaotic signals, and the other uses a blind source separation technique called independent component analysis. In this thesis, we describe the significance of the fetal electrocardiogram as a diagnostic tool in medicine, give a brief overview of the theory behind the nonlinear noise reduction technique and blind source separation, and present results from processing synthetic and real data using both techniques. We find that although the noise reduction technique performs adequately, the blind source separation process performs faster and more robustly on similar data. The two techniques can be used in tandem to arrive at an approximate fetal ECG signal, which can be further analyzed by calculating, for example, the fetal heart rate. / Thesis / Master of Engineering (ME)
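As a rough illustration of the blind source separation step, the sketch below applies FastICA to multichannel abdominal recordings. It is a generic stand-in under the assumption of a (samples x channels) recording array; the thesis's own ICA variant and preprocessing chain may differ.

```python
from sklearn.decomposition import FastICA

def separate_abdominal_recordings(recordings, n_components=4):
    """Estimate independent components from abdominal surface-electrode signals.

    recordings : (samples, channels) array containing a mixture of maternal
                 ECG, fetal ECG, and noise. FastICA is used here as a generic
                 independent component analysis routine.
    """
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(recordings)   # (samples, n_components)
    return sources

# The fetal component is then typically identified by its faster, lower-amplitude
# QRS complexes, after which the fetal heart rate can be estimated from its R-peaks.
```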
