61

Music Visualization Using Source Separated Stereophonic Music

Chookaszian, Hannah Eileen 01 June 2022 (has links) (PDF)
This thesis introduces a music visualization system for stereophonic source-separated music. Music visualization systems are a popular way to represent information from audio signals through computer graphics, and visualization can help people better understand music and its complex, interacting elements. The system extracts pitch, panning, and loudness features from source-separated audio files to create the visualization. Most state-of-the-art visualization systems build their visual representation of the music either from the fully mixed final recording, where all of the instruments and vocals are combined into one file, or from digital audio workstation (DAW) data containing multiple independent recordings of the individual audio sources. Because the original source recordings are not always available to the public, music source separation (MSS) can be used to obtain estimated versions of the audio source files. The thesis surveys approaches to both MSS and music visualization, and introduces a new visualization system designed specifically for source-separated music.
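The panning and loudness parts of the feature set named above can be sketched briefly. This is an illustrative example, not code from the thesis; the frame length and the energy-ratio definition of panning are assumptions:

```python
import numpy as np

def pan_and_loudness(stereo, frame_len=2048, hop=1024, eps=1e-12):
    """Per-frame panning index in [-1, 1] and RMS loudness for a stereo source.

    stereo: array of shape (n_samples, 2). Panning is taken as the energy
    ratio (R - L) / (L + R); -1 is hard left, +1 is hard right.
    """
    pans, louds = [], []
    for start in range(0, stereo.shape[0] - frame_len + 1, hop):
        frame = stereo[start:start + frame_len]
        el = float(np.sum(frame[:, 0] ** 2))   # left-channel frame energy
        er = float(np.sum(frame[:, 1] ** 2))   # right-channel frame energy
        pans.append((er - el) / (el + er + eps))
        louds.append(np.sqrt((el + er) / (2 * frame_len)))
    return np.array(pans), np.array(louds)

# A source panned fully right: silent left channel, tone on the right.
t = np.arange(8192) / 44100.0
right_only = np.stack([np.zeros_like(t), np.sin(2 * np.pi * 440 * t)], axis=1)
pan, loud = pan_and_loudness(right_only)
```

On a separated stem like this, the panning index stays pinned near +1 while loudness tracks the tone's level, which is the kind of per-source trajectory a visualizer can map to screen position and size.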
62

En fallstudie om hushållens källsortering i Augustenborg (A Case Study on Waste Management in Augustenborg)

Altundal, Sadiye, Gullberg, Meloujane January 2007 (has links)
Sustainable development is a vision the whole world strives to achieve, and household waste management is one way to reach it. Proper knowledge, information, and coordination between different sectors and organizations are essential to attaining sustainable development. This case study maps part of the Augustenborg district in Malmö to examine how its household waste management is progressing, and also aims to identify the possibilities for, and obstacles to, increased waste sorting (källsortering) in the area. Through a survey and personal interviews, we gathered information on the factors that affect waste management in Augustenborg. The results show that various factors affect the waste management system: not only formal factors such as rules and regulations, but also factors such as people's attitudes and behaviour.
63

Learning Statistical and Geometric Models from Microarray Gene Expression Data

Zhu, Yitan 01 October 2009 (has links)
In this dissertation, we propose and develop innovative data modeling and analysis methods for extracting meaningful and specific information about disease mechanisms from microarray gene expression data. To provide a high-level overview of gene expression data for easy and insightful understanding of data structure, we propose a novel statistical data clustering and visualization algorithm that is comprehensively effective for multiple clustering tasks and that overcomes some major limitations of existing clustering methods. The proposed clustering and visualization algorithm performs progressive, divisive hierarchical clustering and visualization, supported by hierarchical statistical modeling, supervised/unsupervised informative gene/feature selection, supervised/unsupervised data visualization, and user/prior knowledge guidance through human-data interactions, to discover cluster structure within complex, high-dimensional gene expression data. For the purpose of selecting suitable clustering algorithm(s) for gene expression data analysis, we design an objective and reliable clustering evaluation scheme to assess the performance of clustering algorithms by comparing their sample clustering outcome to phenotype categories. Using the proposed evaluation scheme, we compared the performance of our newly developed clustering algorithm with those of several benchmark clustering methods, and demonstrated the superior and stable performance of the proposed clustering algorithm. To identify the underlying active biological processes that jointly form the observed biological event, we propose a latent linear mixture model that quantitatively describes how the observed gene expressions are generated by a process of mixing the latent active biological processes. We prove a series of theorems to show the identifiability of the noise-free model. 
Based on relevant geometric concepts, convex analysis and optimization, gene clustering, and model stability analysis, we develop a robust blind source separation method that fits the model to the gene expression data and subsequently identifies the underlying biological processes and their activity levels under different biological conditions. Based on the experimental results obtained on cancer, muscle regeneration, and muscular dystrophy gene expression data, we believe that the research presented in this dissertation not only contributes to the engineering research areas of machine learning and pattern recognition, but also provides novel and effective solutions to many biomedical research problems, improving the understanding of disease mechanisms. / Ph. D.
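The clustering evaluation scheme described above compares a sample clustering to phenotype categories. The dissertation's exact agreement measure is not given here; the adjusted Rand index is one standard choice, sketched below with hypothetical labels:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

# Hypothetical phenotype labels for 8 samples and two candidate clusterings.
phenotype = np.array([0, 0, 0, 0, 1, 1, 1, 1])
clustering_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # matches phenotypes (relabeled)
clustering_b = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # unrelated to phenotypes

# ARI is invariant to label renaming, so clustering_a scores perfectly.
score_a = adjusted_rand_score(phenotype, clustering_a)
score_b = adjusted_rand_score(phenotype, clustering_b)
```

A clustering algorithm whose output scores consistently higher against the phenotype categories would be preferred under this kind of scheme.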
64

Computational Dissection of Composite Molecular Signatures and Transcriptional Modules

Gong, Ting 22 January 2010 (has links)
This dissertation aims to develop a latent variable modeling framework with which to analyze gene expression profiling data for computational dissection of molecular signatures and transcriptional modules. The first part of the dissertation is focused on extracting pure gene expression signals from tissue or cell mixtures. The main goal of gene expression profiling is to identify the pure signatures of different cell types (such as cancer cells, stromal cells and inflammatory cells) and estimate the concentration of each cell type. In order to accomplish this, a new blind source separation method is developed, namely, nonnegative partially independent component analysis (nPICA), for tissue heterogeneity correction (THC). The THC problem is formulated as a constrained optimization problem and solved with a learning algorithm based on geometrical and statistical principles. The second part of the dissertation sought to identify gene modules from gene expression data to uncover important biological processes in different types of cells. A new gene clustering approach, nonnegative independent component analysis (nICA), is developed for gene module identification. The nICA approach is completed with an information-theoretic procedure for input sample selection and a novel stability analysis approach for proper dimension estimation. Experimental results showed that the gene modules identified by the nICA approach appear to be significantly enriched in functional annotations in terms of gene ontology (GO) categories. The third part of the dissertation moves from gene module level down to DNA sequence level to identify gene regulatory programs by integrating gene expression data and protein-DNA binding data. A sparse hidden component model is first developed for this problem, taking into account a well-known biological principle, i.e., a gene is most likely regulated by a few regulators. 
This is followed by the development of a novel computational approach, motif-guided sparse decomposition (mSD), in order to integrate the binding information and gene expression data. These computational approaches are primarily developed for analyzing high-throughput gene expression profiling data. Nevertheless, the proposed methods should be able to be extended to analyze other types of high-throughput data for biomedical research. / Ph. D.
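Several of the methods above (nPICA, nICA, mSD) build on nonnegative factorization machinery. As a generic, hedged sketch of that machinery (not the dissertation's algorithms), plain Lee-Seung multiplicative updates for Frobenius-norm NMF look like this:

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F^2,
    keeping both factors elementwise nonnegative throughout."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A rank-2 nonnegative matrix is recovered almost exactly.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the gene expression setting, the columns of W would play the role of module signatures and the rows of H their activity across samples; the dissertation's nPICA/nICA variants add independence and partial-independence constraints on top of nonnegativity.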
65

Application of sound source separation methods to advanced spatial audio systems

Cobos Serrano, Máximo 03 December 2010 (has links)
This thesis is related to the field of Sound Source Separation (SSS). It addresses the development and evaluation of these techniques for their application in the resynthesis of high-realism sound scenes by means of Wave Field Synthesis (WFS). Because the vast majority of audio recordings are preserved in two-channel stereo format, special up-converters are required to use advanced spatial audio reproduction formats such as WFS. This is because WFS needs the original source signals to be available in order to accurately synthesize the acoustic field inside an extended listening area; thus, object-based mixing is required. Source separation problems in digital signal processing are those in which several signals have been mixed together and the objective is to find out what the original signals were. SSS algorithms can therefore be applied to existing two-channel mixtures to extract the different objects that compose the stereo scene. Unfortunately, most stereo mixtures are underdetermined, i.e., there are more sound sources than audio channels. This condition makes the SSS problem especially difficult, and stronger assumptions have to be made, often related to the sparsity of the sources under some signal transformation. This thesis is focused on the application of SSS techniques to the spatial sound reproduction field, and its contributions fall within these two areas. First, two underdetermined SSS methods are proposed to deal efficiently with the separation of stereo sound mixtures. These techniques are based on a multi-level thresholding segmentation approach, which enables fast, unsupervised separation of sound sources in the time-frequency domain. 
Although both techniques rely on the same clustering type, the features considered by each are related to different localization cues that enable separation of either instantaneous or real mixtures. Additionally, two post-processing techniques aimed at improving the isolation of the separated sources are proposed. The performance achieved by several SSS methods in the resynthesis of WFS sound scenes is then evaluated by means of listening tests, paying special attention to the change observed in the perceived spatial attributes. Although the estimated sources are distorted versions of the original ones, the masking effects involved in their spatial remixing make the artifacts less perceptible, which improves the overall assessed quality. Finally, some novel developments related to the application of time-frequency processing to source localization and enhanced sound reproduction are presented. / Cobos Serrano, M. (2009). Application of sound source separation methods to advanced spatial audio systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8969
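The sparsity-and-localization-cue idea behind the stereo separation methods above can be illustrated with a much simpler scheme than the thesis proposes: binary masking in the STFT domain driven by an amplitude-ratio panning cue, on a synthetic instantaneous two-source mixture. All signal parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 300 * t)     # source panned mostly left
s2 = np.sin(2 * np.pi * 1200 * t)    # source panned mostly right
left = 0.9 * s1 + 0.1 * s2
right = 0.1 * s1 + 0.9 * s2

_, _, L = stft(left, fs, nperseg=512)
_, _, R = stft(right, fs, nperseg=512)

# Panning cue per time-frequency bin: the right channel's share of magnitude.
cue = np.abs(R) / (np.abs(L) + np.abs(R) + 1e-12)
mask_left = cue < 0.5                # bins dominated by the left-panned source

# Keep only the masked bins of the left channel and resynthesize.
_, est1 = istft(np.where(mask_left, L, 0), fs, nperseg=512)
n = min(est1.size, s1.size)
corr_match = abs(np.corrcoef(est1[:n], s1[:n])[0, 1])
corr_cross = abs(np.corrcoef(est1[:n], s2[:n])[0, 1])
```

Because the two sources are sparse and rarely overlap in the time-frequency plane, a hard threshold on the panning cue already isolates each one; the thesis's multi-level thresholding generalizes this single fixed threshold to an unsupervised, multi-source segmentation.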
66

Independent Component Analysis Enhancements for Source Separation in Immersive Audio Environments

Zhao, Yue 01 January 2013 (has links)
In immersive audio environments with distributed microphones, Independent Component Analysis (ICA) can be applied to uncover signals from a mixture of other signals and noise, such as in a cocktail-party recording. ICA algorithms have been developed for instantaneous source mixtures and for convolutive source mixtures. While ICA for instantaneous mixtures works when no delays exist between the signals in each mixture, distributed microphone recordings typically result in varying delays of the signals over the recorded channels. Convolutive ICA algorithms can account for delays; however, they require many parameters to be set and often have stability issues. This thesis introduces Channel Aligned FastICA (CAICA), which requires knowledge of the source distance to each microphone but does not require knowledge of noise sources. Furthermore, CAICA is combined with Time Frequency Masking (TFM), yielding even better extraction of the signal of interest (SOI), even in low-SNR environments. Simulations were conducted as ranking experiments that tested the performance of three algorithms: Weighted Beamforming (WB), CAICA, and CAICA with TFM. The Closest Microphone (CM) recording is used as a reference for all three. Statistical analyses of the results demonstrated superior performance for CAICA with TFM. The algorithms were also applied to experimental recordings, which supported the conclusions of the simulations. These techniques can be deployed on mobile platforms, used in surveillance for capturing human speech, and potentially adapted to biomedical fields.
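The core CAICA idea, aligning channels using the known source-to-microphone distances before applying instantaneous ICA, can be sketched as follows. This is not the thesis implementation; the geometry, the mixing matrix, and the use of scikit-learn's FastICA are all assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 16000
c = 343.0                                   # speed of sound (m/s)
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, fs))               # two super-Gaussian sources, 1 s each

# Hypothetical geometry: distances from the source region to each microphone.
distances = np.array([1.0, 3.0])
delays = np.round(distances / c * fs).astype(int)   # propagation delays, samples

# Each microphone records a delayed instantaneous mixture.
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
x = np.zeros((2, fs))
for m in range(2):
    d = delays[m]
    x[m, d:] = A[m, 0] * s[0, :fs - d] + A[m, 1] * s[1, :fs - d]

# Channel alignment: advance each channel by its known delay, trim to a
# common length, then run plain instantaneous FastICA on the aligned channels.
n = fs - delays.max()
aligned = np.stack([x[m, delays[m]:delays[m] + n] for m in range(2)])
est = FastICA(n_components=2, random_state=0).fit_transform(aligned.T).T

# Match estimates to the true sources by absolute correlation.
corr = np.abs(np.corrcoef(np.vstack([est, s[:, :n]]))[:2, 2:])
best_match = corr.max(axis=1)
```

Once the channels are delay-compensated, the mixture is effectively instantaneous, so the simpler and more stable instantaneous ICA applies without the parameter burden of a fully convolutive algorithm.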
67

Sensitivity analysis of blind separation of speech mixtures

Unknown Date (has links)
Blind source separation (BSS) refers to a class of methods by which multiple sensor signals are combined with the aim of estimating the original source signals. Independent component analysis (ICA) is one such method that effectively resolves static linear combinations of independent non-Gaussian distributions. We propose a method that can track variations in the mixing system by seeking a compromise between adaptive and block methods through the use of mini-batches. The resulting permutation indeterminacy is resolved based on the correlation continuity principle. Methods employing higher-order cumulants in the separation criterion are susceptible to outliers in the finite-sample case. We propose a robust method based on low-order non-integer moments that exploits the Laplacian model of speech signals. We study separation methods for even- or over-determined linear convolutive mixtures in the frequency domain based on joint diagonalization of matrices employing time-varying second-order statistics. We investigate the factors affecting the sensitivity of the solution in the finite-sample case, such as the set size, the overlap amount, and the cross-spectrum estimation method. / by Savaskan Bulek. / Thesis (Ph.D.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
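The correlation continuity principle mentioned above, which resolves the permutation (and sign) indeterminacy between consecutive mini-batch estimates, can be sketched as a greedy matching step. The batch contents below are synthetic:

```python
import numpy as np

def align_to_previous(prev, curr):
    """Greedily reorder and sign-flip rows of `curr` to match rows of `prev`
    by absolute correlation -- the correlation-continuity principle."""
    n = prev.shape[0]
    corr = np.corrcoef(np.vstack([prev, curr]))[:n, n:]
    aligned = np.empty_like(curr)
    used = set()
    for i in range(n):
        j = max((k for k in range(n) if k not in used),
                key=lambda k: abs(corr[i, k]))
        used.add(j)
        aligned[i] = np.sign(corr[i, j]) * curr[j]
    return aligned

# Two mini-batches whose separator outputs come back permuted and sign-flipped.
rng = np.random.default_rng(0)
batch1 = rng.laplace(size=(2, 500))
batch2 = np.vstack([-batch1[1], batch1[0]])   # rows swapped, one sign flipped
fixed = align_to_previous(batch1, batch2)
```

In a real tracker the comparison would use the overlapping samples of adjacent mini-batches rather than identical content, but the matching logic is the same.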
68

Source separation and analysis of piano music signals. / CUHK electronic theses & dissertations collection

January 2010 (has links)
We propose a Bayesian monaural source separation system to extract each individual tone from mixture signals of piano music performance. Specifically, tone extraction is facilitated by model-based inference. Two signal models based on summation of sinusoidal waves were employed to represent piano tones. The first is the traditional General Model, a variant of sinusoidal modeling that represents a tone with high modeling quality but often fails for mixtures of tones. The second is an instrument-specific model tailored to the piano sound; its modeling quality is not as high as the General Model's, but its structure makes source separation easier. To exploit the benefits of both the traditional General Model and our proposed Piano Model, we used a hierarchical Bayesian framework to combine the two models in the source separation process. These procedures allowed us to recover suitable parameters (frequencies, amplitudes, phases, intensities and fine-tuned onsets) for thorough analyses and characterizations of musical nuances. Isolated tones from a target recording were used to train the Piano Model, and the timing and pitch of the individual notes in the target recording were supplied to our proposed system for different experiments. Our results show that the proposed system gives robust and accurate separation of signal mixtures, with a separation quality significantly better than those reported in previous works. / What makes a good piano performance? An expressive piano performance owes its emotive power to the performer's skills in shaping the music with nuances. For the purpose of performance analysis, nuance can be defined as any subtle manipulation of sound parameters, including attack, timing, pitch, loudness and timbre.
A major obstacle to a systematic computational analysis of musical nuances is that it is often difficult to uncover relevant sound parameters from the complex audio signal of a piano music performance. A piano piece invariably involves simultaneous striking of multiple keys, and it is not obvious how one may extract the parameters of individual keys from the combined mixed signal. This problem of parameter extraction can be formulated as a source separation problem. Our research goal is to extract individual tones (frequencies, amplitudes and phases) from a mixture of piano tones. / Szeto, Wai Man. / Adviser: Wong Kim Hong. / Source: Dissertation Abstracts International, Volume: 73-03, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 120-128). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
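Both signal models above represent a piano tone as a sum of sinusoids. A minimal synthesis sketch follows, with illustrative (not thesis-derived) amplitude roll-off, decay rates, and inharmonicity coefficient:

```python
import numpy as np

def piano_like_tone(f0=261.63, n_partials=8, dur=1.0, fs=44100, B=1e-4):
    """Sum-of-sinusoids tone: partial k sits near k*f0, stretched slightly by
    the inharmonicity coefficient B, with higher partials decaying faster."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        fk = k * f0 * np.sqrt(1.0 + B * k ** 2)   # stretched partial frequency
        ak = 1.0 / k                              # simple amplitude roll-off
        tone += ak * np.exp(-3.0 * k * t / dur) * np.sin(2 * np.pi * fk * t)
    return tone / np.max(np.abs(tone))

tone = piano_like_tone()                 # middle C, one second
spec = np.abs(np.fft.rfft(tone))
peak_hz = int(np.argmax(spec))           # bin spacing is fs/len(tone) = 1 Hz here
```

Source separation in this framework amounts to inferring, for each key struck, the per-partial frequencies, amplitudes, and phases of such a model from the mixed signal.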
69

Classification, feature extraction and prediction of side effects in prostate cancer radiotherapy / Classification, extraction de données et prédiction de la toxicité rectale en radiothérapie du cancer de la prostate

Fargeas, Aureline 29 June 2016 (has links)
Prostate cancer is among the most common types of cancer worldwide. One of the standard treatments is external radiotherapy, which involves delivering ionizing radiation to a clinical target, in this instance the prostate and seminal vesicles. The goal of radiotherapy is to achieve maximal local control while sparing neighboring organs (mainly the rectum and the bladder) to avoid normal tissue complications. Understanding the dose/toxicity relationship is a central question for improving treatment reliability at the inverse planning step. Normal tissue complication probability (NTCP) models have been developed in order to predict toxicity events from dosimetric data. The main information considered is the dose-volume histogram (DVH), which provides an overall representation of the dose distribution in terms of the dose delivered per percentage of organ volume. Nevertheless, current dose-based models have limitations, as they are not fully optimized: most of them do not include non-dosimetric information (patient, tumor, and treatment characteristics), and they provide no understanding of the local relationship between dose and effect (the dose-space/effect relationship) because they do not exploit the rich information in the 3D planning dose distributions. In the context of rectal bleeding prediction after prostate cancer external beam radiotherapy, the objectives of this thesis are: i) to extract relevant information from the DVH and from non-dosimetric variables in order to improve existing NTCP models, and ii) to analyze the spatial correlations between local dose and side effects, allowing a characterization of the 3D dose distribution at a sub-organ level. Strategies aimed at exploiting the information from radiotherapy planning (DVH and 3D planned dose distributions) were therefore proposed. First, based on independent component analysis, a new model for rectal bleeding prediction was proposed that combines dosimetric and non-dosimetric information in an original manner. 
Second, we developed new approaches that jointly exploit the 3D planning dose distributions to unravel the subtle correlation between local dose and side effects, in order to classify and/or predict patients at risk of rectal bleeding and to identify the regions that may be at the origin of this adverse event. More precisely, we proposed three stochastic methods based on principal component analysis, independent component analysis, and discriminant nonnegative matrix factorization, and one deterministic method based on the canonical polyadic decomposition of fourth-order arrays containing the planned dose. The results show that our new approaches generally perform better than state-of-the-art predictive methods.
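The dose-volume histogram central to the NTCP models above is straightforward to illustrate: a cumulative DVH reports the percentage of organ volume receiving at least each dose level. The dose values below are synthetic:

```python
import numpy as np

def cumulative_dvh(dose, bin_width=0.5):
    """Cumulative DVH: percent of organ volume receiving >= each dose level.

    `dose` is a flat or 3D array of per-voxel doses (Gy) within one organ.
    """
    d = np.ravel(dose)
    edges = np.arange(0.0, d.max() + bin_width, bin_width)
    volume_pct = np.array([100.0 * np.mean(d >= e) for e in edges])
    return edges, volume_pct

# Synthetic rectum dose: most voxels at low dose, a hot region near the target.
rng = np.random.default_rng(0)
dose = np.concatenate([rng.uniform(5, 30, 900), rng.uniform(60, 75, 100)])
edges, vol = cumulative_dvh(dose)
```

Summary points such as V60 (here, the 10% of voxels at or above 60 Gy) are read directly off this curve, which is exactly the kind of aggregate that discards the spatial information the thesis's 3D methods aim to recover.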
70

Blind source separation of single-sensor recordings : Application to ground reaction force signals / Séparation Aveugle de Sources des Signaux Monocanaux : Application aux Signaux de Force de Réaction de Terre

El halabi, Ramzi 19 October 2018 (has links)
Multichannel signals are captured through several channels or sensors, each carrying a mixture of sources, some known and the rest unknown. The methods by which sources are isolated or separated are known as source separation methods in general and, when little is known a priori, as blind source separation (BSS). BSS applied to multichannel signals is mathematically easier than BSS applied to single-channel signals, where a single sensor exists and all signals arrive at one point to produce a mixture of unknown sources; the latter is the domain of this thesis. We developed a new BSS technique, combining several separation and optimization methods, based on non-negative matrix factorization (NMF). The method could be used in many domains, such as sound and speech analysis, stock market variations, and seismography. Here, however, the single-channel vertical ground reaction force (VGRF) signals of a group of ultra-marathon runners are analyzed and separated to extract the passive (impact) peak from the active peak, in a new way adapted to the nature of these signals. VGRF signals are cyclostationary signals characterized by double peaks, each very fast and sparse, reflecting the phases of the athlete's stride. Analyzing these peaks is extremely important for determining and predicting the runner's condition: physiological problems, anatomical problems, fatigue, and so on. Moreover, many researchers have shown that when the rearfoot strikes the ground violently, analysis of this phenomenon may lead to prediction of internal injury. 
Some even advocate a running technique, non-heel-strike (NHS) running, in which runners are made to run on the forefoot only. To study this phenomenon, separating the impact peak from the VGRF isolates the source carrying the patho-physiological information and the degree of fatigue. We introduced new pre-processing and processing methods for VGRF signals to replace the traditional noise filtering used elsewhere, which can destroy the very impact peaks that are the sources to be separated; the filtering is based on the spectral subtraction concept used with speech signals, applied after an intelligent, adaptive sampling algorithm that decomposes the signals into isolated steps. The VGRF signals were analyzed over time to detect and quantify the runners' fatigue during the 24-hour race. This analysis was carried out in the frequency/spectral domain, where we detected a clear shift of the frequency content as the race progressed, indicating the progression of fatigue. We defined cyclosparse signals in the time domain, then translated this definition into its time-frequency equivalent using the short-time Fourier transform (STFT). This representation was decomposed by a new method we call Cyclosparse Non-negative Matrix Factorization (Cyclosparse-NMF), based on minimizing the Kullback-Leibler (KL) divergence with penalties tied to the periodicity and sparsity of the sources, with the final goal of extracting the cyclosparse sources from the single-channel mixture, applied here to single-channel VGRF signals. The method was tested on synthetic signals to establish the effectiveness of the algorithm. The results were satisfactory, and the impact peak was separated from the single-channel VGRF mixture. 
/ The purpose of the presented work is to develop a customized single-channel blind source separation technique that separates cyclostationary, transient, pulse-like patterns/sources from a linear instantaneous mixture of unknown sources. To that end, synthetic signals with the mentioned characteristics were created to confirm separation success, in addition to real-life signals acquired in an experiment in which experienced athletes ran a 24-hour ultra-marathon in a lab environment on an instrumented treadmill. Their VGRF, which carries a cyclosparse impact peak, was recorded continuously, with very short interruptions during which blood was drawn for in-run testing, short enough not to provide rest to the athletes. The synthetic and VGRF signals were then pre-processed, processed for impact-pattern extraction via a customized single-channel blind source separation technique that we termed Cyclosparse Non-negative Matrix Factorization, and analyzed for fatigue assessment. As a result, the impact patterns of all participating athletes were extracted at 10 different time intervals marking the progression of the 24-hour ultra-marathon, and further analysis and comparison of the resulting signals proved highly significant for fatigue assessment: the impact-pattern power increased monotonically for 90% of the subjects, by an average of 24.4 ± 15%, over the 24-hour period. The separation results suggest that fatigue progression is manifested by an increased reliance on heel-strike impact to support the body weight, compensating for the decrease in muscle power during propulsion at toe-off. 
This study, among other presented work in the field of VGRF processing, develops methods that could be implemented in wearable devices to assess and track runners' gait as part of sports performance analysis, rehabilitation-phase tracking, and classification of healthy vs. unhealthy gait.
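Cyclosparse-NMF minimizes a KL divergence with periodicity and sparsity penalties. Below is a hedged sketch of just the KL-NMF core with an L1 sparsity penalty on the activations; the periodicity penalty is omitted and all parameters are illustrative:

```python
import numpy as np

def kl_nmf_sparse(V, rank, lam=0.01, n_iter=300, seed=0, eps=1e-9):
    """KL-divergence NMF with an L1 penalty (weight `lam`) on activations H.

    Standard multiplicative updates; the periodicity penalty of the full
    Cyclosparse-NMF method is not included in this sketch.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + lam + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
    return W, H

# Magnitude-spectrogram-like data built from two nonnegative parts.
rng = np.random.default_rng(1)
V = rng.random((40, 2)) @ rng.random((2, 60)) + 0.01
W, H = kl_nmf_sparse(V, rank=2)
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Applied to a VGRF magnitude spectrogram, the sparsity term pushes each activation row toward brief, isolated bursts, which is the behavior expected of an impact-peak component.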
