  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Generative rhythmic models

Rae, Alexander 08 April 2009 (has links)
A system for generative rhythmic modeling is presented. The work aims to explore computational models of creativity, realizing them in a system designed for realtime generation of semi-improvisational music. This is envisioned as an attempt to develop musical intelligence in the context of structured improvisation, and by doing so to enable and encourage new forms of musical control and performance; the systems described in this work, already capable of realtime creation, have been designed with the explicit intention of embedding them in a variety of performance-based systems. A model of qaida, a solo tabla form, is presented, along with the results of an online survey comparing it to a professional tabla player's recording on dimensions of musicality, creativity, and novelty. The qaida model generates a bank of rhythmic variations by reordering subphrases. Selections from this bank are sequenced using a feature-based approach. An experimental extension into modeling layer- and loop-based forms of electronic music is presented, in which the initial modeling approach is generalized. Starting from a seed track, the layer-based model utilizes audio analysis techniques such as blind source separation and onset-based segmentation to generate layers which are shuffled and recombined to generate novel music in a manner analogous to the qaida model.
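To make the reorder-and-select idea above concrete, here is a minimal illustrative sketch in Python. The subphrase representation, the toy "novelty" feature, and all names are assumptions made for illustration only, not the system described in the thesis.

```python
import random

# Toy qaida theme: a list of subphrases, each a list of (stroke, duration) pairs.
# Strokes and durations are illustrative placeholders.
theme = [
    [("dha", 1), ("ti", 1)],
    [("dha", 1), ("ge", 1)],
    [("na", 1), ("dha", 1)],
    [("ti", 1), ("na", 1)],
]

def novelty(variation, reference):
    """Order-sensitive toy feature: fraction of subphrase slots that differ from the reference."""
    return sum(a != b for a, b in zip(variation, reference)) / len(reference)

def variation_bank(reference, size=20, seed=0):
    """Generate a bank of rhythmic variations by reordering subphrases."""
    rng = random.Random(seed)
    bank = []
    for _ in range(size):
        var = list(reference)
        rng.shuffle(var)          # reorder subphrases to form a variation
        bank.append(var)
    return bank

def sequence_by_feature(bank, reference, targets):
    """Select, for each step, the variation whose feature value is closest to the target."""
    return [min(bank, key=lambda v: abs(novelty(v, reference) - t)) for t in targets]

bank = variation_bank(theme)
piece = sequence_by_feature(bank, theme, targets=[0.0, 0.5, 1.0])  # drift gradually away from the theme
for variation in piece:
    print([stroke for sub in variation for stroke, _ in sub])
```

A performance system would of course sequence on musically meaningful features (density, syncopation, similarity to the theme) rather than this toy novelty measure.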
32

Optimal Transport Dictionary Learning and Non-negative Matrix Factorization / 最適輸送辞書学習と非負値行列因子分解

Rolet, Antoine 23 March 2021 (has links)
Kyoto University / New-system doctorate by coursework / Doctor of Informatics / 甲第23314号 / 情博第750号 / 新制||情||128 (University Library) / Kyoto University Graduate School of Informatics, Department of Intelligence Science and Technology / (Chief examiner) Professor 山本 章博, Professor 鹿島 久嗣, Professor 河原 達也 / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
33

Compromising Random Linear Network Coding as a Cipher

Bethu, Sravya 15 June 2022 (has links)
No description available.
34

Learning Statistical and Geometric Models from Microarray Gene Expression Data

Zhu, Yitan 01 October 2009 (has links)
In this dissertation, we propose and develop innovative data modeling and analysis methods for extracting meaningful and specific information about disease mechanisms from microarray gene expression data. To provide a high-level overview of gene expression data for easy and insightful understanding of data structure, we propose a novel statistical data clustering and visualization algorithm that is comprehensively effective for multiple clustering tasks and that overcomes some major limitations of existing clustering methods. The proposed clustering and visualization algorithm performs progressive, divisive hierarchical clustering and visualization, supported by hierarchical statistical modeling, supervised/unsupervised informative gene/feature selection, supervised/unsupervised data visualization, and user/prior knowledge guidance through human-data interactions, to discover cluster structure within complex, high-dimensional gene expression data. To select suitable clustering algorithm(s) for gene expression data analysis, we design an objective and reliable clustering evaluation scheme that assesses the performance of clustering algorithms by comparing their sample clustering outcome to phenotype categories. Using the proposed evaluation scheme, we compared the performance of our newly developed clustering algorithm with those of several benchmark clustering methods, and demonstrated the superior and stable performance of the proposed clustering algorithm. To identify the underlying active biological processes that jointly form the observed biological event, we propose a latent linear mixture model that quantitatively describes how the observed gene expressions are generated by a process of mixing the latent active biological processes. We prove a series of theorems to show the identifiability of the noise-free model. Based on relevant geometric concepts, convex analysis and optimization, gene clustering, and model stability analysis, we develop a robust blind source separation method that fits the model to the gene expression data and subsequently identifies the underlying biological processes and their activity levels under different biological conditions. Based on the experimental results obtained on cancer, muscle regeneration, and muscular dystrophy gene expression data, we believe that the research presented in this dissertation not only contributes to the engineering research areas of machine learning and pattern recognition, but also provides novel and effective solutions to many biomedical research problems, improving our understanding of disease mechanisms. / Ph. D.
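As a concrete (if greatly simplified) illustration of the latent linear mixture idea above, where observed expression is modeled as a nonnegative mixing of latent biological processes, the sketch below factorizes a synthetic expression matrix with off-the-shelf NMF. The dissertation's actual method is geometry-based and more robust; plain NMF, the matrix sizes, and the variable names are only stand-in assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Model observed expression X (genes x samples) as a nonnegative mixture
# X ~ S @ A, with S the latent process signatures and A their activity levels.
rng = np.random.default_rng(0)
n_genes, n_samples, n_processes = 500, 12, 3

S_true = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_processes))
A_true = rng.dirichlet(alpha=np.ones(n_processes), size=n_samples).T   # processes x samples
X = S_true @ A_true + 0.05 * rng.random((n_genes, n_samples))          # noisy nonnegative mixture

nmf = NMF(n_components=n_processes, init="nndsvda", max_iter=1000, random_state=0)
S_hat = nmf.fit_transform(X)        # estimated process signatures (genes x processes)
A_hat = nmf.components_             # estimated activity levels (processes x samples)

print("reconstruction error:", round(nmf.reconstruction_err_, 3))
```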
35

Computational Dissection of Composite Molecular Signatures and Transcriptional Modules

Gong, Ting 22 January 2010 (has links)
This dissertation aims to develop a latent variable modeling framework with which to analyze gene expression profiling data for computational dissection of molecular signatures and transcriptional modules. The first part of the dissertation is focused on extracting pure gene expression signals from tissue or cell mixtures. The main goal of gene expression profiling is to identify the pure signatures of different cell types (such as cancer cells, stromal cells and inflammatory cells) and estimate the concentration of each cell type. To accomplish this, a new blind source separation method is developed, namely nonnegative partially independent component analysis (nPICA), for tissue heterogeneity correction (THC). The THC problem is formulated as a constrained optimization problem and solved with a learning algorithm based on geometrical and statistical principles. The second part of the dissertation seeks to identify gene modules from gene expression data to uncover important biological processes in different types of cells. A new gene clustering approach, nonnegative independent component analysis (nICA), is developed for gene module identification. The nICA approach is completed with an information-theoretic procedure for input sample selection and a novel stability analysis approach for proper dimension estimation. Experimental results showed that the gene modules identified by the nICA approach appear to be significantly enriched in functional annotations in terms of gene ontology (GO) categories. The third part of the dissertation moves from the gene module level down to the DNA sequence level to identify gene regulatory programs by integrating gene expression data and protein-DNA binding data. A sparse hidden component model is first developed for this problem, taking into account a well-known biological principle, i.e., that a gene is most likely regulated by only a few regulators. This is followed by the development of a novel computational approach, motif-guided sparse decomposition (mSD), to integrate the binding information and gene expression data. These computational approaches are primarily developed for analyzing high-throughput gene expression profiling data; nevertheless, they should be extensible to other types of high-throughput data for biomedical research. / Ph. D.
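The sparse hidden component idea (each gene regulated by only a few regulators, with binding evidence restricting the candidates) can be illustrated with a simple masked sparse regression. This is a hypothetical stand-in for mSD with invented data, shapes, and names, not the actual algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Each gene's expression across samples is explained by a few regulators,
# and binding (motif) evidence masks which regulators are even considered.
rng = np.random.default_rng(1)
n_samples, n_regulators, n_genes = 20, 8, 100

tf_activity = rng.normal(size=(n_samples, n_regulators))      # regulator activity profiles
binding = rng.random((n_genes, n_regulators)) < 0.3           # motif/binding evidence mask
coeffs_true = binding * rng.normal(size=(n_genes, n_regulators))
expression = coeffs_true @ tf_activity.T + 0.1 * rng.normal(size=(n_genes, n_samples))

coeffs_hat = np.zeros((n_genes, n_regulators))
for g in range(n_genes):
    allowed = np.where(binding[g])[0]
    if allowed.size == 0:
        continue                                              # no binding evidence: leave gene unexplained
    model = Lasso(alpha=0.05).fit(tf_activity[:, allowed], expression[g])
    coeffs_hat[g, allowed] = model.coef_                      # sparse, binding-guided coefficients

print("nonzero coefficients recovered:", int(np.count_nonzero(coeffs_hat)))
```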
36

A Unified Statistical Approach to Fast and Robust Multichannel Speech Separation and Dereverberation / 高速かつ頑健な多チャンネル音声分離・残響除去のための統合的・統計的アプローチ

Sekiguchi, Kouhei 23 March 2021 (has links)
Kyoto University / New-system doctorate by coursework / Doctor of Informatics / 甲第23309号 / 情博第745号 / 新制||情||127 (University Library) / Kyoto University Graduate School of Informatics, Department of Intelligence Science and Technology / (Chief examiner) Associate Professor 吉井 和佳, Professor 河原 達也, Professor 西野 恒, Professor 田中 利幸 / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
37

Sensitivity analysis of blind separation of speech mixtures

Unknown Date (has links)
Blind source separation (BSS) refers to a class of methods by which multiple sensor signals are combined with the aim of estimating the original source signals. Independent component analysis (ICA) is one such method that effectively resolves static linear combinations of independent non-Gaussian distributions. We propose a method that can track variations in the mixing system by seeking a compromise between adaptive and block methods through the use of mini-batches. The resulting permutation indeterminacy is resolved based on the correlation continuity principle. Methods employing higher-order cumulants in the separation criterion are susceptible to outliers in the finite-sample case. We propose a robust method based on low-order non-integer moments by exploiting the Laplacian model of speech signals. We study separation methods for even- or over-determined linear convolutive mixtures in the frequency domain based on joint diagonalization of matrices employing time-varying second-order statistics. We investigate the factors affecting the sensitivity of the solution in the finite-sample case, such as the set size, overlap amount and cross-spectrum estimation methods. / by Savaskan Bulek. / Thesis (Ph.D.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
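A minimal sketch of the mini-batch compromise between adaptive and block methods follows, with the permutation indeterminacy resolved by correlation continuity across consecutive batches. The toy sources, batch size, and use of FastICA are assumptions for illustration; scale and sign ambiguities are left unresolved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.arange(20000) / 8000.0
sources = np.c_[np.sign(np.sin(2 * np.pi * 3 * t)), rng.laplace(size=t.size)]  # toy sources
mixture = sources @ rng.normal(size=(2, 2)).T                                   # static 2x2 mixing

batch, prev, aligned = 4000, None, []
for start in range(0, mixture.shape[0], batch):
    block = mixture[start:start + batch]
    est = FastICA(n_components=2, whiten="unit-variance", random_state=0).fit_transform(block)
    if prev is not None:
        # Correlation continuity: permute components to best match the previous batch.
        corr = np.abs(np.corrcoef(est.T, prev.T)[:2, 2:])
        rows, cols = linear_sum_assignment(-corr)   # maximize total correlation
        order = np.empty(2, dtype=int)
        order[cols] = rows                          # est component rows[k] goes to slot cols[k]
        est = est[:, order]
    aligned.append(est)
    prev = est

separated = np.vstack(aligned)
print(separated.shape)
```

In a tracking scenario the batch size trades off adaptation speed against estimation variance, which is exactly the sensitivity question the abstract raises.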
38

Classification, feature extraction and prediction of side effects in prostate cancer radiotherapy / Classification, extraction de données et prédiction de la toxicité rectale en radiothérapie du cancer de la prostate

Fargeas, Aureline 29 June 2016 (has links)
Prostate cancer is among the most common types of cancer worldwide.
One of the standard treatments is external radiotherapy, which involves delivering ionizing radiation to a clinical target, in this instance the prostate and seminal vesicles. The goal of radiotherapy is to achieve a maximal local control while sparing neighboring organs (mainly the rectum and the bladder) to avoid normal tissue complications. Understanding the dose/toxicity relationships is a central question for improving treatment reliability at the inverse planning step. Normal tissue complication probability (NTCP) toxicity prediction models have been developed in order to predict toxicity events using dosimetric data. The main information considered is the dose-volume histogram (DVH), which provides an overall representation of dose distribution based on the dose delivered per percentage of organ volume. Nevertheless, current dose-based models display limitations as they are not fully optimized; most of them do not include additional non-dosimetric information (patient, tumor and treatment characteristics). Furthermore, they do not provide any understanding of local relationships between dose and effect (dose-space/effect relationship) as they do not exploit the rich information from the 3D planning dose distributions. In the context of rectal bleeding prediction after prostate cancer external beam radiotherapy, the objectives of this thesis are: i) to extract relevant information from DVH and non-dosimetric variables, in order to improve existing NTCP models and ii) to analyze the spatial correlations between local dose and side effects allowing a characterization of 3D dose distribution at a sub-organ level. Thus, strategies aimed at exploiting the information from the radiotherapy planning (DVH and 3D planned dose distributions) were proposed. Firstly, based on independent component analysis, a new model for rectal bleeding prediction was proposed, combining dosimetric and non-dosimetric information in an original manner. Secondly, we have developed new approaches aimed at jointly taking advantage of the 3D planning dose distributions that may unravel the subtle correlation between local dose and side effects to classify and/or predict patients at risk of suffering from rectal bleeding, and identify regions which may be at the origin of this adverse event. More precisely, we proposed three stochastic methods based on principal component analysis, independent component analysis and discriminant nonnegative matrix factorization, and one deterministic method based on canonical polyadic decomposition of a fourth-order array containing the planned dose. The obtained results show that our new approaches generally exhibit better performance than state-of-the-art predictive methods.
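As a rough, hypothetical illustration of the first contribution, combining components extracted from DVH curves with non-dosimetric covariates in a predictive model of rectal bleeding, the sketch below uses FastICA plus logistic regression on synthetic data. The thesis's actual NTCP model and features differ; every variable, size, and threshold here is invented.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_dose_bins = 200, 60

dvh = np.sort(rng.random((n_patients, n_dose_bins)), axis=1)[:, ::-1]   # fake cumulative DVH curves
clinical = rng.normal(size=(n_patients, 3))                              # e.g. age, anticoagulants, diabetes
risk = 2.0 * dvh[:, 40] + 0.8 * clinical[:, 0] + rng.normal(scale=0.5, size=n_patients)
bleeding = (risk > np.median(risk)).astype(int)                          # synthetic toxicity outcome

ica = FastICA(n_components=4, whiten="unit-variance", random_state=0)
dvh_components = ica.fit_transform(dvh)                                  # dosimetric features
features = np.hstack([dvh_components, clinical])                         # plus non-dosimetric features

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, features, bleeding, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC:", round(float(auc), 2))
```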
39

Blind source separation of single-sensor recordings : Application to ground reaction force signals / Séparation Aveugle de Sources des Signaux Monocanaux : Application aux Signaux de Force de Réaction de Terre

El halabi, Ramzi 19 October 2018 (has links)
Multichannel signals are signals captured through several channels or sensors, each carrying a mixture of sources, some of which are known while the rest remain unknown. The methods by which the isolation or separation of the sources is accomplished are known as source separation methods in general and, when the degree of unknowns is large, as blind source separation (BSS). BSS applied to multichannel signals is in fact mathematically easier than BSS applied to single-channel signals, where a single sensor exists and all signals arrive at the same point, producing one mixture of unknown sources; the latter is the domain of this thesis. We developed a new BSS technique, a combination of several separation and optimization methods based on non-negative matrix factorization (NMF). This method could be used in many domains, such as sound and speech analysis, stock-market variations, and seismography. Here, however, the single-channel vertical ground reaction force (VGRF) signals of a group of ultra-marathon runners are analyzed and separated in order to extract the passive peak from the active peak in a new way adapted to the nature of these signals. VGRF signals are cyclostationary signals characterized by double peaks, each very fast and sparse, indicating the athlete's running phases. Peak analysis is extremely important for determining and predicting the runner's condition: physiological problems, anatomical problems, fatigue, and so on. Moreover, many researchers have shown that when the rear foot strikes the ground abruptly, analysis of this impact can lead to the prediction of internal injury; some even advocate a non-heel-strike (NHS) running technique in which runners are required to land on the forefoot only. To study this phenomenon, separating the impact peak of the VGRF isolates the source carrying the patho-physiological information and the degree of fatigue. We introduced new pre-processing and processing methods for VGRF signals to replace the traditional noise filtering used everywhere, which can destroy the impact peaks that are precisely the sources to be separated; the filtering is based on the concept of spectral subtraction, as used for speech signals, applied after a smart, adaptive sampling algorithm that decomposes the signals into isolated steps. A time-dependent analysis of the VGRF signals was performed to detect and quantify runner fatigue over the 24 hours of running; carried out in the frequency/spectral domain, it revealed a clear shift of the frequency content as the race progressed, indicating the progression of fatigue. We defined cyclosparse signals in the time domain, then translated this definition into its time-frequency equivalent using the short-time Fourier transform (STFT). This representation was decomposed with a new method we call Cyclosparse Non-negative Matrix Factorization (Cyclosparse-NMF), based on minimizing the Kullback-Leibler (KL) divergence with penalties tied to the periodicity and sparsity of the sources, with the final goal of extracting the cyclosparse sources from a single-channel mixture, here the single-channel VGRF signals. The method was tested on synthetic signals to verify the effectiveness of the algorithm; the results were satisfactory, and the impact peak was separated from the single-channel VGRF mixture. / The purpose of the presented work is to develop a customized single-channel blind source separation technique that aims to separate cyclostationary and transient pulse-like patterns/sources from a linear instantaneous mixture of unknown sources. For that endeavor, synthetic signals of the mentioned characteristics were created to confirm the separation success, in addition to real-life signals acquired throughout an experiment in which experienced athletes were asked to participate in a 24-hour ultra-marathon in a lab environment on an instrumented treadmill, through which their VGRF, which carries a cyclosparse impact peak, was continuously recorded, with very short discontinuities during which blood was drawn for in-run testing, short enough not to provide rest to the athletes. The synthetic and VGRF signals were then pre-processed, processed for impact pattern extraction via the customized single-channel blind source separation technique that we term Cyclosparse Non-negative Matrix Factorization, and analyzed for fatigue assessment. As a result, the impact patterns of all participating athletes were extracted at 10 different time intervals spanning the 24-hour ultra-marathon, and further analysis and comparison of the resulting signals proved of major significance for fatigue assessment: the impact pattern power increased monotonically for 90% of the subjects, by an average of 24.4 ± 15%, as the ultra-marathon progressed over the 24-hour period. The impact pattern separation results suggest that fatigue progression is manifested by an increased reliance on heel-strike impact to push the body weight, compensating for the decrease in muscle power during propulsion at toe-off. This study, among the other work presented in the field of VGRF processing, yields methods that could be implemented in wearable devices to assess and track runners' gait as part of sports performance analysis, rehabilitation tracking, and classification of healthy vs. unhealthy gait.
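A simplified sketch of the single-channel pipeline described above follows: STFT, a KL-divergence NMF of the magnitude spectrogram, and a soft mask to pull one component out of the mixture. The periodicity and sparsity penalties that define Cyclosparse-NMF are not implemented here; the signals are synthetic and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

fs = 1000
t = np.arange(0, 10, 1 / fs)
active = np.sin(2 * np.pi * 1.5 * t) ** 2                       # slow, smooth "active peak" proxy
impact = (np.mod(t, 0.66) < 0.02).astype(float)                 # brief periodic "impact peak" proxy
vgrf = active + 0.8 * impact                                    # toy single-channel VGRF-like mixture

f, frames, Z = stft(vgrf, fs=fs, nperseg=128)
V = np.abs(Z) + 1e-9                                            # nonnegative magnitude spectrogram

nmf = NMF(n_components=2, beta_loss="kullback-leibler", solver="mu",
          init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(V)                                        # spectral templates
H = nmf.components_                                             # activations over time

# Wiener-style soft mask for one component; which component captures the impact
# peak must be identified by inspection, index 0 is arbitrary here.
mask = np.outer(W[:, 0], H[0]) / (W @ H + 1e-9)
_, component_estimate = istft(mask * Z, fs=fs, nperseg=128)
print(component_estimate.shape)
```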
40

Iterative issues of ICA, quality of separation and number of sources: a study for biosignal applications

Naik, Ganesh Ramachandra, ganesh.naik@rmit.edu.au January 2009 (has links)
This thesis has evaluated the use of Independent Component Analysis (ICA) on surface electromyography (sEMG), focusing on biosignal applications. The research has identified and addressed the following four issues related to the use of ICA for biosignals:
• The iterative nature of ICA
• The order and magnitude ambiguity problems of ICA
• Estimation of the number of sources based on the dependency and independency of the signals
• Source separation for non-quadratic ICA (undercomplete and overcomplete)
This research first establishes the applicability of ICA for sEMG and identifies the shortcomings related to order and magnitude ambiguity. It then develops a mitigation strategy for these issues using a single unmixing matrix and a neural network weight matrix corresponding to the specific user. The research reports experimental verification of the technique and an investigation of the impact of inter-subject and inter-experimental variations. The results demonstrate that while using sEMG without separation gives only 60% accuracy, and sEMG separated using traditional ICA gives 65%, this approach gives an accuracy of 99% for the same experimental data. Besides the marked improvement in accuracy, the other advantages of such a system are that it is suitable for real-time operation and is easy for a lay user to train. The second part of this thesis reports research conducted to evaluate the use of ICA for the separation of bioelectric signals when the number of active sources may not be known. The work proposes using the value of the determinant of the global matrix, generated using sparse sub-band ICA, to identify the number of active sources. The results indicate that the technique is successful in identifying the number of active muscles for complex hand gestures, supporting applications such as human-computer interfaces. This thesis has also developed a method of determining the number of independent sources in a given mixture and has demonstrated that, using this information, it is possible to separate the signals in an undercomplete situation and reduce the redundancy in the data using standard ICA methods. Experimental verification has demonstrated that the quality of separation using this method is better than that of other techniques such as Principal Component Analysis (PCA) and selective PCA. This has a number of applications, such as audio separation and sensor networks.
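To illustrate the undercomplete setting (more recording channels than active sources) in a hedged way: the thesis estimates the number of active sources from the determinant of the global matrix of sparse sub-band ICA, whereas the toy sketch below substitutes a crude eigenvalue threshold for that step before reducing with PCA and separating with standard ICA. All data, sizes, and thresholds are invented.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_samples, n_sources, n_channels = 5000, 3, 8

sources = rng.laplace(size=(n_samples, n_sources))              # sEMG-like super-Gaussian sources
mixing = rng.normal(size=(n_channels, n_sources))
recordings = sources @ mixing.T + 0.01 * rng.normal(size=(n_samples, n_channels))

# Crude stand-in for source-count estimation: keep eigenvalues above a fraction
# of the largest one (the thesis uses a sub-band-ICA global-matrix criterion).
eigvals = np.linalg.eigvalsh(np.cov(recordings.T))[::-1]
n_active = int(np.sum(eigvals > 0.05 * eigvals[0]))
print("estimated number of sources:", n_active)

# Undercomplete separation: reduce redundancy with PCA, then apply standard ICA.
reduced = PCA(n_components=n_active, whiten=True).fit_transform(recordings)
separated = FastICA(n_components=n_active, whiten="unit-variance",
                    random_state=0).fit_transform(reduced)
print("separated shape:", separated.shape)
```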
