1. Gene fusions in cancer: Classification of fusion events and regulation patterns of fusion pathway neighbors. Hughes, Katelyn, 05 May 2016.
Cancer is a leading cause of death worldwide; in the US alone, an estimated 1.6 million new cases and 600,000 deaths occurred in 2015. Gene fusions, hybrid genes formed from two originally separate genes, are known drivers of cancer. However, gene fusions have also been found in healthy cells, arising from routine errors in replication. This project aims to understand the role of gene fusions in cancer. Specifically, we seek to achieve two goals. First, we aim to develop a computational method that predicts whether a gene fusion event is associated with a cancer or a healthy sample. Second, we aim to use this information to determine and characterize the molecular mechanisms behind gene fusion events. Recent studies have attempted to address these problems, but without explicitly considering that some fusion events occur in both cancer and healthy cells. Here, we address this problem using FUsion Enriched Learning of CANcer Mutations (FUELCAN), a semi-supervised model that initially treats all overlapping fusion events as unlabeled. The model is trained on the known cancer and healthy samples and tested on the unlabeled dataset. Unlabeled data points are classified as associated with healthy or cancer samples, the top 20 are added back to the training set, and the process repeats until all unlabeled data points have been classified. Three datasets were analyzed: acute lymphoblastic leukemia (ALL), breast cancer, and colorectal cancer. We obtained similar results for supervised and semi-supervised classification. To improve our model, we assessed the functional landscape of gene fusion events and observed that the pathway neighbors of both fusion partners are differentially expressed in each cancer dataset. The significant neighbors also show direct connections to cancer pathways and functions, indicating that these gene fusions are important for cancer development. Future directions include applying the acquired transcriptomic knowledge to our machine learning algorithm, counting transcription factors and kinases within the gene fusion events and their neighbors, and assessing the differences between upstream and downstream effects within the pathway neighbors.
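The iterative scheme described above is a self-training loop: train on labeled samples, classify the unlabeled pool, and move the most confident predictions into the training set. The sketch below is a minimal illustration under stated assumptions, not the FUELCAN implementation: it assumes numeric feature matrices for the fusion events, a generic scikit-learn classifier, and labels encoded as 0 (healthy-associated) and 1 (cancer-associated); only the batch size of 20 comes from the abstract.

```python
# Hypothetical self-training loop in the spirit of FUELCAN (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_labeled, y_labeled, X_unlabeled, batch_size=20):
    """Iteratively assign labels to 'overlapping' fusion events (assumed features)."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    pool_labels = np.full(len(X_unlabeled), -1)        # -1 = still unlabeled
    pool_idx = np.arange(len(X_unlabeled))

    while len(pool_idx) > 0:
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)

        proba = clf.predict_proba(X_unlabeled[pool_idx])
        confidence = proba.max(axis=1)                 # confidence of the best class
        preds = clf.classes_[proba.argmax(axis=1)]

        # Take the k most confident predictions (k = 20 per the abstract).
        k = min(batch_size, len(pool_idx))
        top = np.argsort(confidence)[::-1][:k]
        chosen = pool_idx[top]
        pool_labels[chosen] = preds[top]

        # Fold the newly labeled events back into the training set and repeat.
        X_train = np.vstack([X_train, X_unlabeled[chosen]])
        y_train = np.concatenate([y_train, preds[top]])
        pool_idx = np.setdiff1d(pool_idx, chosen)

    return pool_labels   # 0 = healthy-associated, 1 = cancer-associated (assumed)
```

The loop terminates because each pass removes up to 20 events from the pool; any probabilistic classifier exposing predict_proba could stand in for the random forest used here.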
2. Gravitropic Signal Transduction: A Systems Approach to Gene Discovery. Shen, Kaiyu, 12 June 2014.
No description available.
3. A semi-supervised approach to dialogue act classification using K-Means+HMM. Sigova, Elizaveta, January 2016.
Dialogue act (DA) classification is an important step in developing dialogue systems. It is usually solved with supervised machine learning (ML) approaches, all of which require hand-labeled data. Since hand-labeling data is resource-intensive, many have proposed unsupervised or semi-supervised ML approaches to DA classification. This master's thesis explores a novel semi-supervised method for DA classification: K-Means+HMM. The method combines K-Means clustering with Hidden Markov Model (HMM) modeling; before HMM training, the words in each utterance are abstracted to their part-of-speech (POS) tags, and the utterances themselves are abstracted to the cluster labels produced by K-Means. The focus is on the following hypotheses: H1) incorporating the context of utterances leads to better results (HMM is a method designed for sequential data and thus incorporates context, while K-Means does not); H2) increasing the number of clusters in K-Means+HMM leads to better results; H3) increasing the number of example pairs of cluster labels and hand-labeled DAs leads to better results (the example pairs are used to estimate the emission probabilities that define the HMM). One conclusion is that K-Means alone performs better than K-Means+HMM given 14 clusters and a single example pair (35.0% versus 31.6% one-to-one accuracy). However, with 15 example pairs K-Means+HMM reaches 40.5%, and with 20 example pairs it reaches 44% one-to-one accuracy; that is, K-Means+HMM outperforms K-Means once enough example pairs are provided. Another conclusion is that the number of example pairs has a much larger impact on the results than the number of clusters, suggesting that the adage "there is no data like labeled data" holds.
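A minimal sketch of such a K-Means+HMM pipeline is shown below, assuming the utterances are already represented as POS-tag feature vectors. The 14 clusters and 15 example pairs echo the abstract; the uniform start and transition probabilities, the add-alpha smoothing, the number of DA tags, and the toy data are assumptions rather than details from the thesis.

```python
# Illustrative K-Means+HMM pipeline (not the thesis code).
import numpy as np
from sklearn.cluster import KMeans

def fit_kmeans(pos_vectors, n_clusters=14, seed=0):
    """Cluster POS-abstracted utterance vectors into K-Means cluster labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km, km.fit_predict(pos_vectors)

def emissions_from_examples(example_pairs, n_das, n_clusters, alpha=1.0):
    """Estimate P(cluster label | DA) from a few (cluster, DA) example pairs."""
    counts = np.full((n_das, n_clusters), alpha)       # add-alpha smoothing
    for cluster, da in example_pairs:
        counts[da, cluster] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely DA sequence for a sequence of cluster labels (log-space Viterbi)."""
    n_states, T = len(start_p), len(obs)
    logd = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    logd[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        scores = logd[t - 1][:, None] + np.log(trans_p)   # (previous, current)
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(logd[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 200 utterances, 30 POS features, 4 DA tags (all illustrative).
rng = np.random.default_rng(0)
pos_vectors = rng.random((200, 30))
km, clusters = fit_kmeans(pos_vectors)
example_pairs = [(int(clusters[i]), int(rng.integers(4))) for i in range(15)]
emit = emissions_from_examples(example_pairs, n_das=4, n_clusters=14)
start = np.full(4, 0.25)                  # uniform start probabilities (assumption)
trans = np.full((4, 4), 0.25)             # uniform transitions (assumption)
das = viterbi(clusters, start, trans, emit)
```

With uniform transitions the decoder reduces to a per-utterance emission argmax; the point of hypothesis H1 is that informative transition probabilities let the context of neighboring utterances influence each label.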
4. Non-negative matrix decomposition approaches to frequency domain analysis of music audio signals. Wood, Sean, 12 1900.
We study the application of unsupervised matrix decomposition algorithms such as Non-negative Matrix Factorization (NMF) to frequency domain representations of music audio signals. These algorithms, driven by a given reconstruction error function, learn a set of basis functions and a set of corresponding coefficients that approximate the input signal. We compare the use of three reconstruction error functions when NMF is applied to monophonic and harmonized musical scales: least squares, Kullback-Leibler divergence, and a recently introduced "phase-aware" divergence measure. Novel supervised methods for interpreting the resulting decompositions are presented and compared to previously used methods that rely on domain knowledge. Finally, the ability of the learned basis functions to generalize across musical parameter values, including note amplitude, note duration, and instrument type, is analyzed. To do so, we introduce two basis function labeling algorithms that outperform the previous labeling approach in the majority of our tests, instrument type with monophonic audio being the only notable exception.
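As a rough illustration of NMF on a frequency-domain representation, the sketch below uses scikit-learn's NMF, which supports two of the three reconstruction errors named above: least squares (beta_loss='frobenius') and Kullback-Leibler divergence (beta_loss='kullback-leibler'); the phase-aware divergence is not available there. The toy signal and all parameter values are illustrative rather than those used in the thesis.

```python
# NMF of a magnitude spectrogram with a KL-divergence reconstruction error.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

# Toy "scale": three sinusoids played one after another (approx. C4, E4, G4).
sr = 8000
t = np.arange(sr * 3) / sr
x = np.zeros_like(t)
for i, f0 in enumerate([262.0, 330.0, 392.0]):
    mask = (t >= i) & (t < i + 1)
    x[mask] = np.sin(2 * np.pi * f0 * t[mask])

# Frequency-domain representation: magnitude STFT, shape (freq bins, frames).
_, _, Z = stft(x, fs=sr, nperseg=1024)
V = np.abs(Z)

# Factor V.T so that components_ are spectral basis functions and W holds their
# time-varying coefficients; swap beta_loss to 'frobenius' for least squares.
model = NMF(n_components=3, beta_loss='kullback-leibler', solver='mu',
            init='random', max_iter=500, random_state=0)
W = model.fit_transform(V.T)            # coefficients, shape (frames, 3)
H = model.components_                   # basis spectra, shape (3, freq bins)
print(W.shape, H.shape, (W @ H).shape)  # W @ H approximates V.T
```

Interpreting or labeling the learned basis functions, for example matching each basis spectrum to a note or an instrument, is the part the thesis addresses with its supervised labeling algorithms; nothing in this sketch implements those.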