51 |
Learning with trigonometric polynomials / Zhao, Yulong, January 2009 (has links) (PDF)
Thesis (M.Phil.)--City University of Hong Kong, 2009. / "Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Master of Philosophy." Includes bibliographical references (leaves 38-41)
|
52 |
Programming by demonstration : a machine learning approach / Lau, Tessa. January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. / Vita. Includes bibliographical references (p. 96-105).
|
53 |
Knowledge frontier discovery : a thesis presented to the faculty of the Graduate School, Tennessee Technological University / Honeycutt, Matthew Burton, January 2009 (has links)
Thesis (M.S.)--Tennessee Technological University, 2009. / Title from title page screen (viewed on Feb. 24, 2010). Bibliography: leaves 78-83.
|
54 |
Scalable kernel methods for machine learning / Kulis, Brian Joseph 09 October 2012 (has links)
Machine learning techniques are now essential for a diverse set of applications in computer vision, natural language processing, software analysis, and many other domains. As more applications emerge and the amount of data continues to grow, there is a need for increasingly powerful and scalable techniques. Kernel methods, which generalize linear learning methods to non-linear ones, have become a cornerstone of much recent work in machine learning and have been used successfully for core tasks such as clustering, classification, and regression. Despite the recent popularity of kernel methods, a number of issues must be tackled for them to succeed on large-scale data. First, kernel methods typically require memory that grows quadratically in the number of data objects, making it difficult to scale to large data sets. Second, kernel methods depend on an appropriate kernel function--an implicit mapping to a high-dimensional space--and it is not clear how to choose one, since the choice depends on the data. Third, in the context of data clustering, kernel methods have not been demonstrated to be practical for real-world clustering problems. This thesis explores these questions, offers novel solutions to them, and applies the results to a number of challenging applications in computer vision and other domains.

We explore two broad, fundamental problems in kernel methods. First, we introduce a scalable framework for learning kernel functions by incorporating prior knowledge from the data. This framework scales to very large data sets of millions of objects, can be used for a variety of complex data, and outperforms several existing techniques. In the transductive setting, the method can be used to learn low-rank kernels, whose memory requirements are linear in the number of data points. We also explore extensions of this framework and applications to image search problems such as object recognition, human body pose estimation, and 3-d reconstruction.

Second, we explore the use of kernel methods for clustering. We show a mathematical equivalence between several graph cut objective functions and the weighted kernel k-means objective. This equivalence leads to the first eigenvector-free algorithm for weighted graph cuts, which is thousands of times faster than existing state-of-the-art techniques while using significantly less memory. We benchmark this algorithm against existing methods, apply it to image segmentation, and explore extensions to semi-supervised clustering. / text
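To make the clustering side concrete, the following is a minimal Python sketch of weighted kernel k-means on a precomputed kernel matrix, the eigenvector-free style of clustering that the graph-cut equivalence makes practical; the function name, arguments, and defaults are illustrative assumptions, not code from the thesis.

import numpy as np

def weighted_kernel_kmeans(K, weights, k, n_iter=50, seed=0):
    # K: (n, n) precomputed kernel matrix; weights: (n,) nonnegative point weights.
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, size=n)            # random initial assignment
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if not mask.any():                     # skip empty clusters
                continue
            w_c = weights[mask]
            s_c = w_c.sum()
            # ||phi(x_i) - m_c||^2 written purely in terms of kernel entries
            cross = K[:, mask] @ w_c / s_c
            within = w_c @ K[np.ix_(mask, mask)] @ w_c / s_c**2
            dist[:, c] = diag - 2.0 * cross + within
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):     # converged
            break
        labels = new_labels
    return labels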
|
55 |
Structured exploration for reinforcement learning / Jong, Nicholas K. 18 December 2012 (has links)
Reinforcement Learning (RL) offers a promising approach towards achieving the dream of autonomous agents that can behave intelligently in the real world. Instead of requiring humans to determine the correct behaviors or sufficient knowledge in advance, RL algorithms allow an agent to acquire the necessary knowledge through direct experience with its environment. Early algorithms guaranteed convergence to optimal behaviors in limited domains, giving hope that simple, universal mechanisms would allow learning agents to succeed at solving a wide variety of complex problems. In practice, the field of RL has struggled to apply these techniques successfully to the full breadth and depth of real-world domains.
This thesis extends the reach of RL techniques by demonstrating the synergies among certain key developments in the literature. The first of these developments is model-based exploration, which facilitates theoretical convergence guarantees in finite problems by explicitly reasoning about an agent's certainty in its understanding of its environment. A second branch of research studies function approximation, which generalizes RL to infinite problems by artificially limiting the degrees of freedom in an agent's representation of its environment. The final major advance that this thesis incorporates is hierarchical decomposition, which seeks to improve the efficiency of learning by endowing an agent's knowledge and behavior with the gross structure of its environment.
Each of these ideas has intuitive appeal and sustains substantial independent research efforts, but this thesis defines the first RL agent that combines all their benefits in the general case. In showing how to combine these techniques effectively, this thesis investigates the twin issues of generalization and exploration, which lie at the heart of efficient learning. This thesis thus lays the groundwork for the next generation of RL algorithms, which will allow scientific agents to know when it suffices to estimate a plan from current data and when to accept the potential cost of running an experiment to gather new data. / text
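For readers new to the area, a minimal sketch of model-based exploration in the R-max style is given below as one concrete instance of "explicitly reasoning about an agent's certainty": state-action pairs visited fewer than m times are treated optimistically as if they yielded the maximum reward, and the agent plans against this optimistic empirical model. The class, parameters, and planning loop are illustrative assumptions, not the agent developed in the thesis.

import numpy as np

class RMaxAgent:
    # Tabular model-based learner: unknown (s, a) pairs are optimistically
    # assumed to return r_max forever, which drives systematic exploration.
    def __init__(self, n_states, n_actions, r_max=1.0, m=5, gamma=0.95):
        self.nS, self.nA = n_states, n_actions
        self.r_max, self.m, self.gamma = r_max, m, gamma
        self.counts = np.zeros((n_states, n_actions))
        self.trans = np.zeros((n_states, n_actions, n_states))
        self.rewards = np.zeros((n_states, n_actions))

    def update(self, s, a, r, s_next):
        # Record one observed transition.
        self.counts[s, a] += 1
        self.trans[s, a, s_next] += 1
        self.rewards[s, a] += r

    def plan(self, n_sweeps=100):
        # Value iteration on the learned model, with optimism for unknown pairs.
        V = np.zeros(self.nS)
        Q = np.zeros((self.nS, self.nA))
        for _ in range(n_sweeps):
            for s in range(self.nS):
                for a in range(self.nA):
                    c = self.counts[s, a]
                    if c < self.m:
                        Q[s, a] = self.r_max / (1.0 - self.gamma)   # optimistic value
                    else:
                        p = self.trans[s, a] / c
                        Q[s, a] = self.rewards[s, a] / c + self.gamma * p @ V
            V = Q.max(axis=1)
        return Q    # act greedily: a = Q[s].argmax()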
|
56 |
Machine learning methods for computational biology / Li, Limin, 李丽敏 January 2010 (has links)
published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy
|
57 |
Cross-domain subspace learning / Si, Si, 斯思 January 2010 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
|
58 |
Anomaly detection with Machine learning : Quality assurance of statistical data in the Aid community / Blomquist, Hanna, Möller, Johanna January 2015 (has links)
The overall purpose of this study was to find a way to identify incorrect data in Sida’s statistics about its contributions. A contribution is the financial support given by Sida to a project. The goal was to build an algorithm, based on supervised classification methods from machine learning, that determines whether a contribution is at risk of being incorrectly coded. A thorough data analysis process was carried out in order to train a model to find hidden patterns in the data. Descriptive features containing important information about the contributions were selected and used for this task, including keywords retrieved from the descriptions of the contributions. Two machine learning methods, AdaBoost and support vector machines, were tested on ten classification models. Each model was evaluated on its accuracy in predicting the correct class of the target variable. A misclassified component was considered more likely to be incorrectly coded and was treated as an anomaly. The AdaBoost method performed better and more consistently on the majority of the models, so six classification models built with AdaBoost were combined into one final ensemble classifier. This classifier was validated on new, unseen data, and an anomaly score was calculated for each component: the higher the score, the higher the risk of being anomalous. The result is a ranked list in which the most anomalous components are prioritized for further investigation by staff at Sida.
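As an illustration of the kind of pipeline described above, the following Python sketch trains one AdaBoost classifier per coded attribute and scores each record by the fraction of models whose prediction disagrees with the recorded code. The feature matrix, target names, and scoring rule are assumptions for illustration only, not the exact design used in the thesis.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def fit_models(X_train, coded_targets):
    # coded_targets: dict mapping a coded attribute name to its recorded values.
    models = {}
    for name, y in coded_targets.items():
        models[name] = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_train, y)
    return models

def anomaly_scores(models, X_new, recorded_codes):
    # Score = fraction of models whose prediction disagrees with the recorded code.
    disagree = np.zeros(len(X_new))
    for name, clf in models.items():
        disagree += (clf.predict(X_new) != recorded_codes[name]).astype(float)
    return disagree / len(models)

# Rank records with the highest (most anomalous) scores first:
# ranking = np.argsort(-anomaly_scores(models, X_new, recorded_codes))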
|
59 |
Inductive machine learning with bias / 林謀楷, Lam, Mau-kai. January 1994 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
|
60 |
PREDICTION OF CHROMATIN STATES USING DNA SEQUENCE PROPERTIES / Bahabri, Rihab R. 06 1900 (has links)
Activities of DNA are to a great extent controlled epigenetically through the internal structure of chromatin. This structure is dynamic and is influenced by different modifications of histone proteins. Various combinations of epigenetic modifications of histones pinpoint different functional regions of the DNA, determining the so-called chromatin states. However, whether chromatin states can be characterized by properties of the DNA sequence itself remains largely unknown. In this study we aim to explore whether DNA sequence patterns in the human genome can characterize different chromatin states.
Using DNA sequence motifs, we built binary classifiers for each chromatin state to evaluate whether a given genomic sequence is a good candidate for belonging to a particular chromatin state. Of the four classification algorithms used for this purpose (C4.5, Naive Bayes, Random Forest, and SVM), the decision-tree-based classifiers (C4.5 and Random Forest) yielded the best results. Our results suggest that in general these models lack sufficient predictive power, although for four chromatin states (insulators, heterochromatin, and two types of copy number variation) we found that the presence of certain motifs in a DNA sequence does imply an increased probability that the sequence belongs to one of these chromatin states.
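A minimal sketch of one such binary classifier is given below: sequences are represented by motif occurrence counts and a Random Forest predicts membership in a given chromatin state. The placeholder motif list and the parameters are illustrative assumptions, not those used in the study.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

MOTIFS = ["CCCTC", "TATAAA", "GGGCGG", "CCAAT"]   # placeholder motif set

def motif_features(sequences):
    # Represent each sequence by the number of occurrences of each motif.
    return [[seq.upper().count(m) for m in MOTIFS] for seq in sequences]

def evaluate_state_classifier(sequences, labels):
    # labels: 1 if the sequence is annotated with the chromatin state, 0 otherwise.
    X = motif_features(sequences)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, labels, cv=5).mean()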
|