41.
Bayesian optimization for selecting training and validation data for supervised machine learning : using Gaussian processes both to learn the relationship between sets of training data and model performance, and to estimate model performance over the entire problem domain / Bayesiansk optimering för val av träning- och valideringsdata för övervakad maskininlärning / Bergström, David, January 2019 (has links)
Validation and verification in machine learning is an open problem which becomes increasingly important as its applications become more critical. Among the applications are autonomous vehicles and medical diagnostics. These systems all need to be validated before being put into use, or else the consequences might be fatal. This master’s thesis focuses on improving both learning and validation of machine learning models in cases where data can be generated or collected at a chosen position, for example by taking and labeling photos at that position or by running a simulation which generates data from the chosen positions. The approach is twofold. The first part concerns modeling the relationship between any fixed-size set of positions and some real-valued performance measure. The second part involves calculating such a performance measure by estimating the performance over a region of positions. The result is two different algorithms, both variations of Bayesian optimization. The first algorithm models the relationship between a set of points and some performance measure while also optimizing the function, thus finding the set of points which yields the highest performance. The second algorithm uses Bayesian optimization to approximate the integral of performance over the region of interest. The resulting algorithms are validated in two different simulated environments. They are applicable not only to machine learning but to any function which takes a set of positions and returns a value, and are most suitable when that function is expensive to evaluate.
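To make the first algorithm concrete, here is a minimal sketch of Bayesian optimization over fixed-size sets of positions, assuming scikit-learn. Flattening each set into one vector ignores permutation invariance, which the thesis handles more carefully, and `evaluate_performance` is a hypothetical stand-in for the expensive data-collection-and-training step:

```python
# A minimal Bayesian-optimization loop over fixed-size sets of positions,
# assuming scikit-learn. Each candidate set is flattened into one vector so
# that a standard GP applies; `evaluate_performance` is a hypothetical
# stand-in for collecting data at the positions, training a model, and
# returning a scalar performance measure.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def evaluate_performance(positions):
    # Hypothetical expensive objective: peak performance near the center.
    return -np.sum((positions - 0.5) ** 2)

rng = np.random.default_rng(0)
n_points, dim = 3, 2                          # sets of 3 positions in 2-D

X = rng.uniform(0, 1, size=(5, n_points * dim))            # initial designs
y = np.array([evaluate_performance(x.reshape(n_points, dim)) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    # Upper-confidence-bound acquisition over random candidate sets.
    cand = rng.uniform(0, 1, size=(1000, n_points * dim))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sigma)]
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_performance(x_next.reshape(n_points, dim)))

best_set = X[np.argmax(y)].reshape(n_points, dim)
print(best_set)
```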
42.
Stable Mixing of Complete and Incomplete Information / Corduneanu, Adrian, Jaakkola, Tommi, 08 November 2001 (has links)
An increasing number of parameter estimation tasks involve the use of at least two information sources, one complete but limited, the other abundant but incomplete. Standard algorithms such as EM (or em) used in this context are unfortunately not stable in the sense that they can lead to a dramatic loss of accuracy with the inclusion of incomplete observations. We provide a more controlled solution to this problem through differential equations that govern the evolution of locally optimal solutions (fixed points) as a function of the source weighting. This approach permits us to explicitly identify any critical (bifurcation) points leading to choices unsupported by the available complete data. The approach readily applies to any graphical model in O(n^3) time where n is the number of parameters. We use the naive Bayes model to illustrate these ideas and demonstrate the effectiveness of our approach in the context of text classification problems.
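As a rough illustration of the idea (not the paper's actual continuation method), one can approximate the path of fixed points by sweeping the source weighting and warm-starting EM at each step; abrupt jumps in the followed solution flag candidate bifurcation points. The toy two-component Gaussian mixture below is a hypothetical stand-in for the paper's naive Bayes text model:

```python
# Warm-started EM as a discrete stand-in for path following: sweep the
# incomplete-data weight `lam` from 0 (complete data only) toward 1 and track
# the fixed point, watching for abrupt jumps that signal a bifurcation.
import numpy as np

rng = np.random.default_rng(1)
# Complete (labeled) data: a few points per class. Incomplete: many unlabeled.
x_lab = np.array([-1.0, -1.2, 1.1, 0.9]); y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

def em_step(mu, lam):
    # E-step: responsibilities for unlabeled points (equal priors, unit var).
    d0, d1 = (x_unl - mu[0]) ** 2, (x_unl - mu[1]) ** 2
    r1 = 1.0 / (1.0 + np.exp((d1 - d0) / 2.0))
    r0 = 1.0 - r1
    # Weighted M-step: complete data weighted (1 - lam), incomplete lam.
    w0 = (1 - lam) * (y_lab == 0).sum() + lam * r0.sum()
    w1 = (1 - lam) * (y_lab == 1).sum() + lam * r1.sum()
    m0 = ((1 - lam) * x_lab[y_lab == 0].sum() + lam * (r0 * x_unl).sum()) / w0
    m1 = ((1 - lam) * x_lab[y_lab == 1].sum() + lam * (r1 * x_unl).sum()) / w1
    return np.array([m0, m1])

mu = np.array([-1.0, 1.0])                 # fixed point at lam = 0
path = []
for lam in np.linspace(0.0, 0.99, 100):    # sweep the source weighting
    for _ in range(200):                   # iterate EM to the local fixed point
        mu = em_step(mu, lam)
    path.append((lam, *mu))

# Large steps between consecutive fixed points flag candidate bifurcations.
jumps = np.abs(np.diff(np.array(path)[:, 1:], axis=0)).max(axis=1)
print("largest fixed-point jump:", jumps.max())
```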
43.
Validating Co-Training Models for Web Image Classification / Zhang, Dell, Lee, Wee Sun, 01 1900 (has links)
Co-training is a semi-supervised learning method that is designed to take advantage of the redundancy that is present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of utilizing text and image data from the Web for co-training. We perform measurements to find indications of conditional independence in the texts and images obtained from the Web, and these measurements suggest that conditional independence is likely to be present in the data. Our experiments, carried out within a relevance feedback framework, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present. / Singapore-MIT Alliance (SMA)
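For reference, here is a minimal co-training sketch under the assumptions above, with hypothetical text and image feature matrices as the two views; the confidence-based pooling is one common variant, not necessarily the exact procedure evaluated in the paper:

```python
# A minimal co-training sketch: two views (text and image features), a small
# labeled pool, and confidence-based pseudo-labeling. `X_text`, `X_img`, and
# `y` are hypothetical; y holds placeholders for unlabeled rows and is
# overwritten with pseudo-labels as training proceeds.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_text, X_img, y, labeled_idx, rounds=10, k=2):
    labeled = set(labeled_idx)
    clf_text, clf_img = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        idx = sorted(labeled)
        clf_text.fit(X_text[idx], y[idx])
        clf_img.fit(X_img[idx], y[idx])
        unlabeled = [i for i in range(len(y)) if i not in labeled]
        if not unlabeled:
            break
        # Each view pseudo-labels the k unlabeled examples it is most
        # confident about and adds them to the shared training pool.
        for clf, X in ((clf_text, X_text), (clf_img, X_img)):
            proba = clf.predict_proba(X[unlabeled])
            for j in np.argsort(proba.max(axis=1))[-k:]:
                i = unlabeled[j]
                if i not in labeled:
                    y[i] = clf.classes_[proba[j].argmax()]
                    labeled.add(i)
    return clf_text, clf_img
```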
44.
Graph based semi-supervised learning in computer vision / Huang, Ning, January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Biomedical Engineering." Includes bibliographical references (p. 54-55).
45.
Kernel methods in supervised and unsupervised learning / Tsang, Wai-Hung. January 2003 (has links)
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003. / Includes bibliographical references (leaves 46-49). Also available in electronic version. Access restricted to campus users.
46.
Bayesian minimum expected risk estimation of distributions for statistical learning / Srivastava, Santosh. January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 120-127).
47.
Revisiting output coding for sequential supervised learning / Hao, Guohua. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2009. / Printout. Includes bibliographical references (leaves 38-40). Also available on the World Wide Web.
48.
Support vector classification analysis of resting state functional connectivity fMRI / Craddock, Richard Cameron. January 2009 (has links)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Hu, Xiaoping; Committee Co-Chair: Vachtsevanos, George; Committee Member: Butera, Robert; Committee Member: Gurbaxani, Brian; Committee Member: Mayberg, Helen; Committee Member: Yezzi, Anthony. Part of the SMARTech Electronic Thesis and Dissertation Collection.
49.
Parameter incremental learning algorithm for neural networks / Wan, Sheng, January 1900 (has links)
Thesis (Ph. D.)--West Virginia University, 2005. / Title from document title page. Document formatted into pages; contains x, 97 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 81-83).
50.
Empirical Effective Dimension and Optimal Rates for Regularized Least Squares Algorithm / Caponnetto, Andrea, Rosasco, Lorenzo, Vito, Ernesto De, Verri, Alessandro, 27 May 2005 (has links)
This paper presents an approach to model selection for regularized least-squares on reproducing kernel Hilbert spaces in the semi-supervised setting. The role of the effective dimension was recently shown to be crucial in defining a rule for the choice of the regularization parameter that attains asymptotically optimal performance in a minimax sense. The main goal of the present paper is to show how the effective dimension can be replaced by an empirical counterpart while preserving optimality. The empirical effective dimension can be computed from independent unlabelled samples, which makes the approach particularly appealing in the semi-supervised setting.
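As a sketch of the central quantity, the effective dimension at regularization parameter λ can be estimated from unlabelled samples as N(λ) = Σ_i s_i / (s_i + λ), where s_i are the eigenvalues of the normalized kernel matrix; the Gaussian kernel and the exact normalization below are illustrative assumptions, and the paper's precise estimator may differ in details:

```python
# Estimating the (empirical) effective dimension from an unlabelled sample,
# assuming a Gaussian kernel; N(lam) = sum_i s_i / (s_i + lam) over the
# eigenvalues s_i of the kernel matrix normalized by the sample size.
import numpy as np

def effective_dimension(X, lam, gamma=1.0):
    # Gaussian kernel matrix on the unlabelled sample, normalized by n.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq) / len(X)
    s = np.linalg.eigvalsh(K)                # eigenvalues of K / n
    return np.sum(s / (s + lam))

rng = np.random.default_rng(0)
X_unlabelled = rng.normal(size=(200, 5))     # hypothetical unlabelled sample
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, effective_dimension(X_unlabelled, lam))
```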