Bayesian Optimization and Semiparametric Models with Applications to Assistive Technology

Advances in machine learning are having a profound impact on disciplines spanning the sciences. Assistive technology and health informatics are fields in which even minor improvements gained by leveraging more advanced machine learning algorithms can translate into major real-world impact. However, successfully applying machine learning currently requires broad domain knowledge to determine which model is appropriate for a given task, and model-specific expertise to configure that model to the problem of interest. A major motivation for this thesis was the question: how can we make machine learning more accessible to assistive technology and health informatics researchers? Naturally, a complementary goal is to make machine learning more accessible in general. Specifically, this thesis explores how to automate the role of a machine learning expert by automatically adapting models and adjusting parameters to a given task of interest. The thesis comprises a number of contributions towards solving this challenging open problem in machine learning, and these are empirically validated on four real-world applications.

Through an interesting theoretical link between two seemingly disparate latent variable models, we create a hybrid model that allows one to flexibly interpolate between a parametric unsupervised neural network, a classification neural network, and a non-parametric Gaussian process. We demonstrate empirically that this non-parametrically guided autoencoder learns a latent representation that is more useful for a given task of interest.
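The core idea can be summarized as training an autoencoder whose latent code is shaped by both a reconstruction objective and a label-guidance objective. The sketch below is a minimal illustration of that idea in PyTorch; for brevity it substitutes a simple parametric guidance head for the thesis's non-parametric Gaussian process guidance, and the dimensions, toy data, and weighting parameter alpha are illustrative assumptions rather than details from the thesis.

```python
# Minimal sketch: an autoencoder whose latent code is additionally guided by
# labels. The parametric guidance head stands in for the thesis's Gaussian
# process guidance term; all sizes and data here are illustrative.
import torch
import torch.nn as nn

class GuidedAutoencoder(nn.Module):
    def __init__(self, d_in=20, d_latent=5, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 32), nn.Tanh(),
                                     nn.Linear(32, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 32), nn.Tanh(),
                                     nn.Linear(32, d_in))
        self.guide = nn.Linear(d_latent, n_classes)  # label-guidance head

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.guide(z)

torch.manual_seed(0)
x = torch.randn(128, 20)            # toy inputs
y = torch.randint(0, 3, (128,))     # toy labels for the guidance term

model = GuidedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
alpha = 0.5                         # trades off reconstruction vs. guidance

for step in range(200):
    x_hat, logits = model(x)
    loss = (alpha * nn.functional.mse_loss(x_hat, x)
            + (1 - alpha) * nn.functional.cross_entropy(logits, y))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Setting alpha to 1 recovers a purely unsupervised autoencoder, while alpha near 0 emphasizes the supervised guidance, mirroring the interpolation between unsupervised and task-driven representations described above.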

We establish methods for automatically configuring machine learning model hyperparameters using Bayesian optimization. We develop Bayesian methods for integrating over parameters, explore the use of different priors over functions, and develop methods to run experiments in parallel. We demonstrate empirically that these methods find better hyperparameters on recent benchmark problems spanning machine learning in significantly fewer experiments than the methods employed by the problems' authors. We further establish methods for incorporating parameter-dependent variable cost into the optimization procedure. These methods find better hyperparameters at lower cost, such as time, or within a bounded cost, such as before a deadline. Additionally, we develop a constrained variant of Bayesian optimization and demonstrate its superiority over the standard procedure in the presence of unknown constraints.
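At its core, Bayesian optimization of hyperparameters fits a probabilistic surrogate, commonly a Gaussian process, to the evaluations observed so far and uses an acquisition function such as expected improvement to choose which configuration to try next. The following is a minimal sketch of that loop, not the thesis's implementation; the squared-exponential kernel, the fixed lengthscale, the toy objective, and the fixed candidate grid are all illustrative assumptions.

```python
# Minimal sketch of Bayesian optimization with a Gaussian-process surrogate
# and the expected-improvement acquisition function (minimization).
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=1e-6):
    """GP posterior mean and standard deviation at candidate points Xstar."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, 0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """Expected amount by which a candidate beats the incumbent best."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):
    # Stand-in for an expensive evaluation, e.g. a model's validation loss.
    return np.sin(3 * x[0]) + x[0]**2 - 0.7 * x[0]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=(3, 1))           # small initial design
y = np.array([objective(x) for x in X])
candidates = np.linspace(-1, 2, 500)[:, None]

for _ in range(10):
    mu, sigma = gp_posterior(X, y, candidates)
    ei = expected_improvement(mu, sigma, y.min())
    x_next = candidates[np.argmax(ei)]        # query the most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)], "best value:", y.min())
```

In a full treatment the kernel hyperparameters would themselves be integrated over or optimized, the acquisition function would be maximized with a continuous optimizer rather than a fixed grid, and extensions such as parallel, cost-aware, and constrained acquisition functions correspond to the methods summarized in the paragraph above.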

Identifier: oai:union.ndltd.org:TORONTO/oai:tspace.library.utoronto.ca:1807/43732
Date: 14 January 2014
Creators: Snoek, Jasper Roland
Contributors: Mihailidis, Alex
Source Sets: University of Toronto
Language: en_ca
Detected Language: English
Type: Thesis
