About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Why Machine Learning Works

Montanez, George D. 01 December 2017
To better understand why machine learning works, we cast learning problems as searches and characterize what makes searches successful. We prove that any search algorithm can only perform well on a narrow subset of problems, and show how dependence can raise the probability of success for searches. We examine two popular ways of understanding what makes machine learning work, empirical risk minimization and compression, and show how they fit within our search framework. Leveraging the "dependence-first" view of learning, we apply this knowledge to areas of unsupervised time-series segmentation and automated hyperparameter optimization, developing new algorithms with strong empirical performance on real-world problem classes.
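The search framing above rests on No Free Lunch-style averaging arguments. As a toy illustration in Python (not code from the thesis), the snippet below enumerates every objective function on a three-point domain and shows that two fixed, non-repeating search orders achieve identical average performance once all functions are weighted equally:

```python
from itertools import product

Y = (0, 1)                                   # possible objective values
functions = list(product(Y, repeat=3))       # all 8 maps {0,1,2} -> Y

def best_after(order, f, m):
    """Best objective value seen in the first m evaluations of f."""
    return max(f[x] for x in order[:m])

for name, order in [("ascending", (0, 1, 2)), ("descending", (2, 1, 0))]:
    for m in (1, 2, 3):
        avg = sum(best_after(order, f, m) for f in functions) / len(functions)
        print(f"{name:10s} m={m}: average best value = {avg:.3f}")
```

Both orders print the same averages for every budget m. Dependence between the target and what the search observes is exactly what breaks this uniformity, which is why the "dependence-first" view treats it as the resource that makes learning possible.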
2

Machine Learning, Optimization, and Anti-Training with Sacrificial Data

Valenzuela, Michael Lawrence January 2016
Traditionally the machine learning community has viewed the No Free Lunch (NFL) theorems for search and optimization as a limitation. I review, analyze, and unify the NFL theorems and their many frameworks to arrive at necessary conditions for improving black-box optimization, model selection, and machine learning in general. I review the meta-learning literature to determine when and how meta-learning can benefit machine learning. I generalize meta-learning, in the context of the NFL theorems, to arrive at a novel technique called Anti-Training with Sacrificial Data (ATSD). My technique applies at the meta level to arrive at domain-specific algorithms and models. I also show how to generate sacrificial data. An extensive case study is presented along with simulated annealing results to demonstrate the efficacy of the ATSD method.
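As a rough sketch of the ATSD idea only (the toy problems, the parameterized searcher, and all names below are invented for illustration and are not Valenzuela's construction), the meta-objective rewards performance on problems we expect to encounter and penalizes performance on sacrificial data drawn from problem classes we believe will never occur:

```python
import random

random.seed(0)

def make_problem(peak):
    """A toy objective on {0,...,9} with its maximum at `peak`."""
    return [10 - abs(x - peak) for x in range(10)]

def run(pref, problem, evals=3):
    """Hypothetical parameterized searcher: samples points near `pref`
    and returns the best objective value it finds."""
    pts = [max(0, min(9, pref + random.choice([-1, 0, 1]))) for _ in range(evals)]
    return max(problem[p] for p in pts)

real = [make_problem(2) for _ in range(20)]         # problems we expect to see
sacrificial = [make_problem(8) for _ in range(20)]  # problems believed never to occur

def meta_score(pref, lam=0.5):
    good = sum(run(pref, p) for p in real)
    bad = sum(run(pref, p) for p in sacrificial)
    return good - lam * bad     # ATSD-style objective: reward good, punish bad

best_pref = max(range(10), key=meta_score)
print("meta-selected search preference:", best_pref)   # typically lands near 2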
3

A COMPARISON OF SIMULATION OPTIMIZATION TECHNIQUES IN SOLVING SINGLE-OBJECTIVE, CONSTRAINED, DISCRETE VARIABLE PROBLEMS

Krumpe, Norman Joseph 31 October 2005
No description available.
4

Universal Induction and Optimisation: No Free Lunch

Everitt, Tom January 2013
No description available.
5

No Free Lunch et recherche de solutions structurantes en coloration

Martin, Jean-Noel 09 December 2010
We first present the No Free Lunch theorems, based on the paper by D.H. Wolpert and W.G. Macready (IEEE version, 1997), as well as the many reactions these results provoked in the optimization community. Convinced thereafter of the value of a global approach to problems and of the need to search for general properties, especially invariance under symmetries, we then attempt to apply this method to the coloring of simple undirected graphs. This field is chosen both for its intrinsic interest and because it serves as a fertile model for many optimization problems. We develop the notion of decomposing a graph into maximal cliques and that of constructive sequences, which allow a graph to be rebuilt from its elementary components (primary cliques), true analogues of the prime numbers for the natural integers. We give a main algorithm and study two special cases of it; together they yield a partition of the set of valid colorings of the graph under study. From this we recover the chromatic polynomial formally, independently of the number of available colors. We establish a Galois correspondence between valid colorings and subgraphs generated by nested families of maximal cliques, provided these are complete decompositions of increasing subgraphs of the total graph.
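The thesis recovers the chromatic polynomial through clique decompositions; purely as a concrete reference point (not the thesis's method), the classic deletion-contraction recurrence P(G, k) = P(G - e, k) - P(G / e, k) computes the same quantity and can be sketched in a few lines of Python:

```python
def chromatic(edges, n, k):
    """Number of proper k-colorings of a graph on vertices 0..n-1,
    via deletion-contraction: P(G, k) = P(G - e, k) - P(G / e, k)."""
    if not edges:
        return k ** n                       # n isolated vertices
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic(rest, n, k)         # drop edge (u, v)
    # Contraction: merge v into u, discard loops and duplicate edges.
    merged = set()
    for a, b in rest:
        a = u if a == v else a
        b = u if b == v else b
        if a != b:
            merged.add((min(a, b), max(a, b)))
    # Relabel vertices above v down by one so the vertex count is n - 1.
    relabeled = {tuple(x - 1 if x > v else x for x in e) for e in merged}
    return deleted - chromatic(list(relabeled), n - 1, k)

# Triangle K3: P(K3, k) = k(k-1)(k-2); with k = 3 colors there are 6 colorings.
print(chromatic([(0, 1), (0, 2), (1, 2)], 3, 3))   # -> 6
```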
6

General-purpose optimization through information maximization

Lockett, Alan Justin 05 July 2012
The primary goal of artificial intelligence research is to develop a machine capable of learning to solve disparate real-world tasks autonomously, without relying on specialized problem-specific inputs. This dissertation suggests that such machines are realistic: if the No Free Lunch theorems applied to all real-world problems, the world would be utterly unpredictable. In response, the dissertation proposes the information-maximization principle, which claims that optimal optimization methods make the best use of the information available to them. This principle yields a new algorithm, evolutionary annealing, which is shown to perform especially well on challenging problems with irregular structure.
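Evolutionary annealing, as described in the dissertation, selects parents from the full archive of evaluated points under an annealed distribution. The following loose sketch (the schedule, mutation scale, and toy objective are placeholders, not Lockett's settings) conveys the general shape of such a method:

```python
import math
import random

random.seed(1)

def f(x):
    """Toy objective on [-5, 5] with irregular structure."""
    return math.sin(3 * x) - 0.1 * x * x

# Keep every evaluated point, sample a parent with Boltzmann weights at a
# falling temperature, and propose a Gaussian perturbation of it.
archive = [random.uniform(-5, 5)]
for t in range(1, 200):
    T = 1.0 / math.log(t + 1)                     # annealing schedule
    weights = [math.exp(f(x) / T) for x in archive]
    parent = random.choices(archive, weights)[0]  # annealed selection
    child = min(5, max(-5, parent + random.gauss(0, 0.5)))
    archive.append(child)

print("best found:", max(archive, key=f))
```

As the temperature falls, selection concentrates on the best archive members while the archive itself preserves information from every evaluation, which is the information-maximizing intuition the abstract points to.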
7

A Bayesian Decision Theoretical Approach to Supervised Learning, Selective Sampling, and Empirical Function Optimization

Carroll, James Lamond 10 March 2010
Many have used the principles of statistics and Bayesian decision theory to model specific learning problems. It is less common to see models of the process of learning in general. One exception is the model of the supervised learning process known as the "Extended Bayesian Formalism" or EBF. This model is descriptive, in that it can describe and compare learning algorithms. Thus the EBF is capable of modeling both effective and ineffective learning algorithms. We extend the EBF to model unsupervised learning, semi-supervised learning, supervised learning, and empirical function optimization. We also generalize the utility model of the EBF to deal with non-deterministic outcomes, and with utility functions other than 0-1 loss. Finally, we modify the EBF to create a "prescriptive" learning model, meaning that, instead of describing existing algorithms, our model defines how learning should optimally take place. We call the resulting model the Unified Bayesian Decision Theoretical Model, or the UBDTM. We show that this model can serve as a cohesive theory and framework in which a broad range of questions can be analyzed and studied. Such a broadly applicable unified theoretical framework is one of the major missing ingredients of machine learning theory. Using the UBDTM, we concentrate on supervised learning and empirical function optimization. We then use the UBDTM to reanalyze many important theoretical issues in machine learning, including No-Free-Lunch, utility implications, and active learning. We also point forward to future directions for using the UBDTM to model learnability, sample complexity, and ensembles. We also provide practical applications of the UBDTM by using the model to train a Bayesian variant of the CMAC supervised learner in closed form, to perform a practical empirical function optimization task, and as part of the guiding principles behind an ongoing project to create an electronic and print corpus of tagged ancient Syriac texts using active learning.
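The decision-theoretic core the abstract builds on can be stated compactly. In standard notation (not necessarily the thesis's own), the Bayes-optimal action under a posterior p(θ | D) and a utility function U is

```latex
\[
a^{*} \;=\; \arg\max_{a \in \mathcal{A}} \,
  \mathbb{E}_{\theta \sim p(\theta \mid D)}\!\left[ U(a, \theta) \right]
\;=\; \arg\max_{a \in \mathcal{A}} \int U(a, \theta)\, p(\theta \mid D)\, \mathrm{d}\theta .
\]
```

Under 0-1 loss this reduces to predicting the most probable label; generalizing U beyond 0-1 loss and allowing non-deterministic outcomes is precisely the extension of the EBF that the abstract describes.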
