1

Non-parametric Bayesian models for structured output prediction

Bratières, Sébastien January 2018 (has links)
Structured output prediction is a machine learning task in which an input object is not just assigned a single class, as in classification, but multiple, interdependent labels. This means that the presence or value of a given label affects the other labels, for instance in text labelling problems, where output labels are applied to each word and their interdependencies must be modelled. Non-parametric Bayesian (NPB) techniques are probabilistic modelling techniques with the attractive property of allowing model capacity to grow, in a controllable way, with data complexity, while maintaining the advantages of Bayesian modelling. In this thesis, we develop NPB algorithms to solve structured output problems. We first study a map-reduce implementation of a stochastic inference method designed for the infinite hidden Markov model, applied to a computational linguistics task, part-of-speech tagging. We show that mainstream map-reduce frameworks do not easily support highly iterative algorithms. The main contribution of this thesis is a conceptually novel discriminative model, GPstruct. It is motivated by labelling tasks, and combines attractive properties of conditional random fields (CRFs), structured support vector machines, and Gaussian process (GP) classifiers. In probabilistic terms, GPstruct combines a CRF likelihood with a GP prior on factors; it can also be described as a Bayesian kernelized CRF. To train this model, we develop a Markov chain Monte Carlo algorithm based on elliptical slice sampling and investigate its properties. We then validate it in experiments on real data, exploring two topologies: sequence output with text labelling tasks, and grid output with semantic segmentation of images. The latter case poses scalability issues, which are addressed using likelihood approximations and an ensemble method that allows distributed inference and prediction. The experimental validation demonstrates that (a) the model is flexible and its constituent parts are modular and easy to engineer; (b) predictive performance and, most crucially, the probabilistic calibration of predictions are better than or equal to those of competitor models; and (c) model hyperparameters can be learnt from data.
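The training procedure named in this abstract, elliptical slice sampling, updates a latent function drawn from a Gaussian prior with no step-size tuning. As a rough sketch of that generic sampler (not the thesis's GPstruct code; `prior_chol` and `log_lik` are assumed names, with `log_lik` standing in for the CRF log-likelihood of the latent factors):

```python
import numpy as np

def elliptical_slice(f, prior_chol, log_lik):
    """One update for f ~ N(0, Sigma); prior_chol is the Cholesky factor of Sigma."""
    nu = prior_chol @ np.random.randn(len(f))      # auxiliary prior draw defining the ellipse
    log_y = log_lik(f) + np.log(np.random.rand())  # log slice threshold
    theta = np.random.uniform(0.0, 2.0 * np.pi)    # initial proposal angle
    lo, hi = theta - 2.0 * np.pi, theta            # shrinking bracket around theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
        if log_lik(f_new) > log_y:
            return f_new
        # rejected: shrink the bracket towards theta = 0 and retry
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = np.random.uniform(lo, hi)
```

Every proposal lies on an ellipse through the current state and a fresh prior draw, so it is automatically consistent with the Gaussian prior; only the likelihood enters the accept test, which is what makes the method a natural fit for GP models such as GPstruct.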
2

Factorial Hidden Markov Models for full and weakly supervised supertagging

Ramanujam, Srivatsan August 2009 (has links)
For many sequence prediction tasks in Natural Language Processing, modeling dependencies between individual predictions can be used to improve the prediction accuracy of the sequence as a whole. Supertagging involves assigning lexical entries to words based on a lexicalized grammatical theory such as Combinatory Categorial Grammar (CCG). Previous work has used Bayesian HMMs to learn taggers for POS tagging and supertagging separately. Modeling them jointly has the potential to produce more robust and accurate supertaggers trained with less supervision, and thereby to help in the creation of useful models for new languages and domains. Factorial Hidden Markov Models (FHMMs) support joint inference for multiple sequence prediction tasks. Here, I use them to jointly predict part-of-speech tag and supertag sequences with varying levels of supervision. I show that supervised training of FHMM models improves performance compared to standard HMMs, especially when labeled training material is scarce. Second, FHMMs trained from tag dictionaries rather than labeled examples also perform better than a standard HMM. Finally, I show that an FHMM and a maximum entropy Markov model can complement each other in a single-step co-training setup that improves the performance of both models when limited labeled training material is available.
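For intuition about what joint prediction buys here: an FHMM factorizes the transition into independent per-chain transitions while the emission couples the chains. A minimal decoding sketch in that spirit (my illustration, not the thesis's Bayesian inference procedure; all names are hypothetical) runs Viterbi over the cross-product of POS tags and supertags:

```python
import numpy as np

def viterbi_joint(obs, log_pi, log_trans_pos, log_trans_sup, log_emit):
    """Joint Viterbi for a two-chain FHMM; all parameters in log space.
    log_trans_pos[p, p'] and log_trans_sup[s, s'] factorize the transition;
    log_emit[p, s, w] emits word w given both tags; log_pi has shape (P, S)."""
    P, S, T = log_trans_pos.shape[0], log_trans_sup.shape[0], len(obs)
    delta = np.full((T, P, S), -np.inf)
    back = np.zeros((T, P, S, 2), dtype=int)
    delta[0] = log_pi + log_emit[:, :, obs[0]]
    for t in range(1, T):
        for p in range(P):
            for s in range(S):
                # FHMM transition: the two tag chains move independently
                scores = delta[t - 1] + log_trans_pos[:, p][:, None] + log_trans_sup[:, s][None, :]
                idx = np.unravel_index(np.argmax(scores), scores.shape)
                delta[t, p, s] = scores[idx] + log_emit[p, s, obs[t]]
                back[t, p, s] = idx
    path = [np.unravel_index(np.argmax(delta[-1]), (P, S))]
    for t in range(T - 1, 0, -1):  # trace back the best joint path
        path.append(tuple(back[t][path[-1]]))
    return path[::-1]  # one (pos, supertag) pair per word
```

The factorized transitions are what let the sparse supertag chain borrow statistical strength from the much smaller POS tag set, which is the robustness argument made above.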
3

Hierarchical Bayesian Models of Verb Learning in Children

Parisien, Christopher 11 January 2012 (has links)
The productivity of language lies in the ability to generalize linguistic knowledge to new situations. To understand how children can learn to use language in novel, productive ways, we must investigate how children can find the right abstractions over their input, and how these abstractions can actually guide generalization. In this thesis, I present a series of hierarchical Bayesian models that provide an explicit computational account of how children can acquire and generalize highly abstract knowledge of the verb lexicon from the language around them. By applying the models to large, naturalistic corpora of child-directed speech, I show that these models capture key behaviours in child language development. These models offer the power to investigate developmental phenomena with a degree of breadth and realism unavailable in existing computational accounts of verb learning. By most accounts, children rely on strong regularities between form and meaning to help them acquire abstract verb knowledge. Using a token-level clustering model, I show that by attending to simple syntactic features of potential verb arguments in the input, children can acquire abstract representations of verb argument structure that can reasonably distinguish the senses of a highly polysemous verb. I develop a novel hierarchical model that acquires probabilistic representations of verb argument structure, while also acquiring classes of verbs with similar overall patterns of usage. In a simulation of verb learning within a broad, naturalistic context, I show how this abstract, probabilistic knowledge of alternations can be generalized to new verbs to support learning. I augment this verb class model to acquire associations between form and meaning in verb argument structure, and to generalize this knowledge appropriately via the syntactic and semantic aspects of verb alternations. The model captures children's ability to use the alternation pattern of a novel verb to infer aspects of the verb's meaning, and to use the meaning of a novel verb to predict the range of syntactic forms in which the verb may participate. These simulations also provide new predictions of children's linguistic development, emphasizing the value of this model as a useful framework to investigate verb learning in a complex linguistic environment.
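The core mechanism in this abstract, sparse verb-level evidence borrowing strength from class-level regularities, can be caricatured by a two-level Dirichlet-multinomial shrinkage estimate. This toy sketch is mine, not Parisien's model (which clusters usages nonparametrically); `alpha` and `kappa` are invented pseudo-count hyperparameters:

```python
import numpy as np

def construction_probs(counts, class_of, alpha=1.0, kappa=10.0):
    """Per-verb construction probabilities shrunk toward the verb's class.
    counts[v, c]: times verb v occurred in argument-structure construction c;
    class_of[v]: class index of verb v."""
    n_classes = class_of.max() + 1
    class_mean = np.zeros((n_classes, counts.shape[1]))
    for k in range(n_classes):
        pooled = counts[class_of == k].sum(axis=0) + alpha  # class-level Dirichlet posterior
        class_mean[k] = pooled / pooled.sum()
    # verb-level estimates: observed counts plus kappa pseudo-counts from the class
    theta = counts + kappa * class_mean[class_of]
    return theta / theta.sum(axis=1, keepdims=True)
```

A novel verb with zero counts falls back entirely on its class's alternation pattern, which is exactly the generalization behaviour the abstract attributes to children.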
4

New Importance Sampling Densities

Hörmann, Wolfgang January 2005 (has links) (PDF)
To compute the expectation of a function with respect to a multivariate distribution, naive Monte Carlo is often not feasible. In such cases, importance sampling leads to better estimates than the rejection method. A new importance sampling distribution, the product of one-dimensional table mountain distributions with exponential tails, turns out to be flexible and useful for Bayesian integration problems. To obtain a heavy-tailed importance sampling distribution, a new radius transform for the above distribution is suggested. Together with a linear transform, the new importance sampling distributions lead to simple and fast integration algorithms with reliable error bounds.
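The estimator itself is standard: draw from a tractable proposal q and reweight by the target-to-proposal density ratio. The sketch below is generic and substitutes a heavy-tailed Student-t proposal for the paper's table mountain density, whose exact form the abstract does not give:

```python
import numpy as np
from scipy.stats import norm, t as student_t

def importance_sampling(h, log_target, sample_q, log_q, n=100_000):
    """Self-normalized importance sampling estimate of E_target[h(X)]."""
    x = sample_q(n)                    # draws from the proposal q
    log_w = log_target(x) - log_q(x)   # log importance weights
    w = np.exp(log_w - log_w.max())    # subtract the max for numerical stability
    return np.sum(w * h(x)) / w.sum()

# Example: E[X^2] = 1 under a standard normal target. df = 3 makes the
# proposal's tails heavier than the target's, keeping the weights bounded,
# which is the role the heavy-tailed radius transform plays above.
df = 3
estimate = importance_sampling(
    h=lambda x: x ** 2,
    log_target=norm.logpdf,
    sample_q=lambda n: student_t.rvs(df, size=n),
    log_q=lambda x: student_t.logpdf(x, df),
)
```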
5

Hitters vs. Pitchers: A Comparison of Fantasy Baseball Player Performances Using Hierarchical Bayesian Models

Huddleston, Scott D. 17 April 2012 (has links) (PDF)
In recent years, fantasy baseball has seen an explosion in popularity. Major League Baseball, with its long, storied history and the enormous quantity of data available, naturally lends itself to the modern-day recreational activity known as fantasy baseball. Fantasy baseball is a game in which participants manage an imaginary roster of real players and compete against one another using those players' real-life statistics to score points. Early forms of fantasy baseball began in the early 1960s, but beginning in the 1990s the sport was revolutionized by the advent of powerful computers and the Internet. The data used in this project come from an actual fantasy baseball league which uses a head-to-head, points-based scoring system. The data consist of the weekly point totals accumulated over the first three-fourths of the 2011 regular season by the top 110 hitters and top 70 pitchers in Major League Baseball. The purpose of this project is to analyze the relative value of pitchers versus hitters in this league using hierarchical Bayesian models. Three models are compared: one which differentiates between hitters and pitchers, another which also differentiates between starting pitchers and relief pitchers, and a third which makes no distinction between hitters and pitchers. The models are compared using the deviance information criterion (DIC). The best model is then used to predict weekly point totals for the last fourth of the 2011 season, and posterior predictive densities are compared to actual weekly scores.
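The deviance information criterion used for the comparison is cheap to compute from MCMC output: posterior mean deviance plus an effective-parameter penalty. A generic sketch (assuming per-draw log-likelihoods have already been collected; not the project's actual code):

```python
import numpy as np

def dic(log_lik_draws, log_lik_at_post_mean):
    """DIC from MCMC output. log_lik_draws: data log-likelihood at each
    posterior draw; log_lik_at_post_mean: at the posterior mean parameters."""
    mean_dev = -2.0 * np.mean(log_lik_draws)    # posterior mean deviance, D-bar
    dev_at_mean = -2.0 * log_lik_at_post_mean   # deviance at the posterior mean
    p_d = mean_dev - dev_at_mean                # effective number of parameters
    return mean_dev + p_d                       # lower DIC indicates the preferred model
```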
6

Bayesian Logistic Regression Models for Software Fault Localization

Richmond, James Howard 26 June 2012 (has links)
No description available.
7

Small-Variance Asymptotics for Bayesian Models

Jiang, Ke 25 May 2017 (has links)
No description available.
8

Case and covariate influence: implications for model assessment

Duncan, Kristin A. 12 October 2004 (has links)
No description available.
9

A Statistical Model of Recreational Trails

Predoehl, Andrew January 2016 (has links)
We present a statistical model of recreational trails and a method to infer trail routes from geophysical data, namely aerial imagery and terrain elevation. We learn a set of textures (textons) that characterize the imagery, and use the textons to segment each image into superpixels. We also model each texton's probability of generating trail pixels, and the direction of such trails. From terrain elevation, we model the magnitude and direction of the terrain gradient on-trail and off-trail. These models lead to a likelihood function for image and elevation. Consistent with Bayesian reasoning, we combine the likelihood with a prior model of trail length and smoothness, yielding a posterior distribution for trails given an image. We search for good values of this posterior using both a novel stochastic variation of Dijkstra's algorithm and an MCMC-inspired sampler. Our experiments, on trail images and ground truth collected in the western continental USA, show substantial improvement over the previous best trail-finding methods.
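The posterior described here decomposes into a data log-likelihood plus prior penalties on trail length and smoothness. A hypothetical scoring function in that spirit (the actual likelihood and prior are learned from textons and terrain gradients; these penalty forms and weights are stand-ins of mine):

```python
import numpy as np

def log_posterior(path, log_lik_pixel, beta_len=1.0, beta_smooth=1.0):
    """Score a candidate trail, given as an (N, 2) array of pixel coordinates."""
    path = np.asarray(path, dtype=float)
    # likelihood: how trail-like the traversed pixels look
    log_lik = sum(log_lik_pixel(int(r), int(c)) for r, c in path)
    steps = np.diff(path, axis=0)
    length = np.linalg.norm(steps, axis=1).sum()            # prior: penalize length
    unit = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    cos_turn = np.clip((unit[:-1] * unit[1:]).sum(axis=1), -1.0, 1.0)
    turns = np.arccos(cos_turn)                             # prior: penalize sharp turns
    return log_lik - beta_len * length - beta_smooth * np.sum(turns ** 2)
```

A search procedure like the stochastic Dijkstra variant mentioned above would then look for paths that score highly under this combined objective.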
