141

ベイズ推定に基づく音楽アライメント / Bayesian Music Alignment

Maezawa, Akira (前澤, 陽) 23 March 2015 (has links)
Kyoto University (京都大学) / 0048 / New degree system, course-based doctorate / Doctor of Informatics / Kō No. 19106 (甲第19106号) / Jōhaku No. 552 (情博第552号) / 新制||情||98 / 32057 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Tatsuya Kawahara, Professor Toshiyuki Tanaka, Lecturer Kazuyoshi Yoshii / Meets Article 4, Paragraph 1 of the Degree Regulations
142

Statistical Learning of Some Complex Systems: From Dynamic Systems to Market Microstructure

Tong, Xiao Thomas 27 September 2013 (has links)
A complex system is one with many parts, whose behaviors are strongly dependent on each other. There are two interesting questions about complex systems. One is to understand how to recover the true structure of a complex system from noisy data. The other is to understand how the system interacts with its environment. In this thesis, we address these two questions by studying two distinct complex systems: dynamic systems and market microstructure. To address the first question, we focus on some nonlinear dynamic systems. We develop a novel Bayesian statistical method, the Gaussian Emulator, to estimate the parameters of dynamic systems from noisy data, when the data are either fully or partially observed. Compared to numerical solvers, our method substantially improves estimation accuracy and reduces computation time. To address the second question, we focus on the market microstructure of hidden liquidity. We propose statistical models to explain hidden liquidity under different market conditions. Our results suggest that hidden liquidity can be reliably predicted given the visible state of the market. / Statistics
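As a rough illustration of the emulator idea (not the thesis's actual implementation), the sketch below uses Gaussian-process gradient matching on a Lotka-Volterra system: a GP smooths the noisy trajectories and supplies emulated time derivatives, so the ODE parameters can be estimated without repeatedly calling a numerical solver. The system, kernel hyperparameters, and least-squares step are illustrative assumptions.

```python
# Hedged sketch: GP "emulation" of an ODE for parameter estimation.
import numpy as np

def rbf(t1, t2, ell, sf):
    d = t1[:, None] - t2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def rbf_dt1(t1, t2, ell, sf):
    d = t1[:, None] - t2[None, :]
    return -(d / ell**2) * rbf(t1, t2, ell, sf)   # d/dt1 of the RBF kernel

def gp_state_and_deriv(t, y, ell=0.8, sf=10.0, noise=0.3):
    """GP-smoothed states and emulated derivatives at the observed times."""
    alpha = np.linalg.solve(rbf(t, t, ell, sf) + noise**2 * np.eye(len(t)), y)
    return rbf(t, t, ell, sf) @ alpha, rbf_dt1(t, t, ell, sf) @ alpha

def simulate_lv(theta, x0, t):
    a, b, c, d = theta
    x = np.zeros((len(t), 2)); x[0] = x0
    for i in range(1, len(t)):                      # simple Euler steps
        h = t[i] - t[i - 1]; u, v = x[i - 1]
        x[i] = x[i - 1] + h * np.array([a*u - b*u*v, -c*v + d*u*v])
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
truth = [1.0, 0.1, 1.5, 0.075]
y = simulate_lv(truth, [10.0, 5.0], t) + rng.normal(0, 0.3, (200, 2))

# Emulate states/derivatives, then exploit that the Lotka-Volterra
# right-hand side is linear in (a, b, c, d) given the states.
u, du = gp_state_and_deriv(t, y[:, 0])
v, dv = gp_state_and_deriv(t, y[:, 1])
a, b = np.linalg.lstsq(np.column_stack([u, -u*v]), du, rcond=None)[0]
c, d = np.linalg.lstsq(np.column_stack([-v, u*v]), dv, rcond=None)[0]
print("estimated (a, b, c, d):", np.round([a, b, c, d], 3))
```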
143

Bayesian Inference Approaches for Particle Trajectory Analysis in Cell Biology

Monnier, Nilah 28 August 2013 (has links)
Despite the importance of single particle motion in biological systems, systematic inference approaches to analyze particle trajectories and evaluate competing motion models are lacking. An automated approach for robust evaluation of motion models that does not require manual intervention is highly desirable to enable analysis of datasets from high-throughput imaging technologies that contain hundreds or thousands of trajectories of biological particles, such as membrane receptors, vesicles, chromosomes or kinetochores, mRNA particles, or whole cells in developing embryos. Bayesian inference is a general theoretical framework for performing such model comparisons that has proven successful in handling noise and experimental limitations in other biological applications. The inherent Bayesian penalty on model complexity, which avoids overfitting, is particularly important for particle trajectory analysis given the highly stochastic nature of particle diffusion. This thesis presents two complementary approaches for analyzing particle motion using Bayesian inference. The first method, MSD-Bayes, discriminates a wide range of motion models (including diffusion, directed motion, and anomalous and confined diffusion) based on mean-square displacement analysis of a set of particle trajectories, while the second method, HMM-Bayes, identifies dynamic switching between diffusive and directed motion along individual trajectories using hidden Markov models. These approaches are validated on biological particle trajectory datasets from a wide range of experimental systems, demonstrating their broad applicability to research in cell biology.
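A minimal sketch of the MSD-based model comparison idea, with BIC standing in for the full Bayesian evidence; the simulated 2D tracks, noise levels, and least-squares curve fits are assumptions, not the thesis's method.

```python
# Hedged sketch: discriminate diffusion vs. directed motion from MSD curves.
import numpy as np

def msd(track):
    """Time-averaged mean-square displacement for one 2D track."""
    n = len(track)
    lags = np.arange(1, n // 4)  # short lags are the most reliable
    return lags, np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                           for lag in lags])

def fit_and_bic(lags, m, dt, model):
    """Least-squares fit of an MSD model; return its BIC."""
    t = lags * dt
    if model == "diffusion":      # MSD = 4 D t
        X = t[:, None]
    else:                         # directed: MSD = 4 D t + v^2 t^2
        X = np.column_stack([t, t**2])
    beta = np.linalg.lstsq(X, m, rcond=None)[0]
    resid = m - X @ beta
    n, k = len(m), X.shape[1]
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + k * np.log(n)

# Simulate a purely diffusive track and a diffusive-plus-directed track.
rng = np.random.default_rng(1)
dt, D, v = 0.1, 0.5, 2.0
steps = rng.normal(0, np.sqrt(2 * D * dt), (500, 2))
diffusive = np.cumsum(steps, axis=0)
directed = diffusive + v * dt * np.arange(500)[:, None] * np.array([1.0, 0.0])

for name, track in [("diffusive", diffusive), ("directed", directed)]:
    lags, m = msd(track)
    bic_d = fit_and_bic(lags, m, dt, "diffusion")
    bic_v = fit_and_bic(lags, m, dt, "directed")
    best = "diffusion" if bic_d < bic_v else "directed"
    print(f"{name} track: BIC(diffusion)={bic_d:.1f}, "
          f"BIC(directed)={bic_v:.1f} -> choose {best}")
```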
144

Function-on-Function Regression with Public Health Applications

Meyer, Mark John 06 June 2014 (has links)
Medical research currently involves the collection of large and complex data. One such type is functional data, where the unit of measurement is a curve measured over a grid. Functional data comes in a variety of forms depending on the nature of the research. Novel methodologies are required to accommodate this growing volume of functional data, alongside new testing procedures that provide valid inferences. In this dissertation, I propose three novel methods to accommodate a variety of questions involving functional data of multiple forms: (1) a function-on-function regression for Gaussian data; (2) a historical functional linear model for repeated measures; and (3) a generalized functional outcome regression for ordinal data. For each method, I discuss the existing shortcomings of the literature and demonstrate how my method fills those gaps. The abilities of each method are demonstrated via simulation and data application.
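A hedged sketch of plain (non-Bayesian) function-on-function regression, Y_i(t) = ∫ X_i(s) β(s,t) ds + ε_i(t), via a tensor-product basis and a ridge penalty; the basis choice, penalty, and simulated data are illustrative assumptions rather than the dissertation's methods.

```python
# Hedged sketch: linear function-on-function regression on a grid.
import numpy as np

def bump_basis(grid, K):
    """Smooth Gaussian-bump basis on [0, 1] (a simple stand-in for B-splines)."""
    centers = np.linspace(0, 1, K)
    width = 1.5 / K
    return np.exp(-0.5 * ((grid[:, None] - centers[None, :]) / width)**2)

rng = np.random.default_rng(2)
ns, nt, n, K = 50, 40, 200, 8
s, t = np.linspace(0, 1, ns), np.linspace(0, 1, nt)
Bs, Bt = bump_basis(s, K), bump_basis(t, K)          # (ns, K), (nt, K)

# True coefficient surface and simulated functional data.
beta_true = np.outer(np.sin(2 * np.pi * s), np.cos(np.pi * t))   # (ns, nt)
X = np.cumsum(rng.normal(size=(n, ns)), axis=1) / np.sqrt(ns)    # smooth-ish curves
ds = s[1] - s[0]
Y = X @ beta_true * ds + 0.05 * rng.normal(size=(n, nt))

# Model beta(s, t) = Bs C Bt'; then Y ~ Z C Bt' with Z = X Bs ds.
# Solve the ridge-penalized normal equations for vec(C) (row-major).
Z = X @ Bs * ds                                     # (n, K) integrated design
A = np.kron(Z.T @ Z, Bt.T @ Bt)                     # (K*K, K*K)
b = (Z.T @ Y @ Bt).ravel()
C = np.linalg.solve(A + 1e-6 * np.eye(K * K), b).reshape(K, K)
beta_hat = Bs @ C @ Bt.T
print("mean abs error in recovered beta:", np.abs(beta_hat - beta_true).mean().round(3))
```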
145

Accelerating Markov chain Monte Carlo via parallel predictive prefetching

Angelino, Elaine Lee 21 October 2014 (has links)
We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms. This dissertation demonstrates that MCMC inference can be accelerated in a model of parallel computation that uses speculation to predict and complete computational work ahead of when it is known to be useful. By exploiting fast, iterative approximations to the target density, we can speculatively evaluate many potential future steps of the chain in parallel. In Bayesian inference problems, this approach can accelerate sampling from the target distribution, without compromising exactness, by exploiting subsets of data. It takes advantage of whatever parallel resources are available, but produces results exactly equivalent to standard serial execution. In the initial burn-in phase of chain evaluation, it achieves speedup over serial evaluation that is close to linear in the number of available cores. / Engineering and Applied Sciences
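A simplified sketch of the speculative idea: pre-draw a small binary tree of possible future Metropolis-Hastings states (accept vs. reject branches), evaluate every candidate density in parallel, then replay the accept/reject decisions serially from the precomputed values. The target, proposal, and pool are assumptions, and this sketch omits the dissertation's key ingredient of predictive approximations that decide which branches deserve cores; the parallelism only pays off when the density is expensive.

```python
# Hedged sketch: speculative "prefetching" for Metropolis-Hastings.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def log_target(x):
    return -0.5 * float(np.sum(x**2))  # stand-in for an expensive posterior

def build_tree(x, rng, depth, step):
    """Speculative tree: 'accept' edges propose, 'reject' edges keep x."""
    if depth == 0:
        return None
    prop = x + step * rng.normal(size=x.shape)
    return {"prop": prop,
            "accept": build_tree(prop, rng, depth - 1, step),
            "reject": build_tree(x, rng, depth - 1, step)}

def collect_props(node, out):
    if node is not None:
        out.append(node["prop"])
        collect_props(node["accept"], out)
        collect_props(node["reject"], out)

def prefetch_mh(x0, n_steps, depth=3, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    chain, lp = [x], log_target(x)
    with ProcessPoolExecutor() as pool:
        while len(chain) <= n_steps:
            tree = build_tree(x, rng, depth, step)
            props = []; collect_props(tree, props)
            lps = iter(pool.map(log_target, props))   # parallel prefetch
            def attach(node):                         # same preorder as collect
                if node is not None:
                    node["lp"] = next(lps)
                    attach(node["accept"]); attach(node["reject"])
            attach(tree)
            node = tree                               # serial replay of decisions
            while node is not None and len(chain) <= n_steps:
                if np.log(rng.uniform()) < node["lp"] - lp:
                    x, lp, node = node["prop"], node["lp"], node["accept"]
                else:
                    node = node["reject"]
                chain.append(x)
    return np.array(chain[:n_steps + 1])

if __name__ == "__main__":
    samples = prefetch_mh(np.zeros(2), n_steps=1000, depth=3)
    print("posterior mean ~", samples[500:].mean(axis=0))
```

The replayed chain is distributionally equivalent to serial MH (each accept-branch proposal is drawn from its parent state); matching a serial run draw-for-draw, as the dissertation does, requires additional care with the random-number stream.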
146

The effects of three different priors for variance parameters in the normal-mean hierarchical model

Chen, Zhu, 1985- 01 December 2010 (has links)
Many prior distributions have been suggested for variance parameters in hierarchical models. The supposedly "non-informative" limit of the conjugate inverse-gamma prior can cause problems. I consider three priors for the variance parameters (conjugate inverse-gamma, log-normal, and truncated normal) and carry out a numerical analysis on Gelman's 8-schools data. Using the posterior draws, I compare the Bayesian credible intervals of the parameters under the three priors. I then compute predictive distributions and discuss the differences among the three priors.
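A hedged sketch of the comparison: random-walk Metropolis on (mu, tau) for the 8-schools model with the school effects marginalized out, swapping the prior on the between-school variance. The hyperparameter values and sampler settings are illustrative assumptions.

```python
# Hedged sketch: three priors on the variance parameter, 8-schools data.
import numpy as np
from scipy import stats

# Gelman's 8-schools data: estimated treatment effects and standard errors.
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 18.])

def log_post(mu, tau, prior):
    if tau <= 0:
        return -np.inf
    # Marginal likelihood with theta_j integrated out: y_j ~ N(mu, sigma_j^2 + tau^2)
    ll = np.sum(stats.norm.logpdf(y, mu, np.sqrt(sigma**2 + tau**2)))
    if prior == "inv_gamma":       # IG(0.001, 0.001) on tau^2 (assumed values)
        lp = stats.invgamma.logpdf(tau**2, 0.001, scale=0.001) + np.log(2 * tau)
    elif prior == "log_normal":    # LN(0, 2^2) on tau^2 (assumed values)
        lp = stats.lognorm.logpdf(tau**2, 2.0) + np.log(2 * tau)
    else:                          # N(0, 25^2) truncated to tau > 0 (assumed)
        lp = stats.norm.logpdf(tau, 0, 25.0)
    return ll + lp                 # log(2*tau) = Jacobian for the tau^2 priors

def sample(prior, n=20000, seed=3):
    rng = np.random.default_rng(seed)
    mu, tau, draws = 0.0, 5.0, []
    lp = log_post(mu, tau, prior)
    for _ in range(n):
        mu_p, tau_p = mu + rng.normal(0, 3), tau + rng.normal(0, 2)
        lp_p = log_post(mu_p, tau_p, prior)
        if np.log(rng.uniform()) < lp_p - lp:
            mu, tau, lp = mu_p, tau_p, lp_p
        draws.append((mu, tau))
    return np.array(draws[n // 2:])   # discard burn-in

for prior in ["inv_gamma", "log_normal", "trunc_normal"]:
    d = sample(prior)
    lo, hi = np.percentile(d[:, 1], [2.5, 97.5])
    print(f"{prior:12s}: 95% interval for tau = ({lo:.1f}, {hi:.1f})")
```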
147

Top-Down Bayesian Modeling and Inference for Indoor Scenes

Del Pero, Luca January 2013 (has links)
People can understand the content of an image without effort. We can easily identify the objects in it, and figure out where they are in the 3D world. Automating these abilities is critical for many applications, such as robotics, autonomous driving, and surveillance. Unfortunately, despite recent advancements, fully automated vision systems for image understanding do not exist. In this work, we present progress restricted to the domain of images of indoor scenes, such as bedrooms and kitchens. These environments typically have the "Manhattan" property that most surfaces are parallel to three principal ones. Further, the 3D geometry of a room and the objects within it can be approximated with simple geometric primitives, such as 3D blocks. Our goal is to reconstruct the 3D geometry of an indoor environment while also understanding its semantic meaning, by identifying the objects in the scene, such as beds and couches. We separately model the 3D geometry, the camera, and an image likelihood, to provide a generative statistical model for image data. Our representation captures the rich structure of an indoor scene by explicitly modeling the contextual relationships among its elements, such as the typical size of objects and their arrangement in the room, and simple physical constraints, such as the requirement that 3D objects do not intersect. This ensures that the predicted image interpretation is globally coherent both geometrically and semantically, which allows us to tackle the ambiguities caused by projecting a 3D scene onto an image, such as occlusions and foreshortening. We fit this model to images using MCMC sampling. Our inference method combines bottom-up evidence from the data and top-down knowledge from the 3D world in order to explore the vast output space efficiently. Comprehensive evaluation confirms our intuition that global inference of the entire scene is more effective than estimating its individual elements independently. Further, our experiments show that our approach is competitive with, and often exceeds, state-of-the-art methods.
148

Bayesian Data Association for Temporal Scene Understanding

Brau Avila, Ernesto January 2013 (has links)
Understanding the content of a video sequence is not a particularly difficult problem for humans. We can easily identify objects, such as people, and track their position and pose within the 3D world. A computer system that could understand the world through videos would be extremely beneficial in applications such as surveillance, robotics, and biology. Despite significant advances in areas like tracking and, more recently, 3D static scene understanding, such a vision system does not yet exist. In this work, I present progress on this problem, restricted to videos of objects that move smoothly and are relatively easy to detect, such as people. Our goal is to identify all the moving objects in the scene and track their physical state (e.g., their 3D position or pose) in the world throughout the video. We develop a Bayesian generative model of a temporal scene, where we separately model data association, the 3D scene and imaging system, and the likelihood function. Under this model, the video data is the result of capturing the scene with the imaging system and noisily detecting video features. This formulation is very general and can be used to model a wide variety of scenarios, including videos of people walking and time-lapse images of pollen tubes growing in vitro. Importantly, we model the scene in world coordinates and units, as opposed to pixels, allowing us to reason about the world in a natural way, e.g., explaining occlusion and perspective distortion. We use Gaussian processes to model motion, and propose that they are a general and effective way to characterize smooth but otherwise arbitrary trajectories. We perform inference using MCMC sampling, where we fit our model of the temporal scene to data extracted from the videos. We address the problem of variable dimensionality by estimating data association and integrating out all scene variables. Our experiments show our approach is competitive, producing results comparable to state-of-the-art methods.
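As a small illustration of the Gaussian-process motion prior, GP regression over time can recover a smooth world-coordinate path from noisy per-frame detections. The kernel, hyperparameters, and simulated track are assumptions; the dissertation embeds this prior inside a full data-association model rather than smoothing a single known track.

```python
# Hedged sketch: GP smoothing of a noisy 2D trajectory.
import numpy as np

def rbf(t1, t2, ell=2.0, sf=5.0):
    d = t1[:, None] - t2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior_mean(t_obs, y_obs, t_query, noise=1.0):
    K = rbf(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    return rbf(t_query, t_obs) @ np.linalg.solve(K, y_obs)

rng = np.random.default_rng(4)
t = np.linspace(0, 20, 80)
true_xy = np.column_stack([5 * np.cos(0.3 * t), 3 * np.sin(0.5 * t)])
detections = true_xy + rng.normal(0, 1.0, true_xy.shape)  # noisy detector

# Smooth each world coordinate independently under the same GP prior.
t_fine = np.linspace(0, 20, 400)
path = np.column_stack([gp_posterior_mean(t, detections[:, k], t_fine)
                        for k in range(2)])
true_fine = np.column_stack([5 * np.cos(0.3 * t_fine), 3 * np.sin(0.5 * t_fine)])
print("RMSE raw detections  :", np.sqrt(np.mean((detections - true_xy)**2)).round(2))
print("RMSE GP-smoothed path:", np.sqrt(np.mean((path - true_fine)**2)).round(2))
```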
149

Latent Conditional Individual-Level Models and Related Topics in Infectious Disease Modeling

Deeth, Lorna E. 15 October 2012 (has links)
Individual-level models are a class of complex statistical models, often fitted within a Bayesian Markov chain Monte Carlo framework, that have been effectively used to model the spread of infectious diseases. The ability of these models to incorporate individual-level covariate information allows them to be highly flexible, and to account for such characteristics as population heterogeneity. However, these models can be subject to inherent uncertainties often found in infectious disease data. As well, their complex nature can lead to a significant computational expense when fitting these models to epidemic data, particularly for large populations. An individual-level model that incorporates a latent grouping structure into the modeling procedure, based on some heterogeneous population characteristics, is investigated. The dependence of this latent conditional individual-level model on a discrete latent grouping variable alleviates the need for explicit, although possibly unreliable, covariate information. A simulation study is used to assess the posterior predictive ability of this model, in comparison to individual-level models that utilize the full covariate information, or that assume population homogeneity. These models are also applied to data from the 2001 UK foot-and-mouth disease epidemic. When comparing complex models fitted within the Bayesian framework, appropriate model selection tools are needed. The use of the deviance information criterion (DIC) as a model comparison tool, particularly for the latent conditional individual-level models, is investigated. A simulation study is used to compare five variants of the DIC, and the ability of each DIC variant to select the true model is determined. Finally, an investigation into methods to reduce the computational burden associated with individual-level models is carried out, based on an individual-level model that also incorporates population heterogeneity through a discrete grouping variable. A simulation study is used to determine the effect of reducing the overall population size by aggregating the data into spatial clusters. Reparameterized individual-level models, accounting for the aggregation effect, are fitted to the aggregated data. The effect of data aggregation on the ability of two reparameterized individual-level models to identify a covariate effect, as well as on the computational expense of the model fitting procedure, is explored.
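A minimal sketch of one standard DIC variant, computed from posterior draws as DIC = Dbar + pD with pD = Dbar - D(theta-bar), where D is the deviance (-2 times the log-likelihood). The toy normal model, flat priors, and sampler are assumptions; the thesis compares five DIC variants in a far more complex individual-level-model setting.

```python
# Hedged sketch: computing DIC from posterior samples on a toy model.
import numpy as np
from scipy import stats

def deviance(theta, y):
    mu, log_sig = theta
    return -2 * np.sum(stats.norm.logpdf(y, mu, np.exp(log_sig)))

rng = np.random.default_rng(5)
y = rng.normal(2.0, 1.5, size=100)

def posterior_draws(y, n=5000):
    """Stand-in posterior: random-walk Metropolis with flat priors."""
    rng = np.random.default_rng(6)
    th = np.array([0.0, 0.0])
    lp = -deviance(th, y) / 2
    out = []
    for _ in range(n):
        prop = th + rng.normal(0, 0.05, 2)
        lp_p = -deviance(prop, y) / 2
        if np.log(rng.uniform()) < lp_p - lp:
            th, lp = prop, lp_p
        out.append(th.copy())
    return np.array(out[n // 2:])   # discard burn-in

draws = posterior_draws(y)
D = np.array([deviance(th, y) for th in draws])
Dbar = D.mean()                      # posterior mean deviance
pD = Dbar - deviance(draws.mean(axis=0), y)   # effective no. of parameters
print(f"Dbar={Dbar:.1f}, pD={pD:.2f}, DIC={Dbar + pD:.1f}")
```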
150

Issues of Computational Efficiency and Model Approximation for Spatial Individual-Level Infectious Disease Models

Dobbs, Angie 06 January 2012 (has links)
Individual-level models (ILMs) are models that can use the spatial-temporal nature of disease data to capture disease dynamics. Parameter estimation is usually done via Markov chain Monte Carlo (MCMC) methods, but correlation between model parameters negatively affects MCMC mixing. Introducing a normalization constant to alleviate this correlation results in MCMC convergence over fewer iterations; however, it increases computation time. It is important that model fitting be done as efficiently as possible. An upper-truncated distance kernel is introduced to speed up computation of the likelihood, but this causes a loss in goodness-of-fit. The normalization constant and upper-truncated distance kernel are evaluated as components in various ILMs via a simulation study. The normalization constant is found not to be worthwhile, as the reduced correlation does not outweigh the increased computation time. The upper-truncated distance kernel reduces computation time but worsens model fit as the truncation distance decreases. / Studies have been funded by OMAFRA & NSERC, with computing equipment provided by CSI.
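A hedged sketch of the truncated-kernel idea for an ILM-style infection probability: with an upper truncation distance, a spatial index only has to visit nearby infectious individuals, while the resulting probabilities change very little. The power-law kernel, parameter values, and population layout are illustrative assumptions.

```python
# Hedged sketch: full vs. upper-truncated distance kernel in an ILM step.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
n = 2000
xy = rng.uniform(0, 100, (n, 2))            # individuals on a 100x100 region
infectious = rng.random(n) < 0.05           # current infectious set
src = xy[infectious]
alpha, beta, d_max = 0.01, 2.0, 10.0

# Full kernel: every individual-infectious pair contributes to the
# infection pressure lambda_i = alpha * sum_j d_ij^(-beta).
d_all = np.sqrt(((xy[:, None, :] - src[None, :, :])**2).sum(-1))   # (n, m)
lam_full = alpha * np.where(d_all > 0, d_all, np.inf)**(-beta)     # skip self
p_full = 1 - np.exp(-lam_full.sum(axis=1))

# Truncated kernel: a spatial index returns only pairs with d <= d_max,
# so the sum touches a small fraction of the pairs.
tree = cKDTree(src)
p_trunc = np.empty(n)
for i, nbrs in enumerate(tree.query_ball_point(xy, r=d_max)):
    d = np.sqrt(((src[nbrs] - xy[i])**2).sum(-1))
    d = d[d > 0]
    p_trunc[i] = 1 - np.exp(-alpha * np.sum(d**(-beta)))

print("max abs change in infection probability:",
      np.abs(p_full - p_trunc).max())
```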
