  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Effects of sample size, ability distribution, and the length of Markov Chain Monte Carlo burn-in chains on the estimation of item and testlet parameters

Orr, Aline Pinto 25 July 2011 (has links)
Item Response Theory (IRT) models are the basis of modern educational measurement. In order to increase testing efficiency, modern tests make ample use of groups of questions associated with a single stimulus (testlets). This violates the IRT assumption of local independence. However, a set of measurement models, testlet response theory (TRT), has been developed to address such dependency issues. This study investigates the effects of varying sample sizes and Markov Chain Monte Carlo burn-in chain lengths on the accuracy of estimation of a TRT model’s item and testlet parameters. The following outcome measures are examined: Descriptive statistics, Pearson product-moment correlations between known and estimated parameters, and indices of measurement effectiveness for final parameter estimates. / text
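The effect of the burn-in length can be illustrated outside the TRT setting. The following sketch (my own illustration, not taken from the dissertation) runs a random-walk Metropolis chain on a toy standard-normal posterior started far from the mode and compares parameter summaries after discarding different numbers of burn-in draws; the target, starting value, and step size are arbitrary choices for the example.

```python
import numpy as np

def metropolis_chain(log_post, x0, n_iter, step=0.5, seed=0):
    """Random-walk Metropolis sampler; returns the full chain including burn-in."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x, lp = x0, log_post(x0)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior for a single parameter (standard normal), started far from the mode.
log_post = lambda x: -0.5 * x ** 2
chain = metropolis_chain(log_post, x0=8.0, n_iter=5000)

for burn_in in (50, 500, 2000):        # vary the number of discarded burn-in draws
    kept = chain[burn_in:]
    print(f"burn-in={burn_in:5d}  mean={kept.mean():+.3f}  sd={kept.std():.3f}")
```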
182

Bayesian Data Association for Temporal Scene Understanding

Brau Avila, Ernesto January 2013 (has links)
Understanding the content of a video sequence is not a particularly difficult problem for humans. We can easily identify objects, such as people, and track their position and pose within the 3D world. A computer system that could understand the world through videos would be extremely beneficial in applications such as surveillance, robotics, and biology. Despite significant advances in areas like tracking and, more recently, 3D static scene understanding, such a vision system does not yet exist. In this work, I present progress on this problem, restricted to videos of objects that move smoothly and are relatively easy to detect, such as people. Our goal is to identify all the moving objects in the scene and track their physical state (e.g., their 3D position or pose) in the world throughout the video. We develop a Bayesian generative model of a temporal scene, where we separately model data association, the 3D scene and imaging system, and the likelihood function. Under this model, the video data is the result of capturing the scene with the imaging system and noisily detecting video features. This formulation is very general and can be used to model a wide variety of scenarios, including videos of people walking and time-lapse images of pollen tubes growing in vitro. Importantly, we model the scene in world coordinates and units, as opposed to pixels, allowing us to reason about the world in a natural way, e.g., explaining occlusion and perspective distortion. We use Gaussian processes to model motion, and propose that they are a general and effective way to characterize smooth, but otherwise arbitrary, trajectories. We perform inference using MCMC sampling, where we fit our model of the temporal scene to data extracted from the videos. We address the problem of variable dimensionality by estimating data association and integrating out all scene variables. Our experiments show our approach is competitive, producing results comparable to state-of-the-art methods.
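As a rough illustration of the kind of smoothness assumption a Gaussian process encodes, the sketch below (not from the dissertation; the kernel, length scale, and time grid are assumptions for the example) draws 2-D trajectories from a GP prior with a squared-exponential covariance.

```python
import numpy as np

def se_kernel(t1, t2, scale=1.0, length=2.0):
    """Squared-exponential covariance between two sets of time points."""
    d = t1[:, None] - t2[None, :]
    return scale ** 2 * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 100)                    # frame times (seconds, assumed)
K = se_kernel(t, t) + 1e-8 * np.eye(len(t))        # small jitter for numerical stability

# Each draw is a smooth 1-D coordinate trajectory; stacking two gives an (x, y) path.
x_path = rng.multivariate_normal(np.zeros(len(t)), K)
y_path = rng.multivariate_normal(np.zeros(len(t)), K)
print(np.c_[x_path, y_path][:5])                   # first few positions of one sampled track
```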
183

Learning 3-D Models of Object Structure from Images

Schlecht, Joseph January 2010 (has links)
Recognizing objects in images is an effortless task for most people. Automating this task with computers, however, presents a difficult challenge attributable to large variations in object appearance, shape, and pose. The problem is further compounded by ambiguity from projecting 3-D objects into a 2-D image. In this thesis we present an approach to resolve these issues by modeling object structure with a collection of connected 3-D geometric primitives and a separate model for the camera. From sets of images we simultaneously learn a generative, statistical model for the object representation and parameters of the imaging system. By learning 3-D structure models we are going beyond recognition towards quantifying object shape and understanding its variation. We explore our approach in the context of microscopic images of biological structure and single view images of man-made objects composed of block-like parts, such as furniture. We express detected features from both domains as statistically generated by an image likelihood conditioned on models for the object structure and imaging system. Our representation of biological structure focuses on Alternaria, a genus of fungus comprising ellipsoid and cylinder-shaped substructures. In the case of man-made furniture objects, we represent structure with spatially contiguous assemblages of blocks arbitrarily constructed according to a small set of design constraints. We learn the models with Bayesian statistical inference over structure and camera parameters per image, and for man-made objects, across categories, such as chairs. We develop a reversible-jump MCMC sampling algorithm to explore topology hypotheses, and a hybrid of Metropolis-Hastings and stochastic dynamics to search within topologies. Our results demonstrate that we can infer both 3-D object and camera parameters simultaneously from images, and that doing so improves understanding of structure in images. We further show how 3-D structure models can be inferred from single view images, and that learned category parameters capture structure variation that is useful for recognition.
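The within- and between-topology moves can be caricatured with a far smaller problem. The sketch below is my own toy example, not the dissertation's sampler: it jumps between a one-parameter and a two-parameter Gaussian-mean model, proposing the extra parameter from its prior so the dimension-changing acceptance ratio reduces to a likelihood ratio, and uses random-walk Metropolis updates within each model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, half = 40, 20
y = np.concatenate([rng.normal(0.0, 1.0, half), rng.normal(1.5, 1.0, half)])  # synthetic data

def loglik(a, b):
    """Gaussian log-likelihood; parameter b exists only in the richer topology."""
    bb = 0.0 if b is None else b
    mean = np.concatenate([np.full(half, a), np.full(half, a + bb)])
    return -0.5 * np.sum((y - mean) ** 2)

log_prior_a = lambda a: -0.5 * a ** 2 / 100.0      # N(0, 10^2) prior on the shared parameter

a, b = 0.0, None                                   # start in the simpler topology (b absent)
in_rich = []
for it in range(20000):
    if rng.uniform() < 0.5:                        # dimension-changing (birth/death) move
        if b is None:
            b_prop = rng.normal()                  # propose the new parameter from its N(0,1) prior
            # prior and proposal for b_prop cancel, leaving a likelihood ratio
            if np.log(rng.uniform()) < loglik(a, b_prop) - loglik(a, None):
                b = b_prop
        elif np.log(rng.uniform()) < loglik(a, None) - loglik(a, b):
            b = None
    else:                                          # within-topology random-walk updates
        a_prop = a + 0.2 * rng.normal()
        if np.log(rng.uniform()) < (loglik(a_prop, b) + log_prior_a(a_prop)) - (loglik(a, b) + log_prior_a(a)):
            a = a_prop
        if b is not None:
            b_prop = b + 0.2 * rng.normal()
            logr = (loglik(a, b_prop) - 0.5 * b_prop ** 2) - (loglik(a, b) - 0.5 * b ** 2)
            if np.log(rng.uniform()) < logr:       # includes the N(0,1) prior on b
                b = b_prop
    in_rich.append(b is not None)

print("posterior probability of the richer topology:", np.mean(in_rich[5000:]))
```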
184

Visualizing Geospatial Uncertainty in Marine Animal Tracks

Mostafi, Maswood Hasan 12 April 2011 (has links)
Electronically collected animal movement data have been analyzed either statistically or visually using generic geographical information systems. Statistical analysis in this field has made progress over the last decade; however, visualizing the movement and behavior remains an open research problem. We have designed and implemented an interactive visualization system, MarineVis, to visualize geospatial uncertainty in the trajectories of marine animals. Using MarineVis, researchers are able to access, analyze, and visualize marine animal data and oceanographic data with a variety of approaches. In this thesis, we discuss the MarineVis design structure, its rendering techniques, and the visualization techniques used by existing software such as IDV, against which we compare and contrast the visualization features of our system. Finally, directions for future work related to MarineVis are proposed that we hope will inspire others to further study the challenging but fascinating research field of marine visualization. / Marine animal movement is a fundamental yet poorly understood process, partly because our understanding of movement is affected by measurement error during observation and by process noise. Differentiating real movement behavior from observation error in data remains difficult and challenging. Methods that acknowledge uncertainty in movement pathways when estimating constantly changing animal movement have long been lacking. With the arrival of state-space models (SSMs), this problem is partially solved: SSMs allow unobservable true states to be estimated from data observed with errors that arise from imprecise observations. These models are fitted using Markov Chain Monte Carlo methods, which generate samples from a distribution by constructing a Markov chain in which the current state depends only on the immediately preceding state. Fitting SSMs to data is challenging and requires large computational effort and expertise in statistics. With the arrival of the WinBUGS software, this formidable task becomes relatively easy. However, when researchers use WinBUGS to visualize the tracks and behaviors, new problems appear. One problem is that when marine animals return to certain places, or when animals' tracks cross each other several times, the tracks become cluttered and users are not able to discern the direction of travel. Another problem with visualizing the confidence intervals generated using SSMs is that images generated by other systems are static in nature and therefore lack interactivity. Information becomes cluttered when too much data appear, and users cannot differentiate tracks, confidence intervals, or the information they would like to visualize. Acknowledging these issues, we have designed and implemented an interactive visualization system, MarineVis, in which these problems are overcome. Using our system, the confidence intervals generated using the SSMs can be visualized more clearly and the direction of the turtle tracks can be understood easily. Our system does not occlude the underlying terrain as much because the glyphs are localized at the sample points rather than being spread out along the entire path. Our system encodes both direction and position rather than just position. Users can interactively limit the view to a subset of the available data points on a path in clustered regions to reduce congestion, and can animate the progression of the animal along its trajectory, a capability absent in existing approaches. All these results are visualized over NASA World Wind maps, which facilitates understanding of the tracks.
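For readers unfamiliar with how state-space models yield the uncertainty bands that MarineVis visualizes, the sketch below shows a deliberately simplified stand-in: a linear-Gaussian (local-level) state-space model fitted with a Kalman filter rather than the MCMC/WinBUGS machinery described above. The noise variances and track length are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
T, q, r = 200, 0.05, 0.5                 # time steps, process and observation variances (assumed)
true = np.cumsum(rng.normal(0.0, np.sqrt(q), T))     # latent 1-D track
obs = true + rng.normal(0.0, np.sqrt(r), T)          # noisy satellite-style fixes

# Kalman filter for the local-level (random-walk) state-space model.
m, P = 0.0, 1.0
means, variances = [], []
for y in obs:
    P += q                               # predict: the animal moves
    K = P / (P + r)                      # Kalman gain
    m += K * (y - m)                     # update with the noisy observation
    P *= 1.0 - K
    means.append(m)
    variances.append(P)

means, sd = np.array(means), np.sqrt(variances)
lower, upper = means - 1.96 * sd, means + 1.96 * sd  # 95% band around the estimated true state
print("coverage of the true track:", np.mean((true >= lower) & (true <= upper)))
```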
185

Bayesian Modeling and Adaptive Monte Carlo with Geophysics Applications

Wang, Jianyu January 2013 (has links)
The first part of the thesis focuses on the development of Bayesian modeling motivated by geophysics applications. In Chapter 2, we model the frequency of pyroclastic flows collected from the Soufriere Hills volcano. Multiple change points within the dataset reveal several limitations of existing methods in the literature. We propose Bayesian hierarchical models (BBH) by introducing an extra level of hierarchy with hyperparameters, adding a penalty term to constrain close consecutive rates, and using a mixture prior distribution to more accurately match certain circumstances in reality. We end the chapter with a description of the prediction procedure, which is the biggest advantage of the BBH in comparison with other existing methods. In Chapter 3, we develop new statistical techniques to model and relate three complex processes and datasets: the process of extrusion of magma into the lava dome, the growth of the dome as measured by its height, and the rockfalls as an indication of the dome's instability. First, we study the dynamic Negative Binomial branching process and use it to model the rockfalls. Moreover, a generalized regression model is proposed to regress daily rockfall numbers on the extrusion rate and dome height. Furthermore, we solve an inverse problem from the regression model and predict the extrusion rate based on rockfalls and dome height.

The other focus of the thesis is adaptive Markov chain Monte Carlo (MCMC) methods. In Chapter 4, we improve upon the Wang-Landau (WL) algorithm. The WL algorithm is an adaptive sampling scheme that modifies the target distribution to enable the chain to visit low-density regions of the state space. However, the approach relies heavily on a partition of the state space that is left to the user to specify. As a result, the implementation and use of the algorithm are time-consuming and less automatic. We propose an automatic, adaptive partitioning scheme which continually refines the initial partition as needed during sampling. We show that this overcomes the limitations of the user-specified input partition, making the algorithm significantly more automatic and user-friendly while also making its performance dramatically more reliable and robust. In Chapter 5, we consider the convergence and autocorrelation aspects of MCMC. We propose an Exploration/Exploitation (XX) approach to constructing adaptive MCMC algorithms, which combines adaptation schemes of distinct types. The exploration piece uses adaptation strategies aimed at exploring new regions of the target distribution and thus improving the rate of convergence to equilibrium. The exploitation piece involves an adaptation component which decreases autocorrelation for sampling among regions already discovered. We demonstrate that the combined XX algorithm significantly outperforms either original algorithm on difficult multimodal sampling problems. / Dissertation
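A bare-bones sketch of the Wang-Landau idea that Chapter 4 builds on is given below, with a fixed, user-specified partition; the adaptive re-partitioning proposed in the thesis is not shown, and the bimodal target, bin edges, and adaptation schedule are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
log_pi = lambda x: np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)  # bimodal target

edges = np.linspace(-8.0, 8.0, 17)       # user-specified partition of the state space
log_theta = np.zeros(len(edges) - 1)     # adaptive log-weights, one per region
log_f = 1.0                              # modification factor, shrunk over time

def region(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(log_theta) - 1))

x, visits = 0.0, np.zeros_like(log_theta)
for it in range(1, 100001):
    prop = x + rng.normal()
    # Metropolis step on the reweighted target pi(x) / theta[region(x)]
    logr = (log_pi(prop) - log_theta[region(prop)]) - (log_pi(x) - log_theta[region(x)])
    if np.log(rng.uniform()) < logr:
        x = prop
    log_theta[region(x)] += log_f        # penalize the currently occupied region
    visits[region(x)] += 1
    if it % 10000 == 0:
        log_f /= 2.0                     # crude schedule: shrink the adaptation over time
print(np.round(visits / visits.sum(), 3))  # occupancy becomes roughly uniform across regions
```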
186

Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification

Olalotiti-Lawal, Feyisayo 16 December 2013 (has links)
Rapid economic evaluations of investment alternatives in the oil and gas industry are typically contingent on fast and credible evaluations of reservoir models to make future forecasts. It is often important to also quantify the inherent risks and uncertainties in these evaluations. Ideally, this requires many full-scale numerical simulations, which are time consuming and impractical, if not impossible, with conventional (finite-difference) simulators in real-life situations. In this research, the aim is to improve the efficiency of these tasks by exploring applications of Fast Marching Methods (FMM) in both conventional and unconventional reservoir characterization problems. In this work, we first applied the FMM for rapidly ranking multiple equi-probable geologic models. We demonstrated the suitability of drainage volume, efficiently calculated using FMM, as a surrogate parameter for field-wide cumulative oil production (FOPT). The probability distribution function (PDF) of the surrogate parameter was point-discretized to obtain 3 representative models for full simulations. Using the results from the simulations, the PDF of the reservoir performance parameter was constructed. We also investigated the applicability of a higher-order-moment-preserving approach, which resulted in better uncertainty quantification than traditional model selection methods. Next, we applied the FMM to a hydraulically fractured tight oil reservoir model calibration problem. Specifically, we applied the FMM geometric pressure approximation as a proxy for rapidly evaluating model proposals in a two-stage Markov Chain Monte Carlo (MCMC) algorithm. We demonstrated that the FMM-based approximation is a suitable proxy for evaluating model proposals, and obtained results showing a significant improvement in efficiency compared to a conventional single-stage MCMC algorithm. Finally, we investigated the possibility of enhancing the computational efficiency of calculating the pressure field for both conventional and unconventional reservoirs using FMM. Good approximations of the steady-state pressure distributions were obtained for homogeneous conventional waterflood systems. In unconventional systems, we also recorded a slight improvement in computational efficiency when using FMM pressure approximations as the initial guess in pressure solvers.
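The two-stage MCMC idea can be sketched with stand-in posteriors. In the toy example below (not the study's actual FMM proxy or reservoir model), a cheap approximate log-posterior screens proposals and only survivors are evaluated with the expensive exact log-posterior; the second-stage correction keeps the exact posterior as the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post_exact(x):
    """Stand-in for an expensive, simulation-based log-posterior."""
    return -0.5 * (x - 2.0) ** 2

def log_post_proxy(x):
    """Stand-in for a cheap, slightly biased approximation (e.g., an FMM-style proxy)."""
    return -0.5 * (x - 2.2) ** 2 / 1.2

x, lp, lq = 0.0, log_post_exact(0.0), log_post_proxy(0.0)
chain, expensive_calls = [], 0
for it in range(20000):
    prop = x + 0.8 * rng.normal()
    lq_prop = log_post_proxy(prop)
    # Stage 1: screen the proposal using only the cheap proxy.
    if np.log(rng.uniform()) < lq_prop - lq:
        lp_prop = log_post_exact(prop)
        expensive_calls += 1
        # Stage 2: correct for the proxy so the exact posterior is preserved.
        if np.log(rng.uniform()) < (lp_prop - lp) - (lq_prop - lq):
            x, lp, lq = prop, lp_prop, lq_prop
    chain.append(x)

print("posterior mean:", np.mean(chain[2000:]), "| expensive evaluations:", expensive_calls)
```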
187

Assessment of Eagle Ford Shale Oil and Gas Resources

Gong, Xinglai 16 December 2013 (has links)
The Eagle Ford play in south Texas is currently one of the hottest plays in the United States. In 2012, the average Eagle Ford rig count (269 rigs) was 15% of the total US rig count. Assessment of the oil and gas resources and their associated uncertainties in the early stages is critical for optimal development. The objectives of my research were to develop a probabilistic methodology that can reliably quantify the reserves and resources uncertainties in unconventional oil and gas plays, and to assess Eagle Ford shale oil and gas reserves, contingent resources, and prospective resources. I first developed a Bayesian methodology to generate probabilistic decline curves using Markov Chain Monte Carlo (MCMC) that can quantify the reserves and resources uncertainties in unconventional oil and gas plays. I then divided the Eagle Ford play from the Sligo Shelf Margin to the San Marcos Arch into 8 different production regions based on fluid type, performance, and geology. I used a combination of the Duong model switching to the Arps model with b = 0.3 at the minimum decline rate to model the linear-flow to boundary-dominated-flow behavior often observed in shale plays. Cumulative production after 20 years predicted from Monte Carlo simulation combined with reservoir simulation was used as prior information in the Bayesian decline-curve methodology. Probabilistic type decline curves for oil and gas were then generated for all production regions. The wells were aggregated probabilistically within each production region and arithmetically between production regions. The total oil reserves and resources range from a P90 of 5.3 to a P10 of 28.7 billion barrels of oil (BBO), with a P50 of 11.7 BBO; the total gas reserves and resources range from a P90 of 53.4 to a P10 of 313.5 trillion cubic feet (TCF), with a P50 of 121.7 TCF. These reserves and resources estimates are much higher than the U.S. Energy Information Administration's 2011 recoverable resource estimates of 3.35 BBO and 21 TCF. The results of this study provide a critical update on the reserves and resources estimates and their associated uncertainties for the Eagle Ford shale formation of South Texas.
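As a simplified illustration of how probabilistic decline curves translate into P90/P50/P10 volumes, the sketch below Monte Carlo samples Arps hyperbolic declines with b = 0.3 and reports percentiles of 20-year cumulative production. The lognormal parameter distributions are placeholder assumptions; the study itself draws these from MCMC posteriors and includes the Duong-to-Arps switch, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)
n, b, t = 10000, 0.3, 20 * 365.25          # Monte Carlo draws, Arps b-factor, horizon in days

# Placeholder lognormal uncertainty on initial rate (STB/d) and initial decline (1/d).
qi = rng.lognormal(mean=np.log(500.0), sigma=0.4, size=n)
Di = rng.lognormal(mean=np.log(2e-3), sigma=0.5, size=n)

# Arps hyperbolic cumulative production to time t for each draw.
cum = qi / (Di * (1 - b)) * (1.0 - (1.0 + b * Di * t) ** ((b - 1.0) / b))

# Petroleum convention: P90 is the value exceeded with 90% probability,
# i.e. the 10th percentile of the simulated distribution.
p90, p50, p10 = np.percentile(cum, [10, 50, 90])
print(f"P90={p90:,.0f}  P50={p50:,.0f}  P10={p10:,.0f} STB per well")
```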
188

Bayesian Models for Multilingual Word Alignment

Östling, Robert January 2015 (has links)
In this thesis I explore Bayesian models for word alignment, how they can be improved through joint annotation transfer, and how they can be extended to parallel texts in more than two languages. In addition to these general methodological developments, I apply the algorithms to problems from sign language research and linguistic typology. In the first part of the thesis, I show how Bayesian alignment models estimated with Gibbs sampling are more accurate than previous methods for a range of different languages, particularly for languages with few digital resources available—which is unfortunately the state of the vast majority of languages today. Furthermore, I explore how different variations to the models and learning algorithms affect alignment accuracy. Then, I show how part-of-speech annotation transfer can be performed jointly with word alignment to improve word alignment accuracy. I apply these models to help annotate the Swedish Sign Language Corpus (SSLC) with part-of-speech tags, and to investigate patterns of polysemy across the languages of the world. Finally, I present a model for multilingual word alignment which learns an intermediate representation of the text. This model is then used with a massively parallel corpus containing translations of the New Testament, to explore word order features in 1001 languages.
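A minimal sketch of Gibbs sampling for Bayesian word alignment is shown below. It is a toy, IBM-Model-1-style collapsed Gibbs sampler with a symmetric Dirichlet prior over translation tables, not the models developed in the thesis; the tiny corpus and hyperparameter are assumptions for the example.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(7)
alpha = 0.1   # symmetric Dirichlet prior over target-side translations of each source word

# Tiny toy bitext; "NULL" lets target words remain unaligned.
corpus = [(["NULL", "das", "haus"], ["the", "house"]),
          (["NULL", "das", "buch"], ["the", "book"]),
          (["NULL", "ein", "buch"], ["a", "book"])]
V = len({w for _, tgt in corpus for w in tgt})

# Random initial alignments and the corresponding co-occurrence counts.
align = [[int(rng.integers(len(src))) for _ in tgt] for src, tgt in corpus]
pair_counts, src_counts = defaultdict(float), defaultdict(float)
for (src, tgt), a in zip(corpus, align):
    for j, aj in enumerate(a):
        pair_counts[(src[aj], tgt[j])] += 1
        src_counts[src[aj]] += 1

for sweep in range(200):                  # collapsed Gibbs sweeps over alignment links
    for (src, tgt), a in zip(corpus, align):
        for j, f in enumerate(tgt):
            e_old = src[a[j]]
            pair_counts[(e_old, f)] -= 1  # remove the current link before resampling it
            src_counts[e_old] -= 1
            probs = np.array([(pair_counts[(e, f)] + alpha) / (src_counts[e] + alpha * V)
                              for e in src])
            a[j] = int(rng.choice(len(src), p=probs / probs.sum()))
            pair_counts[(src[a[j]], f)] += 1
            src_counts[src[a[j]]] += 1

print(align)                              # alignment link for each target word, per sentence
```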
189

Bayesian latent class metric conjoint analysis. A case study from the Austrian mineral water market.

Otter, Thomas, Tüchler, Regina, Frühwirth-Schnatter, Sylvia January 2002 (has links) (PDF)
This paper presents a fully Bayesian analysis of the latent class model using a new approach to MCMC estimation in the context of mixture models. The approach starts with estimating unidentified models for various numbers of classes. Exact Bayes factors are computed by the bridge sampling estimator to compare different models and select the number of classes. Estimation of the unidentified model is carried out using the random permutation sampler. From the unidentified model, estimates for model parameters that are not class specific are derived. Then, exploration of the MCMC output from the unconstrained model yields suitable identifiability constraints. Finally, the constrained version of the permutation sampler is used to estimate group-specific parameters. Conjoint data from the Austrian mineral water market serve to illustrate the method. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
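The random permutation sampler can be sketched as an ordinary Gibbs sweep for a mixture model followed by a uniformly random relabeling of the classes, so the chain targets the unconstrained, label-symmetric posterior whose output is then inspected for identifiability constraints. The toy two-component Gaussian mixture below is my own illustration with assumed priors, not the paper's conjoint model.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic data from two classes; component variances fixed at 1 for brevity.
y = np.concatenate([rng.normal(-2.0, 1.0, 60), rng.normal(3.0, 1.0, 40)])
K = 2

mu = rng.normal(0.0, 1.0, K)              # class means
w = np.full(K, 1.0 / K)                   # class weights
draws = []
for it in range(3000):
    # 1. Sample class indicators given the current parameters.
    logp = np.log(w) - 0.5 * (y[:, None] - mu[None, :]) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])

    # 2. Conjugate updates: Dirichlet(1,...,1) prior on w, N(0, 10^2) prior on each mean.
    counts = np.bincount(z, minlength=K)
    w = rng.dirichlet(1 + counts)
    for k in range(K):
        prec = counts[k] + 0.01
        mu[k] = rng.normal(y[z == k].sum() / prec, np.sqrt(1.0 / prec))

    # 3. Random permutation step: relabel the classes uniformly at random so the
    #    chain targets the unconstrained, label-symmetric posterior.
    perm = rng.permutation(K)
    mu, w = mu[perm], w[perm]
    draws.append(mu.copy())

draws = np.array(draws[500:])
# Each marginal is now bimodal; inspecting it suggests a constraint such as mu_1 < mu_2.
print(np.percentile(draws[:, 0], [5, 50, 95]))
```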
190

Monitoring and Improving Markov Chain Monte Carlo Convergence by Partitioning

VanDerwerken, Douglas January 2015 (has links)
Since Bayes' Theorem was first published in 1763, many have argued for the Bayesian paradigm on purely philosophical grounds. For much of this time, however, practical implementation of Bayesian methods was limited to a relatively small class of "conjugate" or otherwise computationally tractable problems. With the development of Markov chain Monte Carlo (MCMC) and improvements in computers over the last few decades, the number of problems amenable to Bayesian analysis has increased dramatically. The ensuing spread of Bayesian modeling has led to new computational challenges as models become more complex and higher-dimensional, and both parameter sets and data sets become orders of magnitude larger. This dissertation introduces methodological improvements to deal with these challenges. These include methods for enhanced convergence assessment, for parallelization of MCMC, for estimation of the convergence rate, and for estimation of normalizing constants. A recurring theme across these methods is the utilization of one or more chain-dependent partitions of the state space. / Dissertation
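As a caricature of convergence monitoring with a partition of the state space, the sketch below runs several Metropolis chains on a bimodal target, bins their samples over a common partition, and compares the per-chain occupancy distributions; large total-variation gaps flag chains stuck in different regions. The target, partition, and chain settings are assumptions for the example, not the dissertation's methods.

```python
import numpy as np

rng = np.random.default_rng(9)

def rw_chain(start, n, step=0.25):
    """Random-walk Metropolis chain on a well-separated bimodal target."""
    log_pi = lambda x: np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)
    x, out = start, np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
            x = prop
        out[i] = x
    return out

chains = [rw_chain(s, 5000) for s in (-3.0, 3.0, 0.0, 1.0)]

# Partition the state space into bins and compare occupancy across chains.
edges = np.linspace(-6.0, 6.0, 13)
occ = np.array([np.histogram(c, bins=edges)[0] / len(c) for c in chains])
avg = occ.mean(axis=0)
tv_gap = 0.5 * np.abs(occ - avg).sum(axis=1)    # total-variation distance to the pooled occupancy
print(tv_gap)    # large values flag chains that have not mixed across the partition
```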
