About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
371

Bayesian inference in geodesy /

Bossler, John David January 1972 (has links)
No description available.
372

Bayes allocation and sequential estimation in stratified populations /

Wright, Tommy January 1977 (has links)
No description available.
373

Bayesian statistics in auditing : a comparison of probability elicitation techniques and sample size decisions /

Crosby, Michael A. January 1978 (has links)
No description available.
374

A Comparative Analysis of Bayesian Nonparametric Variational Inference Algorithms for Speech Recognition

Steinberg, John January 2013 (has links)
Nonparametric Bayesian models have become increasingly popular in speech recognition tasks such as language and acoustic modeling due to their ability to discover underlying structure in an iterative manner. These methods do not require a priori assumptions about the structure of the data, such as the number of mixture components, and can learn this structure directly. Dirichlet process mixtures (DPMs) are a widely used nonparametric Bayesian method that can serve as a prior for determining the optimal number of mixture components and their respective weights in a Gaussian mixture model (GMM). Because DPMs potentially require an infinite number of parameters, inference algorithms are needed to make posterior calculations tractable. The focus of this work is an evaluation of three such Bayesian variational inference algorithms, which have only recently become computationally viable: Accelerated Variational Dirichlet Process Mixtures (AVDPM), Collapsed Variational Stick Breaking (CVSB), and Collapsed Dirichlet Priors (CDP). To eliminate other effects on performance, such as language models, a phoneme classification task is chosen to more clearly assess the viability of these algorithms for acoustic modeling. Evaluations were conducted on the CALLHOME English and Mandarin corpora, two languages that, from a human perspective, are phonologically very different. It is shown in this work that these inference algorithms yield error rates comparable to a baseline GMM but with up to a factor of 20 fewer mixture components. AVDPM is shown to be the most attractive choice because it delivers the most compact models and is computationally efficient, enabling its application to big data problems. / Electrical and Computer Engineering
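As a rough illustration of the Dirichlet process prior underlying DPMs, the stick-breaking construction below samples mixture weights using only stdlib Python; the truncation level, concentration parameter, and seed are illustrative choices, not values from the thesis:

```python
import random

def stick_breaking(alpha, truncation=50, seed=42):
    """Sample mixture weights from a truncated Dirichlet process prior.

    Each 'stick' takes a Beta(1, alpha) fraction of the remaining weight,
    so mass concentrates on a small number of components even though the
    process is, in principle, infinite.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(truncation):
        b = rng.betavariate(1.0, alpha)
        weights.append(b * remaining)
        remaining *= 1.0 - b
    return weights

w = stick_breaking(alpha=1.0)
effective = sum(1 for x in w if x > 0.01)  # components with non-negligible weight
```

This concentration of mass on a few components is what lets DPM priors select a compact number of mixtures, echoing the factor-of-20 reduction reported above.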
375

Inference of Constitutive Relations and Uncertainty Quantification in Electrochemistry

Krishnaswamy Sethurajan, Athinthra 04 1900 (has links)
This study has two parts. In the first part we develop a computational approach to the solution of an inverse modelling problem concerning the material properties of electrolytes used in Lithium-ion batteries. The dependence of the diffusion coefficient and the transference number on the concentration of Lithium ions is reconstructed based on concentration data obtained from an in-situ NMR imaging experiment. This experiment is modelled by a system of 1D time-dependent Partial Differential Equations (PDEs) describing the evolution of the concentration of Lithium ions with prescribed initial concentration and fluxes at the boundary. The material properties that appear in this model are reconstructed by solving a variational optimization problem in which the least-squares error between the experimental and simulated concentration values is minimized. The uncertainty of the reconstruction is characterized by treating the material properties as random variables and estimating their probability distributions using a novel combination of a Monte Carlo approach and Bayesian statistics. In the second part of this study, we carefully analyze a number of secondary effects, such as ion pairing and dendrite growth, that may influence the estimation of the material properties, and develop mathematical models to include these effects. We then use reconstructions of material properties based on inverse modelling, along with their uncertainty estimates, as a framework to validate or invalidate the models. The significance of certain secondary effects is assessed based on the influence they have on the reconstructed material properties. / Thesis / Doctor of Philosophy (PhD)
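The least-squares reconstruction described above can be sketched in miniature: a toy forward model (a simple exponential decay standing in for the thesis's 1D PDE system) and a coarse grid search over the parameter minimizing the misfit to synthetic data. All names and values here are hypothetical:

```python
import math

def simulate(d, times):
    # toy forward model: concentration decays as exp(-d * t);
    # a stand-in for the actual PDE system, not the thesis's model
    return [math.exp(-d * t) for t in times]

times = [0.1 * i for i in range(20)]
observed = simulate(0.7, times)  # synthetic "experiment" with true d = 0.7

def misfit(d):
    # least-squares error between simulated and observed concentrations
    return sum((s - o) ** 2 for s, o in zip(simulate(d, times), observed))

# coarse grid search standing in for the variational optimization
best_err, best_d = min((misfit(k / 100), k / 100) for k in range(1, 201))
```

In the Bayesian second stage described in the abstract, one would place a distribution over `d` rather than report the single minimizer.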
376

Active Sonar Tracking Under Realistic Conditions

Liu, Ben January 2019 (has links)
This thesis focuses on the problem of underwater target tracking under realistic conditions using active sonar. It addresses the following specific problems: 1) underwater detection in three-dimensional (3D) space using multipath detections and an uncertain sound speed profile in heavy clutter; 2) tracking a group of divers whose motions are dependent on each other, using sonar detections corrupted by unknown structured background clutter; 3) extended target tracking (ETT) with a high-resolution sonar in the presence of multipath detections and measurement origin uncertainty. Unrealistic assumptions about the environmental conditions may degrade the performance of underwater tracking algorithms. Hence, underwater target tracking under realistic conditions is addressed by integrating the environment-induced uncertainties or constraints into the trackers. First, an iterated Bayesian framework is formulated using the ray-tracing model and an extension of the Maximum Likelihood Probabilistic Data Association (ML-PDA) algorithm to make use of multipath information. With the ray-tracing model, the algorithm can handle a more realistic sound speed profile (SSP) instead of the commonly assumed constant-velocity or isogradient SSP. Also, by using the iterated framework, we can simultaneously estimate the SSP and the target state in uncertain multipath environments. Second, a new diver dynamic motion (DDM) model is integrated into the Probability Hypothesis Density (PHD) filter to track divers with dependent motion. The algorithm is implemented with Gaussian Mixtures (GM) to ensure low computational complexity. The DDM model includes not only inter-target interactions but also environmental influences (e.g., water flow). Furthermore, a log-Gaussian Cox process (LGCP) model is seamlessly integrated into the proposed filter to distinguish target-originated measurements from false alarms.
The final topic of interest is the ETT problem with multipath detections and clutter, which is practically relevant but barely addressed in the literature. An improved filter, namely MP-ET-PDA, combining the classical probabilistic data association (PDA) filter with random matrices (RM), is proposed. The MP-ET-PDA filter provides optimal estimates by considering all possible association events. To deal with the high computational load resulting from the data association, a Variational Bayesian (VB) clustering-aided MP-ET-PDA is proposed to provide near real-time processing capability. The traditional Cramér-Rao Lower Bound (CRLB), which is the inverse of the Fisher information matrix (FIM), quantifies the best achievable accuracy of the estimates. For the estimation problems considered, the corresponding theoretical bounds are derived for performance evaluation under realistic underwater conditions. / Thesis / Doctor of Philosophy (PhD)
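The CRLB mentioned above can be made concrete with the textbook scalar case of estimating a constant from n noisy measurements; this is a standard identity offered only as an illustration, not a bound derived in the thesis:

```python
def crlb_constant_in_noise(n, sigma2):
    """CRLB for estimating a constant x from z_k = x + w_k, w_k ~ N(0, sigma2).

    The Fisher information is n / sigma2, and the CRLB is its inverse,
    so no unbiased estimator can achieve variance below sigma2 / n.
    """
    fim = n / sigma2  # Fisher information (a scalar FIM in this case)
    return 1.0 / fim

bound = crlb_constant_in_noise(n=100, sigma2=4.0)
```

The bounds derived in the thesis generalize this idea to vector states under multipath and clutter, where the FIM is a matrix and its inverse is taken.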
377

Model Uncertainty & Model Averaging Techniques

Amini Moghadam, Shahram 24 August 2012 (has links)
The primary aim of this research is to shed more light on the issue of model uncertainty in applied econometrics in general and in cross-country growth as well as happiness and well-being regressions in particular. Model uncertainty consists of three main types: theory uncertainty, focusing on which principal determinants of economic growth or happiness should be included in a model; heterogeneity uncertainty, relating to whether or not the parameters that describe growth or happiness are identical across countries; and functional form uncertainty, relating to which growth and well-being regressors enter the model linearly and which ones enter nonlinearly. Model averaging methods, including Bayesian model averaging and Frequentist model averaging, are the main statistical tools that incorporate theory uncertainty into the estimation process. To address functional form uncertainty, a variety of techniques have been proposed in the literature. One suggestion, for example, involves adding regressors that are nonlinear functions of the initial set of theory-based regressors, or adding regressors whose values are zero below some threshold and non-zero above that threshold. In recent years, however, there has been rising interest in using a nonparametric framework to address nonlinearities in growth and happiness regressions. The goal of this research is twofold. First, while Bayesian approaches are the dominant methods used in economic empirics to average over the model space, I take a fresh look at Frequentist model averaging techniques and propose statistical routines that computationally ease the implementation of these methods. I provide empirical examples showing that Frequentist estimators can compete with their Bayesian peers. The second objective is to use recently developed nonparametric techniques to overcome the issue of functional form uncertainty while analyzing the variance of the distribution of per capita income. The nonparametric paradigm allows for addressing nonlinearities in growth and well-being regressions by relaxing both the functional form assumptions and traditional assumptions on the structure of error terms. / Ph. D.
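A minimal sketch of model averaging over a model space, using smoothed-AIC weights (one common Frequentist choice in this literature; the AIC values below are made up):

```python
import math

def akaike_weights(aics):
    """Smoothed-AIC model-averaging weights: w_m proportional to exp(-AIC_m / 2)."""
    best = min(aics)  # subtract the best score for numerical stability
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical AIC scores for three candidate growth regressions
w = akaike_weights([100.0, 102.0, 110.0])
# a model-averaged prediction is then sum over m of w[m] * prediction_m
```

Bayesian model averaging has the same combining structure, but with posterior model probabilities in place of the information-criterion weights.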
378

Gaussian Processes for Power System Monitoring, Optimization, and Planning

Jalali, Mana 26 July 2022 (has links)
The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. The uncertain and volatile nature of the integrated components necessitates using swift and probabilistic solutions. Gaussian process regression is a machine learning paradigm that provides closed-form predictions with quantified uncertainties. The key property of Gaussian processes is the natural ability to integrate the sensitivity of the labels with respect to features, yielding improved accuracy. This dissertation tailors Gaussian process regression for three applications in power systems. First, a physics-informed approach is introduced to infer the grid dynamics using synchrophasor data with minimal network information. The suggested method is useful for a wide range of applications, including prediction, extrapolation, and anomaly detection. Further, the proposed framework accommodates heterogeneous noisy measurements with missing entries. Second, a learn-to-optimize scheme is presented using Gaussian process regression that predicts the optimal power flow minimizers given grid conditions. The main contribution is leveraging sensitivities to expedite learning and achieve data efficiency without compromising computational efficiency. Third, Bayesian optimization is applied to solve a bi-level minimization used for strategic investment in electricity markets. This method relies on modeling the cost of the outer problem as a Gaussian process and is applicable to non-convex and hard-to-evaluate objective functions. The designed algorithm shows significant improvement in speed while attaining a lower cost than existing methods. / Doctor of Philosophy / The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. 
The uncertain and volatile nature of the integrated components necessitates swift and probabilistic solutions. This dissertation focuses on three practically important problems stemming from power system modernization. First, a novel approach is proposed that improves power system monitoring, the first and necessary step for stable operation of the network. The suggested method applies to a wide range of applications and can accommodate heterogeneous, noisy measurements with missing entries. The second problem focuses on predicting the minimizers of an optimization task, with a computationally efficient framework put forth to expedite the process. The third part of this dissertation identifies investment portfolios for electricity markets that yield maximum revenue and minimum cost.
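The closed-form predictions with quantified uncertainty that make Gaussian process regression attractive can be sketched with an RBF kernel and a naive linear solver; the kernel hyperparameters and training data below are illustrative, not from the dissertation:

```python
import math

def rbf(a, b, ell=1.0):
    # squared-exponential kernel; the length scale ell is an illustrative choice
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    # naive Gauss-Jordan elimination with partial pivoting; fine for tiny systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, xstar, noise=1e-6):
    """Closed-form GP posterior mean and variance at a test input xstar."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    ks = [rbf(xstar, xi) for xi in xs]
    alpha = solve(K, ys)              # K^{-1} y
    mean = sum(k * a for k, a in zip(ks, alpha))
    v = solve(K, ks)                  # K^{-1} k*
    var = rbf(xstar, xstar) - sum(k * vi for k, vi in zip(ks, v))
    return mean, var

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]  # made-up training data
m, v = gp_predict(xs, ys, 1.0)             # prediction at a training input
```

The physics-informed and sensitivity-based variants described in the abstract build extra structure into the kernel, but the posterior mean and variance retain this same closed form.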
379

Ecosystem Models in a Bayesian State Space Framework

Smith Jr, John William 17 June 2022 (has links)
Bayesian approaches are increasingly being used to embed mechanistic process models into statistical state space frameworks for environmental prediction and forecasting applications. In this study, I focus on Bayesian State Space Models (SSMs) for modeling the temporal dynamics of carbon in terrestrial ecosystems. In Chapter 1, I provide an introduction to ecological forecasting, state space models, and the challenges of using state space models for ecosystems. In Chapter 2, we provide a brief background on state space models and common methods of parameter estimation. In Chapter 3, we simulate data from an example model (DALECev) using driver data from the Talladega National Ecological Observatory Network (NEON) site and perform a simulation study to investigate its performance under varying frequencies of observation data. We show that as observation frequency decreases, the effective sample size of our precision estimates becomes too small to make reliable inference. We introduce a method of tuning the time resolution of the latent process so that we can still use high-frequency flux data, and show that this helps to increase the sampling efficiency of the precision parameters. Finally, we show that data cloning is a suitable method for assessing the identifiability of parameters in ecosystem models. In Chapter 4, we introduce a method for embedding positive process models into lognormal SSMs. Our approach, based on moment matching, allows practitioners to embed process models with arbitrary variance structures into the lognormally distributed stochastic process and observation components of a state space model. We compare and contrast the interpretations of our lognormal models with two existing approaches, the Gompertz and Moran-Ricker SSMs.
We use our method to create four state space models based on the Gompertz and Moran-Ricker process models: two with a density-dependent variance structure for the process and observations, and two with a constant variance structure for the process and observations. We design and conduct a simulation study to compare the forecast performance of our four models to their counterparts under model mis-specification. We find that when the observation precision is estimated, the Gompertz model and its density-dependent moment-matching counterpart have the best forecasting performance under model mis-specification as measured by the average Ignorance score (IGN) and Continuous Ranked Probability Score (CRPS), even performing better than the true generating model across thirty different synthetic datasets. When observation precisions were fixed, all models except the Gompertz displayed a significant improvement in forecasting performance for IGN, CRPS, or both. Our method was then tested on data from the NOAA Dengue Forecasting Challenge, where we found that our novel constant-variance lognormal models had the best performance measured by CRPS, and also had the best performance for both CRPS and IGN at one- and two-week forecast horizons. This shows the importance of having a flexible method to embed sensible dynamics, as constant-variance lognormal SSMs are not frequently used but perform better than the density-dependent models here. In Chapter 5, we apply our lognormal moment-matching method to embed the DALEC2 ecosystem model into the process component of a state space model using NEON data from the University of Notre Dame Environmental Research Center (UNDE). Two different fitting methods are considered for this difficult problem: the updated Iterated Filtering algorithm (IF2) and the Particle Marginal Metropolis-Hastings (PMMH) algorithm. We find that IF2 is a more efficient algorithm than PMMH for our problem.
Our IF2 global search finds candidate parameter values in thirty hours, while PMMH takes 82 hours and accepts only 0.12% of proposed samples. The parameter values obtained from our IF2 global search show good potential for out-of-sample prediction of Leaf Area Index and Net Ecosystem Exchange, although both leave room for improvement in future work. Overall, the work done here helps to inform the application of state space models to ecological forecasting applications where data are not available for all stocks and transfers at the operational timestep of the ecosystem model, where large numbers of process parameters and long time series pose computational challenges, and where process uncertainty estimation is desired. / Doctor of Philosophy / With ecosystem carbon uptake expected to play a large role in climate change projections, it is important that we make our forecasts as informed as possible and account for as many sources of variation as we can. In this dissertation, we examine a statistical modeling framework called the State Space Model (SSM) and apply it to models of terrestrial ecosystem carbon. The SSM helps to capture numerous sources of variability that contribute to the overall predictability of a physical process. We discuss challenges of using this framework for ecosystem models and provide solutions to a number of problems that may arise when using SSMs. We develop methodology for ensuring that these models respect the well-defined upper and lower bounds of the physical processes we are interested in. We use both real and synthetic data to test that our methods perform as desired, and provide key insights about their performance.
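A common form of lognormal moment matching (recovering lognormal parameters from a given mean and variance) can be sketched as follows; this is the standard identity, offered only to illustrate the general idea, not the thesis's embedding of arbitrary variance structures:

```python
import math

def lognormal_moment_match(mean, var):
    """Return (mu, sigma2) of the lognormal with the given mean and variance.

    Uses the standard identities: sigma2 = ln(1 + var / mean^2),
    mu = ln(mean) - sigma2 / 2.
    """
    sigma2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, sigma2

# match a hypothetical process-model prediction: mean 2.0, variance 0.5
mu, s2 = lognormal_moment_match(2.0, 0.5)
recovered_mean = math.exp(mu + s2 / 2.0)
recovered_var = (math.exp(s2) - 1.0) * math.exp(2.0 * mu + s2)
```

Matching this way at each timestep keeps the latent state strictly positive, which is the property motivating the lognormal SSM construction described above.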
380

Methods of Model Uncertainty: Bayesian Spatial Predictive Synthesis

Cabel, Danielle 05 1900 (has links)
This dissertation develops a new method of modeling uncertainty with spatial data, called Bayesian spatial predictive synthesis (BSPS), and compares its predictive accuracy to established methods. Spatial data are often non-linear, complex, and difficult to capture with a single model. Existing methods such as model selection or simple model ensembling fail to consider the critical problem of spatially varying model uncertainty: different models perform better or worse in different regions. BSPS captures this model uncertainty by specifying a latent factor coefficient model that varies spatially as a synthesis function. This allows the model coefficients to vary across a region to achieve flexible spatial model ensembling. The method is derived from the theoretically best approximation of the data generating process (DGP), where the predictions are exact minimax. Two Markov chain Monte Carlo (MCMC) based algorithms are implemented in the BSPS framework for full uncertainty quantification, along with a variational Bayes strategy for faster point inference. The method is also extended to general responses. The examples in this dissertation include multiple simulation studies and two real-world data applications. Through these examples, the performance and predictive power of BSPS are demonstrated against various standard spatial models, ensemble methods, and machine learning methods. BSPS maintains predictive accuracy while retaining interpretability of the prediction mechanisms. / Statistics
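The idea of spatially varying ensemble weights can be caricatured with a toy synthesis function (a softmax of hand-picked linear scores over one spatial coordinate); in BSPS the analogous coefficients are latent and inferred via MCMC, not fixed like this:

```python
import math

def spatial_weights(x):
    """Toy location-dependent ensemble weights via a softmax of linear scores.

    The scores are hand-picked so model 1 dominates for small x and model 2
    for large x; purely an illustration of spatially varying weighting.
    """
    s1, s2 = 1.0 - x, x
    e1, e2 = math.exp(s1), math.exp(s2)
    return e1 / (e1 + e2), e2 / (e1 + e2)

def combine(x, pred1, pred2):
    # location-dependent convex combination of two models' predictions
    w1, w2 = spatial_weights(x)
    return w1 * pred1 + w2 * pred2
```

The point of the sketch is that the combination rule itself changes with location, which is what simple (globally weighted) model ensembling cannot express.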
