251

Reconstructing Historical Earthquake-Induced Tsunamis: Case Study of 1820 Event Near South Sulawesi, Indonesia

Paskett, Taylor Jole 13 July 2022 (has links) (PDF)
We build on the method introduced by Ringer et al., applying it to an 1820 event near South Sulawesi, Indonesia. We employ additional statistical models to aid our Metropolis-Hastings sampler, including a Gaussian process that informs the prior. We apply the method to multiple candidate fault zones to determine which fault is the most likely source of the earthquake and tsunami. After collecting nearly 80,000 samples, we find that of the two most likely fault zones, the Walanae fault zone matches the anecdotal accounts much better than the Flores fault zone. However, to reproduce the anecdotal data, both samplers tend toward earthquakes more powerful than the faults in question may support. This suggests that further research is warranted and may indicate that some other type of event took place, such as a multiple-fault rupture or a landslide-generated tsunami.
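The random-walk Metropolis-Hastings mechanism at the core of such a sampler can be sketched as below. The prior, forward model, and observations here are toy stand-ins, not the thesis's actual tsunami simulation or Gaussian-process-informed prior; every name and number is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(theta):
    # Stand-in Gaussian prior on (magnitude, depth); the thesis instead
    # informs the prior with a Gaussian process.
    return -0.5 * np.sum((theta - np.array([8.0, 20.0]))**2 / np.array([0.5, 25.0]))

def log_likelihood(theta):
    # Stand-in likelihood comparing a toy forward model's wave heights to
    # "observed" heights; the real method runs a tsunami simulation here.
    predicted = np.array([theta[0] - 6.0, theta[0] - 6.5])
    observed = np.array([2.1, 1.4])
    return -0.5 * np.sum((predicted - observed)**2 / 0.1)

def metropolis_hastings(n_samples, step=np.array([0.1, 1.0])):
    theta = np.array([8.0, 20.0])
    log_post = log_prior(theta) + log_likelihood(theta)
    out = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(2)  # random-walk proposal
        log_post_new = log_prior(proposal) + log_likelihood(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < log_post_new - log_post:
            theta, log_post = proposal, log_post_new
        out.append(theta.copy())
    return np.array(out)

samples = metropolis_hastings(5000)
```

In the actual study, each likelihood evaluation requires a full tsunami simulation, which is why nearly 80,000 samples represent a substantial computational effort.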
252

Measuring Skill Importance in Women's Soccer and Volleyball

Allan, Michelle L. 11 March 2009 (has links) (PDF)
The purpose of this study is to demonstrate how to measure skill importance in two sports: soccer and volleyball. A Division I women's soccer team filmed each home game during a competitive season. Every defensive, dribbling, first-touch, and passing skill was rated and recorded for each team, and it was noted whether each sequence of plays led to a successful shot. A hierarchical Bayesian logistic regression model is implemented to determine how the performance of each skill affects the probability of a successful shot. A Division I women's volleyball team rated each skill (serve, pass, set, etc.) and recorded rally outcomes during home games in a competitive season. Skills were rated only when the ball was on the home team's side of the net. Events followed one of three patterns: serve-outcome, pass-set-attack-outcome, or dig-set-attack-outcome. We analyze the volleyball data using two techniques: Markov chains and Bayesian logistic regression. The sequences of events are assumed to be first-order Markov chains, meaning the quality of the current skill depends only on the quality of the previous skill. The count matrix is assumed to follow a multinomial distribution, so a Dirichlet prior is used to estimate each row of the count matrix. Bayesian simulation is used to produce unconditional posterior probabilities (e.g., the probability that a perfect serve results in a point). The volleyball logistic regression model uses a Bayesian approach to determine how the performance of each skill affects the probability of a successful outcome. The posterior distributions produced by each model are used to calculate importance scores. The soccer importance scores revealed that passing, first-touch, and dribbling skills are the most important for the primary team. The Markov chain model for the volleyball data indicates that setting 3–5 feet off the net increases the probability of a successful outcome. The logistic regression model for the volleyball data reveals that serves have a high importance score because of their steep slope. Importance scores can be used to assist coaches in allocating practice time, developing new strategies, and analyzing each player's skill performance.
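The Dirichlet-multinomial step described above has a simple conjugate form: each row of transition counts gets a Dirichlet posterior that can be simulated directly. The sketch below uses hypothetical counts, not the study's actual volleyball data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical transition counts between skill-quality states
# (rows: rating of the previous skill; columns: rating of the next skill).
counts = np.array([
    [30,  5,  2],   # e.g., perfect pass -> {perfect set, ok set, error}
    [10, 20,  8],
    [ 2,  6, 15],
])

alpha = np.ones(3)  # uniform Dirichlet prior on each row

# Conjugacy: the posterior for each row is Dirichlet(alpha + row counts).
# Draw simulated transition matrices from the posterior.
draws = np.stack([
    np.stack([rng.dirichlet(alpha + row) for row in counts])
    for _ in range(2000)
])

posterior_mean = draws.mean(axis=0)  # each row still sums to 1
```

Chaining such simulated transition matrices through the serve-outcome or pass-set-attack-outcome sequences yields the unconditional posterior probabilities (e.g., that a perfect serve results in a point) from which importance scores are computed.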
253

A Topics Analysis Model for Health Insurance Claims

Webb, Jared Anthony 18 October 2013 (has links) (PDF)
Mathematical probability has a rich theory and powerful applications. Of particular note is the Markov chain Monte Carlo (MCMC) method for sampling from high-dimensional distributions that may not admit a naive analysis. We develop the theory of the MCMC method from first principles and prove its validity. We also define a Bayesian hierarchical model for generating data; by understanding how data are generated, we may infer hidden structure in these models. We use a specific MCMC method, the Gibbs sampler, to discover topic distributions in a hierarchical Bayesian model called Topics Over Time. We propose an innovative use of this model to discover disease and treatment topics in a corpus of health insurance claims data. By representing individuals as mixtures of topics, we are able to consider their future costs on an individual level rather than as part of a large collective.
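A Gibbs sampler works by alternately drawing each variable from its full conditional distribution given the others. The bivariate-normal example below illustrates that mechanism in its simplest form; it is a generic textbook illustration, not the Topics Over Time sampler itself.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.8  # correlation of the standard bivariate normal target

# Gibbs sampling: for this target, x | y ~ N(rho * y, 1 - rho^2),
# and symmetrically for y | x. Alternate the two conditional draws.
x, y = 0.0, 0.0
chain = np.empty((20000, 2))
for i in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    chain[i] = (x, y)

burned = chain[1000:]  # discard burn-in before computing statistics
```

In the Topics Over Time setting, the same alternating-conditional idea is applied to topic assignments and topic distributions, whose conditionals are tractable thanks to Dirichlet-multinomial conjugacy.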
254

Modeling social norms in real-world agent-based simulations

Beheshti, Rahmatollah 01 January 2015 (has links)
Studying and simulating social systems, including human groups and societies, can be a complex problem. In order to build a model that simulates human actions, it is necessary to consider the major factors that affect human behavior. Norms are one of these factors: social norms are the customary rules that govern behavior in groups and societies. Norms are everywhere around us, from the way people handshake or bow to the clothes they wear, and they play a large role in determining our behaviors. Studies of norms are much older than computer science itself, since normative studies have been a classic topic in sociology, psychology, philosophy, and law, and various theories have been put forth about the functioning of social norms. Although an extensive amount of research on norms has been performed in recent years, there remains a significant gap between current models and models that can explain real-world normative behaviors. Most of the existing work on norms focuses on abstract applications, and very few realistic normative simulations of human societies can be found. The contributions of this dissertation include the following: 1) a new hybrid technique based on agent-based modeling and Markov chain Monte Carlo is introduced; this method is used to prepare a smoking case study for applying normative models. 2) This hybrid technique is described using category theory, a mathematical theory focusing on relations rather than objects. 3) The relationship between norm emergence in social networks and the theory of tipping points is studied. 4) A new lightweight normative architecture for studying smoking cessation trends is introduced. This architecture is then extended to a more general normative framework that can be used to model real-world normative behaviors. The final normative architecture considers both cognitive and social aspects of norm formation in human societies. Normative architectures based on only one of these two aspects exist in the literature, but an architecture that effectively includes both has been missing.
255

Bayesian Analysis of Partitioned Demand Models

Smith, Adam Nicholas 26 October 2017 (has links)
No description available.
256

Some Bayesian Methods in the Estimation of Parameters in the Measurement Error Models and Crossover Trial

Wang, Guojun 31 March 2004 (has links)
No description available.
257

Incorporation of Genetic Marker Information in Estimating Model Parameters for Complex Traits with Data From Large Complex Pedigrees

Luo, Yuqun 20 December 2002 (has links)
No description available.
258

Bayesian Analysis of Temporal and Spatio-temporal Multivariate Environmental Data

El Khouly, Mohamed Ibrahim 09 May 2019 (has links)
High-dimensional space-time datasets are now available in many aspects of life, such as the economy, agriculture, health, and the environment. Meanwhile, it is challenging to reveal possible connections between climate change and extreme weather events such as hurricanes or tornadoes. In particular, the relationship between tornado occurrence and climate change has remained elusive. Moreover, modeling multivariate spatio-temporal data is computationally expensive. There is a great need for computationally feasible models that account for temporal, spatial, and inter-variable dependence. Our research addresses these areas in two ways. First, we investigate connections between changes in tornado risk and the increase in atmospheric instability over Oklahoma. Second, we propose two multiscale spatio-temporal models, one for multivariate Gaussian data and the other for matrix-variate Gaussian data. These frameworks are novel additions to the existing literature on Bayesian multiscale models. In addition, we propose parallelizable MCMC algorithms to sample from the posterior distributions of the model parameters with improved computational efficiency. / Doctor of Philosophy / Over 1,000 tornadoes are reported every year in the United States, causing massive losses of life and property, according to the National Oceanic and Atmospheric Administration. It is therefore worthwhile to investigate possible connections between climate change and tornado occurrence. However, these environmental datasets are massive, spanning three or four dimensions (two- or three-dimensional space, plus time), and the relationship between tornado occurrence and climate change has remained elusive. Moreover, analyzing such high-dimensional space-time datasets is computationally expensive. In part of our research, we found a significant relationship between the occurrence of strong tornadoes over Oklahoma and meteorological variables, some of which have been affected by ozone depletion and greenhouse gas emissions. Additionally, we propose two Bayesian frameworks for analyzing multivariate space-time datasets with fast, feasible computations. Finally, our analyses indicate different patterns of temperature at atmospheric altitudes, with distinctive rates, over the United States.
259

Uncertainty Quantification, State and Parameter Estimation in Power Systems Using Polynomial Chaos Based Methods

Xu, Yijun 31 January 2019 (has links)
It is well known that a power system contains many sources of uncertainty. These uncertainties, arising from loads, renewables, the model, and the measurements, influence the steady-state and dynamic response of the power system. Traditional methods for handling them, such as the Monte Carlo method and perturbation methods, are either too time-consuming or suffer from the strong nonlinearity of the system. To address this, this dissertation focuses on developing polynomial chaos based methods to replace the traditional ones. Uncertainties from the model and the measurements are propagated through the polynomial chaos bases at a set of collocation points, and the approximated polynomial chaos coefficients carry the statistical information. The method greatly accelerates the calculation without losing accuracy, even when the system is highly stressed. This dissertation discusses both the forward and the inverse problems of uncertainty quantification. The forward problems include the probabilistic power flow problem and statistical power system dynamic simulation. The generalized polynomial chaos method, the adaptive polynomial chaos-ANOVA method, and the multi-element polynomial chaos method are introduced and compared. The case studies show that the proposed methods perform well in the statistical analysis of large-scale power systems. The inverse problems include state and parameter estimation. A novel polynomial-chaos-based Kalman filter is proposed, and comparison studies with traditional Kalman filters demonstrate its good performance. We further explore the area dynamic parameter estimation problem under the Bayesian inference framework, treating the polynomial chaos expansion as a response surface for the full dynamic solver. Combined with a hybrid Markov chain Monte Carlo method, the proposed approach yields very high estimation accuracy while greatly reducing the computing time. For both the forward and the inverse problems, the polynomial chaos based methods have shown great advantages over traditional methods. These computational techniques can improve the efficiency and accuracy of power system planning, guarantee the rationality and reliability of power system operations, and, finally, speed up power system dynamic security assessment. / PHD / It is well known that a power system state is inherently stochastic. Sources of stochasticity include random load variations, renewable energy intermittency, and random outages of generating units, lines, and transformers, to cite a few. These stochasticities translate into uncertainties in the models assumed to describe the steady-state and dynamic behavior of a power system. The models are themselves approximate, since they rest on assumptions that are typically violated in practice. It therefore comes as no surprise that recent research in power systems focuses on coping with uncertainty in power system planning, monitoring, and control. This dissertation develops polynomial-chaos-based methods for quantifying and managing these uncertainties. Three major topics are discussed: uncertainty quantification, state estimation, and parameter estimation. The developed methods can improve the efficiency and accuracy of power system planning, guarantee the rationality and reliability of power system operations under uncertainty, and, finally, enhance the resilience of power systems.
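The core polynomial chaos idea can be sketched on a one-dimensional toy problem: expand a nonlinear function of a standard-normal input in probabilists' Hermite polynomials and read the mean and variance off the coefficients. The "model" below is a stand-in exponential, not a power-flow or dynamic simulation.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Toy "model": a nonlinear function of one standard-normal input xi.
# In the dissertation this would be a power system simulation.
def model(xi):
    return np.exp(0.3 * xi)

order = 5
nodes, weights = hermegauss(order + 3)   # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) density

# Project onto Hermite polynomials He_k:
# c_k = E[model(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!.
coeffs = []
for k in range(order + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0
    He_k = hermeval(nodes, basis)
    c_k = np.sum(weights * model(nodes) * He_k) / factorial(k)
    coeffs.append(c_k)

# Statistics come directly from the coefficients:
# mean = c_0, variance = sum_{k>=1} c_k^2 * k!.
mean_pce = coeffs[0]
var_pce = sum(c**2 * factorial(k) for k, c in enumerate(coeffs[1:], start=1))
```

For exp(0.3 xi) the exact values are mean = exp(0.045) and variance = exp(0.09)(exp(0.09) - 1), so the low-order expansion is already accurate, which is the efficiency the abstract points to relative to Monte Carlo sampling.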
260

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
<p>The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter and the sequential importance sampler, do not scale well with the dimension of the system or the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.</p><p>In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable in both dimension and sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and a large number of samples is observed at each stage; it can therefore be used for uncertainty quantification. We also reformulate the Bayesian inverse problem as a dynamic state estimation problem using subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.</p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF was developed for Gaussian dynamic systems containing no unknown parameters. We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly, under the framework of stochastic approximation, from the state variables simulated by the LEnKF. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF (the Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. Both extensions inherit the scalability of the LEnKF with respect to dimension and sample size. Numerical results indicate that they outperform other existing methods in both state/parameter estimation and uncertainty quantification.</p>
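The mini-batch ingredient the LEnKF borrows from stochastic gradient Langevin-type algorithms can be sketched on a toy problem: sample the posterior of a Gaussian mean using noisy mini-batch gradients plus injected Langevin noise. This is the generic SGLD update under stated assumptions, not the full LEnKF forecast-analysis procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: posterior for the mean of Gaussian data under a diffuse
# Gaussian prior, sampled with stochastic gradient Langevin dynamics.
data = rng.normal(2.0, 1.0, size=10_000)
n, batch = len(data), 100
prior_var = 100.0

def grad_log_post(theta, minibatch):
    # Unbiased estimate: rescale the mini-batch likelihood gradient by n/batch.
    grad_lik = (n / batch) * np.sum(minibatch - theta)
    grad_prior = -theta / prior_var
    return grad_lik + grad_prior

theta, eps = 0.0, 0.01 / n  # small constant step size
traj = []
for _ in range(5000):
    mb = data[rng.integers(0, n, size=batch)]
    # Langevin update: half-step gradient drift plus Gaussian noise of scale sqrt(eps).
    theta += 0.5 * eps * grad_log_post(theta, mb) + np.sqrt(eps) * rng.standard_normal()
    traj.append(theta)

estimate = np.mean(traj[2000:])  # approximates the posterior mean (~ data mean)
```

Because each update touches only a mini-batch, the cost per step is independent of the full sample size, which is the scalability property the LEnKF inherits.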
