231
Exploring stellar magnetic activities with Bayesian inference (Japanese title: ベイズ推論による恒星磁気活動の探究, the same title in Japanese). Ikuta, Kai. 23 March 2021
Kyoto University / New degree system, course-based doctorate / Doctor of Science / Degree No. Kō 23006 / Science Doctorate No. 4683 / Call no. New system||Science||1672 (University Library) / Kyoto University Graduate School of Science, Division of Physics and Astronomy / Examiners: Associate Professor Daisaku Nogami (chair), Professor Kiyoshi Ichimoto, Professor Kouji Ohta / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
232
Consecutive Covering Arrays and a New Randomness Test. Godbole, A. P., Koutras, M. V., Milienos, F. S. 01 May 2010
A k × n array with entries from an "alphabet" A = {0, 1, ..., q − 1} of size q is said to form a t-covering array (resp. orthogonal array) if each t × n submatrix of the array contains, among its columns, at least one (resp. exactly one) occurrence of each t-letter word from A (we must thus have n = q^t for an orthogonal array to exist and n ≥ q^t for a t-covering array). In this paper, we continue the agenda laid down in Godbole et al. (2009), in which the notion of consecutive covering arrays was defined and motivated; a detailed study of these arrays for the special case q = 2 was also carried out by the same authors. In the present article we first use a Markov chain embedding method to exhibit, for general values of q, the probability distribution function of the random variable W = W_{k,n,t}, defined as the number of sets of t consecutive rows for which the submatrix in question is missing at least one word. We then use the Chen-Stein method (Arratia et al., 1989, 1990) to provide upper bounds on the total variation error incurred while approximating L(W) by a Poisson distribution Po(λ) with the same mean as W. Last but not least, the Poisson approximation is used as the basis of a new statistical test to detect run-based discrepancies in an array of q-ary data.
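To make the quantities in this abstract concrete, here is a minimal Python sketch (not from the paper) that computes the statistic W for a given array and the mean λ of the approximating Poisson distribution under an iid-uniform null. The dimensions and the uniform random entries are illustrative assumptions; the paper's Markov chain embedding yields the exact distribution of W, not just its mean.

```python
import math
import random

def count_deficient_windows(array, q, t):
    # W = number of sets of t consecutive rows whose t x n submatrix is
    # missing at least one of the q^t possible t-letter words among its columns.
    k, n = len(array), len(array[0])
    deficient = 0
    for i in range(k - t + 1):
        words = {tuple(array[i + s][j] for s in range(t)) for j in range(n)}
        if len(words) < q ** t:
            deficient += 1
    return deficient

def poisson_mean(k, n, q, t):
    # Exact E[W] when entries are iid uniform on {0, ..., q-1}: by linearity,
    # (k - t + 1) times P(a fixed window misses some word), computed by
    # inclusion-exclusion over which of the q^t words are absent.
    m = q ** t
    p_deficient = sum((-1) ** (j + 1) * math.comb(m, j) * (1 - j / m) ** n
                      for j in range(1, m + 1))
    return (k - t + 1) * p_deficient

random.seed(1)
k, n, q, t = 40, 30, 2, 3
array = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
print("observed W:", count_deficient_windows(array, q, t))
print("Poisson mean lambda = E[W]:", round(poisson_mean(k, n, q, t), 3))
```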
233
Modeling the Spread of Infectious Disease Using Genetic Information Within a Marked Branching Process. Leman, Scotland C., Levy, Foster, Walker, Elaine S. 20 December 2009
Accurate assessment of disease dynamics requires a quantification of many unknown parameters governing disease transmission processes. While infection control strategies within hospital settings are stringent, some disease will be propagated due to human interactions (patient-to-patient or patient-to-caregiver-to-patient). In order to understand infectious transmission rates within the hospital, it is necessary to isolate the amount of disease that is endemic to the outside environment. While discerning the origins of disease is difficult when using ordinary spatio-temporal data (locations and times of disease detection), genotypes that are common to pathogens with common sources aid in distinguishing nosocomial infections from independent arrivals of the disease. The purpose of this study was to demonstrate a Bayesian modeling procedure for identifying nosocomial infections and to quantify the rate of these transmissions. We will demonstrate our method using a 10-year history of Moraxella catarrhalis. Results will show the degree to which pathogen-specific, genotypic information impacts inferences about the nosocomial rate of infection.
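As a loose illustration of the Bayesian quantification described here (emphatically not the thesis's marked branching process), the sketch below assumes hypothetical case counts that genotype matching has already classified as nosocomial or as independent arrivals, and computes a conjugate Beta posterior for the nosocomial fraction:

```python
import random

# Hypothetical counts, not from the study: cases classified by genotype
# matching as nosocomial (in-hospital source) vs. independent arrivals.
nosocomial_cases = 14
independent_arrivals = 47

# Conjugate update: Beta(1, 1) prior + Binomial likelihood
# -> Beta(1 + nosocomial, 1 + independent) posterior for the fraction theta.
a_post = 1 + nosocomial_cases
b_post = 1 + independent_arrivals

# Posterior mean and a Monte Carlo 95% credible interval.
mean = a_post / (a_post + b_post)
random.seed(0)
draws = sorted(random.betavariate(a_post, b_post) for _ in range(10000))
lo, hi = draws[249], draws[9749]
print(f"posterior mean of nosocomial fraction: {mean:.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```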
234
Peptide Refinement by Using a Stochastic Search. Lewis, Nicole H., Hitchcock, David B., Dryden, Ian L., Rose, John R. 01 November 2018
Identifying a peptide on the basis of a scan from a mass spectrometer is an important yet highly challenging problem. To identify peptides, we present a Bayesian approach which uses prior information about the average relative abundances of bond cleavages and the prior probability of any particular amino acid sequence. The proposed scoring function is composed of two overall distance measures, which quantify how close an observed spectrum is to a theoretical scan for a peptide. Our use of this scoring function, which approximates a likelihood, has connections to the generalization of the Bayesian framework presented by Bissiri and co-workers. A Markov chain Monte Carlo algorithm is employed to simulate candidate choices from the posterior distribution of the peptide sequence. The true peptide is estimated as the peptide with the largest posterior density.
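A minimal sketch of the kind of MCMC search this abstract describes: a Metropolis sampler over amino-acid sequences with a symmetric single-site proposal. The scoring function here is a hypothetical stand-in (the paper's score compares observed and theoretical spectra), and the estimate is the most-visited sequence, mirroring the largest-posterior-density rule:

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def log_score(peptide, target):
    # Hypothetical stand-in for the paper's spectrum-based scoring function;
    # here we simply reward positions that match a fixed "true" sequence.
    return sum(2.0 for a, b in zip(peptide, target) if a == b)

def metropolis_peptide(target, n_iter=20000, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(AMINO_ACIDS) for _ in range(len(target))]
    visits = {}
    for _ in range(n_iter):
        # Symmetric proposal: mutate one randomly chosen position.
        proposal = list(current)
        proposal[rng.randrange(len(proposal))] = rng.choice(AMINO_ACIDS)
        # Metropolis acceptance using the (approximate) log-likelihood ratio.
        if math.log(rng.random()) < log_score(proposal, target) - log_score(current, target):
            current = proposal
        key = "".join(current)
        visits[key] = visits.get(key, 0) + 1
    # Report the most-visited sequence, i.e., the highest-posterior-density state.
    return max(visits, key=visits.get)

print(metropolis_peptide("PEPTIDE"))
```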
235
A Method for Reconstructing Historical Destructive Earthquakes Using Bayesian Inference. Ringer, Hayden J. 04 August 2020
Seismic hazard analysis is concerned with estimating risk to human populations due to earthquakes and the other natural disasters that they cause. In many parts of the world, earthquake-generated tsunamis are especially dangerous. Assessing the risk for seismic disasters relies on historical data that indicate which fault zones are capable of supporting significant earthquakes. Due to the nature of geologic time scales, the era of seismological data collection with modern instruments has captured only a part of the Earth's seismic hot zones. However, non-instrumental records, such as anecdotal accounts in newspapers, personal journals, or oral tradition, provide limited information on earthquakes that occurred before the modern era. Here, we introduce a method for reconstructing the source earthquakes of historical tsunamis based on anecdotal accounts. We frame the reconstruction task as a Bayesian inference problem by making a probabilistic interpretation of the anecdotal records. Utilizing robust models for simulating earthquakes and tsunamis provided by the software package GeoClaw, we implement a Metropolis-Hastings sampler for the posterior distribution on source earthquake parameters. In this work, we present our analysis of the 1852 Banda Arc earthquake and tsunami as a case study for the method. Our method is implemented as a Python package, which we call tsunamibayes. It is available, open-source, on GitHub: https://github.com/jwp37/tsunamibayes.
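A stripped-down sketch of the Metropolis-Hastings machinery the abstract describes. The Gaussian log-posterior below is a toy stand-in for the actual GeoClaw-based likelihood over source parameters, and the parameter names (magnitude, depth) are illustrative assumptions rather than tsunamibayes's API:

```python
import math
import random

def log_posterior(params):
    # Toy stand-in: in the actual method this would combine a prior over
    # source parameters with a likelihood comparing GeoClaw-simulated
    # tsunami observables against the anecdotal accounts.
    magnitude, depth_km = params
    return (-0.5 * ((magnitude - 8.5) / 0.3) ** 2
            - 0.5 * ((depth_km - 20.0) / 5.0) ** 2)

def metropolis_hastings(log_post, init, step_sizes, n_iter=5000, seed=0):
    rng = random.Random(seed)
    current, lp_current = list(init), log_post(init)
    samples = []
    for _ in range(n_iter):
        # Symmetric Gaussian random-walk proposal.
        proposal = [x + rng.gauss(0.0, s) for x, s in zip(current, step_sizes)]
        lp_proposal = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_proposal - lp_current:
            current, lp_current = proposal, lp_proposal
        samples.append(list(current))
    return samples

draws = metropolis_hastings(log_posterior, init=[8.0, 30.0], step_sizes=[0.1, 2.0])
burned = draws[1000:]
print("posterior mean magnitude:", sum(d[0] for d in burned) / len(burned))
```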
236
Improving Nitrogen Management in Corn-Wheat-Soybean Rotations Using Site-Specific Management in Eastern Virginia. Peng, Wei. 13 November 2001
Nitrogen (N) is a key nutrient input to crops and one of the major pollutants to the environment from agriculture in the United States. Recent developments in site-specific management (SSM) technology have the potential to reduce both N overapplication and underapplication and to increase farmers' net returns. In Virginia, due to the high variability of within-field yield-limiting factors such as soil physical properties and fertility, the adoption of SSM is hindered by high grid-sampling cost. Many Virginia corn-wheat-soybean farms have generated yield maps using yield monitors for several years, even though few variable applications based on yield maps have been reported. It is unknown whether the information generated by yield monitors under actual production situations can be used to direct N management for increased net returns in this area.
The overall objective of the study is to analyze the economic and environmental impact of alternative management strategies for N in corn and wheat production based on site-specific information in eastern Virginia. Specifically, evaluations were made of three levels of site-specific information regarding crop N requirements, combined with variable and uniform N application. The three levels of information are: (1) information about the yield potential of the predominant soil type within the field; (2) information about the yield potentials of all soils within the field (soil zones); and (3) information about the yield potentials of smaller sub-field units aggregated into functional zones. Effects of information on expected net returns and net N (applied N that is not removed by the crop) were evaluated for corn-wheat-soybean fields in eastern Virginia. Ex post and ex ante evaluations of information were carried out.
Historical weather data and farm-level yield data were used to generate yield sequences for individual fields. A Markov chain model was used to describe both temporal and spatial yield variation. Soil maps were used to divide a field into several soil management units. Cluster analysis was used to group subfield units into functional zones based on yield monitor data. Yield monitor data were used to evaluate ex post information and variable application values for 1995-1999, and ex ante information and variable application values for 1999.
Ex post analysis results show that soil zone information increased N input but decreased net return, while functional zone information decreased N input and increased net return. Variable application decreased N input compared with uniform application. Variable application based on soil zone information reduced net return due to the cost of overapplication or underapplication, whereas variable application based on functional zone information increased net return.
Ex ante results show that information on spatial variability was not able to increase farmers' net return due to the cost of variable N application and information. Variable-rate application decreased N input relative to uniform application. However, imprecision in the spatial predictor made variable application unprofitable due to an imbalance between the costs of under- and over-application of N. Sensitivity analysis showed that the value of information was positive when temporal uncertainty was eliminated.
The ex post results of this study suggest there is potential to improve the efficiency of N use and farmers' net returns with site-specific management techniques. The ex ante results suggest that site-specific management improvements should be tested under the conditions faced by farmers, including imperfect information about temporal and spatial yield variability. / Ph. D.
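For readers unfamiliar with the Markov chain yield model mentioned above, the following sketch simulates a sequence of yield states from a transition matrix; the three states and the transition probabilities are illustrative assumptions, not values estimated in the dissertation:

```python
import random

# Hypothetical three-state Markov chain for year-to-year yield variation.
# Transition probabilities are illustrative, not fitted to the study's
# weather and yield-monitor data.
P = {
    "low":    {"low": 0.5, "medium": 0.4, "high": 0.1},
    "medium": {"low": 0.2, "medium": 0.5, "high": 0.3},
    "high":   {"low": 0.1, "medium": 0.4, "high": 0.5},
}

def simulate_yield_sequence(start, n_years, seed=0):
    rng = random.Random(seed)
    state, seq = start, [start]
    for _ in range(n_years - 1):
        # Sample the next state by inverting the cumulative transition row.
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
        seq.append(state)
    return seq

print(simulate_yield_sequence("medium", 10))
```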
237
A Comparative Analysis of the Use of a Markov Chain Versus a Binomial Probability Model in Estimating the Probability of Consecutive Rainless Days. Homeyer, Jack Wilfred. 01 May 1974
The Markov chain process for predicting the occurrence of a sequence of rainless days, a standard technique, is critically examined in light of the basic underlying assumptions that must be made each time it is used. It is then compared to a simple binomial model wherein an event is defined to be a series of rainless days of the desired length. Computer programs to perform the required calculations are presented and compared with respect to complexity and operating characteristics. Finally, an example of applying both programs to real data is presented and further comparisons are drawn between the two techniques.
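The two models compared in this thesis can be contrasted numerically in a few lines. In the sketch below, the binomial model treats days as independent with a fixed probability of being rainless, while the two-state Markov chain conditions on the previous day; the parameter values are illustrative, not fitted to the thesis's data:

```python
# Illustrative parameter values (not from the 1974 data).
p_dry = 0.7   # marginal probability a day is rainless (binomial model)
p_dd = 0.8    # P(dry tomorrow | dry today)  (Markov chain model)
p_wd = 0.45   # P(dry tomorrow | wet today)

# Stationary probability of a dry day under the two-state chain:
# pi_dry = p_wd / (1 - p_dd + p_wd).
pi_dry = p_wd / (1 - p_dd + p_wd)

for m in (3, 5, 10):
    # Binomial model: m independent dry days.
    p_binomial = p_dry ** m
    # Markov model: a stationary dry day followed by m - 1 dry-to-dry steps.
    p_markov = pi_dry * p_dd ** (m - 1)
    print(f"m={m:2d}  binomial: {p_binomial:.4f}  Markov: {p_markov:.4f}")
```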
238
Modern Monte Carlo Methods and Their Application in Semiparametric Regression. Thomas, Samuel Joseph. 05 1900
Indiana University-Purdue University Indianapolis (IUPUI)
The essence of Bayesian data analysis is to ascertain posterior distributions. Posteriors generally do not have closed-form expressions for direct computation in practical applications. Analysts, therefore, resort to Markov Chain Monte Carlo (MCMC) methods for the generation of sample observations that approximate the desired posterior distribution. Standard MCMC methods simulate sample values from the desired posterior distribution via random proposals. As a result, the mechanism used to generate the proposals inevitably determines the efficiency of the algorithm. One of the modern MCMC techniques designed to explore high-dimensional spaces more efficiently is Hamiltonian Monte Carlo (HMC), based on the Hamiltonian differential equations. Inspired by classical mechanics, these equations incorporate a latent variable to generate MCMC proposals that are likely to be accepted. This dissertation discusses how such a powerful computational approach can be used for implementing statistical models. Along this line, I created a unified computational procedure for using HMC to fit various types of statistical models. The procedure that I propose can be applied to a broad class of models, including linear models, generalized linear models, mixed-effects models, and various types of semiparametric regression models. To facilitate the fitting of a diverse set of models, I incorporated new parameterization and decomposition schemes to ensure the numerical performance of Bayesian model fitting without sacrificing the procedure's general applicability. As a concrete application, I demonstrate how to use the proposed procedure to fit a multivariate generalized additive model (GAM), a nonstandard statistical model with a complex covariance structure and numerous parameters. Byproducts of the research include two software packages that allow practical data analysts to use the proposed computational method to fit their own models. The research's main methodological contribution is the unified computational approach it presents for Bayesian model fitting, which can be used for standard and nonstandard statistical models. The availability of such a procedure has greatly enhanced statistical modelers' toolbox for implementing new and nonstandard statistical models.
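A minimal, self-contained sketch of the HMC mechanics summarized in this abstract: a latent momentum variable is drawn, the Hamiltonian dynamics are integrated with the leapfrog scheme, and the endpoint is accepted or rejected with a Metropolis correction. The bivariate normal target is a toy example, not a model from the dissertation:

```python
import math
import random

def hmc_sample(log_post, grad_log_post, init, step=0.1, n_leapfrog=20,
               n_iter=2000, seed=0):
    rng = random.Random(seed)
    q, samples = list(init), []
    for _ in range(n_iter):
        # Draw the latent momentum variable from a standard normal.
        p = [rng.gauss(0.0, 1.0) for _ in q]
        q_new, p_new = list(q), list(p)
        # Leapfrog integration of the Hamiltonian differential equations:
        # half step in momentum, alternating full steps, half step at the end.
        g = grad_log_post(q_new)
        p_new = [pi + 0.5 * step * gi for pi, gi in zip(p_new, g)]
        for i in range(n_leapfrog):
            q_new = [qi + step * pi for qi, pi in zip(q_new, p_new)]
            g = grad_log_post(q_new)
            scale = step if i < n_leapfrog - 1 else 0.5 * step
            p_new = [pi + scale * gi for pi, gi in zip(p_new, g)]
        # Metropolis correction based on the change in total energy.
        h_old = -log_post(q) + 0.5 * sum(pi * pi for pi in p)
        h_new = -log_post(q_new) + 0.5 * sum(pi * pi for pi in p_new)
        if math.log(rng.random()) < h_old - h_new:
            q = q_new
        samples.append(list(q))
    return samples

# Toy target: a standard bivariate normal (illustrative only).
log_post = lambda q: -0.5 * sum(x * x for x in q)
grad_log_post = lambda q: [-x for x in q]
draws = hmc_sample(log_post, grad_log_post, init=[3.0, -3.0])
print("posterior means:", [sum(d[i] for d in draws) / len(draws) for i in range(2)])
```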
239
Modelling Renewable Energy Generation Forecasts on Luzon: A Minor Field Study on Statistical Inference Methods in the Environmental Sciences. Linde, Tufva. January 2023
This project applies statistical inference methods to energy data from the island of Luzon in the Philippines. The goal of the project is to explore different ways of creating predictive models and to understand the assumptions that are made about reality when a certain model is selected. The main models discussed in the project are simple linear regression and Markov chain models. The predictions were used to assess Luzon's progress towards the Sustainable Development Goals. All models considered in this project suggest that Luzon is not on track to meet the sustainability goal.
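As an indication of the simpler of the two model classes, the sketch below fits a least-squares trend to annual renewable-generation shares and extrapolates it to a target year; the data values are hypothetical placeholders, not the Luzon series used in the thesis:

```python
# Hypothetical annual renewable-generation shares (placeholder data).
years = [2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022]
renewable_share = [0.24, 0.23, 0.25, 0.24, 0.26, 0.25, 0.27, 0.26]

# Ordinary least-squares fit of share = intercept + slope * year.
n = len(years)
x_bar = sum(years) / n
y_bar = sum(renewable_share) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(years, renewable_share))
         / sum((x - x_bar) ** 2 for x in years))
intercept = y_bar - slope * x_bar

# Extrapolate the fitted trend to a target year.
target_year = 2030
forecast = intercept + slope * target_year
print(f"forecast renewable share in {target_year}: {forecast:.3f}")
```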
240
Investigating Convergence of Markov Chain Monte Carlo Methods for Bayesian Phylogenetic Inference. Spade, David Allen. 29 August 2013
No description available.