  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Bayesian analysis of structural change in trend

Zheng, Pingping January 2002
No description available.
32

Analysis of Memory Interference in Buffered Multi-processor Systems in Presence of Hot Spots and Favorite Memories

Sen, Sanjoy Kumar 08 1900
In this thesis, a discrete Markov chain model for analyzing memory interference in multiprocessors is presented.
33

Methods for Bayesian inversion of seismic data

Walker, Matthew James January 2015
The purpose of Bayesian seismic inversion is to combine information derived from seismic data and prior geological knowledge to determine a posterior probability distribution over parameters describing the elastic and geological properties of the subsurface. Typically the subsurface is modelled by a cellular grid model containing thousands or millions of cells within which these parameters are to be determined. Such inversions are therefore computationally expensive, since the size of the parameter space over which the posterior is to be determined is proportional to the number of grid cells, and in practice approximations to Bayesian seismic inversion must be considered. A particular, existing approximate workflow is described in this thesis: the so-called two-stage inversion method explicitly splits the inversion problem into elastic and geological inversion stages. These two stages sequentially estimate the elastic parameters given the seismic data, and then the geological parameters given the elastic parameter estimates. In this thesis a number of methodologies are developed which enhance the accuracy of this approximate workflow.

To reduce computational cost, existing elastic inversion methods often incorporate only simplified prior information about the elastic parameters. A method is therefore introduced which transforms results obtained using prior information specified with only two-point geostatistics into new estimates containing sophisticated multi-point geostatistical prior information. The method uses a so-called deep neural network, trained using only synthetic instances (or 'examples') of these two estimates, to apply this transformation. The method is shown to improve the resolution and accuracy (by comparison to well measurements) of elastic parameter estimates determined for a real hydrocarbon reservoir.

It has been shown previously that so-called mixture density network (MDN) inversion can be used to solve geological inversion analytically (and thus very rapidly and efficiently), but only under certain assumptions about the geological prior distribution. A so-called prior replacement operation is developed here which can be used to relax these requirements. It permits the efficient MDN method to be incorporated into general stochastic geological inversion methods which are free from the restrictive assumptions. Such methods rely on Markov-chain Monte-Carlo (MCMC) sampling, which estimates the posterior (over the geological parameters) by producing a correlated chain of samples from it. It is shown that this approach can yield biased estimates of the posterior. An alternative method which obtains a set of non-correlated samples from the posterior is therefore developed, avoiding the possibility of bias in the estimate. The new method was tested on a synthetic geological inversion problem; its results compared favourably to those of Gibbs sampling (an MCMC method) on the same problem, which exhibited very significant bias.

The geological prior information used in seismic inversion can be derived from real images which bear similarity to the geology anticipated within the target region of the subsurface. Such so-called training images, from which this information (in the form of geostatistics) may be extracted, are not always available. In this case appropriate training images may be generated by geological experts, but this process can be costly and difficult. An elicitation method (based on a genetic algorithm) is therefore developed here which obtains the appropriate geostatistics reliably and directly from a geological expert, without the need for training images. Twelve experts were asked to use the algorithm (individually) to determine the appropriate geostatistics for a physical (target) geological image. The majority of the experts were able to obtain a set of geostatistics consistent with the true (measured) statistics of the target image.
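The following is a schematic of the factorization behind the two-stage workflow described above, written with illustrative notation (data d, elastic parameters m, geological parameters g) rather than the thesis's own symbols: the full posterior is split into an elastic stage followed by a geological stage conditioned on the elastic estimate.

```latex
% Full Bayesian inversion targets the joint posterior over geology and elastics:
%   p(g, m | d) \propto p(d | m) p(m | g) p(g)
% The two-stage approximation first estimates m, then inverts for g given that estimate:
\begin{align*}
  \text{Stage 1 (elastic):} \quad & \hat{m} \approx \arg\max_{m} \; p(d \mid m)\, p(m) \\
  \text{Stage 2 (geological):} \quad & p(g \mid \hat{m}) \propto p(\hat{m} \mid g)\, p(g)
\end{align*}
```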
34

Using Markov chain to describe the progression of chronic disease

Davis, Sijia January 1900
Master of Science / Department of Statistics / Abigail Jager / A discrete-time Markov chain with stationary transition probabilities is often used to investigate treatment programs and health care protocols for chronic disease. Suppose the patients of a certain chronic disease are observed over equally spaced time intervals. If we classify the chronic disease into n distinct health states, the movement through these health states over time then represents a patient's disease history. We can use a discrete-time Markov chain to describe such movement via the transition probabilities between the health states. The purpose of this study was to investigate the case when the observation interval coincided with the cycle length of the Markov chain as well as the case when the two did not coincide. In particular, we are interested in how the estimated transition matrix behaves as the ratio of observation interval to cycle length changes. Our results suggest that, for small sample sizes, more estimation problems arose as the observation interval lengthened, and that the deviation from the known transition probability matrix grew larger as the observation interval increased. With increasing sample size, there were fewer estimation problems and the deviation from the known transition probability matrix was reduced.
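As a concrete illustration of the observation-interval versus cycle-length issue studied above, the short sketch below (hypothetical three-state chain and numbers, not taken from the thesis) shows that states recorded only every k cycles are governed by P^k rather than P, so a transition matrix estimated from such data approximates the k-step matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.90, 0.08, 0.02],     # hypothetical 3-state one-cycle transition matrix
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])    # worst health state is absorbing
k = 3                                  # observation interval = 3 Markov cycles

def simulate(P, n_steps, start=0):
    # Simulate one patient's health-state path for n_steps cycles.
    states = [start]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

def estimate_matrix(chains, n_states=3):
    # Maximum-likelihood estimate: count observed transitions and normalize rows.
    counts = np.zeros((n_states, n_states))
    for chain in chains:
        for a, b in zip(chain[:-1], chain[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Simulate many patients, but record each patient's state only every k-th cycle.
observed = [simulate(P, 30)[::k] for _ in range(2000)]
P_hat = estimate_matrix(observed)

print("Estimate from k-spaced observations:\n", np.round(P_hat, 3))
print("P^k for comparison:\n", np.round(np.linalg.matrix_power(P, k), 3))
```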
35

Atrial fibrillation automatic detection through Markov models

Brambila, Ana Paula 27 March 2008
Atrial fibrillation (AF) is one of the most common cardiac arrhythmias and is mainly characterized by the presence of random RR intervals. For this reason, atrial fibrillation has been studied as a stochastic process and has often been modeled with Markov chains. Following previous studies on this subject, this work models time sequences of heartbeats as a three-state Markov process for automatic AF detection. The model was trained and developed using signals from the MIT-BIH database. Another consolidated method for AF detection, called the "RR Ratio", was also implemented in order to compare the results of the Markov model. The performance of both methods was evaluated by measuring the sensitivity (Se) and positive predictive value (+P) for AF detection. These two methods, the Markov model and the "RR Ratio", had their coefficients and thresholds optimized so as to maximize the values of Se and +P simultaneously. After optimization, both methods were tested on a new database, independent of the development database. The results obtained on the test database were Se = 84.940% and +P = 81.579%, consolidating Markov models for the detection of random heartbeats.
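For reference, the two performance measures quoted above are the standard ones for beat-by-beat detector evaluation, with TP, FN and FP denoting true positive, false negative and false positive AF detections:

```latex
\mathrm{Se} = \frac{TP}{TP + FN},
\qquad
{+}P = \frac{TP}{TP + FP}
```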
36

A Bayesian approach to phylogenetic networks

Radice, Rosalba January 2011
Traditional phylogenetic inference assumes that the history of a set of taxa can be explained by a tree. This assumption is often violated, as some biological entities can exchange genetic material, giving rise to non-treelike events often called reticulations. Failure to consider these events might result in incorrectly inferred phylogenies, with further consequences such as stagnant and less targeted drug development. Phylogenetic networks provide a flexible tool which allows us to model the evolutionary history of a set of organisms in the presence of reticulation events. In recent years, a number of methods addressing phylogenetic network reconstruction and evaluation have been introduced. One such method has been proposed by Moret et al. (2004), who defined a phylogenetic network as a directed acyclic graph obtained by positing a set of edges between pairs of the branches of an underlying tree to model reticulation events. Recently, two works using this definition of a phylogenetic network have appeared, by Jin et al. (2006) and by Snir and Tuller (2009). Both works demonstrate the potential of using maximum likelihood estimation for phylogenetic network reconstruction. We propose a Bayesian approach to the estimation of phylogenetic network parameters. We allow for different phylogenies to be inferred at different parts of our DNA alignment in the presence of reticulation events, at the species level, by using the idea that a phylogenetic network can be naturally decomposed into trees. A Markov chain Monte Carlo algorithm is provided for posterior computation of the phylogenetic network parameters. A more general algorithm is also proposed which allows the data to dictate how many phylogenies are required to explain the data; this can be achieved by using stochastic search variable selection. Both algorithms are tested on simulated data and also demonstrated on the ribosomal protein gene rps11 data from five flowering plants. The proposed approach can be applied to a wide variety of problems which aim at exploring the possibility of reticulation events in the history of a set of taxa.
37

A Bayesian Analysis of a Multiple Choice Test

Luo, Zhisui 24 April 2013
In a multiple choice test, examinees gain points based on the number of correct responses they give. However, this traditional grading assumes that the questions in the test are replications of each other. We apply an item response theory (IRT) model to a midterm test to estimate students' abilities while accounting for item characteristics. Our Bayesian logistic item response theory model studies the relation between the probability of a correct response and three parameters: one parameter measures the student's ability, and the other two measure an item's difficulty and its discriminatory power. In this model the ability and discrimination parameters are not identifiable. To address this issue, we construct a hierarchical Bayesian model to nullify the effects of non-identifiability. A Gibbs sampler is used to make inference and to obtain posterior distributions of the three parameters. For a "nonparametric" approach, we implement the item response theory model using a Dirichlet process mixture model. This new approach enables us to grade and cluster students based on their "ability" automatically. Although the Dirichlet process mixture model has very good clustering properties, it suffers from expensive and complicated computation. A slice sampling algorithm is used to address this issue. We apply our methodology to a real dataset from a multiple choice test in WPI's Applied Statistics I course (Spring 2012), illustrating how a student's ability relates to the observed scores.
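A common way to write the Bayesian logistic (two-parameter) IRT model sketched above is shown below; the exact parameterization and hierarchical priors used in the thesis may differ, so the priors here are illustrative. Here θ_i is student i's ability, and a_j and b_j are item j's discrimination and difficulty; fixing the ability scale (for example θ_i ~ N(0,1)) is one standard way of dealing with the non-identifiability mentioned above.

```latex
\Pr(y_{ij} = 1 \mid \theta_i, a_j, b_j)
  = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}},
\qquad
\theta_i \sim \mathcal{N}(0, 1), \quad
a_j \sim \mathrm{LogNormal}(\mu_a, \sigma_a^2), \quad
b_j \sim \mathcal{N}(\mu_b, \sigma_b^2)
```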
38

Deterioration model for ports in the Republic of Korea using Markov chain Monte Carlo with multiple imputation

Jeon, Juncheol January 2019
The condition of infrastructure deteriorates as it ages, and a deterioration model predicts how and when facilities will deteriorate over time. The deterioration model is a crucial element of most infrastructure management systems: it helps estimate when repairs will be needed, how much the maintenance of the entire facility stock will cost, and what maintenance costs will be incurred over the life cycle of a facility. However, the study of deterioration models for the civil infrastructure of ports is still in its infancy, and there is almost no related research in South Korea. This study therefore aims to develop a deterioration model for the civil infrastructure of ports in South Korea. Various approaches, such as deterministic, stochastic, and artificial-intelligence methods, can be used to develop a deterioration model. In this research, a Markov model based on Markov chain theory, one of the stochastic methods, is used. A Markov chain is a probabilistic process over a set of states: transitions among states follow probabilities known as transition probabilities. The key step in developing a Markov model is to find these transition probabilities, a process called calibration. In this study, the existing calibration methods, the optimization method and Markov chain Monte Carlo (MCMC), are reviewed, and improvements to them are presented. In addition, only a small amount of data is available in this study, which can distort the model, so supplementary techniques are presented to compensate for the small data set. To address the shortcomings of the existing methods and the lack of data, deterioration models developed by four calibration methods are finally proposed: optimization, optimization with bootstrap, MCMC, and MCMC with multiple imputation. A comparison of the four models is carried out and the best-performing model is identified. This research provides a deterioration model for ports in South Korea, suggests a more accurate calibration technique, and combines methods for supplementing insufficient data with existing calibration techniques.
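The sketch below illustrates one optimization-style calibration of a Markov deterioration model, using a hypothetical 4-state structure (each year an asset either stays in its state or drops one state) and invented condition data; the thesis's state definitions, objective function, and the bootstrap, MCMC and multiple-imputation variants are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations: mean condition index of a group of port assets by age.
observed_ages = np.array([0, 5, 10, 15, 20])               # years since construction
observed_condition = np.array([1.0, 1.4, 1.9, 2.6, 3.1])   # 1 = best, 4 = worst

def build_P(p):
    # p[i] = probability of staying in state i for one year; otherwise drop one state.
    P = np.zeros((4, 4))
    for i in range(3):
        P[i, i] = p[i]
        P[i, i + 1] = 1.0 - p[i]
    P[3, 3] = 1.0                                           # worst state is absorbing
    return P

def loss(p):
    # Squared error between predicted and observed mean condition at each age.
    P = build_P(p)
    pi0 = np.array([1.0, 0.0, 0.0, 0.0])                    # all assets start in state 1
    err = 0.0
    for age, obs in zip(observed_ages, observed_condition):
        dist = pi0 @ np.linalg.matrix_power(P, age)
        err += (dist @ np.array([1, 2, 3, 4]) - obs) ** 2
    return err

res = minimize(loss, x0=[0.9, 0.9, 0.9], bounds=[(0.01, 0.999)] * 3)
print("Calibrated stay probabilities:", np.round(res.x, 3))
print("Calibrated transition matrix:\n", np.round(build_P(res.x), 3))
```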
39

Bayesian Models for Repeated Measures Data Using Markov Chain Monte Carlo Methods

Li, Yuanzhi 01 May 2016
Bayesian models for repeated measures data are fitted to three different data analysis projects. Markov chain Monte Carlo (MCMC) methodology is applied to each case, with Gibbs sampling and/or an adaptive Metropolis-Hastings (MH) algorithm used to simulate the posterior distribution of the parameters. We fit a Bayesian model with different variance-covariance structures to an audit fee data set; block structures and linear models for variances are used to examine the linear trend and the different behaviors before and after the regulatory change of 2004-2005. We propose a Bayesian hierarchical model with latent teacher effects to determine whether teacher professional development (PD) utilizing cyber-enabled resources leads to meaningful student learning outcomes, measured by 8th grade end-of-year scores (CRT scores) for students whose teachers underwent PD; Bayesian variable selection methods are applied to select teacher learning instrument variables that predict teacher effects. We fit a Bayesian two-part model, with a multivariate probit model as the first part and a log-normal regression as the second part, to a repeated measures health care data set to analyze the relationship between Body Mass Index (BMI) and health care expenditures, and the correlation between the probability of expenditure and the dollar amount spent given expenditure. Models were fitted to a training set and predictions were made on both the training set and the test set.
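A conventional formulation of the two-part structure described above is shown below (a probit model for whether any expenditure occurs and a log-normal model for the amount given expenditure); the symbols are illustrative rather than taken from the dissertation.

```latex
\Pr(y_{it} > 0 \mid \mathbf{x}_{it}) = \Phi\!\left(\mathbf{x}_{it}^{\top}\boldsymbol{\beta}\right),
\qquad
\log y_{it} \mid y_{it} > 0 \;\sim\; \mathcal{N}\!\left(\mathbf{x}_{it}^{\top}\boldsymbol{\gamma},\, \sigma^{2}\right)
```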
40

Modeling, Analysis and Design of Wireless Sensor Network Protocols

Park, Pangun January 2011
Wireless sensor networks (WSNs) have a tremendous potential to improve the efficiency of many systems, for instance in building automation and process control. Unfortunately, the current technology does not offer guaranteed energy efficiency and reliability for closed-loop stability. The main contribution of this thesis is to provide a modeling, analysis, and design framework for WSN protocols used in control applications. The protocols are designed to minimize the energy consumption of the network, while meeting reliability and delay requirements from the application layer. The design relies on the analytical modeling of the protocol behavior.

First, modeling of the slotted random access scheme of the IEEE 802.15.4 medium access control (MAC) is investigated. For this protocol, which is commonly employed in WSN applications, a Markov chain model is used to derive the analytical expressions of reliability, delay, and energy consumption. By using this model, an adaptive IEEE 802.15.4 MAC protocol is proposed. The protocol design is based on a constrained optimization problem where the objective function is the energy consumption of the network, subject to constraints on reliability and packet delay. The protocol is implemented and experimentally evaluated on a test-bed. Experimental results show that the proposed algorithm satisfies reliability and delay requirements while ensuring a longer lifetime of the network under both stationary and transient network conditions.

Second, modeling and analysis of a hybrid IEEE 802.15.4 MAC combining the advantages of random access with contention and time division multiple access (TDMA) without contention are presented. A Markov chain is used to model the stochastic behavior of random access and the deterministic behavior of TDMA. The model is validated by both theoretical analysis and Monte Carlo simulations. Using this new model, the network performance in terms of reliability, average packet delay, average queueing delay, and throughput is evaluated. It is shown that the probability density function of the number of received packets per superframe follows a Poisson distribution. Furthermore, it is determined under which conditions the time slot allocation mechanism of the IEEE 802.15.4 MAC is stable.

Third, a new protocol for control applications, denoted Breath, is proposed where sensor nodes transmit information via multi-hop routing to a sink node. The protocol is based on the modeling of randomized routing, MAC, and duty-cycling. Analytical and experimental results show that Breath meets reliability and delay requirements while exhibiting a nearly uniform distribution of the workload. The Breath protocol has been implemented and experimentally evaluated on a test-bed.

Finally, it is shown how the proposed WSN protocols can be used in control applications. A co-design between communication and control application layers is studied by considering a constrained optimization problem, for which the objective function is the energy consumption of the network and the constraints are the reliability and delay derived from the control cost. It is shown that the optimal traffic load when either the communication throughput or the control cost is optimized is similar. / QC 20110217
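The cross-layer design problem described above has the general constrained form sketched below, where E, R and D denote the network's energy consumption, reliability and delay as functions of the protocol parameters x (for example MAC back-off and duty-cycle settings), and R_min and D_max are the requirements passed down from the application or control layer; the notation is illustrative.

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & E(\mathbf{x}) \\
\text{s.t.} \quad & R(\mathbf{x}) \ge R_{\min}, \\
                  & D(\mathbf{x}) \le D_{\max}
\end{aligned}
```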
