  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A Multi-GPU Compute Solution for Optimized Genomic Selection Analysis

Devore, Trevor 01 June 2014 (has links) (PDF)
Many modern-day Bioinformatics algorithms rely heavily on statistical models to analyze their biological data. Some of these statistical models lend themselves nicely to standard high performance computing optimizations such as parallelism, while others do not. One algorithm that does not is Markov Chain Monte Carlo (MCMC). In this thesis, we present a heterogeneous compute solution for optimizing GenSel, a genetic selection analysis tool. GenSel utilizes an MCMC algorithm to perform Bayesian inference using Gibbs sampling. Optimizing an MCMC algorithm is a difficult problem because it is inherently sequential, containing a loop-carried dependence between Markov chain iterations. The optimization presented in this thesis uses GPU computing to exploit the data-level parallelism within each of these iterations. In addition, it allows for the efficient management of memory, the pipelining of CUDA kernels, and the use of multiple GPUs. The optimizations presented show performance improvements of up to 1.84 times over the original algorithm.
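To illustrate where the parallelism lies, the following minimal NumPy sketch (not the GenSel implementation; the variance components sigma2_e and sigma2_b are simply assumed known) shows a Gibbs sampler for marker effects: the chain iterations are sequential, but the per-marker dot products over all individuals inside each iteration are the data-parallel work that GPU kernels can take over.

```python
import numpy as np

def gibbs_marker_effects(X, y, n_iter=100, sigma2_e=1.0, sigma2_b=0.1, seed=0):
    """Gibbs updates of marker effects b in y = X b + e (conjugate normal prior).

    The outer loop is the inherently sequential Markov chain; the dot products
    over all individuals for each marker are the data-parallel inner work.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    b = np.zeros(p)
    resid = y - X @ b                    # current residuals
    xtx = (X * X).sum(axis=0)            # X_j' X_j for every marker j
    for _ in range(n_iter):              # sequential chain iterations
        for j in range(p):
            resid += X[:, j] * b[j]                  # remove marker j's contribution
            rhs = X[:, j] @ resid                    # data-parallel reduction
            var = 1.0 / (xtx[j] / sigma2_e + 1.0 / sigma2_b)
            b[j] = rng.normal(var * rhs / sigma2_e, np.sqrt(var))
            resid -= X[:, j] * b[j]                  # restore updated contribution
    return b

# toy usage with simulated genotypes and phenotypes
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 50)).astype(float)   # 200 individuals, 50 markers
y = X @ rng.normal(0, 0.1, 50) + rng.normal(0, 1.0, 200)
print(gibbs_marker_effects(X, y)[:5])
```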
32

Markov Chains as Tools for Jazz Improvisation Analysis

Franz, David Matthew 13 July 1998 (has links)
This thesis describes an exploratory application of a statistical modeling technique, Markov chains, to jazz improvisation, with the aim of providing insight into an improviser's style and creativity through quantitative measures of style and creativity based on the constructed Markovian analyses. Using the Visual Basic programming language, Markov chains of orders one to three are created from transcriptions of improvised solos by John Coltrane on his composition Giant Steps. Treated as statistical data, the Markov chains are examined and information is extracted from them through several statistical tools developed for musical analysis. Two general categories of tools are developed: subtraction matrices and graphical comparisons of distributions. Using these tools and the raw Markov chain data, quantitative measures of creativity and style are postulated. These measures are based on previously developed models and definitions of creativity and style taken from the literature; the information acquired from the analysis tools is applied to these models to provide a theoretical basis for the measures and a framework for interpreting the information. Guilford's Structure of Intellect model is used to develop the creativity measures, and Heen's model of the constructs of style analysis is used to determine the measures of style. Overall, this research found that Markov chains provide distinct and useful information for musical analysis in the domain of jazz improvisation. Many examples of Markov chains are enumerated, and analysis tools that implement them are developed. It is then explained how the Markov chains and these tools can be interpreted to yield quantitative measures of creativity and style. Finally, the thesis presents conclusions on the Markov chain portrayals, the new analysis tools and procedures, and the quantitative measures of creativity and style, and concludes that Markovian modeling is a reasonable and useful approach for this application. / Master of Science
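As a sketch of the underlying technique (written in Python rather than the thesis's Visual Basic, and with an invented toy pitch list standing in for an actual transcription), a Markov chain of any order can be estimated from a solo by counting which note follows each context of preceding notes:

```python
from collections import Counter, defaultdict

def markov_chain(notes, order=1):
    """Estimate transition probabilities of the given order from a note sequence."""
    counts = defaultdict(Counter)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])      # the preceding `order` notes
        counts[context][notes[i + order]] += 1   # the note that follows them
    return {ctx: {nxt: c / sum(folls.values()) for nxt, c in folls.items()}
            for ctx, folls in counts.items()}

# toy pitch sequence (illustrative only, not a real Coltrane transcription)
solo = ["F#", "D", "B", "G", "Bb", "B", "A", "D", "F#", "D"]
print(markov_chain(solo, order=1))
print(markov_chain(solo, order=2))
```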
33

Markov chains for sampling matchings

Matthews, James January 2008 (has links)
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general graphs and in more restricted classes of graphs, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally, b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which obtains better bounds on the mixing time than existing theorems in the case where b = 1 and the probability of a change in distance is proportional to the distance between the two states. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2Δ-colourings.
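To fix ideas, here is a minimal sketch (not taken from the thesis) of a single-site chain on matchings of the kind discussed above: each step proposes to add or remove one edge, the transitions are symmetric, so the uniform distribution over all matchings is stationary; how many steps are needed for the output to be close to uniform is exactly the mixing-time question.

```python
import random

def sample_matching(edges, steps, seed=0):
    """Lazy single-site chain on matchings: add or remove one edge per step.

    Transitions are symmetric, so the stationary distribution is uniform over
    the matchings of the graph; `steps` must exceed the (unknown here) mixing
    time for the output to be approximately uniform.
    """
    rng = random.Random(seed)
    matching = set()
    matched = set()                 # vertices currently covered by the matching
    for _ in range(steps):
        if rng.random() < 0.5:      # lazy step guarantees aperiodicity
            continue
        u, v = e = rng.choice(edges)
        if e in matching:           # "down" move: remove the chosen edge
            matching.remove(e)
            matched -= {u, v}
        elif u not in matched and v not in matched:   # "up" move: add it
            matching.add(e)
            matched |= {u, v}
    return matching

# toy example: a 4-cycle
print(sample_matching([(0, 1), (1, 2), (2, 3), (3, 0)], steps=1000))
```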
34

On Stochastic Volatility Models as an Alternative to GARCH Type Models

Nilsson, Oscar January 2016 (has links)
For the purpose of modelling and predicting volatility, the family of Stochastic Volatility (SV) models is an alternative to the extensively used ARCH-type models. SV models differ in their assumption that volatility itself follows a latent stochastic process. This reformulation of the volatility process, however, makes model estimation distinctly more complicated for the SV-type models, which in this paper is carried out through Markov Chain Monte Carlo methods. The aim of this paper is to assess the standard SV model and the SV model with t-distributed errors, and to compare the results with their corresponding GARCH(1,1) counterparts. The data examined cover daily closing prices of the Swedish stock index OMXS30 for the period 2010-01-05 to 2016-03-02. The evaluation shows that both SV models outperform the two GARCH(1,1) models, with the SV model assuming t-distributed errors giving the smallest forecast errors.
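For reference, the standard SV model referred to above can be written as y_t = exp(h_t/2)·ε_t with a latent AR(1) log-variance h_t. The sketch below (parameter values are illustrative, not estimates from the OMXS30 data) simulates it and makes clear why h_t is never observed directly, which is what forces estimation via MCMC rather than the simpler likelihood evaluation available for GARCH(1,1).

```python
import numpy as np

def simulate_sv(n, mu=-9.0, phi=0.97, sigma_eta=0.15, seed=0):
    """Standard SV model: y_t = exp(h_t / 2) * eps_t, with latent AR(1) log-variance
    h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t.  Unlike GARCH(1,1), the
    variance is driven by its own noise eta_t, so h is latent and the likelihood
    cannot be written down in closed form.
    """
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2) * rng.standard_normal(n)   # simulated daily log-returns
    return y, h

returns, log_var = simulate_sv(1500)
print(returns[:5], log_var[:5])
```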
35

A statistical model for locating regulatory regions in novel DNA sequences

Byng, Martyn Charles January 2001 (has links)
No description available.
36

Bayesian analysis of structural change in trend

Zheng, Pingping January 2002 (has links)
No description available.
37

Design and Analysis of Sequential Clinical Trials using a Markov Chain Transition Rate Model with Conditional Power

Pond, Gregory Russell 01 August 2008 (has links)
Background: There is a plethora of potential statistical designs which can be used to evaluate the efficacy of a novel cancer treatment in the phase II clinical trial setting. Unfortunately, there is no consensus as to which design one should prefer, nor even which definition of efficacy should be used, and the primary endpoint conclusion can vary depending on which design is chosen. It would be useful if an all-encompassing methodology were possible which could evaluate all the different designs simultaneously and give investigators an understanding of the trial results under the varying scenarios. Methods: Finite Markov chain imbedding is a method which can be used in the setting of phase II oncology clinical trials but has not previously been evaluated in this setting. Simple variations to the transition matrix or to the end-state probability definitions allow evaluation of multiple designs and endpoints for a single trial. A computer program is written in R which allows computation of p-values and conditional power, two common statistical measures used for the evaluation of trial results. A simulation study is performed on data arising from an actual phase II clinical trial performed recently, in which the study conclusion regarding the efficacy of the treatment was debatable. Results: Finite Markov chain imbedding is shown to be useful for evaluating phase II oncology clinical trial results. The R code written for the simulation study is demonstrated to be fast and useful for investigating different trial designs. Further details regarding the clinical trial results are presented, including the potential prolongation of stable disease by the treatment, a potentially useful marker of efficacy for this cytostatic agent. Conclusions: This novel methodology may prove to be a useful investigative technique for the evaluation of phase II oncology clinical trial data. Future studies which have disputable conclusions might become less controversial with the aid of finite Markov chain imbedding and the multiple evaluations which it makes viable. Better understanding of the activity of a given treatment might expedite the drug development process or help distinguish active from inactive treatments.
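As an illustration of the flavour of the approach (a simplified sketch, not the R program described in the thesis; trial numbers are invented), the running count of responses in a single-arm trial can be imbedded in a finite Markov chain, and conditional power obtained by propagating the interim state through the remaining transitions:

```python
import numpy as np

def success_probability(n_remaining, successes_so_far, p, threshold):
    """Imbed the running response count in a chain on states 0..threshold
    (threshold is an absorbing cap) and propagate the state distribution
    over the remaining patients.

    Returns P(total responses >= threshold), i.e. conditional power under
    response probability p given the interim result.
    """
    n_states = threshold + 1
    T = np.zeros((n_states, n_states))
    for s in range(threshold):
        T[s, s] = 1 - p              # next patient does not respond
        T[s, s + 1] = p              # next patient responds
    T[threshold, threshold] = 1.0    # absorbing: threshold already reached
    dist = np.zeros(n_states)
    dist[min(successes_so_far, threshold)] = 1.0
    for _ in range(n_remaining):     # one transition per remaining patient
        dist = dist @ T
    return dist[threshold]

# hypothetical example: 12 responses needed, 7 seen after 25 of 40 patients
print(success_probability(n_remaining=15, successes_so_far=7, p=0.3, threshold=12))
```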
38

Analysis of Memory Interference in Buffered Multi-processor Systems in Presence of Hot Spots and Favorite Memories

Sen, Sanjoy Kumar 08 1900 (has links)
In this thesis, a discrete Markov chain model for analyzing memory interference in multiprocessors is presented.
39

Methods for Bayesian inversion of seismic data

Walker, Matthew James January 2015 (has links)
The purpose of Bayesian seismic inversion is to combine information derived from seismic data and prior geological knowledge to determine a posterior probability distribution over parameters describing the elastic and geological properties of the subsurface. Typically the subsurface is modelled by a cellular grid model containing thousands or millions of cells within which these parameters are to be determined. Such inversions are therefore computationally expensive, because the size of the parameter space over which the posterior is to be determined is proportional to the number of grid cells. In practice, approximations to Bayesian seismic inversion must be considered. A particular, existing approximate workflow is described in this thesis: the so-called two-stage inversion method explicitly splits the inversion problem into elastic and geological inversion stages. These two stages sequentially estimate the elastic parameters given the seismic data, and then the geological parameters given the elastic parameter estimates. In this thesis a number of methodologies are developed which enhance the accuracy of this approximate workflow. To reduce computational cost, existing elastic inversion methods often incorporate only simplified prior information about the elastic parameters. A method is therefore introduced which transforms such results, obtained using prior information specified with only two-point geostatistics, into new estimates containing sophisticated multi-point geostatistical prior information. The method uses a so-called deep neural network, trained using only synthetic instances (or 'examples') of these two kinds of estimate, to apply this transformation. The method is shown to improve the resolution and accuracy (by comparison to well measurements) of elastic parameter estimates determined for a real hydrocarbon reservoir. It has been shown previously that so-called mixture density network (MDN) inversion can be used to solve geological inversion analytically (and thus very rapidly and efficiently), but only under certain assumptions about the geological prior distribution. A so-called prior replacement operation is developed here which can be used to relax these requirements. It permits the efficient MDN method to be incorporated into general stochastic geological inversion methods which are free from the restrictive assumptions. Such methods rely on Markov-chain Monte-Carlo (MCMC) sampling, which estimates the posterior (over the geological parameters) by producing a correlated chain of samples from it. It is shown that this approach can yield biased estimates of the posterior. An alternative method which obtains a set of uncorrelated samples from the posterior is therefore developed, avoiding the possibility of bias in the estimate. The new method was tested on a synthetic geological inversion problem; its results compared favourably to those of Gibbs sampling (an MCMC method) on the same problem, which exhibited very significant bias. The geological prior information used in seismic inversion can be derived from real images which bear similarity to the geology anticipated within the target region of the subsurface. Such so-called training images, from which this information (in the form of geostatistics) may be extracted, are not always available. In that case appropriate training images may be generated by geological experts, but this process can be costly and difficult. An elicitation method (based on a genetic algorithm) is therefore developed here which obtains the appropriate geostatistics reliably and directly from a geological expert, without the need for training images. Twelve experts were asked to use the algorithm (individually) to determine the appropriate geostatistics for a physical (target) geological image. The majority of the experts were able to obtain a set of geostatistics consistent with the true (measured) statistics of the target image.
40

Using Markov chain to describe the progression of chronic disease

Davis, Sijia January 1900 (has links)
Master of Science / Department of Statistics / Abigail Jager / A discrete-time Markov chain with stationary transition probabilities is often used for the purpose of investigating treatment programs and health care protocols for chronic disease. Suppose the patients with a certain chronic disease are observed over equally spaced time intervals. If we classify the chronic disease into n distinct health states, the movement through these health states over time then represents a patient's disease history. We can use a discrete-time Markov chain to describe such movement using the transition probabilities between the health states. The purpose of this study was to investigate both the case in which the observation interval coincides with the cycle length of the Markov chain and the case in which it does not. In particular, we are interested in how the estimated transition matrix behaves as the ratio of observation interval to cycle length changes. Our results suggest that, for small sample sizes, more estimation problems arose as the observation interval lengthened, and that the deviation from the known transition probability matrix grew larger as the observation interval increased. With increasing sample size, there were fewer estimation problems and the deviation from the known transition probability matrix was reduced.
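A small sketch of the issue investigated here (using an invented three-state transition matrix, not data from an actual disease): when patients are only observed every k cycles, the naive maximum-likelihood estimate recovers the k-step matrix P^k rather than the single-cycle matrix P.

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """MLE of a stationary transition matrix: count i -> j moves between
    consecutive observations and normalise each row by its departures."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)

rng = np.random.default_rng(0)
P = np.array([[0.85, 0.10, 0.05],        # illustrative 3-state disease chain
              [0.10, 0.70, 0.20],
              [0.00, 0.00, 1.00]])       # state 2 absorbing (e.g. death)
k = 2                                     # observation interval = 2 cycles
paths = []
for _ in range(500):                      # 500 simulated patients, 20 cycles each
    state, path = 0, [0]
    for _ in range(20):
        state = rng.choice(3, p=P[state])
        path.append(state)
    paths.append(path[::k])               # keep only every k-th observation
print(estimate_transition_matrix(paths, 3))   # approximates P^k, not P
print(np.linalg.matrix_power(P, k))
```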
