291

A Study of Inverses of Thinned Renewal Processes.

Huang, Chuen-Dow 26 June 2002
We study the properties of thinning and Markov chain thinning of renewal processes. Among other questions, we investigate whether certain special renewal processes can be obtained through Markov chain thinning.
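As background for readers unfamiliar with the operation, the sketch below simulates classical (Bernoulli) thinning and one simple form of Markov chain thinning of a renewal process; the two-state retention chain and its transition matrix are illustrative assumptions, not constructions taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arrival times of a renewal process with i.i.d. exponential interarrivals
# (i.e., a rate-1 Poisson process, the simplest renewal process).
arrivals = np.cumsum(rng.exponential(size=10_000))

def bernoulli_thinning(points, p):
    """Classical thinning: keep each point independently with probability p."""
    return points[rng.random(points.size) < p]

def markov_chain_thinning(points, P, start=0):
    """Markov chain thinning: a two-state chain {0 = drop, 1 = keep} is stepped
    once per point, and a point is retained when the chain is in state 1."""
    state, kept = start, []
    for t in points:
        state = rng.choice(2, p=P[state])
        if state == 1:
            kept.append(t)
    return np.array(kept)

thinned_b = bernoulli_thinning(arrivals, p=0.3)
thinned_m = markov_chain_thinning(arrivals, P=np.array([[0.7, 0.3],
                                                        [0.4, 0.6]]))
```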
292

Optimal filter design approaches to statistical process control for autocorrelated processes

Chin, Chang-Ho 01 November 2005
Statistical Process Control (SPC), and in particular control charting, is widely used to achieve and maintain control of various processes in manufacturing. A control chart is a graphical display that plots quality characteristics against the sample number or time. Interest in the effective implementation of control charts for autocorrelated processes has increased in recent years; however, because of the complexities involved, few systematic design approaches have thus far been developed. Many control charting methods can be viewed as charting the output of a linear filter applied to the process data. In this dissertation, we generalize the concept of linear filters for control charts and propose new control charting schemes based on this generalization: the general linear filter (GLF) and the 2nd-order linear filter. In addition, optimal design methodologies are developed, in which the filter parameters are selected to minimize the out-of-control Average Run Length (ARL) while constraining the in-control ARL to some desired value. The optimal linear filters are compared with other methods in terms of ARL performance, and a number of their interesting characteristics are discussed for various types of mean shifts (step, spike, sinusoidal) and various ARMA process models (i.i.d., AR(1), ARMA(1,1)). A new discretization approach is also proposed that substantially reduces the computational time and memory use of the Markov chain method for calculating the ARL. Finally, a gradient-based optimization strategy for searching for optimal linear filters is illustrated.
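For reference, the classical Markov chain method for computing the ARL (the baseline that the dissertation's new discretization accelerates) can be sketched for an EWMA chart, a chart whose statistic is exactly a first-order linear filter of the data. The discretization below is the textbook Brook-Evans construction, not the dissertation's variant, and the parameter values in the usage comment are placeholders.

```python
import numpy as np
from scipy.stats import norm

def ewma_arl(lam, h, mu=0.0, m=201):
    """ARL of an EWMA chart z_t = (1-lam) z_{t-1} + lam * x_t with control
    limits +-h, for x_t ~ N(mu, 1), via the Brook-Evans Markov chain
    discretization of the in-control region [-h, h]."""
    w = 2.0 * h / m                              # width of each discretized state
    s = -h + w * (np.arange(m) + 0.5)            # state midpoints
    # Q[i, j] = P(chart moves from midpoint s[i] into state j's cell)
    upper = (s[None, :] + w / 2 - (1 - lam) * s[:, None]) / lam
    lower = (s[None, :] - w / 2 - (1 - lam) * s[:, None]) / lam
    Q = norm.cdf(upper - mu) - norm.cdf(lower - mu)
    # ARL from each transient state solves (I - Q) a = 1; start at z_0 = 0
    a = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return a[m // 2]

# e.g. in-control ARL vs. ARL after a one-sigma step shift:
# ewma_arl(0.1, 0.62), ewma_arl(0.1, 0.62, mu=1.0)
```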
293

MULTI-STATE MODELS FOR INTERVAL CENSORED DATA WITH COMPETING RISK

Wei, Shaoceng 01 January 2015
Multi-state models are often used to evaluate the effect of death as a competing event to the development of dementia in a longitudinal study of the cognitive status of elderly subjects. In this dissertation, both a multi-state Markov model and a semi-Markov model are used to characterize the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. First, a multi-state Markov model with three transient states (intact cognition, mild cognitive impairment (M.C.I.), and global impairment (G.I.)) and one absorbing state (dementia) is used to model the cognitive panel data. A Weibull model and a Cox proportional hazards (Cox PH) model are used to fit the time to death based on age at entry and APOE4 status, and a shared random effect links this survival time to the transition model. Second, we apply a semi-Markov process in which the waiting times are assumed to be Weibull distributed, except for transitions from the baseline state, which are exponentially distributed; we further assume that no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher-order integrals needed for likelihood-based estimation. Finally, we extend a non-parametric “local EM algorithm” to obtain a smooth estimator of the cause-specific hazard function (CSH) in the presence of competing risk. All the proposed methods are justified by simulation studies and by applications to the Nun Study data, a longitudinal study of late-life cognition in a cohort of 461 subjects.
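As a schematic of the first model's structure, a discrete-time transition matrix over the three transient cognitive states and the absorbing dementia state might look as follows; the probabilities are illustrative placeholders, not estimates from the Nun Study, and the competing death risk handled by the shared random effect is omitted.

```python
import numpy as np

# States: 0 intact, 1 M.C.I., 2 G.I., 3 dementia (absorbing).
# Transition probabilities below are made up for illustration.
P = np.array([
    [0.80, 0.15, 0.03, 0.02],
    [0.10, 0.70, 0.15, 0.05],
    [0.02, 0.10, 0.73, 0.15],
    [0.00, 0.00, 0.00, 1.00],
])

start = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts cognitively intact
dist_after_5 = start @ np.linalg.matrix_power(P, 5)
print(dist_after_5)                       # state occupancy after 5 assessments
```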
294

Predictions Within and Across Aquatic Systems using Statistical Methods and Models / Prediktioner inom och mellan akvatiska system med statistiska metoder och modeller

Dimberg, Peter H. January 2015
Aquatic ecosystems are an essential source of life and, in many regions, are exploited to a degree that deteriorates their ecological status. Today, more than 50% of European lakes have an unsatisfactory ecological status. Many of these lakes require abatement actions to improve their status, and mathematical models have great potential for predicting and evaluating different abatement actions and their outcomes. Several statistical methods and models exist for these purposes; however, many of them are not constructed from a sufficient amount or quality of data, are too complex to be used by most managers, or are too site-specific. The main aim of this thesis was therefore to present statistical methods and models that are easy for managers to use, are general, and provide insights for the development of similar methods and models. To reach this aim, several statistical and modelling procedures were investigated and applied: genetic programming (GP), multiple regression, Markov chains, and, finally, well-established criteria for r² and p-values in developing a method for determining temporal trends. The statistical methods and models were mainly based on chlorophyll-a (chl-a) and total phosphorus (TP) concentrations, but some of them can be transferred directly to other variables. The main findings were that multiple regressions outperform GP in predicting summer chl-a concentrations, and that multiple regressions can describe chl-a seasonality in general terms using summer TP concentrations and latitude as independent variables. It is also possible to calculate, using Markov chains, the probability of exceeding a given chl-a concentration in future months. Results showed that deep-water concentrations were in general closely related to surface-water concentrations together with morphometric parameters; these independent variables can therefore be used in mass-balance models to estimate the mass in deep waters. A new statistical method was derived and applied to confirm whether variables have changed over time in cases where traditional methods have failed. Finally, it is concluded that the statistical methods and models developed in this thesis will improve the understanding of predictions within and across aquatic systems.
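The Markov chain component can be illustrated with a two-state monthly chain (below/above a chl-a threshold); the transition probabilities below are made-up placeholders, not values fitted in the thesis.

```python
import numpy as np

# States: 0 = chl-a below threshold, 1 = above. Monthly transition
# probabilities here are illustrative placeholders only.
P = np.array([[0.85, 0.15],
              [0.40, 0.60]])

def exceedance_prob(months_ahead, current_state=0):
    """P(chl-a is above the threshold in `months_ahead` months)."""
    return np.linalg.matrix_power(P, months_ahead)[current_state, 1]

print(exceedance_prob(3))   # three months ahead, starting below the threshold
```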
295

Bayesian Inference in Large Data Problems

Quiroz, Matias January 2015
Over the last decade or so, there has been a dramatic increase in storage capacity and in the ability to process huge amounts of data. This has made large, high-quality data sets widely accessible to practitioners, and this technological innovation seriously challenges traditional modeling and inference methodology. This thesis is devoted to developing inference and modeling tools for handling large data sets. Four included papers treat various important aspects of this topic, with a special emphasis on Bayesian inference by scalable Markov chain Monte Carlo (MCMC) methods. In the first paper, we propose a novel mixture-of-experts model for longitudinal data. The model and inference methodology allow for manageable computations with a large number of subjects, and the model dramatically improves out-of-sample predictive density forecasts compared to existing models. The second paper develops a scalable MCMC algorithm: ideas from the survey-sampling literature are used to estimate the likelihood on a random subset of the data, the likelihood estimate is used within the pseudo-marginal MCMC framework, and we develop a theoretical framework for such subset-based algorithms. The third paper further develops these ideas: we introduce the difference estimator in this framework and modify the methods for estimating the likelihood on a random subset of the data, which yields scalable inference for a wider class of models. Finally, the fourth paper brings the survey-sampling tools for estimating the likelihood into the delayed-acceptance MCMC framework; we compare with an existing approach in the literature and document promising results for our algorithm. / At the time of the doctoral defense, the following papers were unpublished and had the following status: Paper 1: submitted; Paper 2: submitted; Paper 3: manuscript; Paper 4: manuscript.
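A heavily simplified sketch of the idea in papers 2 and 3, estimating the log-likelihood on a random subset inside a Metropolis-Hastings chain, is given below for an i.i.d. N(θ, 1) model with a flat prior. The bias that arises from exponentiating a log-likelihood estimate, which the papers treat rigorously, is deliberately ignored here.

```python
import numpy as np

rng = np.random.default_rng(1)

def subsampled_loglik(theta, data, m):
    """Unbiased estimate of the full-data log-likelihood from a simple
    random subsample of size m (a basic survey-sampling estimator)."""
    idx = rng.choice(data.size, size=m, replace=False)
    ll = -0.5 * (data[idx] - theta) ** 2 - 0.5 * np.log(2 * np.pi)
    return data.size * ll.mean()     # scale the subsample mean up to n terms

def subsample_mh(data, n_iter=5_000, m=200, step=0.05):
    theta = 0.0
    llhat = subsampled_loglik(theta, data, m)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        llprop = subsampled_loglik(prop, data, m)
        # flat prior; exponentiating a log-likelihood *estimate* biases the
        # acceptance ratio, and the papers' correction is omitted here
        if np.log(rng.random()) < llprop - llhat:
            theta, llhat = prop, llprop
        draws.append(theta)
    return np.array(draws)

data = rng.normal(1.0, 1.0, size=100_000)
posterior = subsample_mh(data)
```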
296

System Studies and Simulations of Distributed Photovoltaics in Sweden

Widén, Joakim January 2010
Grid-connected photovoltaic (PV) capacity is increasing worldwide, mainly due to extensive subsidy schemes for renewable electricity generation. A majority of newly installed systems are distributed small-scale systems located in distribution grids, often at residential customers. Recent developments suggest that such distributed PV generation (PV-DG) could gain more interest in Sweden in the near future, and with prospects of decreasing system prices an extensive integration does not seem impossible. This PhD thesis studies the opportunities for utilising on-site PV generation and the consequences of a widespread introduction. The specific aims are to improve the modelling of residential electricity demand as a basis for simulations, to study the load matching and grid interaction of on-site PV, and to add to the understanding of power-system impacts. Time-use data (TUD) provided a realistic basis for residential load modelling, and both a deterministic and a stochastic approach for generating different types of end-use profiles were developed. The models realistically reproduce important electric-load properties such as diurnal and seasonal variations, short-time-scale fluctuations, and random load coincidence. The load-matching capability of residential on-site PV was found to be low by default but possible to improve to some extent by different measures. Net metering reduces the economic effects of the mismatch and has a decisive impact on the production value and on the system sizes that are reasonable for a small-scale producer to install. The impacts of large-scale PV-DG on low-voltage (LV) grids and on the national power system were also studied. Power-flow studies showed that voltage rise in LV grids is not a limiting factor for the integration of PV-DG. Variability and correlations with large-scale wind power were determined using a scenario for large-scale building-mounted PV, and profound impacts on the power system were found only for the most extreme scenarios. / Incorrectly printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 711
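"Load matching" in this context is commonly summarized by two complementary ratios; a minimal sketch, assuming hourly demand and PV production series in the same units (the metric names vary across the literature and are not taken from the thesis):

```python
import numpy as np

def load_matching(load, pv):
    """Two common load-match metrics for hourly demand/production arrays."""
    used_onsite = np.minimum(load, pv).sum()      # PV consumed directly on site
    solar_fraction = used_onsite / load.sum()     # share of demand met by PV
    self_consumption = used_onsite / pv.sum()     # share of PV used on site
    return solar_fraction, self_consumption
```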
297

Particle filters and Markov chains for learning of dynamical systems

Lindsten, Fredrik January 2013
Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms, which rely on SMC for generating samples from the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward-sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler, which results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener-system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models. Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than are otherwise available. For the filtering problem, this can be done with the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies. / CNDM / CADICS
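PGAS itself is too involved to reproduce here, but its SMC backbone, the bootstrap particle filter (which also yields the unbiased likelihood estimate that particle MCMC builds on), can be sketched as follows; the model functions and the linear Gaussian example are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_pf(y, n, f_sample, g_logpdf, x0_sample):
    """Bootstrap particle filter for x_t ~ f(.|x_{t-1}), y_t ~ g(.|x_t).
    Returns the log of the (unbiased) SMC likelihood estimate."""
    x = x0_sample(n)
    loglik = 0.0
    for t in range(len(y)):
        if t > 0:
            idx = rng.choice(n, size=n, p=w)     # multinomial resampling
            x = f_sample(x[idx])                 # propagate through the dynamics
        logw = g_logpdf(y[t], x)                 # weight by the observation density
        c = logw.max()
        w = np.exp(logw - c)
        loglik += c + np.log(w.mean())           # accumulate likelihood estimate
        w = w / w.sum()
    return loglik

# Example model: x_t = 0.9 x_{t-1} + v_t, y_t = x_t + e_t, both noises N(0, 1)
f = lambda x: 0.9 * x + rng.standard_normal(x.size)
g = lambda y_t, x: -0.5 * (y_t - x) ** 2 - 0.5 * np.log(2 * np.pi)
x0 = lambda n: rng.standard_normal(n)
y = rng.standard_normal(50)                      # stand-in observations
print(bootstrap_pf(y, 500, f, g, x0))
```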
298

Statistical Inference for Models with Intractable Normalizing Constants

Jin, Ick Hoon 16 December 2013
In this dissertation, we propose two new algorithms for statistical inference in models with intractable normalizing constants: the Monte Carlo Metropolis-Hastings (MCMH) algorithm and the Bayesian Stochastic Approximation Monte Carlo (BSAMC) algorithm. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm: at each iteration, it replaces the unknown normalizing-constant ratio by a Monte Carlo estimate. Although the algorithm violates the detailed balance condition, we show that it still converges to the desired target distribution under mild conditions. The BSAMC algorithm works by simulating from a sequence of approximated distributions using the SAMC algorithm, and a strong law of large numbers is established for BSAMC estimators under mild conditions. One significant advantage of our algorithms over auxiliary-variable MCMC methods is that they avoid the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is unavailable or very expensive. In addition, although BSAMC also involves approximating the normalizing constant, it is robust to initial parameter guesses thanks to SAMC's powerful sample-space exploration. BSAMC also provides a general framework for approximate Bayesian inference in models with intractable likelihood functions: sampling from a sequence of approximated distributions whose average converges to the target distribution. With these two algorithms, we demonstrate how the SAMCMC method can be applied to estimate the parameters of exponential random graph models (ERGMs), a typical example of statistical models with intractable normalizing constants, and we show that the resulting estimate is consistent, asymptotically normal, and asymptotically efficient. Compared with the MCMLE and SSA methods, a significant advantage of SAMCMC is that it overcomes the model-degeneracy problem. Its strength comes from its varying truncation mechanism, which enables SAMCMC to escape degeneracy through re-initialization; MCMLE and SSA lack such a re-initialization mechanism and tend to converge to a solution near the starting point, so they often fail for models that suffer from degeneracy.
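A schematic of the MCMH idea for a one-parameter exponential-family model f(x|θ) = exp(θ S(x))/Z(θ): the intractable ratio Z(θ')/Z(θ) in the acceptance probability is replaced by an importance-sampling estimate built from auxiliary draws at the current θ. The auxiliary sampler is a user-supplied placeholder (in practice an inner MCMC run), and the flat prior and Gaussian random-walk proposal are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def mcmh_step(theta, x_obs, S, sample_aux, step=0.1, m=100):
    """One MCMH step for f(x | th) = exp(th * S(x)) / Z(th), flat prior.
    sample_aux(th, m) must return m (approximate) draws from f(. | th),
    typically produced by an inner MCMC run."""
    prop = theta + step * rng.standard_normal()
    aux_stats = np.array([S(x) for x in sample_aux(theta, m)])
    # Z(prop)/Z(theta) = E_theta[exp((prop - theta) * S(x))], estimated by
    # the auxiliary-sample average:
    z_ratio_hat = np.mean(np.exp((prop - theta) * aux_stats))
    log_alpha = (prop - theta) * S(x_obs) - np.log(z_ratio_hat)
    return prop if np.log(rng.random()) < log_alpha else theta
```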
299

Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions

Karawatzki, Roman; Leydold, Josef; Pötzelberger, Klaus January 2005
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method and show that it is well suited as an algorithm to generate points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with the dimension. An implementation of these algorithms in C is available from the authors at http://statmath.wu-wien.ac.at/software/hitro/. / Series: Research Report Series / Department of Statistics and Mathematics
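A rough one-dimensional sketch of the combination: hit-and-run applied to the Ratio-of-Uniforms region A = {(u, v): 0 < u ≤ sqrt(f(v/u))}, whose uniform samples map to draws from f via x = v/u. The user-supplied bounding box and the rejection step along each chord rely on A being convex (true for log-concave f); this is an illustrative toy under those assumptions, not the authors' C implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def hitro_1d(f, x_start, n, u_max, v_min, v_max, eps=1e-12):
    """Hit-and-Run over A = {(u, v): 0 < u <= sqrt(f(v/u))}; x = v/u ~ f."""
    in_A = lambda u, v: (u > 0.0) and (u <= np.sqrt(f(v / u)))
    u = 0.5 * np.sqrt(f(x_start))            # an interior point, if f(x_start) > 0
    v = x_start * u
    samples = []
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * np.pi)  # random direction on the circle
        du, dv = np.cos(phi), np.sin(phi)
        # intersect the line (u, v) + t (du, dv) with the bounding box
        bounds = []
        for pos, d, lo, hi in ((u, du, eps, u_max), (v, dv, v_min, v_max)):
            if abs(d) > eps:
                bounds.append(sorted(((lo - pos) / d, (hi - pos) / d)))
        t_lo = max(b[0] for b in bounds)
        t_hi = min(b[1] for b in bounds)
        # A convex, so the chord through A is an interval; rejection from the
        # box chord gives a uniform point on it
        while True:
            t = rng.uniform(t_lo, t_hi)
            if in_A(u + t * du, v + t * dv):
                u, v = u + t * du, v + t * dv
                break
        samples.append(v / u)
    return np.array(samples)

# e.g. a standard normal density up to a constant:
# hitro_1d(lambda x: np.exp(-0.5 * x * x), 0.0, 10_000, 1.0, -1.0, 1.0)
```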
300

A MARKOV TRANSITION MODEL TO DEMENTIA WITH DEATH AS A COMPETING EVENT

Xu, Liou 01 January 2010
The research on multi-state Markov transition models is motivated by the nature of the longitudinal data from the Nun Study (Snowdon, 1997) and by similar information on the BRAiNS cohort (Salazar, 2004). Our goal is to develop a flexible methodology for handling the categorical longitudinal responses and competing-risks time-to-event that characterize data in dementia research. To do so, we treat the time to death as a continuous variable rather than defining death as a competing absorbing state to dementia. We assume that, within each subject, the survival component and the Markov process are linked by a shared latent random effect and, moreover, that these two pieces are conditionally independent given the random effect and their corresponding predictor variables. The dependence among observations made on the same subject (repeated measurements) is addressed by assuming a first-order Markovian dependence structure. A closed-form expression for the individual, and thus the overall, conditional marginal likelihood function is derived, which can be evaluated numerically to produce maximum likelihood estimates of the unknown parameters. The method can be implemented using standard statistical software such as SAS Proc Nlmixed©. We present the results of simulation studies designed to show how the model’s ability to accurately estimate the parameters is affected by the distributional form of the survival term. We then address the confounding effect of the subject’s residual life time in the nonhomogeneous chain: the convergence status of the chain is examined and the formulation of the absorption statistics is derived. We propose using the Delta method to estimate the variance terms needed to construct confidence intervals. The results are illustrated in detail with applications to the Nun Study data.
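The "absorption statistics" mentioned here are the standard quantities of an absorbing Markov chain, which follow from the fundamental matrix; the transition probabilities below are illustrative placeholders, not estimates from the Nun Study.

```python
import numpy as np

# Transient states: intact, M.C.I., G.I.; absorbing state: dementia.
# Q (transient -> transient) and R (transient -> absorbing) are made up.
Q = np.array([[0.80, 0.15, 0.03],
              [0.10, 0.70, 0.15],
              [0.02, 0.10, 0.73]])
R = np.array([[0.02], [0.05], [0.15]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits per state
expected_steps = N @ np.ones(3)    # expected assessments until absorption, by start state
absorption_probs = N @ R           # all 1 here, since dementia is the only absorbing state
```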
