251 |
Reversible Jump Markov Chain Monte Carlo. Neuhoff, Daniel 15 March 2016 (has links)
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time series modeling perspective.
|
252 |
The use of supercapacitors in conjunction with batteries in industrial auxiliary DC power systems / Ruan Pekelharing. Pekelharing, Ruan January 2015 (has links)
Control and monitoring networks often operate on AC/DC power systems. DC batteries and chargers are commonly used on industrial plants as auxiliary DC power systems for these control and monitoring networks. The energy demand and load profiles for these control networks differ from application to application. Proper design, sizing, and maintenance of the components that form part of the DC control power system are therefore required.
Throughout the load profile of a control and monitoring system there are various peak currents, classified as inrush and momentary loads. These inrush and momentary loads play a large role when calculating the required battery size for an application. This study investigates the feasibility of using supercapacitors in conjunction with batteries to reduce the required battery capacity. A reduction in the required battery capacity influences not only the cost of the battery itself, but also the hydrogen emissions, the physical space requirements, and the required rectifiers and chargers.
When calculating the required battery size for an auxiliary power system, a defined load profile is required. Control and monitoring systems are used to control dynamic processes, which entails continuous starting and stopping of equipment as the process demands. This starting and stopping of devices causes fluctuations in the load profile. Ideally, data should be obtained from a live plant for the purpose of defining load profiles. Unfortunately, due to the economic risks involved, installing data logging equipment on a live industrial plant for the purpose of research is not allowed. There are also no historical data available from which load profiles could be generated.
In order to evaluate the influence of supercapacitors, complex load profiles are required. In this study, an alternative method of defining the load profile for a dynamic process is investigated. Load profiles for various applications are approximated using a probabilistic approach.
The approximation methodology makes use of plant operating philosophies as input to a Markov chain Monte Carlo simulation. The required battery sizes for the approximated profiles are calculated using the IEEE recommended practice for sizing batteries. The approximated load profile, as well as the calculated battery size, are used for simulating the auxiliary power system.
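As a rough illustration of the probabilistic load-profile idea (a sketch, not the study's actual methodology), a two-state Markov chain for a single device can generate a profile containing inrush peaks; the transition probabilities and current values below are invented placeholders:

```python
import random

def simulate_load_profile(steps, p_start=0.05, p_stop=0.10,
                          base_a=2.0, run_a=8.0, inrush_a=40.0, seed=1):
    """Sketch: a two-state Markov chain (device OFF/ON) drives the load.

    A transition OFF->ON adds a one-step inrush current on top of the
    running current. All ampere values and probabilities here are
    illustrative assumptions, not figures from the study.
    """
    rng = random.Random(seed)
    on = False
    profile = []
    for _ in range(steps):
        if not on and rng.random() < p_start:
            on = True
            profile.append(base_a + run_a + inrush_a)  # inrush peak on start
        elif on and rng.random() < p_stop:
            on = False
            profile.append(base_a)                     # device just stopped
        else:
            profile.append(base_a + (run_a if on else 0.0))
    return profile

profile = simulate_load_profile(1000)
```

A profile like this could then feed a battery-sizing calculation; a real plant model would superpose many such chains, one per device class in the operating philosophy.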
A supercapacitor is introduced into the circuit and the simulations are repeated. The introduction of the supercapacitor relieves the battery of the inrush and momentary loads of the load profile. The battery sizing calculations are repeated so as to test the influence of the supercapacitor on the required battery capacity.
In order to investigate the full influence of adding a supercapacitor to the design, the impact on various factors is considered. In this study, these factors include the battery size, charger size, H2 extraction system, as well as the maintenance requirements and the life of the battery.
No major cost savings were evident from the results obtained. The primary reasons for this low cost saving are the fixed ranges in which battery sizes are available, as well as the conservative battery data obtained from battery suppliers. It is believed that applications other than control and monitoring systems will show larger savings. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2015
|
254 |
Topics in Modern Bayesian Computation. Qamar, Shaan January 2015 (has links)
Collections of large volumes of rich and complex data have become ubiquitous in recent years, posing new challenges in methodological and theoretical statistics alike. Today, statisticians are tasked with developing flexible methods capable of adapting to the degree of complexity and noise in increasingly rich data gathered across a variety of disciplines and settings. This has spurred the need for novel multivariate regression techniques that can efficiently capture a wide range of naturally occurring predictor-response relations, identify important predictors and their interactions, and do so even when the number of predictors is large but the sample size remains limited.

Meanwhile, efficient model fitting tools must evolve quickly to keep pace with the rapidly growing dimension and complexity of the data they are applied to. Aided by the tremendous success of modern computing, Bayesian methods have gained great popularity in recent years. These methods provide a natural probabilistic characterization of uncertainty in the parameters and in predictions. In addition, they provide a practical way of encoding model structure that can lead to large gains in statistical estimation and more interpretable results. However, this flexibility is often hindered in applications to modern data, which are increasingly high dimensional, both in the number of observations n and the number of predictors p. Here, computational complexity and the curse of dimensionality typically render posterior computation inefficient. In particular, Markov chain Monte Carlo (MCMC) methods, which remain the workhorse of Bayesian computation owing to their generality and asymptotic accuracy guarantees, typically suffer data processing and computational bottlenecks as a consequence of (i) the need to hold the entire dataset (or available sufficient statistics) in memory at once; and (ii) having to evaluate the (often expensive to compute) data likelihood at each sampling iteration.

This thesis divides into two parts. The first part concerns itself with developing efficient MCMC methods for posterior computation in the high dimensional large-n, large-p setting. In particular, we develop an efficient and widely applicable approximate inference algorithm that extends MCMC to the online data setting, and separately propose a novel stochastic search sampling scheme for variable selection in high dimensional predictor settings. The second part of this thesis develops novel methods for structured sparsity in the high-dimensional large-p, small-n regression setting. Here, statistical methods should scale well with the predictor dimension and be able to efficiently identify low dimensional structure so as to facilitate optimal statistical estimation in the presence of limited data. Importantly, these methods must be flexible enough to accommodate potentially complex relationships between the response and its associated explanatory variables. The first work proposes a nonparametric additive Gaussian process model to learn predictor-response relations that may be highly nonlinear and include numerous lower order interaction effects, possibly in different parts of the predictor space. A second work proposes a novel class of Bayesian shrinkage priors for multivariate regression with a tensor valued predictor. Dimension reduction is achieved using a low-rank additive decomposition for the latter, enabling a highly flexible and rich structure within which excellent cell estimation and region selection may be obtained through state-of-the-art shrinkage methods. In addition, the methods developed in these works come with strong theoretical guarantees. / Dissertation
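The MCMC bottlenecks (i) and (ii) can be seen in even the simplest sampler. Below is a textbook random-walk Metropolis sketch for a Gaussian mean model under a flat prior (an illustrative setup, not an algorithm from the thesis); note that the full-data likelihood is recomputed at every iteration:

```python
import math
import random

def log_likelihood(theta, data):
    # Unit-variance Gaussian mean model.  Every call touches every
    # observation -- this is bottleneck (ii) in the abstract above.
    return -0.5 * sum((x - theta) ** 2 for x in data)

def random_walk_metropolis(data, n_iter=2000, step=0.5, seed=0):
    """Textbook random-walk Metropolis under a flat prior (a sketch)."""
    rng = random.Random(seed)
    theta = 0.0
    ll = log_likelihood(theta, data)          # one full pass over the data
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = log_likelihood(prop, data)  # ...and another, every iteration
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

gen = random.Random(42)
data = [gen.gauss(3.0, 1.0) for _ in range(200)]
samples = random_walk_metropolis(data)
```

With n observations and T iterations the cost is O(nT) likelihood terms, which is exactly what online and subsampling MCMC variants try to avoid.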
|
255 |
AN EXTENSION OF SOQPSK TO M-ARY SIGNALLING. Bishop, Chris, Fahey, Mike 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Shaped Offset Quadrature Phase Shift Keying (SOQPSK) has the advantages of low sidelobes and high detection probability; however, its main lobe has a fixed width set by the number of constellation points. By slightly modifying the modulation scheme, the four constellation points of quadrature phase shift keying can be changed to M constellation points, where M is a power of 2. After this change, the power spectral density (PSD) retains low sidelobes, and the desirable property of being able to detect the signal by integrating over two symbol periods is retained.
|
256 |
Bayesian Generative Modeling of Complex Dynamical Systems. Guan, Jinyan January 2016 (has links)
This dissertation presents a Bayesian generative modeling approach to complex dynamical systems, applied to emotion-interaction patterns within multivariate data collected in social psychology studies. While dynamical models have been used by social psychologists in recent years to study complex psychological and behavioral patterns, most of these studies have been limited by using regression methods to fit the model parameters from noisy observations. These regression methods rely mostly on estimates of derivatives from the noisy observations, and thus easily result in overfitting and fail to predict future outcomes. A Bayesian generative model solves this problem by integrating prior knowledge of where the data come from with the observed data through posterior distributions. It allows the development of theoretical ideas and mathematical models to be independent of inference concerns. In addition, Bayesian generative statistical modeling allows the model to be evaluated on its predictive power, rather than on residual error reduction as in regression methods, preventing overfitting in social psychology data analysis. Within this Bayesian generative modeling approach, the dissertation uses the State Space Model (SSM) to model the dynamics of emotion interactions. Specifically, it tests the approach in a class of psychological models aimed at explaining the emotional dynamics of interacting couples in committed relationships. The latent states of the SSM are continuous real numbers that represent the level of the true emotional states of both partners. One can obtain the latent states at all subsequent time points by evolving a differential equation (typically a coupled linear oscillator (CLO)) forward in time from some known initial state at the starting time. The multivariate observed states include self-reported emotional experiences and physiological measurements of both partners during the interactions.
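As a concrete illustration of evolving the latent state forward with a coupled linear oscillator, here is a minimal sketch; the frequencies, coupling strengths, initial states, and the semi-implicit Euler integrator are illustrative assumptions, not the dissertation's actual parameterisation:

```python
import numpy as np

def simulate_clo(x0, v0, freq, coupling, dt=0.05, steps=400):
    """Evolve a two-partner coupled linear oscillator (CLO) forward in time.

    x0, v0: initial latent emotional states and velocities of the two
    partners; freq and coupling are per-partner restoring and coupling
    parameters.  All values here are illustrative placeholders.
    """
    x = np.asarray(x0, dtype=float)
    v = np.asarray(v0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        # acceleration: own restoring force plus a pull toward the partner
        a = -freq * x + coupling * (x[::-1] - x)
        v = v + dt * a          # semi-implicit Euler: update velocity first,
        x = x + dt * v          # then position with the new velocity
        path.append(x.copy())
    return np.array(path)

path = simulate_clo(x0=[1.0, -0.5], v0=[0.0, 0.0],
                    freq=np.array([1.0, 1.5]),
                    coupling=np.array([0.3, 0.3]))
```

In a generative model of this kind, a trajectory like `path` would be mapped through learned coefficients to the multivariate observations, with noise entering at the observation (or, as proposed here, the parameter) level.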
To test whether well-being factors, such as body weight, can help to predict emotion-interaction patterns, we construct functions that determine the prior distributions of the CLO parameters of individual couples based on existing emotion theories. In addition, we allow a single latent state to generate multivariate observations and learn the group-shared coefficients that specify the relationship between the latent states and the multivariate observations. Furthermore, we model the nonlinearity of the emotional interaction by allowing smooth changes (drift) in the model parameters. By restricting the stochasticity to the parameter level, the proposed approach models the dynamics of longer periods of social interaction, assuming that the interaction dynamics vary slowly and smoothly over time. The approach achieves this by applying Gaussian Process (GP) priors with smooth covariance functions to the CLO parameters. We also propose to model emotion regulation patterns as clusters of the dynamical parameters. To infer the parameters of the proposed Bayesian generative model from noisy experimental data, we develop a Gibbs sampler that learns the parameters of the patterns from a set of training couples. To evaluate the fitted model, we develop a multi-level cross-validation procedure that learns the group-shared parameters and distributions from training data and tests the learned models on held-out testing data. During testing, we use the learned shared model parameters to fit the individual CLO parameters to the first 80% of the time points of the testing data by Monte Carlo sampling, and then predict the states of the last 20% of the time points. Evaluating models with cross-validation makes it possible to estimate whether complex models are overfitted to noisy observations and fail to generalize to unseen data.
We test the approach on both synthetic data generated by the generative model and real data collected in multiple social psychology experiments. The approach has the potential to model other complex behavior, since the generative model is not restricted to the forms of the underlying dynamics.
|
257 |
Spatial Growth Regressions: Model Specification, Estimation and Interpretation. LeSage, James P., Fischer, Manfred M. 04 1900 (links) (PDF)
This paper uses Bayesian model comparison methods to simultaneously specify both the spatial weight structure and explanatory variables for a spatial growth regression involving 255 NUTS 2 regions across 25 European countries. In addition, a correct interpretation of the spatial regression parameter estimates that takes into account the simultaneous feedback nature of the spatial autoregressive model is provided. Our findings indicate that incorporating model uncertainty in conjunction with appropriate parameter interpretation decreased the importance of explanatory variables traditionally thought to exert an important influence on regional income growth rates. (authors' abstract)
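The "simultaneous feedback" interpretation issue can be made concrete: in the spatial autoregressive model y = rho*W*y + X*beta + eps, the partial derivatives of y with respect to a regressor form the matrix S = (I - rho*W)^(-1) * beta, and LeSage-style summary measures average its diagonal (direct effect) and row sums (total effect). A numpy sketch, with an illustrative 4-region weight matrix rather than the paper's 255 NUTS 2 regions:

```python
import numpy as np

def sar_impacts(W, rho, beta):
    """Average direct and total impacts for a spatial autoregressive model.

    S = (I - rho*W)^(-1) * beta is the matrix of partial derivatives of y
    with respect to one regressor; mean of diag(S) is the direct effect,
    mean row sum of S is the total effect.
    """
    n = W.shape[0]
    S = np.linalg.inv(np.eye(n) - rho * W) * beta
    direct = np.trace(S) / n
    total = S.sum() / n
    return direct, total

# Illustrative 4-region row-standardised contiguity matrix (made up, not data).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)
direct, total = sar_impacts(W, rho=0.4, beta=2.0)
```

For a row-standardised W the total effect reduces to beta/(1 - rho), and the direct effect exceeds beta because of the feedback through (I - rho*W)^(-1); reading beta itself as the marginal effect understates both.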
|
258 |
Probabilistic Models for Species Tree Inference and Orthology Analysis. Ullah, Ikram January 2015 (has links)
A phylogenetic tree is used to model gene evolution and species evolution from molecular sequence data. For artifactual and biological reasons, a gene tree may differ from a species tree, a phenomenon known as gene tree-species tree incongruence. Assuming the presence of one or more evolutionary events, e.g., gene duplication, gene loss, and lateral gene transfer (LGT), the incongruence may be explained using a reconciliation of a gene tree inside a species tree. Such information has biological utility, e.g., for inferring orthologous relationships between genes. In this thesis, we present probabilistic models and methods for orthology analysis and species tree inference, while accounting for evolutionary factors such as gene duplication, gene loss, and sequence evolution. Furthermore, we use a probabilistic LGT-aware model for inferring gene trees that carry temporal information about duplication and LGT events. In the first project, we present a Bayesian method, called DLRSOrthology, for estimating orthology probabilities using the DLRS model: a probabilistic model integrating gene evolution, a relaxed molecular clock for substitution rates, and sequence evolution. We devise a dynamic programming algorithm for efficiently summing orthology probabilities over all reconciliations of a gene tree inside a species tree. Furthermore, we present heuristics based on receiver operating characteristic (ROC) curves to estimate suitable thresholds for deciding orthology events. Our method, as demonstrated by synthetic and biological results, outperforms existing probabilistic approaches in accuracy and is robust to incomplete taxon sampling artifacts. In the second project, we present a probabilistic method, based on a mixture model, for species tree inference. The method employs a two-phase approach: in the first phase, a structural expectation maximization algorithm, based on a mixture model, is used to reconstruct a maximum likelihood set of candidate species trees.
In the second phase, in order to select the best species tree, each of the candidate species trees is evaluated using PrIME-DLRS: a method based on the DLRS model. The method is accurate, efficient, and scalable when compared to a recent probabilistic species tree inference method called PHYLDOG. We observe that, in most cases, the analysis constituted by the first phase alone may also be used for selecting the target species tree, yielding a fast and accurate method for larger datasets. Finally, we devise a probabilistic method based on the DLTRS model: an extension of the DLRS model that includes LGT events, for sampling reconciliations of a gene tree inside a species tree. The method enables us to estimate gene trees with temporal information for duplication and LGT events. To the best of our knowledge, this is the first probabilistic method that takes gene sequence data directly into account when sampling reconciliations that contain information about LGT events. Based on the synthetic data analysis, we believe that the method has the potential to identify LGT highways. / QC 20150529
|
259 |
Essays on Bayesian Inference for Social Networks. Koskinen, Johan January 2004 (has links)
This thesis presents Bayesian solutions to inference problems for three types of social network data structures: a single observation of a social network, repeated observations on the same social network, and repeated observations on a social network developing through time.

A social network is conceived as a structure consisting of actors and their social interactions with each other. A common conceptualisation of social networks is to let the actors be represented by nodes in a graph, with edges between pairs of nodes that are relationally tied to each other according to some definition. Statistical analysis of social networks is to a large extent concerned with modelling these relational ties, which lends itself to empirical evaluation.

The first paper deals with a family of statistical models for social networks called exponential random graphs, which take various structural features of the network into account. In general, the likelihood functions of exponential random graphs are only known up to a constant of proportionality. A procedure for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods is presented. The algorithm consists of two basic steps: one in which an ordinary Metropolis-Hastings updating step is used, and another in which an importance sampling scheme is used to calculate the acceptance probability of the Metropolis-Hastings step.

In the second paper, a method for modelling reports given by actors (or other informants) on their social interactions with others is investigated in a Bayesian framework. The model contains two basic ingredients: the unknown network structure, and functions that link this unknown network structure to the reports given by the actors. These functions take the form of probit link functions. An intrinsic problem is that the model is not identified, meaning that there are combinations of values of the unknown structure and the parameters in the probit link functions that are observationally equivalent. Instead of using restrictions to achieve identification, it is proposed that the different observationally equivalent combinations of parameters and unknown structure be investigated a posteriori. Estimation of parameters is carried out using Gibbs sampling, with a switching device that enables transitions between posterior modal regions. The main goal of the procedures is to provide tools for comparisons of different model specifications.

Papers 3 and 4 propose Bayesian methods for longitudinal social networks. The premise of the models investigated is that overall change in social networks occurs as a consequence of sequences of incremental changes. Models for the evolution of social networks using continuous-time Markov chains are meant to capture these dynamics. Paper 3 presents an MCMC algorithm for exploring the posteriors of parameters for such Markov chains. More specifically, the unobserved evolution of the network in between observations is explicitly modelled, thereby avoiding the need to deal with explicit formulas for the transition probabilities. This enables likelihood based parameter inference in a wider class of network evolution models than has been available before. Paper 4 builds on the proposed inference procedure of Paper 3 and demonstrates how to perform model selection for a class of network evolution models.
|
260 |
Bayesian stochastic differential equation modelling with application to finance. Al-Saadony, Muhannad January 2013 (links)
In this thesis, we consider some popular stochastic differential equation models used in finance, such as the Vasicek Interest Rate model, the Heston model and a new fractional Heston model. We discuss how to perform inference about unknown quantities associated with these models in the Bayesian framework. We describe sequential importance sampling, the particle filter and the auxiliary particle filter. We apply these inference methods to the Vasicek Interest Rate model and the standard stochastic volatility model, both to sample from the posterior distribution of the underlying processes and to update the posterior distribution of the parameters sequentially, as data arrive over time. We discuss the sensitivity of our results to prior assumptions. We then consider the use of Markov chain Monte Carlo (MCMC) methodology to sample from the posterior distribution of the underlying volatility process and of the unknown model parameters in the Heston model. The particle filter and the auxiliary particle filter are also employed to perform sequential inference. Next we extend the Heston model to the fractional Heston model, by replacing the Brownian motions that drive the underlying stochastic differential equations by fractional Brownian motions, allowing a richer dependence structure across time. Again, we use a variety of methods to perform inference. We apply our methodology to simulated and real financial data with success. We then discuss how to make forecasts using both the Heston and the fractional Heston model. We make comparisons between the models and show that using our new fractional Heston model can lead to improved forecasts for real financial data.
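The sequential inference described above can be illustrated with a textbook bootstrap particle filter applied to a Euler-discretised Vasicek short-rate model; the parameter values and observation model below are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np

def bootstrap_pf(obs, kappa, theta, sigma, obs_sd,
                 dt=1.0 / 252, n_part=500, seed=0):
    """Bootstrap particle filter for a Euler-discretised Vasicek model.

    State:       r[t+1] = r[t] + kappa*(theta - r[t])*dt + sigma*sqrt(dt)*eps
    Observation: y[t]   = r[t] + N(0, obs_sd^2) noise

    A sketch only; the thesis also uses the auxiliary particle filter
    and MCMC, which are not shown here.
    """
    rng = np.random.default_rng(seed)
    parts = np.full(n_part, obs[0])
    means = []
    for y in obs:
        # propagate each particle through the transition density
        parts = (parts + kappa * (theta - parts) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal(n_part))
        # weight by the observation likelihood, then resample
        w = np.exp(-0.5 * ((y - parts) / obs_sd) ** 2)
        w /= w.sum()
        parts = rng.choice(parts, size=n_part, p=w)
        means.append(parts.mean())
    return np.array(means)

# Simulate a short-rate path and filter it (all parameter values illustrative).
rng = np.random.default_rng(1)
true_r = [0.05]
for _ in range(99):
    r = true_r[-1]
    true_r.append(r + 2.0 * (0.04 - r) / 252
                  + 0.02 * np.sqrt(1.0 / 252) * rng.standard_normal())
obs = np.array(true_r) + 0.002 * rng.standard_normal(100)
est = bootstrap_pf(obs, kappa=2.0, theta=0.04, sigma=0.02, obs_sd=0.002)
```

The filtered means track the latent rate as each observation arrives, which is the essence of updating the posterior sequentially over time; fractional Brownian driving noise would break the Markov transition used here and require a different proposal.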
|