71 |
Dimension reduction of streaming data via random projections. Cosma, Ioana Ada. January 2009 (has links)
A data stream is a transiently observed sequence of data elements that arrive unordered, with repetitions, and at a very high rate of transmission. Examples include Internet traffic data, networks of banking and credit transactions, and radar-derived meteorological data. Computer science and engineering communities have developed randomised, probabilistic algorithms to estimate statistics of interest over streaming data on the fly, with small computational complexity and storage requirements, by constructing low-dimensional representations of the stream known as data sketches. This thesis combines techniques of statistical inference with algorithmic approaches, such as hashing and random projections, to derive efficient estimators for cardinality, l_{alpha} distance and quasi-distance, and entropy over streaming data. I demonstrate an unexpected connection between two approaches to cardinality estimation that involve indirect record keeping: the first using pseudo-random variates and storing selected order statistics, and the second using random projections. I show that l_{alpha} distances and quasi-distances between data streams, and entropy, can be recovered with full statistical efficiency from random projections that exploit properties of alpha-stable distributions. This is achieved by the method of L-estimation in a single-pass algorithm with modest computational requirements. The proposed estimators have good small-sample performance, improved by the methods of trimming and winsorising; in other words, the value of these summary statistics can be approximated with high accuracy from data sketches of low dimension. Finally, I consider the problem of convergence assessment of Markov chain Monte Carlo methods for simulating from complex, high-dimensional, discrete distributions. I argue that online, fast, and efficient computation of summary statistics such as cardinality, entropy, and l_{alpha} distances may be a useful qualitative tool for detecting lack of convergence, and illustrate this with simulations of the posterior distribution of a decomposable Gaussian graphical model via the Metropolis-Hastings algorithm.
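As a concrete illustration of the stable-projection idea described above, the following minimal Python sketch handles the special case alpha = 1: two vectors are projected with a shared matrix of i.i.d. Cauchy entries, and their l_1 distance is recovered from the median of the absolute sketch differences. This is the simple median estimator rather than the trimmed/winsorised L-estimators developed in the thesis, and all dimensions, seeds, and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10_000, 500                    # data dimension and (much smaller) sketch dimension
R = rng.standard_cauchy(size=(k, d))  # shared i.i.d. Cauchy (alpha = 1 stable) projections

def sketch(x):
    """Low-dimensional data sketch of x; in a stream it is updated per arriving item."""
    return R @ x

def l1_estimate(sx, sy):
    """|standard Cauchy| has median 1, so the median absolute difference of two
    sketches estimates the l_1 distance between the underlying vectors."""
    return np.median(np.abs(sx - sy))

x = rng.poisson(5.0, size=d).astype(float)
y = x + rng.normal(0.0, 1.0, size=d)

print("true l_1 distance:", round(float(np.abs(x - y).sum()), 1))
print("sketched estimate:", round(float(l1_estimate(sketch(x), sketch(y))), 1))
```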
|
72 |
Accommodating flexible spatial and social dependency structures in discrete choice models of activity-based travel demand modeling. Sener, Ipek N. 09 November 2010 (has links)
Spatial and social dependence shape human activity-travel pattern decisions and their antecedent choices. Although the transportation literature has long recognized the importance of considering spatial and social dependencies in modeling individuals’ choice behavior, there has been less research on techniques to accommodate these dependencies in discrete choice models, mainly because of the modeling complexities introduced by such interdependencies. The main goal of this dissertation, therefore, is to propose new modeling approaches for accommodating flexible spatial and social dependency structures in discrete choice models within the broader context of activity-based travel demand modeling. The primary objectives of this dissertation research are three-fold. The first objective is to develop a discrete choice modeling methodology that explicitly incorporates spatial dependency (or correlation) across location choice alternatives (whether the choice alternatives are contiguous or non-contiguous). This is achieved by incorporating flexible spatial correlations and patterns using a closed-form Generalized Extreme Value (GEV) structure. The second objective is to propose new approaches to accommodate spatial dependency (or correlation) across observational units for different aspatial discrete choice models, including binary choice and ordered-response choice models. This is achieved by adopting different copula-based methodologies, which offer flexible dependency structures to test for different forms of dependencies. Further, simple and practical approaches are proposed, obviating the need for simulation machinery and simulation-based estimation methods. Finally, the third objective is to formulate an enhanced methodology to capture the social dependency (or correlation) across observational units. In particular, a clustered copula-based approach is formulated to recognize the potential dependence due to cluster effects (such as family-related effects) in an ordered-response context. The proposed approaches are empirically applied in the context of both spatial and aspatial choice situations, including residential location and activity participation choices. In particular, the results show that ignoring spatial and social dependencies, when present, can lead to inconsistent and inefficient parameter estimates that, in turn, can result in misinformed policy actions and recommendations. The approaches proposed in this research are simple, flexible, and easy to implement, applicable to data sets of any size, do not require any simulation machinery, and do not impose any restrictive assumptions on the dependency structure.
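To give a sense of how a copula can couple discrete responses across observational units, here is a toy Python sketch (not the dissertation's GEV or clustered-copula formulations): a Gaussian copula with a distance-based correlation kernel induces spatial dependence among binary choices while leaving each unit's marginal choice probability unchanged. The kernel, sample size, and marginal model are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n = 200
coords = rng.uniform(0, 10, size=(n, 2))                          # unit locations (invented)
p_marg = 1.0 / (1.0 + np.exp(-(0.5 + 0.3 * rng.normal(size=n))))  # marginal P(choice = 1)

# Spatial dependence through an exponential correlation kernel on inter-unit distances.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-dist / 2.0)

# Gaussian copula draw: correlated normals -> uniform marginals -> Bernoulli margins.
z = rng.multivariate_normal(np.zeros(n), Sigma)
u = norm.cdf(z)
y = (u < p_marg).astype(int)   # each y_i keeps marginal probability p_marg[i], but the y's are dependent

print("average choice probability:", round(float(p_marg.mean()), 3))
print("empirical choice share    :", round(float(y.mean()), 3))
```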
|
73 |
Parameter Estimation for Nonlinear State Space Models. Wong, Jessica. 23 April 2012 (has links)
This thesis explores the methodology of state and, in particular, parameter estimation for time
series datasets. Various approaches are investigated that are suitable for nonlinear models
and non-Gaussian observations using state space models. The methodologies are applied to a
dataset consisting of the historical lynx and hare populations, typically modeled by the Lotka-
Volterra equations. With this model and the observed dataset, particle filtering and parameter
estimation methods are implemented as a way to better predict the state of the system.
Methods for parameter estimation considered include: maximum likelihood estimation, state
augmented particle filtering, multiple iterative filtering and particle Markov chain Monte
Carlo (PMCMC) methods. The specific advantages and disadvantages for each technique
are discussed. However, in most cases, PMCMC is the preferred parameter estimation
solution. It has the advantage over other approaches in that it can well approximate any
posterior distribution from which inference can be made. / Master's thesis
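As a rough illustration of the particle-filtering machinery discussed above, the sketch below runs a bootstrap particle filter on a simplified stochastic predator-prey model observed through Poisson counts. The discretisation, parameter values, noise scales, and initial distributions are invented for illustration and are not the thesis's model or estimates.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Illustrative parameters for log-population predator-prey dynamics (not fitted values).
alpha, beta, gamma, delta = 0.5, 0.02, 0.5, 0.01
dt, sig, T = 0.2, 0.05, 60

def step(h, l):
    """One noisy Euler step for the log populations (h = log hare, l = log lynx)."""
    h_new = h + dt * (alpha - beta * np.exp(l)) + sig * rng.normal(size=h.shape)
    l_new = l + dt * (delta * np.exp(h) - gamma) + sig * rng.normal(size=l.shape)
    return h_new, l_new

# Simulate "true" states and Poisson-observed counts (hare, lynx).
h, l = np.log(50.0) * np.ones(1), np.log(25.0) * np.ones(1)
y = np.empty((T, 2), dtype=int)
for t in range(T):
    h, l = step(h, l)
    y[t] = rng.poisson([np.exp(h[0]), np.exp(l[0])])

# Bootstrap particle filter: propagate, weight by the Poisson likelihood, resample.
N = 2000
ph = np.log(50.0) + 0.1 * rng.normal(size=N)
pl = np.log(25.0) + 0.1 * rng.normal(size=N)
loglik = 0.0
filt_mean = np.empty((T, 2))
for t in range(T):
    ph, pl = step(ph, pl)
    logw = poisson.logpmf(y[t, 0], np.exp(ph)) + poisson.logpmf(y[t, 1], np.exp(pl))
    m = logw.max()
    w = np.exp(logw - m)
    loglik += m + np.log(w.mean())        # running estimate of the log-likelihood
    w /= w.sum()
    filt_mean[t] = [np.exp(ph) @ w, np.exp(pl) @ w]
    idx = rng.choice(N, size=N, p=w)      # multinomial resampling
    ph, pl = ph[idx], pl[idx]

print("estimated log-likelihood:", round(float(loglik), 2))
print("last filtered means (hare, lynx):", filt_mean[-1].round(1))
```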
|
74 |
Nonparametric estimation of the mixing distribution in mixed models with random intercepts and slopes. Saab, Rabih. 24 April 2013 (has links)
Generalized linear mixed models (GLMMs) are widely used in statistical applications to model count and binary data. We consider the problem of nonparametric likelihood estimation of mixing distributions in GLMMs with multiple random effects. The log-likelihood to be maximized has the general form
l(G) = Σ_i log ∫ f(y_i, γ) dG(γ)
where f(·, γ) is a parametric family of component densities, y_i is the i-th observed response variable, and G is a mixing distribution function of the random-effects vector γ defined on Ω.
The literature presents many algorithms for maximum likelihood estimation (MLE) of G in the univariate random-effect case, such as the EM algorithm (Laird, 1978), the intra-simplex direction method, ISDM (Lesperance and Kalbfleisch, 1992), and the vertex exchange method, VEM (Böhning, 1985). In this dissertation, the constrained Newton method (CNM) in Wang (2007), which fits GLMMs with random intercepts only, is extended to fit clustered datasets with multiple random effects. Owing to the general equivalence theorem from the geometry of mixture likelihoods (see Lindsay, 1995), many NPMLE algorithms, including CNM and ISDM, maximize the directional derivative of the log-likelihood to add potential support points to the mixing distribution G. Our method, Direct Search Directional Derivative (DSDD), uses a directional search method to find local maxima of the multi-dimensional directional derivative function. The DSDD's performance is investigated in GLMMs where f is a Bernoulli or Poisson distribution function. The algorithm is also extended to cover GLMMs with zero-inflated data.
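The directional-derivative condition underlying CNM, ISDM, and DSDD can be illustrated in one dimension: for a discrete candidate G, the gradient function D_G(γ) = Σ_i f(y_i, γ)/f_G(y_i) − n must be ≤ 0 everywhere at the NPMLE (the general equivalence theorem), and points where it is positive are candidate new support points. The toy Python sketch below (a one-dimensional grid search, not the multi-dimensional DSDD algorithm) evaluates this function for a Poisson mixture; the data and grid are invented.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)

# Toy data: Poisson counts generated from a two-point mixing distribution.
true_means, true_probs = np.array([2.0, 9.0]), np.array([0.6, 0.4])
y = rng.poisson(rng.choice(true_means, size=300, p=true_probs))

def mixture_density(y, support, weights):
    """f_G(y_i) = sum_j w_j f(y_i, gamma_j) for a discrete mixing distribution G."""
    return poisson.pmf(y[:, None], support[None, :]) @ weights

def directional_derivative(y, support, weights, grid):
    """D_G(gamma) = sum_i f(y_i, gamma) / f_G(y_i) - n, evaluated on a grid;
    positive maxima flag candidate new support points for G."""
    fG = mixture_density(y, support, weights)
    ratios = poisson.pmf(y[:, None], grid[None, :]) / fG[:, None]
    return ratios.sum(axis=0) - y.size

# Current (deliberately poor) one-point estimate of G, and a grid of candidates.
support, weights = np.array([y.mean()]), np.array([1.0])
grid = np.linspace(0.1, 20.0, 400)
D = directional_derivative(y, support, weights, grid)

print("max directional derivative:", round(float(D.max()), 1))   # > 0: not yet the NPMLE
print("candidate support point   :", round(float(grid[D.argmax()]), 2))
```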
Goodness-of-fit (GOF) and selection methods for mixed models have been developed in the literature; however, their application in models with nonparametric random-effects distributions is vague and ad hoc. Some popular measures such as the Deviance Information Criterion (DIC), the conditional Akaike Information Criterion (cAIC), and R2 statistics are potentially useful in this context. Additionally, some cross-validation goodness-of-fit methods popular in Bayesian applications, such as the conditional predictive ordinate (CPO) and numerical posterior predictive checks, can be applied with some minor modifications to suit the non-Bayesian approach. / Graduate / 0463 / rabihsaab@gmail.com
|
75 |
On statistical approaches to climate change analysis. Lee, Terry Chun Kit. 21 April 2008 (has links)
Evidence for a human contribution to climatic changes during the past
century is accumulating rapidly. Given the strength of the evidence, it seems natural to ask
whether forcing projections can be used to forecast climate change. A Bayesian method for
post-processing forced climate model simulations that produces probabilistic hindcasts of
inter-decadal temperature changes on large spatial scales is proposed. Hindcasts produced for the
last two decades of the 20th century are shown to be skillful. The suggestion that
skillful decadal forecasts can be produced on large regional scales by exploiting the response to
anthropogenic forcing provides additional evidence that anthropogenic change in the composition of
the atmosphere has influenced our climate. In the absence of large negative volcanic forcing on the
climate system (which cannot presently be forecast), the global mean temperature for the decade
2000-2009 is predicted to lie above the 1970-1999 normal with probability 0.94. The global mean
temperature anomaly for this decade relative to 1970-1999 is predicted to be 0.35°C (5-95%
confidence range: 0.21°C-0.48°C).
Reconstruction of temperature variability of the past centuries using climate proxy data can also
provide important information on the role of anthropogenic forcing in the observed 20th
century warming. A state-space model approach that allows incorporation of additional
non-temperature information, such as the estimated response to external forcing, to reconstruct
historical temperature is proposed. An advantage of this approach is that it permits simultaneous
reconstruction and detection analysis as well as future projection. A difficulty in using this
approach is that estimation of several unknown state-space model parameters is required. To take
advantage of the data structure in the reconstruction problem, the existing parameter estimation
approach is modified, resulting in two new estimation approaches. The competing estimation
approaches are compared based on theoretical grounds and through simulation studies. The two new
estimation approaches generally perform better than the existing approach.
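In the spirit of the state-space approach described above, here is a toy Python sketch of a scalar Kalman filter that reconstructs a latent temperature series from two noisy proxies while incorporating an externally estimated forced response in the state equation. The AR coefficient, forcing series, proxy loadings, and noise variances are all invented, and the parameters are treated as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 300
x = np.linspace(0.0, 1.0, n) ** 2     # stand-in for an externally estimated forced response
phi, beta, q = 0.6, 0.8, 0.05         # AR coefficient, forcing scaling, state noise variance
H = np.array([1.0, 0.7])              # proxy loadings on the latent temperature
r = np.array([0.3, 0.5])              # proxy noise variances

# Simulate the latent temperature anomaly and two noisy proxy records.
T_true = np.zeros(n)
for t in range(1, n):
    T_true[t] = phi * T_true[t - 1] + beta * x[t] + np.sqrt(q) * rng.normal()
y = H[None, :] * T_true[:, None] + rng.normal(scale=np.sqrt(r), size=(n, 2))

# Scalar-state Kalman filter: predict with the forcing term, then update
# sequentially with each proxy (valid because the proxy noises are independent).
m, P = 0.0, 1.0
recon = np.empty(n)
for t in range(n):
    m, P = phi * m + beta * x[t], phi**2 * P + q
    for j in range(2):
        S = H[j] * P * H[j] + r[j]
        K = P * H[j] / S
        m = m + K * (y[t, j] - H[j] * m)
        P = (1.0 - K * H[j]) * P
    recon[t] = m

print("reconstruction RMSE:", round(float(np.sqrt(np.mean((recon - T_true) ** 2))), 3))
```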
A number of studies have attempted to reconstruct hemispheric mean temperature for the past
millennium from proxy climate indicators. Different statistical methods are used in these studies
and it therefore seems natural to ask which method is more reliable. An empirical comparison
between the different reconstruction methods is considered using both climate model data and
real-world paleoclimate proxy data. The proposed state-space model approach and the RegEM method
generally perform better than their competitors when reconstructing interannual variations in
Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen
to perform well when reconstructing decadal temperature variability. The similarity in performance
provides evidence that the difference between many real-world reconstructions is more likely to be
due to the choice of the proxy series, or the use of different target seasons or latitudes, than
to the choice of statistical method.
|
77 |
Optimal Control and Estimation of Stochastic Systems with Costly Partial Information. Kim, Michael J. 31 August 2012 (has links)
Stochastic control problems that arise in sequential decision-making applications typically assume that information used for decision-making is obtained according to a predetermined sampling schedule. In many real applications, however, there is a high sampling cost associated with collecting such data. It is therefore of equal importance to determine when information should be collected as it is to decide how this information should be utilized for optimal decision-making. This type of joint optimization has been a long-standing problem in the operations research literature, and very few results regarding the structure of the optimal sampling and control policy have been published. In this thesis, the joint optimization of sampling and control is studied in the context of maintenance optimization. New theoretical results characterizing the structure of the optimal policy are established, which have practical interpretation and give new insight into the value of condition-based maintenance programs in life-cycle asset management. Applications in other areas such as healthcare decision-making and statistical process control are discussed. Statistical parameter estimation results are also developed with illustrative real-world numerical examples.
|
79 |
Stochastic Volatility Models and Simulated Maximum Likelihood Estimation. Choi, Ji Eun. 08 July 2011 (has links)
Financial time series studies indicate that the lognormal assumption for the return of an underlying security is often violated in practice. This is due to the presence of time-varying volatility in the return series. The most common departures are due to a fat left-tail of the return distribution, volatility clustering or persistence, and asymmetry of the volatility. To account for these characteristics of time-varying volatility, many volatility models have been proposed and studied in the financial time series literature. Two main conditional-variance model specifications are the autoregressive conditional heteroscedasticity (ARCH) and the stochastic volatility (SV) models.
The SV model, proposed by Taylor (1986), is a useful alternative to the ARCH family (Engle, 1982). It incorporates time-dependency of the volatility through a latent process, which is an autoregressive model of order 1 (AR(1)), and successfully accounts for the stylized facts of the return series implied by the characteristics of time-varying volatility. In this thesis, we review both ARCH and SV models but focus on the SV model and its variations. We consider two modified SV models. One is an autoregressive process with stochastic volatility errors (AR-SV) and the other is the Markov regime-switching stochastic volatility (MSSV) model. The AR-SV model consists of two AR processes. The conditional mean process is an AR(p) model, and the conditional variance process is an AR(1) model. One notable advantage of the AR-SV model is that it better captures volatility persistence by considering the AR structure in the conditional mean process. The MSSV model consists of the SV model and a discrete Markov process. In this model, the volatility can switch from a low level to a high level at random points in time, and this feature better captures the volatility movement. We study the moment properties and the likelihood functions associated with these models.
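A minimal Python sketch of the two model structures just described (illustrative parameter values only, and no estimation): the basic SV model with an AR(1) latent log-variance, and an MSSV-style variant in which the log-variance level switches between a low and a high regime according to a two-state Markov chain.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1000

def simulate_sv(mu=-1.0, phi=0.95, sigma_eta=0.2):
    """Basic SV model: latent log-variance h_t is AR(1); returns are r_t = exp(h_t/2) * eps_t."""
    h = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1.0 - phi**2) * rng.normal()   # stationary start
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    return np.exp(h / 2.0) * rng.normal(size=T), h

def simulate_mssv(mu=(-2.0, 0.0), phi=0.95, sigma_eta=0.2, p_stay=0.98):
    """MSSV-style variant: the log-variance level follows a two-state Markov chain."""
    s = np.empty(T, dtype=int)
    h = np.empty(T)
    s[0], h[0] = 0, mu[0]
    for t in range(1, T):
        s[t] = s[t - 1] if rng.random() < p_stay else 1 - s[t - 1]
        h[t] = mu[s[t]] + phi * (h[t - 1] - mu[s[t]]) + sigma_eta * rng.normal()
    return np.exp(h / 2.0) * rng.normal(size=T), s

r_sv, _ = simulate_sv()
r_ms, regimes = simulate_mssv()
kurt = np.mean(r_sv**4) / np.mean(r_sv**2) ** 2
print("SV return kurtosis (> 3 indicates fat tails):", round(float(kurt), 2))
print("fraction of time in the high-volatility regime:", round(float(regimes.mean()), 2))
```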
In spite of the simple structure of the SV models, it is not easy to estimate parameters by conventional estimation methods such as maximum likelihood estimation (MLE) or the Bayesian method because of the presence of the latent log-variance process. Of the various estimation methods proposed in the SV model literature, we consider the simulated maximum likelihood (SML) method with the efficient importance sampling (EIS) technique, one of the most efficient estimation methods for SV models. In particular, the EIS technique is applied in the SML to reduce the MC sampling error. It increases the accuracy of the estimates by determining an importance function with a conditional density function of the latent log variance at time t given the latent log variance and the return at time t-1.
Initially, we perform an empirical study to compare the estimation of the SV model using the SML method with EIS and the Markov chain Monte Carlo (MCMC) method with Gibbs sampling. We conclude that SML has a slight edge over MCMC. We then introduce the SML approach in the AR-SV models and study the performance of the estimation method through simulation studies and real-data analysis. In the analysis, we use the AIC and BIC criteria to determine the order of the AR process and perform model diagnostics for the goodness of fit. In addition, we introduce the MSSV models and extend the SML approach with EIS to estimate this new model. Simulation studies and empirical studies with several return series indicate that this model is reasonable when there is a possibility of volatility switching at random time points. Based on our analysis, the modified SV, AR-SV, and MSSV models capture the stylized facts of financial return series reasonably well, and the SML estimation method with the EIS technique works very well in the models and the cases considered.
|
80 |
Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model. Oberhofer, Harald; Pfaffermayr, Michael. 01 1900 (has links) (PDF)
This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and makes it possible to obtain estimates and confidence intervals that are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's (EU's) exports of goods to the EU (UK) are likely to decline within a range between 7.2% and 45.7% (5.9% and 38.2%) six years after Brexit has taken place. For the UK, the negative trade effects are only partially offset by an increase in domestic goods trade and trade with third countries, inducing a decline in the UK's real income between 1.4% and 5.7% under the hard Brexit scenario. The estimated welfare effects for the EU are negligible in magnitude and statistically not different from zero. / Series: Department of Economics Working Paper Series
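For readers unfamiliar with the estimator family, here is a hedged Python sketch of a plain Poisson pseudo maximum likelihood (PPML) gravity regression on synthetic bilateral trade data with exporter and importer fixed effects and a single trade-cost dummy. It is not the paper's constrained estimator or its Brexit counterfactuals; the data-generating values, country set, and the "border" coefficient are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Synthetic bilateral trade flows with exporter/importer effects and a
# single trade-cost dummy standing in for a post-Brexit border barrier.
countries = [f"c{i}" for i in range(15)]
exp_fe = {c: rng.normal() for c in countries}
imp_fe = {c: rng.normal() for c in countries}
rows = []
for o in countries:
    for d in countries:
        if o == d:
            continue
        border = int(rng.random() < 0.5)                 # 1 = pair faces the barrier
        mu = np.exp(5.0 + exp_fe[o] + imp_fe[d] - 0.6 * border)
        rows.append({"exporter": o, "importer": d, "border": border,
                     "trade": rng.poisson(mu)})
df = pd.DataFrame(rows)

# PPML: a Poisson GLM with exporter and importer fixed effects; it tolerates
# heteroskedasticity and zero trade flows, hence its popularity in gravity work.
fit = smf.glm("trade ~ border + C(exporter) + C(importer)",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print("estimated border effect:", round(fit.params["border"], 3), "(true value used: -0.6)")
```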
|