81

Accommodating flexible spatial and social dependency structures in discrete choice models of activity-based travel demand modeling

Sener, Ipek N. 09 November 2010 (has links)
Spatial and social dependence shape human activity-travel pattern decisions and their antecedent choices. Although the transportation literature has long recognized the importance of considering spatial and social dependencies in modeling individuals' choice behavior, there has been less research on techniques to accommodate these dependencies in discrete choice models, mainly because of the modeling complexities introduced by such interdependencies. The main goal of this dissertation, therefore, is to propose new modeling approaches for accommodating flexible spatial and social dependency structures in discrete choice models within the broader context of activity-based travel demand modeling. The primary objectives of this dissertation research are threefold. The first objective is to develop a discrete choice modeling methodology that explicitly incorporates spatial dependency (or correlation) across location choice alternatives, whether the choice alternatives are contiguous or non-contiguous. This is achieved by incorporating flexible spatial correlations and patterns using a closed-form Generalized Extreme Value (GEV) structure. The second objective is to propose new approaches to accommodate spatial dependency (or correlation) across observational units for different aspatial discrete choice models, including binary choice and ordered-response choice models. This is achieved by adopting different copula-based methodologies, which offer flexible dependency structures to test for different forms of dependence. Further, simple and practical approaches are proposed that obviate the need for simulation-based estimation. Finally, the third objective is to formulate an enhanced methodology to capture the social dependency (or correlation) across observational units. In particular, a clustered copula-based approach is formulated to recognize the potential dependence due to cluster effects (such as family-related effects) in an ordered-response context. The proposed approaches are empirically applied in the context of both spatial and aspatial choice situations, including residential location and activity participation choices. The results show that ignoring spatial and social dependencies, when present, can lead to inconsistent and inefficient parameter estimates that, in turn, can result in misinformed policy actions and recommendations. The approaches proposed in this research are simple, flexible, and easy to implement; they are applicable to data sets of any size, do not require any simulation machinery, and do not impose restrictive assumptions on the dependency structure.
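The closed-form GEV structure mentioned in this abstract can be illustrated with the nested logit model, the simplest member of the GEV family. The sketch below groups location alternatives into hypothetical "spatial" nests so that alternatives within a nest share unobserved attributes; it is a generic illustration, not the dissertation's specific spatial correlation structure, and the utilities, nest groupings, and parameters are made up.

```python
# Illustrative sketch only: a two-level nested logit, one closed-form member of the
# GEV family, with alternatives grouped into hypothetical "spatial" nests. This is
# not the dissertation's specific spatial dependency structure.
import numpy as np

def nested_logit_probs(utilities, nests, mu):
    """Choice probabilities for a nested logit model.

    utilities : dict alt -> systematic utility V_j
    nests     : dict nest -> list of alternatives in that nest
    mu        : dict nest -> nesting (dissimilarity) parameter in (0, 1]
    """
    # Inclusive value (logsum) for each nest
    logsums = {n: np.log(sum(np.exp(utilities[j] / mu[n]) for j in alts))
               for n, alts in nests.items()}
    # Probability of choosing each nest
    denom = sum(np.exp(mu[n] * logsums[n]) for n in nests)
    p_nest = {n: np.exp(mu[n] * logsums[n]) / denom for n in nests}
    # Probability of each alternative = P(nest) * P(alt | nest)
    probs = {}
    for n, alts in nests.items():
        for j in alts:
            p_cond = np.exp(utilities[j] / mu[n]) / np.exp(logsums[n])
            probs[j] = p_nest[n] * p_cond
    return probs

# Four hypothetical residential zones; A and B are adjacent, as are C and D.
V = {"A": 1.0, "B": 0.8, "C": 0.2, "D": 0.0}
nests = {"north": ["A", "B"], "south": ["C", "D"]}
mu = {"north": 0.5, "south": 0.5}   # mu < 1 induces within-nest correlation
print(nested_logit_probs(V, nests, mu))
```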
82

Parameter Estimation for Nonlinear State Space Models

Wong, Jessica 23 April 2012 (has links)
This thesis explores the methodology of state and, in particular, parameter estimation for time series data. Various approaches suitable for nonlinear models and non-Gaussian observations within the state space framework are investigated. The methodologies are applied to a dataset consisting of the historical lynx and hare populations, typically modeled by the Lotka-Volterra equations. With this model and the observed dataset, particle filtering and parameter estimation methods are implemented as a way to better predict the state of the system. The parameter estimation methods considered include maximum likelihood estimation, state-augmented particle filtering, multiple iterative filtering, and particle Markov chain Monte Carlo (PMCMC). The advantages and disadvantages of each technique are discussed; in most cases, PMCMC is the preferred parameter estimation solution, as it can approximate well the posterior distribution from which inference is made. / Master's thesis
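The estimation methods named above all build on the particle filter, which approximates the likelihood of a nonlinear, non-Gaussian state space model by propagating and reweighting simulated state trajectories. The sketch below is a generic bootstrap particle filter for a crudely discretized stochastic Lotka-Volterra model with Poisson-distributed counts; the discretization, parameter values, and data are illustrative assumptions, not the thesis's implementation.

```python
# Rough sketch of a bootstrap particle filter for a discretized stochastic
# Lotka-Volterra (predator-prey) model with Poisson-distributed counts.
# Model form, parameters, and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, x0, theta, dt=0.1):
    """Euler-Maruyama discretization with multiplicative noise."""
    a, b, c, d, sigma = theta
    x = np.array(x0, dtype=float)
    obs = []
    for _ in range(T):
        prey, pred = x
        drift = np.array([a * prey - b * prey * pred,
                          c * prey * pred - d * pred])
        x = np.maximum(x + drift * dt + sigma * x * rng.normal(size=2) * np.sqrt(dt), 1e-3)
        obs.append(rng.poisson(x))          # observed counts
    return np.array(obs)

def bootstrap_pf(y, n_part, x0, theta, dt=0.1):
    """Particle-filter estimate of the log-likelihood and the filtered means."""
    a, b, c, d, sigma = theta
    x = np.tile(np.array(x0, float), (n_part, 1))
    loglik, means = 0.0, []
    for t in range(len(y)):
        prey, pred = x[:, 0], x[:, 1]
        drift = np.column_stack([a * prey - b * prey * pred,
                                 c * prey * pred - d * pred])
        x = np.maximum(x + drift * dt
                       + sigma * x * rng.normal(size=x.shape) * np.sqrt(dt), 1e-3)
        # Poisson log-weights (constant -log(y_t!) terms omitted; they cancel for
        # resampling and only shift the log-likelihood by a known constant)
        logw = np.sum(y[t] * np.log(x) - x, axis=1)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        w /= w.sum()
        means.append((w[:, None] * x).sum(axis=0))
        x = x[rng.choice(n_part, size=n_part, p=w)]   # multinomial resampling
    return loglik, np.array(means)

theta_true = (0.5, 0.02, 0.01, 0.4, 0.05)
y = simulate(100, (40.0, 9.0), theta_true)
ll, filt = bootstrap_pf(y, 500, (40.0, 9.0), theta_true)
print("PF log-likelihood estimate:", round(ll, 2))
```

PMCMC wraps such a filter inside a Metropolis-Hastings loop, using the noisy log-likelihood estimate in the acceptance ratio.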
83

Nonparametric estimation of the mixing distribution in mixed models with random intercepts and slopes

Saab, Rabih 24 April 2013 (has links)
Generalized linear mixed models (GLMMs) are widely used in statistical applications to model count and binary data. We consider the problem of nonparametric likelihood estimation of mixing distributions in GLMMs with multiple random effects. The log-likelihood to be maximized has the general form l(G) = Σ_i log ∫ f(y_i, γ) dG(γ), where f(., γ) is a parametric family of component densities, y_i is the ith observed response, and G is a mixing distribution function of the random effects vector γ defined on Ω. The literature presents many algorithms for maximum likelihood estimation (MLE) of G in the univariate random effect case, such as the EM algorithm (Laird, 1978), the intra-simplex direction method, ISDM (Lesperance and Kalbfleisch, 1992), and the vertex exchange method, VEM (Böhning, 1985). In this dissertation, the constrained Newton method (CNM) of Wang (2007), which fits GLMMs with random intercepts only, is extended to fit clustered datasets with multiple random effects. Owing to the general equivalence theorem from the geometry of mixture likelihoods (see Lindsay, 1995), many NPMLE algorithms, including CNM and ISDM, maximize the directional derivative of the log-likelihood to add potential support points to the mixing distribution G. Our method, Direct Search Directional Derivative (DSDD), uses a directional search method to find local maxima of the multi-dimensional directional derivative function. DSDD's performance is investigated in GLMMs where f is a Bernoulli or Poisson density. The algorithm is also extended to cover GLMMs with zero-inflated data. Goodness-of-fit (GOF) and selection methods for mixed models have been developed in the literature; however, their application in models with nonparametric random-effects distributions is vague and ad hoc. Some popular measures, such as the deviance information criterion (DIC), conditional Akaike information criterion (cAIC), and R² statistics, are potentially useful in this context. Additionally, some cross-validation goodness-of-fit methods popular in Bayesian applications, such as the conditional predictive ordinate (CPO) and numerical posterior predictive checks, can be applied with minor modifications to suit the non-Bayesian approach.
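The directional derivative referred to above is the standard "gradient function" of mixture NPMLE theory: at a candidate mixing distribution G it measures, for each parameter value θ, how much the log-likelihood would improve by moving mass toward θ. The sketch below evaluates it for a one-dimensional Poisson mixture with made-up data and a made-up candidate G; the thesis's DSDD method searches this surface over a multi-dimensional random-effects space, which is not reproduced here.

```python
# Illustrative sketch: the mixture-likelihood directional derivative ("gradient
# function") that NPMLE algorithms such as VEM, ISDM, and CNM use to decide where to
# add support points, shown for a one-dimensional Poisson mixture.
import numpy as np
from scipy.stats import poisson

def gradient_function(theta, y, support, weights):
    """D(theta; G) = sum_i f(y_i | theta) / f(y_i | G) - n.

    At the NPMLE, D(theta; G) <= 0 for all theta (general equivalence theorem);
    a point with D > 0 is a candidate new support point for G.
    """
    f_theta = poisson.pmf(y[:, None], theta)              # n x len(theta)
    f_G = poisson.pmf(y[:, None], support) @ weights      # mixture density at each y_i
    return (f_theta / f_G[:, None]).sum(axis=0) - len(y)

# Hypothetical count data and a crude two-point candidate mixing distribution G.
rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(2.0, 150), rng.poisson(8.0, 50)])
support = np.array([2.5, 7.0])
weights = np.array([0.7, 0.3])

grid = np.linspace(0.5, 15.0, 60)
D = gradient_function(grid, y, support, weights)
print("max directional derivative:", round(D.max(), 2),
      "attained at theta =", round(grid[D.argmax()], 2))
```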
84

On statistical approaches to climate change analysis

Lee, Terry Chun Kit 21 April 2008 (has links)
Evidence for a human contribution to climatic changes during the past century is accumulating rapidly. Given the strength of the evidence, it seems natural to ask whether forcing projections can be used to forecast climate change. A Bayesian method for post-processing forced climate model simulations that produces probabilistic hindcasts of inter-decadal temperature changes on large spatial scales is proposed. Hindcasts produced for the last two decades of the 20th century are shown to be skillful. The suggestion that skillful decadal forecasts can be produced on large regional scales by exploiting the response to anthropogenic forcing provides additional evidence that anthropogenic change in the composition of the atmosphere has influenced our climate. In the absence of large negative volcanic forcing on the climate system (which cannot presently be forecast), the global mean temperature for the decade 2000-2009 is predicted to lie above the 1970-1999 normal with probability 0.94. The global mean temperature anomaly for this decade relative to 1970-1999 is predicted to be 0.35 °C (5-95% confidence range: 0.21-0.48 °C). Reconstruction of temperature variability of past centuries using climate proxy data can also provide important information on the role of anthropogenic forcing in the observed 20th century warming. A state-space model approach is proposed that allows additional non-temperature information, such as the estimated response to external forcing, to be incorporated into the reconstruction of historical temperature. An advantage of this approach is that it permits simultaneous reconstruction and detection analysis as well as future projection. A difficulty in using this approach is that estimation of several unknown state-space model parameters is required. To take advantage of the data structure in the reconstruction problem, the existing parameter estimation approach is modified, resulting in two new estimation approaches. The competing estimation approaches are compared on theoretical grounds and through simulation studies. The two new estimation approaches generally perform better than the existing approach. A number of studies have attempted to reconstruct hemispheric mean temperature for the past millennium from proxy climate indicators. Different statistical methods are used in these studies and it therefore seems natural to ask which method is more reliable. An empirical comparison between the different reconstruction methods is carried out using both climate model data and real-world paleoclimate proxy data. The proposed state-space model approach and the RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemisphere mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing decadal temperature variability. The similarity in performance provides evidence that the difference between many real-world reconstructions is more likely to be due to the choice of the proxy series, or the use of different target seasons or latitudes, than to the choice of statistical method.
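The state-space approach described in this abstract rests on Kalman-filter machinery: a latent temperature signal evolves over time, noisy observations (or proxies) are layered on top, and unknown variance parameters are estimated by maximizing the filter's Gaussian likelihood. The toy example below does this for a simple local-level model on a synthetic "temperature anomaly" series; it is only meant to show the mechanics and is not the reconstruction model developed in the thesis.

```python
# Minimal sketch of state-space parameter estimation: a local-level model fitted by
# maximizing the Kalman-filter likelihood over the two variance parameters.
# The data are synthetic and purely illustrative.
import numpy as np
from scipy.optimize import minimize

def kalman_neg_loglik(log_params, y):
    """Negative log-likelihood of a local-level model:
       state:       mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, q)
       observation: y_t  = mu_t + eps_t,      eps_t ~ N(0, r)
    """
    q, r = np.exp(log_params)          # variances parameterized on the log scale
    mu, P = y[0], 1.0                  # simple diffuse-ish initialization
    nll = 0.0
    for t in range(1, len(y)):
        mu_pred, P_pred = mu, P + q            # predict
        v, S = y[t] - mu_pred, P_pred + r      # innovation and its variance
        nll += 0.5 * (np.log(2 * np.pi * S) + v**2 / S)
        K = P_pred / S                         # Kalman gain
        mu, P = mu_pred + K * v, (1 - K) * P_pred
    return nll

# Hypothetical "temperature anomaly" series: trend plus random walk plus noise.
rng = np.random.default_rng(2)
t = np.arange(150)
y = 0.004 * t + np.cumsum(rng.normal(0, 0.03, 150)) + rng.normal(0, 0.15, 150)

res = minimize(kalman_neg_loglik, x0=np.log([0.01, 0.01]), args=(y,),
               method="Nelder-Mead")
q_hat, r_hat = np.exp(res.x)
print(f"estimated state variance q = {q_hat:.4f}, observation variance r = {r_hat:.4f}")
```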
86

Optimal Control and Estimation of Stochastic Systems with Costly Partial Information

Kim, Michael J. 31 August 2012 (has links)
Stochastic control problems that arise in sequential decision-making applications typically assume that the information used for decision-making is obtained according to a predetermined sampling schedule. In many real applications, however, there is a high sampling cost associated with collecting such data. Determining when information should be collected is therefore just as important as deciding how that information should be used for optimal decision-making. This type of joint optimization has been a long-standing problem in the operations research literature, and very few results regarding the structure of the optimal sampling and control policy have been published. In this thesis, the joint optimization of sampling and control is studied in the context of maintenance optimization. New theoretical results characterizing the structure of the optimal policy are established; they have practical interpretation and give new insight into the value of condition-based maintenance programs in life-cycle asset management. Applications in other areas such as healthcare decision-making and statistical process control are discussed. Statistical parameter estimation results are also developed, with illustrative real-world numerical examples.
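One way to see the flavor of a joint sampling-and-control problem is a small partially observed maintenance model in which the decision maker can continue operating, pay for an inspection that reveals the hidden deterioration state, or replace the unit. The sketch below solves such a toy model by value iteration on a discretized belief; the model, costs, and probabilities are invented for illustration and are not those analyzed in the thesis.

```python
# Toy joint sampling-and-control problem: a two-state partially observed maintenance
# model solved by value iteration on a discretized belief (probability that the unit
# has deteriorated). Actions: continue, inspect (costly information), replace.
# All costs and probabilities are illustrative assumptions.
import numpy as np

p, c_run, c_inspect, c_replace, beta = 0.1, 4.0, 1.0, 10.0, 0.95

grid = np.linspace(0.0, 1.0, 201)          # belief that the unit is deteriorated
V = np.zeros_like(grid)

def T(b):
    """Belief after one period of (unobserved) deterioration."""
    return b + (1.0 - b) * p

for _ in range(2000):
    cont = grid * c_run + beta * np.interp(T(grid), grid, V)
    inspect = (c_inspect
               + grid * (c_run + beta * np.interp(T(1.0), grid, V))
               + (1 - grid) * beta * np.interp(T(0.0), grid, V))
    replace = c_replace + beta * np.interp(T(0.0), grid, V)
    V_new = np.minimum(np.minimum(cont, inspect), replace)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = np.argmin(np.stack([cont, inspect, replace]), axis=0)
labels = np.array(["continue", "inspect", "replace"])
# Report the belief thresholds at which the optimal action changes.
print("action at b=0:", labels[policy[0]])
for i in np.nonzero(np.diff(policy))[0]:
    print(f"switch to {labels[policy[i + 1]]} at belief ~ {grid[i + 1]:.2f}")
```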
88

Stochastic Volatility Models and Simulated Maximum Likelihood Estimation

Choi, Ji Eun 08 July 2011 (has links)
Financial time series studies indicate that the lognormal assumption for the return of an underlying security is often violated in practice, owing to the presence of time-varying volatility in the return series. The most common departures are a fat left tail of the return distribution, volatility clustering or persistence, and asymmetry of the volatility. To account for these characteristics of time-varying volatility, many volatility models have been proposed and studied in the financial time series literature. The two main conditional-variance model specifications are the autoregressive conditional heteroscedasticity (ARCH) and stochastic volatility (SV) models. The SV model, proposed by Taylor (1986), is a useful alternative to the ARCH family (Engle, 1982). It incorporates time dependence of the volatility through a latent process, an autoregressive model of order 1 (AR(1)), and successfully accounts for the stylized facts of return series implied by the characteristics of time-varying volatility. In this thesis, we review both ARCH and SV models but focus on the SV model and its variations. We consider two modified SV models. One is an autoregressive process with stochastic volatility errors (AR-SV) and the other is the Markov regime-switching stochastic volatility (MSSV) model. The AR-SV model consists of two AR processes: the conditional mean process is an AR(p) model, and the conditional variance process is an AR(1) model. One notable advantage of the AR-SV model is that it better captures volatility persistence by considering the AR structure in the conditional mean process. The MSSV model consists of the SV model and a discrete Markov process. In this model, the volatility can switch from a low level to a high level at random points in time, and this feature better captures the volatility movement. We study the moment properties and the likelihood functions associated with these models. In spite of the simple structure of the SV models, it is not easy to estimate their parameters by conventional methods such as maximum likelihood estimation (MLE) or the Bayesian method because of the presence of the latent log-variance process. Of the various estimation methods proposed in the SV literature, we consider the simulated maximum likelihood (SML) method with the efficient importance sampling (EIS) technique, one of the most efficient estimation methods for SV models. In particular, the EIS technique is applied in the SML to reduce the Monte Carlo sampling error; it increases the accuracy of the estimates by constructing an importance function from the conditional density of the latent log-variance at time t given the latent log-variance and the return at time t-1. We first perform an empirical study to compare estimation of the SV model using the SML method with EIS and the Markov chain Monte Carlo (MCMC) method with Gibbs sampling, and conclude that SML has a slight edge over MCMC. We then introduce the SML approach for the AR-SV models and study the performance of the estimation method through simulation studies and real-data analysis. In the analysis, we use the AIC and BIC criteria to determine the order of the AR process and perform model diagnostics for goodness of fit. In addition, we introduce the MSSV models and extend the SML approach with EIS to estimate this new model. Simulation studies and empirical studies with several return series indicate that this model is reasonable when there is a possibility of volatility switching at random time points. Based on our analysis, the modified SV, AR-SV, and MSSV models capture the stylized facts of financial return series reasonably well, and the SML estimation method with the EIS technique works very well for the models and cases considered.
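The basic SV model and the simulated-likelihood idea can be made concrete with a short sketch: the latent log-variance follows an AR(1), returns are Gaussian with variance exp(h_t), and the likelihood is approximated by Monte Carlo averaging over simulated latent paths. For clarity the sketch uses the crude "natural" sampler (the AR(1) prior) rather than the EIS importance density the thesis relies on, and all parameter values and data are made up.

```python
# Sketch of the basic SV model and a crude simulated likelihood: h_t is a latent
# AR(1) log-variance, y_t = exp(h_t/2) * eps_t, and log p(y) is approximated by
# averaging the conditional Gaussian likelihood over latent paths drawn from the
# AR(1) prior. The EIS refinement used in the thesis is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)

def simulate_sv(T, mu, phi, sigma_eta):
    h = np.empty(T)
    h[0] = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2))   # stationary start
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    return np.exp(h / 2) * rng.normal(size=T)

def sml_loglik(params, y, n_sim=200):
    """Crude simulated log-likelihood using the AR(1) prior as importance density."""
    mu, phi, sigma_eta = params
    T = len(y)
    h = np.empty((n_sim, T))
    h[:, 0] = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2), size=n_sim)
    for t in range(1, T):
        h[:, t] = mu + phi * (h[:, t - 1] - mu) + sigma_eta * rng.normal(size=n_sim)
    logp = -0.5 * (np.log(2 * np.pi) + h + y**2 * np.exp(-h))   # log p(y_t | h_t)
    path_ll = logp.sum(axis=1)                                  # log-lik of each path
    m = path_ll.max()
    return m + np.log(np.mean(np.exp(path_ll - m)))             # log-mean-exp

y = simulate_sv(300, mu=-1.0, phi=0.95, sigma_eta=0.2)
print("simulated log-likelihood at the true parameters:",
      round(sml_loglik((-1.0, 0.95, 0.2), y), 2))
```

The prior sampler's Monte Carlo variance grows quickly with the series length, which is precisely why EIS constructs a data-dependent importance density.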
89

Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model

Oberhofer, Harald, Pfaffermayr, Michael 01 1900 (has links) (PDF)
This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and allows one to obtain estimates and confidence intervals that are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's (EU's) exports of goods to the EU (UK) are likely to decline within a range between 7.2% and 45.7% (5.9% and 38.2%) six years after Brexit has taken place. For the UK, the negative trade effects are only partially offset by an increase in domestic goods trade and trade with third countries, inducing a decline in the UK's real income of between 1.4% and 5.7% under the hard Brexit scenario. The estimated welfare effects for the EU are negligible in magnitude and not statistically different from zero. / Series: Department of Economics Working Paper Series
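For readers unfamiliar with the workhorse behind structural gravity estimation, the sketch below runs an ordinary (unconstrained) Poisson pseudo maximum likelihood regression of synthetic bilateral trade flows on log distance with exporter and importer fixed effects. It only illustrates the basic PPML step; the paper's contribution is the constrained estimator and the counterfactual analysis, neither of which is reproduced here, and the data-generating process is invented.

```python
# Illustrative sketch of an unconstrained PPML gravity regression on synthetic
# bilateral trade flows. Not the paper's constrained estimator.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
countries = [f"C{i}" for i in range(8)]
rows = []
for exp_c in countries:
    for imp_c in countries:
        if exp_c == imp_c:
            continue
        log_dist = rng.uniform(5, 9)
        # Hypothetical data-generating process with a distance elasticity of -1
        mu = np.exp(10 - 1.0 * log_dist + rng.normal(0, 0.3))
        rows.append({"exporter": exp_c, "importer": imp_c,
                     "log_dist": log_dist, "trade": rng.poisson(mu)})
df = pd.DataFrame(rows)

# PPML: Poisson GLM with a log link; exporter/importer fixed effects proxy for
# the multilateral resistance terms of structural gravity.
model = smf.glm("trade ~ log_dist + C(exporter) + C(importer)",
                data=df, family=sm.families.Poisson())
fit = model.fit(cov_type="HC0")          # heteroskedasticity-robust standard errors
print(f"distance elasticity estimate: {fit.params['log_dist']:.3f} "
      f"(s.e. {fit.bse['log_dist']:.3f})")
```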
90

Teletraffic Models for Mobile Network Connectivity

Venigalla, Thejaswi, Akkapaka, Raj Kiran January 2013 (has links)
We are in an era marked by tremendous global growth in mobile traffic and subscribers, driven by the shift in mobile communication technology from the second generation to the third and fourth generations. Usage of packet-data applications, in particular, has grown remarkably. The need for mobile communication networks capable of providing an ever-increasing spectrum of services calls for efficient techniques for the analysis, monitoring, and design of networks. To meet these demands and to ensure reliability and affordability, system models must be developed that capture the characteristics of actual network load and yield acceptably precise performance predictions in a reasonable amount of time. This can be achieved using teletraffic models, as they capture system behaviour through interpretable functions and parameters. Many teletraffic models have been proposed over the years for different purposes; nevertheless, no model provides a proper framework for analysing mobile networks. This report attempts to provide such a framework for analysing mobile traffic; based on the analysis, we design teletraffic models that represent realistic mobile networks and calculate the buffer underflow probability.
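The buffer underflow probability mentioned at the end of the abstract can be illustrated with a very small discrete-time playout-buffer simulation: packets arrive randomly over the mobile link, a media player drains the buffer at a constant rate, and an underflow occurs whenever the buffer cannot supply a full playout slot. The model, rates, and buffer sizes below are invented for illustration and are not taken from the thesis.

```python
# Rough sketch of a buffer underflow calculation: a discrete-time playout buffer fed
# by random packet arrivals and drained at a constant playout rate. The underflow
# probability is estimated by simulation. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

def underflow_probability(arrival_rate, playout_rate, buffer_start, n_slots=100_000):
    """Fraction of playout slots in which the buffer is empty (an underflow event)."""
    buffer_level = buffer_start
    underflows = 0
    for _ in range(n_slots):
        buffer_level += rng.poisson(arrival_rate)     # packets received this slot
        if buffer_level >= playout_rate:
            buffer_level -= playout_rate              # normal playout
        else:
            underflows += 1                           # not enough data: underflow
            buffer_level = 0
    return underflows / n_slots

# Playout consumes 10 packets per slot; arrivals average slightly below and above that.
for lam in (9.5, 10.0, 10.5):
    p = underflow_probability(arrival_rate=lam, playout_rate=10, buffer_start=50)
    print(f"arrival rate {lam:>4}: estimated underflow probability {p:.4f}")
```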
