71

Information Matrices in Estimating Function Approach: Tests for Model Misspecification and Model Selection

Zhou, Qian January 2009 (has links)
Estimating functions have been widely used for parameter estimation in various statistical problems. Regular estimating functions produce parameter estimators with desirable properties, such as consistency and asymptotic normality. In quasi-likelihood inference, an important example of estimating functions, correct specification of the first two moments of the underlying distribution leads to information unbiasedness: the two forms of the information matrix, the negative sensitivity matrix (the negative expectation of the first-order derivative of the estimating function) and the variability matrix (the variance of the estimating function), are equal, or in other words, the analogue of the Fisher information is equivalent to the Godambe information. Consequently, information unbiasedness implies that the model-based and sandwich covariance matrix estimators are equivalent. By comparing the model-based and sandwich variance estimators, we propose information ratio (IR) statistics for testing misspecification of the variance/covariance structure under a correctly specified mean structure, in the context of linear regression models, generalized linear regression models, and generalized estimating equations. Asymptotic properties of the IR statistics are discussed. In addition, through intensive simulation studies, we show that the IR statistics are powerful in various applications: testing for heteroscedasticity in linear regression models, testing for overdispersion in count data, and testing for a misspecified variance function and/or a misspecified working correlation structure. Moreover, the IR statistics appear more powerful than the classical information matrix test proposed by White (1982). Model selection criteria have been discussed intensively in the literature, but almost all of them target choosing the optimal mean structure. In this thesis, two model selection procedures are proposed for selecting the optimal variance/covariance structure among a collection of candidate structures. One is based on a sequence of IR tests over all competing variance/covariance structures. The other is based on an "information discrepancy criterion" (IDC), which measures the discrepancy between the negative sensitivity matrix and the variability matrix. In fact, the IDC characterizes the relative efficiency loss incurred by using a candidate variance/covariance structure in place of the true but unknown structure. Through simulation studies and analyses of two data sets, both proposed model selection methods are shown to have a high rate of detecting the true/optimal variance/covariance structure. In particular, because the IDC magnifies the differences among competing structures, it is highly sensitive in detecting the most appropriate variance/covariance structure.
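To fix notation for the two matrices the abstract compares, the following is a standard formulation of Godambe information theory, stated here for orientation rather than quoted from the thesis: for an unbiased estimating function g(y; θ),

```latex
% Standard definitions, assumed rather than quoted from the thesis:
\[
  S(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial g(y;\theta)}{\partial \theta^{\top}}\right]
  \quad \text{(negative sensitivity matrix)}, \qquad
  V(\theta) \;=\; \operatorname{Var}\!\bigl(g(y;\theta)\bigr)
  \quad \text{(variability matrix)}.
\]
% The Godambe (sandwich) information is
\[
  J(\theta) \;=\; S(\theta)^{\top}\, V(\theta)^{-1}\, S(\theta),
\]
% and information unbiasedness is the statement $S(\theta) = V(\theta)$,
% under which $J(\theta)$ reduces to $S(\theta)$ (the Fisher-information
% analogue) and the model-based covariance estimator $S^{-1}$ coincides
% with the sandwich estimator $S^{-1} V S^{-\top}$.
```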
72

Fully Bayesian Analysis of Switching Gaussian State Space Models

Frühwirth-Schnatter, Sylvia January 2000 (has links) (PDF)
In the present paper we study switching state space models from a Bayesian point of view. For estimation, the model is reformulated as a hierarchical model. We discuss various MCMC methods for Bayesian estimation, among them unconstrained Gibbs sampling, constrained sampling, and permutation sampling. We address in detail the problem of unidentifiability and discuss what information is potentially available from an unidentified model. Furthermore, the paper discusses issues in model selection, such as selecting the number of states or testing for the presence of Markov switching heterogeneity. The model likelihoods of all possible hypotheses are estimated using the method of bridge sampling. We conclude the paper with applications to simulated data as well as to modelling the U.S./U.K. real exchange rate.
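For concreteness, here is a minimal simulation of the kind of model studied, assuming a two-regime Markov-switching local-level form; the specific form and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state Markov-switching local-level model (assumed form):
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, q[s_t])   (state equation)
#   y_t = x_t + v_t,      v_t ~ N(0, r)        (observation equation)
# where the regime s_t in {0, 1} follows a Markov chain with transitions P.
T = 200
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])   # regime transition probabilities (assumed)
q = np.array([0.01, 1.00])     # regime-specific state noise variances
r = 0.25                       # observation noise variance

s = np.zeros(T, dtype=int)     # regime path
x = np.zeros(T)                # latent level
y = np.zeros(T)                # observations
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    x[t] = x[t - 1] + rng.normal(0.0, np.sqrt(q[s[t]]))
    y[t] = x[t] + rng.normal(0.0, np.sqrt(r))
# A Gibbs sampler of the kind the paper discusses would alternate between
# drawing the regime path s, the latent states x, and the parameters.
```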
74

Bayesian Adjustment for Multiplicity

Scott, James Gordon January 2009 (has links)
This thesis is about Bayesian approaches for handling multiplicity. It considers three main kinds of multiple-testing scenarios: tests of exchangeable experimental units, tests for variable inclusion in linear regression models, and tests for conditional independence in jointly normal vectors. Multiplicity adjustment in these three areas will be seen to have many common structural features. Though the modeling approach throughout is Bayesian, frequentist reasoning regarding error rates will often be employed.

Chapter 1 frames the issues in the context of historical debates about Bayesian multiplicity adjustment. Chapter 2 confronts the problem of large-scale screening of functional data, where control over Type-I error rates is a crucial issue. Chapter 3 develops new theory for comparing Bayes and empirical-Bayes approaches for multiplicity correction in regression variable selection. Chapters 4 and 5 describe new theoretical and computational tools for Gaussian graphical-model selection, where multiplicity arises in performing many simultaneous tests of pairwise conditional independence. Chapter 6 introduces a new approach to sparse-signal modeling based upon local shrinkage rules. Here the focus is not on multiplicity per se, but rather on using ideas from Bayesian multiple-testing models to motivate a new class of multivariate scale-mixture priors. Finally, Chapter 7 describes some directions for future study, many of which are the subjects of my current research agenda.
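The core multiplicity mechanism in the variable-inclusion setting can be illustrated with a beta-binomial prior on inclusion indicators; the sketch below is a generic version of that construction, assuming a Beta(1, 1) hyperprior, and is not code from the thesis.

```python
import numpy as np
from scipy.special import betaln

# Beta-binomial multiplicity adjustment (a standard construction, stated
# as an assumption here): each of m candidate variables is included
# independently with probability p, and p ~ Beta(a, b). Marginalizing
# over p gives the prior probability of any specific model with k
# variables included.
def log_prior_model(k: int, m: int, a: float = 1.0, b: float = 1.0) -> float:
    """Log prior probability of one specific model including k of m variables."""
    return betaln(a + k, b + m - k) - betaln(a, b)

# The multiplicity penalty: prior odds against adding one more variable.
for m in (10, 100, 1000):
    odds = np.exp(log_prior_model(1, m) - log_prior_model(2, m))
    print(f"m={m:5d}: prior odds of a 1-variable vs a 2-variable model = {odds:.1f}")
# The odds grow roughly linearly in m, so the prior penalizes inclusion
# more strongly as the number of simultaneous tests grows; a fixed p
# would apply no such adjustment.
```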
75

Selecting Web Services by Problem Similarity

Yan, Shih-hua 11 February 2009 (has links)
The recent development of the service-oriented architecture (SOA) has provided an opportunity to apply this new technology to support model management. This is particularly critical as more and more decision models are delivered as web services. A web-services-based approach to model management is useful in providing effective decision support. When a decision model is implemented as a web service, it is called a model-based web service. In model management, selecting a proper model-based web service is an important issue. Most current research on selecting such web services relies on matching the inputs and outputs of the model, which is an oversimplification. Incorporating more semantic knowledge may be necessary to make the selection of model-based web services more effective. In this research, we propose a new mechanism that represents the semantics associated with a problem and then uses the similarity of semantic information between a new problem description and existing web services to find the most suitable web services for solving the new problem. The paper defines the concepts of entity similarity, attribute similarity, and functional similarity for problem matching. The web service with the highest similarity is chosen as the basis for constructing the new web service. The identified mapping is converted into BPEL4WS code for utilizing the web services. To verify the feasibility of the proposed method, a prototype system has been implemented in Java.
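As an illustration of similarity-based selection, the sketch below scores candidate services by a weighted combination of entity, attribute, and functional similarity; the set-based Jaccard measure, the field names, and the weights are assumptions for the sketch, not the thesis's actual definitions.

```python
# Minimal sketch: rank candidate model-based web services against a new
# problem description. All names, weights, and the Jaccard measure are
# illustrative assumptions.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def service_similarity(problem: dict, service: dict,
                       w_entity=0.4, w_attr=0.3, w_func=0.3) -> float:
    return (w_entity * jaccard(problem["entities"], service["entities"])
            + w_attr * jaccard(problem["attributes"], service["attributes"])
            + w_func * jaccard(problem["functions"], service["functions"]))

problem = {"entities": {"product", "warehouse"},
           "attributes": {"demand", "capacity", "cost"},
           "functions": {"minimize_cost"}}
candidates = {
    "TransportationService": {"entities": {"product", "warehouse", "route"},
                              "attributes": {"demand", "capacity", "cost"},
                              "functions": {"minimize_cost"}},
    "ForecastService": {"entities": {"product"},
                        "attributes": {"demand"},
                        "functions": {"predict_demand"}},
}
best = max(candidates, key=lambda name: service_similarity(problem, candidates[name]))
print(best)  # the highest-similarity service becomes the construction base
```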
76

A Model of Positive Sequential Dependencies in Judgments of Frequency

Annis, Jeffrey Scott 01 January 2013 (has links)
Positive sequential dependencies occur when the response on the current trial n is positively correlated with the response on trial n-1. This was recently observed in a Judgment of Frequency (JOF) task (Malmberg and Annis, 2011). A model of positive sequential dependencies was developed in the REM framework (Shiffrin & Steyvers, 1997) by assuming that features representing the current test item in a retrieval cue carry over from the previous retrieval cue. To assess the model, we sought a data set that allows us to distinguish between frequency similarity and item similarity. We therefore used a JOF task in which we manipulated the item similarity of the stimuli by presenting either landscape photos (high similarity) or photos of everyday objects such as shoes and cars (low similarity). Similarity was modeled either by assuming that the item representations share a proportion of features or by assuming that the exemplars from different stimulus classes vary in their distinctiveness or diagnosticity. The model fits indicated that the best way to model similarity was to assume that items share a proportion of features.
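A minimal sketch of the carryover assumption follows; the feature dimensions, the uniform value distribution (REM uses a geometric distribution), and the carryover probability are illustrative choices, not the model's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative carryover mechanism: items are vectors of discrete
# features, and the retrieval cue on trial n retains each feature of the
# trial n-1 cue with probability CARRYOVER. Stale features make the
# current cue resemble the previous one, producing positive sequential
# dependencies in cue-to-trace matching.
N_FEATURES = 20
N_VALUES = 10      # feature values 1..N_VALUES (uniform here for simplicity)
CARRYOVER = 0.15   # per-feature carryover probability (assumed)

def make_item() -> np.ndarray:
    return rng.integers(1, N_VALUES + 1, size=N_FEATURES)

def make_cue(item: np.ndarray, prev_cue: np.ndarray | None) -> np.ndarray:
    cue = item.copy()
    if prev_cue is not None:
        carried = rng.random(N_FEATURES) < CARRYOVER
        cue[carried] = prev_cue[carried]   # features left over from trial n-1
    return cue

prev = None
for _ in range(5):                # a short sequence of test trials
    prev = make_cue(make_item(), prev)
```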
77

Selection, calibration, and validation of coarse-grained models of atomistic systems

Farrell, Kathryn Anne 03 September 2015 (has links)
This dissertation examines the development of coarse-grained models of atomistic systems for the purpose of predicting target quantities of interest in the presence of uncertainties. It addresses fundamental questions in computational science and engineering concerning the model selection, calibration, and validation processes used to construct predictive reduced-order models through a unified Bayesian framework. This framework, enhanced with concepts from information theory, sensitivity analysis, and Occam's Razor, provides a systematic means of constructing coarse-grained models suitable for use in a prediction scenario. The novel application of a general framework of statistical calibration and validation to molecular systems is presented. Atomistic models, which themselves contain uncertainties, are treated as the ground truth and provide data for the Bayesian updating of model parameters. The open problem of selecting appropriate coarse-grained models is addressed through the powerful notion of Bayesian model plausibility. A new, adaptive algorithm for model validation is presented. The Occam-Plausibility ALgorithm (OPAL), so named for its adherence to Occam's Razor and its use of Bayesian model plausibilities, identifies, among a large set of models, the simplest model that passes the Bayesian validation tests and may therefore be used to predict the chosen quantities of interest. By discarding or ignoring unnecessarily complex models, the algorithm has the potential to reduce computational expense, both through its systematic consideration of subsets of models and by carrying out the prediction scenario with the simplest valid model. An application to the construction of a coarse-grained system of polyethylene is given to demonstrate the implementation of molecular modeling techniques; the process of Bayesian selection, calibration, and validation of reduced-order models; and OPAL. The potential of the Bayesian framework for the coarse-graining process, and of OPAL as a means of determining a computationally conservative valid model, is illustrated on the polyethylene example.
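A schematic rendering of the OPAL control flow as the abstract describes it; the Model fields and helper callables below are placeholders, not the dissertation's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Placeholder model record: complexity plus user-supplied plausibility and
# validation callables (all assumed interfaces, for illustration only).
@dataclass
class Model:
    name: str
    n_parameters: int
    plausibility: Callable[[object], float]      # Bayesian model plausibility
    passes_validation: Callable[[object], bool]  # Bayesian validation test

def opal(models: list[Model], cal_data, val_data) -> Model:
    """Return the simplest model that passes the Bayesian validation test."""
    remaining = sorted(models, key=lambda m: m.n_parameters)
    while remaining:
        # Among the simplest models still in play, take the most plausible
        # given the calibration data.
        k = remaining[0].n_parameters
        tier = [m for m in remaining if m.n_parameters == k]
        best = max(tier, key=lambda m: m.plausibility(cal_data))
        # Calibrate (omitted here) and subject it to validation.
        if best.passes_validation(val_data):
            return best         # simplest valid model: use it for prediction
        remaining.remove(best)  # discard and consider the next candidate
    raise RuntimeError("no candidate model passed validation")
```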
78

Estimating and Correcting the Effects of Model Selection Uncertainty

Nguefack Tsague, Georges Lucioni Edison 03 February 2006 (has links)
No description available.
79

Coupling distances between Lévy measures and applications to noise sensitivity of SDE

Gairing, Jan, Högele, Michael, Kosenkova, Tetiana, Kulik, Alexei January 2013 (has links)
We introduce the notion of coupling distances on the space of Lévy measures in order to quantify rates of convergence towards a limiting Lévy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Lévy measure. The main result yields an estimate of the Wasserstein-Kantorovich-Rubinstein distance on path space between two Lévy diffusions in terms of the coupling distances. We want to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.
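The coupling distances are Wasserstein-type comparisons of (suitably reweighted) Lévy measures. As a loose illustration only, the sketch below compares the jump-size tails of two Pareto-type measures with the classical 1-Wasserstein distance; the cutoff, the tail exponents, and the use of the plain Wasserstein distance are all assumptions, not the paper's construction.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def sample_tail(alpha: float, cutoff: float = 0.1, n: int = 100_000):
    # Pareto-type jump tail with density ~ x^{-1-alpha} on [cutoff, inf),
    # drawn by inverse-transform sampling (illustrative choice).
    u = rng.random(n)
    return cutoff * (1.0 - u) ** (-1.0 / alpha)

jumps_a = sample_tail(alpha=1.8)   # e.g. a reference jump measure
jumps_b = sample_tail(alpha=1.6)   # e.g. a candidate approximation
# Empirical 1-Wasserstein distance between the two normalized tails:
print(wasserstein_distance(jumps_a, jumps_b))
```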
80

An evaluation of latent Dirichlet allocation in the context of plant-pollinator networks

Callaghan, Liam 08 January 2013 (has links)
There may be several mechanisms that drive the observed interactions between plants and pollinators in an ecosystem, many of which may involve trait matching or trait complementarity. Hence a model of insect species activity on plant species should be represented as a mixture of these linkage rules. Unfortunately, ecologists do not always know how many, or even which, traits are the main contributors to the observed interactions. This thesis proposes the Latent Dirichlet Allocation (LDA) model from artificial intelligence for modelling the observed interactions in an ecosystem as a finite mixture of (latent) interaction groups, in which plant and pollinator pairs that share common linkage rules are placed in the same interaction group. Several model selection criteria are explored for estimating how many interaction groups best describe the observed interactions. This thesis also introduces a new model selection score called "penalized perplexity". The performance of the model selection criteria, and of LDA in general, is evaluated through a comprehensive simulation study that considers networks of various sizes along with varying levels of nestedness and numbers of interaction groups. Results of the simulation study suggest that LDA works well on networks with mild to no nesting but loses accuracy with increased nestedness. Further, penalized perplexity tended to outperform the other model selection criteria in identifying the correct number of interaction groups used to simulate the data. Finally, LDA was demonstrated on a real network, the results of which provided insights into the functional roles of pollinator species in the study region.
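A minimal sketch of this kind of model scan, using scikit-learn's LatentDirichletAllocation on a synthetic plant-by-pollinator count matrix; the complexity penalty here is a generic stand-in, not the thesis's definition of penalized perplexity.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Synthetic visitation network: rows are plant species, columns are
# pollinator species, entries are observed interaction counts.
counts = rng.poisson(1.0, size=(30, 40))

scores = {}
for k in range(2, 9):                      # candidate numbers of groups
    lda = LatentDirichletAllocation(n_components=k, random_state=0)
    lda.fit(counts)
    perp = lda.perplexity(counts)          # held-in perplexity of the fit
    penalty = k * np.log(counts.sum())     # assumed complexity penalty
    scores[k] = perp + penalty
best_k = min(scores, key=scores.get)
print(f"selected number of interaction groups: {best_k}")
```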
