1

The Exploration of Effect of Model Misspecification and Development of an Adequacy-Test for Substitution Model in Phylogenetics

Chen, Wei Jr 06 November 2012 (has links)
It is possible that the maximum likelihood method can give an inconsistent result when the DNA sequences are generated under a tree topology that lies in the Felsenstein zone and are analyzed with a misspecified model. Therefore, it is important to select a good substitution model. This thesis first explores the effects of different degrees and types of model misspecification on the maximum likelihood estimates. Results are presented for tree selection and branch length estimates based on simulated data sets. Next, two Pearson's goodness-of-fit tests are developed based on binning of site patterns. These two tests are used for testing the adequacy of substitution models, and their performance is studied on both simulated data sets and empirical data.
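The binned goodness-of-fit idea can be sketched as follows. The bin counts and pattern probabilities below are hypothetical; a real test would derive the expected bin probabilities from the fitted substitution model and tree:

```python
import math

def pearson_gof(observed, expected_probs):
    """Pearson chi-square goodness-of-fit statistic over binned site patterns.

    observed: list of counts per bin; expected_probs: model probabilities
    per bin (summing to 1). Returns (X2, df) with df = number of bins - 1.
    """
    n = sum(observed)
    x2 = 0.0
    for o, p in zip(observed, expected_probs):
        e = n * p                      # expected count under the model
        x2 += (o - e) ** 2 / e
    return x2, len(observed) - 1

# Hypothetical example: 1000 sites binned into 4 pattern classes,
# with a model that predicts equal probability for each bin.
obs = [260, 240, 255, 245]
probs = [0.25, 0.25, 0.25, 0.25]
x2, df = pearson_gof(obs, probs)
```

The statistic would then be compared against a chi-square distribution with `df` degrees of freedom to judge model adequacy.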
2

Spatial Econometrics Revisited: A Case Study of Land Values in Roanoke County

Kaltsas, Ioannis 27 November 2000 (has links)
An increasing volume of empirical literature demonstrates the possibility of spatial autocorrelation in land value models. A number of objections regarding the methodology followed in those empirical studies have been raised. This thesis examines three propositions. The first proposition states that there is spatial dependence in the land value model in Roanoke County. The second proposition is that mechanical construction of neighborhood effects, or grouping nearby land parcels into neighborhoods, is not always the best way to capture spatial effects. Finally, the third and most important proposition states that by implementing a comprehensive set of individual and joint misspecification tests, one can better identify misspecification error sources and establish a more statistically sound and reliable model than models based on existing spatial econometric practices. The findings of this dissertation confirm the validity of those three propositions. In addition, we conclude that, based on their development status, prices of land parcels in Roanoke County may follow different stochastic processes. Changes in the values of hedonic variables have different implications for different groups of land parcels. / Ph. D.
3

Combining structural and reduced-form models for macroeconomic analysis and policy forecasting

Monti, Francesca 08 February 2011 (has links)
Can we fruitfully use the same macroeconomic model to forecast and to perform policy analysis? There is a tension between a model's ability to forecast accurately and its ability to tell a theoretically consistent story. The aim of this dissertation is to propose ways to ease this tension, combining structural and reduced-form models in order to obtain models that can effectively do both.
4

Model Robust Regression Based on Generalized Estimating Equations

Clark, Seth K. 04 April 2002 (has links)
One form of model robust regression (MRR) predicts mean response as a convex combination of a parametric and a nonparametric prediction. MRR is a semiparametric method by which an incompletely or an incorrectly specified parametric model can be improved through adding an appropriate amount of a nonparametric fit. The combined predictor can have less bias than the parametric model estimate alone and less variance than the nonparametric estimate alone. Additionally, as shown in previous work for uncorrelated data with linear mean function, MRR can converge faster than the nonparametric predictor alone. We extend the MRR technique to the problem of predicting mean response for clustered non-normal data. We combine a nonparametric method based on local estimation with a global, parametric generalized estimating equations (GEE) estimate through a mixing parameter on both the mean scale and the linear predictor scale. As a special case, when data are uncorrelated, this amounts to mixing a local likelihood estimate with predictions from a global generalized linear model. Cross-validation bandwidth and optimal mixing parameter selectors are developed. The global fits and the optimal and data-driven local and mixed fits are studied under no/some/substantial model misspecification via simulation. The methods are then illustrated through application to data from a longitudinal study. / Ph. D.
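The core MRR combination step can be sketched as follows. The fitted values and mixing parameter below are hypothetical; in the setting described above, the parametric predictions would come from a GEE/GLM fit, the nonparametric ones from a local estimator, and the mixing parameter would be chosen by cross-validation:

```python
def mrr_predict(y_param, y_nonparam, lam):
    """Model robust regression prediction: a convex combination of a
    parametric fit and a nonparametric fit, with mixing parameter
    lam in [0, 1] controlling how much nonparametric fit is added."""
    return [(1 - lam) * yp + lam * yn for yp, yn in zip(y_param, y_nonparam)]

# Hypothetical fitted values from the two component models
y_glm   = [1.0, 2.0, 3.0]   # parametric (global) predictions
y_local = [1.4, 1.8, 3.6]   # nonparametric (local) predictions
y_mix = mrr_predict(y_glm, y_local, lam=0.5)
```

With `lam = 0` the combined predictor reduces to the parametric fit; with `lam = 1` it is fully nonparametric.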
5

Two Essays on Resource Economics: A Study of the Statistical Evidence for Global Warming and An Analysis of Overcompliance with Effluent Standards Among Wastewater Treatment Plants

Akobundu, Eberechukwu 02 December 2004 (has links)
These papers analyze two issues in resource economics that are currently debated in academic and policy arenas: global warming and overcompliant behavior amongst regulated sources of water pollution. The first paper examines the evidence for global warming, in particular the published estimates of the rate of global warming. The paper reproduces published results using the same data, provides evidence that the statistical model used to obtain these estimates is misspecified for the data, and re-specifies the model in order to obtain a statistically adequate model. The re-specified model indicates that trends in the surface temperature anomalies are highly nonlinear rather than linear and that currently published estimates of the degree of global warming are based on a misspecified model. It argues for caution in interpreting linear trend estimates and illustrates the importance of model misspecification testing and re-specification when modeling climate change using statistical models. The second paper examines recent evidence for overcompliant behavior amongst wastewater treatment plants whose pollutant discharges are regulated under the Clean Water Act. The historical evidence suggests that many regulated facilities do not comply with permit regulations. This behavior has been attributed to inadequate monitoring and enforcement by the regulatory agencies as well as to an institutional structure that penalizes noncompliance but does not reward overcompliance. Against this backdrop, the evidence for significant and widespread overcompliance appears puzzling. The paper examines overcompliance with a widely regulated pollutant, biochemical oxygen demand (BOD). The testable hypotheses are: whether jointness in pollution control between nitrogen and BOD can explain overcompliance, and whether variation in BOD output can explain BOD overcompliance.
These hypotheses are examined by developing a conceptual model of BOD overcompliance and estimating a model of BOD control. The results indicate that jointness in pollution control plays a significant role in explaining BOD overcompliance. Variation in BOD output is not a significant factor in explaining BOD overcompliance. The paper explores plausible reasons for this result and proposes significant modifications to the traditional marginal analysis of BOD overcompliance/compliance decisions. / Ph. D.
6

Investigating the Effects of Sample Size, Model Misspecification, and Underreporting in Crash Data on Three Commonly Used Traffic Crash Severity Models

Ye, Fan 2011 May 1900 (has links)
Numerous studies have documented the application of crash severity models to explore the relationship between crash severity and its contributing factors. These studies show that a large amount of work has been conducted on this topic, usually focused on different types of models. However, only a limited amount of research has compared the performance of different crash severity models. Additionally, three major issues related to the modeling process for crash severity analysis have not been sufficiently explored: sample size, model misspecification, and underreporting in crash data. Therefore, in this research, three commonly used traffic crash severity models, the multinomial logit model (MNL), the ordered probit model (OP), and the mixed logit model (ML), were studied in terms of the effects of sample size, model misspecification, and underreporting in crash data, via a Monte Carlo approach using simulated and observed crash data. The results of the sample size analysis are consistent with prior expectations in that small sample sizes significantly affect the development of crash severity models, no matter which model type is used. Furthermore, among the three models, the ML model was found to require the largest sample size, while the OP model required the smallest; the sample size requirement for the MNL model is intermediate between the other two. In addition, when the sample size is sufficient, the model misspecification analysis leads to the following suggestions: to decrease the bias and variability of estimated parameters, logit models should be selected over probit models, and a more general and flexible model, such as one allowing randomness in the parameters (i.e., the ML model), should be preferred. Another important finding was that none of the three models was immune to the underreporting issue.
In order to minimize the bias and reduce the variability of the model, fatal crashes should be set as the baseline severity for the MNL and ML models, while for the OP model the crash severities should be ranked from fatal to property-damage-only (PDO) in descending order. Furthermore, when full or partial information about the underreporting rates for each severity level is known, treating the crash data as outcome-based samples in model estimation, via the Weighted Exogenous Sample Maximum Likelihood Estimator (WESMLE), dramatically improves the estimation for all three models compared to the results produced by the maximum likelihood estimator (MLE).
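The WESMLE reweighting idea can be sketched as follows, using hypothetical population and sample severity shares: each observation's log-likelihood contribution is weighted by the ratio of its severity class's population share to its share in the (outcome-based, underreported) sample:

```python
def wesmle_weights(pop_shares, sample_shares):
    """WESMLE weights Q_s / H_s: population share over sample share
    for each severity class s."""
    return {s: pop_shares[s] / sample_shares[s] for s in pop_shares}

def weighted_loglik(loglik_terms, classes, weights):
    """Weighted exogenous-sample log-likelihood: sum of w_{s_i} * log L_i."""
    return sum(weights[c] * ll for c, ll in zip(classes, loglik_terms))

# Hypothetical shares: PDO crashes are underrepresented in the sample
pop    = {"PDO": 0.70, "injury": 0.28, "fatal": 0.02}
sample = {"PDO": 0.50, "injury": 0.45, "fatal": 0.05}
w = wesmle_weights(pop, sample)
```

Underreported classes (here PDO) receive weights above 1, overrepresented classes weights below 1, so maximizing the weighted log-likelihood corrects for the outcome-based sampling.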
7

A-optimal Minimax Design Criterion for Two-level Fractional Factorial Designs

Yin, Yue 29 August 2013 (has links)
In this thesis we introduce and study an A-optimal minimax design criterion for two-level fractional factorial designs, which can be used to estimate a linear model with main effects and some interactions. The resulting designs, called A-optimal minimax designs, are robust against misspecification of the terms in the linear model. They are also efficient, and often they coincide with A-optimal and D-optimal designs. Various theoretical results about A-optimal minimax designs are derived. Search algorithms, including a simulated annealing algorithm, are discussed for finding optimal designs, and many interesting examples are presented in the thesis. / Graduate / 0463 / yinyue@uvic.ca
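A generic simulated annealing search over two-level designs might look like the sketch below. The loss used here is a simple non-orthogonality measure, not the thesis's A-optimal minimax criterion; it only illustrates the annealing scheme:

```python
import math
import random

def anneal_design(n_runs, n_factors, loss, iters=2000, t0=1.0, seed=0):
    """Simulated annealing over two-level (+1/-1) designs.

    A generic sketch, not the thesis's exact algorithm: 'loss' maps a
    design (list of rows) to a number to be minimized."""
    rng = random.Random(seed)
    d = [[rng.choice((-1, 1)) for _ in range(n_factors)] for _ in range(n_runs)]
    cur = loss(d)
    best, best_loss = [row[:] for row in d], cur
    for k in range(iters):
        i, j = rng.randrange(n_runs), rng.randrange(n_factors)
        d[i][j] = -d[i][j]                    # propose a single sign flip
        new = loss(d)
        t = t0 * (1.0 - k / iters) + 1e-9     # linear cooling schedule
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new                         # accept the move
            if new < best_loss:
                best, best_loss = [row[:] for row in d], new
        else:
            d[i][j] = -d[i][j]                # reject: revert the flip
    return best, best_loss

def nonorthogonality(d):
    """Sum of squared column inner products; 0 for an orthogonal design."""
    p = len(d[0])
    return sum(sum(row[a] * row[b] for row in d) ** 2
               for a in range(p) for b in range(a + 1, p))

design, final_loss = anneal_design(8, 3, nonorthogonality)
```

Swapping in an A-optimal minimax objective would only change the `loss` argument; the annealing loop itself stays the same.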
8

A robust test of homogeneity in zero-inflated models for count data

Mawella, Nadeesha R. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Wei-Wen Hsu / Evaluating heterogeneity in the class of zero-inflated models has attracted considerable attention in the literature, where the heterogeneity refers to the instances of zero counts generated from two different sources. The mixture probability, or so-called mixing weight, in the zero-inflated model is used to measure the extent of such heterogeneity in the population. Typically, homogeneity tests are employed to examine the mixing weight at zero. Various testing procedures for homogeneity in zero-inflated models, such as the score test and Wald test, have been well discussed and established in the literature. However, it is well known that these classical tests require correct model specification in order to provide valid statistical inferences. In practice, the testing procedure could be performed under model misspecification, which could result in biased and invalid inferences. There are two common misspecifications in zero-inflated models: incorrect specification of the baseline distribution and a misspecified mean function of the baseline distribution. As empirical evidence, intensive simulation studies reveal that the empirical sizes of homogeneity tests for zero-inflated models can be extremely liberal and unstable under these misspecifications, for both cross-sectional and correlated count data. We propose a robust score statistic to evaluate heterogeneity in cross-sectional zero-inflated data. Technically, the test is developed based on the Poisson-Gamma mixture model, which provides a more general framework to incorporate various baseline distributions without specifying their associated mean function. The testing procedure is further extended to correlated count data.
We develop a robust Wald test statistic for correlated count data, using a working-independence model assumption coupled with a sandwich estimator to adjust for any misspecification of the covariance structure in the data. The empirical performance of the proposed robust score test and Wald test is evaluated in simulation studies. It is worth mentioning that the proposed Wald test can be implemented easily, with minimal programming effort, in routine statistical software such as SAS. Dental caries data from the Detroit Dental Health Project (DDHP) and Girl Scout data from the Scouting Nutrition and Activity Program (SNAP) are used to illustrate the proposed methodologies.
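The sandwich-variance idea behind such a robust Wald test can be sketched on the simplest possible estimating equation, a mean. This is an illustrative toy under that assumption, not the paper's zero-inflated model:

```python
def robust_wald_mean(x, mu0):
    """Robust Wald statistic for H0: mean(x) = mu0 via a sandwich variance.

    For the estimating equation sum(x_i - mu) = 0, the "bread" is A = n
    and the "meat" is B = sum of squared residuals, so the sandwich
    variance of the estimator is B / n^2. The statistic is chi-square
    with 1 df under H0, without assuming a specific error distribution."""
    n = len(x)
    muhat = sum(x) / n
    meat = sum((xi - muhat) ** 2 for xi in x)
    sandwich_var = meat / n ** 2
    return (muhat - mu0) ** 2 / sandwich_var

w_stat = robust_wald_mean([1.0, 2.0, 3.0, 4.0], mu0=0.0)
```

In the correlated-data setting above, the same bread/meat construction is applied to the working-independence estimating equations, with the meat summed over clusters rather than individual observations.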
9

Exploring the Test of Covariate Moderation Effect and the Impact of Model Misspecification in Multilevel MIMIC Models

Cao, Chunhua 29 March 2017 (has links)
In multilevel MIMIC models, covariates at the between level and at the within level can be modeled simultaneously. A covariates interaction effect occurs when the effect of one covariate on the latent factor varies depending on the level of the other covariate. The two covariates can both be at the between level, both at the within level, or one at each level, creating between-level, within-level, and cross-level covariates interactions. Study One examines the performance of multilevel MIMIC models in estimating the covariates interactions described above. The Type I error rate of falsely detecting a covariates interaction when none exists in the population model, the power of correctly detecting the covariates interaction effect, the bias of the interaction-effect estimate, and the RMSE are examined. The design factors include the location of the covariates interaction effect, cluster number, cluster size, intra-class correlation (ICC) level, and the magnitude of the interaction effect. The results showed that ML MIMIC performed well in detecting the covariates interaction effect when it was at the within level or cross level. However, when the covariates interaction effect was at the between level, the performance of ML MIMIC depended on the magnitude of the interaction effect, the ICC, and the sample size, especially cluster size. In Study Two, the impact of omitting the covariates interaction effect on the estimates of other parameters is investigated when the interaction effect is present in the population model. Parameter estimates of factor loadings, intercepts, main effects of the covariates, and residual variances produced by the correct model in Study One are compared to those produced by the misspecified model to check the impact.
Moreover, the sensitivity of fit indices such as chi-square, CFI, RMSEA, SRMR-B (between), and SRMR-W (within) is also examined. Results indicated that none of the fit indices was sensitive to the omission of the covariates interaction effect. The biased parameter estimates included the two covariates' main effects and the between-level factor mean.
10

Essays on DSGE Models and Bayesian Estimation

Kim, Jae-yoon 11 June 2018 (has links)
This thesis explores the theory and practice of sovereignty. I begin with a conceptual analysis of sovereignty, examining its theological roots in contrast with its later influence in contestations over political authority. Theological debates surrounding God’s sovereignty dealt not with the question of legitimacy, which would become important for political sovereignty, but instead with the limits of his ability. Read as an ontological capacity, sovereignty is coterminous with an existent’s activity in the world. As lived, this capacity is regularly limited by the ways in which space is produced via its representations, its symbols, and its practices. All collective appropriations of space have a nomos that characterizes their practice. Foucault’s account of “biopolitics” provides an account of how contemporary materiality is distributed, an account that can be supplemented by sociological typologies of how city space is typically produced. The collective biopolitical distribution of space expands the range of practices that representationally legibilize activity in the world, thereby expanding the conceptual limits of existents and what it means for them to act up to the borders of their capacity, i.e., to practice sovereignty. The desire for total authorial capacity expresses itself in relations of domination and subordination that never erase the fundamental precarity of subjects, even as these expressions seek to disguise it. I conclude with a close reading of narratives recounting the lives of residents in Chicago’s Englewood, reading their activity as practices of sovereignty which manifest variously as they master and produce space. / Ph. D.
