1. The Exploration of Effect of Model Misspecification and Development of an Adequacy-Test for Substitution Model in Phylogenetics

Chen, Wei Jr 06 November 2012 (has links)
It is possible for the maximum likelihood method to give an inconsistent result when the DNA sequences are generated under a tree topology in the Felsenstein Zone and analyzed with a misspecified model. Therefore, it is important to select a good substitution model. This thesis first explores the effects of different degrees and types of model misspecification on the maximum likelihood estimates. The results are presented for tree selection and branch length estimates based on simulated data sets. Next, two Pearson's goodness-of-fit tests are developed based on binning of site patterns. These two tests are used for testing the adequacy of substitution models, and their performance is studied on both simulated data sets and empirical data.
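
As a rough illustration of the kind of test developed here, the sketch below applies Pearson's X² statistic to binned site-pattern counts. The binning rule, the counts, and the number of fitted parameters are placeholders, not the thesis's actual procedure:

```python
import numpy as np
from scipy.stats import chi2

def pearson_gof(observed, expected, n_params_fitted=0):
    """Pearson X^2 = sum (O - E)^2 / E, referred to a chi-square
    distribution with (bins - 1 - fitted parameters) degrees of freedom."""
    O = np.asarray(observed, dtype=float)
    E = np.asarray(expected, dtype=float)
    x2 = np.sum((O - E) ** 2 / E)
    df = len(O) - 1 - n_params_fitted
    return x2, chi2.sf(x2, df)

# Hypothetical site-pattern bin counts vs. counts expected under a fitted
# substitution model; a small p-value flags an inadequate model.
obs = [420, 310, 150, 80, 40]
exp = [400, 330, 160, 70, 40]
x2, pval = pearson_gof(obs, exp, n_params_fitted=2)
```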

2. Model Robust Regression Based on Generalized Estimating Equations

Clark, Seth K. 04 April 2002 (has links)
One form of model robust regression (MRR) predicts the mean response as a convex combination of a parametric and a nonparametric prediction. MRR is a semiparametric method by which an incompletely or incorrectly specified parametric model can be improved by adding an appropriate amount of a nonparametric fit. The combined predictor can have less bias than the parametric model estimate alone and less variance than the nonparametric estimate alone. Additionally, as shown in previous work for uncorrelated data with a linear mean function, MRR can converge faster than the nonparametric predictor alone. We extend the MRR technique to the problem of predicting the mean response for clustered non-normal data. We combine a nonparametric method based on local estimation with a global, parametric generalized estimating equations (GEE) estimate through a mixing parameter, on both the mean scale and the linear predictor scale. As a special case, when data are uncorrelated, this amounts to mixing a local likelihood estimate with predictions from a global generalized linear model. Cross-validation bandwidth and optimal mixing parameter selectors are developed. The global fits and the optimal and data-driven local and mixed fits are studied under no, some, and substantial model misspecification via simulation. The methods are then illustrated through application to data from a longitudinal study. / Ph. D.
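
The convex-combination idea can be sketched in a few lines. This is a minimal illustration, not the estimator developed in the thesis: the Gaussian kernel, the fixed bandwidth, and the fixed mixing parameter are all assumptions, and in practice the bandwidth and mixing parameter would come from data-driven selectors such as cross-validation:

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_eval, bandwidth=0.5):
    """Gaussian-kernel local-constant (Nadaraya-Watson) smoother."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 3, 100))
y = 1.0 + 0.5 * x + 0.4 * x**2 + rng.normal(0, 0.3, 100)  # curved truth

beta = np.polyfit(x, y, deg=1)       # deliberately misspecified linear model
y_par = np.polyval(beta, x)          # parametric prediction
y_npar = kernel_smooth(x, y, x)      # nonparametric prediction

lam = 0.4                            # mixing parameter (data-driven in practice)
y_mrr = (1 - lam) * y_par + lam * y_npar   # convex combination
```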

3. Investigating the Effects of Sample Size, Model Misspecification, and Underreporting in Crash Data on Three Commonly Used Traffic Crash Severity Models

Ye, Fan May 2011 (has links)
Numerous studies have documented the application of crash severity models to explore the relationship between crash severity and its contributing factors. A large amount of work has been conducted on this topic, usually focused on different types of models, but only a limited amount of research has compared the performance of different crash severity models. Additionally, three major issues related to the modeling process for crash severity analysis have not been sufficiently explored: sample size, model misspecification and underreporting in crash data. Therefore, in this research, three commonly used traffic crash severity models, the multinomial logit model (MNL), the ordered probit model (OP) and the mixed logit model (ML), were studied in terms of the effects of sample size, model misspecification and underreporting in crash data, via a Monte Carlo approach using simulated and observed crash data. The results on sample size are consistent with prior expectations in that small sample sizes significantly affect the development of crash severity models, no matter which model type is used. Among the three models, the ML model was found to require the largest sample size and the OP model the smallest, with the requirement for the MNL model intermediate between the two. In addition, when the sample size is sufficient, the model misspecification analysis leads to the following suggestions: to decrease the bias and variability of estimated parameters, logit models should be selected over probit models, and more general and flexible models, such as those allowing randomness in the parameters (i.e., the ML model), should be preferred. Another important finding is that none of the three models was immune to underreporting in the crash data. To minimize the bias and reduce the variability of the model, fatal crashes should be set as the baseline severity for the MNL and ML models, while for the OP model the crash severities should be ranked from fatal to property-damage-only (PDO) in descending order. Furthermore, when full or partial information about the unreported rates for each severity level is known, treating crash data as outcome-based samples in model estimation, via the Weighted Exogenous Sample Maximum Likelihood Estimator (WESMLE), dramatically improves estimation for all three models compared to the results produced by the Maximum Likelihood Estimator (MLE).
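
The WESMLE correction mentioned at the end admits a compact sketch: each observation's log-likelihood contribution is weighted by the ratio of its severity level's population share to its sample share. The shares below are hypothetical, and the multinomial logit here stands in for whichever of the three models is being estimated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

pop_share = {"PDO": 0.70, "injury": 0.25, "fatal": 0.05}  # assumed true shares
smp_share = {"PDO": 0.50, "injury": 0.40, "fatal": 0.10}  # shares in the sample

def wesmle_weights(labels):
    """Weight = population share / sample share for each observed severity."""
    return np.array([pop_share[s] / smp_share[s] for s in labels])

def neg_weighted_loglik(theta, X, Y, w):
    """Weighted multinomial-logit negative log-likelihood (last class as base)."""
    p, k = X.shape[1], Y.shape[1]
    B = np.column_stack([theta.reshape(p, k - 1), np.zeros(p)])
    eta = X @ B
    logp = eta - logsumexp(eta, axis=1, keepdims=True)
    return -np.sum(w * np.sum(Y * logp, axis=1))

# Fit sketch, given design matrix X (n x p) and one-hot outcomes Y (n x k):
# theta0 = np.zeros(X.shape[1] * (Y.shape[1] - 1))
# res = minimize(neg_weighted_loglik, theta0, args=(X, Y, w), method="BFGS")
```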

4. A-optimal Minimax Design Criterion for Two-level Fractional Factorial Designs

Yin, Yue 29 August 2013 (has links)
In this thesis we introduce and study an A-optimal minimax design criterion for two-level fractional factorial designs, which can be used to estimate a linear model with main effects and some interactions. The resulting designs, called A-optimal minimax designs, are robust against misspecification of the terms in the linear model. They are also efficient, and often they are the same as A-optimal and D-optimal designs. Various theoretical results about A-optimal minimax designs are derived. Several search algorithms, including a simulated annealing algorithm, are discussed for finding optimal designs, and many interesting examples are presented in the thesis. / Graduate / 0463 / yinyue@uvic.ca
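
As a loose sketch of how a simulated annealing search for such designs might look, the code below anneals over n-run two-level designs to minimize the plain A-criterion, trace((X'X)⁻¹), for a main-effects model; the thesis's minimax criterion and its theoretical machinery are not reproduced here:

```python
import numpy as np

def a_value(D):
    """A-criterion trace((X'X)^{-1}) for the main-effects model X = [1, D]."""
    X = np.column_stack([np.ones(len(D)), D])
    return np.trace(np.linalg.inv(X.T @ X))

def anneal_design(n_runs=12, n_factors=4, iters=5000, t0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    D = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
    cur = best = a_value(D)
    best_D = D.copy()
    for i in range(iters):
        temp = t0 * (1 - i / iters) + 1e-9         # linear cooling schedule
        r, c = rng.integers(n_runs), rng.integers(n_factors)
        D[r, c] *= -1                              # propose flipping one +/-1 entry
        try:
            new = a_value(D)
        except np.linalg.LinAlgError:              # singular design: reject
            D[r, c] *= -1
            continue
        if new < cur or rng.random() < np.exp((cur - new) / temp):
            cur = new                              # accept (Metropolis rule)
            if new < best:
                best, best_D = new, D.copy()
        else:
            D[r, c] *= -1                          # reject: undo the flip
    return best_D, best
```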

5. A robust test of homogeneity in zero-inflated models for count data

Mawella, Nadeesha R. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Wei-Wen Hsu / Evaluating heterogeneity in the class of zero-inflated models has attracted considerable attention in the literature, where heterogeneity refers to instances of zero counts generated from two different sources. The mixture probability, or so-called mixing weight, in the zero-inflated model is used to measure the extent of such heterogeneity in the population. Typically, homogeneity tests are employed to examine the mixing weight at zero. Various testing procedures for homogeneity in zero-inflated models, such as the score test and Wald test, have been well discussed and established in the literature. However, it is well known that these classical tests require correct model specification in order to provide valid statistical inferences. In practice, the testing procedure could be performed under model misspecification, which could result in biased and invalid inferences. There are two common misspecifications in zero-inflated models: an incorrectly specified baseline distribution and a misspecified mean function of the baseline distribution. As empirical evidence, intensive simulation studies reveal that the empirical sizes of homogeneity tests for zero-inflated models can be extremely liberal and unstable under these misspecifications, for both cross-sectional and correlated count data. We propose a robust score statistic to evaluate heterogeneity in cross-sectional zero-inflated data. Technically, the test is developed based on the Poisson-Gamma mixture model, which provides a more general framework that incorporates various baseline distributions without specifying their associated mean functions. The testing procedure is further extended to correlated count data: we develop a robust Wald test statistic for correlated count data using a working-independence model assumption coupled with a sandwich estimator to adjust for any misspecification of the covariance structure in the data. The empirical performance of the proposed robust score and Wald tests is evaluated in simulation studies. It is worth mentioning that the proposed Wald test can be implemented easily, with minimal programming effort, in routine statistical software such as SAS. Dental caries data from the Detroit Dental Health Project (DDHP) and Girl Scout data from the Scouting Nutrition and Activity Program (SNAP) are used to illustrate the proposed methodologies.
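
The sandwich idea behind such a robust Wald test can be sketched for a log-link Poisson working model. This is a generic illustration of the estimator, not the exact form of the proposed test, and it assumes independent score contributions:

```python
import numpy as np

def poisson_sandwich(X, y, beta):
    """Robust (sandwich) covariance B^{-1} M B^{-1} for a log-link Poisson
    working model: bread B = sum x_i x_i' mu_i, meat M = sum x_i x_i' (y_i - mu_i)^2.
    Stays valid when the variance/covariance structure is misspecified."""
    mu = np.exp(X @ beta)
    B = (X * mu[:, None]).T @ X
    resid = y - mu
    M = (X * (resid ** 2)[:, None]).T @ X
    B_inv = np.linalg.inv(B)
    return B_inv @ M @ B_inv

def robust_wald(beta, V, C):
    """Wald statistic for H0: C beta = 0 using a sandwich covariance V."""
    Cb = C @ beta
    return float(Cb @ np.linalg.solve(C @ V @ C.T, Cb))
```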

6. Multiscale Change-point Segmentation: Beyond Step Functions

Guo, Qinghai 03 February 2017 (has links)
No description available.

7. Testing the Effectiveness of Various Commonly Used Fit Indices for Detecting Misspecifications in Multilevel Structural Equation Models

Hsu, Hsien-Yuan December 2009 (has links)
Two Monte Carlo studies were conducted to investigate the sensitivity of fit indices for detecting model misspecification in multilevel structural equation models (MSEM) with normally distributed or dichotomous outcome variables, separately, under various conditions. Simulation results showed that RMSEA and CFI reflected only within-model fit. In addition, the SRMR for the within-model (SRMR-W) was more sensitive to within-model misspecifications in factor covariances than in pattern coefficients, regardless of the impact of other design factors. Researchers should therefore use SRMR-W in combination with RMSEA and CFI to evaluate the within-model. On the other hand, the SRMR for the between-model (SRMR-B) was less likely to detect between-model misspecifications as the ICC decreased. Lastly, the performance of WRMR was dominated by the misfit of the within-model, and WRMR was less likely to detect misspecified between-models when the ICC was relatively low. Therefore, WRMR can be used to evaluate between-model fit when the within-model is correctly specified and the ICC is not too small.
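
For reference, the two indices found to reflect only within-model fit have simple closed forms; these are the standard textbook definitions, not code from the dissertation:

```python
import numpy as np

def rmsea(chi2_m, df_m, n):
    """Root mean square error of approximation for a model chi-square."""
    return np.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative fit index of the model against the null (baseline) model."""
    d_m = max(chi2_m - df_m, 0.0)
    d_0 = max(chi2_0 - df_0, d_m)
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0
```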

8. Essays in monetary economics and applied econometrics

Giordani, Paolo January 2001 (has links)
This dissertation collects five independent essays. The first essay is "An Alternative Explanation of the Price Puzzle". The most widely accepted explanation of the price puzzle points to an inadequate performance of the VAR in forecasting inflation. This essay suggests that the finding of a price puzzle is due to a seemingly innocent misspecification in taking the theoretical model to the data: a measure of the output gap is not included in the VAR (output alone being used instead), while this variable is a crucial element in every equation of the theoretical models. When the VAR is correctly specified, the price puzzle disappears. Building on results contained in the first paper, the second, "Stronger Evidence of Long-Run Neutrality: A Comment on Bernanke and Mihov", improves the empirical performance of standard models on the prediction that a monetary policy shock should have temporary effects on output. It turns out that the same misspecification causing the price puzzle is also responsible for overestimating the time needed for the effects of a monetary policy shock on output to die out. The point can be proven in a theoretical economy, and is confirmed on US data. "Monetary Policy Without Monetary Aggregates: Some (Surprising) Evidence" (joint with Giovanni Favara) is the third essay. It points to what seems to be a falsified prediction of models in the New Keynesian framework. In this framework monetary aggregates are reserved a pretty boring role, so boring that they can be safely excluded from the final layout of the model. These models predict that a money demand shock should have no effect on output, inflation and the interest rate. However, the prediction seems to be quite wrong. "Inflation Forecast Targeting" (joint with Paul Söderlind) takes a step outside the representative-agent framework. In RE models, all agents typically have the same information set, and therefore make the same predictions. However, in the real world even professional forecasters show substantial disagreement. This disagreement can have an impact on asset prices and transaction volumes, among other things. However, there is no unique way of aggregating forecasts (or forecast probability density functions) into a measure of disagreement. The paper deals with this problem, surveying some proposed methods. The most appropriate measure of disagreement turns out to depend on the intended use, that is, on the model. Moreover, forecasters underestimate uncertainty. "Constitutions and Central-Bank Independence: An Objection to McCallum's Second Fallacy" (joint with Giancarlo Spagnolo) is an excursion into the field of Political Economy. The essay provides some foundations for the assumption that renegotiating a delegation contract can be costly, by illustrating how political institutions can generate inertia in re-contracting, reduce the gains from it or prevent it altogether. Once the nature of renegotiation costs has been clarified, it is easier to see why certain institutions can mitigate or solve dynamic inconsistencies better than others. / Diss. Stockholm : Handelshögsk., 2001

9. Model Misspecification and the Hedging of Exotic Options

Balshaw, Lloyd Stanley 30 August 2018 (has links)
Asset pricing models are well established and have been used extensively by practitioners, both for pricing options and for hedging them. Though Black-Scholes is the original and best-known asset pricing model, alternative asset pricing models that incorporate additional features have since been developed. We present three asset pricing models here: the Black-Scholes model, the Heston model and the Merton (1976) model. For each asset pricing model we test the hedge effectiveness of delta hedging, minimum variance hedging and static hedging, where appropriate. The options hedged under the aforementioned techniques and asset pricing models are down-and-out call options, lookback options and cliquet options. The hedges are performed over three strikes, representing at-the-money, out-of-the-money and in-the-money options. Stock prices are simulated under the stochastic-volatility double jump diffusion (SVJJ) model, which incorporates stochastic volatility as well as jumps in the stock and volatility processes. Simulation is performed under two 'Worlds': World 1 is set under normal market conditions, whereas World 2 represents stressed market conditions. Each asset pricing model is calibrated to observed option prices via a least squares optimisation routine. We find that no asset pricing model consistently provides a better hedge in World 1. In World 2, however, the Heston model marginally outperforms the Black-Scholes model overall. This can be explained by the higher volatility in World 2, which the Heston model can describe more accurately given its stochastic volatility component. Calibration difficulties are experienced with the Merton model; these difficulties lead to larger errors under minimum variance hedging, and alternative calibration techniques should be considered by future users of the optimiser.
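
As a baseline for the hedging experiments described, a plain Black-Scholes delta hedge of a vanilla call can be written compactly. This sketch hedges a European call rather than the exotic options studied in the dissertation, and all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price and delta of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2), norm.cdf(d1)

def delta_hedge_pnl(path, K, T, r, sigma):
    """Terminal hedging error from selling a call and delta hedging along
    one price path of len(path) - 1 equal time steps."""
    steps = len(path) - 1
    dt = T / steps
    price0, delta = bs_call(path[0], K, T, r, sigma)
    cash = price0 - delta * path[0]          # premium received minus stock bought
    for i in range(1, steps):
        cash *= np.exp(r * dt)               # accrue interest on the cash account
        _, new_delta = bs_call(path[i], K, T - i * dt, r, sigma)
        cash -= (new_delta - delta) * path[i]
        delta = new_delta
    cash *= np.exp(r * dt)
    payoff = max(path[-1] - K, 0.0)
    return cash + delta * path[-1] - payoff

# Illustrative run on one simulated lognormal path:
rng = np.random.default_rng(7)
steps, T, r, sigma = 252, 1.0, 0.02, 0.20
dt = T / steps
z = rng.normal(size=steps)
log_increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
path = 100.0 * np.exp(np.concatenate([[0.0], np.cumsum(log_increments)]))
pnl = delta_hedge_pnl(path, K=100.0, T=T, r=r, sigma=sigma)
```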

10. Model robust regression: combining parametric, nonparametric, and semiparametric methods

Mays, James Edward January 1995 (has links)
In obtaining a regression fit to a set of data, ordinary least squares regression depends directly on the parametric model formulated by the researcher. If this model is incorrect, a least squares analysis may be misleading. Alternatively, nonparametric regression (kernel or local polynomial regression, for example) has no dependence on an underlying parametric model, but instead depends entirely on the distances between regressor coordinates and the prediction point of interest. This procedure avoids the necessity of a reliable model but, in using no information from the researcher, may fit irregular patterns in the data. The proper combination of these two regression procedures can overcome their respective problems. Considered is the situation where the researcher has an idea of which model should explain the behavior of the data, but this model is not adequate throughout the entire range of the data. An extension of partial linear regression and two methods of model robust regression are developed and compared in this context. These methods involve parametric fits to the data and nonparametric fits to either the data or the residuals. The two fits are then combined in the most efficient proportions via a mixing parameter. Performance is based on bias and variance considerations. / Ph. D.
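
A thumbnail of the residual-fitting variant described above (compare the data-fitting mix sketched under entry 2): fit the researcher's parametric model, smooth its residuals nonparametrically, and add back a portion of the smoothed residuals. The kernel, bandwidth, and mixing value are assumptions of the sketch:

```python
import numpy as np

def kernel_smooth(x_train, values, x_eval, bandwidth=0.4):
    """Gaussian-kernel local-constant smoother."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0, 3, 120))
y = 0.5 * x + np.sin(2 * x) + rng.normal(0, 0.2, 120)

beta = np.polyfit(x, y, deg=1)                 # researcher's (inadequate) model
y_par = np.polyval(beta, x)
resid_smooth = kernel_smooth(x, y - y_par, x)  # nonparametric fit to residuals

lam = 0.7                                      # mixing parameter, chosen by CV
y_combined = y_par + lam * resid_smooth
```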
