1.
Rates and dates: Evaluating rhythmicity and cyclicity in sedimentary and biomineral records. Dexter, Troy Anthony, 05 June 2011.
It is important to evaluate periodic fluctuations in environment or climate recorded through time to better understand the nature of Earth's history as well as to develop ideas about what the future may hold. Numerous proxies exist by which these environmental patterns can be demonstrated and analyzed across time scales, from sequence-stratigraphic bundles of transgressive-regressive cycles that record eustatic changes in global sea level, to the geochemical composition of a skeleton that records fluctuations in ocean temperature through the life of the biomineralizing organism. This study examines some of the methods by which we can analyze environmental fluctuations recorded at different time scales. The first project examines the methods by which extrabasinal orbital forcing (i.e., Milankovitch cycles) can be tested in the rock record. In order to distinguish these patterns, computer-generated carbonate rock records were simulated and the resulting outcrops tested using common methods. These simulations were built upon eustatic sea-level fluctuations with periods similar to those demonstrated in the rock record, while maintaining the many factors that affect the resulting rock composition, such as tectonics, subsidence, and erosion. The results demonstrated that substantially large sea-level fluctuations, such as those that occur when the planet is in an icehouse condition, are necessary to produce recognizable and preservable patterns; otherwise the patterns are overwhelmed by other depositional factors. The second project examines the temporal distribution of the bivalve Semele casali from Ubatuba Bay, Brazil, using amino acid racemization (AAR) calibrated with ¹⁴C radiometric dates. This data set is one of the largest ever compiled and demonstrates that surficial shell assemblages in the area have very long residence times, extending back 10,000 years. The area has had very little change in sea level, and the AAR ratios, which are highly temperature dependent, could be calibrated across sites varying from 10 to 53 meters in water depth. Long time scales of dated shells provide an opportunity to study climate fluctuations such as the El Niño Southern Oscillation. The third project describes a newly developed method for estimating growth rates in organisms using closely related species from similar environments, with estimation error assessed using a jackknife-corrected parametric bootstrap. As geochemical analyses become more precise while using less material, data can be collected through the skeleton of a biomineralizing organism, revealing information about environmental shifts at scales shorter than a year. For such studies, the rate of growth of an organism has substantial effects on the interpretation of results, and such rates are difficult to ascertain, particularly in fossilized specimens. This method removes the need for direct measures of growth rates, and even the most conservative estimates of growth rate are useful in constraining the age ranges of geochemical intra-skeletal studies, thus elucidating the likely time period under analysis. This study assesses the methods by which periodic environmental fluctuations at greatly varying time scales can be used to evaluate our understanding of Earth processes using rigorous quantitative strategies. / Ph. D.
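As a rough sketch of the jackknife-corrected parametric bootstrap idea (not the thesis's actual procedure), the Python fragment below estimates a mean growth rate from hypothetical measurements on related species; the lognormal model, the data values, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical growth-rate measurements (mm/yr) pooled from closely
# related species in similar environments -- illustrative values only.
rates = np.array([2.1, 2.6, 1.8, 3.0, 2.4, 2.9, 2.2, 2.7])

def fit_lognormal(x):
    """Fit a lognormal by moments of log(x); returns (mu, sigma)."""
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=1)

def mean_rate(mu, sigma):
    """Mean of a lognormal(mu, sigma) distribution."""
    return np.exp(mu + 0.5 * sigma**2)

# Jackknife bias correction of the plug-in estimate.
n = len(rates)
theta_hat = mean_rate(*fit_lognormal(rates))
loo = np.array([mean_rate(*fit_lognormal(np.delete(rates, i)))
                for i in range(n)])
theta_jack = n * theta_hat - (n - 1) * loo.mean()

# Parametric bootstrap: simulate from the fitted model and re-estimate.
mu, sigma = fit_lognormal(rates)
boot = np.array([mean_rate(*fit_lognormal(rng.lognormal(mu, sigma, n)))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {theta_jack:.2f} mm/yr, 95% interval ({lo:.2f}, {hi:.2f})")
```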
2.
Goodness-of-Fit Test Issues in Generalized Linear Mixed Models. Chen, Nai-Wei, December 2011.
Linear mixed models and generalized linear mixed models are random-effects models widely applied to analyze clustered or hierarchical data. Random effects are typically assumed to be normally distributed in the context of mixed models. However, in the mixed-effects logistic model, violation of the normality assumption for the random effects may result in inconsistent estimates of some fixed effects and of the variance component of the random effects when the variance of the random-effects distribution is large. On the other hand, summary statistics used for assessing goodness of fit in ordinary logistic regression models may not be directly applicable to mixed-effects logistic models. In this dissertation, we present our investigations of two independent studies related to goodness-of-fit tests in generalized linear mixed models.
First, we consider a semi-nonparametric density representation for the random effects distribution and provide a formal statistical test for testing normality of the random-effects distribution in the mixed-effects logistic models. We obtain estimates of parameters by using a non-likelihood-based estimation procedure. Additionally, we not only evaluate the type I error rate of the proposed test statistic through asymptotic results, but also carry out a bootstrap hypothesis testing procedure to control the inflation of the type I error rate and to study the power performance of the proposed test statistic. Further, the methodology is illustrated by revisiting a case study in mental health.
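The bootstrap testing step described above follows the usual parametric bootstrap recipe: fit the null model, simulate from it, and recompute the statistic. A minimal generic sketch is given below; fit_null, simulate_null, and test_stat are hypothetical placeholders, not the dissertation's semi-nonparametric estimation procedure.

```python
import numpy as np

def bootstrap_pvalue(data, fit_null, simulate_null, test_stat, B=500, seed=0):
    """Parametric bootstrap p-value: fit the null model, simulate B data
    sets from it, and recompute the test statistic on each replicate."""
    rng = np.random.default_rng(seed)
    theta0 = fit_null(data)          # null-model parameter estimates
    t_obs = test_stat(data)          # statistic on the observed data
    t_boot = np.array([test_stat(simulate_null(theta0, rng))
                       for _ in range(B)])
    # Proportion of bootstrap statistics at least as extreme as observed.
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)
```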
Second, to improve assessment of model fit in mixed-effects logistic models, we incorporate nonparametric local-polynomial smoothed residuals over within-cluster continuous covariates into the unweighted sum of squares statistic for assessing the goodness of fit of logistic multilevel models. We perform a simulation study to evaluate the type I error rate and the power for detecting a missing quadratic or interaction term of fixed effects using the kernel-smoothed unweighted sum of squares statistic based on local-polynomial smoothed residuals over x-space. We also use a real data set from clinical trials to illustrate this application.
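As a toy illustration of the smoothed-residual construction, the sketch below uses a local-constant (Nadaraya-Watson) Gaussian-kernel smoother in place of the dissertation's local polynomial; the bandwidth and names are assumptions.

```python
import numpy as np

def smoothed_uss(x, y, p_hat, bandwidth=0.5):
    """Kernel-smoothed unweighted sum-of-squares statistic (toy version).
    x: continuous covariate; y: binary responses; p_hat: fitted
    probabilities from the mixed-effects logistic model."""
    r = y - p_hat                                # raw residuals
    d = (x[:, None] - x[None, :]) / bandwidth    # pairwise scaled distances
    w = np.exp(-0.5 * d**2)                      # Gaussian kernel weights
    r_smooth = (w @ r) / w.sum(axis=1)           # local-constant smoothing
    return np.sum(r_smooth**2)
```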
3.
Tests de type fonction caractéristique en inférence de copules / Characteristic Function-Type Tests in Copula Inference. Bahraoui, Tarik, January 2017.
A general class of rank statistics based on the characteristic function is introduced to test the composite hypothesis of membership in a family of multivariate copulas. These goodness-of-fit statistics are defined as weighted L_2-type functional distances between a nonparametric and a semiparametric version of the characteristic function that can be associated with a copula. It is shown that these test statistics behave asymptotically like degenerate V-statistics of order four and that their limiting distributions can be expressed as weighted sums of independent chi-square variables. Consistency of the tests under general alternatives is established, as is the validity of the parametric bootstrap for computing critical values. The behavior of the new tests for small and moderate sample sizes is studied through simulations and compared with that of a competing test based on the empirical copula. The methodology is finally illustrated on a multidimensional data set.
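These rank statistics, like the competing empirical-copula test, operate on rank-based pseudo-observations. Below is a small sketch of those ingredients only, not of the weighted L_2 characteristic-function statistic itself, which is considerably more involved.

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_observations(x):
    """Rank-based pseudo-observations U_ij = R_ij / (n + 1) from an
    (n x d) data matrix; the usual input to rank-based copula tests."""
    return np.apply_along_axis(rankdata, 0, x) / (x.shape[0] + 1)

def empirical_copula(u, t):
    """Empirical copula C_n(t): the fraction of pseudo-observations
    that are componentwise <= the point t."""
    return np.mean(np.all(u <= t, axis=1))
```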
4.
An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design. Wang, Chunxin, 01 July 2011.
The purpose of this study was to investigate the performance of the parametric bootstrap method and to compare the parametric and nonparametric bootstrap methods for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions.
When the performance of the parametric bootstrap method was investigated, bivariate polynomial log-linear models were employed to fit the data. Considering different polynomial degrees and two different numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real data sets were used as the basis to define the population distributions and the "true" SEEs. A simulation study was conducted reflecting three levels of group proficiency differences, three sample sizes, two test lengths, and two ratios of the number of common items to the total number of items. Bias of the SEE, standard errors of the SEE, root mean square errors of the SEE, and their corresponding weighted indices were calculated and used to evaluate and compare the simulation results.
The main findings from this simulation study were as follows: (1) The parametric bootstrap models with larger polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) The parametric bootstrap models with a cross-product moment (CPM) order of two generally yielded more accurate estimates of the SEE than the corresponding models with a CPM order of one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method; however, as the sample size increased, the differences between the two bootstrap methods became smaller, and when the sample size was 3,000 or larger, the differences between the nonparametric bootstrap method and the parametric bootstrap model that produced the smallest RMSE were very small. (4) Of all the models considered in this study, the parametric bootstrap models with a polynomial degree of four performed best under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE. Group proficiency differences and the ratio of the number of common items to the total number of items had little effect for a short test, but a slight effect for a long test.
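A minimal sketch of the nonparametric side of this comparison, at a single score point: the `equate` callable is a hypothetical placeholder for the FE equipercentile equating function, and the simple two-group resampling shown here omits the log-linear smoothing step that defines the parametric version.

```python
import numpy as np

def bootstrap_see(x_scores, y_scores, equate, score_point, B=1000, seed=0):
    """Nonparametric bootstrap SEE at a single score point. `equate` is a
    hypothetical callable implementing the equating method (e.g., FE
    equipercentile); only the resampling machinery is shown here."""
    rng = np.random.default_rng(seed)
    eq = np.empty(B)
    for b in range(B):
        xb = rng.choice(x_scores, size=len(x_scores), replace=True)
        yb = rng.choice(y_scores, size=len(y_scores), replace=True)
        eq[b] = equate(xb, yb, score_point)
    return eq.std(ddof=1)  # bootstrap standard error of equating
```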
5.
Inference for the intrinsic separation among distributions which may differ in location and scale. Ling, Yan.
Doctor of Philosophy / Department of Statistics / Paul I. Nelson / The null hypothesis of equal distributions, H0: F1 = F2 = ... = FK, is commonly used to compare two or more treatments based on data consisting of independent random samples. Using this approach, evidence of a difference among the treatments may be reported even though from a practical standpoint their effects are indistinguishable, a longstanding problem in hypothesis testing. The concept of effect size is widely used in the social sciences to deal with this issue by computing a unit-free estimate of the magnitude of the departure from H0 in terms of a change in location. I extend this approach by replacing H0 with hypotheses H0* stating that the distributions {Fi} are possibly different in location and/or scale, but close, so that rejection provides evidence that at least one treatment has an important practical effect. Assessing statistical significance under H0* is difficult and typically requires inference in the presence of nuisance parameters. I use frequentist, Bayesian, and fiducial modes of inference to obtain approximate tests and carry out simulation studies of their behavior in terms of size and power. In some cases a bootstrap is employed. I focus on tests based on independent random samples arising from K ≥ 3 normal distributions not required to have the same variances, generalizing the K = 2 sample parameter P(X1 > X2) and the noncentrality-type parameters that arise in testing for the equality of means.
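For independent normal samples, the K = 2 parameter mentioned above has the closed form P(X1 > X2) = Φ((μ1 − μ2)/√(σ1² + σ2²)), which suggests the plug-in estimate sketched below (an illustration of the parameter, not the dissertation's tests).

```python
import numpy as np
from scipy.stats import norm

def prob_x1_gt_x2(x1, x2):
    """Plug-in estimate of P(X1 > X2) for independent normal samples:
    P(X1 > X2) = Phi((mu1 - mu2) / sqrt(sigma1^2 + sigma2^2))."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var(ddof=1), x2.var(ddof=1)
    return norm.cdf((m1 - m2) / np.sqrt(v1 + v2))
```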
6.
Comparing measures of fit for circular distributions. Sun, Zheng, 04 May 2010.
This thesis shows how to test the fit of a data set to a number of different models, using Watson's U² statistic for both grouped and continuous data. While Watson's U² statistic was introduced for continuous data, recent work has adapted it for grouped data. However, when using Watson's U² for continuous data, the asymptotic distribution is difficult to obtain, particularly for some skewed circular distributions that contain four or five parameters. Until now, among circular distributions, U² asymptotic points have been worked out only for the uniform and von Mises distributions. We give U² asymptotic points for the wrapped exponential distributions, and we show that, when data are grouped, U² asymptotic points are usually easier to obtain for other, more advanced circular distributions.
In practice, all continuous data are grouped into cells whose width is decided by the accuracy of the measurement. It proves useful to treat such data as grouped, with a sufficient number of cells, in the examples analyzed. When the data are treated as grouped, asymptotic points for U² match well with the points obtained when the data are treated as continuous. Asymptotic theory for U² adapted to grouped data is given in the thesis. Monte Carlo studies show that, for reasonable sample sizes, the asymptotic points give good approximations to the p-values of the test.
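A sketch of Watson's U² with a Monte Carlo p-value for a von Mises null with known parameters; in the composite-hypothesis case studied in the thesis the parameters would have to be re-estimated on each simulated sample, which this simplified version omits.

```python
import numpy as np
from scipy.stats import vonmises

def watson_u2(theta, kappa, mu=0.0):
    """Watson's U^2 for a sample of angles against a von Mises(mu, kappa)
    null with fixed (known) parameters."""
    n = len(theta)
    u = np.sort(vonmises.cdf(theta, kappa, loc=mu))  # probability transform
    i = np.arange(1, n + 1)
    w2 = np.sum((u - (2 * i - 1) / (2 * n)) ** 2) + 1 / (12 * n)
    return w2 - n * (u.mean() - 0.5) ** 2

def mc_pvalue(theta, kappa, mu=0.0, B=2000, seed=0):
    """Monte Carlo p-value: simulate from the null and recompute U^2."""
    rng = np.random.default_rng(seed)
    t_obs = watson_u2(theta, kappa, mu)
    sims = vonmises.rvs(kappa, loc=mu, size=(B, len(theta)), random_state=rng)
    t_sim = np.array([watson_u2(s, kappa, mu) for s in sims])
    return (1 + np.sum(t_sim >= t_obs)) / (B + 1)
```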
7.
Úplně nejmenší čtverce a jejich asymptotické vlastnosti / Total Least Squares and Their Asymptotic Properties. Chuchel, Karel, January 2020.
This thesis deals with the total least squares method, which is used to estimate parameters in linear models. The thesis gives a basic description of the method and its asymptotic properties. It explains how the nonparametric bootstrap can be used within this framework to obtain estimates. The properties of the bootstrap estimates are then studied by simulation on pseudo-randomly generated data. Simulations are carried out for a two-dimensional parameter under various settings of the underlying model. The individual bootstrap estimates are ordered in the plane using the Mahalanobis and Tukey statistical depth functions. The simulations confirm that the bootstrap estimate gives results good enough to be used in real situations.
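A compact sketch of the two building blocks, assuming the classical SVD formulation of total least squares and simple case resampling; the depth-function ordering of the bootstrap estimates is not shown.

```python
import numpy as np

def tls(X, y):
    """Total least squares via SVD: beta comes from the right singular
    vector of the augmented matrix [X y] for the smallest singular value."""
    C = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                     # vector for the smallest singular value
    return -v[:-1] / v[-1]

def bootstrap_tls(X, y, B=1000, seed=0):
    """Nonparametric bootstrap of the TLS estimator: resample rows (cases)
    and refit; returns the B bootstrap parameter estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    return np.array([tls(X[idx], y[idx])
                     for idx in rng.integers(0, n, size=(B, n))])
```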
8.
Actuarial applications of multivariate phase-type distributions: model calibration and credibility. Hassan Zadeh, Amin, January 2009.
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
9.
Etudes sur le cycle économique. Une approche par les modèles à changements de régime / Studies in Business Cycles Using Markov-switching Models. Rabah-Romdhane, Zohra, 12 December 2013.
The severity of the Great Recession has renewed interest in the analysis of business cycles, and this thesis pertains to that revival of attention for the study of cyclical fluctuations. After reviewing regime-switching models in Chapter 1, the following chapter suggests a chronology of the classical business cycle in the French economy for the 1970-2009 period.
To that end, three dating methodologies are used: the rule of thumb of two consecutive quarters of negative growth, the non-parametric approach of Bry and Boschan (1971), and the Markov-switching approach of Hamilton (1989). The results show that omitted structural breaks may hinder the Markov-switching approach from capturing business-cycle fluctuations. However, when such breaks are allowed for, the timing of the French recessions provided by the Markov-switching model closely matches that derived by the rule-based approaches. Chapter 3 performs a nonlinearity analysis in Markov-switching modelling using a set of non-standard tests. Monte Carlo analysis reveals that a test recently proposed by Carrasco, Hu, and Ploberger (2013) for Markov switching has low power for empirically relevant data-generating processes when allowing for serial correlation under the null. By contrast, a parametric bootstrap likelihood ratio (LR) test of Markov switching has higher power in the same setting, providing stronger support for nonlinearity in quarterly French and U.S. real GDP. When testing for Markov switching in the mean or intercept of an autoregressive process, it is important to allow for serial correlation under the null hypothesis of linearity. Otherwise, a rejection of linearity could merely reflect misspecification of the persistence properties of the data, rather than any inherent nonlinearity. Chapter 4 examines whether controlling for structural breaks improves the forecasting performance of Markov-switching models relative to their linear counterparts. The approach considered to answer this question is to combine forecasts across different estimation windows. Applying this approach shows that including data from periods preceding structural breaks, particularly the "Great Moderation", improves upon forecasts based on data drawn exclusively from these episodes. Accordingly, Markov-switching models forecast the probability of events such as the Great Recession more accurately than their linear counterparts. The general conclusions summarize the main results of the thesis and suggest several directions for future research.
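A minimal sketch of the parametric bootstrap LR test of linearity against Markov switching, assuming statsmodels' AutoReg and MarkovAutoregression as stand-ins for the thesis's exact specifications; simulating under the fitted AR(1) null is what allows for serial correlation under linearity.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.regime_switching.markov_autoregression import (
    MarkovAutoregression)

def lr_stat(y):
    """LR statistic: 2-regime switching-intercept AR(1) vs. linear AR(1)."""
    null = AutoReg(y, lags=1).fit()
    alt = MarkovAutoregression(y, k_regimes=2, order=1,
                               switching_ar=False).fit()
    return 2 * (alt.llf - null.llf)

def bootstrap_lr_test(y, B=199, seed=0):
    """Parametric bootstrap LR test of linearity: simulate AR(1) data from
    the fitted null model and rebuild the LR reference distribution."""
    rng = np.random.default_rng(seed)
    fit0 = AutoReg(y, lags=1).fit()
    c, phi = fit0.params              # intercept and AR(1) coefficient
    sigma = np.sqrt(fit0.sigma2)
    lr_obs = lr_stat(y)
    lr_boot = np.empty(B)
    for b in range(B):
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):    # simulate under the linear null
            yb[t] = c + phi * yb[t - 1] + rng.normal(0.0, sigma)
        lr_boot[b] = lr_stat(yb)
    return (1 + np.sum(lr_boot >= lr_obs)) / (B + 1)
```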