141 |
Corrected LM goodness-of-fit tests with application to stock returns. Percy, Edward Richard, January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Includes bibliographical references (p. 263-266).
|
142 |
Splitting frames based on hypothesis testing for patient motion compensation in SPECT. Ma, Linna. January 2006 (has links)
Thesis (M.S.) -- Worcester Polytechnic Institute. / Keywords: Hypothesis testing; motion compensation; SPECT. Includes bibliographical references (leaves 30-31).
|
143 |
Rank-sum test for two-sample location problem under order restricted randomized design. Sun, Yiping. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 121-124).
|
144 |
Novel applications for hierarchical natural move Monte Carlo simulations : from proteins to nucleic acids. Demharter, Samuel, January 2016 (has links)
Biological molecules often undergo large structural changes to perform their function. Computational methods can provide a fine-grained description at the atomistic scale. Without sufficient approximations to accelerate the simulations, however, the time-scale on which functional motions often occur is out of reach for many traditional methods. Natural Move Monte Carlo belongs to a class of methods that were introduced to bridge this gap. I present three novel applications for Natural Move Monte Carlo, two on proteins and one on DNA epigenetics. In the second part of this thesis I introduce a new protocol for the testing of hypotheses regarding the functional motions of biological systems, named customised Natural Move Monte Carlo. Two different case studies are presented aimed at demonstrating the feasibility of customised Natural Move Monte Carlo.
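As a rough intuition for the collective "natural moves" referred to above, the toy Metropolis sketch below displaces a whole segment of a chain in one move rather than perturbing one particle at a time; the harmonic-chain energy, the parameters, and the Python implementation are illustrative assumptions, not the simulation code used in the thesis.

```python
# Toy sketch (not the thesis code): Metropolis Monte Carlo where each move
# shifts a whole contiguous segment of a 1D chain, mimicking the idea of
# collective "natural moves". Energy model and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n = 30
x = np.zeros(n)                                   # 1D positions of a toy chain

def energy(pos):
    # harmonic bonds between neighbours plus a weak restraint to the origin
    return 0.5 * np.sum(np.diff(pos) ** 2) + 0.05 * np.sum(pos ** 2)

def metropolis_step(pos, beta=1.0):
    new = pos.copy()
    i, j = sorted(rng.integers(0, n, size=2))
    new[i:j + 1] += rng.normal(scale=0.3)         # rigid shift of a whole segment
    dE = energy(new) - energy(pos)
    # standard Metropolis acceptance rule
    return new if rng.random() < np.exp(-beta * dE) else pos

for _ in range(5000):
    x = metropolis_step(x)
print("mean square displacement:", round(np.mean(x ** 2), 3))
```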
|
145 |
Essays on regime switching and DSGE models with applications to U.S. business cycle. Zhuo, Fan, 09 November 2016 (has links)
This dissertation studies various issues related to regime switching and DSGE models. The methods developed are used to study U.S. business cycles.
Chapter one derives the limit distributions of likelihood-ratio-based tests for Markov regime switching in multiple parameters in the context of a general class of nonlinear models. The analysis simultaneously addresses three difficulties: (1) some nuisance parameters are unidentified under the null hypothesis, (2) the null hypothesis yields a local optimum, and (3) the conditional regime probabilities follow stochastic processes that can only be represented recursively. When applied to U.S. quarterly real GDP growth rates, the tests suggest strong evidence favoring the regime-switching specification over a range of sample periods.
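For readers unfamiliar with such tests, the sketch below computes a likelihood-ratio statistic for the simplest two-regime switching-mean Gaussian model using the Hamilton filter; the model, the simulated data, and the Python code are illustrative assumptions rather than the chapter's specification, and, as the chapter stresses, the null distribution of this statistic is nonstandard, so the sketch stops at computing the statistic itself.

```python
# Illustrative sketch (not the author's model or code): likelihood-ratio
# statistic for a two-regime Markov-switching mean in Gaussian data,
# computed with the Hamilton filter. All parameter values are assumptions.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def hamilton_loglik(params, y):
    """Log-likelihood of a 2-regime switching-mean model via the Hamilton filter."""
    mu1, mu2, sigma, p11, p22 = params
    P = np.array([[p11, 1 - p11], [1 - p22, p22]])        # transition matrix
    # stationary distribution as initial regime probabilities
    pred = np.array([1 - p22, 1 - p11]) / (2 - p11 - p22)
    ll = 0.0
    for yt in y:
        dens = norm.pdf(yt, loc=[mu1, mu2], scale=sigma)   # regime densities
        joint = pred * dens
        lik_t = joint.sum()
        ll += np.log(lik_t)
        filt = joint / lik_t                               # filtered probabilities
        pred = filt @ P                                    # one-step-ahead prediction
    return ll

def lr_statistic(y):
    # Null: one regime (plain Gaussian). Alternative: two regimes.
    ll0 = norm.logpdf(y, loc=y.mean(), scale=y.std(ddof=0)).sum()
    x0 = [y.mean() - y.std(), y.mean() + y.std(), y.std(), 0.9, 0.9]
    bounds = [(None, None), (None, None), (1e-4, None), (0.01, 0.99), (0.01, 0.99)]
    res = minimize(lambda p: -hamilton_loglik(p, y), x0, bounds=bounds, method="L-BFGS-B")
    return 2 * (-res.fun - ll0)

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.8, 1, 120), rng.normal(-0.5, 1, 80)])
print("LR statistic:", round(lr_statistic(y), 2))
```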
Chapter two develops a modified likelihood ratio (MLR) test to detect regime switching in state space models. I apply the filtering algorithm introduced in Gordon and Smith (1988) to construct a modified likelihood function under the alternative hypothesis of two regimes, and I extend the analysis in Chapter one to establish the asymptotic distribution of the MLR statistic under the null hypothesis of a single regime. I also apply the test to a simple model of the U.S. unemployment rate. This contribution is the first to develop a test based on the likelihood ratio principle to detect regime switching in state space models.
The final chapter estimates a search and matching model of the aggregate labor market with sticky prices and staggered wage negotiation. It starts with a partial equilibrium search and matching model and expands it into a general equilibrium model with sticky prices and staggered wages. I study the quantitative implications of the model. The results show that (1) price stickiness and the staggered wage structure are quantitatively important for the search and matching model of the aggregate labor market; (2) relatively high outside option payments to the workers, such as unemployment insurance payments, are needed to match the data; and (3) workers have lower bargaining power relative to firms, which contrasts with the assumption in the literature that workers and firms share equally the surplus generated from their employment relationship.
|
146 |
Testes de hipoteses para dados funcionais baseados em distancias : um estudo usando splines / Distances approach to test hypothesis for functional data. Souza, Camila Pedroso Estevam de, 25 April 2008 (has links)
Advisor: Ronaldo Dias / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica.
Abstract: Advances in modern technology have facilitated the collection and analysis of high-dimensional data, or data that are repeated measurements of the same subject. When the data are recorded densely over time, often by machine, they are typically termed functional or curve data, with one observed curve (or function) per subject. The statistical analysis of a sample of n such curves is commonly termed functional data analysis, or FDA. Conceptually, functional data are continuously defined. Of course, in practice they are usually observed at discrete points. There is no general requirement that the data be smooth, but often smoothness or other regularity will be a key aspect of the analysis; in some cases derivatives of the observed functions will be important. In this dissertation different smoothing techniques are presented and discussed, mainly those based on spline functions... Note: The complete abstract is available in the full electronic digital thesis. / Master's / Nonparametric Statistics / Master in Statistics
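As a minimal illustration of the spline-based smoothing that underlies the techniques discussed in the dissertation (the data, smoothing parameter, and Python code below are assumptions for demonstration only):

```python
# Minimal sketch: smoothing discretely observed noisy curves with spline
# functions, a basic building block of functional data analysis (FDA).
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)                      # common observation grid
curves = [np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size) for _ in range(5)]

# Fit a smoothing spline to each observed curve; s controls the roughness penalty.
smoothed = [splev(t, splrep(t, y, s=0.5)) for y in curves]

# Pointwise mean of the smoothed sample of curves, a typical first FDA summary.
mean_curve = np.mean(smoothed, axis=0)
print(mean_curve[:5])
```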
|
147 |
Monotonicidade em testes de hipóteses / Monotonicity in hypothesis tests. Gustavo Miranda da Silva, 09 March 2010 (has links)
Most of the texts in the literature on hypothesis testing deal with optimality criteria for a single decision problem. However, there are, to a lesser extent, texts on the problem of simultaneous hypothesis testing and the logical consistency of the optimal solutions of such procedures. For instance, the following property should be observed in simultaneous hypothesis testing: if a hypothesis H implies a hypothesis H0, then, on the basis of the same sample observation, the rejection of the hypothesis H0 should necessarily imply the rejection of the hypothesis H. Here, this property is called monotonicity. To investigate this property from a more general point of view, this work first defines the notion of a class of hypothesis tests, which extends the test function to a sigma-field of possible null hypotheses, and then introduces the concept of monotonicity properly. It is also shown, through some simple examples, that for a fixed significance level, the class of Generalized Likelihood Ratio tests (GLR) does not meet monotonicity, as opposed to tests developed under the Bayesian perspective, such as Bayes tests based on posterior probabilities, Lindley's test and the Full Bayesian Significance Test (FBST). Finally, sufficient conditions for a class of hypothesis tests to have monotonicity are determined, when possible, under a decision-theoretic approach.
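A toy construction of our own (not taken from the dissertation) showing why tests based on posterior probabilities satisfy this monotonicity property: if a null hypothesis A is contained in a null hypothesis B, its posterior probability cannot exceed that of B, so rejecting B forces rejecting A on the same sample.

```python
# Toy illustration of monotonicity for posterior-probability tests:
# A subset of B implies P(A | x) <= P(B | x), so rejecting B (small posterior)
# necessarily entails rejecting A. Parameter space and posterior are assumed.
import numpy as np

theta = np.arange(5)                                     # discrete parameter space {0,...,4}
posterior = np.array([0.05, 0.10, 0.15, 0.30, 0.40])     # assumed posterior given the data x

def reject(null_set, alpha=0.5):
    """Bayes test: reject the null when its posterior probability falls below alpha."""
    return posterior[np.isin(theta, list(null_set))].sum() < alpha

A = {0, 1}            # narrower null hypothesis
B = {0, 1, 2}         # wider null hypothesis, A is a subset of B
print(reject(B), reject(A))   # whenever B is rejected, A must be rejected as well
```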
|
148 |
The application of frequency domain methods to two statistical problems. Potgieter, Gert Diedericks Johannes, 10 September 2012 (has links)
D.Phil. / We propose solutions to two statistical problems using the frequency domain approach to time series analysis. In both problems the data at hand can be described by the well-known signal plus noise model. The first problem addressed is the estimation of the underlying variance of a process for use in a Shewhart or CUSUM control chart when the mean of the process may be changing. We propose an estimator for the underlying variance based on the periodogram of the observed data. Such estimators have properties which make them superior to some estimators currently used in Statistical Quality Control. We also present a CUSUM chart for monitoring the variance which is based upon the periodogram-based estimator for the variance. The second problem, stimulated by a specific problem in Variable Star Astronomy, is to test whether or not the mean of a bivariate time series is constant over the span of observations. We consider two periodogram-based tests for constancy of the mean, derive their asymptotic distributions under the null hypothesis and under local alternatives, and show how consistent estimators for the unknown parameters in the proposed model can be found.
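One common way a periodogram-based variance estimator of this kind can be sketched (our reading of the general idea; the estimator actually proposed in the thesis may differ) is to average periodogram ordinates at higher frequencies, where a slowly changing mean contributes little, so the estimate is not inflated by the drift.

```python
# Rough sketch (details assumed): a slowly drifting mean is concentrated at low
# frequencies, so averaging periodogram ordinates at higher frequencies
# estimates the noise variance of the signal-plus-noise model.
import numpy as np

def periodogram_variance(x, low_freq_cut=0.1):
    n = len(x)
    freqs = np.fft.rfftfreq(n)                    # frequencies in cycles per observation
    pgram = np.abs(np.fft.rfft(x)) ** 2 / n       # periodogram ordinates
    keep = freqs > low_freq_cut                   # discard low frequencies carrying the drift
    # each retained ordinate has expectation sigma^2 under white noise
    return pgram[keep].mean()

rng = np.random.default_rng(2)
drift = np.linspace(0, 5, 500)                    # slowly changing mean
x = drift + rng.normal(scale=2.0, size=500)       # true noise variance = 4
print(round(periodogram_variance(x), 2))
```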
|
149 |
More accurate two sample comparisons for skewed populations. Tong, Bo, January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Haiyan Wang / Various tests have been created to compare the means of two populations in many scenarios and applications. The two-sample t-test, Wilcoxon Rank-Sum test and bootstrap-t test are commonly used methods. However, methods for skewed two-sample data sets are not well studied. In this dissertation, several existing two-sample tests were evaluated and four new tests were proposed to improve the test accuracy under moderate sample size and high population skewness.
The proposed work starts with the derivation of a first-order Edgeworth expansion for the test statistic of the two-sample t-test. Using this result, new two-sample tests based on the Cornish-Fisher expansion (TCF tests) were created for both cases of common variance and unequal variances. These tests can account for population skewness and give more accurate test results. We also developed three new tests based on three transformations (T_i test, i = 1, 2, 3) for the pooled case, which can be used to eliminate the skewness of the studentized statistic.
In this dissertation, some theoretical properties of the newly proposed tests are presented. In particular, we derived the order of type I error rate accuracy of the pooled two-sample t-test based on normal approximation (TN test) and of the TCF and T_i tests. We proved that these tests give the same theoretical type I error rate under skewness. In addition, we derived the power function of the TCF and TN tests as a function of the population parameters. We also provided the detailed conditions under which the theoretical power of the two-sample TCF test is higher than that of the two-sample TN test. Results from extensive simulation studies and real data analysis are also presented in this dissertation. The empirical results further confirm our theoretical results. Compared with commonly used two-sample parametric and nonparametric tests, our new tests (TCF and T_i) provide the same empirical type I error rate but higher power.
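The TCF and T_i statistics themselves are defined in the dissertation; as a hedged sketch of the resampling competitor mentioned above, the following implements a standard two-sample bootstrap-t test on skewed (lognormal) data, with all data and settings assumed for illustration. Like the proposed tests, the bootstrap-t adapts to population skewness by calibrating the studentized statistic rather than relying on the normal approximation.

```python
# Sketch of a two-sample bootstrap-t test (not the TCF or T_i tests from the
# dissertation). Data and settings below are assumptions for illustration.
import numpy as np

def welch_t(x, y):
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return (x.mean() - y.mean()) / se

def bootstrap_t_pvalue(x, y, B=4999, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = welch_t(x, y)
    shift = x.mean() - y.mean()
    t_star = np.empty(B)
    for b in range(B):
        xs = rng.choice(x, size=len(x), replace=True)
        ys = rng.choice(y, size=len(y), replace=True)
        se = np.sqrt(xs.var(ddof=1) / len(xs) + ys.var(ddof=1) / len(ys))
        t_star[b] = ((xs.mean() - ys.mean()) - shift) / se   # centred at the observed difference
    return np.mean(np.abs(t_star) >= abs(t_obs))             # two-sided p-value

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=1.0, size=30)   # skewed samples
y = rng.lognormal(mean=0.3, sigma=1.0, size=30)
print("bootstrap-t p-value:", bootstrap_t_pvalue(x, y))
```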
|
150 |
Maximization of power in randomized clinical trials using the minimization treatment allocation technique. Marange, Chioneso Show, January 2010 (has links)
Generally, the primary goal of a randomized clinical trial (RCT) is to compare two or more treatments; clinical investigators therefore require the most appropriate treatment allocation procedure to yield reliable results, regardless of whether the ultimate data suggest a clinically important difference between the treatments being studied. Although recommended by many researchers, the use of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding its statistical efficiency in detecting treatment effects and its complexity in implementation. Methods: A SAS simulation code was designed for allocating patients into two different treatment groups. Categorical prognostic factors were used together with multi-level response variables, and simulated data were used to determine the power of the minimization technique via ordinal logistic regression models. Results: Several scenarios were simulated in this study. Within the selected scenarios, increasing the sample size significantly increased the power of detecting the treatment effect; this was not the case when the probability of allocation was decreased. Power did not change when the probability of allocation given balanced treatment groups was increased. The probability of allocation P_k was the only quantity with a significant effect on treatment balance. Conclusion: Maximum power can be achieved with a sample of size 300, although a small sample of size 200 can be adequate to attain at least 80% power. To achieve maximum power, the probability of allocation should be fixed at 0.75, and set to 0.5 if the treatment groups are equally balanced.
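The thesis implements the allocation in SAS; the following is a hedged Python sketch of a Pocock-Simon style minimization rule with a biased-coin allocation probability p = 0.75, where the prognostic factors, their levels, and the imbalance measure are assumptions made for the demonstration.

```python
# Illustrative minimization (Pocock-Simon style) allocation for two arms.
# Factors, levels, imbalance measure and p = 0.75 are assumptions, not the
# thesis' SAS implementation.
import numpy as np

rng = np.random.default_rng(4)
factors = {"sex": ["M", "F"], "age_group": ["<40", ">=40"]}
# counts[arm][factor][level]: patients already allocated with that covariate level
counts = {arm: {f: {lev: 0 for lev in levs} for f, levs in factors.items()} for arm in ("A", "B")}

def allocate(patient, p=0.75):
    """Assign the arm that minimises covariate imbalance with probability p."""
    imbalance = {}
    for arm in ("A", "B"):
        other = "B" if arm == "A" else "A"
        # total imbalance across factors if this patient were put on `arm`
        imbalance[arm] = sum(
            abs((counts[arm][f][patient[f]] + 1) - counts[other][f][patient[f]])
            for f in factors
        )
    if imbalance["A"] == imbalance["B"]:
        arm = rng.choice(["A", "B"])                 # tie: allocate at random
    else:
        preferred = min(imbalance, key=imbalance.get)
        arm = preferred if rng.random() < p else ("B" if preferred == "A" else "A")
    for f in factors:
        counts[arm][f][patient[f]] += 1
    return arm

for _ in range(10):
    patient = {f: rng.choice(levs) for f, levs in factors.items()}
    print(patient, "->", allocate(patient))
```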
|