1 |
Statistical inference concerning means and percentiles of normal populations
Jaber, K. H. January 1984 (has links)
No description available.
|
2 |
A comparison of hypothesis testing procedures for two population proportions
Hort, Molly January 1900 (has links)
Master of Science / Department of Statistics / John E. Boyer Jr / It has been shown that the most straightforward approach to testing for the difference of two independent population proportions, called the Wald procedure, tends to declare differences too often. Because of this poor performance, various researchers have proposed simple adjustments to the Wald approach that tend to provide significance levels closer to the nominal level. Additionally, several tests that take advantage of different methodologies have been proposed.
This paper extends the work of Tebbs and Roths (2008), who wrote an R program to compare confidence interval coverage for a variety of these procedures when used to estimate a contrast in two or more binomial parameters. Their program has been adapted to generate exact significance levels and power for the two-parameter hypothesis testing situation.
Several combinations of binomial parameters and sample sizes are considered. Recommendations for a choice of procedure are made for practical situations.
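The Wald statistic referred to above can be sketched as follows. The adjusted variant shown (adding one success and one failure to each sample, in the spirit of Agresti-Caffo-style corrections) is one simple adjustment of the kind described, not necessarily among the exact procedures compared in the thesis; the counts used are illustrative.

```python
# Hedged sketch of the Wald test for H0: p1 = p2 and a simple
# "add-one" adjusted variant. Illustrative only; not necessarily the
# exact procedures compared in the thesis.
from math import sqrt

from scipy.stats import norm


def wald_test(x1, n1, x2, n2, adjust=0):
    """Two-sided Wald test for the difference of two proportions.

    adjust=0 gives the plain Wald test; adjust=1 adds one success and
    one failure to each sample (an Agresti-Caffo-style adjustment).
    """
    x1, n1 = x1 + adjust, n1 + 2 * adjust
    x2, n2 = x2 + adjust, n2 + 2 * adjust
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))  # statistic and two-sided p-value


z, p = wald_test(12, 40, 5, 40)                    # plain Wald
z_adj, p_adj = wald_test(12, 40, 5, 40, adjust=1)  # adjusted
```

The adjustment pulls both estimated proportions toward 1/2, which shrinks the statistic and makes the test less eager to declare a difference.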
|
3 |
Hypothesis Testing for the Process Capability Ratio
Datar, Satyajit V. 16 December 2002 (has links)
No description available.
|
4 |
Teste para avaliar a propriedade de incrementos independentes em um processo pontual / Test to evaluate the property of independent increments in a point process
Souza, Francys Andrews de 26 June 2013 (has links)
In econometrics, a topic that has become central over the years is ultra-high-frequency analysis, that is, the analysis of trade-by-trade transactions. It has proven fundamental to modeling intraday market microstructure, yet the theory around it is still scarce and growing slowly. We develop a hypothesis test to verify whether ultra-high-frequency data exhibit independent and stationary increments; establishing this is important, since many studies take that hypothesis as their starting point. Moreover, Grimshaw et al. (2005)[6] showed that fitting a continuous probability distribution to economic data generally yields an increasing intensity function, a consequence of biased estimates introduced by rounding. We therefore work with discrete distributions in order to circumvent this problem caused by the use of continuous distributions.
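For a homogeneous Poisson process, counts in disjoint intervals are independent, which suggests a naive diagnostic (not the test developed in the dissertation): bin the event times and check that counts in adjacent bins are nearly uncorrelated. All rates and bin widths below are illustrative assumptions.

```python
# Naive diagnostic for independent increments in a point process:
# bin the event times and check that counts in adjacent bins are
# (nearly) uncorrelated. Illustrative only; not the test developed
# in the dissertation.
import numpy as np

rng = np.random.default_rng(42)

# Simulate a homogeneous Poisson process on [0, 1000) with rate 5
# by accumulating i.i.d. exponential inter-arrival times.
arrivals = np.cumsum(rng.exponential(scale=1 / 5, size=10_000))
arrivals = arrivals[arrivals < 1000]

# Counts per unit-length bin; for a Poisson process these are i.i.d.
counts, _ = np.histogram(arrivals, bins=np.arange(0, 1001, 1.0))
r = np.corrcoef(counts[:-1], counts[1:])[0, 1]  # lag-1 correlation
# For independent increments, r should be close to 0.
```

A genuine test would replace the ad hoc correlation check with a statistic whose null distribution is known, which is the gap the dissertation addresses.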
|
5 |
Testes de superioridade para modelos de chances proporcionais com e sem fração de cura / Superiority tests for proportional odds models with and without cure fraction
Teixeira, Juliana Cecilia da Silva 24 October 2017 (has links)
Studies that demonstrate the superiority of a drug over others already on the market are of great interest in clinical practice. Based on such studies, the Brazilian National Agency of Sanitary Surveillance (ANVISA) grants registration to new products that may cure faster, or increase the probability of cure, compared with the standard treatment. It is of the utmost importance that the hypothesis tests control the probability of type I error, that is, the probability that a non-superior treatment is approved for use, while also achieving the regulated test power with as few subjects as possible. Existing hypothesis tests for this purpose either disregard the time until the event of interest occurs (allergic reaction, positive effect, etc.) or are based on the proportional hazards model. In practice, however, the proportional hazards assumption may not always hold, as in trials where the hazards of the different study groups converge over time. In that situation, the proportional odds survival model is more adequate for fitting the data. In this work we develop and investigate two hypothesis tests for superiority clinical trials, based on comparing survival curves under the assumption that the data follow the proportional survival odds model, one without a cure fraction and one incorporating it. Several simulation studies are conducted to analyze how well the tests control the type I error probability, and what power they attain, when the data do or do not satisfy the test's assumption, across several sample sizes and two methods of estimating the quantities of interest. We conclude that the type I error probability is underestimated when the data violate the test's assumption and is controlled when they satisfy it, as expected.
Overall, we conclude that satisfying the assumptions of superiority tests is indispensable.
|
6 |
Safety Evaluation of Freeway Exit Ramps
Chen, Hongyun 05 March 2008 (has links)
The primary objective of the study is to evaluate the safety performance of different exit ramps used in Florida and nationally. More specifically, the research objectives include the following two parts: (1) to evaluate the impacts of different exit ramp types on safety performance in freeway diverge areas; and (2) to identify the factors contributing to crashes occurring on exit ramp sections. To achieve these objectives, the research team investigated crash history at 424 sites throughout Florida. The study area includes two parts, the freeway diverge area and the exit ramp section. For the freeway diverge areas, exit ramp types were defined based on the number of lanes used by vehicular traffic to exit freeways. Four exit ramp types were considered: single-lane exit ramps (Type 1), single-lane exit ramps without a taper (Type 2), two-lane exit ramps with an optional lane (Type 3), and two-lane exit ramps without an optional lane (Type 4). For the exit ramp sections, four ramp configurations were considered: diamond, out connection, free-flow loop, and parclo loop.
Cross-sectional comparisons were conducted to compare crash frequency, crash rate, crash severity, and crash types between the different exit ramp groups. Crash predictive models were also built to quantify the impacts of various contributing factors. For the freeway diverge areas, the results show that Type 1 exit ramps have the best safety performance, with the lowest crash frequency and crash rate. The crash prediction model shows that for one-lane exit ramps, replacing a Type 1 with a Type 2 will increase crash counts at freeway diverge areas by 15.57%, while for two-lane ramps, replacing a Type 3 with a Type 4 will increase crash counts by 10.80%. For the exit ramp sections, the out connection ramps have the lowest average crash rate of the four configurations. The crash predictive model shows that replacing an out connection exit ramp with a diamond, free-flow loop, or parclo loop ramp will increase crash counts by 26.90%, 68.47%, and 48.72%, respectively. The results of this study will help transportation decision makers develop tailored technical guidelines governing the selection of the optimum design combinations for freeway diverge areas and exit ramp sections.
|
7 |
Generating Surrogates from Recurrences
Thiel, Marco, Romano, Maria Carmen, Kurths, Jürgen, Rolfs, Martin, Kliegl, Reinhold January 2006 (has links)
In this paper we present an approach to recover the dynamics from the recurrences of a system and then generate (multivariate) twin surrogate (TS) trajectories. In contrast to other approaches, such as linear-like surrogates, this technique produces surrogates that correspond to an independent copy of the underlying system, i.e., they induce a trajectory of the underlying system that visits the attractor in a different way. We show that these surrogates are well suited to test for complex synchronization, which makes it possible to systematically assess the reliability of synchronization analyses. We then apply the TS to study binocular fixational movements and find strong indications that the fixational movements of the left and right eye are phase synchronized. This result suggests either that a single centre in the brain produces the fixational movements in both eyes, or that there is a close link between two such centres.
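The recurrences the method builds on are simply a thresholded distance matrix over the system's states. The twin-surrogate construction itself (finding "twin" points with identical recurrence columns and jumping between them) is more involved; only this first step is sketched here, and the signal and threshold are illustrative assumptions.

```python
# Sketch of the recurrence matrix underlying twin surrogates:
# R[i, j] = 1 when states x_i and x_j are closer than a threshold eps.
# Signal and parameter values are illustrative assumptions only.
import numpy as np


def recurrence_matrix(x, eps):
    """Binary recurrence matrix of a (possibly multivariate) series x."""
    x = np.atleast_2d(x.T).T  # ensure shape (n_points, dim)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)


t = np.linspace(0, 8 * np.pi, 400)
x = np.column_stack([np.sin(t), np.cos(t)])  # toy periodic "attractor"
R = recurrence_matrix(x, eps=0.1)
```

For this periodic toy signal the matrix shows the familiar diagonal-line structure: states one period apart recur, which is exactly the information the twin-surrogate algorithm exploits.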
|
8 |
The k-Sample Problem When k is Large and n Small
Zhan, Dongling 2012 May 1900 (has links)
The k-sample problem, i.e., testing whether two or more data sets come from the same population, is a classic one in statistics. Instead of a small number of k groups of samples, this dissertation works with a large number of p groups of samples, where within each group the sample size, n, is a fixed, small number. We call this a "Large p, but Small n" setting. The primary goal of the research is to provide a test statistic based on kernel density estimation (KDE) that has an asymptotic normal distribution as p goes to infinity with n fixed.
In this dissertation, we propose a test statistic called Tp(S) and its standardized version, T(S). By using T(S), we conduct our test based on the critical values of the standard normal distribution. Theoretically, we show that our test is invariant to a location and scale transformation of the data. We also find conditions under which our test is consistent. Simulation studies show that our test has good power against a variety of alternatives. The real data analyses show that our test finds differences between gene distributions that are not due simply to location.
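The exact form of Tp(S) is defined in the dissertation; as a purely illustrative stand-in for a KDE-based discrepancy in the "many groups, small n" setting, one can measure how far each group's kernel density estimate sits from the pooled estimate. All sample sizes and distributions below are made up for illustration.

```python
# Illustrative KDE-based discrepancy across many small groups; NOT the
# Tp(S) statistic of the dissertation, whose exact form is defined there.
import numpy as np
from scipy.stats import gaussian_kde


def kde_discrepancy(groups, grid):
    """Mean integrated squared distance of each group's KDE from the
    pooled KDE, evaluated on a fixed grid."""
    pooled = gaussian_kde(np.concatenate(groups))(grid)
    dx = grid[1] - grid[0]
    return np.mean([np.sum((gaussian_kde(g)(grid) - pooled) ** 2) * dx
                    for g in groups])


rng = np.random.default_rng(0)
grid = np.linspace(-6, 6, 500)
# 50 groups of n = 8: first all identical, then alternating shifted means.
same = [rng.normal(size=8) for _ in range(50)]
shifted = [rng.normal(loc=2 * (i % 2), size=8) for i in range(50)]
d_same = kde_discrepancy(same, grid)
d_shift = kde_discrepancy(shifted, grid)
```

The discrepancy is larger when the groups genuinely differ in location; the dissertation's contribution is a statistic of this flavour with a known (asymptotically normal) null distribution as the number of groups grows.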
|
9 |
Goodness-of-fit test and bilinear model
Feng, Huijun 12 December 2012 (has links)
The Empirical Likelihood method (ELM) was introduced by A. B. Owen in the early 1990s to test hypotheses. It is a nonparametric method that uses the data directly to carry out statistical tests and to compute confidence intervals/regions. Because of its distribution-free property and generality, it has been studied extensively and employed widely across statistical topics. There are many classical test statistics, such as the Cramer-von Mises (CM) test statistic, the Anderson-Darling test statistic, and the Watson test statistic, to name a few; however, none is universally most powerful. This thesis is dedicated to extending the ELM to several interesting statistical topics in hypothesis testing. First, we focus on testing the fit of distributions. Based on the CM test, we propose a novel jackknife empirical likelihood test via estimating equations for testing goodness-of-fit. The proposed test allows one to add more relevant constraints so as to improve power, and the idea can be generalized to other classical test statistics. Second, to test the error distribution generated from a statistical model (e.g., a regression model), we introduce the jackknife empirical likelihood idea to the regression model and compute confidence regions that retain the distribution-free limiting chi-square property. Third, ELMs based on weighted score equations are proposed for constructing confidence intervals for the coefficient in the simple bilinear model. The effectiveness of all presented methods is demonstrated by extensive simulation studies.
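The classical Cramér-von Mises goodness-of-fit test that the jackknife empirical likelihood construction starts from is available directly in SciPy (the extensions proposed in the thesis are not); the data below are simulated for illustration.

```python
# The classical Cramer-von Mises goodness-of-fit test that the thesis
# builds on; the jackknife empirical likelihood extension itself is
# not shown here.
import numpy as np
from scipy.stats import cramervonmises

rng = np.random.default_rng(1)
x = rng.normal(size=200)

res_good = cramervonmises(x, 'norm')        # H0: standard normal (true)
res_bad = cramervonmises(x + 1.5, 'norm')   # H0 false: mean is shifted
```

The statistic integrates the squared distance between the empirical CDF and the hypothesized CDF, so the shifted sample produces a much larger statistic and a much smaller p-value.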
|
10 |
On Improved Generalization of 5-State Hidden Markov Model-based Internet Traffic Classifiers
Bartnik, Grant 06 June 2013 (has links)
The multitude of services delivered over the Internet would have been difficult to fathom 40 years ago when much of the initial design was being undertaken. As a consequence, the resulting architecture did not make provisions for differentiating between, and managing, the potentially conflicting requirements of different types of services, such as real-time voice communication and peer-to-peer file sharing. This shortcoming has resulted in a situation whereby services with conflicting requirements often interfere with each other and ultimately decrease the effectiveness of the Internet as an enabler of new and transformative services. The ability to passively identify different types of Internet traffic would address this shortcoming and enable effective management of conflicting types of services, in addition to facilitating a better understanding of how the Internet is used in general. Recent attempts at developing such techniques have shown promising results in simulation environments but perform considerably worse when deployed in real-world scenarios. One possible reason for this discrepancy is the implicit assumption shared by recent approaches regarding the degree of similarity between the many networks that comprise the Internet. This thesis quantifies the degradation in performance that can be expected when such an assumption is violated, and demonstrates alternative classification techniques that are less sensitive to such violations.
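A hidden Markov model classifier of the kind the title describes scores a sequence of per-packet features under one HMM per traffic class and picks the best-scoring class. The core computation is the forward algorithm; a minimal 5-state sketch follows, with every parameter value made up for illustration rather than taken from the thesis.

```python
# Minimal sketch of scoring a discrete observation sequence under a
# 5-state HMM with the (rescaled) forward algorithm. All parameter
# values are made-up illustrations; a traffic classifier would fit one
# such model per traffic class and choose the highest log-likelihood.
import numpy as np


def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()       # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik


n_states = 5
rng = np.random.default_rng(7)
start = np.full(n_states, 1 / n_states)
trans = rng.dirichlet(np.ones(n_states), size=n_states)  # rows sum to 1
emit = rng.dirichlet(np.ones(3), size=n_states)          # 3 symbols
obs = rng.integers(0, 3, size=50)                        # toy sequence
ll = forward_loglik(obs, start, trans, emit)
```

The generalization question the thesis studies is whether parameters fitted on one network's traffic still assign high likelihood to the matching class on a different network.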
|