  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Splitting frames based on hypothesis testing for patient motion compensation in SPECT

Ma, Linna. January 2006 (has links)
Thesis (M.S.) -- Worcester Polytechnic Institute. / Keywords: Hypothesis testing; motion compensation; SPECT. Includes bibliographical references (leaves 30-31).
92

Rank-sum test for two-sample location problem under order restricted randomized design

Sun, Yiping. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 121-124).
93

Testes de hipoteses para dados funcionais baseados em distancias : um estudo usando splines / Distances approach to test hypothesis for functional data

Souza, Camila Pedroso Estevam de 25 April 2008 (has links)
Advisor: Ronaldo Dias / Master's thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Advances in modern technology have facilitated the collection and analysis of high-dimensional data, or data that are repeated measurements of the same subject. When the data are recorded densely over time, often by machine, they are typically termed functional or curve data, with one observed curve (or function) per subject. The statistical analysis of a sample of n such curves is commonly termed functional data analysis, or FDA. Conceptually, functional data are continuously defined; in practice, of course, they are usually observed at discrete points. There is no general requirement that the data be smooth, but smoothness or other regularity is often a key aspect of the analysis, and in some cases derivatives of the observed functions are important. In this project different smoothing techniques are presented and discussed, mainly those based on spline functions... Note: the complete abstract is available in the full digital thesis. / Master's degree / Nonparametric Statistics / M.Sc. in Statistics
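The spline-based smoothing this abstract centers on can be sketched with scipy's smoothing splines; the grid, noise level, and smoothing parameter below are illustrative choices, not values from the thesis.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)                               # dense observation grid
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)   # one noisy curve

# s sets the smoothness/fidelity trade-off: larger s gives a smoother fit.
spl = UnivariateSpline(t, y, k=3, s=t.size * 0.04)
smooth = spl(t)                # reconstructed smooth curve
deriv = spl.derivative()(t)    # derivatives, which FDA often needs as well
```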
94

The application of frequency domain methods to two statistical problems

Potgieter, Gert Diedericks Johannes 10 September 2012 (has links)
D.Phil. / We propose solutions to two statistical problems using the frequency domain approach to time series analysis. In both problems the data at hand can be described by the well-known signal-plus-noise model. The first problem addressed is the estimation of the underlying variance of a process, for use in a Shewhart or CUSUM control chart, when the mean of the process may be changing. We propose an estimator for the underlying variance based on the periodogram of the observed data. Such estimators have properties which make them superior to some estimators currently used in statistical quality control. We also present a CUSUM chart for monitoring the variance which is based on the periodogram-based estimator. The second problem, stimulated by a specific problem in variable star astronomy, is to test whether or not the mean of a bivariate time series is constant over the span of observations. We consider two periodogram-based tests for constancy of the mean, derive their asymptotic distributions under the null hypothesis and under local alternatives, and show how consistent estimators for the unknown parameters in the proposed model can be found.
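The thesis's estimator itself is not reproduced here, but the underlying idea, that a slowly changing mean contaminates only the lowest frequencies while the noise floor remains visible at high frequencies, can be sketched as follows (the factor of 2 undoes scipy's one-sided PSD normalization):

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
x = 0.01 * t + rng.normal(0.0, 1.5, n)   # drifting mean plus white noise

# One-sided periodogram with fs=1: white noise gives a flat PSD of 2*sigma^2.
freqs, pxx = periodogram(x, fs=1.0)

# The drift sits in the lowest frequencies, so average the upper half
# of the spectrum to estimate the underlying variance.
sigma2_hat = pxx[freqs > 0.25].mean() / 2.0
print(sigma2_hat)   # close to 1.5**2 = 2.25
```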
95

Maximization of power in randomized clinical trials using the minimization treatment allocation technique

Marange, Chioneso Show January 2010 (has links)
Generally the primary goal of randomized clinical trials (RCTs) is to compare two or more treatments, so clinical investigators require the most appropriate treatment allocation procedure to yield reliable results, regardless of whether the ultimate data suggest a clinically important difference between the treatments being studied. Although recommended by many researchers, the use of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding its statistical efficiency in detecting treatment effects and its complexity in implementation. Methods: A SAS simulation code was designed for allocating patients into two different treatment groups. Categorical prognostic factors were used together with multi-level response variables, and it was demonstrated how simulation can help determine the power of the minimization technique, using ordinal logistic regression models. Results: Several scenarios were simulated in this study. Within the selected scenarios, increasing the sample size significantly increased the power of detecting the treatment effect; this was not the case when the probability of allocation was decreased. Power did not change when the probability of allocation given balanced treatment groups was increased. The allocation probability P_k was the only parameter with a significant effect on treatment balance. Conclusion: Maximum power can be achieved with a sample size of 300, although a smaller sample of 200 can be adequate to attain at least 80% power. For maximum power, the probability of allocation should be fixed at 0.75, and set to 0.5 if the treatment groups are equally balanced.
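The abstract's SAS simulation is not reproduced here; the sketch below shows one Pocock-Simon-style minimization step for two arms in Python, using the 0.75 allocation probability mentioned in the conclusion. The data structures and function name are illustrative.

```python
import random

def minimize_assign(patient, counts, factors, p=0.75):
    """Assign a new patient to arm 0 or 1 by minimization.

    counts[f][level] is a two-element list holding how many patients
    with that factor level are already in each arm.
    """
    imbalance = [0, 0]
    for f in factors:
        c = counts[f][patient[f]]
        imbalance[0] += abs((c[0] + 1) - c[1])   # imbalance if arm 0 chosen
        imbalance[1] += abs(c[0] - (c[1] + 1))   # imbalance if arm 1 chosen
    if imbalance[0] == imbalance[1]:
        arm = random.randint(0, 1)               # balanced: randomize equally
    else:
        best = 0 if imbalance[0] < imbalance[1] else 1
        arm = best if random.random() < p else 1 - best
    for f in factors:
        counts[f][patient[f]][arm] += 1
    return arm

factors = ["sex", "age_group"]
counts = {f: {"A": [0, 0], "B": [0, 0]} for f in factors}
print(minimize_assign({"sex": "A", "age_group": "B"}, counts, factors))
```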
96

Statistické zpracování dat z reálného výrobního procesu / Statistical analysis of real manufacturing process data

Kučerová, Barbora January 2012 (has links)
The topic of this master's thesis is statistical control of a manufacturing process. The aim was to analyse data from a real technological process, a turret injection moulding press. The analysis was carried out using statistical hypothesis testing, analysis of variance, a general linear model, and process capability analysis, in the statistical software Minitab 16.
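The process-capability part of such an analysis is easy to illustrate; the specification limits and measurements below are invented, and a stable, normally distributed process is assumed.

```python
import numpy as np

def capability(x, lsl, usl):
    """Capability indices Cp and Cpk for a stable, normal process."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # penalizes an off-centre mean
    return cp, cpk

rng = np.random.default_rng(1)
parts = rng.normal(10.02, 0.05, 200)   # simulated part measurements
print(capability(parts, lsl=9.85, usl=10.15))
```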
97

The Chi Square Approximation to the Hypergeometric Probability Distribution

Anderson, Randy J. (Randy Jay) 08 1900 (has links)
This study compared the results of the chi-square test of independence and the corrected chi-square statistic against Fisher's exact probability test (the hypergeometric distribution) in connection with sampling from a finite population. Data were collected by advancing the minimum cell size from zero to a maximum which resulted in a tail-area probability of 20 percent, for sample sizes from 10 to 100 in varying increments. Analysis of the data supported the rejection of the null hypotheses regarding the general rule-of-thumb guidelines concerning sample size, minimum expected cell frequency, and the continuity correction factor. It was discovered that computation using Yates' correction factor resulted in values which were so overly conservative (i.e., tail-area probabilities 20 to 50 percent higher than Fisher's exact test) that conclusions drawn from this calculation might prove inaccurate. Accordingly, a new correction factor was proposed which eliminated much of this discrepancy. Its performance was as consistent as that of the uncorrected chi-square statistic, and at times even better.
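The study's comparison can be rerun in miniature with scipy, which exposes the uncorrected chi-square test, the Yates-corrected version, and Fisher's exact test; the table below is illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[8, 2],
                  [1, 5]])   # a small 2x2 table, where corrections matter most

chi2_u, p_u, _, _ = chi2_contingency(table, correction=False)
chi2_y, p_y, _, _ = chi2_contingency(table, correction=True)
_, p_f = fisher_exact(table)

print(f"uncorrected chi-square: p = {p_u:.4f}")
print(f"Yates-corrected:        p = {p_y:.4f}")   # typically conservative
print(f"Fisher's exact test:    p = {p_f:.4f}")
```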
98

Nonparametric Methods for Measuring Conditional Dependence, Multi-Sample Dissimilarity, and Testing for Symmetry

Huang, Zhen January 2024 (has links)
We describe new nonparametric methods for (i) quantifying conditional dependence, (ii) quantifying multi-sample dissimilarity, and (iii) testing multivariate symmetry. In the first part of the thesis, we propose a kernel partial correlation (KPC) to quantify conditional dependence, and a kernel measure of dissimilarity between multiple distributions (KMD) to quantify the difference between multiple distributions. Both measures are deterministic numbers between 0 and 1, with 0 and 1 corresponding to the two extreme cases: KPC is 0 if and only if conditional independence holds, and 1 if and only if there is a perfect conditional functional relationship; KMD is 0 if and only if all the distributions being compared are equal, and 1 if and only if they are mutually singular. Both KPC and KMD can be estimated consistently using computationally efficient graph-based methods (including the k-nearest-neighbor graph and the minimum spanning tree). For applications, KPC can be used to develop a model-free variable selection algorithm. This algorithm is provably consistent under sparsity assumptions and shows superior performance in practice compared to existing procedures. KMD can be used to design an easily implementable test for the equality of multiple distributions, which is consistent against all alternatives where at least two distributions differ. A problem closely related to multi-sample testing is testing for symmetry. In the second part of the thesis, we develop distribution-free tests for multivariate symmetry (including central symmetry, sign symmetry, spherical symmetry, etc.) based on multivariate signs, ranks, and signed-ranks defined via optimal transport (OT). One test we propose can be thought of as a multivariate generalized Wilcoxon signed-rank (GWSR) test and shares many of the appealing properties of its one-dimensional counterpart. In particular, when testing against location-shift alternatives, the GWSR test suffers no loss in (asymptotic) efficiency compared to Hotelling's T² test, despite being nonparametric and exactly distribution-free. Another test we propose is based on a combination of kernel methods and the multivariate signs and ranks defined via OT. This test is universally consistent against all alternatives while still maintaining the distribution-free property. Furthermore, it is capable of testing a broader class of multivariate symmetry, including exchangeability, extending beyond the class testable by the GWSR test.
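As a rough illustration of the graph-based estimation idea (not the KMD estimator defined in the thesis), one can compare how often nearest neighbours in the pooled sample share a sample label against the rate expected under identical distributions; the function below is a heavily simplified sketch with an invented name.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_dissimilarity(samples, k=5):
    """Crude multi-sample dissimilarity from a k-NN graph: near 0 for
    identically distributed samples, near 1 for well-separated ones."""
    X = np.vstack(samples)
    labels = np.concatenate([np.full(len(s), i) for i, s in enumerate(samples)])
    _, idx = cKDTree(X).query(X, k=k + 1)        # k+1: each point finds itself
    same = (labels[idx[:, 1:]] == labels[:, None]).mean()
    chance = ((np.bincount(labels) / len(X)) ** 2).sum()
    return (same - chance) / (1 - chance)

rng = np.random.default_rng(2)
a = rng.normal(0, 1, (200, 2))
b = rng.normal(3, 1, (200, 2))        # well separated: value near 1
print(knn_dissimilarity([a, b]))
```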
99

Hypothesis testing procedures for non-nested regression models

Bauer, Laura L. January 1987 (has links)
Theory often indicates that a given response variable should be a function of certain explanatory variables, yet fails to provide meaningful information as to the specific form of this function. To test the validity of a given functional form with sensitivity toward the feasible alternatives, a procedure is needed for comparing non-nested families of hypotheses. Two hypothesized models are said to be non-nested when one model is neither a restricted case nor a limiting approximation of the other. Such non-nested hypotheses cannot be tested using conventional likelihood ratio procedures; in recent years, however, several new approaches have been developed for testing non-nested regression models. A comprehensive review of the procedures for the case of two linear regression models was presented. Comparisons between these procedures were made on the basis of asymptotic distributional properties, simulated finite-sample performance, and computational ease. A modification to the Fisher and McAleer JA-test was proposed and its properties investigated. As a compromise between the JA-test and the orthodox F-test, it was shown to have an exact non-null distribution, and its properties, both analytically and empirically derived, exhibited the practical worth of such an adjustment. A Monte Carlo study of the testing procedures involving non-nested linear regression models in small-sample situations (n ≤ 40) provided information necessary for the formulation of practical guidelines. It was evident that the modified Cox procedure, N̄, was most powerful for providing correct inferences. In addition, there was strong evidence to support the use of the adjusted J-test (AJ) (Davidson and MacKinnon's test with small-sample modifications due to Godfrey and Pesaran), the modified JA-test (NJ), and the orthodox F-test for supplemental information. Similar results were obtained under non-normal disturbances. An empirical study of spending patterns for household food consumption provided a practical application of the non-nested procedures in a large-sample setting. The study provided not only an example of non-nested testing situations but also the opportunity to draw sound inferences from the test results. / Ph. D.
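For the simplest member of this family, Davidson and MacKinnon's J-test (without the small-sample adjustments studied in the thesis), the mechanics are: fit the rival model, then test whether its fitted values add explanatory power to the null model. A sketch with statsmodels on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 40
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)   # H0 model is true

X1 = sm.add_constant(x1)   # H0: y depends on x1
X2 = sm.add_constant(x2)   # H1: y depends on x2 (non-nested with H0)

# Augment the null model with the rival model's fitted values; a
# significant coefficient on them rejects H0 in favour of H1.
yhat2 = sm.OLS(y, X2).fit().fittedvalues
fit = sm.OLS(y, np.column_stack([X1, yhat2])).fit()
print(fit.tvalues[-1], fit.pvalues[-1])   # t statistic and p-value for yhat2
```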
100

Algoritmiese rangordebepaling van akademiese tydskrifte / Algorithmic ranking of academic journals

Strydom, Machteld Christina 31 October 2007 (has links)
Abstract: Ranking of journals is often used as an indicator of quality and is extensively used as a mechanism for determining promotion and funding. This research studied ways of extracting the impact, or influence, of a journal from citation data, using an iterative algorithm that allocates a weight to the source of each citation. In the Internet environment this approach is already applied with great success, notably by the PageRank algorithm of the Google search engine. After evaluating and discussing, with specialist researchers, the characteristics that determine the quality and importance of research, a measure called the Influence factor was introduced, emulating the PageRank algorithm used by Google to rank web pages. The Influence factor can be seen as a measure of the reaction generated by a publication, based on the number of scientists who read and cited it. The algorithm was tested on case studies, and a good correlation was found between the rankings produced by the Influence factor and those given by specialist researchers; the empirical study suggests that this measure reflects specialists' intuition better than a simple count of citations. / Mathematical Sciences / M.Sc. (Operations Research)
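A generic power-iteration scheme of the PageRank family, which the Influence factor emulates, can be sketched as below; the damping factor and the toy citation graph are illustrative, not the thesis's exact definition.

```python
import numpy as np

def influence(citations, d=0.85, iters=100):
    """Power iteration for a PageRank-style score on a citation graph.

    citations[i] lists the papers cited by paper i, so a citation's
    weight grows with the influence of the citing paper.
    """
    n = len(citations)
    score = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - d) / n)
        for i, refs in enumerate(citations):
            if refs:                      # spread i's weight over its references
                new[list(refs)] += d * score[i] / len(refs)
            else:                         # no references: spread uniformly
                new += d * score[i] / n
        score = new
    return score

# Paper 2 is cited by both other papers and ranks highest.
print(influence([[2], [0, 2], []]))
```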
