221

A comparison of four estimators of a population measure of model misfit in covariance structure analysis

Zhang, Wei. January 2005 (has links)
Thesis (M. A.)--University of Notre Dame, 2005. / Thesis directed by Ke-Hai Yuan for the Department of Psychology. "October 2005." Includes bibliographical references (leaves 60-63).
222

Jämförelser av styrkefunktioner för t-testet och Mann-Whitneys U-test: en simuleringsstudie / Comparisons of power functions for the t-test and the Mann-Whitney U-test: a simulation study

Widjeskog, Östen. January 1982 (has links)
Thesis--Åbo akademi, 1982. / Added t.p. with thesis statement inserted. Includes bibliographical references (p. 95-102).
223

Issues in Bayesian Gaussian Markov random field models with application to intersensor calibration

Liang, Dong. Cowles, Mary Kathryn. January 2009 (has links)
Thesis advisor: Cowles, Mary K. Includes bibliographic references (p. 167-172).
224

The impact of the inappropriate modeling of cross-classified data structures

Meyers, Jason Leon, Beretvas, Susan Natasha, January 2004 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2004. / Supervisor: Susan N. Beretvas. Vita. Includes bibliographical references.
225

Modeling the performance of a baseball player's offensive production /

Smith, Michael Ross, January 2006 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Statistics, 2006. / Includes bibliographical references (p. 67-68).
226

Rank-sum test for two-sample location problem under order restricted randomized design

Sun, Yiping. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 121-124).
227

Load sharing in a computer-communication network

Wunderlich, Eberhard Frank. January 1976 (has links)
Bibliography: p. 124-126. / Prepared under Advanced Research Projects Agency Contract ONR/N00014-75-C-1183. Originally presented as the author's thesis (M.S.) in the M.I.T. Dept. of Electrical Engineering and Computer Science, 1975.
228

An Empirical Investigation of Tukey's Honestly Significant Difference Test with Variance Heterogeneity and Equal Sample Sizes, Utilizing Box's Coefficient of Variance Variation

Strozeski, Michael W. 05 1900 (has links)
This study sought to determine boundary conditions for the robustness of the Tukey HSD statistic when the assumption of homogeneity of variance was violated. Box's coefficient of variance variation, C^2, was utilized to index the degree of variance heterogeneity. A Monte Carlo computer simulation technique was employed to generate data under controlled violation of the homogeneity of variance assumption. For each combination of sample size and number of treatment groups, an analysis of variance F-test was computed and Tukey's multiple comparison procedure was applied. When two additional sample size cases were added to investigate large sample sizes, the Tukey test was found to be conservative when C^2 was set at zero: the actual significance level fell below the lower limit of the 95 per cent confidence interval around the 0.05 nominal significance level.
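The Monte Carlo design this abstract describes — generate groups with equal means but unequal variances, run the ANOVA F-test and Tukey's HSD, and tally rejections — can be sketched as below. This is a minimal reconstruction under assumed settings (four groups, n = 15 per group, one arbitrary variance pattern), not the author's original program, and the form used for Box's C^2 (relative variance of the group variances) is a hedged assumption.

```python
# Sketch of a Monte Carlo check of Tukey HSD Type I error under
# variance heterogeneity; settings are illustrative assumptions.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
k, n, reps, alpha = 4, 15, 2000, 0.05
variances = np.array([1.0, 1.0, 2.0, 4.0])  # heterogeneous; all means equal, so H0 is true

# One common form of the coefficient of variance variation:
# the relative variance of the group variances (assumed here).
c2 = variances.var() / variances.mean() ** 2

anova_rej = tukey_rej = 0
for _ in range(reps):
    groups = [rng.normal(0.0, np.sqrt(v), size=n) for v in variances]
    _, p = f_oneway(*groups)                 # ANOVA F-test
    anova_rej += p < alpha
    res = pairwise_tukeyhsd(np.concatenate(groups),
                            np.repeat(np.arange(k), n), alpha=alpha)
    tukey_rej += res.reject.any()            # any pairwise difference flagged

print(f"C^2 = {c2:.3f}")
print(f"empirical Type I error: ANOVA {anova_rej/reps:.3f}, "
      f"Tukey HSD {tukey_rej/reps:.3f}")
```

An empirical rejection rate well below alpha for C^2 = 0 style conditions would correspond to the conservatism the abstract reports.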
229

Mitigação de incertezas através da integração com ajuste de histórico de produção e técnicas de amostragem / Uncertainty mitigation through integration with history matching and sampling techniques

Vasconcelos, David Dennyson Sousa 07 November 2011 (has links)
Advisors: Denis José Schiozer, Célio Maschio / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências / Abstract: Geological uncertainties directly influence the prediction of reservoir behavior, complicating the use of tools such as flow simulators. The integration of uncertainty mitigation techniques with history matching plays an important role in this process, mainly due to the limitations of traditional history matching techniques, especially in fields with few observed data and greater uncertainties. The main objective of this work is to obtain a probabilistic history match by mitigating reservoir uncertainty. The purpose of this study is to contribute to an existing methodology, in order to allow the treatment of a large number of uncertain attributes and to increase the efficiency of the process.
The method involves a dynamic procedure of global and local calibration of the geological model, using observed data and sampling techniques. The attributes considered, discretized into uncertainty levels (each with an associated probability), undergo a sampling process based on the Latin Hypercube method and are then combined statistically. Each combination of levels of the different attributes results in a complete simulation model; after the simulations are performed, new probabilities are estimated for each level through a procedure that uses the difference between observed and simulated data for each model. The quality of the history matching can be evaluated from the uncertainty curves, composed of models representative of the initial and final probabilities of each attribute, and through the indicators proposed in this work, such as the variability of the probabilities and the per-well difference between observed and simulated data. The results obtained with this methodology indicate a tool capable of providing reliable results in the uncertainty mitigation process when observed data are available. The gain in quality with this method, in situations where the attributes have more discrete levels than the conventional technique (three levels), depends on the computational effort (in terms of the number of simulations), but without the significant increase in the number of simulations that occurs with the derivation tree technique used in previous works. / Master's / Reservoirs and Management / Master in Petroleum Sciences and Engineering
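As a rough illustration of the sampling-and-reweighting loop described above, the sketch below discretizes two hypothetical attributes into levels with prior probabilities, draws level combinations with a Latin-hypercube-style scheme, and re-estimates the level probabilities by weighting each model inversely to its misfit against observed data. The attribute names, the stand-in misfit values (a real application would run a flow simulator), and the inverse-misfit weighting rule are all illustrative assumptions, not the dissertation's exact procedure.

```python
# Hedged sketch: Latin-hypercube-style sampling of discrete attribute
# levels, followed by misfit-based reweighting of level probabilities.
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical uncertain attributes, discretized into levels with priors.
priors = {
    "permeability": np.array([0.25, 0.50, 0.25]),          # 3 levels
    "porosity":     np.array([0.2, 0.2, 0.2, 0.2, 0.2]),   # 5 levels
}

def lhs_levels(prior, n):
    """Draw n level indices from a discrete prior: one stratified
    uniform per stratum (Latin-hypercube style), mapped through the CDF."""
    u = (rng.permutation(n) + rng.uniform(size=n)) / n
    return np.searchsorted(np.cumsum(prior), u)

n_models = 200
samples = {a: lhs_levels(p, n_models) for a, p in priors.items()}

# Stand-in for the flow simulator: each model's misfit would be, e.g.,
# the squared difference between observed and simulated production data.
misfit = rng.gamma(shape=2.0, scale=1.0, size=n_models)

weights = 1.0 / (misfit + 1e-12)   # lower misfit -> higher weight (assumed rule)
weights /= weights.sum()

# Updated probability of each level = total weight of the models using it.
posteriors = {
    a: np.bincount(idx, weights=weights, minlength=len(priors[a]))
    for a, idx in samples.items()
}
for a, p in posteriors.items():
    print(a, np.round(p, 3))
```

The comparison of prior and posterior level probabilities plays the role of the uncertainty curves mentioned in the abstract: levels whose models consistently match the observed data gain probability, while poorly matching levels lose it.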
230

Sequential Rerandomization in the Context of Small Samples

Yang, Jiaxi January 2021 (has links)
Rerandomization (Morgan & Rubin, 2012) is designed to eliminate covariate imbalance at the design stage of causal inference studies. By improving covariate balance, rerandomization helps provide more precise and trustworthy estimates (i.e., with lower variance) of the average treatment effect (ATE). However, only a limited number of studies have considered rerandomization strategies or discussed the covariate balance criteria that are checked before the rerandomization procedure is carried out. In addition, researchers may find it more difficult to ensure covariate balance across groups when samples are small. Furthermore, researchers conducting experimental design studies in psychology and education may not be able to gather data from all subjects simultaneously: subjects may not arrive at the same time, and experiments can rarely wait for the recruitment of all subjects. We therefore pose the following research questions: 1) How does the rerandomization procedure perform when the sample size is small? 2) Are there balancing criteria that work better than the Mahalanobis distance in the context of small samples? 3) How well does the balancing criterion work in a sequential rerandomization design?
Based on the Early Childhood Longitudinal Study, Kindergarten Class, a Monte Carlo simulation study is presented to find a better covariate balance criterion for small samples. In this study, a neural network prediction model is used to impute the missing counterfactuals. Then, to ensure covariate balance in the context of small samples, the rerandomization procedure applies various criteria measuring covariate balance in order to find the criterion that yields the most precise estimate of the sample average treatment effect. Lastly, a relatively good covariate balance criterion is adapted to Zhou et al.'s (2018) sequential rerandomization procedure, and its performance is examined.
In this dissertation, we aim to identify the best covariate balance criterion for the rerandomization procedure so as to determine the most appropriate randomized assignment with respect to small samples. Using Bayesian logistic regression with a Cauchy prior as the covariate balance criterion yields a 19% decrease in the root mean square error (RMSE) of the estimated sample average treatment effect compared with pure randomization. It is also shown to work effectively in sequential rerandomization, making a meaningful contribution to studies in psychology and education and further enhancing the power of hypothesis testing in randomized experimental designs.
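For readers unfamiliar with the baseline procedure the dissertation builds on, a minimal sketch of rerandomization with the Mahalanobis distance criterion of Morgan & Rubin (2012) follows: draw a treatment assignment, compute the Mahalanobis distance between the treated and control covariate means, and redraw until the distance falls below a threshold. The sample size, covariates, and acceptance threshold are illustrative assumptions, not values from the dissertation.

```python
# Hedged sketch of basic rerandomization with the Mahalanobis
# distance balance criterion; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, p = 20, 3                       # small sample, three covariates
X = rng.normal(size=(n, p))        # observed covariates

def mahalanobis_balance(X, assign):
    """Mahalanobis distance between treated and control covariate means."""
    xt, xc = X[assign].mean(axis=0), X[~assign].mean(axis=0)
    nt, nc = assign.sum(), (~assign).sum()
    cov = np.cov(X, rowvar=False) * (1 / nt + 1 / nc)
    diff = xt - xc
    return float(diff @ np.linalg.solve(cov, diff))

def rerandomize(X, threshold, max_tries=10_000):
    """Redraw a half/half assignment until the balance criterion passes."""
    n = len(X)
    for _ in range(max_tries):
        assign = np.zeros(n, dtype=bool)
        assign[rng.choice(n, n // 2, replace=False)] = True
        if mahalanobis_balance(X, assign) <= threshold:
            return assign
    raise RuntimeError("no acceptable assignment found; loosen the threshold")

assignment = rerandomize(X, threshold=2.0)
print("balance:", round(mahalanobis_balance(X, assignment), 3))
```

The dissertation's contribution can be read against this baseline: it swaps the Mahalanobis criterion in the acceptance test for alternatives (e.g., a Bayesian logistic regression with a Cauchy prior) and applies the test sequentially as subjects arrive.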
