81

Dynamic Bayesian Approaches to the Statistical Calibration Problem

Rivers, Derick Lorenzo 01 January 2014 (has links)
The problem of statistical calibration of a measuring instrument can be framed both in a statistical and in an engineering context. In the first, the problem is dealt with by distinguishing between the "classical" approach and the "inverse" regression approach. Both of these models are static and are used to estimate "exact" measurements from measurements that are affected by error. In the engineering context, the variables of interest are indexed by the time at which each measurement is observed. The Bayesian time series method of Dynamic Linear Models (DLM) can be used to monitor the evolution of the measures, thus introducing a dynamic approach to statistical calibration. The research presented employs Bayesian methodology to perform statistical calibration. The DLM framework is used to capture parameters that may be changing or drifting over time. Dynamic approaches to the linear, nonlinear, and multivariate calibration problems are presented in this dissertation. Simulation studies are conducted in which the dynamic models are compared, from both the frequentist and Bayesian perspectives, to some well-known "static" calibration approaches in the literature. Applications to microwave radiometry are given.
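The abstract does not reproduce the model equations; as a rough illustration only, the following is a minimal sketch of a first-order (local-level) DLM filter of the kind the dissertation builds on, with all noise variances chosen arbitrarily:

```python
import numpy as np

# Minimal first-order (local-level) DLM: y_t = theta_t + v_t, theta_t = theta_{t-1} + w_t.
# The Kalman-filter recursions track a calibration parameter that drifts over time.
def dlm_filter(y, v_var, w_var, m0=0.0, c0=1e6):
    m, c = m0, c0              # posterior mean and variance of theta
    means, variances = [], []
    for obs in y:
        r = c + w_var          # prior variance after state evolution
        q = r + v_var          # one-step-ahead forecast variance
        k = r / q              # adaptive (Kalman) gain
        m = m + k * (obs - m)  # shift mean toward the new observation
        c = k * v_var          # updated posterior variance
        means.append(m)
        variances.append(c)
    return np.array(means), np.array(variances)

# Example: a sensor offset that drifts while we observe noisy readings.
rng = np.random.default_rng(0)
true_offset = np.cumsum(rng.normal(0.01, 0.05, 200))
y = true_offset + rng.normal(0, 0.5, 200)
m, c = dlm_filter(y, v_var=0.25, w_var=0.0025)
```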
82

Statistical partition problem for exponential populations and statistical surveillance of cancers in Louisiana

Gu, Jin 18 December 2014 (has links)
In this dissertation, we consider the problem of partitioning a set of k populations with respect to a control population. Some multistage methodologies are proposed for this problem and their properties are derived. Using Monte Carlo simulation techniques, the small- and moderate-sample-size performance of the proposed procedures is studied. We have also considered the statistical surveillance of various cancers in Louisiana.
83

Automated Sea State Classification from Parameterization of Survey Observations and Wave-Generated Displacement Data

Teichman, Jason A 13 May 2016 (has links)
Sea state is a subjective quantity whose accuracy depends on an observer’s ability to translate local wind waves into numerical scales. It provides an analytical tool for estimating the impact of the sea on data quality and operational safety, and tasks that depend on the characteristics of local sea surface conditions often require accurate and immediate assessment. An attempt to automate sea state classification is made using eleven years of ship motion and sea state observation data, via parametric modeling of distribution-based confidence and tolerance intervals and via a probabilistic model based on sea state frequencies. Models built on distribution intervals are unable to convert ship motion data into the various sea state scales with significant accuracy. Model averages compared against sea state tolerances do provide improved statistical accuracy, but the results are limited to trend assessment. The probabilistic model offers better predictive potential than the interval-based models, but is spatially and temporally dependent.
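The thesis's actual features and fitted distributions are not given in the abstract; the following hypothetical sketch only illustrates the general shape of a frequency-weighted probabilistic classifier, with made-up priors and Gaussian likelihoods over an assumed RMS-heave feature:

```python
import numpy as np

# Hypothetical sketch: classify sea state as the argmax of
# P(state) * N(rms_heave | state). All numbers are illustrative assumptions.
def classify_sea_state(rms_heave, priors, means, stds):
    states = np.arange(len(priors))
    likelihood = (1.0 / (stds * np.sqrt(2 * np.pi))) * np.exp(
        -0.5 * ((rms_heave - means) / stds) ** 2
    )
    posterior = priors * likelihood
    return states[np.argmax(posterior)]

# Priors from observed sea state frequencies; per-state mean/std of RMS heave (m)
# would be fitted from historical ship motion data.
priors = np.array([0.05, 0.20, 0.35, 0.25, 0.15])   # states 0..4
means = np.array([0.05, 0.15, 0.40, 0.90, 1.80])
stds = np.array([0.02, 0.06, 0.15, 0.30, 0.60])
print(classify_sea_state(0.5, priors, means, stds))  # -> state 2
```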
84

Dismembering the Multi-Armed Bandit

Timothy J Keaton (6991049) 14 August 2019 (has links)
The multi-armed bandit (MAB) problem refers to the task of sequentially assigning treatments to experimental units so as to identify the best treatment(s) while controlling the opportunity cost of further investigation. Many algorithms have been developed that attempt to balance this trade-off between exploiting the seemingly optimum treatment and exploring the other treatments. The selection of an MAB algorithm for implementation in a particular context is often performed by comparing candidate algorithms in terms of their abilities to control the expected regret of exploration versus exploitation. This singular criterion of mean regret is insufficient for many practical problems, and therefore an additional criterion that should be considered is control of the variance, or risk, of regret.

This work provides an overview of how the existing prominent MAB algorithms handle both criteria. We additionally investigate the effects of incorporating prior information into an algorithm's model, including how sharing information across treatments affects the mean and variance of regret.

A unified and accessible framework does not currently exist for constructing MAB algorithms that control both of these criteria. To this end, we develop such a framework based on the two elementary concepts of dismemberment of treatments and a designed learning phase prior to dismemberment. These concepts can be incorporated into existing MAB algorithms to effectively yield new algorithms that better control the expectation and variance of regret. We demonstrate the utility of our framework by constructing new variants of the Thompson sampler that involve a small number of simple tuning parameters. As we illustrate in simulation and case studies, these new algorithms are implemented in a straightforward manner and achieve improved control of both regret criteria compared to the traditional Thompson sampler. Ultimately, our consideration of additional criteria besides expected regret illuminates novel insights into the multi-armed bandit problem.

Finally, we present visualization methods, and a corresponding R Shiny app for their practical execution, that can yield insights into the comparative performances of popular MAB algorithms. Our visualizations illuminate the frequentist dynamics of these algorithms in terms of how they perform the exploration-exploitation trade-off over their populations of realizations as well as the algorithms' relative regret behaviors. The constructions of our visualizations facilitate a straightforward understanding of complicated MAB algorithms, so that our visualizations and app can serve as unique and interesting pedagogical tools for students and instructors of experimental design.
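For readers unfamiliar with the baseline being modified, here is a minimal sketch of the standard Bernoulli Thompson sampler, tracking the distribution of regret over realizations (both its mean and its variance, the two criteria discussed above); the dismemberment and learning-phase variants themselves are not reproduced here:

```python
import numpy as np

# Standard Bernoulli Thompson sampler with Beta(1, 1) priors on each arm.
def thompson_sampling(true_probs, n_rounds, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    successes = np.ones(k)   # Beta alpha parameters
    failures = np.ones(k)    # Beta beta parameters
    regret = 0.0
    best = max(true_probs)
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior; play the argmax.
        samples = rng.beta(successes, failures)
        arm = int(np.argmax(samples))
        reward = rng.random() < true_probs[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        regret += best - true_probs[arm]
    return regret

# Regret distribution over many realizations: mean AND variance both matter.
regrets = [thompson_sampling([0.3, 0.5, 0.6], 1000, seed=s) for s in range(100)]
print(np.mean(regrets), np.var(regrets))
```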
85

  • Bayesian analysis of 2^k factorial designs using the sparse effects, hierarchy and heredity principles

Biz, Guilherme 29 January 2010 (has links)
In the design of experiments for fitting polynomial models involving k main factors and their interactions, it is quite common to use 2^k or 3^k factorials, or fractions of them. When analyzing the results of such experiments, the heredity principle is frequently considered: once a significant interaction between factors is detected, the factors appearing in that interaction, and their respective interactions, should also be present in the model. In this work, this principle is incorporated directly into the prior for a Bayesian variable selection method, following the ideas proposed by Chipman, Hamada and Wu (1997), but with a change to the hyperparameter values the authors suggested. This change, proposed here, considerably improves the original methodology. The methodology is then illustrated through the analysis of a factorial experiment on the production of pea starch biofilms.
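As a hedged illustration of a heredity-type prior in the spirit of Chipman, Hamada and Wu (1997) — the specific hyperparameter values proposed in the thesis are not given in the abstract, so the numbers below are placeholders:

```python
# Heredity prior sketch: the prior probability that a two-factor interaction is
# active depends on how many of its parent main effects are active.
# p00, p10, p11 are placeholder hyperparameters, not the thesis's values.
def interaction_prior(parent_a_active, parent_b_active,
                      p00=0.01, p10=0.10, p11=0.25):
    """P(two-factor interaction active | states of its parent main effects)."""
    n_active = parent_a_active + parent_b_active
    return {0: p00, 1: p10, 2: p11}[n_active]

# Strong heredity would set p00 = p10 = 0: an interaction can enter the model
# only when BOTH parent main effects are already in it.
print(interaction_prior(True, False))   # 0.10 under weak heredity
```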
86

  • Multilevel models applied in the analysis of repeated measures data

Bergamo, Genevile Carife 28 October 2002 (has links)
In many scientific studies, it is common to find data structured in a hierarchical form: the subjects under study are grouped into lower-level units, which in turn belong to higher-level units, and so on. When analyzing this type of data, it is important to take the hierarchical structure into account, since ignoring it can lead to overestimation of the coefficients of the model under study. Multilevel models were developed to ease the analysis of data with a hierarchical structure; such models account for the variability among the data both within a level and across the different levels of the hierarchy. For repeated measures taken over time, a two-level hierarchical structure can be considered, with the measurement occasions at the first level nested within subjects at the second level. This work presents multilevel models for several levels of hierarchy, together with the estimation methods and tests for the parameters involved in the model. As an application, data from the Elderly Care Program (Programa de Atenção ao Idoso, PAI), developed at the municipal outpatient clinic Dr. Plinio do Prado Coutinho in Alfenas, M.G., were analyzed; the Body Mass Index (BMI) and blood pressure of elderly patients were observed for 22 months. Data on the milk protein content of 79 Australian cows, collected for 19 weeks after calving under three diets (Diggle et al., 1994), were also analyzed. For the PAI data, the blood pressure measurements were found to be positively related to BMI over time, regardless of sex, age and marital status. For the milk protein data, a reduction in protein content over time was observed, regardless of the diets applied. The MLwiN and SAS software packages were used to run the analyses.
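A minimal sketch of the two-level structure described above, written with statsmodels rather than the MLwiN/SAS software actually used in the thesis, and with hypothetical file and column names for the milk protein example:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Two-level repeated measures model: weekly occasions (level 1) nested in cows
# (level 2). File and column names are assumed for illustration.
df = pd.read_csv("protein.csv")   # columns: cow, week, diet, protein

# Random intercept and random slope for week, varying by cow;
# diet enters as a fixed effect.
model = smf.mixedlm("protein ~ week + C(diet)", df,
                    groups=df["cow"], re_formula="~week")
result = model.fit(reml=True)
print(result.summary())
```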
87

  • Mixed linear models: variance-covariance matrix structures and model selection

Camarinha Filho, Jomar Antonio 27 September 2002 (has links)
In agronomy and biology, it is very common to find experiments whose observations are correlated. In principle, such correlations may be associated with the whole plots or the subplots, depending on the experimental design adopted. Moreover, the mixed linear model methodology has been used with increasing frequency, especially after the works of Searle (1988), Searle et al. (1992) and Wolfinger (1993b), among others. The success of the modeling procedure is strongly tied to examining which random effects should remain in the model, and to the possibility of introducing variance-covariance structures for the random variables which, in the mixed linear model, may appear in the residual error and also in the random part associated with a known random factor. In this context, the likelihood ratio test and Akaike's information criterion can help in choosing the most appropriate model for the data analysis; they also make it possible to verify that inadequate model choices lead to divergent conclusions regarding the fixed effects of the model. With the development of the SAS Mixed procedure (Littell et al., 1996), which was used in this work, the analysis of these experiments under the mixed linear model methodology has become more usual and reliable. To meet the objective of this work, two examples (A and B) on the yield response of three wheat cultivars to line-source sprinkler irrigation levels were used. Twenty-nine models were built and analyzed for Example A, and 16 models for Example B. For each example, it was verified that the conclusions regarding the fixed effects changed according to the model adopted. It was also noted that Akaike's criterion must be viewed with caution. Comparing similar models between the two examples confirmed the importance of correct programming in Proc Mixed. In this context, it can be concluded that it is essential to conduct the analysis of experiments in a broad manner, trying several models and checking which ones make sense for the experimental plan, thus avoiding errors at the end of the analysis.
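As a sketch of the model-comparison step, assuming placeholder log-likelihoods in place of actual Proc Mixed output (note that likelihood ratio tests of covariance structures require the same fixed effects in both fits):

```python
from scipy import stats

# Comparing two nested covariance structures via likelihood ratio test and AIC.
# The log-likelihoods below are made-up stand-ins for mixed model fit output.
loglik_simple = -512.4    # e.g. compound symmetry: q1 = 2 covariance parameters
loglik_rich = -505.9      # e.g. unstructured:      q2 = 6 covariance parameters
q1, q2 = 2, 6

lrt = 2 * (loglik_rich - loglik_simple)   # 13.0
p_value = stats.chi2.sf(lrt, df=q2 - q1)  # small p -> prefer richer structure

def aic(loglik, n_params):
    return -2 * loglik + 2 * n_params     # lower is better

print(lrt, p_value, aic(loglik_simple, q1), aic(loglik_rich, q2))
```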
89

Analyses of 2002-2013 China’s Stock Market Using the Shared Frailty Model

Tang, Chao 01 August 2014 (has links)
This thesis adopts a survival model to analyze China’s stock market. The data used are the capitalization-weighted stock market index (CSI 300) and the 300 stocks composing the index. We define the recurrent events using the daily returns of the selected stocks and the index. A shared frailty model, which incorporates random effects, is then used for the analyses, since the survival times of individual stocks are correlated. Maximization of a penalized likelihood is presented to estimate the parameters in the model. The covariates are selected using the Akaike information criterion (AIC) and the variance inflation factor (VIF) to avoid multicollinearity. The results of the analyses show that general capital, the total amount of a stock traded in a day, the turnover rate and the price-to-book ratio are significant in the shared frailty model for daily stock data.
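A brief sketch of the VIF screening step described above, with hypothetical file and column names standing in for the thesis's covariates:

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

# VIF screening for multicollinearity among candidate covariates.
# File and column names are assumed for illustration.
X = pd.read_csv("covariates.csv")[
    ["general_capital", "daily_amount", "turnover_rate", "price_book_ratio"]
]
X = add_constant(X)
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
# Common rule of thumb: drop or re-examine covariates with VIF > 10.
print(vifs.drop("const"))
```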
90

A Study of Four Statistics, Used in Analysis of Contingency Tables, in the Presence of Low Expected Frequencies

Post, Jane R. 01 May 1975 (has links)
Four statistics used for the analysis of categorical data were observed in the presence of many zero cell frequencies in two-way classification contingency tables. The purpose of this study was to determine the effect of many zero cell frequencies upon the distribution properties of each of the four statistics studied. It was found that Light and Margolin's C and Pearson's chi-square statistic closely approximated the chi-square distribution as long as less than one-third of the table cells were empty. The mean and variance of Kullback's 2Î were larger than the expected values in the presence of few empty cells, while the mean of 2Î was found to become small in the presence of large numbers of empty cells. Ku's corrected 2Î statistic was found, in the presence of many zero cell frequencies, to have a much larger mean value than would be expected in a chi-square distribution. Kullback's 2Î demonstrated a peculiar distribution change in the presence of large numbers of zero cell frequencies: 2Î first enlarged, then decreased in average value.
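As a modern illustration of the setting studied (the table below is made up, not the thesis's data), Pearson's X² and the likelihood-ratio statistic (the Kullback-type G) can be compared on a sparse two-way table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# A two-way table with many zero cells, the regime the thesis investigates.
table = np.array([
    [12, 0, 3, 0],
    [ 0, 8, 0, 0],
    [ 5, 0, 0, 2],
])

chi2, p_chi2, dof, expected = chi2_contingency(table)
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood")
print(f"Pearson X^2 = {chi2:.2f} (p = {p_chi2:.3f}), G = {g:.2f} (p = {p_g:.3f})")
# With this many empty cells, the chi-square approximation for both statistics
# becomes unreliable -- the behavior the thesis quantifies.
```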
