121

Projeção de preços de alumínio: modelo ótimo por meio de combinação de previsões / Aluminum price forecasting: optimal forecast combination

Castro, João Bosco Barroso de 15 June 2015 (has links)
Commodities primárias, tais como metais, petróleo e agricultura, constituem matérias-primas fundamentais para a economia mundial. Dentre os metais, destaca-se o alumínio, usado em uma ampla gama de indústrias, e que detém o maior volume de contratos na London Metal Exchange (LME). Como o preço não está diretamente relacionado aos custos de produção, em momentos de volatilidade ou choques econômicos, o impacto financeiro na indústria global de alumínio é significativo. Previsão de preços do alumínio é fundamental, portanto, para definição de política industrial, bem como para produtores e consumidores. Este trabalho propõe um modelo ótimo de previsões para preços de alumínio, por meio de combinações de previsões e de seleção de modelos através do Model Confidence Set (MCS), capaz de aumentar o poder preditivo em relação a métodos tradicionais. A abordagem adotada preenche uma lacuna na literatura para previsão de preços de alumínio. Foram ajustados 5 modelos individuais: AR(1), como benchmarking, ARIMA, dois modelos ARIMAX e um modelo estrutural, utilizando a base de dados mensais de janeiro de 1999 a setembro de 2014. Para cada modelo individual, foram geradas 142 previsões fora da amostra, 12 meses à frente, por meio de uma janela móvel de 36 meses. Nove combinações de modelos foram desenvolvidas para cada ajuste dos modelos individuais, resultando em 60 previsões fora da amostra, 12 meses à frente. A avaliação de desempenho preditivo dos modelos foi realizada por meio do MCS para os últimos 60, 48 e 36 meses. Um total de 1.250 estimações foram realizadas e 1.140 variáveis independentes e suas transformadas foram avaliadas. A combinação de previsões usando ARIMA e um ARMAX foi o único modelo que permaneceu no conjunto de modelos com melhor acuracidade de previsão para 36, 48 e 60 meses a um nível descritivo do MCS de 0,10. Para os últimos 36 meses, o modelo combinado proposto apresentou resultados superiores em relação a todos os demais modelos. 
Duas covariáveis identificadas no modelo ARMAX, preço futuro de três meses e estoques mundiais, aumentaram a acuracidade de previsão. A combinação ótima apresentou um intervalo de confiança pequeno, equivalente a 5% da média global da amostra completa analisada, fornecendo subsídio importante para tomada de decisão na indústria global de alumínio. / Primary commodities, including metals, oil and agricultural products, are key raw materials for the global economy. Among metals, aluminum stands out for its wide use in industrial applications and for holding the largest contract volume on the London Metal Exchange (LME). As the price is not directly related to production costs, the financial impact of volatility periods or economic shocks on the global aluminum industry is significant. Aluminum price forecasting, therefore, is critical for industrial policy as well as for producers and consumers. This work proposes an optimal forecast model for aluminum prices, using forecast combination and the Model Confidence Set (MCS) for model selection, achieving superior performance compared to traditional methods. The proposed approach fills a gap in the literature on aluminum price forecasting. Five individual models were fitted: AR(1) as a benchmark, ARIMA, two ARIMAX models and a structural model, using monthly data from January 1999 to September 2014. For each individual model, 142 out-of-sample, 12-month-ahead forecasts were generated through a 36-month rolling window. Nine forecast combinations were developed for each individual model estimation, resulting in 60 out-of-sample, 12-month-ahead forecasts. Predictive performance was assessed through the MCS for the latest 36, 48, and 60 months of 12-month-ahead out-of-sample forecasts. A total of 1,250 estimations were performed and 1,140 independent variables and their transformations were assessed.
The forecast combination using ARIMA and an ARMAX model was the only model that remained in the set of best-performing models at the 0.10 MCS p-value in all three periods. For the latest 36 months, the proposed combination outperformed all other models at the 0.10 MCS p-value. Two covariates identified for the ARMAX model, namely the 3-month forward price and global inventories, increased forecast accuracy. The optimal forecast combination generated a small confidence interval, equivalent to 5% of the average aluminum price over the entire sample, providing relevant support for decision makers in the global aluminum industry.
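The rolling-window forecast-combination scheme described in this abstract can be sketched in a few lines. This is a minimal illustration on synthetic data with naive stand-in forecasters (last value and window mean), not the author's fitted ARIMA/ARMAX models; the window length of 36 follows the abstract.

```python
import numpy as np

def rolling_forecasts(series, window, fit_predict):
    """Generate one-step-ahead forecasts from a rolling window.

    fit_predict: function taking a window of observations and
    returning a forecast for the next period.
    """
    return np.array([
        fit_predict(series[t - window:t])
        for t in range(window, len(series))
    ])

# Two illustrative "models": naive last-value and window-mean forecasts
# (hypothetical stand-ins for the individual ARIMA/ARMAX fits).
naive = lambda w: w[-1]
window_mean = lambda w: w.mean()

rng = np.random.default_rng(42)
prices = np.cumsum(rng.normal(0, 1, 120)) + 1500  # synthetic price path

f1 = rolling_forecasts(prices, 36, naive)
f2 = rolling_forecasts(prices, 36, window_mean)
combo = 0.5 * (f1 + f2)            # equal-weight forecast combination

actual = prices[36:]
rmse = lambda f: np.sqrt(np.mean((f - actual) ** 2))
print(rmse(f1), rmse(f2), rmse(combo))
```

In practice the combination weights and the surviving models would be chosen by comparing such loss series through the Model Confidence Set rather than fixed at 0.5.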
122

Análise Bayesiana de modelos de mistura finita com dados censurados / Bayesian analysis of finite mixture models with censored data

Melo, Brian Alvarez Ribeiro de 21 February 2017 (has links)
Misturas finitas são modelos paramétricos altamente flexíveis, capazes de descrever diferentes características dos dados em vários contextos, especialmente na análise de dados heterogêneos (Marin, 2005). Geralmente, nos modelos de mistura finita, todas as componentes pertencem à mesma família paramétrica e são diferenciadas apenas pelo vetor de parâmetros associado a essas componentes. Neste trabalho, propomos um novo modelo de mistura finita, capaz de acomodar observações censuradas, no qual as componentes são as densidades das distribuições Gama, Lognormal e Weibull (mistura GLW). Essas densidades são reparametrizadas, sendo reescritas em função da média e da variância, uma vez que estas quantidades são mais difundidas em diversas áreas de estudo. Assim, construímos o modelo GLW e desenvolvemos a análise de tal modelo sob a perspectiva bayesiana de inferência. Essa análise inclui a estimação, através de métodos de simulação, dos parâmetros de interesse em cenários com censura e com fração de cura, a construção de testes de hipóteses para avaliar efeitos de covariáveis e pesos da mistura, o cálculo de medidas para comparação de diferentes modelos e estimação da distribuição preditiva de novas observações. Através de um estudo de simulação, avaliamos a capacidade da mistura GLW em recuperar a distribuição original dos tempos de falha utilizando testes de hipóteses e estimativas do modelo. Os modelos desenvolvidos também foram aplicados no estudo do tempo de seguimento de pacientes com insuficiência cardíaca do Instituto do Coração da Faculdade de Medicina da Universidade de São Paulo. Nesta aplicação, os resultados mostram uma melhor adequação dos modelos de mistura em relação à utilização de apenas uma distribuição na modelagem dos tempos de seguimentos. Por fim, desenvolvemos um pacote para o ajuste dos modelos apresentados no software R. 
/ Finite mixtures are highly flexible parametric models capable of describing different data features and are widely used in many contexts, especially in the analysis of heterogeneous data (Marin, 2005). Generally, in finite mixture models, all the components belong to the same parametric family and are distinguished only by the associated parameter vector. In this thesis, we propose a new finite mixture model, capable of handling censored observations, in which the components are the densities of the Gamma, Lognormal and Weibull distributions (the GLW finite mixture). These densities are reparameterized in terms of the mean and the variance, since the interpretation of these quantities is widespread across areas of study. In short, we constructed the GLW model and developed its analysis under the Bayesian perspective of inference, considering scenarios with censoring and cure rate. This analysis includes parameter estimation through simulation methods, construction of hypothesis tests to evaluate covariate effects and to assess the mixture weights, computation of model adequacy measures used to compare different models, and estimation of the predictive distribution for new observations. In a simulation study, we evaluated the ability of the GLW mixture to recover the original distribution of failure times, using hypothesis tests and model estimates as criteria for selecting the correct distribution. The models were also applied to the follow-up times of patients with heart failure from the Heart Institute of the University of São Paulo Medical School. In this application, the results show a better fit of the mixture models compared with the use of a single distribution to model the follow-up times. Finally, we developed an R package for fitting the presented models.
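The mean/variance reparameterization at the heart of the GLW mixture can be sketched directly. This is a minimal illustration, not the author's model: the Weibull component is omitted because inverting its mean/variance to shape/scale has no closed form (it needs a numerical solve), so only the Gamma and Lognormal components are shown; the weights and moments are made-up values.

```python
import numpy as np
from scipy import stats

def gamma_mv(x, m, v):
    # Gamma density reparameterized by mean m and variance v:
    # shape k = m^2/v, scale theta = v/m.
    return stats.gamma.pdf(x, a=m**2 / v, scale=v / m)

def lognorm_mv(x, m, v):
    # Lognormal density with mean m and variance v:
    # sigma^2 = log(1 + v/m^2), mu = log(m) - sigma^2/2.
    s2 = np.log(1 + v / m**2)
    return stats.lognorm.pdf(x, s=np.sqrt(s2),
                             scale=np.exp(np.log(m) - s2 / 2))

def mixture_pdf(x, weights, m, v):
    # Components share the same mean and variance, as in the
    # common-moment parameterization; weights sum to one.
    comps = [gamma_mv(x, m, v), lognorm_mv(x, m, v)]
    return sum(w * c for w, c in zip(weights, comps))

x = np.linspace(0.01, 12, 600)
dens = mixture_pdf(x, [0.6, 0.4], m=3.0, v=2.0)
total = np.sum(dens) * (x[1] - x[0])   # Riemann check: close to 1
print(total)
```

Bayesian estimation of the weights and moments under censoring would then proceed by MCMC over this likelihood, as developed in the thesis.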
123

Seleção de modelos multiníveis para dados de avaliação educacional / Selection of multilevel models for educational evaluation data

Coelho, Fabiano Rodrigues 11 August 2017 (has links)
Quando um conjunto de dados possui uma estrutura hierárquica, uma possível abordagem são os modelos de regressão multiníveis, que se justifica pelo fato de haver uma porção significativa da variabilidade dos dados que pode ser explicada por níveis macro. Neste trabalho, desenvolvemos a seleção de modelos de regressão multinível aplicados a dados educacionais. Esta análise divide-se em duas partes: seleção de variáveis e seleção de modelos. Esta última subdivide-se em dois casos: modelagem clássica e modelagem bayesiana. Buscamos através de critérios como o Lasso, AIC, BIC, WAIC entre outros, encontrar quais são os fatores que influenciam no desempenho em matemática dos alunos do nono ano do ensino fundamental do estado de São Paulo. Também investigamos o funcionamento de cada um dos critérios de seleção de variáveis e de modelos. Foi possível concluir que, sob a abordagem frequentista, o critério de seleção de modelos BIC é o mais eficiente; já na abordagem bayesiana, o critério WAIC apresentou melhores resultados. Utilizando o critério de seleção de variáveis Lasso para abordagem clássica, houve uma diminuição de 34% dos preditores do modelo. Por fim, identificamos que o desempenho em matemática dos estudantes do nono ano do ensino fundamental do estado de São Paulo é influenciado pelas seguintes covariáveis: grau de instrução da mãe, frequência de leitura de livros, tempo gasto com recreação em dia de aula, o fato de gostar de matemática, o desempenho em matemática global da escola, desempenho em língua portuguesa do aluno, dependência administrativa da escola, sexo, grau de instrução do pai, reprovações e distorção idade-série. / When a dataset has a hierarchical structure, a possible approach is multilevel regression modeling, justified by the significant portion of the data variability that can be explained at the macro level. In this work, a selection of multilevel regression models for educational data is developed.
This analysis is divided into two parts: variable selection and model selection. The latter is subdivided into two categories: classical and Bayesian modeling. Criteria such as the Lasso, AIC, BIC, and WAIC, among others, are used in this study to identify the factors influencing the performance in Mathematics of ninth-grade elementary school students in the State of São Paulo. We also investigated the behavior of each of the variable and model selection criteria applied to the fitted models discussed throughout this work. It was possible to conclude that, under the frequentist approach, BIC is the most efficient model selection criterion, whereas under the Bayesian approach, WAIC presented better results. Using the Lasso under the classical approach, a 34% decrease in the number of predictors was observed. Finally, we identified that the performance in Mathematics of ninth-grade students in the state of São Paulo is most influenced by the following covariates: mother's educational level, frequency of book reading, time spent on recreation on school days, enjoyment of Mathematics, the school's overall performance in Mathematics, the student's performance in Portuguese, the school's administrative affiliation, gender, father's educational level, past grade retentions, and age-grade distortion.
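The BIC-based comparison the abstract relies on can be illustrated with a deliberately simple (non-multilevel) example. This is a sketch on simulated data, not the thesis's educational dataset: two nested OLS fits are compared with the Gaussian BIC formula, where the larger model adds an irrelevant covariate.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

def bic(rss, n, k):
    # Gaussian BIC up to an additive constant: n*log(RSS/n) + k*log(n).
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(0, 1, n)   # x2 plays no role

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])

_, rss_s = fit_ols(X_small, y)
_, rss_b = fit_ols(X_big, y)

# The larger model always fits at least as well (lower RSS), but BIC's
# log(n) penalty typically favors the smaller, true model.
print(bic(rss_s, n, 2), bic(rss_b, n, 3))
```

In the multilevel setting the same formula applies with the marginal likelihood of the mixed model in place of the Gaussian RSS term.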
125

Seleção de modelos econométricos não aninhados: J-Teste e FBST / Non nested econometric model selection: J-Test and FBST

Fernando Valvano Cerezetti 26 October 2007 (has links)
A comparação e seleção de modelos estatísticos desempenham um papel fundamental dentro da análise econométrica. No que se trata especificamente da avaliação de modelos não aninhados, o procedimento de teste denominado de J-Teste aparece como uma ferramenta de uso freqüente nessa literatura. De acordo com apontamentos, entre os anos de 1984 e 2004 o J-Teste foi citado em 497 artigos pertinentes. Diferentemente do J-Teste, as abordagens Bayesianas possuem um potencial de aplicabilidade ainda pouco explorado na literatura, dado que são metodologicamente coerentes com os procedimentos inferenciais da econometria. Nesse sentido, o objetivo do presente trabalho é o de avaliar a aplicabilidade do procedimento de teste Bayesiano FBST para a comparação de modelos econométricos não aninhados. Implementando-se o FBST para os mesmos dados de estudos estatísticos relevantes na Teoria Econômica, tais como Bremmer (2003) (Curva de Phillips) e Caporale e Grier (2000) (determinação da taxa de juros real), constata-se que os resultados obtidos apontam para conclusões semelhantes daquelas delineadas com a utilização do J-Teste. Além disso, ao se utilizar a noção de função poder para avaliar ambos os procedimentos de teste, observa-se que sob certas condições as chances de erro expressas pelo Erro Tipo I e Erro Tipo II se tornam relativamente próximas. / The comparison and selection of statistical models play an important role in econometric analysis. For the evaluation of non-nested models specifically, the test procedure known as the J-Test is a frequently used tool in the literature. According to surveys, between 1984 and 2004 the J-Test was cited in 497 pertinent articles. Unlike the J-Test, Bayesian approaches have a largely unexplored potential for application in this literature, since they are methodologically coherent with the standard inferential procedures of econometrics.
In this sense, the objective of this essay is to evaluate the applicability of the Bayesian test procedure FBST to the comparison of non-nested econometric models. Applying the FBST to the same data as relevant statistical studies in Economic Theory, such as Bremmer (2003) (Phillips Curve) and Caporale and Grier (2000) (real interest rate determination), the results point to the same conclusions as those attained with the J-Test. Moreover, when the notion of a power function is used to evaluate both test procedures, it can be observed that under some conditions the error probabilities, expressed by Type I and Type II errors, become relatively close.
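The core FBST computation can be sketched with a conjugate toy model. This is a minimal illustration under assumed values, not the thesis's econometric application: for a point null H0: theta = 0 and a normal posterior, the e-value is one minus the posterior mass of the tangential set (parameter points with higher posterior density than the null point), estimated here by Monte Carlo.

```python
import numpy as np
from scipy import stats

def fbst_evalue(post_samples, post_pdf, theta0):
    """FBST evidence value for the point null H0: theta = theta0.

    ev(H0) = 1 - posterior probability of the tangential set
    {theta : p(theta | data) > p(theta0 | data)}.
    """
    dens0 = post_pdf(theta0)
    tangential = post_pdf(post_samples) > dens0
    return 1.0 - tangential.mean()

# Hypothetical conjugate example: posterior for the mean is N(0.5, 0.2^2).
post = stats.norm(loc=0.5, scale=0.2)
rng = np.random.default_rng(1)
samples = post.rvs(size=100_000, random_state=rng)

ev = fbst_evalue(samples, post.pdf, theta0=0.0)
print(ev)  # small e-value: the posterior puts theta0 ~2.5 sd from its center
```

A small e-value is evidence against H0, playing a role loosely analogous to a frequentist p-value, which is what makes the head-to-head power comparison with the J-Test possible.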
126

Processos de salto com memória de alcance variável / Jump process with memory of variable length

Douglas Rodrigues Pinto 26 January 2016 (has links)
Nessa tese apresentamos uma nova classe de modelos, os processos de saltos com memória de alcance variável, uma generalização a tempo contínuo do processo introduzido em Galves e Löcherbach (2013). Desenvolvemos um novo estimador para a árvore de contexto imersa no processo de salto com memória de alcance variável, considerando mais parâmetros fornecidos pela amostra. Obtivemos também uma cota superior da taxa de convergência da árvore estimada para a árvore real, provando a convergência quase certa do estimador. / In this work we deal with a new class of models: jump processes with variable-length memory. This is a continuous-time generalization of the process introduced in Galves and Löcherbach (2013). We present a new estimator for the context tree embedded in this process, considering all the information provided by the sample. We also present an exponential upper bound for the rate of convergence of the estimated tree to the true tree, proving the almost sure convergence of the estimator.
128

Space Use, Resource Selection, and Survival of Reintroduced Bighorn Sheep

Robinson, Rusty Wade 01 August 2017 (has links)
Successful management of bighorn sheep depends on understanding the mechanisms responsible for population growth or decline, habitat selection, and utilization distribution after translocations. We studied a declining population of desert bighorn sheep in the North San Rafael Swell, Utah to determine birthdates of neonates, demographics, limiting factors, population size, probable causes of death, production, and survival. We documented 19 mortalities attributed to a variety of causes, including cougar predation (n = 10, 53%), bluetongue virus (n = 2, 11%), reproductive complications (n = 2, 11%), hunter harvest (n = 1, 5%), and unknown (n = 4, 21%). Annual survival of females was 73% (95% CI = 0.55–0.86) in 2012 and 73% (95% CI = 0.55–0.86) in 2013. Adult male survival was 75% (95% CI = 0.38–0.94) in 2012 and 88% (95% CI = 0.50–0.98) in 2013. Disease testing revealed the presence of pneumonia-related pathogens. The population increased from an estimated 127 in 2012 to 139 in 2013 (λ = 1.09). Lamb:ewe ratios were 47:100 in 2012 and 31:100 in 2013. Mean birthing dates were 21 May in 2012 and 20 May in 2013. Spatial separation from domestic sheep and goats, and aggressive harvest of cougars, may have aided the recovery of this population after disease events. Second, we investigated the timing of parturition and nursery habitat of desert bighorn sheep in the North San Rafael Swell to determine the influence of vegetation, topography, and anthropogenic features on resource selection. We monitored 38 radio-tagged ewes to establish birthing dates and documented birthdates of 45 lambs. We used collar-generated GPS locations to perform logistic regression within a model-selection framework to differentiate between nursery and random locations (n = 750 for each) based on a suite of covariates. The top model included elevation, slope, ruggedness, aspect, vegetation type, distance to trails, and distance to roads.
We used these variables to create a GIS model of nursery habitat for the North San Rafael (desert bighorns) and the Green River Corridor (Rocky Mountain bighorns). Ewes showed a preference for steep, north-facing slopes, rugged terrain, and lower elevations, and avoided roads. Our model provides managers with a map of high-probability nursery areas for desert and Rocky Mountain bighorns to aid in conservation planning and mitigate potential conflicts with industry and domestic livestock. Finally, we monitored 127 reintroduced female bighorn sheep in three adjacent restored populations to investigate whether the size and overlap of habitat use by augmented bighorns differed from that of resident bighorns. The seasonal ranges of residents were generally larger than those of augmented females. However, there was a shift in utilization distribution in all three populations after augmentation. Overlap indices between resident and augmented sheep varied by source herd. These data will help managers understand the dynamics of home range expansion and the overlap between provenance groups following augmentations.
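The used-versus-available logistic regression behind this resource selection analysis can be sketched on simulated data. This is a minimal illustration, not the study's model: the covariates (slope, elevation) and selection coefficients are hypothetical, standardized stand-ins, and the fit is a plain gradient-ascent logistic regression rather than the full model-selection framework.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Gradient-ascent logistic regression for a used/available design."""
    X = np.column_stack([np.ones(len(X)), X])      # add intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted use probability
        beta += lr * X.T @ (y - p) / len(y)        # average log-lik gradient
    return beta

rng = np.random.default_rng(7)
n = 750  # matching the 750 nursery and 750 random points in the study
slope = rng.normal(size=2 * n)       # hypothetical standardized covariates
elev = rng.normal(size=2 * n)
# Simulated selection: steeper slopes preferred, higher elevations avoided.
logit = 1.2 * slope - 0.8 * elev
y = (rng.random(2 * n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta = fit_logistic(np.column_stack([slope, elev]), y)
print(beta)  # coefficient signs recover preference for slope, avoidance of elevation
```

Mapping such fitted coefficients back over a raster of covariates is what produces the GIS nursery-habitat surface described above.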
129

Serial Testing for Detection of Multilocus Genetic Interactions

Al-Khaledi, Zaid T. 01 January 2019 (has links)
A method to detect relationships between disease susceptibility and multilocus genetic interactions is the Multifactor-Dimensionality Reduction (MDR) technique pioneered by Ritchie et al. (2001). Since its introduction, many extensions have been pursued to handle non-binary outcomes and/or account for multiple interactions simultaneously. Studying the effects of multilocus genetic interactions on continuous traits (blood pressure, weight, etc.) is one case that MDR does not handle. Culverhouse et al. (2004) and Gui et al. (2013) proposed two different methods to analyze such a case. In their research, Gui et al. (2013) introduced Quantitative Multifactor-Dimensionality Reduction (QMDR), which uses the overall average of the response variable to classify individuals into risk groups. This classification mechanism may not be efficient in some circumstances, especially when the overall mean is close to some of the multilocus means. To address such difficulties, we propose a new algorithm, Ordered Combinatorial Quantitative Multifactor-Dimensionality Reduction (OQMDR), which uses a series of tests, based on the ascending order of the multilocus means, to identify the best interactions of different orders with risk patterns that minimize the prediction error. Ten-fold cross-validation is used to choose among the resulting models, and standard permutation tests are used to assess the significance of the selected model. The assessment procedure is further modified by utilizing the Generalized Extreme-Value distribution to improve the efficiency of the evaluation process. We present results from a simulation study to illustrate the performance of the algorithm. The proposed algorithm is also applied to a genetic data set associated with Alzheimer's Disease.
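The QMDR classification rule that OQMDR refines can be sketched directly: each multilocus genotype cell is labeled high- or low-risk by comparing its trait mean to the overall mean. This is a minimal illustration on simulated data with two hypothetical SNPs, not the proposed OQMDR algorithm itself (which orders the cell means and applies a series of tests instead).

```python
import numpy as np

def qmdr_risk_groups(genotypes, trait):
    """Label each multilocus genotype cell "high" or "low" by comparing
    its trait mean to the overall mean (the basic QMDR rule)."""
    overall = trait.mean()
    groups = {}
    for g in set(map(tuple, genotypes)):
        mask = np.all(genotypes == g, axis=1)
        groups[g] = "high" if trait[mask].mean() > overall else "low"
    return groups

rng = np.random.default_rng(3)
n = 300
snps = rng.integers(0, 3, size=(n, 2))   # two SNPs coded 0/1/2
# Simulated trait: raised when both SNPs carry at least one minor allele.
trait = rng.normal(0, 1, n) + 2.0 * ((snps[:, 0] > 0) & (snps[:, 1] > 0))

groups = qmdr_risk_groups(snps, trait)
print(groups[(2, 2)], groups[(0, 0)])
```

The weakness the abstract points out is visible here: cells whose means sit near the overall mean get labeled by a razor-thin margin, which is exactly the situation the ordered testing in OQMDR is designed to handle.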
130

A unified discrepancy-based approach for balancing efficiency and robustness in state-space modeling estimation, selection, and diagnosis

Hu, Nan 01 December 2016 (has links)
Due to its generality and flexibility, the state-space model has become one of the most popular models in modern time domain analysis for the description and prediction of time series data. The model is often used to characterize processes that can be conceptualized as "signal plus noise," where the realized series is viewed as the manifestation of a latent signal that has been corrupted by observation noise. In the state-space framework, parameter estimation is generally accomplished by maximizing the innovations Gaussian log-likelihood. The maximum likelihood estimator (MLE) is efficient when the normality assumption is satisfied. However, in the presence of contamination, the MLE suffers from a lack of robustness. Basu, Harris, Hjort, and Jones (1998) introduced a discrepancy measure (BHHJ) with a non-negative tuning parameter that regulates the trade-off between robustness and efficiency. In this manuscript, we propose a new parameter estimation procedure based on the BHHJ discrepancy for fitting state-space models. As the tuning parameter is increased, the estimation procedure becomes more robust but less efficient. We investigate the performance of the procedure in an illustrative simulation study. In addition, we propose a numerical method to approximate the asymptotic variance of the estimator, and we provide an approach for choosing an appropriate tuning parameter in practice. We justify these procedures theoretically and investigate their efficacy in simulation studies. Based on the proposed parameter estimation procedure, we then develop a new model selection criterion in the state-space framework. The traditional Akaike information criterion (AIC), where the goodness-of-fit is assessed by the empirical log-likelihood, is not robust to outliers. Our new criterion comprises a goodness-of-fit term based on the empirical BHHJ discrepancy, and a penalty term based on both the tuning parameter and the dimension of the candidate model.
We present a comprehensive simulation study to investigate the performance of the new criterion. In instances where the time series data is contaminated, our proposed model selection criterion is shown to perform favorably relative to AIC. Lastly, using the BHHJ discrepancy based on the chosen tuning parameter, we propose two versions of an influence diagnostic in the state-space framework. Specifically, our diagnostics help to identify cases that influence the recovery of the latent signal, thereby providing initial guidance and insight for further exploration. We illustrate the behavior of these measures in a simulation study.
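The robustness/efficiency trade-off governed by the BHHJ tuning parameter can be sketched in the simplest possible setting. This is a minimal illustration, not the state-space procedure of the abstract: it estimates a normal mean with known scale, where for fixed sigma the BHHJ integral term is constant in mu, so the estimate maximizes the sum of powered densities; the contamination level and grid search are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def bhhj_location(x, alpha, sigma=1.0):
    """Minimum density power divergence (BHHJ) estimate of a normal mean.

    With sigma fixed, the integral term of the BHHJ objective does not
    depend on mu, so minimizing reduces to maximizing sum_i f(x_i; mu)^alpha.
    alpha -> 0 recovers the MLE (sample mean); larger alpha downweights
    outliers at the cost of some efficiency.
    """
    mus = np.linspace(x.min(), x.max(), 2001)   # simple grid search
    obj = [np.sum(stats.norm.pdf(x, m, sigma) ** alpha) for m in mus]
    return mus[int(np.argmax(obj))]

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 95),   # clean observations
                    rng.normal(8.0, 1.0, 5)])   # 5% contamination

print(x.mean())               # MLE, pulled toward the contamination
print(bhhj_location(x, 0.5))  # robust estimate, near the true mean 0
```

In the state-space setting the same powered-density idea is applied to the innovations, and the tuning parameter alpha is chosen by the data-driven approach the manuscript proposes.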
