11

Evaluation of predictive ability among multivariate GARCH models: an analysis based on the Model Confidence Set criterion

Borges, Bruna Kasprzak January 2012
This dissertation examines the selection of multivariate GARCH models in terms of forecasting performance for the conditional covariance matrix. The empirical application uses returns on 7 stock indices and compares a set of 34 model specifications, computing one-step-ahead conditional variance forecasts over an evaluation sample of 60 observations for each specification. The models are compared with the Model Confidence Set (MCS) procedure, evaluated under two loss functions that are robust to imperfect volatility proxies. The MCS is a procedure that compares several models simultaneously in terms of predictive ability and determines, at a given confidence level, a set of models that are statistically equivalent in forecasting terms.
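To make the MCS procedure concrete, here is a minimal Python sketch of its elimination logic. It is an illustrative simplification, not the authors' implementation: it uses an iid bootstrap and the t-max elimination rule, whereas Hansen, Lunde and Nason recommend a block bootstrap to preserve serial dependence in the loss differentials.

```python
import numpy as np

def mcs(losses, alpha=0.10, n_boot=1000, seed=None):
    """Simplified Model Confidence Set (after Hansen, Lunde & Nason).

    losses : (T, m) array of per-period losses, one column per model.
    Returns the indices of the models in the MCS at level 1 - alpha.
    """
    rng = np.random.default_rng(seed)
    T = losses.shape[0]
    surviving = list(range(losses.shape[1]))
    while len(surviving) > 1:
        L = losses[:, surviving]
        d = L - L.mean(axis=1, keepdims=True)      # loss vs. set average
        dbar = d.mean(axis=0)
        # bootstrap distribution of the studentized max statistic
        boot_dbar = np.empty((n_boot, len(surviving)))
        for b in range(n_boot):
            idx = rng.integers(0, T, size=T)       # iid resample of periods
            boot_dbar[b] = d[idx].mean(axis=0)
        var = ((boot_dbar - dbar) ** 2).mean(axis=0)
        t_stat = dbar / np.sqrt(var)
        boot_max = ((boot_dbar - dbar) / np.sqrt(var)).max(axis=1)
        p_value = (boot_max >= t_stat.max()).mean()
        if p_value >= alpha:                       # equal ability not rejected
            break
        surviving.pop(int(np.argmax(t_stat)))     # drop the worst model
    return surviving
```

Applied to a 60-by-34 loss matrix like the one described in the abstract, `mcs(losses)` would return the specifications that survive at the 90% level.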
12

Topics on Uncertainty Quantification for Model Selection

Wang, Linna January 2021
No description available.
13

Comparison of forecasts for Brazilian industrial production considering calendar effects and aggregated and disaggregated models

Nishida, Rodrigo 03 February 2016
This work verifies the existence and relevance of calendar effects in industrial indicators. It explores linear univariate models for the Brazilian monthly industrial production index and some of its components. An in-sample analysis is first conducted using state-space structural models and the Autometrics selection algorithm, which indicates a statistically significant effect for most calendar-related variables. Then, using the Diebold-Mariano (1995) procedure and the Model Confidence Set developed by Hansen, Lunde and Nason (2011), out-of-sample forecasts from the Autometrics-derived models are compared with a simple double-difference device for horizons of up to 24 months ahead. In general, the forecasts of the Autometrics models that include calendar variables are superior 1-2 steps ahead and beat the naive model at all horizons. Aggregating the category-of-use components to form the general industry indicator shows evidence of better performance in shorter-term forecasts.
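As a companion to the comparison methodology mentioned above, here is a small sketch of the Diebold-Mariano (1995) test for two competing forecasts under squared-error loss. The variable names are illustrative assumptions; the long-run variance uses the usual h-1 autocovariance correction.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano (1995) test of equal predictive accuracy under
    squared-error loss. e1, e2 are forecast-error series for the same
    target; h is the forecast horizon. Returns the DM statistic and a
    two-sided p-value from its asymptotic normal distribution."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    T = len(d)
    lrv = np.var(d, ddof=0)                         # lag-0 variance term
    for k in range(1, h):                           # add h-1 autocovariances
        lrv += 2 * np.cov(d[k:], d[:-k], ddof=0)[0, 1]
    dm = d.mean() / np.sqrt(lrv / T)
    return dm, 2 * stats.norm.sf(abs(dm))
```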
14

Fine-grained object categorization: plant species identification

Rejeb Sfar, Asma 10 July 2014
We introduce models for fine-grained categorization, focusing on determining botanical species from leaf images. Images with both uniform and cluttered backgrounds are considered, and several identification scenarios are presented, including different levels of human participation. Both feature extraction and classification algorithms are investigated. We first leverage domain knowledge from botany to build a hierarchical representation of leaves based on IdKeys, which encode invariant characteristics and refer to geometric properties (i.e., landmarks) and groups of species (e.g., taxonomic categories). The main idea is to sequentially refine the object description and thus narrow down the set of candidates during identification. We also introduce vantage feature frames as a more generic object representation, together with a mechanism for focusing attention around several vantage points (where to look) and learning dedicated features (what to compute). Based on an underlying coarse-to-fine hierarchy, categorization then proceeds from coarse-grained to fine-grained using local classifiers based on likelihood ratios; the algorithm returns a list of candidate species ranked by likelihood ratio. Motivated by applications, we also introduce a new approach and performance criterion: report a subset of species whose expected size is minimized subject to containing the true species with high probability. The approach is model-based and outputs a confidence set, in analogy with confidence intervals in classical statistics. All methods are illustrated on multiple leaf datasets, with comparisons to existing methods.
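The confidence-set output scenario can be illustrated with a short sketch (our assumption of one natural realisation, not the authors' code): given estimated class probabilities, report the smallest set of species whose cumulative probability reaches the target level.

```python
import numpy as np

def species_confidence_set(probs, level=0.95):
    """Smallest set of candidate species whose cumulative estimated
    probability reaches `level`. probs: (k,) class probabilities."""
    order = np.argsort(probs)[::-1]        # most likely species first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, level)) + 1
    return order[:cutoff]                  # indices of retained species
```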
15

Discovering forecasting models for Brazilian inflation: an analysis based on the Autometrics algorithm

Silva, Anderson Moriya 29 January 2016
This work evaluates the predictive ability of econometric time-series models based on macroeconomic indicators for forecasting Brazilian inflation (IPCA). The models are fitted in-sample and their ex-post forecasts are accumulated from one to twelve months ahead. The forecasts are compared against a univariate benchmark, here a first-order autoregressive model, AR(1). The sample runs from January 2000 to August 2015 for model fitting and subsequent evaluation. In all, 1,170 different economic variables were evaluated for each forecast period, searching for the best set of predictors at each point in time. The Autometrics algorithm was used for model selection, and the models were compared using the Model Confidence Set developed by Hansen, Lunde and Nason (2010). The results point to performance gains of the multivariate models at horizons beyond one step ahead.
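Autometrics itself is proprietary (part of OxMetrics), but the general-to-specific idea it automates can be sketched with a crude backward-elimination stand-in. The function below is our simplified assumption of the workflow, not the actual algorithm, which adds tree search, diagnostic testing, and indicator saturation.

```python
import statsmodels.api as sm

def gets_select(y, X, t_crit=2.0):
    """Crude general-to-specific selection in the spirit of Autometrics.

    X is assumed to be a pandas DataFrame of candidate regressors.
    Starting from the general model, the least significant regressor is
    dropped until every surviving |t|-statistic exceeds t_crit.
    """
    cols = list(X.columns)
    while cols:
        res = sm.OLS(y, sm.add_constant(X[cols])).fit()
        tvals = res.tvalues.drop("const").abs()
        if tvals.min() >= t_crit:
            break
        cols.remove(tvals.idxmin())        # drop the weakest regressor
    return cols
```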
16

Testing the predictive superiority of the random walk in real effective exchange rate models

Zimmermann, Fabiano Penna 02 February 2016
This study seeks to identify which variables are most relevant for forecasting the real effective exchange rate and to analyze the robustness of those forecasts. Johansen cointegration tests were performed on 13 macroeconomic variables. The database consists of quarterly series, and the tests were carried out on the series combined two by two, three by three, and four by four. Using this method, we find models that cointegrate with each other for the countries analyzed. From these models, out-of-sample forecasts were produced for the last 60 observations. Forecast quality was assessed by the mean squared error (MSE) and by Hansen's Model Confidence Set (MCS), using a random-walk model of the real exchange rate as the benchmark. All tests show that, as the forecast horizon lengthens, the random walk loses predictive power and most models become more informative about the future of the real effective exchange rate.
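For readers unfamiliar with the screening step, the following sketch shows how the Johansen trace test can be run over a block of series with statsmodels. Treat it as an illustration of the test, not a reconstruction of the author's exact setup.

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def johansen_rank(data, det_order=0, k_ar_diff=1):
    """Estimated cointegration rank of the series in `data` (T x p) at
    the 5% level via the Johansen trace test. `lr1` holds the trace
    statistics and column 1 of `cvt` the 5% critical values."""
    res = coint_johansen(data, det_order, k_ar_diff)
    rank = 0
    for stat, cv in zip(res.lr1, res.cvt[:, 1]):
        if stat > cv:
            rank += 1      # reject H0: rank <= r, move to the next r
        else:
            break
    return rank
```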
17

Models for inflation forecasting in Brazil with data disaggregated by regions

Torres, Gustavo Dias 23 August 2017
The objective of this study is to evaluate whether there are gains from working with data disaggregated by regions to forecast inflation in Brazil. To this end, we built univariate autoregressive models for the aggregate IPCA (the main Brazilian consumer price index) and for two disaggregations (by region, and by group and region), with a forecasting horizon of up to 12 months ahead. Monthly IPCA data were used between January 1996 and October 2016 for the national index and for the 11 metropolitan regions and capitals that make up the index. Out-of-sample forecasts were analyzed over two distinct time windows: first between December 2006 and October 2016, and second between December 2006 and December 2012. The models were estimated with the OxMetrics 7 software and, in some cases, the Autometrics algorithm was also used. Model comparisons were made using the mean squared error and the Model Confidence Set technique developed by Hansen, Lunde and Nason (2011). The results indicate that the disaggregated models outperform the aggregate ones; in particular, disaggregation by regions can contribute to a smaller forecast error, although no single model is superior at all forecast horizons and the results are conditional on the sample analyzed.
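The aggregated-versus-disaggregated comparison rests on a simple bottom-up step: forecast each component and combine with the index weights. A minimal sketch, assuming fixed weights (the actual IPCA weights are updated over time):

```python
import numpy as np

def bottom_up_forecast(component_forecasts, weights):
    """Combine component forecasts into an aggregate-index forecast.

    component_forecasts : (h, k) array, h horizons by k components
                          (regions, or group-by-region cells).
    weights             : (k,) index weights, normalised internally.
    """
    w = np.asarray(weights, dtype=float)
    return np.asarray(component_forecasts) @ (w / w.sum())
```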
18

Towards a flexible statistical modelling by latent factors for evaluation of simulated responses to climate forcings

Fetisova, Ekaterina January 2017
In this thesis, using the principles of confirmatory factor analysis (CFA) and the cause-effect concept associated with structural equation modelling (SEM), a new flexible statistical framework for the evaluation of climate model simulations against observational data is suggested. The design of the framework also makes it possible to investigate the magnitude of the influence of different forcings on temperature, as well as to investigate a general causal latent structure of temperature data. In terms of the questions of interest, the framework can be viewed as a natural extension of the statistical approach of 'optimal fingerprinting' employed in many Detection and Attribution (D&A) studies. Its flexibility means that it can be applied under different circumstances concerning such aspects as the availability of simulated data, the number of forcings in question, the climate-relevant properties of these forcings, and the properties of the climate model under study, in particular those concerning the reconstructions of forcings and their implementation. Although the framework involves the near-surface temperature as the climate variable of interest and focuses on the period covering approximately the last millennium prior to industrialisation, the statistical models included in the framework can in principle be generalised to any period in the geological past, as long as simulations and proxy data on a continuous climate variable are available. Within the confines of this thesis, the performance of some CFA- and SEM-models is evaluated in pseudo-proxy experiments, in which the true unobservable temperature series is replaced by temperature data from a selected climate model simulation. The results indicate that, depending on the climate model and the region under consideration, the underlying latent structure of temperature data can be of varying complexity, rendering our statistical framework, which serves as a basis for a wide range of CFA- and SEM-models, a powerful and flexible tool. Thanks to these properties, its application may ultimately contribute to increased confidence in conclusions about the ability of the climate model in question to simulate observed climate changes. / At the time of the doctoral defense, Papers 2 and 3 were unpublished manuscripts.
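The pseudo-proxy design can be summarised in a few lines of code. This is a generic sketch of the idea (simulated temperature plus proxy noise at a chosen signal-to-noise ratio); the noise model and parameter names are our assumptions, not the thesis setup.

```python
import numpy as np

def make_pseudo_proxies(sim_temp, snr=0.5, n_proxies=10, seed=None):
    """Generate pseudo-proxies from a simulated temperature series.

    The simulation plays the role of the 'true' unobservable temperature;
    white noise is added at signal-to-noise ratio `snr`.
    Returns an (n_proxies, T) array of noisy proxy series.
    """
    rng = np.random.default_rng(seed)
    sim_temp = np.asarray(sim_temp, dtype=float)
    noise_sd = sim_temp.std() / snr
    noise = rng.normal(0.0, noise_sd, (n_proxies, sim_temp.size))
    return sim_temp[None, :] + noise
```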
19

Efficient confidence sets for disease gene locations

Sinha, Ritwik 19 March 2007
No description available.
20

Bootstrap confidence sets under model misspecification

Zhilova, Mayya 07 December 2015
The thesis studies a multiplier bootstrap procedure for the construction of likelihood-based confidence sets in two cases. The first focuses on a single parametric model, while the second extends the construction to simultaneous confidence estimation for a collection of parametric models. Theoretical results justify the validity of the bootstrap procedure for a limited sample size, a large number of considered parametric models, growing parameter dimensions, and possible misspecification of the parametric assumptions. In the case of one parametric model, the bootstrap approximation works if the cube of the parameter dimension is smaller than the sample size. The main result on bootstrap validity continues to apply even if the underlying parametric model is misspecified, under a so-called small modelling bias condition. If the true model deviates significantly from the considered parametric family, the bootstrap procedure is still applicable but becomes conservative: the size of the constructed confidence sets is increased by the modelling bias. For the construction of simultaneous confidence sets, we suggest a multiplier bootstrap procedure for estimating the joint distribution of the likelihood ratio statistics and for adjusting the confidence level for multiplicity. Theoretical results establish bootstrap validity; the number of parametric models enters the resulting approximation error only logarithmically. We also consider the case where the parametric models are misspecified: if the misspecification is significant, the bootstrap critical values exceed the true ones and the bootstrap confidence set becomes conservative. The theoretical approach includes a non-asymptotic square-root Wilks theorem, a Gaussian approximation of the Euclidean norm of a sum of independent vectors, and comparison and anti-concentration bounds for the Euclidean norm of Gaussian vectors. Numerical experiments for misspecified regression models confirm our theoretical results.
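To give a flavour of the construction, here is a minimal multiplier-bootstrap sketch for the simplest case, a Euclidean-ball confidence set for a multivariate mean. The likelihood-based sets in the thesis are far more general, so this is an illustrative assumption, not the thesis procedure.

```python
import numpy as np

def multiplier_bootstrap_ball(x, alpha=0.05, n_boot=2000, seed=None):
    """Multiplier-bootstrap confidence set for a multivariate mean.

    x : (n, p) data matrix. Returns (center, radius) such that the
    Euclidean ball around `center` has approximate coverage 1 - alpha.
    Weights are i.i.d. N(1, 1), so the weighted, centred sum mimics the
    fluctuation of mean(x) around the true mean.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    xbar = x.mean(axis=0)
    centred = x - xbar
    stats = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.normal(1.0, 1.0, size=n)            # multiplier weights
        stats[b] = np.linalg.norm(centred.T @ (w - 1.0) / n)
    return xbar, np.quantile(stats, 1 - alpha)
```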
