31

Núcleos de inflação no Brasil e poder preditivo da inflação total / Core Inflation Measures in Brazil and Predictive Power for Headline Inflation

Litvac, Basiliki Theophane Calochorios 05 February 2013 (has links)
This work evaluates, for the Brazilian case, one of the most important properties expected of a core inflation measure: being a good predictor of future headline inflation. Two benchmark models built from monthly IPCA data were compared with six VAR models, one for each core measure calculated by the Central Bank of Brazil. Forecasting performance was assessed by comparing mean squared errors and by applying the Diebold-Mariano (1995) test of predictive accuracy. The evidence indicates that, by the criteria used in this work, the current set of core measures calculated by the Central Bank does not satisfy this desired property.
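The forecast comparison described in this abstract can be outlined in code. Below is a minimal sketch of the Diebold-Mariano test on two competing forecast-error series, assuming squared-error loss and a normal approximation; the error series and all names are illustrative, not the thesis data.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano (1995) test of equal predictive accuracy under squared-error loss.

    e1, e2 : forecast-error series of the two competing models (same length)
    h      : forecast horizon; h-1 autocovariances enter the long-run variance.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    T = d.size
    d_bar = d.mean()
    dc = d - d_bar
    gamma = [np.sum(dc[k:] * dc[:T - k]) / T for k in range(h)]
    lrv = gamma[0] + 2.0 * sum(gamma[1:])           # truncated long-run variance
    dm_stat = d_bar / np.sqrt(lrv / T)
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(dm_stat)))
    return dm_stat, p_value

# hypothetical forecast errors: benchmark IPCA model vs. a core-based VAR
rng = np.random.default_rng(0)
e_benchmark = rng.normal(0.0, 0.30, 60)
e_core_var = rng.normal(0.0, 0.35, 60)
print(diebold_mariano(e_benchmark, e_core_var, h=1))
```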
32

Econometrics on interactions-based models: methods and applications

Liu, Xiaodong 22 June 2007 (has links)
No description available.
33

Vliv sekuritizace na dynamiku cen bydlení ve Španělsku / Impact of Securitization on House Price Dynamics in Spain

Hejlová, Hana January 2014 (has links)
The thesis tries to explain the different nature of the dynamics during the upward and downward parts of the last house price cycle in Spain, which was characterized by important rigidities. Covered bonds are introduced as an instrument that may accelerate a house price boom, while also serving as a source of correction to overvalued house prices in a downturn. Under serious economic stress, a lack of investment opportunities motivates investors to buy covered bonds because of the strong guarantees they provide, which may in turn help revitalize the credit and housing markets. To address such a regime shift, house price dynamics are modelled within a framework of mutually related house price, credit and business cycles using a smooth transition vector autoregressive model. Linear behaviour of the system is rejected, indicating the need to model house prices in a nonlinear framework. The importance of modelling house prices in the context of credit and business cycles is also confirmed. Possible causality from the issuance of covered bonds to house price dynamics was identified in this nonlinear structure. Finally, a threat to financial stability resulting from rising asset encumbrance in both the upward and downward parts of the house price cycle was identified, stressing the need to model the impact of covered bonds on house prices in...
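As a minimal sketch of the regime-switching idea behind a smooth transition VAR, the snippet below blends two sets of VAR(1) coefficients through a logistic transition function of an observed variable. All coefficient values and the transition variable are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Smooth transition weight G in [0, 1]; gamma controls speed, c the threshold."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def stvar_step(y_prev, s, A1, A2, gamma=5.0, c=0.0):
    """One step of a two-regime smooth transition VAR(1):
    y_t = (1 - G(s)) * A1 @ y_{t-1} + G(s) * A2 @ y_{t-1}
    """
    G = logistic_transition(s, gamma, c)
    return (1.0 - G) * A1 @ y_prev + G * A2 @ y_prev

# illustrative 3-variable system: house prices, credit, output
A_boom = np.array([[0.9, 0.1, 0.05],
                   [0.2, 0.8, 0.05],
                   [0.0, 0.1, 0.70]])
A_bust = np.array([[0.6, 0.3, 0.10],
                   [0.1, 0.5, 0.10],
                   [0.0, 0.1, 0.70]])

y = np.array([1.0, 1.0, 1.0])
for s in np.linspace(-1, 1, 10):   # transition variable drifting from boom to bust regime
    y = stvar_step(y, s, A_boom, A_bust)
```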
34

Contextualizing the Dynamics of Affective Functioning: Conceptual and Statistical Considerations

Adolf, Janne K. 14 September 2018 (has links)
Recent affect research stresses the importance of micro-longitudinal data for understanding daily affective functioning, as such data allow affective dynamics and the processes potentially underlying them to be described. Dynamic longitudinal models are accordingly promoted more and more. In this dissertation, I address calls for an integration of contextual information into the study of daily affective functioning. Specifically, I modify popular dynamic models so that they incorporate contextual changes. In a first contribution, individuals are characterized as embedded in contexts. The proposed approach of fixed moderated time series analysis accounts for systemic reactions to contextual changes by estimating change in all parameters of a dynamic time series model conditional on contextual changes. It treats contextual changes as known and the related parameter changes as deterministic. Model specification and estimation are consequently facilitated and feasible in smaller samples, but information on which contextual factors matter, and how, is required. Applicable to single individuals, the approach permits an unconstrained exploration of inter-individual differences in contextualized affective dynamics. In a second contribution, individuals are characterized as interacting reciprocally with contexts. Implementing a process perspective on contextual changes, I model the dynamics of daily events using autoregressive models with Poisson measurement error.
Combining Poisson and Gaussian autoregressive models formalizes the dynamic interplay between contextual and affective processes, distinguishing not only unique from joint dynamics but also affective reactivity from situation selection, evocation, or anticipation. The models are set up hierarchically to capture inter-individual differences in intra-individual dynamics. Estimation is carried out via simulation-based techniques in the Bayesian framework.
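A minimal sketch of the measurement idea in the second contribution: a latent Gaussian AR(1) log-intensity drives Poisson-distributed daily event counts. The snippet only simulates such a process; the hierarchical Bayesian estimation used in the dissertation is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_poisson_ar(T=200, phi=0.7, mu=1.0, sigma=0.3):
    """Latent AR(1) on the log intensity; Poisson counts as the observed daily events.

    log lambda_t = mu + phi * (log lambda_{t-1} - mu) + N(0, sigma^2)
    y_t ~ Poisson(lambda_t)
    """
    log_lam = np.empty(T)
    log_lam[0] = mu
    for t in range(1, T):
        log_lam[t] = mu + phi * (log_lam[t - 1] - mu) + rng.normal(0.0, sigma)
    intensity = np.exp(log_lam)
    counts = rng.poisson(intensity)
    return counts, intensity

counts, intensity = simulate_poisson_ar()
```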
35

A Contribution to Multivariate Volatility Modeling with High Frequency Data

Marius, Matei 09 March 2012 (has links)
The thesis develops the topic of financial volatility forecasting in the context of high frequency data and follows a twofold line of research: proposing alternative models that would enhance volatility forecasting, and ranking existing and newly proposed volatility models. The objectives fall into three categories. The first is the proposal of a new method of volatility forecasting that follows a recently developed research line pointing to the use of intraday as well as overnight volatility measures, the need for new models arising from the question of whether adding measures of night volatility improves day volatility estimates. To this end, a class of bivariate realized GARCH models is proposed. The second is a methodology for forecasting multivariate daily volatility with autoregressive models that use daily (and, in the bivariate case, overnight) volatility estimates, as well as high frequency information when available. Principal component analysis (PCA) is applied to a class of univariate and bivariate realized GARCH-type models. The method extends an existing model (PC GARCH) that estimated a multivariate GARCH model by fitting univariate GARCH models to the principal components of the initial variables.
The third goal of the thesis is to rank the performance of existing and newly proposed volatility forecasting models, as well as the accuracy of the intraday measures used in the realized model estimations. Regarding the univariate realized models, the EGARCHX, realized EGARCH and realized GARCH(2,2) models consistently ranked better, while the non-realized GARCH and EGARCH models performed poorly in almost every case, which supports the conclusion that incorporating measures of intraday volatility improves the modelling. Among the bivariate realized models, the bivariate realized GARCH (partial and complete versions) and bivariate realized EGARCH models performed best, followed by the bivariate realized GARCH(2,2), bivariate EGARCH and bivariate EGARCHX models. When the bivariate versions were compared with the univariate ones to investigate whether using night volatility measures in the model equations improves volatility estimation, the bivariate models surpassed the univariate ones for specific methodologies, ranking criteria and stocks. The results were mixed, allowing the conclusion that the bivariate models are not totally inferior to their univariate counterparts and are good alternatives to be used in forecasting, together with the univariate models, for more reliable estimates. Finally, the PC realized models and PC bivariate realized models were estimated and ranked, and the improvements the PC methodology brings to high frequency multivariate modelling of stock returns were discussed. PC models were found to be highly effective in estimating the multivariate volatility of highly correlated stock assets, and suggestions were made on how investors could use them for portfolio selection.
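A minimal sketch of the PC GARCH idea referenced above, assuming the `arch` and `scikit-learn` packages are available: univariate GARCH(1,1) models are fitted to the principal components of a return panel and a time-varying covariance matrix is rebuilt from them. This is a simplified illustration of the general approach, not the realized GARCH extension estimated in the thesis; the return panel is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from arch import arch_model

def pc_garch_covariance(returns, n_components=2):
    """PC-GARCH-style sketch: fit univariate GARCH(1,1) models to the principal
    components of a return panel and rebuild a conditional covariance matrix.

    returns : (T, n) array of percentage returns
    """
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(returns)      # (T, k) principal components
    load = pca.components_                   # (k, n) loadings

    cond_var = np.empty_like(scores)
    for j in range(n_components):
        res = arch_model(scores[:, j], vol="GARCH", p=1, q=1).fit(disp="off")
        cond_var[:, j] = res.conditional_volatility ** 2

    # H_t = W' diag(h_t) W, ignoring the variance of the dropped components
    return np.einsum("tk,kn,km->tnm", cond_var, load, load)

rng = np.random.default_rng(2)
rets = rng.normal(0.0, 1.0, size=(500, 4))   # hypothetical 4-asset return panel
H = pc_garch_covariance(rets)
print(H.shape)                               # (500, 4, 4)
```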
36

Understanding patterns of aggregation in count data

Sebatjane, Phuti 06 1900 (has links)
The term aggregation refers to overdispersion, and the two are used interchangeably in this thesis. In addressing the problem of the prevalence of infectious parasite species faced by most rural livestock farmers, we model the distribution of faecal egg counts of 15 parasite species (13 internal parasites and 2 ticks) common in sheep and goats. Aggregation and excess zeroes are addressed through the use of generalised linear models. The abundance of each species was modelled using six different distributions: the Poisson, negative binomial (NB), zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), zero-altered Poisson (ZAP) and zero-altered negative binomial (ZANB), and their fits were compared. The excess-zero models (ZIP, ZINB, ZAP and ZANB) were found to fit better than the standard count models (Poisson and negative binomial) in all 15 cases. We further investigated how the distributional assumption affects aggregation and zero inflation. Aggregation and zero inflation (measured by the dispersion parameter k and the zero-inflation probability) were found to vary greatly with the distributional assumption; this in turn changed the fixed-effects structure. Serial autocorrelation between adjacent observations was then taken into account by fitting observation-driven time series models to the data. Simultaneously accounting for autocorrelation, overdispersion and zero inflation proved successful, as zero-inflated autoregressive models performed better than zero-inflated models in most cases. Apart from its contribution to scientific knowledge, predictability of parasite burden will help farmers plan effective disease management interventions. Researchers confronted with the task of analysing count data with excess zeroes can use the findings of this illustrative study as a guideline irrespective of their research discipline. Statistical methods from model selection and quantification of zero inflation through to accounting for serial autocorrelation are described and illustrated. / Statistics / M.Sc. (Statistics)
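A minimal sketch of the kind of model comparison described above, using `statsmodels` count models on simulated data with structural zeros; the covariate, sample size and zero-inflation setup are illustrative assumptions, not the thesis data, and the zero-altered (hurdle) variants are omitted.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson,
    ZeroInflatedNegativeBinomialP,
)

# hypothetical faecal egg counts with excess zeros
rng = np.random.default_rng(3)
n = 300
lam = np.exp(rng.normal(1.0, 0.5, n))
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(lam))   # ~40% structural zeros
X = sm.add_constant(np.log(lam))                         # single illustrative covariate

models = {
    "Poisson": sm.Poisson(y, X),
    "NegBin": sm.NegativeBinomial(y, X),
    "ZIP": ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))),
    "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))),
}

# compare fits by AIC, as a stand-in for the fuller comparison in the thesis
for name, model in models.items():
    res = model.fit(disp=False, maxiter=200)
    print(f"{name:7s} AIC = {res.aic:.1f}")
```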
