61
Credit risk & forward price models / Gaspar, Raquel M. January 2006
This thesis consists of three distinct parts. Part I introduces the basic concepts and the notion of general quadratic term structures (GQTS) essential in some of the following chapters. Part II focuses on credit risk models and Part III studies forward price term structure models using both the classical and the geometrical approach. Part I is organized as follows. Chapter 1 is divided into two main sections. The first section presents some of the fundamental concepts which are a prerequisite for the papers that follow. All of the concepts and results are well known and hence the section can be regarded as an introduction to notation and the basic principles of arbitrage theory. The second section of the chapter is of a more technical nature and its purpose is to summarize some key results on point processes and differential geometry that will be used later in the thesis. For finite dimensional factor models, Chapter 2 studies GQTS. These term structures include, as special cases, the affine term structures and Gaussian quadratic term structures previously studied in the literature. We show, however, that there are other, non-Gaussian, quadratic term structures and derive sufficient conditions for the existence of these GQTS for zero-coupon bond prices. In Part II we focus on credit risk models. In Chapter 3 we propose a reduced-form model for default that allows us to derive closed-form solutions for all the key ingredients in credit risk modeling: risk-free bond prices, defaultable bond prices (with and without stochastic recovery) and survival probabilities. We show that all these quantities can be represented in general exponential quadratic forms, despite the fact that the intensity of default is allowed to jump, producing shot-noise effects. In addition, we show how to price defaultable digital puts, CDSs and options on defaultable bonds. Further on, we study a model for portfolio credit risk that considers both firm-specific and systematic risk. The model generalizes the approach of Duffie and Garleanu (2001). We find that the model produces realistic default correlation and clustering effects. Next, we show how to price CDOs, options on CDOs and how to incorporate the link to currently proposed credit indices. In Chapter 4 we start by presenting a reduced-form multiple-default model and derive abstract results on the influence of a state variable $X$ on credit spreads when both the intensity and the loss quota distribution are driven by $X$. The aim is to apply the results to a real-life situation, namely, the influence of macroeconomic risks on the term structure of credit spreads. There is increasing support in the empirical literature for the proposition that both the probability of default (PD) and the loss given default (LGD) are correlated and driven by macroeconomic variables. Paradoxically, there has been very little effort in the theoretical literature to develop credit risk models that would take this into account. One explanation might be the additional complexity this leads to, even for the "tractable" default intensity models. The goal of this paper is to develop the theoretical framework necessary to deal with this situation and, through numerical simulation, to understand the impact of macroeconomic factors on the term structure of credit spreads. In the proposed setup, periods of economic depression are both periods of higher default intensity and lower recovery, producing a business cycle effect.
Furthermore, we allow for the possibility of an index volatility that depends negatively on the index level and show that, when we include this realistic feature, the impacts on the credit spread term structure are accentuated. Part III studies forward price term structure models. Forward prices differ from futures prices in stochastic interest rate settings and become an interesting object of study in their own right. Forward prices with different maturities are martingales under different forward measures. This mathematical property implies that the term structure of forward prices is always linked to the term structure of bond prices, and this dependence makes forward price term structure models relatively harder to handle. For finite dimensional factor models, Chapter 5 applies the concept of GQTS to the term structure of forward prices. We show how the forward price term structure equation depends on the term structure of bond prices. We then exploit this connection and show that even in quadratic short rate settings we can have affine term structures for forward prices. Finally, we show how the study of futures prices is naturally embedded in the study of forward prices, and that the difference between the two term structures may be deterministic in some (non-trivial) stochastic interest rate settings. In Chapter 6 we study a fairly general Wiener driven model for the term structure of forward prices. The model, under a fixed martingale measure $Q$, is described using two infinite dimensional stochastic differential equations (SDEs). The first system is a standard HJM model for (forward) interest rates, driven by a multidimensional Wiener process $W$. The second system is an infinite SDE for the term structure of forward prices on some specified underlying asset driven by the same $W$. Since the zero-coupon bond volatilities enter the drift part of the SDE for these forward prices, the interest rate system is needed as input to the forward price system. Given this setup, we use the Lie algebra methodology of Björk et al. to investigate under what conditions, on the volatility structure of the forward prices and/or interest rates, the inherently (doubly) infinite dimensional SDE for forward prices can be realized by a finite dimensional Markovian state space model. / Diss. Stockholm : Handelshögskolan, 2006
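For orientation, the general quadratic term structures referred to in this abstract are of exponential-quadratic form; in generic notation (not the thesis's own), the zero-coupon bond price driven by a factor process $X_t$ reads

$$ p(t,T) \;=\; \exp\!\Big( A(t,T) + B(t,T)^{\top} X_t + X_t^{\top} C(t,T)\, X_t \Big), $$

with deterministic coefficient functions $A$, $B$ and $C$; the affine term structure case is recovered when $C \equiv 0$.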
62
Australian takeover waves : a re-examination of patterns, causes and consequences / Duong, Lien Thi Hong. January 2009
This thesis provides a more precise characterisation of the patterns, causes and consequences of takeover activity in Australia over three decades, from 1972 to 2004. The first contribution of the thesis is to characterise the time series behaviour of takeover activity. It is found that linear models do not adequately capture the structure of merger activity; a non-linear two-state Markov switching model works better. A key contribution of the thesis is, therefore, to propose an approach that combines a state-space model with a Markov regime-switching model to describe takeover activity. Experimental results based on our approach show an improvement over other existing approaches. We find four waves, one in the 1980s, two in the 1990s, and one in the 2000s, with an expected duration of each wave state of approximately two years. The second contribution is an investigation of the extent to which financial and macro-economic factors predict takeover activity after controlling for the probability of takeover waves. A main finding is that while stock market boom periods are empirically associated with takeover waves, the underlying driver is the level of interest rates. A low interest rate environment is associated with higher aggregate takeover activity. This relationship is consistent with the liquidity argument of Shleifer and Vishny (1992) that takeover waves are symptoms of a lower cost of capital. Replicating the analysis for the biggest takeover market in the world, the US, reveals a remarkable consistency of results. In short, the Australian findings are not idiosyncratic. Finally, the implications for target and bidder firm shareholders are explored via an investigation of takeover bid premiums and long-term abnormal returns, separately for the wave and non-wave periods. This represents the third contribution to the literature on takeover waves. Findings reveal that target shareholders earn abnormally positive returns in takeover bids and that bid premiums are slightly lower in the wave periods. Analysis of the returns to bidding firm shareholders suggests that the lower premiums earned by target shareholders in the wave periods may simply reflect lower total economic gains, at the margin, to takeovers made in the wave periods. It is found that bidding firms earn normal post-takeover returns (relative to a portfolio of firms matched on size and survival) if their bids are made in the non-wave periods. However, bidders who announce their takeover bids during the wave periods exhibit significant under-performance. For mergers that took place within waves, there is no difference in bid premiums, nor is there a difference in the long-run returns of bidders involved in the first half and second half of the waves. We find that none of the theories of merger waves (managerial, mis-valuation and neoclassical) can fully account for the Australian takeover waves and their effects. Instead, our results suggest that a combination of these theories may provide a better explanation. Given that normal returns are observed for acquiring firms taken as a whole, we are more inclined to uphold the neoclassical argument for merger activity. However, the evidence is not entirely consistent with neoclassical rational models: the under-performance effect during the wave states is consistent with herding behaviour by firms.
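The abstract combines a state-space model with a two-state Markov switching model. As a much simpler illustration of the regime-switching ingredient only, the sketch below fits a plain two-regime Markov switching model to a simulated monthly activity series; the data, series construction and parameter choices are placeholders, not the Australian takeover data or the thesis's combined specification.

```python
# Hypothetical illustration: a two-regime Markov switching model for a monthly
# count of takeover announcements (all data below are simulated stand-ins).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
calm = rng.poisson(5, size=240)                           # quiet-state counts
wave = rng.poisson(15, size=240)                          # wave-state counts
regime = (np.sin(np.arange(240) / 12) > 0.6).astype(int)  # crude alternation of regimes
activity = pd.Series(np.where(regime == 1, wave, calm), dtype=float)

# Two-state Markov switching mean (and variance) model, a simple stand-in for
# the richer state-space + regime-switching specification described above.
model = sm.tsa.MarkovRegression(activity, k_regimes=2, trend="c",
                                switching_variance=True)
result = model.fit()

print(result.summary())
print("Expected regime durations (periods):", result.expected_durations)
# Smoothed probability of being in regime 1 at each date; which regime is the
# "wave" regime can be read off the fitted constants in the summary.
print(result.smoothed_marginal_probabilities[1].head())
```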
63
[en] ESTIMATION OF IBNR (INCURRED BUT NOT REPORTED) PROVISIONS IN INSURANCE VIA MODELS WITH TIME-VARYING COEFFICIENTS / [pt] ESTIMAÇÃO DE PROVISÕES IBNR (INCURRED BUT NOT REPORTED) EM MERCADO DE SEGUROS VIA MODELOS COM COEFICIENTES VARIANTES NO TEMPO / DAIANE RODRIGUES DOS SANTOS. 06 October 2017
[pt] Esta tese apresenta duas contribuições para a modelagem e previsão de sinistros já ocorridos e ainda não avisados (Incurred But Not Reported – IBNR), quando organizados numa estrutura de dados conhecida como triângulo de run-off. Ambas as contribuições são baseadas em arcabouços gerais para a construção de modelos para séries temporais com coeficientes variantes no tempo. Em nossa primeira contribuição desenvolvemos a extensão multivariada do modelo em espaço de estado proposto por Atherino em 2008. A partir dessa extensão é possível modelar simultaneamente um ou mais triângulos de run-off associados às diversas coberturas de uma seguradora, levando-se em consideração a dependência entre os distintos triângulos, capturada pela estrutura da matriz de variância-covariância do modelo SUTSE, e a dependência entre as células de cada triângulo de run-off, capturada pelas componentes de nível e de periodicidade, de acordo com a proposta de Atherino et al. (2010). Em nossa segunda contribuição desenvolvemos um arcabouço geral para a modelagem univariada de triângulos de run-off a partir da estrutura dos modelos GAS (Generalized Autoregressive Score) desenvolvidos por Creal et al. (2013). Esse arcabouço, bastante flexível, permite a escolha de qualquer distribuição para as entradas do triângulo de run-off, considerando que os seus parâmetros variem ao longo do período de origem ou de desenvolvimento. Em particular consideramos as distribuições gama e log-normal. Nossos resultados foram comparados com os obtidos através do método chain ladder (Mack, 1993), utilizado como benchmark na indústria de seguros. O teste de Diebold e Mariano (1995) evidenciou que os modelos propostos geram melhores previsões, comparadas às previsões do método chain ladder. / [en] This thesis presents two contributions to the modeling and prediction of a type of claims in the insurance industry known as IBNR (Incurred But Not Reported) when these are organized in a data structure known as the run-off triangle. Both contributions are based on general frameworks for building models
for time series with time varying coefficients. In our first contribution we developed the multivariate extension of the state space model proposed by Atherino in 2008. From this extension it is possible to model simultaneously one or more run-off triangles associated with different coverages from an insurer,
taking into account the dependence between different triangles, captured by the structure of the variance-covariance matrix of the SUTSE model, while the dependence between the cells of each run-off triangle is captured by the components of level and periodicity, according to the model proposed by Atherino
(2008). In our second contribution we developed a general framework for univariate modeling of run-off triangles using the structure of GAS models (Generalized Autoregressive Score) developed by Creal et al. (2013). This very flexible framework allows one to choose any distribution for the entries of the run-off triangle, considering that its parameters can vary over the period of origin or the period of development. In particular we have considered both gamma and lognormal distributions. Our results were compared with those obtained by the chain ladder method (Mack, 1993), used as a benchmark in the insurance industry. The Diebold and Mariano (1995) test showed that the proposed models produced better predictions than the chain ladder method.
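The chain ladder method of Mack (1993) is used above only as the industry benchmark. A minimal sketch of that deterministic benchmark, on a made-up cumulative run-off triangle, is:

```python
# Deterministic chain ladder on a small cumulative run-off triangle.
# Rows = origin periods, columns = development periods; np.nan = not yet observed.
import numpy as np

triangle = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2100., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

n = triangle.shape[1]
factors = []
for j in range(n - 1):
    mask = ~np.isnan(triangle[:, j + 1])            # origins observed in both columns
    factors.append(triangle[mask, j + 1].sum() / triangle[mask, j].sum())

# Project unobserved cells with the volume-weighted development factors.
completed = triangle.copy()
for i in range(triangle.shape[0]):
    for j in range(n - 1):
        if np.isnan(completed[i, j + 1]):
            completed[i, j + 1] = completed[i, j] * factors[j]

ultimate = completed[:, -1]
latest = np.array([row[~np.isnan(row)][-1] for row in triangle])  # current diagonal
print("Development factors:", np.round(factors, 3))
print("Estimated reserve per origin period:", np.round(ultimate - latest, 1))
```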
64
Metodologia para identificação de sistemas em espaço de estados por meio de excitações pulsadas. / Methodology for identifying state space systems by means of pulsed excitations. / LIMA, Rafael Bezerra Correia. 30 July 2018
Nesse trabalho são apresentadas contribuições na área de identificação de sistemas representados em espaço de estados. É proposta uma metodologia completa para estimação de modelos que representem as principais dinâmicas de processos industriais. O fluxo natural dos procedimentos de identificação consiste na coleta experimental dos dados, seguido pela escolha dos modelos candidatos e da utilização de um critério de ajuste que selecione o melhor modelo possível. Nesse sentido é proposta uma metodologia para estimativa de modelos em espaço de estados, utilizando excitações pulsadas. A abordagem desenvolvida combina algoritmos precisos e eficientes com experimentos rápidos, adequados a ambientes industriais. O projeto das excitações é realizado em tempo real, por meio de informações coletadas em um curto experimento inicial, baseado em uma única oscilação de uma estrutura realimentada por um relê. Esse mecanismo possibilita uma estimativa preliminar do atraso e da constante de tempo dominante do sistema. O método de identificação proposto é baseado na teoria de realizações de Kalman. É apresentada uma reformulação do problema de realizações clássico, para comportar sinais de entrada pulsados. Essa abordagem se mostra computacionalmente eficiente, assim como apresenta resultados semelhantes aos métodos de benchmark. A técnica possibilita também a estimativa de atrasos de transporte e a inserção de conhecimentos prévios por meio de um problema de otimização com restrições via LMI (Linear Matrix Inequalities). Em muitos casos, somente as características principais do sistema são relevantes em um projeto de sistema de controle. Portanto é proposta uma técnica para obtenção de modelos de primeira ordem com atraso, a partir da redução de modelos balanceados em espaço de estados. Por fim, todas as contribuições discutidas nesse trabalho de tese são validadas em uma série de plantas experimentais em escala de laboratório, projetadas e construídas com o intuito de emular o cotidiano operacional de instalações industriais reais. / This work introduces contributions to the field of system identification for state space models. A complete methodology is proposed for estimating models that capture the main dynamics of industrial processes. The natural flow of an identification procedure rests on the empirical collection of data, followed by the choice of candidate models and the use of an adjustment criterion that selects the best model among the contenders. In this sense, a new methodology is proposed for estimating state space models using pulsed excitation signals. The developed approach combines accurate and efficient algorithms with quick experiments suitable for the industrial environment. The excitation design is performed in real time by means of information collected in a short initial experiment, based on a single oscillation of a relay feedback structure. This mechanism allows a preliminary estimate of both the delay and the dominant time constant of the system. The proposed identification method is based on Kalman's realization theory. The thesis introduces a reformulation of the classic realization problem so that it can admit pulsed input signals. This approach is computationally efficient and provides results similar to those obtained with the benchmark methods. Moreover, the technique allows the estimation of transport delays and the insertion of prior knowledge by means of a constrained optimization problem via linear matrix inequalities (LMIs). In many cases only the main characteristics of the system are relevant for control system design. Therefore a technique is proposed for obtaining first-order models with time delay from the reduction of balanced state space models. Lastly, all the contributions of this thesis are validated on a series of pilot-scale plants, designed and built to emulate the day-to-day operation of real industrial plants.
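The thesis builds on Kalman's realization theory. As background only, the sketch below implements the classical Ho-Kalman realization step that recovers a discrete-time state-space model (A, B, C) from impulse-response (Markov) parameters; the pulsed-excitation reformulation, delay estimation and LMI constraints described in the abstract are not reproduced here, and the example system is made up.

```python
# Ho-Kalman realization from SISO Markov parameters h[i] = C A^i B, i = 0, 1, ...
import numpy as np

def ho_kalman(markov_params, order):
    """Return (A, B, C) of order `order` from the impulse-response sequence."""
    h = np.asarray(markov_params, dtype=float)
    r = len(h) // 2
    H0 = np.array([[h[i + j] for j in range(r)] for i in range(r)])       # Hankel matrix
    H1 = np.array([[h[i + j + 1] for j in range(r)] for i in range(r)])   # shifted Hankel
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    S_half = np.diag(np.sqrt(s))
    S_half_inv = np.diag(1.0 / np.sqrt(s))
    obs = U @ S_half                   # extended observability matrix
    ctr = S_half @ Vt                  # extended controllability matrix
    A = S_half_inv @ U.T @ H1 @ Vt.T @ S_half_inv
    B = ctr[:, :1]                     # first column (single input)
    C = obs[:1, :]                     # first row (single output)
    return A, B, C

# Quick check on a known first-order system x_{k+1} = 0.8 x_k + u_k, y_k = x_k,
# whose Markov parameters are h[i] = 0.8**i.
h = [0.8 ** k for k in range(20)]
A, B, C = ho_kalman(h, order=1)
print(np.round(A, 3), np.round(B, 3), np.round(C, 3))   # A should be close to 0.8
```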
65
Aspects of bivariate time series / Seeletse, Solly Matshonisa. 11 1900
Exponential smoothing algorithms are very attractive in practical settings such as industry. When considering bivariate exponential smoothing methods, in addition to the properties of univariate methods, additional properties give insight into the relationships between the two components of a process, and also into the overall structure of the model. It is important to study these properties, but whatever the merits of bivariate exponential smoothing algorithms, exponential smoothing algorithms are nonstatistical/nonstochastic, and studying the properties within exponential smoothing alone may be worthless. As an alternative approach, the (bivariate) ARIMA and structural models, which are classes of statistical models, are shown to generalize the exponential smoothing algorithms. We study these properties within these classes, as they have implications for exponential smoothing algorithms. Forecast properties are studied using the state space model and the Kalman filter. A comparison of the ARIMA and structural models completes the study. / Mathematical Sciences / M. Sc. (Statistics)
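One concrete instance of the link between exponential smoothing and statistical state-space models discussed above: simple (univariate) exponential smoothing coincides, in steady state, with the Kalman filter of a local level structural model. The sketch below checks this numerically on simulated data; all values are illustrative, not taken from the dissertation.

```python
# Simple exponential smoothing vs. the Kalman filter of a local level model.
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0, 0.5, 200)) + rng.normal(0, 1.0, 200)   # noisy level series

def ses(y, alpha):
    """Simple exponential smoothing: l_t = alpha*y_t + (1 - alpha)*l_{t-1}."""
    level = np.empty_like(y)
    level[0] = y[0]
    for t in range(1, len(y)):
        level[t] = alpha * y[t] + (1 - alpha) * level[t - 1]
    return level

def local_level_kalman(y, var_eps, var_eta):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t."""
    a, p = y[0], var_eps                 # rough initialization of state and variance
    filtered, gains = [a], [np.nan]
    for t in range(1, len(y)):
        p_pred = p + var_eta             # predict
        k = p_pred / (p_pred + var_eps)  # Kalman gain
        a = a + k * (y[t] - a)           # update
        p = (1 - k) * p_pred
        filtered.append(a)
        gains.append(k)
    return np.array(filtered), np.array(gains)

filtered, gains = local_level_kalman(y, var_eps=1.0, var_eta=0.25)
alpha = gains[-1]                        # steady-state gain plays the role of alpha
print("steady-state Kalman gain ~ smoothing constant:", round(alpha, 3))
print("max |SES - Kalman| after burn-in:",
      np.abs(ses(y, alpha)[50:] - filtered[50:]).max())
```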
66
Modelo dinâmico de Nelson Siegel e política econômica / Andrade, Juliane Aparecida Lopes de. 16 August 2018
Esse trabalho apresenta análise combinada entre a macroeconomia e a estrutura a termo das taxas de juros, através de duas modelagens distintas. Primeiramente, utiliza-se o modelo Novo Keynesiano de pequeno porte, que é combinado com o modelo dinâmico de Nelson-Siegel. Em seguida estima-se o modelo dinâmico de Nelson-Siegel integrado com variáveis macroeconômicas. São empregados dados mensais referentes aos contratos futuros de DI, de Setembro de 2002 a Dezembro de 2017. A comparação das modelagens mostra que o modelo combinado apresenta resultados mais consistentes do que o modelo integrado. / This paper presents a combined analysis of macroeconomics and the term structure of interest rates, through two different models. Firstly, a small New Keynesian model is used, which is combined with the dynamic Nelson-Siegel model. Then the dynamic Nelson-Siegel model integrated with macroeconomic variables is estimated. Monthly data on DI futures contracts from September 2002 to December 2017 are used. The comparison of the two approaches shows that the combined model presents more consistent results than the integrated model.
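For reference, the dynamic Nelson-Siegel specification mentioned above (in the spirit of Diebold and Li) writes the time-$t$ yield of maturity $\tau$ as

$$ y_t(\tau) \;=\; \beta_{1t} + \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau} + \beta_{3t}\!\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right) + \varepsilon_t(\tau), $$

where $\beta_{1t}$, $\beta_{2t}$ and $\beta_{3t}$ are latent level, slope and curvature factors whose dynamics (a VAR, possibly augmented with macroeconomic variables) form the state equation of a state-space model; the notation is generic rather than taken from the dissertation.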
67
É possível clonar fundos de investimento? / Singer, Alice Sobral. 31 January 2013
Esse estudo foi motivado pela falta de bons fundos de investimento multimercado abertos para captação no Brasil e tem como objetivo analisar a viabilidade de utilizar a análise de estilo baseada em retorno para clonar retornos e comportamento de determinados fundos de investimento multimercado do mercado brasileiro. Modelos já testados no exterior e no Brasil foram pesquisados e optou-se por adaptar o modelo linear proposto por LIMA e VICENTE (2007). Verificou-se que o modelo de espaço de estados é mais adequado para clonar retornos de determinados fundos de investimento do que o modelo de regressão com parâmetros fixos. Resultados animadores foram obtidos para quatro dos cinco fundos analisados nesse estudo. / This work was motivated by the lack of good hedge funds open to new investments in Brazil, and it aims to analyze the feasibility of using return-based style analysis to clone the returns and behavior of certain Brazilian hedge funds. Models already tested abroad and in Brazil were investigated, and it was decided to adapt the linear model proposed by LIMA and VICENTE (2007). It was found that the state space model is more suitable for cloning the returns of certain hedge funds than fixed-parameter regression models. Encouraging results were obtained for four of the five funds analyzed in this study.
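A hedged sketch of the idea described above: return-based style analysis in which the clone's factor exposures follow random walks and are tracked with a Kalman filter, so the clone re-weights a set of investable factors over time. The factors, fund and parameters below are simulated placeholders, not the Brazilian funds or the exact model of LIMA and VICENTE (2007).

```python
# Time-varying style analysis via a Kalman filter with random-walk betas.
import numpy as np

rng = np.random.default_rng(42)
T, k = 500, 3
factors = rng.normal(0, 0.01, size=(T, k))                       # daily factor returns
true_beta = np.cumsum(rng.normal(0, 0.02, size=(T, k)), axis=0) + 1.0
fund = (factors * true_beta).sum(axis=1) + rng.normal(0, 0.002, T)

def kalman_tv_betas(y, X, q=1e-4, r=1e-5):
    """y_t = X_t' beta_t + e_t,  beta_t = beta_{t-1} + w_t (random walk)."""
    n = X.shape[1]
    beta = np.zeros(n)
    P = np.eye(n)                                                 # vague prior variance
    betas = np.zeros_like(X)
    for t in range(len(y)):
        P = P + q * np.eye(n)                                     # predict
        x = X[t]
        s = x @ P @ x + r                                         # innovation variance
        k_gain = P @ x / s
        beta = beta + k_gain * (y[t] - x @ beta)                  # update
        P = P - np.outer(k_gain, x) @ P
        betas[t] = beta
    return betas

betas = kalman_tv_betas(fund, factors)
clone = (factors * betas).sum(axis=1)                             # in-sample clone returns
print("tracking correlation after burn-in:",
      round(np.corrcoef(clone[50:], fund[50:])[0, 1], 3))
```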
68
Variabilidade de solos hidromórficos: uma abordagem de espaço de estados / Variability of hydromorphic soils: a state space approach. / Aquino, Leandro Sanzi. 25 February 2010
Soil land leveling is a technique used in lowland areas that aims to improve agricultural use by facilitating the management of water for both irrigation and drainage, the establishment of agricultural practices, and crop harvesting.
However, it causes changes in the physical environment where the plant grows, and
many studies have sought to identify the effect of this practice on the structure of soil spatial variability and on the relationship between the hydric-physical and chemical soil attributes. Thus, the objective of this study was to identify and characterize the structure of spatial variability of the hydric-physical and chemical attributes of a lowland soil, before and after land leveling, and to study the relationship between these soil attributes through an autoregressive state space model. In an experimental area of 0.81 ha belonging to Embrapa Clima Temperado, located in Capão do Leão county, state of Rio Grande do Sul, Brazil, a regular grid of 100 points spaced 10 m apart in both directions was established. At each point, disturbed and undisturbed soil
samples were collected at the depth of 0-0.20 m to determine, before and after land
leveling, the following soil attributes: clay, silt and sand contents, soil macroporosity,
soil microporosity and soil total porosity, soil bulk density and soil water content at
field capacity and permanent wilting point, soil organic carbon and cation exchange
capacity. All data sets were organized into a spreadsheet in the form of a spatial
transect consisting of 100 points, ordered following the slope gradient of the area resulting from the soil land leveling. Autocorrelograms and crosscorrelograms were built to evaluate the structure of spatial correlation of all soil attributes and served as the basis for the selection of variables in each autoregressive state-space model. The results show that the soil land leveling changed the structure of soil spatial dependence of all variables and between them as well. The soil cation exchange capacity and soil microporosity were the variables that entered the largest number of state space models, before and after soil land leveling. The contribution of each variable at position i-1 to the estimate of its value at position i increased for the sand content, silt content, soil bulk density, soil microporosity, soil macroporosity, soil water content at permanent wilting point, soil organic carbon and cation exchange capacity variables, and decreased for the soil water content at field capacity variable after land leveling. Soil land leveling improved the state space
model performance for soil organic carbon content, sand content, soil bulk density,
soil total porosity and soil water content at field capacity and permanent wilting point
variables. The worst state space model performances, after soil land leveling, were
found taking silt content, soil microporosity and cation exchange capacity variables
as response variables. The best state space model performance, before land
leveling, was obtained taking the soil total porosity as response variable. / A sistematização do solo é uma técnica utilizada em regiões planas, com
características de várzea, e tem por objetivo aperfeiçoar o uso agrícola facilitando o
manejo da água tanto de irrigação como de drenagem, as operações de implantação
da lavoura, de tratos culturais e de colheita. No entanto, a sistematização do solo
provoca alterações no ambiente físico onde a planta se desenvolve, sendo que
muitos estudos têm buscado identificar o efeito dessa prática na estrutura de
variabilidade espacial e no relacionamento entre os atributos físico-hídricos e
químicos do solo. Dessa forma, o objetivo deste trabalho foi identificar e caracterizar
a estrutura de variabilidade espacial dos atributos físico-hídricos e químicos de um
solo de várzea, antes e depois da sistematização, assim como estudar o
relacionamento entre esses atributos por meio de um modelo autoregressivo de
espaço de estados. Em uma área experimental de 0,81 ha pertencente a Embrapa
Clima Temperado, Capão do Leão-RS, foi estabelecida uma malha regular de 100
pontos, espaçados de 10 m entre si em ambas as direções. Em cada ponto foram
coletadas amostras de solo deformadas e com estrutura preservada na profundidade
de 0-0,20 m para a determinação, antes e depois da sistematização, dos teores de
argila, silte e areia, macroporosidade, microporosidade e porosidade total, densidade
do solo, conteúdo de água retido na capacidade de campo e ponto de murcha
permanente, carbono orgânico e capacidade de troca de cátions. Os dados foram
organizados em uma planilha de cálculo na forma de uma transeção espacial
composta de 100 pontos e foram ordenados seguindo o gradiente de declividade da
área resultante do processo de sistematização do solo. Para avaliar a estrutura de
correlação espacial foram construídos autocorrelogramas e crosscorrelogramas que
serviram de subsídio para a seleção de variáveis em cada um dos modelos
autoregressivos de espaço de estados. Os resultados mostram que a sistematização
do solo alterou a estrutura de dependência espacial tanto da variável como entre as
variáveis deste estudo. A capacidade de troca de cátions e a microporosidade do
solo foram as variáveis que compuseram o maior número de modelos de espaço de
estados, antes e depois da sistematização. A contribuição da variável na posição i-1
na estimativa na posição i, por meio do modelo autoregressivo de espaço de
estados, aumentou com a sistematização para as variáveis teor de areia, teor de
silte, densidade do solo, microporosidade, macroporosidade, conteúdo de água no
solo retido no ponto de murcha permanente, carbono orgânico e da capacidade de
troca de cátions; e diminuiu para a variável conteúdo de água no solo retido na
capacidade de campo. A sistematização do solo melhorou a estimativa, por meio dos
modelos de espaço de estados, das variáveis carbono orgânico, teor de areia,
densidade do solo, macroporosidade e do conteúdo de água no solo retido na
capacidade de campo e no ponto de murcha permanente, sendo o modelo da
variável porosidade total, antes da sistematização, que apresentou o melhor
desempenho. Já os piores desempenhos dos modelos, depois da sistematização do
solo, foram encontrados quando utilizadas as variáveis teor de silte, microporosidade
e capacidade de troca de cátions como resposta.
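A generic form of the first-order autoregressive state-space model used for spatial transects of this kind is, with $i$ indexing position along the transect,

$$ \mathbf{Y}_i = \mathbf{M}_i\,\mathbf{Z}_i + \mathbf{v}_i, \qquad \mathbf{Z}_i = \boldsymbol{\Phi}\,\mathbf{Z}_{i-1} + \mathbf{u}_i, $$

where $\mathbf{Y}_i$ collects the (scaled) observed soil attributes at position $i$, $\mathbf{Z}_i$ is the underlying state vector, $\boldsymbol{\Phi}$ is the matrix of transition coefficients whose entries quantify the contribution of each attribute at position $i-1$ to the estimate at position $i$, and $\mathbf{v}_i$, $\mathbf{u}_i$ are the observation and state noises; the notation is generic, not copied from the dissertation.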
69
Non-parametric methodologies for reconstruction and estimation in nonlinear state-space models / Méthodologies non-paramétriques pour la reconstruction et l'estimation dans les modèles d'états non linéaires / Chau, Thi Tuyet Trang. 26 February 2019
Le volume des données disponibles permettant de décrire l'environnement, en particulier l'atmosphère et les océans, s'est accru à un rythme exponentiel. Ces données regroupent des observations et des sorties de modèles numériques. Les observations (satellite, in situ, etc.) sont généralement précises mais sujettes à des erreurs de mesure et disponibles avec un échantillonnage spatio-temporel irrégulier qui rend leur exploitation directe difficile. L'amélioration de la compréhension des processus physiques associée à la plus grande capacité des ordinateurs ont permis des avancées importantes dans la qualité des modèles numériques. Les solutions obtenues ne sont cependant pas encore de qualité suffisante pour certaines applications et ces méthodes demeurent lourdes à mettre en œuvre. Filtrage et lissage (les méthodes d'assimilation de données séquentielles en pratique) sont développés pour aborder ces problèmes. Ils sont généralement formalisés sous la forme d'un modèle espace-état, dans lequel on distingue le modèle dynamique qui décrit l'évolution du processus physique (état), et le modèle d'observation qui décrit le lien entre le processus physique et les observations disponibles. Dans cette thèse, nous abordons trois problèmes liés à l'inférence statistique pour les modèles espace-états: reconstruction de l'état, estimation des paramètres et remplacement du modèle dynamique par un émulateur construit à partir de données. Pour le premier problème, nous introduirons tout d'abord un algorithme de lissage original qui combine les algorithmes Conditional Particle Filter (CPF) et Backward Simulation (BS). Cet algorithme CPF-BS permet une exploration efficace de l'état de la variable physique, en raffinant séquentiellement l'exploration autour des trajectoires qui respectent le mieux les contraintes du modèle dynamique et des observations. Nous montrerons sur plusieurs modèles jouets que, à temps de calcul égal, l'algorithme CPF-BS donne de meilleurs résultats que les autres CPF et l'algorithme EnKS stochastique qui est couramment utilisé dans les applications opérationnelles. Nous aborderons ensuite le problème de l'estimation des paramètres inconnus dans les modèles espace-état. L'algorithme le plus usuel en statistique pour estimer les paramètres d'un modèle espace-état est l'algorithme EM qui permet de calculer itérativement une approximation numérique des estimateurs du maximum de vraisemblance. Nous montrerons que les algorithmes EM et CPF-BS peuvent être combinés efficacement pour estimer les paramètres d'un modèle jouet. Pour certaines applications, le modèle dynamique est inconnu ou très coûteux à résoudre numériquement mais des observations ou des simulations sont disponibles.
Il est alors possible de reconstruire l’état conditionnellement aux observations en utilisant des algorithmes de filtrage/lissage dans lesquels le modèle dynamique est remplacé par un émulateur statistique construit à partir des observations. Nous montrerons que les algorithmes EM et CPF-BS peuvent être adaptés dans ce cadre et permettent d’estimer de manière non-paramétrique le modèle dynamique de l’état à partir d'observations bruitées. Enfin, les algorithmes proposés sont appliqués pour imputer les données de vent (produit par Météo France). / The amount of both observational and model-simulated data within the environmental, climate and ocean sciences has grown at an accelerating rate. Observational (e.g. satellite, in-situ...) data are generally accurate but still subject to observational errors and available with a complicated spatio-temporal sampling. Increasing computer power and understandings of physical processes have permitted to advance in models accuracy and resolution but purely model driven solutions may still not be accurate enough. Filtering and smoothing (or sequential data assimilation methods) have developed to tackle the issues. Their contexts are usually formalized under the form of a space-state model including the dynamical model which describes the evolution of the physical process (state), and the observation model which describes the link between the physical process and the available observations. In this thesis, we tackle three problems related to statistical inference for nonlinear state-space models: state reconstruction, parameter estimation and replacement of the dynamic model by an emulator constructed from data. For the first problem, we will introduce an original smoothing algorithm which combines the Conditional Particle Filter (CPF) and Backward Simulation (BS) algorithms. This CPF-BS algorithm allows for efficient exploration of the state of the physical variable, sequentially refining exploration around trajectories which best meet the constraints of the dynamic model and observations. We will show on several toy models that, at the same computation time, the CPF-BS algorithm gives better results than the other CPF algorithms and the stochastic EnKS algorithm which is commonly used in real applications. We will then discuss the problem of estimating unknown parameters in state-space models. The most common statistical algorithm for estimating the parameters of a space-state model is based on EM algorithm, which makes it possible to iteratively compute a numerical approximation of the maximum likelihood estimators. We will show that the EM and CPF-BS algorithms can be combined to effectively estimate the parameters in toy models. In some applications, the dynamical model is unknown or very expensive to solve numerically but observations or simulations are available. It is thence possible to reconstruct the state conditionally to the observations by using filtering/smoothing algorithms in which the dynamical model is replaced by a statistical emulator constructed from the observations. We will show that the EM and CPF-BS algorithms can be adapted in this framework and allow to provide non-parametric estimation of the dynamic model of the state from noisy observations. Finally the proposed algorithms are applied to impute wind data (produced by Méteo France).
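The CPF-BS smoother described above refines conditional particle filtering with backward simulation. As a much simpler, self-contained building block only, here is a sketch of a plain bootstrap particle filter for a toy nonlinear state-space model; the model, noise levels and particle count are illustrative, not the thesis's algorithm or data.

```python
# Bootstrap particle filter for x_t = f(x_{t-1}) + process noise, y_t = x_t + obs noise.
import numpy as np

rng = np.random.default_rng(7)

def f(x):                                   # toy nonlinear dynamics
    return 0.5 * x + 25 * x / (1 + x**2)

# Simulate a trajectory and noisy observations.
T, sig_q, sig_r = 100, 1.0, 1.0
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = f(x[t - 1]) + sig_q * rng.normal()
    y[t] = x[t] + sig_r * rng.normal()

# Particle filter.
N = 500
particles = rng.normal(0, 1, N)
filt_mean = np.zeros(T)
for t in range(1, T):
    particles = f(particles) + sig_q * rng.normal(size=N)        # propagate
    logw = -0.5 * ((y[t] - particles) / sig_r) ** 2              # Gaussian log-likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    filt_mean[t] = np.sum(w * particles)                         # filtered mean
    idx = rng.choice(N, size=N, p=w)                             # multinomial resampling
    particles = particles[idx]

print("RMSE of filtered mean:",
      round(np.sqrt(np.mean((filt_mean[1:] - x[1:]) ** 2)), 3))
```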
70
Estimation of State Space Models and Stochastic Volatility / Miller Lira, Shirley. 09 1900
Ma thèse est composée de trois chapitres reliés à l'estimation des modèles espace-état et volatilité stochastique.
Dans le première article, nous développons une procédure de lissage de l'état, avec efficacité computationnelle, dans un modèle espace-état linéaire et gaussien. Nous montrons comment exploiter la structure particulière des modèles espace-état pour tirer les états latents efficacement. Nous analysons l'efficacité computationnelle des méthodes basées sur le filtre de Kalman, l'algorithme facteur de Cholesky et notre nouvelle méthode utilisant le compte d'opérations et d'expériences de calcul. Nous montrons que pour de nombreux cas importants, notre méthode est plus efficace. Les gains sont particulièrement grands pour les cas où la dimension des variables observées est grande ou dans les cas où il faut faire des tirages répétés des états pour les mêmes valeurs de paramètres. Comme application, on considère un modèle multivarié de Poisson avec le temps des intensités variables, lequel est utilisé pour analyser le compte de données des transactions sur les marchés financières.
Dans le deuxième chapitre, nous proposons une nouvelle technique pour analyser des modèles multivariés à volatilité stochastique. La méthode proposée est basée sur le tirage efficace de la volatilité de son densité conditionnelle sachant les paramètres et les données. Notre méthodologie s'applique aux modèles avec plusieurs types de dépendance dans la coupe transversale. Nous pouvons modeler des matrices de corrélation conditionnelles variant dans le temps en incorporant des facteurs dans l'équation de rendements, où les facteurs sont des processus de volatilité stochastique indépendants. Nous pouvons incorporer des copules pour permettre la dépendance conditionnelle des rendements sachant la volatilité, permettant avoir différent lois marginaux de Student avec des degrés de liberté spécifiques pour capturer l'hétérogénéité des rendements. On tire la volatilité comme un bloc dans la dimension du temps et un à la fois dans la dimension de la coupe transversale. Nous appliquons la méthode introduite par McCausland (2012) pour obtenir une bonne approximation de la distribution conditionnelle à posteriori de la volatilité d'un rendement sachant les volatilités d'autres rendements, les paramètres et les corrélations dynamiques. Le modèle est évalué en utilisant des données réelles pour dix taux de change. Nous rapportons des résultats pour des modèles univariés de volatilité stochastique et deux modèles multivariés.
Dans le troisième chapitre, nous évaluons l'information contribuée par des variations de volatilite réalisée à l'évaluation et prévision de la volatilité quand des prix sont mesurés avec et sans erreur. Nous utilisons de modèles de volatilité stochastique. Nous considérons le point de vue d'un investisseur pour qui la volatilité est une variable latent inconnu et la volatilité réalisée est une quantité d'échantillon qui contient des informations sur lui. Nous employons des méthodes bayésiennes de Monte Carlo par chaîne de Markov pour estimer les modèles, qui permettent la formulation, non seulement des densités a posteriori de la volatilité, mais aussi les densités prédictives de la volatilité future. Nous comparons les prévisions de volatilité et les taux de succès des prévisions qui emploient et n'emploient pas l'information contenue dans la volatilité réalisée. Cette approche se distingue de celles existantes dans la littérature empirique en ce sens que ces dernières se limitent le plus souvent à documenter la capacité de la volatilité réalisée à se prévoir à elle-même. Nous présentons des applications empiriques en utilisant les rendements journaliers des indices et de taux de change. Les différents modèles concurrents sont appliqués à la seconde moitié de 2008, une période marquante dans la récente crise financière. / My thesis consists of three chapters related to the estimation of state space models and stochastic volatility models.
In the first chapter we develop a computationally efficient procedure for state smoothing in Gaussian linear state space models. We show how to exploit the special structure of state-space models to draw latent states efficiently. We analyze the computational efficiency of Kalman-filter-based methods, the Cholesky Factor Algorithm, and our new method using counts of operations and computational experiments. We show that for many important cases, our method is most efficient. Gains are particularly large for cases where the dimension of observed variables is large or where one makes repeated draws of states for the same parameter values. We apply our method to a multivariate Poisson model with time-varying intensities, which we use to analyze financial market transaction count data.
In the second chapter, we propose a new technique for the analysis of multivariate stochastic volatility models, based on efficient draws of volatility from its conditional posterior distribution. It applies to models with several kinds of cross-sectional dependence. Full VAR coefficient and covariance matrices give cross-sectional volatility dependence. Mean factor structure allows conditional correlations, given states, to vary in time. The conditional return distribution features Student's t marginals, with asset-specific degrees of freedom, and copulas describing cross-sectional dependence. We draw volatility as a block in the time dimension and one-at-a-time in the cross-section. Following McCausland(2012), we use close approximations of the conditional posterior distributions of volatility blocks as Metropolis-Hastings proposal distributions. We illustrate using daily return data for ten currencies. We report results for univariate stochastic volatility models and two multivariate models.
In the third chapter, we evaluate the information contributed by (variations of) realized volatility to the estimation and forecasting of volatility when prices are measured with and without error using a stochastic volatility model. We consider the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity which contains information about it. We use Bayesian Markov Chain Monte Carlo (MCMC) methods to estimate the models, which allow the formulation of the posterior densities of in-sample volatilities, and the predictive densities of future volatilities. We then compare the volatility forecasts and hit rates from predictions that use and do not use the information contained in realized volatility. This approach is in contrast with most of the empirical realized volatility literature which most often documents the ability of realized volatility to forecast itself. Our empirical applications use daily index returns and foreign exchange during the 2008-2009 financial crisis.
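For reference, the canonical univariate stochastic volatility model underlying the multivariate extensions discussed in this abstract can be written, in generic notation, as

$$ y_t = e^{h_t/2}\,\varepsilon_t, \qquad h_t = \mu + \phi\,(h_{t-1}-\mu) + \sigma_\eta\,\eta_t, \qquad \varepsilon_t,\ \eta_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1), $$

where $y_t$ is the demeaned return and $h_t$ the latent log-variance; the Student-t margins and copulas mentioned above modify the distribution of $\varepsilon_t$ and the cross-sectional dependence.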