21 |
Essays in Nonlinear Time Series Analysis. Michel, Jonathan R., 21 June 2019 (has links)
No description available.
|
22 |
Modelling economic high-frequency time series. Lundbergh, Stefan, January 1999 (has links)
Diss. Stockholm : Handelshögsk.
|
23 |
Flood forecasting using time series data mining. Damle, Chaitanya, 01 June 2005 (has links)
Earthquakes, floods, and rainfall represent a class of nonlinear systems termed chaotic, in which the relationships between variables in the system are dynamic and disproportionate, yet completely deterministic. Classical linear time series models have proved inadequate for the analysis and prediction of complex geophysical phenomena. Nonlinear approaches such as Artificial Neural Networks, Hidden Markov Models and Nonlinear Prediction are useful for forecasting daily discharge values in a river. The focus of these methods is on forecasting the magnitudes of future discharge values rather than on predicting floods. Chaos theory provides a structured explanation for irregular behavior and anomalies in systems that are not inherently stochastic. The Time Series Data Mining methodology combines chaos theory and data mining to characterize and predict complex, nonperiodic and chaotic time series. Time Series Data Mining focuses on the prediction of events.
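The nonlinear prediction referred to above is typically built on a time-delay embedding of the discharge series followed by a local nearest-neighbour forecast. The Python sketch below is a minimal, generic version of that idea, not the author's implementation; the embedding dimension, delay, neighbour count and the logistic-map toy series are illustrative assumptions only.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build a time-delay embedding of a scalar series (Takens-style)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def knn_forecast(x, dim=3, tau=1, k=5, horizon=1):
    """Predict x[t+horizon] by averaging the futures of the k nearest
    embedded neighbours of the current state (zeroth-order local model)."""
    emb = delay_embed(x, dim, tau)
    last = emb[-1]                         # current reconstructed state
    candidates = emb[:-horizon]            # states whose future is already known
    dists = np.linalg.norm(candidates - last, axis=1)
    idx = np.argsort(dists)[:k]            # k closest historical states
    futures = x[(dim - 1) * tau + idx + horizon]
    return futures.mean()

# toy usage on a synthetic chaotic-looking series (logistic map)
x = np.empty(500); x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
print(knn_forecast(x, dim=3, tau=1, k=5, horizon=1))
```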
|
24 |
非線性時間序列轉折區間認定之模糊統計分析 / Fuzzy Statistical Analysis for Change Periods Detection in Nonlinear Time Series. 陳美惠, Unknown Date (has links)
Many papers have been published on the detection of change points. Nonetheless, we would like to point out that, in dealing with time series with switching regimes, we should also take the characteristics of change periods into account. Because many patterns of structural change in time series exhibit a certain duration, those phenomena should not be treated as a mere sudden turn at a single point in time.
In this paper, we propose procedures for change period detection in nonlinear time series. One of the statistical detection methods is an application of fuzzy classification and a generalization of Inclan and Tiao's result. Moreover, we develop a genetic-based searching procedure built on the concept of a leading genetic model. Simulation results show that these procedures are efficient and successful. Finally, two empirical applications of change period detection, to Taiwan's monthly visitor arrivals and to the exchange rate, are demonstrated.
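Since the detection statistic is described as a generalization of Inclan and Tiao's result, a minimal sketch of their centred cumulative-sum-of-squares statistic may help fix ideas. The Python code below is only that baseline; the fuzzy-classification extension to change periods is not shown, and the 1.358 threshold is the usual asymptotic 5% critical value for the sup-norm statistic.

```python
import numpy as np

def cusum_of_squares(x):
    """Inclan & Tiao's centred cumulative sum of squares, D_k = C_k/C_T - k/T."""
    x = np.asarray(x, dtype=float)
    c = np.cumsum(x ** 2)
    k = np.arange(1, len(x) + 1)
    return c / c[-1] - k / len(x)

def candidate_change_point(x):
    """Location maximising |D_k|; a large statistic suggests a variance change."""
    d = cusum_of_squares(x)
    k_star = int(np.argmax(np.abs(d)))
    stat = np.sqrt(len(x) / 2.0) * np.abs(d[k_star])  # compare to ~1.358 at the 5% level
    return k_star, stat

# toy series whose variance doubles halfway through
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])
print(candidate_change_point(x))
```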
|
25 |
Addressing nonlinear systems with information-theoretical techniques. Castelluzzo, Michele, 07 July 2023 (has links)
The study of experimental recordings of dynamical systems often consists in the analysis of the signals produced by those systems. Time series analysis comprises a wide range of methodologies ultimately aimed at characterizing the signals and, eventually, gaining insight into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectrum analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods make it possible to highlight periodic patterns in the signal and to reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior, which requires more sophisticated analytical tools in order to gain insight into the nature of the underlying processes generating those signals. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, among which is the emergence of deterministic chaos. In the last decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being created by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectrum analysis, ineffective for studying such systems. Furthermore, the irregular behavior gives the appearance that these signals are governed by stochastic processes, even more so when dealing with experimental signals that are inevitably affected by noise. Nonlinear time series analysis comprises a set of methods that aim to cope with the strange and irregular evolution of these systems by measuring characteristic invariant quantities that describe the nature of the underlying dynamics. Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These quantities can often be computed exactly for simulated systems, where the differential equations governing the system's evolution are known, but they can prove difficult or even impossible to compute from experimental recordings. A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, through the seminal work of Claude Shannon in 1948, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience, as well as in the social sciences and economics. From the physical point of view, the most remarkable contribution of Shannon's work was the discovery that entropy is a measure of information, and that computing the entropy of a sequence, or a signal, can answer the question of how much information is contained in that sequence. Alternatively, considering the source, i.e. the system that generates the sequence, entropy gives an estimate of how much information the source is able to produce. Information theory comprises a set of techniques that can be applied to study, among other things, dynamical systems, offering a framework complementary to standard signal analysis techniques.
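For a finite symbolic sequence, the plug-in (maximum-likelihood) estimator below is one common way to put a number on "how much information the source produces". It is a generic Python sketch, not the estimator analysed in the thesis, and it deliberately ignores the finite-sample bias issues that the thesis addresses.

```python
import numpy as np
from collections import Counter

def shannon_entropy(symbols, base=2.0):
    """Plug-in (maximum-likelihood) estimate of Shannon entropy from a
    finite sequence of symbols, in bits by default."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(base))

# a fair coin carries about 1 bit per symbol, a biased one less
rng = np.random.default_rng(1)
print(shannon_entropy(rng.integers(0, 2, 10_000)))                 # roughly 1.0
print(shannon_entropy(rng.choice([0, 1], 10_000, p=[0.9, 0.1])))   # roughly 0.47
```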
The concept of entropy, however, was not new in physics: it had first been defined in the deeply physical context of heat exchange in thermodynamics in the 19th century. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of statistical properties of the particles' motion in a thermodynamic system. A first link between entropy and the dynamical evolution of a system was thus made. In the years following Shannon's work, the concept of entropy was further developed, through the works of, to cite only a few, von Neumann and Kolmogorov, into a tool for computer science and complexity theory. It is in Kolmogorov's work in particular that information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to compute the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires as many instructions as the sequence is long, since there is no option other than programming the machine to copy the sequence point by point. A sequence generated by a deterministic system, on the other hand, simply requires knowing the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is therefore through the work of Kolmogorov, and independently that of Sinai, that entropy came to be directly applied to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is, in fact, a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space. In the last decades, the use of information theory in signal analysis has contributed to the elaboration of many entropy-based measures, such as sample entropy, transfer entropy, mutual information and permutation entropy, among others. These quantities make it possible to characterize not only single dynamical systems, but also the correlations between systems and even more complex interactions like synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to give them a sound mathematical background, makes information theory still a thriving topic of research. In this thesis, I approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measures on a sequence of data in the case of an underlying memoryless stochastic process. This result, besides giving insight into sensitive and still-unsolved aspects of using entropy-based measures, provides a relation between the maximum uncertainty of Shannon entropy estimates and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I investigate the relation between entropy and some characteristic quantities of nonlinear time series analysis, namely the Lyapunov exponents. Some examples of this analysis on recordings of a nonlinear chaotic system are also provided.
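Among the entropy-based measures listed above, permutation entropy is perhaps the simplest to compute directly from a time series. The sketch below follows the standard Bandt-Pompe recipe with illustrative parameters (order 3, delay 1); it is a generic implementation, not code from the thesis.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Bandt-Pompe permutation entropy: Shannon entropy of the distribution
    of ordinal patterns of length `order`, normalised to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        pattern = tuple(np.argsort(window))      # ordinal pattern of the window
        patterns[pattern] = patterns.get(pattern, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    h = -(p * np.log(p)).sum()
    return h / log(factorial(order))             # 1 = fully irregular, 0 = monotone

# a regular signal scores noticeably lower than white noise
t = np.linspace(0, 20 * np.pi, 2000)
print(permutation_entropy(np.sin(t)))                                    # low value
print(permutation_entropy(np.random.default_rng(2).normal(size=2000)))   # close to 1
```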
Finally, I discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretical tools and analytical ones is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as on meteorological data.
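As a complement, a minimal histogram-based estimate of the mutual information between two recordings can look as follows; bin count and bias corrections matter in practice, and the neuroscience application mentioned above is not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of the mutual information I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# two coupled signals share information, independent noise shares almost none
rng = np.random.default_rng(3)
s = rng.normal(size=5000)
print(mutual_information(s, s + 0.5 * rng.normal(size=5000)))   # clearly > 0
print(mutual_information(s, rng.normal(size=5000)))             # near 0 (small positive bias)
```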
|
26 |
Electrochemical studies of external forcing of periodic oscillating systems and fabrication of coupled microelectrode array sensors. Clark, David, 01 May 2020 (has links)
This dissertation describes the electrochemical behavior of nickel and iron, studied in different acid solutions via linear sweep voltammetry, cyclic voltammetry, and potentiostatic measurements over a range of temperatures and at specific potential ranges. The presented work reports novel experiments in which a nickel electrode was heated locally with an inductive heating system, and a platinum (Pt) electrode was used to change the proton concentration at the iron and nickel electrode surfaces in order to control the periodic oscillations (frequency and amplitude) produced and to gain a greater understanding of the systems (kinetics), oscillatory processes, and corrosion processes. Temperature pulse voltammetry, linear sweep voltammetry, and cyclic voltammetry were used for temperature calibration under different heating conditions. Several other metal systems (bismuth, lead, zinc, and silver) also produce periodic oscillations as corrosion occurs; however, creating these with pure metal electrodes is very expensive. In this work, such metal systems were created via electrodeposition using inexpensive, efficient, coupled microelectrode array sensors (CMASs) as a substrate. CMASs are integrated devices with multiple electrodes that are connected externally in a circuit so that all of the electrodes have the same potential applied or current passing through them. CMASs have been used for many years to study different forms of corrosion (crevice corrosion, pitting corrosion, intergranular corrosion, and galvanic corrosion), and they are beneficial because they can simulate single electrodes of the same size. The presented work also demonstrates how to construct CMASs and shows that the unique phenomenon of periodic oscillations can be created and studied using coated and bare copper CMASs. Furthermore, these systems can be controlled by applying external forcing with a Pt electrode at the CMAS surface. The data from the single Ni electrode experiments and the CMAS experiments were analyzed using a nonlinear time series analysis approach.
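The oscillation frequency and amplitude that the external forcing is said to control can be tracked with a simple spectral estimate. The sketch below is a generic illustration on a synthetic current trace, assuming a roughly periodic signal and a known sampling rate; it is not the nonlinear time-series pipeline used in the dissertation.

```python
import numpy as np

def dominant_oscillation(current, fs):
    """Estimate the dominant frequency and amplitude of a (roughly) periodic
    current oscillation from its one-sided FFT spectrum."""
    current = np.asarray(current, dtype=float)
    current = current - current.mean()             # remove the DC offset
    spectrum = np.fft.rfft(current)
    freqs = np.fft.rfftfreq(len(current), d=1.0 / fs)
    peak = np.argmax(np.abs(spectrum[1:])) + 1     # skip the zero-frequency bin
    amplitude = 2.0 * np.abs(spectrum[peak]) / len(current)
    return freqs[peak], amplitude

# toy 0.5 Hz oscillation sampled at 100 Hz with some noise
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
i_cell = 2.0 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(4).normal(size=t.size)
print(dominant_oscillation(i_cell, fs))            # roughly (0.5, 2.0)
```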
|
27 |
Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams / Aplicando ferramentas de análise de séries temporais não lineares e algoritmos de agrupamento estáveis para a detecção de mudanças de conceito em fluxos de dados. Costa, Fausto Guzzo da, 17 August 2017 (has links)
Several industrial, scientific and commercial processes produce open-ended sequences of observations which are referred to as data streams. We can understand the phenomena responsible for such streams by analyzing the data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, which are, however, invalidated by behavior changes. External influences, such as new investments and market policies impacting stocks, or human intervention in the climate, are regarded as the main agents acting on the underlying phenomena to produce such modifications over time. In the context of Machine Learning, there is a vast research branch interested in detecting such behavior changes, which are also referred to as concept drifts. By detecting drifts, one can indicate the best moments to update the model, thereby improving prediction results as well as the understanding, and eventually the control, of the influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised, and the second on unsupervised, learning algorithms. The former faces great difficulties because labeling is infeasible when streams are produced at high frequency and in large volumes. The latter lacks the theoretical foundations needed to provide detection guarantees. In addition, both paradigms fail to adequately represent temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts by tackling two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time dependency representation. Our unsupervised approach is motivated by Carlsson and Memoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. To take full advantage of this framework, we employed Takens' embedding theorem to make the data statistically independent after mapping them to phase spaces. The independent data were then grouped using the Permutation-Invariant Single-Linkage clustering algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm that respects the stability property proposed by Carlsson and Memoli. Our algorithm outputs dendrograms (seen as data models), which are proven to be equivalent to ultrametric spaces, so concept drifts can be detected by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt changes and the other gradual changes. The results confirm that our approach is capable of detecting concept drifts, both abrupt and gradual, although it is better suited to complicated scenarios. The main contributions of this thesis are: i) the use of Takens' embedding theorem as a tool to provide statistical independence for data streams; ii) the implementation of PISL in conjunction with the GH distance (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as algorithms to detect concept drifts.
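A rough sketch of the pipeline described above is given below, assuming fixed embedding parameters, using ordinary single-linkage with cophenetic (ultrametric) distances in place of the thesis's permutation-invariant variant, and a simple matrix norm in place of the Gromov-Hausdorff distance; it is an illustration of the idea, not the PISLGH implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

def embed(window, dim=3, tau=2):
    """Takens-style delay embedding of a scalar window into R^dim."""
    n = len(window) - (dim - 1) * tau
    return np.column_stack([window[i * tau : i * tau + n] for i in range(dim)])

def ultrametric(window, dim=3, tau=2):
    """Single-linkage dendrogram of the embedded window, summarised by its
    condensed matrix of cophenetic (ultrametric) distances."""
    z = linkage(pdist(embed(np.asarray(window, dtype=float), dim, tau)), method="single")
    return cophenet(z)

def drift_scores(stream, window=200, step=200):
    """Distance between the ultrametrics of consecutive windows; a spike in
    this score is taken as a hint of a concept drift (norm used here is NOT
    the Gromov-Hausdorff distance, only a crude stand-in)."""
    windows = [stream[i : i + window] for i in range(0, len(stream) - window + 1, step)]
    models = [ultrametric(w) for w in windows]
    return [float(np.linalg.norm(a - b)) for a, b in zip(models, models[1:])]

# toy stream: a sine whose frequency changes abruptly halfway through
t = np.arange(2000) * 0.05
stream = np.where(t < 50, np.sin(2 * np.pi * 0.2 * t), np.sin(2 * np.pi * 0.7 * t))
print(drift_scores(stream))
```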
|
28 |
[en] HIGH FREQUENCY DATA AND PRICE-MAKING PROCESS ANALYSIS: THE EXPONENTIAL MULTIVARIATE AUTOREGRESSIVE CONDITIONAL MODEL - EMACM / [pt] ANÁLISE DE DADOS DE ALTA FREQÜÊNCIA E DO PROCESSO DE FORMAÇÃO DE PREÇOS: O MODELO MULTIVARIADO EXPONENCIAL - EMACM. Gustavo Santos Raposo, 04 July 2006 (has links)
[en] The availability of high frequency financial transaction data - price, spread, volume and duration - has contributed to the growing number of scientific articles on this topic. The first proposals were limited to pure duration models. Later, the impact of duration on instantaneous volatility was analyzed. More recently, Manganelli (2002) included volume in a vector model. In this document, we extended his work by including the bid-ask spread in the analysis through a vector autoregressive model. The conditional means of spread, volume and duration, along with the volatility of returns, evolve through transaction events based on an exponential formulation we called the Exponential Multivariate Autoregressive Conditional Model (EMACM). In our proposal, there are no constraints on the parameters of the VAR model. This facilitates maximum likelihood estimation of the model and allows the use of simple likelihood ratio hypothesis tests to specify the model and obtain some clues about the interdependency structure of the variables. In parallel, the problem of stock price forecasting is faced through an integrated approach in which, besides the modeling of high frequency financial data, a contemporary ordered probit model is used. Here, EMACM captures the dynamics that the high frequency variables present, and its forecasting function is taken as a proxy for the contemporaneous information necessary to the pricing model.
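In its simplest univariate form, an exponential specification of a conditional mean resembles a logarithmic ACD-type recursion for durations. The sketch below simulates such a process purely to illustrate that building block; it is not the EMACM specification itself, and all parameter values are arbitrary.

```python
import numpy as np

def simulate_log_acd(n, omega=-0.1, alpha=0.1, beta=0.85, seed=0):
    """Simulate a logarithmic ACD-type process: durations d_t = psi_t * eps_t with
    log psi_t = omega + alpha * eps_{t-1} + beta * log psi_{t-1} and unit-mean
    exponential innovations. The exponential form keeps psi_t positive without
    parameter constraints, which is the appeal of this class of models."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, n)
    log_psi = np.empty(n)
    log_psi[0] = (omega + alpha) / (1.0 - beta)   # start near the unconditional mean of log psi
    for t in range(1, n):
        log_psi[t] = omega + alpha * eps[t - 1] + beta * log_psi[t - 1]
    return np.exp(log_psi) * eps                  # observed durations

durations = simulate_log_acd(5000)
print(durations.mean(), durations.std())
```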
|
29 |
Determinism and predictability in extreme event systems. Birkholz, Simon, 12 May 2016 (has links)
In the last decades, extreme events, i.e., high-magnitude phenomena that cannot be described within the realm of Gaussian probability distributions, have been observed in a multitude of physical systems. While statistical methods allow for a reliable identification of extreme event systems, the mechanism underlying extreme events is not understood, owing to their rare occurrence and their onset under conditions that are difficult to reproduce. It is therefore desirable to identify extreme event scenarios that can serve as a test bed. Optical systems exhibiting extreme events have been found to be ideal for such tests, and further examples are now sought to improve the understanding of extreme events. In this thesis, multifilamentation formed by femtosecond laser pulses is analyzed. Observation of the spatio-temporal dynamics of multifilamentation shows a heavy-tailed fluence probability distribution, implying the onset of extreme events during multifilamentation. Linear analysis gives hints about the processes that drive the formation of extreme events, and nonlinear time series analysis provides information on determinism and chaos in the system. The analysis of the multifilaments is compared to an analysis of extreme event time series from ocean wave measurements and from the supercontinuum output of an optical fiber. The analysis performed in this work shows fundamental differences in the extreme event mechanism. While the extreme events in the optical fiber system are ruled by stochastic changes of amplified quantum noise, in the multifilament and ocean systems extreme events appear as a result of the classical mechanical process of turbulence, which implies the predictability of extreme events. In this work, the predictability of extreme events is proven to be possible in a brief time window before the onset of the extreme event.
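One common operational definition of an extreme event, borrowed from oceanography, flags samples exceeding twice the "significant" level, i.e. the mean of the highest third of the data. The sketch below applies that criterion to a synthetic heavy-tailed series; it only illustrates how such events are counted and is not the analysis performed in the thesis.

```python
import numpy as np

def extreme_event_indices(series, factor=2.0):
    """Flag samples exceeding `factor` times the significant level, defined as
    the mean of the highest third of the data (rogue-wave-style criterion)."""
    x = np.asarray(series, dtype=float)
    top_third = np.sort(x)[-(len(x) // 3):]
    threshold = factor * top_third.mean()
    return np.flatnonzero(x > threshold), threshold

# toy heavy-tailed intensity series (log-normal)
rng = np.random.default_rng(5)
heavy = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)
events, thr = extreme_event_indices(heavy)
print(len(events), thr)   # a heavy-tailed series keeps producing exceedances
```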
|
30 |
Essays on nonparametric estimation of asset pricing models. Dalderop, Jeroen Wilhelmus Paulus, January 2018 (has links)
This thesis studies the use of nonparametric econometric methods to reconcile the empirical behaviour of financial asset prices with theoretical valuation models. The confrontation of economic theory with asset price data requires various functional form assumptions about the preferences and beliefs of investors. Nonparametric methods provide a flexible class of models that can prevent misspecification of agents' utility functions or the distribution of asset returns. Evidence for potential nonlinearity is seen in the presence of non-Gaussian distributions and excessive volatility of stock returns, and in non-monotonic stochastic discount factors in option prices. More robust model specifications are therefore likely to contribute to risk management and return predictability, and to lend credibility to economists' assertions. Each of the chapters in this thesis relaxes certain functional form assumptions that seem most important for understanding particular asset price data. Chapter 1 focuses on the state-price density in option prices, which confounds the nonlinearity in both the preferences and the beliefs of investors. To understand both sources of nonlinearity in equity prices, Chapter 2 introduces a semiparametric generalization of the standard representative-agent consumption-based asset pricing model. Chapter 3 returns to option prices to understand the relative importance of changes in the distribution of returns and in the shape of the pricing kernel. More specifically, Chapter 1 studies the use of noisy high-frequency data to estimate the time-varying state-price density implicit in European option prices. A dynamic kernel estimator of the conditional pricing function and its derivatives is proposed that can be used for model-free risk measurement. Infill asymptotic theory is derived that applies when the pricing function is either smoothly varying or driven by diffusive state variables. Trading times and moneyness levels are modelled by marked point processes to capture intraday trading patterns. A simulation study investigates the performance of the estimator, using an iterated plug-in bandwidth, in various scenarios. Empirical results using S&P 500 E-mini European option quotes find significant time-variation at intraday frequencies. An application to delta- and minimum-variance hedging further illustrates the use of the estimator. Chapter 2 proposes a semiparametric asset pricing model to measure how consumption and dividend policies depend on unobserved state variables, such as economic uncertainty and risk aversion. Under a flexible specification of the stochastic discount factor, the state variables are recovered from cross-sections of asset prices and volatility proxies, and the shape of the policy functions is identified from the pricing functions. The model leads to closed-form price-dividend ratios under polynomial approximations of the unknown functions and affine state variable dynamics. In the empirical application, uncertainty and risk aversion are separately identified from size-sorted stock portfolios, exploiting the heterogeneous impact of uncertainty on dividend policy across small and large firms. I find an asymmetric and convex response of consumption (-) and dividend growth (+) to uncertainty shocks, which, together with moderate uncertainty aversion, can generate large leverage effects and divergence between macroeconomic and stock market volatility.
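The dynamic kernel estimator in Chapter 1 is, at its core, a kernel regression of option prices on state variables such as moneyness. The static Nadaraya-Watson sketch below, with a made-up toy payoff and an arbitrary bandwidth, only illustrates that basic smoothing idea, not the intraday marked-point-process machinery of the chapter.

```python
import numpy as np

def nadaraya_watson(x_obs, y_obs, x_grid, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel: a smooth,
    model-free estimate of E[y | x] evaluated on x_grid."""
    u = (x_grid[:, None] - x_obs[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)                      # Gaussian kernel weights
    return (w @ y_obs) / w.sum(axis=1)

# toy example: recover a smooth option-price-like curve from noisy quotes
rng = np.random.default_rng(6)
moneyness = rng.uniform(0.8, 1.2, 2000)
price = np.maximum(1.05 - moneyness, 0.0) + 0.05 + 0.01 * rng.normal(size=2000)
grid = np.linspace(0.85, 1.15, 7)
print(nadaraya_watson(moneyness, price, grid, bandwidth=0.02))
```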
Chapter 3 studies the nonparametric identification and estimation of projected pricing kernels implicit in the pricing of options, the underlying asset, and a risk-free bond. The sieve minimum-distance estimator based on conditional moment restrictions avoids the need to compute ratios of estimated risk-neutral and physical densities, and leads to stable estimates even in regions with low probability mass. The conditional empirical likelihood (CEL) variant of the estimator is used to extract implied densities that satisfy the pricing restrictions while incorporating the forward-looking information from option prices. Moreover, I introduce density combinations in the CEL framework to measure the relative importance of changes in the physical return distribution and in the pricing kernel. The nonlinear dynamic pricing kernels can be used to understand return predictability, and they provide model-free quantities that can be compared against those implied by structural asset pricing models.
|