21

非線性時間序列轉折區間認定之模糊統計分析 / Fuzzy Statistical Analysis for Change Periods Detection in Nonlinear Time Series

陳美惠 Unknown Date (has links)
Many papers have been devoted to the study of change point detection. Nonetheless, we would like to point out that in dealing with time series with switching regimes, we should also take the characteristics of change periods into account. Because many patterns of structural change in time series exhibit a certain duration, those phenomena should not be treated as a mere sudden turn at a single point in time. In this paper, we propose procedures for change period detection in nonlinear time series. One of the statistical detection methods applies fuzzy classification and generalizes Inclan and Tiao's result. Moreover, we develop a genetic-based search procedure built on the concept of the leading genetic model. Simulation results show that these procedures are efficient and successful. Finally, two empirical applications, detecting change periods in Taiwan's monthly visitor arrivals and in the exchange rate, are demonstrated.
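The authors' fuzzy generalization is not reproduced in the abstract; as background, a minimal sketch of the classical Inclan and Tiao (1994) centered cumulative-sum-of-squares statistic it builds on might look like the following, where the 1.358 boundary is the standard 5% asymptotic critical value:

```python
import numpy as np

def inclan_tiao_dk(x):
    """Centered cumulative sum of squares D_k = C_k/C_T - k/T (Inclan & Tiao, 1994)."""
    c = np.cumsum(np.asarray(x, dtype=float) ** 2)  # C_k, cumulative sum of squares
    T = len(x)
    k = np.arange(1, T + 1)
    return c / c[-1] - k / T

# A variance change is flagged where sqrt(T/2) * |D_k| exceeds the 5%
# asymptotic critical value of about 1.358.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 2, 200)])
dk = inclan_tiao_dk(x)
print(np.argmax(np.abs(dk)), np.sqrt(len(x) / 2) * np.abs(dk).max())
```

On this simulated series the maximizing index falls near the true change at observation 200, and the scaled statistic far exceeds the boundary.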
22

Addressing nonlinear systems with information-theoretical techniques

Castelluzzo, Michele 07 July 2023 (has links)
The study of experimental recordings of dynamical systems often consists of the analysis of signals produced by those systems. Time series analysis comprises a wide range of methodologies ultimately aimed at characterizing the signals and, eventually, gaining insight into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectrum analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods highlight periodic patterns in the signal and reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior that requires more sophisticated analytical tools to gain insight into the nature of the underlying processes generating them. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, among which is the emergence of deterministic chaos. In recent decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being governed by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectrum analysis, ineffective for studying such systems. Furthermore, the irregular behavior gives the appearance of these signals being governed by stochastic processes, even more so for experimental signals, which are inevitably affected by noise. Nonlinear time series analysis comprises a set of methods that aim to overcome the strange and irregular evolution of these systems by measuring characteristic invariant quantities that describe the nature of the underlying dynamics. Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These quantities can often be computed exactly for simulated systems, where the differential equations governing the system's evolution are known, but can prove difficult or even impossible to estimate from experimental recordings. A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, in Claude Shannon's seminal work of 1948, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience as well as in the social sciences and economics. From the physical point of view, the most remarkable contribution of Shannon's work was the discovery that entropy is a measure of information, and that computing the entropy of a sequence, or a signal, answers the question of how much information is contained in that sequence. Alternatively, considering the source, i.e. the system that generates the sequence, entropy estimates how much information the source is able to produce. Information theory thus comprises a set of techniques that can be applied to the study of, among other things, dynamical systems, offering a framework complementary to standard signal analysis techniques.
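Shannon's result that entropy measures the information content of a sequence can be made concrete with a small sketch. This is the standard plug-in estimate from observed symbol frequencies, shown as an illustration under an i.i.d. assumption, not code from the thesis:

```python
import numpy as np

def shannon_entropy(sequence, base=2):
    """Plug-in estimate H = -sum p_i log p_i from observed symbol frequencies."""
    _, counts = np.unique(sequence, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(base)

# A fair coin yields about 1 bit per symbol; a biased one strictly less.
rng = np.random.default_rng(1)
print(shannon_entropy(rng.integers(0, 2, 10_000)))                 # close to 1.0
print(shannon_entropy(rng.choice([0, 1], 10_000, p=[0.9, 0.1])))   # close to 0.47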
The concept of entropy, however, was not new in physics: it had first been defined in the deeply physical context of heat exchange in nineteenth-century thermodynamics. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of the statistical properties of particle motion in a thermodynamic system; a first link between entropy and the dynamical evolution of a system was made. In the years following Shannon's work, the concept of entropy was developed further through the works of, to cite only a few, von Neumann and Kolmogorov, and used as a tool in computer science and complexity theory. In Kolmogorov's work in particular, information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to reproduce the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires as many instructions as the sequence is long, since there is no option but to program the machine to copy the sequence point by point, whereas a sequence generated by a deterministic system only requires the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is therefore through the work of Kolmogorov, and independently of Sinai, that entropy came to be applied directly to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is indeed a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space. In recent decades, the use of information theory in signal analysis has contributed many entropy-based measures, such as sample entropy, transfer entropy, mutual information and permutation entropy, among others. These quantities characterize not only single dynamical systems but also correlations between systems, and even more complex interactions such as synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to give them a sound mathematical grounding, keeps information theory a thriving topic of research. In this thesis, I will approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measurements on a sequence of data in the case of an underlying memoryless stochastic process. This result, besides giving insight into sensitive and still-unsolved aspects of entropy-based measures, provides a relation between the maximum uncertainty of Shannon entropy estimates and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I will investigate the relation between entropy and characteristic quantities of nonlinear time series analysis, namely Lyapunov exponents. Examples of this analysis on recordings of a nonlinear chaotic system are also provided.
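Of the entropy-based measures listed above, permutation entropy is especially easy to illustrate. The following is a minimal sketch of the Bandt-Pompe estimator, given as an illustration rather than the thesis's implementation:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Bandt-Pompe permutation entropy: Shannon entropy of ordinal patterns,
    normalized by log2(order!) so the result lies in [0, 1]."""
    n = len(x) - (order - 1) * delay
    # Build all delay vectors at once and rank each one into an ordinal pattern.
    idx = np.arange(order) * delay + np.arange(n)[:, None]
    patterns = np.argsort(x[idx], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

# White noise approaches 1; a monotone ramp realizes a single pattern, giving 0.
rng = np.random.default_rng(2)
print(permutation_entropy(rng.normal(size=5000)))   # close to 1
print(permutation_entropy(np.arange(5000.0)))       # 0.0
```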
Finally, I will discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretical tools and analytical ones is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as meteorological data.
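As a small illustration of why information-theoretical tools complement linear analysis (the thesis's own estimators are not specified in the abstract, so this is an assumed plain histogram estimator), mutual information detects a nonlinear dependence that the correlation coefficient misses:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) = sum p(x,y) log2 p(x,y)/(p(x)p(y))."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# y = x^2 + noise is uncorrelated with symmetric x, yet strongly dependent.
rng = np.random.default_rng(3)
x = rng.normal(size=20_000)
y = x**2 + 0.1 * rng.normal(size=20_000)
print(np.corrcoef(x, y)[0, 1])      # near 0: linear analysis sees nothing
print(mutual_information(x, y))     # clearly positive: dependence detected
```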
23

Electrochemical studies of external forcing of periodic oscillating systems and fabrication of coupled microelectrode array sensors

Clark, David 01 May 2020 (has links)
This dissertation describes the electrochemical behavior of nickel and iron studied in different acid solutions via linear sweep voltammetry, cyclic voltammetry, and potentiostatic measurements over a range of temperatures and specific potential ranges. The presented work describes novel experiments in which a nickel electrode was heated locally with an inductive heating system, and a platinum (Pt) electrode was used to change the proton concentration at iron and nickel electrode surfaces in order to control the periodic oscillations (frequency and amplitude) produced and to gain a greater understanding of the systems' kinetics, oscillatory processes, and corrosion processes. Temperature pulse voltammetry, linear sweep voltammetry, and cyclic voltammetry were used for temperature calibration under different heating conditions. Several other metal systems (bismuth, lead, zinc, and silver) also produce periodic oscillations as corrosion occurs; however, studying these with pure metal electrodes is very expensive. In this work, metal systems were created via electrodeposition using inexpensive, efficient, coupled microelectrode array sensors (CMASs) as a substrate. CMASs are integrated devices with multiple electrodes that are connected externally in a circuit so that all of the electrodes have the same potential applied or current passing through them. CMASs have been used for many years to study different forms of corrosion (crevice, pitting, intergranular, and galvanic corrosion), and they are beneficial because they can simulate single electrodes of the same size. The presented work also demonstrates how to construct CMASs and shows that the unique phenomenon of periodic oscillations can be created and studied using coated and bare copper CMASs. Furthermore, these systems can be controlled by implementing external forcing with a Pt electrode at the CMAS surface. The data from the single Ni electrode experiments and the CMAS experiments were analyzed using a nonlinear time-series analysis approach.
24

Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams / Aplicando ferramentas de análise de séries temporais não lineares e algoritmos de agrupamento estáveis para a detecção de mudanças de conceito em fluxos de dados

Costa, Fausto Guzzo da 17 August 2017 (has links)
Several industrial, scientific and commercial processes produce open-ended sequences of observations which are referred to as data streams. We can understand the phenomena responsible for such streams by analyzing the data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, though these models are invalidated by behavior changes. External influences are regarded as the main agents acting on the underlying phenomena to produce such modifications over time, such as new investments and market policies impacting stocks, human intervention in the climate, etc. In the context of Machine Learning, there is a vast research branch interested in detecting such behavior changes, which are also referred to as concept drifts. By detecting drifts, one can identify the best moments to update models, thereby improving prediction results and the understanding, and eventually the control, of the influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised, the second on unsupervised learning algorithms. The former faces great difficulty because labeling is infeasible when streams are produced at high frequency and in large volumes. The latter lacks the theoretical foundations to provide detection guarantees. In addition, neither paradigm adequately represents temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts by tackling two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time-dependency representation. Our unsupervised approach is motivated by Carlsson and Memoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. To take full advantage of that framework, we employed Takens' embedding theorem to make data statistically independent after being mapped to phase spaces. Independent data were then grouped using the Permutation-Invariant Single-Linkage Clustering Algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm that respects the stability property proposed by Carlsson and Memoli. Our algorithm outputs dendrograms (seen as data models), which are proven to be equivalent to ultrametric spaces, so concept drifts can be detected by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt and another gradual changes. The results confirm that our approach is capable of detecting concept drifts, both abrupt and gradual, though it is better suited to complicated scenarios. The main contributions of this thesis are: i) the usage of Takens' embedding theorem as a tool to provide statistical independence for data streams; ii) the implementation of PISL in conjunction with GH (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as algorithms to detect concept drifts.
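Takens' delay embedding, which the approach uses to map observations into phase-space vectors before clustering, can be sketched minimally as follows (an illustration only; the PISL and Gromov-Hausdorff steps are not reproduced here):

```python
import numpy as np

def takens_embedding(x, dim=3, delay=1):
    """Map a scalar series to delay vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)tau})."""
    n = len(x) - (dim - 1) * delay
    if n <= 0:
        raise ValueError("series too short for this (dim, delay)")
    idx = np.arange(dim) * delay + np.arange(n)[:, None]
    return x[idx]  # shape (n, dim): one reconstructed state per row

# Each row approximates a state of the underlying attractor; the rows, not the
# raw samples, are what a hierarchical clustering algorithm would then group.
x = np.sin(np.linspace(0, 20 * np.pi, 2000))
states = takens_embedding(x, dim=3, delay=25)
print(states.shape)  # (1950, 3)
```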
25

[en] HIGH FREQUENCY DATA AND PRICE-MAKING PROCESS ANALYSIS: THE EXPONENTIAL MULTIVARIATE AUTOREGRESSIVE CONDITIONAL MODEL - EMACM / [pt] ANÁLISE DE DADOS DE ALTA FREQÜÊNCIA E DO PROCESSO DE FORMAÇÃO DE PREÇOS: O MODELO MULTIVARIADO EXPONENCIAL - EMACM

GUSTAVO SANTOS RAPOSO 04 July 2006 (has links)
[en] The availability of high frequency financial transaction data - price, spread, volume and duration - has contributed to the growing number of scientific articles on this topic. The first proposals were limited to pure duration models. Later, the impact of duration on instantaneous volatility was analyzed. More recently, Manganelli (2002) included volume in a vector model. In this document, we extend his work by including the bid-ask spread in the analysis through a vector autoregressive model. The conditional means of spread, volume and duration, along with the volatility of returns, evolve through transaction events based on an exponential formulation we call the Exponential Multivariate Autoregressive Conditional Model (EMACM). In our proposal, there are no constraints on the parameters of the VAR model. This facilitates maximum likelihood estimation of the model and allows the use of simple likelihood ratio hypothesis tests to specify the model and obtain some clues about the interdependency structure of the variables. In parallel, the problem of stock price forecasting is addressed through an integrated approach in which, besides the modeling of high frequency financial data, a contemporaneous ordered probit model is used. Here, EMACM captures the dynamics that high frequency variables present, and its forecasting function is taken as a proxy for the contemporaneous information necessary to the pricing model.
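The EMACM equations themselves are not reproduced in the abstract. As a flavor of the exponential autoregressive conditional family it extends, here is a sketch of a univariate logarithmic ACD-type recursion for trade durations; this is a hypothetical stand-in for illustration, not the thesis's multivariate model:

```python
import numpy as np

def simulate_log_acd(n, omega=0.02, alpha=0.08, beta=0.9, seed=0):
    """Simulate durations x_i = psi_i * eps_i with an exponential (log) recursion:
    log psi_i = omega + alpha * log x_{i-1} + beta * log psi_{i-1}.
    The log form keeps psi_i positive with no parameter constraints."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, n)   # unit-mean i.i.d. innovations
    x = np.empty(n)
    log_psi = 0.0
    for i in range(n):
        x[i] = np.exp(log_psi) * eps[i]
        log_psi = omega + alpha * np.log(x[i]) + beta * log_psi
    return x

durations = simulate_log_acd(5000)
print(durations.mean(), durations.std())  # clustered, overdispersed durations
```

The exponential formulation is what removes positivity constraints on the parameters, which is the estimation convenience the abstract emphasizes.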
26

Determinism and predictability in extreme event systems

Birkholz, Simon 12 May 2016 (has links)
In the last decades, extreme events, i.e., high-magnitude phenomena that cannot be described within the realm of Gaussian probability distributions, have been observed in a multitude of physical systems. While statistical methods allow for a reliable identification of extreme event systems, the mechanism underlying extreme events is not understood, owing to their rare occurrence and their onset under conditions that are difficult to reproduce. It is thus desirable to identify extreme event scenarios that can serve as a test bed. Optical systems exhibiting extreme events have been found to be ideal for such tests, and it is now desirable to find further examples to improve the understanding of extreme events. In this thesis, multifilamentation formed by femtosecond laser pulses is analyzed. Observation of the spatio-temporal dynamics of multifilamentation shows a heavy-tailed fluence probability distribution. This finding implies the onset of extreme events during multifilamentation. Linear analysis gives hints about the processes that drive the formation of extreme events, and nonlinear time series analysis provides information on determinism and chaos in the system. The analysis of the multifilaments is compared to an analysis of extreme event time series from ocean wave measurements and from the supercontinuum output of an optical fiber. The analysis performed in this work shows fundamental differences in the extreme event mechanism. 
While the extreme events in the optical fiber system are ruled by stochastic fluctuations of amplified quantum noise, in the multifilament and ocean systems extreme events appear as a result of the classical mechanical process of turbulence, which implies their predictability. In this work, the predictability of extreme events is demonstrated on multifilament time series in a brief time window before the onset of the extreme event.
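The heavy-tailed distributions at the heart of this analysis are what operational extreme-event criteria probe. As an illustration (not taken from the thesis), the common rogue-wave rule flags events exceeding twice the significant height, i.e. twice the mean of the highest third of events:

```python
import numpy as np

def is_rogue(peaks):
    """Flag events exceeding twice the significant height H_{1/3}
    (the common oceanographic rogue-wave criterion: H > 2 * H_{1/3})."""
    h_sig = np.mean(np.sort(peaks)[-len(peaks) // 3:])  # mean of highest third
    return peaks > 2 * h_sig

# Rayleigh-like peaks (linear wave theory) produce almost no rogue events;
# a heavy-tailed distribution produces markedly more.
rng = np.random.default_rng(4)
print(is_rogue(rng.rayleigh(1.0, 100_000)).mean())    # on the order of 1e-3
print(is_rogue(rng.lognormal(0, 1, 100_000)).mean())  # tens of times larger
```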
27

Essays on nonparametric estimation of asset pricing models

Dalderop, Jeroen Wilhelmus Paulus January 2018 (has links)
This thesis studies the use of nonparametric econometric methods to reconcile the empirical behaviour of financial asset prices with theoretical valuation models. The confrontation of economic theory with asset price data requires various functional form assumptions about the preferences and beliefs of investors. Nonparametric methods provide a flexible class of models that can prevent misspecification of agents' utility functions or the distribution of asset returns. Evidence for potential nonlinearity is seen in the presence of non-Gaussian distributions and excessive volatility of stock returns, or non-monotonic stochastic discount factors in option prices. More robust model specifications are therefore likely to contribute to risk management and return predictability, and lend credibility to economists' assertions. Each of the chapters in this thesis relaxes certain functional form assumptions that seem most important for understanding certain asset price data. Chapter 1 focuses on the state-price density in option prices, which confounds the nonlinearity in both the preferences and the beliefs of investors. To understand both sources of nonlinearity in equity prices, Chapter 2 introduces a semiparametric generalization of the standard representative agent consumption-based asset pricing model. Chapter 3 returns to option prices to understand the relative importance of changes in the distribution of returns and in the shape of the pricing kernel. More specifically, Chapter 1 studies the use of noisy high-frequency data to estimate the time-varying state-price density implicit in European option prices. A dynamic kernel estimator of the conditional pricing function and its derivatives is proposed that can be used for model-free risk measurement. Infill asymptotic theory is derived that applies when the pricing function is either smoothly varying or driven by diffusive state variables. Trading times and moneyness levels are modelled by marked point processes to capture intraday trading patterns. A simulation study investigates the performance of the estimator using an iterated plug-in bandwidth in various scenarios. Empirical results using S&P 500 E-mini European option quotes find significant time-variation at intraday frequencies. An application to delta- and minimum-variance hedging further illustrates the use of the estimator. Chapter 2 proposes a semiparametric asset pricing model to measure how consumption and dividend policies depend on unobserved state variables, such as economic uncertainty and risk aversion. Under a flexible specification of the stochastic discount factor, the state variables are recovered from cross-sections of asset prices and volatility proxies, and the shape of the policy functions is identified from the pricing functions. The model leads to closed-form price-dividend ratios under polynomial approximations of the unknown functions and affine state variable dynamics. In the empirical application, uncertainty and risk aversion are separately identified from size-sorted stock portfolios, exploiting the heterogeneous impact of uncertainty on dividend policy across small and large firms. I find an asymmetric and convex response in consumption (-) and dividend growth (+) to uncertainty shocks, which, together with moderate uncertainty aversion, can generate large leverage effects and divergence between macroeconomic and stock market volatility. 
Chapter 3 studies the nonparametric identification and estimation of projected pricing kernels implicit in the pricing of options, the underlying asset, and a risk-free bond. The sieve minimum-distance estimator based on conditional moment restrictions avoids the need to compute ratios of estimated risk-neutral and physical densities, and leads to stable estimates even in regions with low probability mass. The conditional empirical likelihood (CEL) variant of the estimator is used to extract implied densities that satisfy the pricing restrictions while incorporating the forward-looking information from option prices. Moreover, I introduce density combinations in the CEL framework to measure the relative importance of changes in the physical return distribution and in the pricing kernel. The nonlinear dynamic pricing kernels can be used to understand return predictability, and provide model-free quantities that can be compared against those implied by structural asset pricing models.
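The dynamic kernel estimator of Chapter 1 is not specified in the abstract; as a much simpler flavor of kernel-based pricing-function estimation, a plain Nadaraya-Watson smoother of option prices over moneyness might look like this, with all data and parameters hypothetical:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, bandwidth):
    """Kernel regression estimate of E[y | x] on x_grid using a Gaussian kernel."""
    u = (x_grid[:, None] - x[None, :]) / bandwidth  # grid-by-observation distances
    w = np.exp(-0.5 * u**2)                         # kernel weights
    return (w @ y) / w.sum(axis=1)

# Toy use: smooth noisy option-price-like observations against moneyness.
rng = np.random.default_rng(5)
m = rng.uniform(0.8, 1.2, 2000)                               # moneyness
price = np.maximum(1 - m, 0) + 0.05 + 0.01 * rng.normal(size=2000)
grid = np.linspace(0.8, 1.2, 9)
print(nadaraya_watson(grid, m, price, bandwidth=0.02))
```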
28

Extremes in events and dynamics : a nonlinear data analysis perspective on the past and present dynamics of the Indian summer monsoon

Malik, Nishant January 2011 (has links)
To identify extreme changes in the past dynamics of the Indian Summer Monsoon (ISM), I propose a new approach based on quantifying the fluctuations of a nonlinear similarity measure, in order to identify regimes of distinct dynamical complexity in short time series. I provide an analytical derivation of the relationship between the new measure and dynamical invariants of the underlying system, such as dimension and Lyapunov exponents. A statistical test is also developed to estimate the significance of the identified transitions. The method is validated by uncovering bifurcation structures in several paradigmatic models, revealing more complex transitions than traditional Lyapunov exponents do. In a real-world application, we apply the method to identify millennial-scale dynamical transitions in Pleistocene proxy records of the south Asian summer monsoon system. We infer that many of these transitions are induced by the external forcing of solar insolation and are also affected by internal forcings on monsoonal dynamics, i.e., the glaciation cycles of the Northern Hemisphere and the onset of the tropical Walker circulation. Although this new method has general applicability, it is particularly useful for analysing short palaeo-climate records. Rainfall during the ISM over the Indian subcontinent occurs in the form of enormously complex spatiotemporal patterns arising from the underlying dynamics of atmospheric circulation and varying topography. I present a detailed analysis of summer monsoon rainfall over the Indian peninsula using Event Synchronization (ES), a measure of nonlinear correlation for point processes such as rainfall. First, using hierarchical clustering, I identify principal regions where the dynamics of monsoonal rainfall are more coherent or homogeneous. I also provide a method to reconstruct the time-delay patterns of rain events. Moreover, further analysis is carried out employing the tools of complex network theory. This study provides valuable insights into the spatial organization, scales, and structure of the 90th and 94th percentile rainfall events during the ISM (June to September). I furthermore analyse the influence of different critical synoptic atmospheric systems and the impact of the steep Himalayan topography on rainfall patterns. The presented method not only helps in visualising the structure of the extreme-event rainfall fields, but also identifies the water vapor pathways and decadal-scale moisture sinks over the region. Furthermore, a simple scheme based on complex networks is presented to decipher the spatial intricacies and temporal evolution of monsoonal rainfall patterns over the last six decades, together with supplementary results on the evolution of monsoonal rainfall extremes over the last sixty years.
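Event Synchronization for point processes such as rainfall events can be sketched as follows. This is a fixed-lag simplification of the dynamic-lag criterion of Quian Quiroga et al., given for illustration and not the thesis's exact implementation:

```python
import numpy as np

def event_synchronization(t1, t2, tau):
    """Symmetrized count of events in two point processes that have a partner
    event in the other process within +/- tau, normalized by event counts."""
    c12 = sum(np.any(np.abs(t2 - t) <= tau) for t in t1)  # events of t1 matched in t2
    c21 = sum(np.any(np.abs(t1 - t) <= tau) for t in t2)  # events of t2 matched in t1
    return (c12 + c21) / (2 * np.sqrt(len(t1) * len(t2)))

# Two rain-gauge-like event series (event days over ten years), half shared.
rng = np.random.default_rng(6)
a = np.sort(rng.choice(3650, 120, replace=False)).astype(float)
b = np.sort(np.concatenate([a[:60] + rng.integers(0, 2, 60),
                            rng.choice(3650, 60).astype(float)]))
print(event_synchronization(a, b, tau=1))  # well above the unrelated-series level
```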
29

Zu cervicalen Distorsionsverletzungen und deren Auswirkungen auf posturographische Schwankungsmuster / On cervical whiplash injuries and their effects on posturographic sway patterns

Gutschow, Stephan January 2008 (has links)
Introduction & problem definition: Disorders following acceleration injuries of the cervical spine can often be classified and diagnosed only inadequately. Yet an explicit diagnosis is necessary as a basis for adequate therapy as well as for any claims arising under insurance law. The development of suitable diagnostic methods is therefore in the interest of patients as well as payers. Apart from disorders of the soft tissues, the function of the neck musculature is almost always impaired after such trauma; in particular, the sensory function of the cervical spine musculature, which participates in the regulation of equilibrium, is disturbed. As a result, it can be assumed that postural control is also disturbed. 
The aim of this study was therefore to examine the possibly disturbed postural balance after a whiplash injury of the cervical spine with apparatus-supported methods, in order to diagnose the injury unambiguously. Methods: A posturographic measuring system based on force-moment sensors was used to record the postural balance regulation of 478 control subjects and 85 patients who had suffered a whiplash injury. Data analysis was carried out with linear as well as nonlinear time series methods in order to characterize the balance regulation optimally. In this way it can be determined whether specific differences in the plantar pressure distribution can be classified between patients with a whiplash injury and the subjects of the control group. Results: The best classification was achieved by parameters describing the variation of postural balance regulation in phases in which the amplitude differences of the plantar pressure distribution were relatively small. The analyses showed significant differences in postural balance between the group of whiplash patients and the control group. The most significant differences (highest discrimination rates) were observed in measurements in quiet two-legged stance with closed eyes. Discussion: Although the results achieved support the hypothesis above, it must be conceded that postural balance showed a high individual variation in all measurement positions, so no universal sway pattern could be classified for either group as a whole. An individual prediction of group membership is therefore impossible. As a result, the measurement technology and the nonlinear time series analyses used here contribute to the gain of knowledge and to the description of postural control after whiplash injury, but at present they cannot contribute to an unambiguous determination of a whiplash injury in an individual case.
30

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks

Rech, Gianluigi January 2001 (has links)
This dissertation consists of three essays. In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If it is rejected, a kth-order general polynomial is used as a base for estimating all submodels by ordinary least squares. The combination of regressors leading to the lowest value of the model selection criterion is selected. The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes a unified framework for artificial neural network modelling. Linearity is tested and the selection of regressors performed by the methodology developed in the first essay. The number of hidden units is determined by a procedure based on a sequence of Lagrange multiplier (LM) tests. Serial correlation of the errors and parameter constancy are checked by LM tests as well. A Monte Carlo study, the two classical series of the lynx and the sunspots, and an application to the monthly S&P 500 index return series are used to demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in the second essay, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared by forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. The findings are that 1) the linear models very often outperform the neural network ones and 2) the modelling approach to neural networks developed in this thesis stands up well in comparison with the other neural network modelling methods considered here. / Diss. Stockholm: Handelshögskolan, 2002.
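The model class of the second essay, a single hidden-layer feedforward autoregressive ANN, can be written down in a few lines; the sketch below assumes logistic hidden units, and all parameter values are hypothetical:

```python
import numpy as np

def ar_nn_forecast(y_lags, linear, hidden):
    """One-step forecast from a single hidden-layer autoregressive ANN:
    y_hat = a0 + a'y_lags + sum_j b_j * logistic(g_j0 + g_j'y_lags)."""
    a0, a = linear
    z = a0 + a @ y_lags                                  # linear AR part
    for b, g0, g in hidden:
        z += b / (1.0 + np.exp(-(g0 + g @ y_lags)))      # logistic hidden unit
    return z

# Hypothetical AR(2) specification with one hidden unit.
linear = (0.1, np.array([0.5, -0.2]))
hidden = [(0.8, -1.0, np.array([2.0, 0.0]))]
print(ar_nn_forecast(np.array([1.2, 0.4]), linear, hidden))
```

In the essay's framework, the LM test sequence decides how many such hidden units the data support, with zero units collapsing the model back to the linear AR benchmark.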
