31

Extremes in events and dynamics : a nonlinear data analysis perspective on the past and present dynamics of the Indian summer monsoon

Malik, Nishant January 2011 (has links)
To identify extreme changes in the past dynamics of the Indian Summer Monsoon (ISM), I propose a new approach based on quantifying fluctuations of a nonlinear similarity measure, which identifies regimes of distinct dynamical complexity in short time series. I provide an analytical derivation of the relationship between the new measure and dynamical invariants of the underlying system, such as dimension and Lyapunov exponents. A statistical test is also developed to estimate the significance of the identified transitions. The method is validated by uncovering bifurcation structures in several paradigmatic models, where it detects more complex transitions than traditional Lyapunov exponents. As a real-world application, the method is used to identify millennial-scale dynamical transitions in Pleistocene proxy records of the south Asian summer monsoon system. I infer that many of these transitions are induced by the external forcing of solar insolation and are also affected by forcings internal to monsoonal dynamics, i.e., the glaciation cycles of the Northern Hemisphere and the onset of the tropical Walker circulation. Although the new method is generally applicable, it is particularly useful for analysing short palaeoclimate records. Rainfall during the ISM over the Indian subcontinent occurs in the form of enormously complex spatiotemporal patterns, owing to the underlying dynamics of atmospheric circulation and varying topography. I present a detailed analysis of summer monsoon rainfall over the Indian peninsula using Event Synchronization (ES), a measure of nonlinear correlation for point processes such as rainfall. First, using hierarchical clustering, I identify principal regions where the dynamics of monsoonal rainfall are more coherent or homogeneous. I also provide a method to reconstruct the time-delay patterns of rain events. Further analysis is then carried out with the tools of complex network theory. 
This study provides valuable insights into the spatial organization, scales, and structure of the 90th and 94th percentile rainfall events during the ISM (June to September). I furthermore analyse the influence of different critical synoptic atmospheric systems and the impact of the steep Himalayan topography on rainfall patterns. The presented method not only helps in visualising the structure of extreme-event rainfall fields, but also identifies the water vapor pathways and decadal-scale moisture sinks over the region. Furthermore, a simple scheme based on complex networks is presented to decipher the spatial intricacies and temporal evolution of monsoonal rainfall patterns over the last six decades. Some supplementary results on the evolution of monsoonal rainfall extremes over the last sixty years are also presented. / Um Extremereignisse in der Dynamik des indischen Sommermonsuns (ISM) in der geologischen Vergangenheit zu identifizieren, schlage ich einen neuartigen Ansatz basierend auf der Quantifikation von Fluktuationen in einem nichtlinearen Ähnlichkeitsmaß vor. Dieser reagiert empfindlich auf Zeitabschnitte mit deutlichen Veränderungen in der dynamischen Komplexität kurzer Zeitreihen. Ein mathematischer Zusammenhang zwischen dem neuen Maß und dynamischen Invarianten des zugrundeliegenden Systems wie fraktalen Dimensionen und Lyapunovexponenten wird analytisch hergeleitet. Weiterhin entwickle ich einen statistischen Test zur Schätzung der Signifikanz der so identifizierten dynamischen Übergänge. Die Stärken der Methode werden durch die Aufdeckung von Bifurkationsstrukturen in paradigmatischen Modellsystemen nachgewiesen, wobei im Vergleich zu den traditionellen Lyapunovexponenten eine Identifikation komplexerer dynamischer Übergänge möglich ist. 
Wir wenden die neu entwickelte Methode zur Analyse realer Messdaten an, um ausgeprägte dynamische Veränderungen auf Zeitskalen von Jahrtausenden in Klimaproxydaten des südasiatischen Sommermonsunsystems während des Pleistozäns aufzuspüren. Dabei zeigt sich, dass viele dieser Übergänge durch den externen Einfluss der veränderlichen Sonneneinstrahlung, sowie durch dem Klimasystem interne Einflussfaktoren auf das Monsunsystem (Eiszeitzyklen der nördlichen Hemisphäre und Einsatz der tropischen Walkerzirkulation) induziert werden. Trotz seiner Anwendbarkeit auf allgemeine Zeitreihen ist der diskutierte Ansatz besonders zur Untersuchung von kurzen Paläoklimazeitreihen geeignet. Die während des ISM über dem indischen Subkontinent fallenden Niederschläge treten, bedingt durch die zugrundeliegende Dynamik der atmosphärischen Zirkulation und topographische Einflüsse, in äußerst komplexen, raumzeitlichen Mustern auf. Ich stelle eine detaillierte Analyse der Sommermonsunniederschläge über der indischen Halbinsel vor, die auf Ereignissynchronisation (ES) beruht, einem Maß für die nichtlineare Korrelation von Punktprozessen wie Niederschlagsereignissen. Mit hierarchischen Clusteringalgorithmen identifiziere ich zunächst Regionen mit besonders kohärenten oder homogenen Monsunniederschlägen. Dabei können auch die Zeitverzögerungsmuster von Regenereignissen rekonstruiert werden. Darüber hinaus führe ich weitere Analysen auf Basis der Theorie komplexer Netzwerke durch. Diese Studien ermöglichen wertvolle Einsichten in räumliche Organisation, Skalen und Strukturen von starken Niederschlagsereignissen oberhalb der 90% und 94% Perzentilen während des ISM (Juni bis September). Weiterhin untersuche ich den Einfluss von verschiedenen, kritischen synoptischen Systemen der Atmosphäre sowie der steilen Topographie des Himalayas auf diese Niederschlagsmuster. 
Die vorgestellte Methode ist nicht nur geeignet, die Struktur extremer Niederschlagsereignisse zu visualisieren, sondern kann darüber hinaus über der Region atmosphärische Transportwege von Wasserdampf und Feuchtigkeitssenken auf dekadischen Skalen identifizieren. Weiterhin wird ein einfaches, auf komplexen Netzwerken basierendes Verfahren zur Entschlüsselung der räumlichen Feinstruktur und Zeitentwicklung von Monsunniederschlagsextremen während der vergangenen 60 Jahre vorgestellt.
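The Event Synchronization measure named in the abstract can be sketched in a few lines. This is a minimal illustration with a fixed coincidence window `tau`, not the adaptive-window definition commonly used in the ES literature; the function name and toy event times are illustrative.

```python
import numpy as np

def event_synchronization(tx, ty, tau):
    """Symmetric event-synchronization strength of two event-time arrays
    (e.g. days with rainfall above a percentile threshold), counting
    events that are followed within tau by an event in the other series."""
    tx, ty = np.asarray(tx, float), np.asarray(ty, float)
    def count(a, b):
        # events in a with at least one event of b within (0, tau] afterwards
        return sum(1 for t in a if np.any((b - t > 0) & (b - t <= tau)))
    norm = np.sqrt(len(tx) * len(ty))
    return (count(tx, ty) + count(ty, tx)) / norm if norm > 0 else 0.0

# toy example: y fires one step after every x event, so sync is maximal
q_sync = event_synchronization([1, 10, 20, 30], [2, 11, 21, 31], tau=2)
q_none = event_synchronization([1, 2, 3], [100, 200], tau=2)
```

Applied on a grid of rainfall stations, pairwise values of this kind form the weighted network that the complex-network analysis then operates on.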
32

Zu cervicalen Distorsionsverletzungen und deren Auswirkungen auf posturographische Schwankungsmuster / On cervical whiplash injuries and their effects on posturographic sway patterns

Gutschow, Stephan January 2008 (has links)
Einleitung & Problemstellung: Beschwerden nach Beschleunigungsverletzungen der Halswirbelsäule sind oft nur unzureichend einzuordnen und diagnostizierbar. Eine eindeutige Diagnostik ist jedoch für eine entsprechende Therapie wie auch möglicherweise entstehende versicherungsrechtliche Forderungen notwendig. Die Entwicklung eines geeigneten Diagnoseverfahrens liegt damit im Interesse von Betroffenen wie auch Kostenträgern. Neben Störungen der Weichteilgewebe ist fast immer die Funktion der Halsmuskulatur in Folge eines Traumas beeinträchtigt. Dabei wird vor allem die sensorische Funktion der HWS-Muskulatur, die an der Regulation des Gleichgewichts beteiligt ist, gestört. Infolgedessen kann angenommen werden, dass es zu einer Beeinträchtigung der Gleichgewichtsregulation kommt. Die Zielstellung der Arbeit lautete deshalb, die möglicherweise gestörte Gleichgewichtsregulation nach einem Trauma im HWS-Bereich apparativ zu erfassen, um so die Verletzung eindeutig diagnostizieren zu können. Methodik: Unter Verwendung eines posturographischen Messsystems mit Kraftmomentensensorik wurden bei 478 Probanden einer Vergleichsgruppe und bei 85 Probanden eines Patientenpools Kraftmomente unter der Fußsohle als Äußerung der posturalen Balanceregulation aufgezeichnet. Die gemessenen Balancezeitreihen wurden nichtlinear analysiert, um die hohe Variabilität der Gleichgewichtsregulation optimal zu beschreiben. Über die dabei gewonnenen Parameter kann überprüft werden, ob sich spezifische Unterschiede im Schwankungsverhalten anhand der plantaren Druckverteilung zwischen HWS-Traumatisierten und den Probanden der Kontrollgruppe klassifizieren lassen. Ergebnisse: Die beste Klassifizierung konnte dabei über Parameter erzielt werden, die das Schwankungsverhalten in Phasen beschreiben, in denen die Amplitudenschwankungen relativ gering ausgeprägt waren. 
Die Analysen ergaben signifikante Unterschiede im Balanceverhalten zwischen der Gruppe HWS-traumatisierter Probanden und der Vergleichsgruppe. Die höchsten Trennbarkeitsraten wurden dabei durch Messungen im ruhigen beidbeinigen Stand mit geschlossenen Augen erzielt. Diskussion: Das posturale Balanceverhalten wies jedoch in allen Messpositionen eine hohe individuelle Varianz auf, so dass kein allgemeingültiges Schwankungsmuster für eine Gruppengesamtheit klassifiziert werden konnte. Eine individuelle Vorhersage der Gruppenzugehörigkeit ist damit nicht möglich. Die verwendete Messtechnik und die angewandten Auswerteverfahren tragen somit zwar zu einem Erkenntnisgewinn und zur Beschreibung des Gleichgewichtsverhaltens nach HWS-Traumatisierung bei. Sie können jedoch zum derzeitigen Stand für den Einzelfall keinen Beitrag zu einer eindeutigen Bestimmung eines Schleudertraumas leisten. / Introduction & Problem definition: Disorders following acceleration injuries of the cervical spine can often be classified and diagnosed only inadequately. An unambiguous diagnosis, however, is necessary as a basis for adequate therapy as well as for claims that may arise under insurance law. The development of a suitable diagnostic method is therefore in the interest of patients as well as insurers. Apart from disorders of the soft tissues, the function of the neck musculature is almost always impaired after such a trauma. In particular, the sensory function of the cervical spine musculature, which participates in the regulation of equilibrium, is disturbed. It can therefore be assumed that postural control is impaired as well. The aim of this study was thus to examine the possibly disturbed postural balance after a whiplash injury of the cervical spine with apparatus-supported methods, so that the injury can be diagnosed unambiguously. 
Methods: A posturographic measuring system based on force-moment sensors was used to record the postural balance regulation of 478 control subjects and 85 patients who had suffered a whiplash injury. The data were analysed with linear as well as nonlinear time series methods in order to characterise the balance regulation in an optimal way. This makes it possible to determine whether specific differences in the plantar pressure distribution can be used to classify patients with a whiplash injury against the subjects of the control group. Results: The best classification was achieved by parameters describing the variation of the postural balance regulation in phases in which the amplitude fluctuations of the plantar pressure distribution were relatively small. The analyses showed significant differences in postural balance between the group of patients with whiplash injuries and the control group. The most significant differences (highest discrimination rates) were observed for measurements in a quiet two-legged stance with closed eyes. Discussion: Although the results support the above hypothesis, it must be conceded that postural balance showed a high individual variation in all measurement positions, so that no universal sway pattern could be established for either group as a whole. An individual prediction of group membership is therefore impossible. The measurement technology and the nonlinear time series analyses employed thus contribute to the understanding and description of postural control after whiplash injury, but at present they cannot provide an unambiguous determination of a whiplash injury in an individual case.
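The windowed view described in the results, fluctuation behavior during phases of relatively small amplitude, can be illustrated roughly as follows. This is a hypothetical sketch: the thesis's actual nonlinear parameters are not specified in the abstract, so a simple mean absolute increment over low-amplitude windows stands in for them, and the surrogate signal is a random walk rather than real force-moment data.

```python
import numpy as np

def quiet_phase_variability(x, win=100):
    """Split a balance time series into windows, keep the 'quiet' ones
    (amplitude at or below the median window amplitude), and summarize
    their fluctuation pattern by the mean absolute increment."""
    x = np.asarray(x, float)
    n = len(x) // win
    wins = x[:n * win].reshape(n, win)           # one row per window
    amp = wins.std(axis=1)                       # window amplitude
    quiet = wins[amp <= np.median(amp)]          # low-amplitude phases
    return float(np.abs(np.diff(quiet, axis=1)).mean())

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000))        # surrogate sway signal
v = quiet_phase_variability(series)
```

A classifier would then compare such per-subject parameters between the whiplash and control groups.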
33

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks

Rech, Gianluigi January 2001 (has links)
This dissertation consists of three essays. In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If it is rejected, a kth-order general polynomial is used as a base for estimating all submodels by ordinary least squares, and the combination of regressors leading to the lowest value of the model selection criterion is selected. The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes a unified framework for artificial neural network modelling. Linearity is tested and the regressors are selected by the methodology developed in the first essay. The number of hidden units is determined by a procedure based on a sequence of Lagrange multiplier (LM) tests; serial correlation of the errors and parameter constancy are checked by LM tests as well. A Monte Carlo study, the two classical series of the lynx and the sunspots, and an application to the monthly S&P 500 index return series demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in the second essay, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared by forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. 
The findings are that 1) the linear models very often outperform the neural network ones and 2) the modelling approach to neural networks developed in this thesis compares well with the other neural network modelling methods considered here. / Diss. Stockholm : Handelshögskolan, 2002. Spikblad saknas.
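The first essay's core idea, expand the unknown regression in a polynomial of candidate lags and keep the regressor set that minimizes a model selection criterion, can be sketched as follows. This simplified version skips the LM linearity pre-test, uses single-variable powers only (no cross terms), and uses AIC as the criterion; all names and the toy AR(2) process are illustrative.

```python
import itertools
import numpy as np

def select_lags(y, max_lag=3, order=3):
    """Fit, by OLS, a polynomial expansion (powers 1..order) of every
    candidate lag subset and return the subset minimizing AIC."""
    y = np.asarray(y, float)
    T = len(y) - max_lag
    target = y[max_lag:]
    lagged = {k: y[max_lag - k:len(y) - k] for k in range(1, max_lag + 1)}
    best = (np.inf, ())
    for r in range(1, max_lag + 1):
        for subset in itertools.combinations(range(1, max_lag + 1), r):
            X = np.column_stack(
                [np.ones(T)] + [lagged[k] ** p for k in subset
                                for p in range(1, order + 1)])
            beta, *_ = np.linalg.lstsq(X, target, rcond=None)
            resid = target - X @ beta
            aic = T * np.log(resid @ resid / T) + 2 * X.shape[1]
            best = min(best, (aic, subset))
    return best[1]

# toy AR(2) process: lags 1 and 2 matter, lag 3 does not
rng = np.random.default_rng(1)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
sel = select_lags(y)
```

The selected lag set would then feed the neural network specification stage of essay II.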
34

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks /

Rech, Gianluigi, January 1900 (has links)
Diss. Stockholm : Handelshögskolan, 2002.
35

[en] IDENTIFICATION MECHANISMS OF SPURIOUS DIVISIONS IN THRESHOLD AUTOREGRESSIVE MODELS / [pt] MECANISMOS DE IDENTIFICAÇÃO DE DIVISÕES ESPÚRIAS EM MODELOS DE REGRESSÃO COM LIMIARES

ANGELO SERGIO MILFONT PEREIRA 10 December 2002 (has links)
[pt] O objetivo desta dissertação é propor um mecanismo de testes para a avaliação dos resultados obtidos em uma modelagem TS-TARX. A principal motivação é encontrar uma solução para um problema comum na modelagem TS-TARX: os modelos espúrios que são gerados durante o processo de divisão do espaço das variáveis independentes. O modelo é uma heurística baseada em análise de árvore de regressão, como discutido por Breiman [3] (1984). O modelo proposto para a análise de séries temporais é chamado TARX (Threshold AutoRegressive with eXternal variables). A idéia central é encontrar limiares que separem regimes que podem ser explicados através de modelos lineares. Este processo é um algoritmo que preserva o método de regressão por mínimos quadrados recursivo (MQR). Combinando a árvore de decisão com a técnica de regressão MQR, o modelo se tornou o TS-TARX (Tree Structured Threshold AutoRegression with eXternal variables). Será estendido aqui o trabalho iniciado por Aranha [1] (2001), onde, a partir de uma base de dados conhecida, um algoritmo eficiente gera uma árvore de decisão por meio de regras e as equações de regressão estimadas para cada um dos regimes encontrados. Este procedimento pode gerar alguns modelos espúrios, ou por construção, devido à divisão binária da árvore, ou pelo fato de não existir neste momento uma metodologia de comparação dos modelos resultantes. Será proposta uma metodologia, através de sucessivos testes de Chow [5] (1960), que identificará modelos espúrios e reduzirá a quantidade de regimes encontrados e, consequentemente, de parâmetros a estimar. A complexidade do modelo final gerado é reduzida a partir da identificação de redundâncias, sem perder o poder preditivo dos modelos TS-TARX. O trabalho conclui com exemplos ilustrativos e algumas aplicações em bases de dados sintéticas e casos reais que auxiliarão o entendimento. 
/ [en] The goal of this dissertation is to propose a test mechanism to evaluate the results obtained from the TS-TARX modeling procedure. The main motivation is to find a solution to a usual problem related to TS-TARX modeling: spurious models are generated in the process of dividing the space of the independent variables. The model is a heuristic based on regression tree analysis, as discussed by Breiman [3] (1984). The model used to estimate the parameters of the time series is a TARX (Threshold AutoRegressive with eXternal variables). The main idea is to find thresholds that split the independent variable space into regimes which can be described by a local linear model. In this process, the recursive least squares regression method is preserved. From the combination of regression tree analysis and recursive least squares regression techniques, the model becomes TS-TARX (Tree Structured Threshold AutoRegression with eXternal variables). The work initiated by Aranha [1] (2001) will be extended. In his work, from a given data base, an efficient algorithm generates a decision tree based on splitting rules, together with the corresponding regression equations for each of the regimes found. Spurious models may be generated either by this building procedure, or because a procedure to compare the resulting models had not yet been proposed. To fill this gap, a methodology based on a series of consecutive Chow tests [5] (1960) is proposed. The Chow tests provide the tools to identify spurious models and to reduce the number of regimes found. The complexity of the final model and the number of parameters to estimate are thereby reduced through the identification and elimination of redundancies, without harming the predictive power of the TS-TARX model. This work concludes with illustrative examples and some applications to synthetic and real data that will help the reader's understanding.
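The merging step rests on the classical Chow (1960) test for equality of regression coefficients across two regimes: if the pooled regression fits about as well as the two regime-specific ones, the split is likely spurious. A minimal sketch of the F statistic (the TS-TARX tree logic and recursive least squares are omitted; the toy data are illustrative):

```python
import numpy as np

def chow_test(X1, y1, X2, y2):
    """Chow F statistic: do two regimes share one regression?"""
    def ssr(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    k = X1.shape[1]
    s1, s2 = ssr(X1, y1), ssr(X2, y2)
    s_pool = ssr(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    dof = len(y1) + len(y2) - 2 * k
    return ((s_pool - s1 - s2) / k) / ((s1 + s2) / dof)

rng = np.random.default_rng(2)
X1 = np.column_stack([np.ones(200), rng.normal(size=200)])
X2 = np.column_stack([np.ones(200), rng.normal(size=200)])
y1 = X1 @ [1.0, 2.0] + rng.normal(size=200)
y_same = X2 @ [1.0, 2.0] + rng.normal(size=200)    # same coefficients
y_break = X2 @ [5.0, -2.0] + rng.normal(size=200)  # structural break
f_same = chow_test(X1, y1, X2, y_same)    # modest, F(k, dof)-like
f_break = chow_test(X1, y1, X2, y_break)  # very large
```

A small F (insignificant against the F(k, dof) distribution) suggests merging the two regimes back into one.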
36

Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams / Aplicando ferramentas de análise de séries temporais não lineares e algoritmos de agrupamento estáveis para a detecção de mudanças de conceito em fluxos de dados

Fausto Guzzo da Costa 17 August 2017 (has links)
Several industrial, scientific and commercial processes produce open-ended sequences of observations which are referred to as data streams. We can understand the phenomena responsible for such streams by analyzing the data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, which behavior changes, however, render obsolete. External influences are regarded as the main agent acting on the underlying phenomena to produce such modifications along time, for example new investments and market policies impacting stocks, or human intervention in the climate. In the context of Machine Learning, a vast research branch investigates the detection of such behavior changes, which are also referred to as concept drifts. By detecting drifts, one can indicate the best moments to update the model, thereby improving prediction results and the understanding, and eventually the control, of other influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised, and the second on unsupervised, learning algorithms. The former faces great issues due to the infeasibility of labeling when streams are produced at high frequencies and in large volumes. The latter lacks theoretical foundations to provide detection guarantees. In addition, both paradigms fail to adequately represent temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts by tackling two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time dependency representation. Our unsupervised approach is motivated by Carlsson and Memoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. 
To take full advantage of this framework, we employed Takens' embedding theorem to make data statistically independent after being mapped to phase spaces. Independent data were then grouped using the Permutation-Invariant Single-Linkage clustering algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm that respects the stability property proposed by Carlsson and Memoli. Our algorithm outputs dendrograms (seen as data models), which are proven to be equivalent to ultrametric spaces; the detection of concept drifts is therefore possible by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt and the other gradual changes. The results confirm that our approach is capable of detecting concept drifts, both abrupt and gradual, although it is most adequate for complicated scenarios. The main contributions of this thesis are: i) the usage of Takens' embedding theorem as a tool to provide statistical independence to data streams; ii) the implementation of PISL in conjunction with GH (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as other algorithms to detect concept drifts. / Diversos processos industriais, científicos e comerciais produzem sequências de observações continuamente, teoricamente infinitas, denominadas fluxos de dados. Pela análise das recorrências e das mudanças de comportamento desses fluxos, é possível obter informações sobre o fenômeno que os produziu. A inferência de modelos estáveis para tais fluxos é suportada pelo estudo das recorrências dos dados, enquanto é prejudicada pelas mudanças de comportamento. 
Essas mudanças são produzidas principalmente por influências externas ainda desconhecidas pelos modelos vigentes, tal como ocorre quando novas estratégias de investimento surgem na bolsa de valores, ou quando há intervenções humanas no clima, etc. No contexto de Aprendizado de Máquina (AM), várias pesquisas têm sido realizadas para investigar essas variações nos fluxos de dados, referidas como mudanças de conceito. Sua detecção permite que os modelos possam ser atualizados a fim de apurar a predição, a compreensão e, eventualmente, controlar as influências que governam o fluxo de dados em estudo. Nesse cenário, algoritmos supervisionados sofrem com a limitação para rotular os dados quando esses são gerados em alta frequência e grandes volumes, e algoritmos não supervisionados carecem de fundamentação teórica para prover garantias na detecção de mudanças. Além disso, algoritmos de ambos paradigmas não representam adequadamente as dependências temporais entre observações dos fluxos. Nesse contexto, esta tese de doutorado introduz uma nova metodologia para detectar mudanças de conceito, na qual duas deficiências de ambos paradigmas de AM são confrontados: i) a instabilidade envolvida na modelagem dos dados, e ii) a representação das dependências temporais. Essa metodologia é motivada pelo arcabouço teórico de Carlsson e Memoli, que provê uma propriedade de estabilidade para algoritmos de agrupamento hierárquico com relação à permutação dos dados. Para usufruir desse arcabouço, as observações são embutidas pelo teorema de imersão de Takens, transformando-as em independentes. Esses dados são então agrupados pelo algoritmo Single-Linkage Invariante à Permutação (PISL), o qual respeita a propriedade de estabilidade de Carlsson e Memoli. A partir dos dados de entrada, esse algoritmo gera dendrogramas (ou modelos), que são equivalentes a espaços ultramétricos. 
Modelos sucessivos são comparados pela distância de Gromov-Hausdorff a fim de detectar mudanças de conceito no fluxo. Como resultado, as divergências dos modelos são de fato associadas a mudanças nos dados. Experimentos foram realizados, um considerando mudanças abruptas e o outro mudanças graduais. Os resultados confirmam que a metodologia proposta é capaz de detectar mudanças de conceito, tanto abruptas quanto graduais, no entanto ela é mais adequada para cenários mais complicados. As contribuições principais desta tese são: i) o uso do teorema de imersão de Takens para transformar os dados de entrada em independentes; ii) a implementação do algoritmo PISL em combinação com a distância de Gromov-Hausdorff (chamado PISLGH); iii) a comparação da metodologia proposta com outras da literatura em diferentes cenários; e, finalmente, iv) a disponibilização de um pacote em R (chamado streamChaos) que provê tanto ferramentas para processar fluxos de dados não lineares quanto diversos algoritmos para detectar mudanças de conceito.
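The first step of the pipeline, the delay-coordinate embedding prescribed by Takens' theorem, is simple to state in code; subsequent stages (PISL clustering and Gromov-Hausdorff comparison of dendrograms) are not reproduced here. The dimension and delay values below are arbitrary illustrations, not the thesis's choices.

```python
import numpy as np

def takens_embedding(x, dim=3, delay=2):
    """Map a scalar series into dim-dimensional delay vectors
    (x_t, x_{t+delay}, ..., x_{t+(dim-1)*delay})."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

t = np.linspace(0, 8 * np.pi, 400)
pts = takens_embedding(np.sin(t), dim=3, delay=5)   # 390 points in R^3
```

Each sliding window of the stream would be embedded this way before clustering, so that consecutive phase-space models can be compared.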
37

Detekce kauzality v časových řadách pomocí extrémních hodnot / Detection of causality in time series using extreme values

Bodík, Juraj January 2021 (has links)
This thesis deals with the following problem. Let us have two stationary time series with heavy-tailed marginal distributions. We want to detect whether they have a causal relation, i.e. whether a change in one of them causes a change in the other. The question of distinguishing between causality and correlation is essential in many different science fields. The usual methods for causality detection are not well suited if the causal mechanisms only manifest themselves in extremes. In this thesis, we propose a new method that can help distinguish between correlation and causality in such a nontraditional case. We define the so-called causal tail coefficient for time series, which, under some assumptions, correctly detects the asymmetrical causal relations between different time series. We rigorously prove this claim, and we also propose a method for statistically estimating the causal tail coefficient from a finite number of data. The advantage is that this method works even if nonlinear relations and common ancestors are present. Moreover, we mention how our method can help detect a time delay between the two time series. We show how our method performs in simulations. Finally, we show on a real dataset how this method works, discussing a cause of...
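The flavor of the causal tail coefficient can be conveyed with a deliberately simplified sample version: when one series is extreme, record the highest rank the other series attains at the same time or shortly afterwards, and compare the two directions. This is not the thesis's estimator, whose definition and assumptions are more involved; the heavy-tailed toy model below is my own.

```python
import numpy as np

def causal_tail_coefficient(x, y, q=0.95, lag=1):
    """When x exceeds its q-quantile, average the highest rank (in
    [0, 1]) that y attains then or `lag` steps later. Values near 1
    suggest x -> y causation in the tails; lower values do not."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ry = np.argsort(np.argsort(y)) / (len(y) - 1.0)   # ranks of y
    idx = np.where(x > np.quantile(x, q))[0]
    idx = idx[idx + lag < len(y)]
    return float(np.mean([max(ry[i], ry[i + lag]) for i in idx]))

# toy model: heavy-tailed x drives y with one step of delay
rng = np.random.default_rng(3)
n = 20000
eps = rng.pareto(2.0, size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.3 * x[t - 1] + eps[t]
y = 0.8 * np.concatenate(([0.0], x[:-1])) + rng.pareto(2.0, size=n)
g_xy = causal_tail_coefficient(x, y)   # close to 1
g_yx = causal_tail_coefficient(y, x)   # noticeably smaller
```

The asymmetry g_xy > g_yx is what indicates the causal direction, even though x and y are strongly correlated both ways.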
38

Applying Goodness-Of-Fit Techniques In Testing Time Series Gaussianity And Linearity

Jahan, Nusrat 05 August 2006 (has links)
In this study, we present two new frequency-domain tests for testing the Gaussianity and linearity of a sixth-order stationary univariate time series. Both are two-stage tests. The first stage is a test for the Gaussianity of the series: under Gaussianity, the estimated normalized bispectrum has an asymptotic chi-square distribution with two degrees of freedom. If Gaussianity is rejected, the test proceeds to the second stage, which tests for linearity. Under linearity with non-Gaussian errors, the estimated normalized bispectrum has an asymptotic non-central chi-square distribution with two degrees of freedom and constant noncentrality parameter; if the process is nonlinear, the noncentrality parameter is nonconstant. At each stage, empirical distribution function (EDF) goodness-of-fit (GOF) techniques are applied to the estimated normalized bispectrum by comparing the empirical CDF with the appropriate null asymptotic distribution. The two specific methods investigated are the Anderson-Darling and Cramér-von Mises tests. Under Gaussianity, the distribution is completely specified, and application is straightforward. If Gaussianity is rejected, however, the proposed application of the EDF tests involves a transformation to normality. The performance of the tests, and a comparison of the EDF tests with existing time- and frequency-domain tests, are investigated under a variety of circumstances through simulation. For illustration, the tests are applied to a number of data sets popular in the time series literature.
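The first-stage machinery is easy to sketch, because under Gaussianity the null distribution of the estimated normalized bispectrum is a fully specified chi-square with two degrees of freedom (equivalently, an exponential with mean 2). Below is a minimal Cramér-von Mises computation against that null, with synthetic stand-in data rather than an actual bispectrum estimate:

```python
import numpy as np

def cramer_von_mises(sample, cdf):
    """Cramer-von Mises statistic W^2 between a sample's EDF and a
    fully specified hypothesized CDF."""
    z = np.sort(np.asarray(sample, float))
    n = len(z)
    u = cdf(z)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2.0 * n)) ** 2)

chi2_2_cdf = lambda x: 1.0 - np.exp(-x / 2.0)   # chi-square(2) CDF

rng = np.random.default_rng(4)
null_sample = rng.exponential(scale=2.0, size=1000)  # distributed as chi2(2)
alt_sample = rng.exponential(scale=4.0, size=1000)   # violates the null
w_null = cramer_von_mises(null_sample, chi2_2_cdf)   # small
w_alt = cramer_von_mises(alt_sample, chi2_2_cdf)     # large
```

Large values of W^2 relative to its tabulated critical values would lead to rejecting Gaussianity and proceeding to the linearity stage.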
39

Estimation of a class of nonlinear time series models.

Sando, Simon Andrew January 2004 (has links)
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitude in additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and a number of estimation schemes are available. The fundamental difficulty in estimating the parameters of these types of signals is their nonlinear characteristics, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem that the true noise-free phase curve is unobservable. The currently most popular methods involve phase differencing followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution is removed via demodulation, and the same procedure is applied to the next parameter, and so on. This is clearly an issue, in that errors in the estimation of the high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques: given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios. 
Updating is done in a multivariable manner to remove the inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes presented, which include likelihood, least squares and Bayesian estimation schemes. Other issues of importance to the full estimation problem, namely error in the time variable, non-constant amplitude, and unknown model order, are also considered.
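As a baseline for the iterative-refinement theme, the naive one-shot estimator, unwrap the phase and fit the polynomial by least squares, takes only a few lines; it is exactly the kind of possibly biased initial estimate the dissertation proposes to refine. The signal parameters below are illustrative and the noise is kept small enough that unwrapping succeeds.

```python
import numpy as np

def fit_polynomial_phase(signal, degree):
    """One-shot baseline: unwrap the phase of a complex signal and fit
    the phase polynomial by ordinary least squares."""
    phase = np.unwrap(np.angle(signal))
    return np.polyfit(np.arange(len(signal)), phase, degree)

t = np.arange(256)
true_coefs = [2e-4, 0.05, 0.3]          # quadratic phase (linear chirp)
phi = np.polyval(true_coefs, t)
rng = np.random.default_rng(5)
noise = 0.05 * (rng.normal(size=256) + 1j * rng.normal(size=256))
coefs = fit_polynomial_phase(np.exp(1j * phi) + noise, degree=2)
```

At low signal-to-noise ratios this estimator degrades badly (phase unwrapping fails), which is precisely the regime the refinement schemes target.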
40

Detecting and quantifying causality from time series of complex systems

Runge, Jakob 18 August 2014 (has links)
Recent technological progress has produced a vast number of time series measurements of complex dynamical systems such as the climate system, the brain, or the global economic system. In the climate system, for example, processes such as the El Nino-Southern Oscillation (ENSO) interact with the Indian Monsoon in complex ways through teleconnections and feedbacks. Analyzing the measured data to reconstruct the causal mechanisms underlying these interactions is one way to understand complex systems, especially in view of the infinite-dimensional complexity of the physical processes. This dissertation pursues two main questions: (i) How can causal interactions be practically detected from multivariate time series? (ii) How can the strength of causal interactions between multiple processes be quantified in a clearly interpretable way? In the first part of the thesis, the theory for detecting and quantifying nonlinear causal interactions is developed further and important aspects of estimation theory are investigated. To quantify causal interactions, a physically motivated, information-theoretic approach is proposed, studied extensively in numerical experiments, and substantiated by analytical results. In the second part of the thesis, the developed methods are applied to test and generate hypotheses about causal interactions in climate data of the past hundred years. In a second, more exploratory step, a global surface-pressure dataset is analyzed in order to identify important driving processes in the atmosphere. Finally, it is shown how the quantification of interactions can shed light on possible qualitative changes in climate dynamics (tipping points), and how causally driving processes can be used for the optimal prediction of time series.
/ Today's scientific world produces a vastly growing, technology-driven abundance of time series data from such complex dynamical systems as the Earth's climate, the brain, or the global economy. In the climate system, multiple processes (e.g., the El Nino-Southern Oscillation (ENSO) or the Indian Monsoon) interact in a complex, intertwined way involving teleconnections and feedback loops. Using the data to reconstruct the causal mechanisms underlying these interactions is one way to better understand such complex systems, especially given the infinite-dimensional complexity of the underlying physical equations. In this thesis, two main research questions are addressed: (i) How can general causal interactions be practically detected from multivariate time series? (ii) How can the strength of causal interactions between multiple processes be quantified in a well-interpretable way? In the first part of this thesis, the theory of detecting and quantifying general (linear and nonlinear) causal interactions is developed, along with the important practical issues of estimation. To quantify causal interactions, a physically motivated, information-theoretic formalism is introduced. The formalism is extensively tested numerically and substantiated by rigorous mathematical results. In the second part of this thesis, the novel methods are applied to test and generate hypotheses on causal interactions in climate time series covering the 20th century up to the present. The results yield insights into the Walker circulation and the teleconnections of the ENSO system, for example with the Indian Monsoon. Further, in an exploratory way, a global surface pressure dataset is analyzed to identify key processes that drive and govern interactions in the global atmosphere. Finally, it is shown how quantifying interactions can be used to determine possible structural changes, termed tipping points, and as optimal predictors, here applied to the prediction of ENSO.
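Question (i) — detecting the direction of causal interaction from time series — can be illustrated with a deliberately simple baseline. The sketch below uses a linear, Granger-style variance-ratio test rather than the thesis's information-theoretic conditional-independence framework; the coupled autoregressive processes and the `granger_gain` helper are invented for this example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for i in range(2, n):
    x[i] = 0.6 * x[i-1] + rng.standard_normal()
    y[i] = 0.5 * y[i-1] + 0.8 * x[i-2] + rng.standard_normal()   # x drives y at lag 2

def granger_gain(src, dst, p=3):
    """Half the log ratio of residual variances: predicting dst from its own
    past alone vs. its own past plus src's past. Positive gain suggests a
    directed (linear) influence src -> dst."""
    rows = range(p, len(dst))
    own  = np.array([[dst[i-k] for k in range(1, p+1)] for i in rows])
    both = np.array([[*(dst[i-k] for k in range(1, p+1)),
                      *(src[i-k] for k in range(1, p+1))] for i in rows])
    tgt = dst[p:]
    def rss(X):
        D = np.column_stack([np.ones(len(X)), X])   # design matrix with intercept
        beta, *_ = np.linalg.lstsq(D, tgt, rcond=None)
        r = tgt - D @ beta
        return r @ r
    return 0.5 * np.log(rss(own) / rss(both))

print(granger_gain(x, y), granger_gain(y, x))
```

The gain for the true direction x → y should clearly exceed the gain for y → x. A linear test of this kind misses nonlinear couplings and can be confounded by common drivers — precisely the limitations that motivate the conditional-independence and momentary-information approach developed in the thesis.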
