31

Zu cervicalen Distorsionsverletzungen und deren Auswirkungen auf posturographische Schwankungsmuster / On cervical whiplash injuries and their effects on posturographic sway patterns

Gutschow, Stephan January 2008
Einleitung & Problemstellung: Beschwerden nach Beschleunigungsverletzungen der Halswirbelsäule sind oft nur unzureichend einzuordnen und diagnostizierbar. Eine eindeutige Diagnostik ist jedoch für eine entsprechende Therapie wie auch möglicherweise entstehende versicherungsrechtliche Forderungen notwendig. Die Entwicklung eines geeigneten Diagnoseverfahrens liegt damit im Interesse von Betroffenen wie auch Kostenträgern. Neben Störungen der Weichteilgewebe ist fast immer die Funktion der Halsmuskulatur in Folge eines Traumas beeinträchtigt. Dabei wird vor allem die sensorische Funktion der HWS-Muskulatur, die an der Regulation des Gleichgewichts beteiligt ist, gestört. In Folge dessen kann angenommen werden, dass es zu einer Beeinträchtigung der Gleichgewichtsregulation kommt. Die Zielstellung der Arbeit lautete deshalb, die möglicherweise gestörte Gleichgewichtsregulation nach einem Trauma im HWS-Bereich apparativ zu erfassen, um so die Verletzung eindeutig diagnostizieren zu können. Methodik: Unter Verwendung eines posturographischen Messsystems mit Kraftmomentensensorik wurden bei 478 Probanden einer Vergleichsgruppe und bei 85 Probanden eines Patientenpools Kraftmomente unter der Fußsohle als Äußerung der posturalen Balanceregulation aufgezeichnet. Die gemessenen Balancezeitreihen wurden nichtlinear analysiert, um die hohe Variabilität der Gleichgewichtsregulation optimal zu beschreiben. Über die dabei gewonnenen Parameter kann überprüft werden, ob sich spezifische Unterschiede im Schwankungsverhalten anhand der plantaren Druckverteilung zwischen HWS-Traumatisierten und den Probanden der Kontrollgruppe klassifizieren lassen. Ergebnisse: Die beste Klassifizierung konnte dabei über Parameter erzielt werden, die das Schwankungsverhalten in Phasen beschreiben, in denen die Amplitudenschwankungen relativ gering ausgeprägt waren. Die Analysen ergaben signifikante Unterschiede im Balanceverhalten zwischen der Gruppe HWS-traumatisierter Probanden und der Vergleichsgruppe. Die höchsten Trennbarkeitsraten wurden dabei durch Messungen im ruhigen beidbeinigen Stand mit geschlossenen Augen erzielt. Diskussion: Das posturale Balanceverhalten wies jedoch in allen Messpositionen eine hohe individuelle Varianz auf, so dass kein allgemeingültiges Schwankungsmuster für eine Gruppengesamtheit klassifiziert werden konnte. Eine individuelle Vorhersage der Gruppenzugehörigkeit ist damit nicht möglich. Die verwendete Messtechnik und die angewandten Auswerteverfahren tragen somit zwar zu einem Erkenntnisgewinn und zur Beschreibung des Gleichgewichtsverhaltens nach HWS-Traumatisierung bei. Sie können jedoch zum derzeitigen Stand für den Einzelfall keinen Beitrag zu einer eindeutigen Bestimmung eines Schleudertraumas leisten. / Introduction & Problem definition: Disorders after acceleration injuries of the cervical spine can often be classified and diagnosed only inadequately. However, an explicit diagnosis is necessary as a basis for adequate therapy as well as for possible claims under insurance law. The development of suitable diagnostic methods is therefore in the interest of patients as well as payers. Apart from soft-tissue disorders, the function of the neck musculature is almost always impaired after a trauma. In particular, the sensory function of the cervical spine musculature, which participates in the regulation of balance, is disturbed. As a result, it can be assumed that postural control is also disturbed. Therefore, the aim of this study was to examine the possibly disturbed postural balance after a whiplash injury of the cervical spine with apparatus-supported methods, in order to be able to diagnose the injury unambiguously. Methods: A posturographic measuring system based on force-moment sensors was used to record the postural balance regulation of 478 control subjects and 85 patients who had suffered a whiplash injury. Data analysis was carried out with linear as well as nonlinear time series methods in order to characterise the balance regulation optimally. The resulting parameters make it possible to check whether specific differences in the plantar pressure distribution can be used to classify patients with a whiplash injury versus the subjects of the control group. Results: The best classification was achieved by parameters which describe the variation of the postural balance regulation in phases in which the amplitude fluctuations of the plantar pressure distribution were relatively small. The analyses showed significant differences in postural balance between the group of patients with whiplash injuries and the control group. The most significant differences (highest discrimination rates) were observed for measurements in quiet two-legged stance with closed eyes. Discussion: Although the results support the hypothesis mentioned above, it must be conceded that postural balance showed a high individual variation in all measurement positions. Therefore, no universally valid sway pattern could be identified for either group as a whole, and an individual prediction of group membership is not possible. The measurement technology and the nonlinear time series analyses used here thus contribute to the understanding and description of postural control after a whiplash injury, but at present they cannot provide an unambiguous determination of a whiplash injury in an individual case.
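As an illustration of the kind of analysis described above, the following sketch computes a nonlinear sway measure (sample entropy, restricted to low-amplitude phases) from centre-of-pressure-like traces and compares two groups. Sample entropy, the window length, the quantile threshold and the surrogate random-walk traces are illustrative assumptions; the dissertation's actual parameters and classification procedure are not reproduced here.

```python
# A minimal sketch (not the thesis's actual pipeline) of one way to quantify
# postural sway from a centre-of-pressure (COP) recording and compare groups.
import numpy as np
from scipy.stats import mannwhitneyu

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D series: a nonlinear regularity measure."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(m):
        # Build all length-m templates and count pairs within tolerance r.
        templates = np.array([x[i:i + m] for i in range(n - m)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(dists <= r) - len(templates)) / 2.0   # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def low_amplitude_entropy(cop, win=100, quantile=0.25):
    """Sample entropy computed only on low-amplitude sway phases."""
    windows = [cop[i:i + win] for i in range(0, len(cop) - win, win)]
    amplitudes = np.array([np.ptp(w) for w in windows])
    calm = [w for w, a in zip(windows, amplitudes) if a <= np.quantile(amplitudes, quantile)]
    return np.mean([sample_entropy(w) for w in calm]) if calm else np.nan

rng = np.random.default_rng(0)
controls = [np.cumsum(rng.normal(0, 1, 2000)) for _ in range(20)]     # surrogate COP traces
patients = [np.cumsum(rng.normal(0, 1.3, 2000)) for _ in range(20)]   # hypothetical patient traces

se_c = [low_amplitude_entropy(c) for c in controls]
se_p = [low_amplitude_entropy(p) for p in patients]
print("Mann-Whitney U:", mannwhitneyu(se_c, se_p))
```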
32

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks

Rech, Gianluigi January 2001
This dissertation consists of three essays. In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If linearity is rejected, a kth-order general polynomial is used as a base for estimating all submodels by ordinary least squares. The combination of regressors leading to the lowest value of the model selection criterion is selected. The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes a unified framework for artificial neural network modelling. Linearity is tested and the selection of regressors performed by the methodology developed in essay I. The number of hidden units is determined by a procedure based on a sequence of Lagrange multiplier (LM) tests. Serial correlation of the errors and parameter constancy are checked by LM tests as well. A Monte Carlo study, the two classical series of the lynx and the sunspots, and an application to the monthly S&P 500 index return series are used to demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in essay II, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared in terms of forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. The findings are that 1) the linear models very often outperform the neural network ones and 2) the modelling approach to neural networks developed in this thesis compares well with the other neural network modelling methods considered here. / Diss. Stockholm : Handelshögskolan, 2002. Spikblad saknas.
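The linearity test of the first essay can be illustrated with a small LM-type statistic built on a polynomial expansion of the lagged regressors, as sketched below. The lag order, polynomial degree and the simulated AR(2) series are illustrative choices, not the essay's exact specification.

```python
# A rough sketch of an LM-type linearity test based on a polynomial expansion
# of the lags: regress the linear-model residuals on the linear terms plus all
# second- and third-order lag products, and use n*R^2 as the test statistic.
import numpy as np
from itertools import combinations_with_replacement
from scipy.stats import chi2

def lm_linearity_test(y, p=2, degree=3):
    y = np.asarray(y, float)
    n = len(y) - p
    # Lagged regressors x_t = (y_{t-1}, ..., y_{t-p})
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    yt = y[p:]

    # Step 1: fit the linear AR model and keep its residuals.
    Z1 = np.column_stack([np.ones(n), X])
    u = yt - Z1 @ np.linalg.lstsq(Z1, yt, rcond=None)[0]

    # Step 2: auxiliary regression on linear terms plus polynomial terms.
    poly = [np.prod(X[:, list(c)], axis=1)
            for d in range(2, degree + 1)
            for c in combinations_with_replacement(range(p), d)]
    Z2 = np.column_stack([Z1] + poly)
    e = u - Z2 @ np.linalg.lstsq(Z2, u, rcond=None)[0]

    # LM statistic: n * R^2, asymptotically chi-square under linearity.
    lm = n * (1.0 - e @ e / (u @ u))
    df = len(poly)
    return lm, 1.0 - chi2.cdf(lm, df)

rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(2, 500):                       # linear AR(2): the null should not be rejected
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
print(lm_linearity_test(y))
```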
33

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks /

Rech, Gianluigi, January 1900
Diss. Stockholm : Handelshögskolan, 2002.
34

[en] IDENTIFICATION MECHANISMS OF SPURIOUS DIVISIONS IN THRESHOLD AUTOREGRESSIVE MODELS / [pt] MECANISMOS DE IDENTIFICAÇÃO DE DIVISÕES ESPÚRIAS EM MODELOS DE REGRESSÃO COM LIMIARES

ANGELO SERGIO MILFONT PEREIRA 10 December 2002
[pt] O objetivo desta dissertação é propor um mecanismo de testes para a avaliação dos resultados obtidos em uma modelagem TS-TARX. A principal motivação é encontrar uma solução para um problema comum na modelagem TS-TARX: os modelos espúrios que são gerados durante o processo de divisão do espaço das variáveis independentes. O modelo é uma heurística baseada em análise de árvore de regressão, como discutido por Breiman [3, 1984]. O modelo proposto para a análise de séries temporais é chamado TARX (Threshold Autoregressive with eXternal variables). A idéia central é encontrar limiares que separem regimes que podem ser explicados através de modelos lineares. Este processo é um algoritmo que preserva o método de regressão por mínimos quadrados recursivo (MQR). Combinando a árvore de decisão com a técnica de regressão (MQR), o modelo se tornou o TS-TARX (Tree Structured Threshold AutoRegression with eXternal variables). Será estendido aqui o trabalho iniciado por Aranha em [1, 2001], onde, a partir de uma base de dados conhecida, um algoritmo eficiente gera uma árvore de decisão por meio de regras e as equações de regressão estimadas para cada um dos regimes encontrados. Este procedimento pode gerar alguns modelos espúrios ou por construção, devido à divisão binária da árvore, ou pelo fato de não existir neste momento uma metodologia de comparação dos modelos resultantes. Será proposta uma metodologia através de sucessivos testes de Chow [5, 1960] que identificará modelos espúrios e reduzirá a quantidade de regimes encontrados e, consequentemente, de parâmetros a estimar. A complexidade do modelo final gerado é reduzida a partir da identificação de redundâncias, sem perder o poder preditivo dos modelos TS-TARX. O trabalho conclui com exemplos ilustrativos e algumas aplicações em bases de dados sintéticas e casos reais que auxiliarão o entendimento. / [en] The goal of this dissertation is to propose a test mechanism to evaluate the results obtained from the TS-TARX modeling procedure. The main motivation is to find a solution to a usual problem related to TS-TARX modeling: spurious models are generated in the process of dividing the space of the independent variables. The model is a heuristic based on regression tree analysis, as discussed by Breiman [3, 1984]. The model used to estimate the parameters of the time series is a TARX (Threshold Autoregressive with eXternal variables). The main idea is to find thresholds that split the independent variable space into regimes which can be described by a local linear model. In this process, the recursive least squares regression method is preserved. From the combination of regression tree analysis and recursive least squares regression techniques, the model becomes TS-TARX (Tree Structured Threshold Autoregression with eXternal variables). The work initiated by Aranha in [1, 2001] will be extended. In his work, starting from a given data base, an efficient algorithm generates a decision tree based on splitting rules, together with the corresponding regression equations for each one of the regimes found. Spurious models may be generated either by construction, due to the binary splitting of the tree, or from the fact that a procedure to compare the resulting models had not yet been proposed. To fill this gap, a methodology based on a series of consecutive Chow tests [5, 1960] will be proposed. The Chow tests provide the tools to identify spurious models and to reduce the number of regimes found.
The complexity of the final model, and the number of parameters to estimate, are therefore reduced by the identification and elimination of redundancies, without compromising the predictive power of the TS-TARX model. This work concludes with illustrative examples and some applications to synthetic data and real cases that will help the reader's understanding.
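The regime-merging idea can be illustrated with a plain Chow test comparing two adjacent regimes fitted by OLS: a large p-value suggests the split is spurious and the regimes may be merged. The sketch below uses hypothetical regressors and the textbook Chow F statistic, not the dissertation's full TS-TARX machinery.

```python
# An illustrative sketch of flagging a spurious split between two adjacent
# regimes of a threshold model via a standard Chow test on OLS fits.
import numpy as np
from scipy.stats import f as f_dist

def ssr(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def chow_test(X1, y1, X2, y2):
    """Chow test of 'same coefficients in both regimes'."""
    k = X1.shape[1]
    ssr_pooled = ssr(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    ssr_split = ssr(X1, y1) + ssr(X2, y2)
    dof = len(y1) + len(y2) - 2 * k
    F = ((ssr_pooled - ssr_split) / k) / (ssr_split / dof)
    return F, 1.0 - f_dist.cdf(F, k, dof)

rng = np.random.default_rng(2)
# Two "regimes" generated by the *same* linear model -> the split is spurious.
X1 = np.column_stack([np.ones(80), rng.normal(size=80)])
X2 = np.column_stack([np.ones(90), rng.normal(size=90)])
y1 = X1 @ np.array([1.0, 0.8]) + rng.normal(scale=0.5, size=80)
y2 = X2 @ np.array([1.0, 0.8]) + rng.normal(scale=0.5, size=90)

F, p = chow_test(X1, y1, X2, y2)
print(f"F = {F:.2f}, p = {p:.3f}  -> merge the regimes if p is large")
```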
35

Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams / Aplicando ferramentas de análise de séries temporais não lineares e algoritmos de agrupamento estáveis para a detecção de mudanças de conceito em fluxos de dados

Fausto Guzzo da Costa 17 August 2017
Several industrial, scientific and commercial processes produce open-ended sequences of observations which are referred to as data streams. We can understand the phenomena responsible for such streams by analyzing data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, although behavior changes eventually invalidate such models. External influences are regarded as the main agents acting on the underlying phenomena to produce such modifications over time, such as new investments and market policies impacting stocks, human intervention in the climate, etc. In the context of Machine Learning, there is a vast research branch interested in investigating the detection of such behavior changes, which are also referred to as concept drifts. By detecting drifts, one can indicate the best moments to update the model, thereby improving prediction results as well as the understanding and, eventually, the control of other influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised, and the second on unsupervised learning algorithms. The former faces great difficulties because labeling is infeasible when streams are produced at high frequencies and in large volumes. The latter lacks the theoretical foundations to provide detection guarantees. In addition, both paradigms do not adequately represent temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts by tackling two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time dependency representation. Our unsupervised approach is motivated by Carlsson and Memoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. To take full advantage of this framework, we employed Takens' embedding theorem to make data statistically independent after being mapped to phase spaces. Independent data were then grouped using the Permutation-Invariant Single-Linkage Clustering Algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm, respecting the stability property proposed by Carlsson and Memoli. Our algorithm outputs dendrograms (seen as data models), which are proven to be equivalent to ultrametric spaces; the detection of concept drifts is therefore possible by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt and another considering gradual changes. Results confirm that our approach is capable of detecting both abrupt and gradual concept drifts, although it is better suited to more complicated scenarios. The main contributions of this thesis are: i) the usage of Takens' embedding theorem as a tool to provide statistical independence to data streams; ii) the implementation of PISL in conjunction with GH (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as other algorithms to detect concept drifts. / Diversos processos industriais, científicos e comerciais produzem sequências de observações continuamente, teoricamente infinitas, denominadas fluxos de dados. 
Pela análise das recorrências e das mudanças de comportamento desses fluxos, é possível obter informações sobre o fenômeno que os produziu. A inferência de modelos estáveis para tais fluxos é suportada pelo estudo das recorrências dos dados, enquanto é prejudicada pelas mudanças de comportamento. Essas mudanças são produzidas principalmente por influências externas ainda desconhecidas pelos modelos vigentes, tal como ocorre quando novas estratégias de investimento surgem na bolsa de valores, ou quando há intervenções humanas no clima, etc. No contexto de Aprendizado de Máquina (AM), várias pesquisas têm sido realizadas para investigar essas variações nos fluxos de dados, referidas como mudanças de conceito. Sua detecção permite que os modelos possam ser atualizados a fim de apurar a predição, a compreensão e, eventualmente, controlar as influências que governam o fluxo de dados em estudo. Nesse cenário, algoritmos supervisionados sofrem com a limitação para rotular os dados quando esses são gerados em alta frequência e grandes volumes, e algoritmos não supervisionados carecem de fundamentação teórica para prover garantias na detecção de mudanças. Além disso, algoritmos de ambos paradigmas não representam adequadamente as dependências temporais entre observações dos fluxos. Nesse contexto, esta tese de doutorado introduz uma nova metodologia para detectar mudanças de conceito, na qual duas deficiências de ambos paradigmas de AM são confrontados: i) a instabilidade envolvida na modelagem dos dados, e ii) a representação das dependências temporais. Essa metodologia é motivada pelo arcabouço teórico de Carlsson e Memoli, que provê uma propriedade de estabilidade para algoritmos de agrupamento hierárquico com relação à permutação dos dados. Para usufruir desse arcabouço, as observações são embutidas pelo teorema de imersão de Takens, transformando-as em independentes. Esses dados são então agrupados pelo algoritmo Single-Linkage Invariante à Permutação (PISL), o qual respeita a propriedade de estabilidade de Carlsson e Memoli. A partir dos dados de entrada, esse algoritmo gera dendrogramas (ou modelos), que são equivalentes a espaços ultramétricos. Modelos sucessivos são comparados pela distância de Gromov-Hausdorff a fim de detectar mudanças de conceito no fluxo. Como resultado, as divergências dos modelos são de fato associadas a mudanças nos dados. Experimentos foram realizados, um considerando mudanças abruptas e o outro mudanças graduais. Os resultados confirmam que a metodologia proposta é capaz de detectar mudanças de conceito, tanto abruptas quanto graduais, no entanto ela é mais adequada para cenários mais complicados. As contribuições principais desta tese são: i) o uso do teorema de imersão de Takens para transformar os dados de entrada em independentes; ii) a implementação do algoritmo PISL em combinação com a distância de Gromov-Hausdorff (chamado PISLGH); iii) a comparação da metodologia proposta com outras da literatura em diferentes cenários; e, finalmente, iv) a disponibilização de um pacote em R (chamado streamChaos) que provê tanto ferramentas para processar fluxos de dados não lineares quanto diversos algoritmos para detectar mudanças de conceito.
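A rough sketch of the pipeline, under simplifying assumptions: each window of the stream is delay-embedded (Takens), a single-linkage dendrogram is built per window, and drift is flagged when consecutive ultrametrics (cophenetic distances) diverge. Comparing cophenetic matrices entrywise is only a crude stand-in for the Gromov-Hausdorff comparison used in the thesis, and the embedding and window parameters are guesses.

```python
# Delay-embed each window, build a single-linkage dendrogram, and compare
# consecutive ultrametrics to score possible concept drift.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

def takens_embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def window_ultrametric(x):
    emb = takens_embed(x)
    Z = linkage(pdist(emb), method="single")    # single-linkage dendrogram
    return cophenet(Z)                          # condensed ultrametric distances

def drift_scores(stream, win=200):
    windows = [stream[i:i + win] for i in range(0, len(stream) - win + 1, win)]
    ultras = [window_ultrametric(w) for w in windows]
    # Distance between consecutive models: largest entrywise ultrametric gap.
    return [np.max(np.abs(a - b)) for a, b in zip(ultras, ultras[1:])]

rng = np.random.default_rng(3)
calm = np.sin(np.linspace(0, 60, 1000)) + 0.1 * rng.normal(size=1000)
drifted = 3 * np.sin(np.linspace(0, 120, 1000)) + 0.1 * rng.normal(size=1000)
scores = drift_scores(np.concatenate([calm, drifted]))
print(np.round(scores, 2))   # a jump is expected around the change point
```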
36

Detekce kauzality v časových řadách pomocí extrémních hodnot / Detection of causality in time series using extreme values

Bodík, Juraj January 2021
This thesis deals with the following problem: Let us have two stationary time series with heavy-tailed marginal distributions. We want to detect whether they have a causal relation, i.e., whether a change in one of them causes a change in the other. The question of distinguishing between causality and correlation is essential in many different science fields. Usual methods for causality detection are not well suited if the causal mechanisms only manifest themselves in extremes. In this thesis, we propose a new method that can help us distinguish between correlation and causality in such a nontraditional case. We define the so-called causal tail coefficient for time series, which, under some assumptions, correctly detects the asymmetrical causal relations between different time series. We rigorously prove this claim, and we also propose a method for statistically estimating the causal tail coefficient from a finite number of data points. The advantage is that this method works even if nonlinear relations and common ancestors are present. Moreover, we mention how our method can help detect a time delay between the two time series. We show how our method performs on some simulations. Finally, we show on a real dataset how this method works, discussing a cause of...
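The flavour of the causal tail coefficient can be conveyed with a simple estimator: check how extreme Y tends to be shortly after X is extreme, and vice versa. The window length, the use of empirical ranks and the averaging below are illustrative choices and do not reproduce the thesis's precise definition.

```python
# A hedged sketch of a tail coefficient in the spirit of the causal tail
# coefficient for time series: average the empirical CDF of y in a short
# window after the largest values of x, and compare with the reverse direction.
import numpy as np
from scipy.stats import rankdata

def tail_coefficient(x, y, k=50, lag_window=3):
    """Average empirical-CDF value of y near the k largest values of x."""
    n = len(x)
    Fy = rankdata(y) / n                      # empirical CDF of y at each t
    extreme_t = np.argsort(x)[-k:]            # times of the k largest x values
    vals = []
    for t in extreme_t:
        hi = min(n, t + lag_window + 1)
        vals.append(Fy[t:hi].max())           # strongest y response in the window
    return float(np.mean(vals))

rng = np.random.default_rng(4)
n = 10_000
x = rng.pareto(2.5, n)                                           # heavy-tailed driver
y = 0.7 * np.concatenate([[0], x[:-1]]) + rng.pareto(2.5, n)     # y driven by lagged x

g_xy = tail_coefficient(x, y)   # expected to be close to 1 (X causes Y)
g_yx = tail_coefficient(y, x)   # expected to be visibly smaller
print(f"Gamma(X->Y) ~ {g_xy:.2f}, Gamma(Y->X) ~ {g_yx:.2f}")
```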
37

Applying Goodness-Of-Fit Techniques In Testing Time Series Gaussianity And Linearity

Jahan, Nusrat 05 August 2006
In this study, we present two new frequency domain tests for testing the Gaussianity and linearity of a sixth-order stationary univariate time series. Both are two-stage tests. The first stage is a test for the Gaussianity of the series. Under Gaussianity, the estimated normalized bispectrum has an asymptotic chi-square distribution with two degrees of freedom. If Gaussianity is rejected, the test proceeds to the second stage, which tests for linearity. Under linearity with non-Gaussian errors, the estimated normalized bispectrum has an asymptotic non-central chi-square distribution with two degrees of freedom and constant noncentrality parameter. If the process is nonlinear, the noncentrality parameter is nonconstant. At each stage, empirical distribution function (EDF) goodness-of-fit (GOF) techniques are applied to the estimated normalized bispectrum by comparing the empirical CDF with the appropriate null asymptotic distribution. The two specific methods investigated are the Anderson-Darling and Cramér-von Mises tests. Under Gaussianity, the distribution is completely specified, and application is straightforward. However, if Gaussianity is rejected, the proposed application of the EDF tests involves a transformation to normality. The performance of the tests, and a comparison of the EDF tests to existing time and frequency domain tests, are investigated under a variety of circumstances through simulation. For illustration, the tests are applied to a number of data sets popular in the time series literature.
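The first-stage idea can be sketched directly: under Gaussianity the estimated normalized bispectrum values follow an asymptotic chi-square law with two degrees of freedom, so an EDF goodness-of-fit statistic such as Cramér-von Mises can be applied to them. In the sketch below the bispectrum values are simulated placeholders, since a full bispectrum estimator is beyond its scope.

```python
# Compare the empirical CDF of (stand-in) normalized bispectrum values with the
# chi-square(2) law predicted under Gaussianity, using a Cramer-von Mises test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stand-ins for the estimated normalized bispectrum over a grid of frequencies:
gaussian_case = rng.chisquare(df=2, size=400)                        # Gaussian null
nongaussian_case = rng.noncentral_chisquare(df=2, nonc=4.0, size=400)  # departure from the null

for name, sample in [("Gaussian-like", gaussian_case),
                     ("non-Gaussian-like", nongaussian_case)]:
    res = stats.cramervonmises(sample, "chi2", args=(2,))
    print(f"{name}: CvM statistic = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```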
38

Determinism and predictability in extreme event systems

Birkholz, Simon 12 May 2016
In den vergangenen Jahrzehnten wurden extreme Ereignisse, die nicht durch Gauß-Verteilungen beschrieben werden können, in einer Vielzahl an physikalischen Systemen beobachtet. Während statistische Methoden eine zuverlässige Identifikation von extremen Ereignissen ermöglichen, ist deren Entstehungsmechanismus nicht vollständig geklärt. Das Auftreten von extremen Ereignissen ist nicht vollkommen verstanden, da sie nur selten beobachtet werden können und häufig unter schwer reproduzierbaren Bedingungen auftreten. Deshalb ist es erstrebenswert, Experimente zu entwickeln, die eine einfache Beobachtung von extremen Ereignissen erlauben. In dieser Dissertation werden extreme Ereignisse untersucht, die bei Multi-Filamentation von Femtosekundenlaserimpulsen entstehen. In den Experimenten, die in dieser Dissertation vorgestellt werden, werden Multi-Filamente durch Hochgeschwindigkeitskameras analysiert. Die Untersuchung der raum-zeitlichen Dynamik der Multi-Filamente zeigt eine L-förmige Wahrscheinlichkeitsverteilung. Diese Beobachtung impliziert das Auftreten von extremen Ereignissen. Lineare Analyse liefert Hinweise auf die physikalischen Prozesse, die zur Entstehung der extremen Ereignisse führen, und nicht-lineare Zeitreihen-Analyse charakterisiert die Dynamik des Systems. Die Analyse der Multi-Filamente wird außerdem auf extreme Ereignisse in Wellen-Messungen und optische Superkontinua angewandt. Die durchgeführten Analysen zeigen Unterschiede in den physikalischen Prozessen, die zur Entstehung von extremen Ereignissen führen. Extreme Ereignisse in optischen Fasern werden durch stochastische Fluktuationen von verstärktem Quantenrauschen dominiert. In Multi-Filamenten und Ozeanwellen resultieren extreme Ereignisse dagegen aus klassischer mechanischer Turbulenz, was deren Vorhersagbarkeit impliziert. In dieser Arbeit wird anhand von Multi-Filament-Zeitreihen die Vorhersagbarkeit in einem kurzen Zeitfenster vor Auftreten des extremen Ereignisses bewiesen. / In the last decades, extreme events, i.e., high-magnitude phenomena that cannot be described within the realm of Gaussian probability distributions, have been observed in a multitude of physical systems. While statistical methods allow for a reliable identification of extreme event systems, the underlying mechanism behind extreme events is not fully understood. Extreme events remain poorly understood because of their rare occurrence and their onset under conditions that are difficult to reproduce. Thus, it is desirable to identify extreme event scenarios that can serve as a test bed. Optical systems exhibiting extreme events have been discovered to be ideal for such tests, and it is now desirable to find further examples to improve the understanding of extreme events. In this thesis, multifilamentation formed by femtosecond laser pulses is analyzed. Observation of the spatio-temporal dynamics of multifilamentation shows a heavy-tailed fluence probability distribution. This finding implies the onset of extreme events during multifilamentation. Linear analysis gives hints on the processes that drive the formation of extreme events. The multifilaments are also analyzed by nonlinear time series analysis, which provides information on determinism and chaos in the system. The analysis of the multifilaments is compared to an analysis of extreme event time series from ocean wave measurements and the supercontinuum output of an optical fiber. The analysis performed in this work shows fundamental differences in the extreme event mechanism. 
While the extreme events in the optical fiber system are ruled by the stochastic changes of amplified quantum noise, in the multifilament and the ocean system extreme events appear as a result of the classical mechanical process of turbulence. This implies the predictability of extreme events. In this work, the predictability of extreme events is proven to be possible in a brief time window before the onset of the extreme event.
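One simple way to flag the heavy-tailed, extreme-event character of a measured series is the common rogue-wave convention of counting events that exceed twice the significant height (the mean of the highest third of events), as sketched below. The simulated series are placeholders for measured fluence or wave-height data; this is not the thesis's analysis pipeline.

```python
# Count "extreme" events as those exceeding twice the significant height
# (mean of the highest third), and compare a Gaussian-like with a heavy-tailed series.
import numpy as np

rng = np.random.default_rng(8)
gaussian_like = np.abs(rng.normal(size=20_000))
heavy_tailed = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)

def extreme_event_fraction(peaks):
    top_third = np.sort(peaks)[-len(peaks) // 3:]
    significant_height = top_third.mean()
    return np.mean(peaks > 2.0 * significant_height)

for name, series in [("Gaussian-like", gaussian_like), ("heavy-tailed", heavy_tailed)]:
    print(f"{name}: fraction of extreme events = {extreme_event_fraction(series):.4f}")
```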
39

Estimation of a class of nonlinear time series models.

Sando, Simon Andrew January 2004
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitudes in additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and there are a number of estimation schemes available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem of the unobservability of the true noise-free phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution removed via demodulation, and the same procedure applied to the estimation of the next parameter, and so on. This is clearly an issue in that errors in the estimation of the high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques; i.e., given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios. Updating is done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes presented, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, when the amplitude is not constant, and when the model order is not known, are also considered.
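The joint-refinement idea can be sketched as a nonlinear least-squares update of all polynomial phase coefficients at once, starting from a rough (possibly biased) initial estimate, rather than estimating them one order at a time. The constant-amplitude model, noise level and starting perturbation below are illustrative assumptions, not the dissertation's exact schemes.

```python
# Jointly refine all polynomial phase coefficients of a noisy complex signal
# by nonlinear least squares, starting from a rough initial estimate.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
n = 512
t = np.arange(n) / n
true_coefs = np.array([0.4, 12.0, 35.0])          # phase = a0 + a1*t + a2*t^2 (radians)
signal = np.exp(1j * np.polyval(true_coefs[::-1], t))
signal += 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # additive complex noise

def residuals(coefs):
    model = np.exp(1j * np.polyval(coefs[::-1], t))
    r = signal - model
    return np.concatenate([r.real, r.imag])       # least_squares needs real residuals

rough_start = true_coefs + rng.normal(scale=0.5, size=3)   # biased initial estimate
fit = least_squares(residuals, rough_start)                # joint multivariable update
print("initial:", np.round(rough_start, 3))
print("refined:", np.round(fit.x, 3), " true:", true_coefs)
```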
40

遺傳模式在匯率上分析與預測之應用 / Genetic Models and Their Application in Exchange Rate Analysis and Forecasting

許毓云, Hsu, Yi-Yun Unknown Date
In time series analysis, we often find that the trend of dynamic data changes with time. Traditional model fitting often fails to explain such dynamic data well. Therefore, many scholars have developed various methods for model construction. The major drawback of most of these methods is that model selection is usually influenced by personal viewpoint and experience. This paper therefore presents a new genetic-based modeling approach for nonlinear time series. The research is based on the concepts of evolution theory and natural selection. In order to find a leading model for the nonlinear time series, we make use of the evolutionary rule of survival of the fittest. In the process of genetic evolution, the AIC (Akaike information criterion) is used as the fitness function, and the membership function of the best-fitting models is calculated as the performance index of each chromosome. An empirical example shows that the genetic model can give an efficient explanation in analyzing Taiwan exchange rates, especially when a structural change occurs.
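A minimal sketch of the genetic scheme described above: chromosomes encode which autoregressive lags enter the model, the AIC of the fitted model drives selection, and one-point crossover plus mutation produce the next generation. The GA settings, the maximum lag and the surrogate series are illustrative stand-ins for the paper's exchange-rate application.

```python
# Evolve binary chromosomes that select AR lags, using AIC as the fitness
# criterion (survival of the fittest = lowest AIC).
import numpy as np

rng = np.random.default_rng(7)
y = np.zeros(600)
for t in range(3, 600):                          # surrogate series driven by lags 1 and 3
    y[t] = 0.6 * y[t - 1] + 0.25 * y[t - 3] + rng.normal()

MAX_LAG, POP, GENS = 6, 30, 40

def aic(mask):
    lags = [i + 1 for i, bit in enumerate(mask) if bit]
    if not lags:
        return np.inf
    X = np.column_stack([np.ones(len(y) - MAX_LAG)] +
                        [y[MAX_LAG - l:len(y) - l] for l in lags])
    yt = y[MAX_LAG:]
    beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
    rss = np.sum((yt - X @ beta) ** 2)
    n = len(yt)
    return n * np.log(rss / n) + 2 * (len(lags) + 1)

pop = rng.integers(0, 2, size=(POP, MAX_LAG))
for _ in range(GENS):
    fitness = np.array([aic(ind) for ind in pop])
    order = np.argsort(fitness)                  # keep the fittest half as parents
    parents = pop[order[:POP // 2]]
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, MAX_LAG)
        child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
        flip = rng.random(MAX_LAG) < 0.1                    # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([aic(ind) for ind in pop])]
print("selected lags:", [i + 1 for i, bit in enumerate(best) if bit])
```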
