51

[en] IDENTIFICATION MECHANISMS OF SPURIOUS DIVISIONS IN THRESHOLD AUTOREGRESSIVE MODELS / [pt] MECANISMOS DE IDENTIFICAÇÃO DE DIVISÕES ESPÚRIAS EM MODELOS DE REGRESSÃO COM LIMIARES

ANGELO SERGIO MILFONT PEREIRA 10 December 2002 (has links)
The goal of this dissertation is to propose a testing mechanism to evaluate the results obtained from the TS-TARX modeling procedure. The main motivation is to solve a common problem in TS-TARX modeling: spurious models generated in the process of splitting the space of the independent variables. The model is a heuristic based on regression-tree analysis, as discussed by Breiman (3, 1984). The model used to estimate the time-series parameters is a TARX (Threshold Autoregressive with eXternal variables). The central idea is to find thresholds that split the independent-variable space into regimes, each of which can be described by a local linear model. The procedure preserves the recursive least squares (RLS) regression method. Combining regression-tree analysis with RLS regression gives the TS-TARX (Tree-Structured Threshold Autoregression with eXternal variables). This work extends the work initiated by Aranha (1, 2001), in which an efficient algorithm takes a given database and generates a decision tree of splitting rules together with the regression equations estimated for each regime found. This procedure may generate spurious models, either by construction, because of the binary splits of the tree, or because no methodology had yet been proposed to compare the resulting models. To fill this gap, a methodology based on a sequence of Chow tests (5, 1960) is proposed; it identifies spurious models and reduces the number of regimes found, and consequently the number of parameters to estimate. The complexity of the final model is thereby reduced through the identification and elimination of redundancies, without compromising the predictive power of TS-TARX models. The work concludes with illustrative examples and applications to synthetic and real data sets that aid understanding.
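The Chow (1960) test that the abstract relies on compares a pooled regression against separate per-regime regressions. The sketch below is not taken from the dissertation; the function names and the synthetic data are illustrative. It shows the standard F-statistic computation for two candidate regimes that a TS-TARX split might produce: if the test does not reject, the split is a candidate spurious division and the regimes can be merged.

```python
import numpy as np
from scipy import stats

def ssr(y, X):
    """Sum of squared residuals of an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def chow_test(y1, X1, y2, X2):
    """Standard Chow test: H0 = both regimes share one set of coefficients."""
    k = X1.shape[1]                      # number of regression parameters
    n1, n2 = len(y1), len(y2)
    ssr_pooled = ssr(np.concatenate([y1, y2]), np.vstack([X1, X2]))
    ssr_split = ssr(y1, X1) + ssr(y2, X2)
    F = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n1 + n2 - 2 * k))
    p = stats.f.sf(F, k, n1 + n2 - 2 * k)
    return F, p

# Illustrative data: two "regimes" that actually follow the same linear law,
# so the split should look spurious (a large p-value most of the time).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.5, size=200)
X = np.column_stack([np.ones(200), x])
F, p = chow_test(y[:100], X[:100], y[100:], X[100:])
print(f"Chow F = {F:.3f}, p-value = {p:.3f}")
```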
52

Employing nonlinear time series analysis tools with stable clustering algorithms for detecting concept drift on data streams / Aplicando ferramentas de análise de séries temporais não lineares e algoritmos de agrupamento estáveis para a detecção de mudanças de conceito em fluxos de dados

Fausto Guzzo da Costa 17 August 2017 (has links)
Several industrial, scientific and commercial processes produce open-ended sequences of observations referred to as data streams. We can understand the phenomena responsible for such streams by analyzing the data in terms of their inherent recurrences and behavior changes. Recurrences support the inference of more stable models, whereas behavior changes invalidate them. External influences are regarded as the main agents acting on the underlying phenomena to produce such modifications over time, for example new investments and market policies impacting stocks, or human intervention in the climate. In the context of Machine Learning, a vast research branch investigates the detection of such behavior changes, also referred to as concept drifts. By detecting drifts, one can indicate the best moments to update models, thereby improving prediction results and the understanding, and eventually the control, of the influences governing the data stream. There are two main concept drift detection paradigms: the first based on supervised and the second on unsupervised learning algorithms. The former faces great difficulties because labeling is infeasible when streams are produced at high frequencies and in large volumes. The latter lacks the theoretical foundations to provide detection guarantees. In addition, neither paradigm adequately represents temporal dependencies among data observations. In this context, we introduce a novel approach to detect concept drifts that tackles two deficiencies of both paradigms: i) the instability involved in data modeling, and ii) the lack of time-dependency representation. Our unsupervised approach is motivated by Carlsson and Memoli's theoretical framework, which ensures a stability property for hierarchical clustering algorithms with respect to data permutation. To take full advantage of this framework, we employ Takens' embedding theorem to make data statistically independent after mapping them to phase spaces. Independent data are then grouped using the Permutation-Invariant Single-Linkage clustering algorithm (PISL), an adapted version of the agglomerative Single-Linkage algorithm that respects the stability property proposed by Carlsson and Memoli. Our algorithm outputs dendrograms (seen as data models), which are provably equivalent to ultrametric spaces, so concept drifts can be detected by comparing consecutive ultrametric spaces using the Gromov-Hausdorff (GH) distance. As a result, model divergences are indeed associated with data changes. We performed two main experiments to compare our approach to others from the literature, one considering abrupt and the other gradual changes. The results confirm that our approach is capable of detecting both abrupt and gradual concept drifts, and that it is better suited to complicated scenarios. The main contributions of this thesis are: i) the use of Takens' embedding theorem as a tool to provide statistical independence to data streams; ii) the implementation of PISL in conjunction with the GH distance (called PISLGH); iii) a comparison of detection algorithms in different scenarios; and, finally, iv) an R package (called streamChaos) that provides tools for processing nonlinear data streams as well as algorithms to detect concept drifts.
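As a rough illustration of the pipeline described above (delay embedding, hierarchical clustering, then comparison of consecutive models), here is a minimal sketch. It uses plain single-linkage from SciPy and a crude distance between sorted cophenetic (ultrametric) distances as a stand-in for the Gromov-Hausdorff comparison; the thesis's PISL algorithm, the exact GH computation, and the streamChaos package are not reproduced, and the function names, embedding dimension, delay, and window size are arbitrary illustrative choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

def takens_embedding(x, dim=3, delay=5):
    """Map a scalar series to points in a dim-dimensional phase space."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

def ultrametric_profile(points):
    """Single-linkage dendrogram summarized by its sorted cophenetic distances."""
    Z = linkage(pdist(points), method="single")
    return np.sort(cophenet(Z))

def model_divergence(prof_a, prof_b):
    """Crude proxy for comparing consecutive ultrametric models."""
    m = min(len(prof_a), len(prof_b))
    return float(np.mean(np.abs(prof_a[:m] - prof_b[:m])))

# Synthetic stream: a sine-driven regime that abruptly changes frequency at t = 1500.
t = np.arange(3000)
stream = np.where(t < 1500, np.sin(0.05 * t), np.sin(0.2 * t))
stream = stream + 0.05 * np.random.default_rng(1).normal(size=3000)

window = 300
profiles = [ultrametric_profile(takens_embedding(stream[s:s + window]))
            for s in range(0, len(stream) - window, window)]
divergences = [model_divergence(a, b) for a, b in zip(profiles, profiles[1:])]
print(np.round(divergences, 3))  # a spike is expected near the change point
```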
53

Detekce kauzality v časových řadách pomocí extrémních hodnot / Detection of causality in time series using extreme values

Bodík, Juraj January 2021 (has links)
This thesis deals with the following problem: given two stationary time series with heavy-tailed marginal distributions, we want to detect whether they have a causal relation, i.e., whether a change in one of them causes a change in the other. The question of distinguishing between causality and correlation is essential in many different fields of science. Usual methods for causality detection are not well suited if the causal mechanisms manifest themselves only in the extremes. In this thesis, we propose a new method that can help distinguish between correlation and causality in such a nontraditional case. We define the so-called causal tail coefficient for time series, which, under some assumptions, correctly detects asymmetrical causal relations between different time series. We rigorously prove this claim, and we also propose a method for statistically estimating the causal tail coefficient from a finite number of data points. The advantage is that this method works even when nonlinear relations and common ancestors are present. Moreover, we mention how our method can help detect a time delay between the two time series. We show how our method performs on simulations. Finally, we show on a real dataset how this method works, discussing a cause of...
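The following sketch illustrates the flavor of such a coefficient: it asks whether extreme values of X tend to be accompanied, possibly with a short delay, by large values of Y, but not vice versa. The lag-window handling, the threshold choice, and the function name are simplifying assumptions for illustration, not the thesis's exact definition or its proven estimator.

```python
import numpy as np

def causal_tail_score(x, y, k=50, lag=2):
    """Average rank of Y shortly after the k largest values of X.
    Values near 1 suggest extremes of X are followed by extremes of Y."""
    n = len(x)
    ranks_y = np.argsort(np.argsort(y)) / (n - 1)      # empirical CDF values of Y
    extreme_idx = np.argsort(x)[-k:]                    # times of the k largest X
    extreme_idx = extreme_idx[extreme_idx < n - lag]
    scores = [ranks_y[i:i + lag + 1].max() for i in extreme_idx]
    return float(np.mean(scores))

# Heavy-tailed example where X drives Y with a one-step delay.
rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000)
y = np.roll(x, 1) + rng.pareto(2.0, size=5000)

print("X -> Y :", round(causal_tail_score(x, y), 3))   # expected close to 1
print("Y -> X :", round(causal_tail_score(y, x), 3))   # expected noticeably smaller
```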
54

Seismic Performance Comparison of a Fixed-Base Versus a Base-Isolated Office Building

Marrs, Nicholas Reidar 01 June 2013 (has links) (PDF)
The topic of this thesis is base isolation. The purpose of this thesis is to offer a relative understanding of the seismic performance enhancements that a typical 12-story steel office building can achieve through the implementation of base isolation technology. To reach this understanding, the structures of a fixed-base office building and a base-isolated office building of similar size and layout are designed, their seismic performance is compared, and a cost-benefit analysis is completed. The base isolation system that is utilized is composed of Triple Friction Pendulum (TFP) bearings. The work of this thesis is divided into four phases. First, in the building selection phase, the structural systems (SMF and SCBF), layout, location (San Diego, CA), and design parameters of the buildings are selected. Then, in the design phase, each structure is designed using modal response spectrum analysis in ETABS. In the analysis phase, nonlinear time history analyses at DBE and MCE levels are conducted in PERFORM-3D to obtain the related floor accelerations and interstory drifts. Finally, in the performance assessment phase, probable damage costs are computed using fragility curves and FEMA P-58 methodology in PACT. Damage costs are computed for each building and seismic demand level and the results are compared.
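The FEMA P-58 loss computation mentioned in the assessment phase is built on lognormal component fragility curves. As a toy illustration only (the drift demands, fragility medians, dispersions, and repair costs below are hypothetical, not the thesis's PACT inputs), this sketch converts a peak interstory drift into damage-state probabilities and an expected repair cost; evaluating it for the fixed-base and base-isolated demands is what enables the cost-benefit comparison.

```python
import numpy as np
from scipy.stats import norm

def damage_state_probs(demand, medians, beta=0.4):
    """P(being in each damage state) from lognormal fragilities (FEMA P-58 style)."""
    p_exceed = norm.cdf(np.log(demand / np.asarray(medians)) / beta)
    p_exceed = np.append(p_exceed, 0.0)          # no state beyond the last one
    return p_exceed[:-1] - p_exceed[1:]          # probability of each damage state

# Hypothetical drift-sensitive component: median drifts per damage state and repair costs.
medians = [0.005, 0.010, 0.021]                  # interstory drift ratios (assumed)
costs = [2_000, 8_000, 25_000]                   # USD per component (assumed)

for label, drift in [("fixed-base", 0.015), ("base-isolated", 0.004)]:
    probs = damage_state_probs(drift, medians)
    expected_cost = float(np.dot(probs, costs))
    print(f"{label:14s} drift={drift:.3f}  E[repair cost] ~ ${expected_cost:,.0f}")
```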
55

Mechanics of Flapping Flight: Analytical Formulations of Unsteady Aerodynamics, Kinematic Optimization, Flight Dynamics and Control

Taha, Haithem Ezzat Mohammed 04 December 2013 (has links)
A flapping-wing micro-air-vehicle (FWMAV) represents a complex multi-disciplinary system whose analysis invokes the frontiers of the aerospace engineering disciplines. From the aerodynamic point of view, a nonlinear, unsteady flow is created by the flapping motion. In addition, non-conventional contributors to the aerodynamic loads, such as the leading-edge vortex, become dominant in flight. On the other hand, the flight dynamics of a FWMAV constitutes a nonlinear, non-autonomous dynamical system. Furthermore, the stringent weight and size constraints that are always imposed on FWMAVs call for designs with minimal actuation. In addition to the numerous motivating applications, all of these features make FWMAVs an interesting research topic for engineers. In this dissertation, several challenging problems related to FWMAVs are considered. First, an analytical unsteady aerodynamic model that accounts for the leading-edge vortex contribution at a feasible computational burden is developed to enable sensitivity and optimization analyses, flight dynamics analysis, and control synthesis. Second, wing kinematics optimization is considered for both aerodynamic performance and maneuverability. For each case, an infinite-dimensional optimization problem is formulated using the calculus of variations to relax any unnecessary constraints induced by approximating the problem as a finite-dimensional one. As such, theoretical upper bounds for the aerodynamic performance and maneuverability are obtained. Third, a design methodology for the actuation mechanism is developed. The proposed actuation mechanism is able to provide the required kinematics for both hovering and forward flight using only one actuator. This is achieved by exploiting the nonlinearities of the wing dynamics to induce the saturation phenomenon, which transfers energy from one mode to another. Fourth, the nonlinear, time-periodic flight dynamics of FWMAVs is analyzed using direct and higher-order averaging. The region of applicability of direct averaging is determined and the effects of the aerodynamically induced parametric excitation are assessed. Finally, tools combining geometric control theory and averaging are used to derive analytic expressions for the Symmetric Products, which are vector fields that directly affect the acceleration of the averaged dynamics. A design optimization problem is then formulated to bring the maneuverability index/criterion early into the design process and to maximize FWMAV maneuverability near hover. / Ph. D.
56

Applying Goodness-Of-Fit Techniques In Testing Time Series Gaussianity And Linearity

Jahan, Nusrat 05 August 2006 (has links)
In this study, we present two new frequency-domain tests for testing the Gaussianity and linearity of a sixth-order stationary univariate time series. Both are two-stage tests. The first stage is a test for the Gaussianity of the series. Under Gaussianity, the estimated normalized bispectrum has an asymptotic chi-square distribution with two degrees of freedom. If Gaussianity is rejected, the test proceeds to the second stage, which tests for linearity. Under linearity with non-Gaussian errors, the estimated normalized bispectrum has an asymptotic non-central chi-square distribution with two degrees of freedom and a constant noncentrality parameter; if the process is nonlinear, the noncentrality parameter is nonconstant. At each stage, empirical distribution function (EDF) goodness-of-fit (GOF) techniques are applied to the estimated normalized bispectrum by comparing the empirical CDF with the appropriate null asymptotic distribution. The two specific methods investigated are the Anderson-Darling and Cramér-von Mises tests. Under Gaussianity, the distribution is completely specified and application is straightforward. However, if Gaussianity is rejected, the proposed application of the EDF tests involves a transformation to normality. The performance of the tests, and a comparison of the EDF tests with existing time- and frequency-domain tests, are investigated under a variety of circumstances through simulation. For illustration, the tests are applied to a number of data sets popular in the time series literature.
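To make the first-stage idea concrete, here is a minimal sketch of the goodness-of-fit step only, under the Gaussian null: if the estimated normalized-bispectrum statistics really follow a chi-square distribution with two degrees of freedom, an EDF test such as Cramér-von Mises should typically not reject. The bispectrum estimation itself is omitted; the values below are simulated stand-ins, and the chi-square scaling is taken from the abstract rather than reproduced from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-ins for the estimated normalized bispectrum statistics over a grid of
# bifrequencies: chi2(2) under Gaussianity, noncentral chi2 otherwise.
gaussian_case = rng.chisquare(df=2, size=500)
nongaussian_case = rng.noncentral_chisquare(df=2, nonc=3.0, size=500)

for label, sample in [("Gaussian series", gaussian_case),
                      ("non-Gaussian series", nongaussian_case)]:
    res = stats.cramervonmises(sample, "chi2", args=(2,))
    verdict = "reject Gaussianity" if res.pvalue < 0.05 else "fail to reject"
    print(f"{label:20s} CvM stat = {res.statistic:.3f}, p = {res.pvalue:.4f} -> {verdict}")
```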
57

Dynamical System Representation and Analysis of Unsteady Flow and Fluid-Structure Interactions

Hussein, Ahmed Abd Elmonem Ahmed 01 November 2018 (has links)
A dynamical system approach is utilized to reduce the representation order of unsteady fluid flows and fluid-structure interaction systems. This approach allows for a significant reduction in the computational cost of their numerical simulations, the implementation of optimization and control methodologies, and the assessment of their dynamic stability. In the first chapter, I present a new Lagrangian function to derive the equations of motion of unsteady point vortices. This representation is a reconciliation between Newtonian and Lagrangian mechanics, yielding a new approach to model the dynamics of these vortices. In the second chapter, I investigate the flutter of a helicopter rotor blade using a finite-state time approximation of the unsteady aerodynamics. The analysis revealed a new stability region that could not be determined under the assumption of a quasi-steady flow. In the third chapter, I implement the unsteady vortex lattice method to quantify the effects of tail flexibility on the propulsive efficiency of a fish, and determine that flexibility enhances propulsion. In the fourth chapter, I consider the stability of a flapping micro air vehicle and use different approaches to design the transition from hovering to forward flight. I determine that first-order averaging is not suitable and that time-periodic dynamics are required for the controller to achieve this transition. In the fifth chapter, I derive a mathematical model for the free motion of a two-body planar system representing a fish under the action of coupled dynamic and hydrodynamic loads. I conclude that pisciform fish are inherently stable under certain conditions that depend on the location of the center of mass. / Ph. D. / We present modeling approaches for the interaction between flying or swimming bodies and the surrounding fluids and consider their stability as they perform special maneuvers. The approaches are applied to rotating helicopter blades, fish-like robots, and micro-air vehicles. We develop and validate a new mathematical representation for the flow generated by moving or deforming elements. We also assess the effects of fast variations in the flow on the stability of a rotating helicopter blade. The results point to a new stable regime for their operation; in other words, the fast flow variations could stabilize the rotating blades. These results can also be applied to the stability analysis of rotating wind-turbine blades. We consider the effects of flexing a tail on the propulsive force of fish-like robots; the results show that adding flexibility enhances the efficiency of the fish propulsion. Inspired by the ability of some birds and insects to transition from hovering to forward motion, we thoroughly investigate different approaches to model and realize this transition. We determine that no simplification should be applied to the rigorous model representing flapping flight in order to model the transition phenomena correctly. Finally, we model the forward-swim dynamics of pisciform swimmers and determine the condition on the center of mass under which a robotic fish can maintain its stability. This condition could help in designing fish-like robots that perform stable underwater maneuvers.
58

Estimation of a class of nonlinear time series models.

Sando, Simon Andrew January 2004 (has links)
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitude in additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and a number of estimation schemes are available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem that the true phase curve is not directly observable in noise. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution is removed via demodulation, and the same procedure is applied to the next parameter, and so on. This is clearly an issue in that errors in the estimation of high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques; that is, given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios. Updating is done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes presented, which include likelihood, least squares and Bayesian estimation schemes. Other results important to the full estimation problem, namely when there is error in the time variable, when the amplitude is not constant, and when the model order is not known, are also considered.
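As context for the regression-based estimators the abstract discusses, here is a minimal baseline sketch (not the dissertation's iterative-refinement scheme): at high signal-to-noise ratio, the unwrapped phase of a constant-amplitude polynomial-phase signal can simply be regressed on powers of time. The degradation of exactly this kind of estimator at low SNR is what motivates the refinement techniques developed in the thesis; the coefficients and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n) / n

# Constant-amplitude polynomial-phase signal (degree 3) in additive complex noise.
true_coeffs = np.array([0.7, 40.0, -25.0, 60.0])   # phase = c0 + c1*t + c2*t^2 + c3*t^3
phase = np.polynomial.polynomial.polyval(t, true_coeffs)
noise_scale = 0.05                                  # high SNR; raise this to see the bias appear
signal = np.exp(1j * phase) + noise_scale * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Baseline estimator: unwrap the observed phase, then least-squares polynomial fit.
unwrapped = np.unwrap(np.angle(signal))
est_coeffs = np.polynomial.polynomial.polyfit(t, unwrapped, deg=3)

print("true :", np.round(true_coeffs, 2))
print("est  :", np.round(est_coeffs, 2))
```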
59

Detecting and quantifying causality from time series of complex systems

Runge, Jakob 18 August 2014 (has links)
Today's scientific world produces a vastly growing, technology-driven abundance of time series data from complex dynamical systems such as the Earth's climate, the brain, or the global economy. In the climate system, multiple processes (e.g., the El Nino-Southern Oscillation (ENSO) or the Indian Monsoon) interact in a complex, intertwined way involving teleconnections and feedback loops. Using the data to reconstruct the causal mechanisms underlying these interactions is one way to better understand such complex systems, especially given the infinite-dimensional complexity of the underlying physical equations. In this thesis, two main research questions are addressed: (i) How can general causal interactions be practically detected from multivariate time series? (ii) How can the strength of causal interactions between multiple processes be quantified in a well-interpretable way? In the first part of this thesis, the theory of detecting and quantifying general (linear and nonlinear) causal interactions is developed, alongside the important practical issues of estimation. To quantify causal interactions, a physically motivated, information-theoretic formalism is introduced. The formalism is extensively tested numerically and substantiated by rigorous mathematical results. In the second part of this thesis, the novel methods are applied to test and generate hypotheses on causal interactions in climate time series covering the 20th century up to the present. The results yield insights into the Walker circulation and the teleconnections of the ENSO system, for example with the Indian Monsoon. Further, in an exploratory step, a global surface pressure dataset is analyzed to identify key processes that drive and govern interactions in the global atmosphere. Finally, it is shown how quantifying interactions can be used to determine possible structural changes in climate dynamics, termed tipping points, and how causally driving processes can serve as optimal predictors, here applied to the prediction of ENSO.
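A heavily simplified stand-in for the kind of lag-specific conditional-independence test underlying question (i) is sketched below: a link X(t-τ) → Y(t) is accepted only if the two remain correlated after regressing out Y's own recent past. This linear partial-correlation version, with its hypothetical function names and synthetic data, is for illustration only; the thesis develops a more general information-theoretic formalism and careful estimation theory that are not reproduced here.

```python
import numpy as np
from scipy import stats

def residualize(v, conditioning):
    """Residuals of v after OLS regression on the conditioning variables."""
    Z = np.column_stack([np.ones(len(v))] + conditioning)
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ beta

def lagged_partial_corr(x, y, tau, max_lag=2):
    """Partial correlation of X(t-tau) and Y(t), given Y's own recent past."""
    t0 = max(tau, max_lag)
    past_y = [y[t0 - l:len(y) - l] for l in range(1, max_lag + 1)]
    rx = residualize(x[t0 - tau:len(x) - tau], past_y)
    ry = residualize(y[t0:], past_y)
    r, _ = stats.pearsonr(rx, ry)
    return r

# Synthetic pair: X drives Y at lag 2, Y does not drive X.
rng = np.random.default_rng(7)
n = 2000
x = np.zeros(n); y = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + rng.normal()

print("X(t-2) -> Y(t):", round(lagged_partial_corr(x, y, tau=2), 3))   # clearly nonzero
print("Y(t-2) -> X(t):", round(lagged_partial_corr(y, x, tau=2), 3))   # near zero
```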
60

遺傳模式在匯率上分析與預測之應用 / Genetic Models and Its Application in Exchange Rates Analysis and Forecasting

許毓云, Hsu, Yi-Yun Unknown Date (has links)
In time series analysis, we often find that the trend of dynamic data changes with time, and traditional model fitting cannot explain such dynamic data well. Therefore, many scholars have developed various methods for model construction. The major drawback of most of these methods is that model selection is usually influenced by personal viewpoint and experience. This paper therefore presents a new genetic-based modeling approach for nonlinear time series. The research is based on the concepts of evolutionary theory and natural selection. In order to find a leading model for the nonlinear time series, we make use of the evolutionary rule of survival of the fittest. In the genetic evolution process, the AIC (Akaike information criterion) is used as the fitness function, and the membership function of the best-fitted models is calculated as the performance index of a chromosome. An empirical example shows that the genetic model can give an efficient explanation in analyzing Taiwan exchange rates, especially when a structural change occurs.
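A stripped-down version of the idea, assuming the goal is to let a genetic algorithm pick which autoregressive lags enter the model with AIC as the fitness criterion, is sketched below. The chromosome encoding, mutation/crossover settings, and the synthetic series are illustrative choices; the membership-function weighting of best-fitted models described in the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "exchange rate" returns generated by an AR model using lags 1 and 4 only.
n, max_lag = 600, 8
y = np.zeros(n)
for t in range(max_lag, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 4] + rng.normal(scale=0.1)

def aic_of(mask):
    """Fit an AR model using only the lags switched on in the 0/1 mask; return its AIC."""
    lags = [l + 1 for l, on in enumerate(mask) if on]
    X = np.column_stack([np.ones(n - max_lag)] +
                        [y[max_lag - l:n - l] for l in lags])
    target = y[max_lag:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return len(target) * np.log(np.mean(resid ** 2)) + 2 * X.shape[1]

def evolve(pop_size=30, generations=40, p_mut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, max_lag))
    for _ in range(generations):
        fitness = np.array([aic_of(ind) for ind in pop])      # lower AIC = fitter
        parents = pop[np.argsort(fitness)[:pop_size // 2]]    # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, max_lag)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(max_lag) < p_mut                # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    best = min(pop, key=aic_of)
    return [l + 1 for l, on in enumerate(best) if on]

print("selected lags:", evolve())   # lags 1 and 4 are expected to dominate
```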
