221

An evaluation of a methodology for the analysis of time series behavioral data /

Reid, Richard Alan January 1970 (has links)
No description available.
222

The fast Fourier transform and the spectral analysis of stationary time series /

Nobile, Marc January 1979 (has links)
No description available.
223

An assessment of an alternative method of ARIMA model identification /

Rivet, Michel, 1951- January 1982 (has links)
No description available.
224

Improving Separation of Signals from Multiple Physical Quantities Detected by Sensor Arrays

Morgan, Sarah Elizabeth 31 May 2022 (has links)
Modern array sensing systems, such as distributed fiber optic sensing, are used in many applications and may record a mixture of responses to multiple physical quantities. In these applications, it may be helpful to separate this mixture of responses into the signals resulting from the individual sources. This is similar to the cocktail party problem addressed by Independent Component Analysis (ICA), in which gradient ascent and fixed-point iteration optimization algorithms are used to achieve the separation. We then apply the problem setup from ICA to mixed signals resulting from a sensor array, with the goal of maintaining coherence across the resulting spatial arrays. We propose a new post-processing technique, applied after separation, to pair up the signals from different types of physical quantities based on the Symmetric Reverse Cuthill-McKee (SRCM) and Symmetric Approximate Minimum Degree (SAMD) permutations of the coherence matrix. / Master of Science / Some modern sensing systems are able to collect data resulting from different types of sources, such as vibrations and electromagnetic waves, at the same time. This means we have signals resulting from a mixture of sources. An example of one such modern sensing system is distributed fiber optic sensing, used in geoscience applications such as seismology and subsurface imaging, which measures strain along a fiber optic cable. In many applications, it may be helpful to obtain the signals from each of these sources separately, instead of having a mixture of them. We propose the use of optimization algorithms, in particular two algorithms arising from Independent Component Analysis (ICA), which seek to maximize an objective function in order to separate these signals. We then explore the changes required to these algorithms for scenarios in which we have multiple sensors, spaced some distance apart, that record signals from two different sources. We also present a method of determining which separated signals correspond to which sensors after performing signal separation.
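The fixed-point iteration mentioned in this abstract can be illustrated with a minimal FastICA-style sketch on whitened mixtures; the function and variable names below are illustrative assumptions, not taken from the thesis, and the coherence-based pairing step is not shown.

```python
# A minimal sketch of a fixed-point (FastICA-style) ICA iteration, assuming
# pre-centered mixtures X of shape (n_sources, n_samples). Illustrative only.
import numpy as np

def whiten(X):
    """Decorrelate the mixtures so unmixing vectors can be kept orthonormal."""
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(d ** -0.5) @ E.T) @ X

def fastica_fixed_point(X, n_iter=200, tol=1e-8):
    """Recover one unmixing vector per source, using deflation (Gram-Schmidt)."""
    Z = whiten(X)
    n, m = Z.shape
    W = np.zeros((n, n))
    for i in range(n):
        w = np.random.randn(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wz = w @ Z                                   # projections, shape (m,)
            # Fixed-point update with nonlinearity g(u) = tanh(u).
            w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz) ** 2).mean() * w
            # Deflate against previously found components, then renormalize.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z     # estimated source signals

# Toy usage: separate a mixture of a sinusoid and a sawtooth.
t = np.linspace(0, 10, 5000)
S = np.vstack([np.sin(3 * t), (t % 1) - 0.5])
A = np.array([[1.0, 0.6], [0.4, 1.0]])                   # mixing matrix
X = A @ S
X -= X.mean(axis=1, keepdims=True)                        # center before whitening
recovered = fastica_fixed_point(X)
```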
225

Continuous-time multivariable system identification

Cooper, David L. 14 April 2009 (has links)
In this thesis, we consider the identification of continuous-time multivariable systems. Direct methods of identification, i.e., identifying a continuous-time model directly from samples of input-output data, are considered briefly. Of primary consideration is the indirect method of identification, which can be viewed as a two-stage method. First, a discrete-time system model is identified from samples of input-output data. The next step is to transform this discrete-time model to an equivalent continuous-time representation. The classical Zero-Order Hold (ZOH) transformation is presented primarily for comparison with the derived First-Order Hold (FOH) technique. Involved in both of these methods is the transformation of the discrete-time state transition matrix to the continuous-time system matrix. A new method for this transformation is also presented. This method, along with the presented FOH transformation method, has been published in Electronics Letters, and another paper on the FOH method has been submitted as an invited paper to the 1991 IFAC Symposium on Identification. / Master of Science
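The classical ZOH back-transformation referred to above can be sketched as follows, assuming the matrix logarithm of the discrete state transition matrix is well defined and (Ad - I) is invertible; the FOH variant derived in the thesis is not reproduced here, and all names are illustrative.

```python
# A minimal sketch of the classical ZOH transformation from a discrete-time
# model (Ad, Bd), sampled with period T, back to a continuous-time model (A, B).
import numpy as np
from scipy.linalg import expm, logm

def zoh_to_continuous(Ad, Bd, T):
    """Invert the zero-order-hold discretization Ad = expm(A*T)."""
    A = logm(Ad) / T                          # state transition matrix -> system matrix
    n = Ad.shape[0]
    B = np.linalg.inv(Ad - np.eye(n)) @ A @ Bd
    return A.real, B.real

# Round-trip check on a toy second-order system.
A_true = np.array([[0.0, 1.0], [-2.0, -0.5]])
B_true = np.array([[0.0], [1.0]])
T = 0.1
Ad = expm(A_true * T)
Bd = np.linalg.inv(A_true) @ (Ad - np.eye(2)) @ B_true   # ZOH discretization (A invertible here)
A_est, B_est = zoh_to_continuous(Ad, Bd, T)
print(np.allclose(A_est, A_true), np.allclose(B_est, B_true))
```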
226

A random parameter approach to modeling and forecasting time series

Guyton, Deborah A. January 1979 (has links)
The dependence structure of a stationary time series can be described by its autocorrelation function ρ_k. Consider the simple autoregressive model of order 1: y_t = α y_{t-1} + u_t, where α ∈ (-1, 1) is a fixed constant and the u_t's are i.i.d. N(0, σ²). Here ρ_k = α^|k|, k = 0, ±1, ±2, .... It can be argued that as α ranges from 1 to -1, the behavior of the corresponding AR(1) model changes from that of a slowly changing, smooth time series to that of a rapidly changing one. This motivates a generalized AR(1) model in which the coefficient itself changes stochastically with time: y_t = α(t) y_{t-1} + u_t, where α(t) is a random function of time. This dissertation gives necessary and sufficient conditions for the existence of a mean-zero stochastic process with finite second-order moments that is a solution to the generalized AR(1) model, and gives sufficient conditions for the existence of a weakly stationary solution. The theory is illustrated with a specific model structure imposed on the random coefficient α(t): α(t) is modeled as a strictly stationary, two-state Markov chain with states taking values between 0 and 1. The resulting generalized AR(1) process is shown to be weakly stationary. Techniques are provided for estimating the parameters of this specific model and for obtaining the optimal predictor from the estimated model. / Ph. D.
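As an illustration of the class of models described above, the following is a minimal simulation sketch of a generalized AR(1) process whose coefficient follows a two-state Markov chain; the particular states and transition probabilities are illustrative assumptions, not values from the dissertation.

```python
# A minimal simulation sketch of y_t = alpha(t) * y_{t-1} + u_t, with alpha(t)
# a strictly stationary two-state Markov chain taking values in (0, 1).
import numpy as np

rng = np.random.default_rng(0)

def simulate_random_coef_ar1(n, states=(0.2, 0.9), p_stay=(0.95, 0.95), sigma=1.0):
    """Simulate a random-coefficient AR(1) with Markov-switching alpha(t)."""
    # Start the chain from its stationary distribution so alpha(t) is strictly stationary.
    p01, p10 = 1 - p_stay[0], 1 - p_stay[1]
    pi0 = p10 / (p01 + p10)
    state = 0 if rng.random() < pi0 else 1
    y = np.zeros(n)
    alpha_path = np.zeros(n)
    for t in range(1, n):
        if rng.random() > p_stay[state]:     # transition of the two-state chain
            state = 1 - state
        alpha_path[t] = states[state]
        y[t] = alpha_path[t] * y[t - 1] + sigma * rng.standard_normal()
    return y, alpha_path

y, alpha_path = simulate_random_coef_ar1(5000)
# Alternating smooth (alpha = 0.9) and fast-changing (alpha = 0.2) regimes:
print("sample lag-1 autocorrelation:", np.corrcoef(y[1:], y[:-1])[0, 1])
```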
227

Complexity as a Form of Transition From Dynamics to Thermodynamics: Application to Sociological and Biological Processes.

Ignaccolo, Massimiliano 05 1900 (has links)
This dissertation addresses the delicate problem of establishing the statistical mechanical foundation of complex processes. These processes are characterized by a delicate balance of randomness and order, and a correct paradigm for them seems to be the concept of sporadic randomness. First, we studied whether it is possible to establish a foundation for these processes on the basis of a generalized, non-extensive version of thermodynamics. A detailed account of this attempt is reported in Ignaccolo and Grigolini (2001), which shows that this approach leads to inconsistencies. It is shown that there is no need to generalize the Kolmogorov-Sinai entropy by means of a non-extensive indicator, and that the anomaly of these processes does not rest on a non-extensive nature, but rather on the fact that the transition from dynamics to thermodynamics, while still extensive, occurs on an exceptionally extended time scale. Even when the invariant distribution exists, the time necessary to reach the thermodynamic scaling regime is infinite. In the case where no invariant distribution exists, the complex system lives forever in a condition intermediate between dynamics and thermodynamics. This discovery has made it possible to create a new method of analysis of non-stationary time series, which is currently applied to problems of sociological and physiological interest.
228

Essays in High Dimensional Time Series Analysis

Yousuf, Kashif January 2019 (has links)
Due to rapid improvements in information technology, high dimensional time series datasets are frequently encountered in a variety of fields such as macroeconomics, finance, neuroscience, and meteorology. Some examples in economics and finance include forecasting low frequency macroeconomic indicators, such as GDP or the inflation rate, or financial asset returns, using a large number of macroeconomic and financial time series and their lags as possible covariates. In these settings, the number of candidate predictors (p_T) can be much larger than the number of samples (T), and accurate estimation and prediction is made possible by relying on some form of dimension reduction. Given this ubiquity of time series data, it is surprising that few works on high dimensional statistics discuss the time series setting, and even fewer have developed methods which utilize the unique features of time series data. This dissertation consists of three chapters, each of which is self-contained. The first chapter deals with high dimensional predictive regressions, which are widely used in economics and finance. However, the theory and methodology are mainly developed assuming that the model is stationary with time-invariant parameters. This is at odds with the prevalent evidence for parameter instability in economic time series. To remedy this, we present two L2 boosting algorithms for estimating high dimensional models in which the coefficients are modeled as functions evolving smoothly over time and the predictors are locally stationary. The first method uses componentwise local constant estimators as the base learner, while the second relies on componentwise local linear estimators. We establish consistency of both methods, and address the practical issues of choosing the bandwidth for the base learners and the number of boosting iterations. In an extensive application to macroeconomic forecasting with many potential predictors, we find that the benefits of modeling time variation are substantial and are present across a wide range of economic series. Furthermore, these benefits increase with the forecast horizon and with the length of the time series available for estimation. This chapter is jointly written with Serena Ng. The second chapter deals with high dimensional non-linear time series models and the topic of variable screening/targeting predictors. Rather than assume a specific parametric model a priori, this chapter introduces several model-free screening methods based on the partial distance correlation and developed specifically to deal with time dependent data. Methods are developed both for univariate models, such as nonlinear autoregressive models with exogenous predictors (NARX), and for multivariate models, such as linear or nonlinear VAR models. Sure screening properties are proved for our methods, which depend on the moment conditions and the strength of dependence in the response and covariate processes, among other factors. Finite sample performance of our methods is shown through extensive simulation studies, and we show the effectiveness of our algorithms at forecasting US market returns. This chapter is jointly written with Yang Feng. The third chapter deals with variable selection for high dimensional linear stationary time series models. It analyzes the theoretical properties of Sure Independence Screening (SIS), and its two-stage combination with the adaptive Lasso, for high dimensional linear models with dependent and/or heavy-tailed covariates and errors. We also introduce a generalized least squares screening (GLSS) procedure which utilizes the serial correlation present in the data. By utilizing this serial correlation when estimating marginal effects, GLSS is shown to outperform SIS in many cases. For both procedures, we prove two-stage variable selection consistency when combined with the adaptive Lasso.
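The screening step described in the third chapter can be illustrated with a minimal sketch of basic Sure Independence Screening, which ranks predictors by absolute marginal correlation with the response; the GLSS refinement using serial correlation is not shown, and all names below are illustrative assumptions rather than the dissertation's notation.

```python
# A minimal sketch of basic Sure Independence Screening (SIS): keep the d
# predictors with the largest absolute marginal correlation with the response.
import numpy as np

def sis_screen(X, y, d):
    """Return indices of the d columns of X most correlated (in magnitude) with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # Componentwise (marginal) correlations between each predictor and the response.
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    return np.argsort(np.abs(corr))[::-1][:d]

# Toy example: T = 100 observations, p = 500 candidate predictors, 3 active ones.
rng = np.random.default_rng(1)
T, p = 100, 500
X = rng.standard_normal((T, p))
beta = np.zeros(p)
beta[[3, 77, 250]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(T)
kept = sis_screen(X, y, d=20)   # screening step before, e.g., a second-stage adaptive Lasso fit
print(sorted(kept))
```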
229

An experiment with turning point forecasts using Hong Kong time series data

梁桂鏈, Leung, Kwai-lin. January 1989 (has links)
Published or final version / Statistics / Master / Master of Social Sciences
230

Time series modelling with application to South African inflation data

January 2009 (has links)
The research is based on financial time series modelling with special application / Thesis (M.Sc.) - University of KwaZulu-Natal, Pietermaritzburg, 2009.
