61

Simultaneous prediction intervals for autoregressive integrated moving average models in the presence of outliers.

January 2001
Cheung Tsai-Yee Crystal. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 83-85). Abstracts in English and Chinese.

Contents:
Chapter 1  Introduction (p.1)
  1.1  The Importance of Forecasting (p.1)
Chapter 2  Methodology (p.5)
  2.1  Basic Idea (p.5)
  2.2  Outliers in Time Series (p.9)
    2.2.1  One Outlier Case (p.9)
    2.2.2  Two Outliers Case (p.17)
    2.2.3  General Case (p.22)
    2.2.4  Time Series Parameters are Unknown (p.24)
  2.3  Iterative Procedure for Detecting Outliers (p.25)
    2.3.1  General Procedure for Detecting Outliers (p.25)
  2.4  Methods of Constructing Simultaneous Prediction Intervals (p.27)
    2.4.1  The Bonferroni Method (p.28)
    2.4.2  The Exact Method (p.28)
Chapter 3  An Illustrative Example (p.29)
  3.1  Case A (p.31)
  3.2  Case B (p.32)
  3.3  Comparison (p.33)
Chapter 4  Simulation Study (p.36)
  4.1  Generate AR(1) with an Outlier (p.36)
    4.1.1  Case A (p.38)
    4.1.2  Case B (p.40)
  4.2  Simulation Results I (p.42)
  4.3  Generate AR(1) with Two Outliers (p.45)
  4.4  Simulation Results II (p.46)
  4.5  Concluding Remarks (p.47)
Bibliography (p.83)
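For orientation only: the chapter listing above names the Bonferroni method for constructing simultaneous prediction intervals around AR(1) forecasts. A minimal Python sketch of that construction follows; the least-squares fit, Gaussian forecast errors, function name, and the absence of any outlier adjustment are assumptions of this sketch, not the thesis's procedure.

import numpy as np
from scipy.stats import norm

def ar1_bonferroni_intervals(series, horizons=4, alpha=0.05):
    # Bonferroni simultaneous prediction intervals for an AR(1):
    # split alpha over the h horizons so the intervals hold jointly
    # with probability at least 1 - alpha.  No outlier adjustment here.
    y = np.asarray(series, dtype=float)
    mu = y.mean()
    x = y - mu
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])   # least-squares AR(1) fit
    resid = x[1:] - phi * x[:-1]
    sigma2 = resid.var(ddof=1)

    z = norm.ppf(1.0 - alpha / (2 * horizons))              # Bonferroni-adjusted quantile
    intervals = []
    for h in range(1, horizons + 1):
        point = mu + phi ** h * (y[-1] - mu)                # h-step-ahead point forecast
        var_h = sigma2 * np.sum(phi ** (2 * np.arange(h)))  # forecast error variance
        half = z * np.sqrt(var_h)
        intervals.append((point - half, point + half))
    return intervals
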
62

Time Series Modeling with Shape Constraints

Zhang, Jing January 2017
This thesis focuses on the development of semiparametric estimation methods for a class of time series models using shape constraints. Many existing time series models assume that the noise follows a known parametric distribution; typical examples are the Gaussian and t distributions. The model parameters are then estimated by maximizing the resulting likelihood function. As an example, autoregressive moving average (ARMA) models (Brockwell and Davis, 2009) assume a Gaussian noise sequence and are estimated under the causal-invertible constraint by maximizing the Gaussian likelihood. Although the same estimates can also be used in the causal-invertible non-Gaussian case, they are not asymptotically optimal (Rosenblatt, 2012). Moreover, for the noncausal/noninvertible cases, the Gaussian likelihood estimation procedure is not applicable, since second-order based methods cannot distinguish between causal-invertible and noncausal/noninvertible models (Brockwell and Davis, 2009). As a result, many estimation methods for noncausal/noninvertible ARMA models assume the noise follows a known non-Gaussian distribution, such as a Laplace or a t distribution. To relax this distributional assumption and allow noncausal/noninvertible models, we borrow ideas from nonparametric shape-constrained density estimation and propose a semiparametric estimation procedure for general ARMA models by projecting the underlying noise distribution onto the space of log-concave measures (Cule and Samworth, 2010; Dümbgen et al., 2011). We show that the maximum likelihood estimators in this semiparametric setting are consistent. In fact, the MLE is robust to the misspecification of log-concavity in cases where the true distribution of the noise is close to its log-concave projection. We derive a lower bound for the best asymptotic variance of regular estimators at rate sqrt(n) for AR models and construct a semiparametrically efficient estimator.

We also consider modeling time series of counts with shape constraints. Many of the formulated models for count time series are expressed via a pair of generalized state-space equations. In this set-up, the observation equation specifies the conditional distribution of the observation Yt at time t given a state variable Xt. For count time series, this conditional distribution is usually specified as coming from a known parametric family such as the Poisson or the Negative Binomial distribution. To relax this formal parametric framework, we introduce a concave shape constraint into the one-parameter exponential family; this essentially amounts to assuming that the reference measure is log-concave. In this fashion, we are able to extend the class of observation-driven models studied in Davis and Liu (2016). Under this formulation, there exists a stationary and ergodic solution to the state-space model. In this new modeling framework, we consider the inference problem of estimating both the parameters of the mean model and the log-concave function corresponding to the reference measure. We then compute and maximize the likelihood function over both the parameters associated with the mean function and the reference measure, subject to a concavity constraint. The estimators of the mean function and the conditional distribution are shown to be consistent and to perform well compared with a fully parametric model specification. The finite-sample behavior of the estimators is studied via simulation, and two empirical examples are provided to illustrate the methodology.
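As a fully parametric point of reference for the observation-driven count models described above, the sketch below fits a Poisson model with a linear mean recursion by maximum likelihood. The recursion, starting value, parameter constraints, and optimizer settings are illustrative assumptions; the thesis's contribution, replacing the Poisson reference measure by a log-concave one, is not implemented here.

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_neg_loglik(params, y):
    # Mean recursion: lambda_t = omega + alpha * y_{t-1} + beta * lambda_{t-1}
    # (a standard observation-driven Poisson specification).
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e12                            # keep the recursion positive and stable
    lam = np.empty(len(y))
    lam[0] = y.mean()                          # illustrative starting value
    for t in range(1, len(y)):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

def fit_poisson_model(y):
    y = np.asarray(y, dtype=float)
    start = np.array([0.5 * y.mean(), 0.3, 0.2])
    res = minimize(poisson_neg_loglik, start, args=(y,), method="Nelder-Mead")
    return res.x                               # (omega, alpha, beta)
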
63

Nonparametric methods in financial time series analysis

Hong, Seok Young January 2018
The fundamental objective of the analysis of financial time series is to unveil the random mechanism, i.e. the probability law, underlying financial data. The effort to identify the truth that governs the observations involves proposing and estimating reasonable statistical models that explain the empirical features of the data well. This thesis develops some new nonparametric tools that can be exploited in this context; the efficacy and validity of their use are supported by computational advances and the surging availability of large and complex ('big') data sets.

Chapter 1 investigates the conditional first-moment properties of financial returns. We propose multivariate extensions of the popular Variance Ratio (VR) statistic, aiming to test linear predictability of returns and weak-form market efficiency. We construct asymptotic distribution theories for the statistics and scalar functions thereof under the null hypothesis of no predictability. The imposed assumptions are weaker than those widely adopted in the literature and, in our view, more credible with regard to the data generating process we expect for stock returns. It is also shown that the limit theories can be extended to the long-horizon and large-dimension cases, and to allow for a time-varying risk premium. Our methods are applied to CRSP weekly returns from 1962 to 2013; the joint tests of the multivariate hypothesis reject the null at the 1% level for all horizons considered.

Chapter 2 is about nonparametric estimation of conditional moments. We propose a local constant type estimator that operates with an infinite number of conditioning variables; this enables direct estimation of many objects of econometric interest that depend upon the infinite past. We show pointwise and uniform consistency of the estimator and establish its asymptotic normality in various static and dynamic regression contexts. The optimal rate of estimation turns out to be of logarithmic order, and the precise rate depends on the Lambert W function, the smoothness of the regression operator, and the dependence of the data in a non-trivial way. The theory is applied to investigate the intertemporal risk-return relation for the aggregate stock market. We report an overall positive risk-return relation on S&P 500 daily data from 1950 to 2017, and find evidence of strong time variation and counter-cyclical behaviour in risk aversion.

Lastly, Chapter 3 concerns nonparametric volatility estimation with high-frequency time series. While data observed at a finer time scale than daily provide rich information, their distinctive empirical properties bring new challenges to their analysis. We propose a Fourier-domain estimator of multivariate ex-post volatility that is robust to two major hurdles in high-frequency finance: asynchronicity in observations and the presence of microstructure noise. Asymptotic properties are derived under mild conditions. Simulation studies show that our method outperforms time-domain estimators when two assets with different liquidity are traded asynchronously.
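A univariate version of the variance ratio statistic used in Chapter 1 can be written in a few lines; the sketch below is only the textbook Lo-MacKinlay form, while the multivariate extensions, weak assumptions, and long-horizon asymptotics are the chapter's actual contributions and are not reproduced here.

import numpy as np

def variance_ratio(returns, q):
    # VR(q): variance of overlapping q-period returns divided by
    # q times the variance of one-period returns.
    # Values near 1 are consistent with no linear predictability.
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    var_1 = np.dot(r, r) / len(r)
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    var_q = np.dot(rq, rq) / len(rq)
    return var_q / (q * var_1)
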
64

A new approach of classification of time series database.

January 2011
Chan, Hon Kit. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 57-59). Abstracts in English and Chinese.

Contents:
Chapter 1  Introduction (p.1)
  1.1  Cluster Analysis in Time Series (p.1)
  1.2  Dissimilarity Measure (p.2)
    1.2.1  Euclidean Distance (p.3)
    1.2.2  Pearson's Correlation Coefficient (p.3)
    1.2.3  Other Measure (p.4)
  1.3  Summary (p.5)
Chapter 2  Algorithm and Methodology (p.8)
  2.1  Algorithm and Methodology (p.8)
  2.2  Illustrative Examples (p.14)
Chapter 3  Simulation Study (p.20)
  3.1  Simulation Plan (p.20)
  3.2  Measure of Performance (p.24)
  3.3  Simulation Results (p.27)
  3.4  Results of k-means Clustering (p.33)
Chapter 4  Application on Gene Expression (p.37)
  4.1  Dataset (p.37)
  4.2  Parameter Settings (p.38)
  4.3  Results (p.38)
Chapter 5  Conclusion and Further Research (p.55)
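The dissimilarity measures named in Chapter 1 and the k-means comparison of Section 3.4 can be illustrated with a short sketch. The use of scipy and scikit-learn, and the choice of 1 - correlation as the correlation-based dissimilarity, are assumptions of this sketch rather than the thesis's own algorithm.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def dissimilarity_matrices(X):
    # X has one time series per row, all of equal length.
    d_euclidean = squareform(pdist(X, metric="euclidean"))   # cf. Section 1.2.1
    d_correlation = 1.0 - np.corrcoef(X)                     # cf. Section 1.2.2
    return d_euclidean, d_correlation

def kmeans_labels(X, k):
    # Baseline k-means on the raw series, as in the comparison of Section 3.4.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(np.asarray(X))
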
65

Time series analysis of beef price spreads

Mukhebi, Adrian W January 2011
Digitized by Kansas Correctional Industries
66

The development and validation of a fuzzy logic method for time-series extrapolation

Plouffe, Jeffrey Stewart. January 2005
Thesis (Ph. D.)--University of Rhode Island, 2005. Typescript. Includes bibliographical references (v. 2: leaves 582-593).
67

Bootstrap procedures for dynamic factor analysis

Zhang, Guangjian, January 2006
Thesis (Ph. D.)--Ohio State University, 2006. Title from first page of PDF file. Includes bibliographical references (p. 110-114).
68

Kernel-based Copula Processes

Ng, Eddie Kai Ho 22 February 2011
The field of time-series analysis has made important contributions to a wide spectrum of applications such as tide-level studies in hydrology, natural resource prospecting in geostatistics, speech recognition, weather forecasting, financial trading, and economic forecasting and analysis. Nevertheless, the analysis of the non-Gaussian and non-stationary features of time series remains challenging for current state-of-the-art models. This thesis proposes an innovative framework that leverages the theory of copulas, combined with a probabilistic framework from the machine learning community, to produce a versatile tool for multiple time-series analysis. I coined this new model Kernel-based Copula Processes (KCPs). Under the proposed framework, various idiosyncrasies can be modeled compactly via a kernel function for each individual time series, and long-range dependency can be captured by a copula function. The copula function separates the marginal behavior and serial dependency structures, thus allowing them to be modeled separately and with much greater flexibility. Moreover, the codependent structure of a large number of time series with potentially vastly different characteristics can be captured in a compact and elegant fashion through the notion of a binding copula. This feature allows a highly heterogeneous model to be built, breaking free from the homogeneity limitation of most conventional models. KCPs have demonstrated superior predictive power when used to forecast a multitude of data sets from meteorological and financial areas. Finally, the versatility of the KCP model is exemplified by its successful application, unaltered, to non-trivial classification problems.
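The separation of marginal behaviour from cross-series dependence that a binding copula provides can be illustrated with a much simpler stand-in: empirical-CDF transforms for the margins and a Gaussian copula correlation estimated from normal scores. This sketch is an assumption-laden simplification; the kernel-based modelling of each series' own dynamics, which is the heart of the KCP model, is not implemented here.

import numpy as np
from scipy.stats import norm, rankdata

def gaussian_binding_copula(series_matrix):
    # series_matrix: shape (T, m), one column per time series.
    # Margins are handled by the probability integral transform (mid-ranks),
    # cross-series dependence by a correlation matrix of the normal scores.
    X = np.asarray(series_matrix, dtype=float)
    T = X.shape[0]
    U = (rankdata(X, axis=0) - 0.5) / T      # approximately Uniform(0,1) margins
    Z = norm.ppf(U)                          # normal scores
    R = np.corrcoef(Z, rowvar=False)         # Gaussian copula correlation estimate
    return U, R
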
69

Financial time series analysis

Yin, Jiang Ling January 2011
University of Macau, Faculty of Science and Technology, Department of Computer and Information Science
