61

Heterogeneity and aggregation in seasonal time series

Tripodis, Georgios January 2007
Seasonality is an important part of many real time series. While issues of seasonal heteroscedasticity and aggregation have been a cause of concern for data users, there has not been a great deal of theoretical research in this area. This thesis concentrates on these two issues. We consider seasonal time series with single-season heteroscedasticity. We show that when only one month has different variability from the others, there are constraints on the seasonal models that can be used: neither the dummy nor the trigonometric model is effective in modelling seasonal series with this type of variability. We suggest two models that permit single-season heteroscedasticity as a special case. We show that seasonal heteroscedasticity gives rise to a periodic autocorrelation function, and we propose a new class, periodic structural time series models (PSTSM), to deal with such periodicities. We show that PSTSM have a correlation structure equivalent to that of a periodic integrated moving average (PIMA) process. In a comparison of forecast performance for a set of quarterly macroeconomic series, PSTSM outperform periodic autoregressive (PAR) models both within and out of sample. We also consider the problem of contemporaneous aggregation of time series within the structural time series framework, and the conditions of identifiability for the aggregate series. We show that identifiability of the models for the component series is not sufficient for identifiability of the model for the aggregate series. We consider both the case where there is no estimation error and the case of modelling an unknown process; for the unknown process we provide recursions based on the Kalman filter that give the asymptotic variance of the estimated parameters.
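
The sketch below illustrates the structural time series machinery this abstract builds on: a Kalman filter for a basic local level plus dummy seasonal model in state space form. It is a minimal illustration under assumed variance parameters, not the author's PSTSM implementation; a periodic extension would let the disturbance variances in Q and H depend on the season.

```python
import numpy as np

# State space form of a local level + dummy seasonal model (quarterly data):
#   y_t = Z a_t + eps_t,  a_{t+1} = T a_t + R eta_t
# State vector: [level, s_t, s_{t-1}, s_{t-2}].
Z = np.array([[1.0, 1.0, 0.0, 0.0]])
T = np.array([[1.0,  0.0,  0.0,  0.0],   # random-walk level
              [0.0, -1.0, -1.0, -1.0],   # seasonal dummies sum to zero + noise
              [0.0,  1.0,  0.0,  0.0],
              [0.0,  0.0,  1.0,  0.0]])
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
Q = np.diag([0.10, 0.05])   # illustrative level / seasonal disturbance variances
H = 1.0                     # illustrative observation noise variance

def kalman_filter(y):
    """Prediction-form Kalman filter; returns one-step-ahead forecasts of y."""
    a = np.zeros((4, 1))
    P = np.eye(4) * 1e4                      # diffuse-ish initialisation
    preds = []
    for yt in y:
        v = yt - (Z @ a).item()              # innovation
        F = (Z @ P @ Z.T).item() + H         # innovation variance
        K = T @ P @ Z.T / F                  # Kalman gain
        preds.append(yt - v)                 # one-step forecast Z a
        a = T @ a + K * v
        P = T @ P @ (T - K @ Z).T + R @ Q @ R.T
    return np.array(preds)

rng = np.random.default_rng(0)
y = 10 + np.tile([2.0, -1.0, 0.5, -1.5], 25) + rng.normal(size=100)
print(kalman_filter(y)[-4:])   # forecasts pick up the seasonal pattern
```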
62

Higher order asymptotic theory for nonparametric time series analysis and related contributions

Velasco, Carlos January 1996
We investigate higher-order asymptotic theory in nonparametric time series analysis. The aim of these techniques is to approximate the finite-sample distribution of estimates and test statistics. This is especially relevant for smoothed nonparametric estimates in the presence of autocorrelation, which have slow rates of convergence, so that inference rules based on first-order asymptotic approximations may not be very precise. First we review the literature on autocorrelation-robust inference and higher-order asymptotics in time series. We evaluate the effect of the nonparametric estimation of the variance in the studentization of least squares estimates in linear regression models by means of asymptotic expansions. Then we obtain an Edgeworth expansion for the distribution of nonparametric estimates of the spectral density and the studentized sample mean. Only local smoothness conditions on the spectrum of the time series are assumed, so long-range dependent behaviour is allowed at remote frequencies, not necessarily only at zero frequency but possibly at cyclical and seasonal ones. The nonparametric methods described rely on a bandwidth or smoothing number; we propose a cross-validation algorithm for the choice of the bandwidth that is optimal, in a mean square sense, at a single point, without restrictions on the spectral density at other frequencies. We then focus on the performance of the spectral density estimates around a singularity due to long-range dependence and obtain their asymptotic distribution in the Gaussian case. Semiparametric inference procedures about the long memory parameter based on these nonparametric estimates are justified under mild conditions on the distribution of the observed time series. Using a fixed average of periodogram ordinates, we also prove the consistency of the log-periodogram regression estimate of the memory parameter for linear but non-Gaussian time series.
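
As a rough illustration of the log-periodogram regression the abstract closes with, the sketch below computes a GPH-type estimate of the memory parameter d by regressing log periodogram ordinates on minus twice the log frequency over the first m Fourier frequencies. The bandwidth choice m = n^(1/2) and the white-noise test series are assumptions for the example, not the thesis's settings.

```python
import numpy as np

def log_periodogram_d(x, m):
    """Log-periodogram (GPH-type) estimate of the memory parameter d.

    Regress log I(lambda_j) on -2*log(lambda_j) over the first m Fourier
    frequencies; the OLS slope estimates d. The bandwidth m is a user
    choice (here an assumption, e.g. m ~ n**0.5).
    """
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)             # periodogram ordinates
    X = -2 * np.log(lam)
    X = X - X.mean()                                   # demean the regressor
    return np.sum(X * np.log(I)) / np.sum(X ** 2)      # OLS slope

# Sanity check on white noise (true d = 0):
rng = np.random.default_rng(1)
x = rng.normal(size=2048)
print(log_periodogram_d(x, m=int(len(x) ** 0.5)))      # close to 0
```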
63

Robust estimation of multivariate location and scatter with application to financial portfolio selection

Costanzo, Simona January 2004
The thesis studies robust methods for estimating the location and scatter of multivariate distributions and contributes to the development of some aspects of the detection of multiple outliers. A variety of methods have been designed for detecting single-point outliers; when applied to groups of contaminated data, these lead to problems of "masking", that is, an outlier appearing as "good" data. Robust high-breakdown estimators overcome the masking effect while also allowing a high tolerance of "bad" data. The Minimum Volume Ellipsoid (MVE) and the Minimum Covariance Determinant (MCD) estimators are the most widely used high-breakdown estimators. The central problem when identifying an anomaly is setting a decision rule. The exact distributions of the MCD and MVE are not known, implying that diagnostics constructed as functions of these robust estimates also have unknown distributions. Single-point outliers can be recognized using Mahalanobis distances; multivariate outliers are detected by robust (MCD- or MVE-based) distances of Mahalanobis type. The thesis obtains the small-sample distribution of the former in a simpler way than the existing proof in the literature. Furthermore, empirical evidence shows the need for a correction factor to improve the approximation to the expected distribution, and some graphical devices are suggested to enhance the results. One limiting aspect of the literature on robustness is the lack of real-data applications beyond the standard literature examples. A personal interest in financial subjects led the thesis to consider applications in this area, with particular attention to methods for the optimal selection of financial portfolios. Mean-variance portfolio theory selects the assets which maximize the return and minimize the risk of the investment using maximum likelihood estimates (MLE). However, MLE are known to be sensitive to relatively small fractions of outliers, and a wide financial literature provides evidence of the non-Gaussian distribution of stock returns. These reasons motivate the robust portfolio selection model proposed in the thesis.
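
A minimal sketch of the robust-distance diagnostic described above, using scikit-learn's MCD implementation. The simulated data, contamination, and chi-square cutoff are illustrative assumptions; the thesis's point is precisely that the exact finite-sample distribution of such diagnostics is unknown and benefits from a correction factor.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

# Robust (MCD-based) Mahalanobis-type distances for multivariate outlier
# detection: the MCD location/scatter resist masking by the outliers
# themselves, so planted anomalies stand out in the distances.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
X[:10] += 4.0                            # plant a small cluster of outliers

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                  # squared robust distances
cutoff = chi2.ppf(0.975, df=X.shape[1])  # rough chi-square threshold
print(np.where(d2 > cutoff)[0])          # flags the planted outliers
```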
64

Non-Gaussian structural time series models

Fernandes, Cristiano Augusto Coelho January 1991
This thesis aims to develop a class of state space models for non-Gaussian time series. Our models are based on distributions of the exponential family, such as the Poisson, the negative binomial, the binomial and the gamma. In these distributions the mean is allowed to change over time through a mechanism which mimics a random walk. By adopting a closed sampling analysis we are able to derive finite-dimensional filters, similar to the Kalman filter, which are then used to construct the likelihood function and to make forecasts of future observations. In fact, for all the specifications considered here, we show that the predictions give rise to schemes based on an exponentially weighted moving average (EWMA). The models may be extended to include explanatory variables via the kind of link functions that appear in GLIM models, which enables non-stochastic slope and seasonal components to be included. The Poisson, negative binomial and bivariate Poisson models are illustrated with applications to real data. Monte Carlo experiments are also conducted to investigate the properties of the maximum likelihood estimators and the power of a post-sample predictive test developed for the Poisson model.
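
To make the EWMA connection concrete, here is a small sketch of a conjugate gamma-Poisson filter in the spirit described: discounting the gamma posterior each step and updating it with the new count yields one-step forecasts that are exponentially weighted moving averages of past observations. The discount factor omega and the simulated data are assumptions for illustration, not the thesis's specification.

```python
import numpy as np

def gamma_poisson_forecasts(y, omega=0.8):
    """One-step forecasts from a discounted conjugate gamma-Poisson filter."""
    a, b = 1.0, 1.0                  # weak initial gamma prior
    preds = []
    for yt in y:
        a, b = omega * a, omega * b  # discount: older counts lose weight
        preds.append(a / b)          # one-step-ahead forecast mean
        a, b = a + yt, b + 1.0       # conjugate Poisson-gamma update
    return np.array(preds)

rng = np.random.default_rng(3)
y = rng.poisson(5.0, size=50)
print(gamma_poisson_forecasts(y)[-5:])   # hovers near the true mean 5
```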
65

Dynamic structural equation models : estimation and inference

Ciraki, Dario January 2007
The thesis focuses on the estimation of dynamic structural equation models in which some or all variables might be unobservable (latent) or measured with error. Moreover, we consider the situation where latent variables can be measured with multiple observable indicators and where lagged values of latent variables might be included in the model. This leads to a dynamic structural equation model (DSEM), which can be viewed as a dynamic generalisation of the structural equation model (SEM). Taking the mismeasurement problem into account aims at reducing or eliminating the errors-in-variables bias, and hence at minimising the chance of obtaining incorrect coefficient estimates; such methods can also be used to improve the measurement of latent variables and to obtain more accurate forecasts. The thesis aims to contribute to the literature in four areas. Firstly, we propose a unifying theoretical framework for the analysis of dynamic structural equation models. Secondly, we provide analytical results for both panel and time series DSEM models, along with software implementation suggestions. Thirdly, we propose non-parametric estimation methods that can also be used for obtaining starting values in maximum likelihood estimation. Finally, we illustrate these methods on several real-data examples, demonstrating the capabilities of the currently available software as well as the importance of good starting values.
66

State space models : univariate representation of a multivariate model, partial interpolation and periodic convergence

Mavrakakis, Miltiadis C. January 2008
This thesis examines several issues that arise from the state space representation of a multivariate time series model. Original proofs of the algorithms for obtaining interpolated estimates of the state and observation vectors from the Kalman filter smoother (KFS) output are presented, particularly for formulae for which rigorous proofs do not appear in the existing literature. The notion of partially interpolated estimates is introduced and algorithms for constructing these estimates are established. An existing method for constructing a univariate representation (UR) of a multivariate model is developed further and applied to a wider class of state space models, and the computational benefits of filtering and smoothing with the UR, rather than the original multivariate model, are discussed. The UR KFS recursions produce useful quantities that cannot be obtained from the original multivariate model; the mathematical properties of these quantities are examined, and the process of reconstructing the original multivariate KFS output is demonstrated by reversing the UR process. A time-invariant state space form (SSF) is proposed for models with periodic system matrices. This SSF is used to explore the novel concept of periodic convergence of the KFS, and necessary and sufficient conditions for periodic convergence are asserted and proved. The techniques developed are then applied to the problem of missing-value estimation in long multivariate temperature series, where gaps in the historical records are a hindrance to the study of weather risk and the pricing of weather derivatives, as well as to the development of climate-dependent models. The proposed model-based techniques are compared to existing methods in the field, as well as to an original ad hoc approach, and their relative performance is assessed on data from weather stations in the state of Texas, for daily maximum temperatures from 1950 to 2001.
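
As a toy illustration of model-based missing-value interpolation (far simpler than the thesis's UR machinery), the sketch below filters and smooths a scalar local level model, skipping the measurement update at missing observations. The variances q and h and the short series are assumptions.

```python
import numpy as np

def interpolate_local_level(y, q=0.1, h=1.0):
    """Fill gaps in y with a local level model: filter forward, smooth back."""
    n = len(y)
    a, P = np.zeros(n + 1), np.zeros(n + 1)
    a[0], P[0] = np.nanmean(y), 1e4              # diffuse-ish start
    af, Pf = np.zeros(n), np.zeros(n)            # filtered moments
    for t in range(n):
        if np.isnan(y[t]):                       # missing: carry the prediction
            af[t], Pf[t] = a[t], P[t]
        else:                                    # observed: measurement update
            K = P[t] / (P[t] + h)
            af[t] = a[t] + K * (y[t] - a[t])
            Pf[t] = (1 - K) * P[t]
        a[t + 1], P[t + 1] = af[t], Pf[t] + q    # random-walk prediction
    s = af.copy()
    for t in range(n - 2, -1, -1):               # fixed-interval smoother
        J = Pf[t] / (Pf[t] + q)
        s[t] = af[t] + J * (s[t + 1] - af[t])
    return s

y = np.array([20.1, 21.3, np.nan, np.nan, 23.0, 22.4, np.nan, 21.8])
print(interpolate_local_level(y).round(2))       # smooth estimates fill the gaps
```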
67

The Bayesian and the realist : friends or foes?

Farmakis, Eleftherios January 2008
The main purpose of my thesis is to bring together two seemingly unrelated topics in the philosophy of science and extract the philosophical consequences of this exercise. The first topic is Bayesianism, a well-developed and popular probabilistic theory of confirmation. The second is scientific realism, the thesis that we have good reason to believe that our best scientific theories are (approximately) true. It seems natural to assume that a sophisticated probabilistic theory of confirmation is the most appropriate framework for the treatment of the issue of scientific realism. Despite this intuition, however, the bulk of the literature is conspicuous for its failure to apply the Bayesian apparatus when discussing scientific realism, and on the rare occasions that this has been attempted, the outcomes have been strikingly negative. In my thesis I systematise and critically examine this segmented literature in order to investigate whether, and how, Bayesianism and scientific realism can be reconciled. I argue for the following claims: 1) that those realists who claim that Bayesians lack a proper notion of 'theory acceptance' have misunderstood the nature of Bayesianism as a reductive account of 'theory acceptance'; 2) that it is possible to reconstruct most of the significant alternative positions in the realism debate using this new account of 'theory acceptance'; 3) that Bayesianism is best seen as a general framework within which the standard informal arguments for and against realism become transparent, thus greatly clarifying the force of the realist argument; 4) that a Bayesian reconstruction does not commit one to any particular position as ultimately the right one; and 5) that this result does not amount to succumbing to relativism. I conclude that the attempt to apply Bayesianism to the realism issue enjoys considerable success, though not enough to resolve the dispute definitively.
68

Factor modeling for high dimensional time series

Bathia, Neil January 2009
Chapter 1: Identifying the finite dimensionality of curve time series. The curve time series framework provides a convenient vehicle for modelling some types of nonstationary time series in a stationary framework. We propose a new method to identify the finite dimensionality of curve time series based on the autocorrelation between different curves. Using the duality between the row and column subspaces of a data matrix, we show that the practical implementation of our methodology reduces to the eigenanalysis of a real matrix, and that determining the dimensionality is equivalent to identifying the number of non-zero eigenvalues of this same matrix. For this purpose we propose a simple bootstrap test. Asymptotic properties of the methodology are investigated, and it is illustrated with simulation studies as well as an application to IBM intraday return densities.

Chapter 2: Methodology and convergence rates for factor modelling of multiple time series. An important task in modelling multiple time series is to obtain some form of dimension reduction. We tackle this problem using a factor model in which the estimation of the factor loading space is constructed via the eigenanalysis of a matrix that is a simple function of the sample autocovariance matrices; the number of factors is then the number of "non-zero" eigenvalues of this matrix. We use the term "non-zero" loosely because in practice it is unlikely that any eigenvalues will be exactly zero. However, our theoretical results suggest that the sample eigenvalues whose population counterparts are zero are "super-consistent" (i.e. they converge to zero at rate n, where n denotes the sample size), whereas the sample eigenvalues whose population counterparts are non-zero converge at the ordinary parametric rate of root-n. This striking result is supported by simulation evidence, and its consequences for inference are discussed. In addition, we study the properties of the factor loading space under very general conditions (including possible non-stationarity), and a simple white noise test for empirically determining the number of non-zero eigenvalues is proposed and theoretically justified. We also provide a heuristic threshold-based estimator for the number of factors and prove that it is consistent provided the threshold is chosen to be of an appropriate order. Finally we conclude with an analysis of some implied volatility datasets.
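
A rough sketch of the eigenanalysis described in Chapter 2, on assumed simulated data: accumulate outer products of sample autocovariance matrices up to a lag cap k0 and inspect the eigenvalue spectrum. With two persistent factors, two eigenvalues dominate and the rest sit near zero. The lag cap k0 and the simulated model are illustrative, not the thesis's choices.

```python
import numpy as np

def factor_eigenvalues(Y, k0=5):
    """Eigenvalues of M = sum_{k=1..k0} S(k) S(k)', S(k) the lag-k autocovariance."""
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)
    M = np.zeros((p, p))
    for k in range(1, k0 + 1):
        S = Yc[k:].T @ Yc[:-k] / n       # lag-k sample autocovariance
        M += S @ S.T                     # accumulate, sign-free
    return np.linalg.eigvalsh(M)[::-1]   # eigenvalues, largest first

rng = np.random.default_rng(4)
n, p = 500, 8
f = np.cumsum(rng.normal(size=(n, 2)), axis=0) * 0.1   # two persistent factors
L = rng.normal(size=(p, 2))                            # factor loadings
Y = f @ L.T + rng.normal(size=(n, p))                  # factor model + noise
print(factor_eigenvalues(Y).round(3))   # two dominant eigenvalues, rest near zero
```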
69

Hierarchical and multidimensional smoothing with applications to longitudinal and mortality data

Biatat, Viani Aime Djeundje January 2011
This thesis is concerned with two themes: (a) smooth mixed models in hierarchical settings, with applications to grouped longitudinal data, and (b) multi-dimensional smoothing, with reference to the modelling and forecasting of mortality data. In part (a), we examine a popular method of smoothing models for longitudinal data, which consists of expressing the model as a mixed model. This approach is particularly appealing when truncated polynomials are used as a basis for the smoothing, as the mixed model representation is almost immediate. We show that this approach can lead to severely biased estimates of the group and subject effects, and to confidence intervals with undesirable properties. We use penalization to investigate an alternative approach with either B-spline or truncated polynomial bases, and show that this new approach does not suffer from the same defects. Our models are defined in terms of B-splines or truncated polynomials with appropriate penalties, but we re-parametrize them as mixed models, which gives access to fitting with standard procedures. In part (b), we first demonstrate the adverse impact of over-dispersion (and heterogeneity) in the modelling of mortality data, and describe the resolution of this problem through a two-stage smoothing of mean and dispersion effects via penalized quasi-likelihood. Next, we propose a method for the joint modelling of several mortality tables (e.g. male and female mortality in demography, or mortality by lives and by amounts in life insurance) and describe how this joint approach leads to the classification and simple comparison of these tables. Finally, we deal with the smooth modelling of mortality improvement factors, which are two-dimensional correlated data; here we first form a basic flexible model incorporating the correlation structure, and then extend this model to cope with cohort and period shock effects.
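
The penalized B-spline smoothing at the core of both themes can be sketched in a few lines: a B-spline basis with a difference penalty on the coefficients (a P-spline in the Eilers-Marx style). The basis size, penalty weight, and toy mortality-like data are assumptions for illustration; the thesis's mixed-model re-parametrization and quasi-likelihood machinery are not shown.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=20, lam=10.0, degree=3):
    """P-spline smoother: B-spline basis + 2nd-order difference penalty."""
    # Equally spaced knots covering the data range, padded at both ends
    knots = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    knots = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    B = BSpline.design_matrix(x, knots, degree).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)     # 2nd-order differences
    theta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ theta                                 # fitted smooth values

# Toy "mortality-like" curve: roughly log-linear rates in age, plus noise
rng = np.random.default_rng(5)
age = np.linspace(20, 90, 71)
y = -9 + 0.09 * age + rng.normal(scale=0.15, size=age.size)
print(pspline_fit(age, y)[:5].round(3))
```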
70

Contact processes on the integers

Tzioufas, Achilleas January 2011
The three-state contact process is the modification of the contact process in which first infections occur at a different rate from subsequent ones. Chapters 2 and 3 consider the three-state contact process on the integers (taken as the set of sites) with nearest-neighbour interaction (that is, edges are placed between sites at Euclidean distance one apart). The results in Chapter 2 illustrate the regularity of the growth of the process under the reverse-immunization assumption, while Chapter 3 gives two results regarding the convergence rates of the process. Chapter 4 is concerned with the i.i.d. behaviour of the right endpoint of contact processes on the integers with symmetric, translation-invariant interaction. Finally, Chapter 5 is concerned with two monotonicity properties of the three-state contact process.
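
A crude discrete-time sketch of the three-state dynamics, not the thesis's construction: each site is never-infected, infected, or previously infected; infected sites recover at rate 1, and each infected neighbour infects a susceptible site at a rate that depends on whether the site has been infected before. The rates lam and mu, the time step dt, and the initial condition are all assumptions.

```python
import numpy as np

def simulate(n=201, lam=2.0, mu=2.0, steps=2000, dt=0.01, seed=6):
    """Three-state contact process on a line segment (Euler time-stepping).

    States: 0 = never infected, 1 = infected, 2 = previously infected.
    First infections occur at rate mu per infected neighbour; reinfections
    at rate lam; recovery at rate 1.
    """
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)
    state[n // 2] = 1                          # single initial infection
    for _ in range(steps):
        infected = state == 1
        pressure = np.zeros(n)
        pressure[:-1] += infected[1:]          # infective neighbour to the right
        pressure[1:] += infected[:-1]          # infective neighbour to the left
        rate = np.where(state == 0, mu, lam) * pressure
        infect = (state != 1) & (rng.random(n) < rate * dt)
        recover = infected & (rng.random(n) < 1.0 * dt)
        state[infect] = 1
        state[recover] = 2
    return state

s = simulate()
print("infected sites:", np.flatnonzero(s == 1))
```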
