  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

High frequency and large dimension volatility

Shi, Zhangbo January 2010 (has links)
Three main issues are explored in this thesis: volatility measurement, volatility spillover and large-dimension covariance matrices. On the first question, volatility measurement, this thesis compares two newly proposed high-frequency volatility measurement models, realized volatility and realized range-based volatility, with the aim of using empirical evidence to assess whether one model is better than the other. The two models are compared across three markets, five forecast models, two data frequencies and two volatility proxies, giving sixty scenarios in total, and seven different loss functions are used in the evaluation tests, which makes the empirical results highly robust. After some simple adjustments to the original realized range-based volatility, this thesis concludes that the scaled realized range-based volatility model clearly outperforms the realized volatility model. On the second question, volatility spillover, the realized range-based volatility and realized volatility models are employed to study spillover among the S&P 500 index markets, with the aim of establishing empirically whether spillover exists between them. Volatility spillover is divided into two categories: statistically significant spillover and economically significant spillover. Economically significant spillover is defined as spillover that helps forecast the volatility of another market, and is therefore a stronger criterion than statistical significance. The findings show that, in practice, the existence of volatility spillover depends on the choice of model, the choice of volatility proxy and the parameter values used. The third and final research question involves the comparison of various large-dimension multivariate models.
The main contribution of this study is threefold. First, several well-performing multivariate volatility models are introduced by adjusting commonly used models. Second, different models and various parameter choices for these models are tested on 26 currency pairs. Third, the evaluation criteria adopted have far greater practical relevance than those used in most other papers in this area.
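As a rough, self-contained sketch of the two estimators compared in this thesis (not the author's implementation; the data are simulated here, and the five-minute sampling scheme is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# One trading day of 1-minute log prices (390 steps), constant true volatility.
true_sigma = 0.01                              # assumed daily volatility
n = 390
log_p = np.cumsum(rng.normal(0.0, true_sigma / np.sqrt(n), n))

# Realized volatility: sum of squared 5-minute returns.
coarse = log_p[::5]                            # 78 five-minute marks
rv = np.sum(np.diff(coarse) ** 2)

# Realized range-based volatility: Parkinson-scaled sum of squared
# high-low ranges within each 5-minute block.
blocks = log_p.reshape(78, 5)
ranges = blocks.max(axis=1) - blocks.min(axis=1)
rrv = np.sum(ranges ** 2) / (4.0 * np.log(2.0))

print(rv, rrv)                                 # both estimate true_sigma**2
```

Both quantities estimate the integrated variance `true_sigma**2`, but with sparse sampling the discrete high-low range understates the continuous range, which is why a scaling adjustment of the kind discussed in the thesis is needed before the two measures are compared.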
22

Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Model (CALM) Within the Markowitz Framework

McArthur, Gregory D 09 May 2014 (has links)
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization, and any estimation technique can fail to reflect current market conditions. Markowitz portfolio optimization theory, which we use as the basis for our analysis, typically assumes that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of the covariance matrix of asset returns no longer reflect current conditions; we use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness under varying market conditions. We test this technique against CALM (Covariance Adjustment for Liability Management Method), which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred over the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
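The idea of shrinking a noisy sample covariance before Markowitz optimization can be sketched as follows. The shrinkage intensity is fixed by hand here purely for illustration (Ledoit and Wolf derive an optimal intensity from the data), and the returns are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_assets = 60, 10               # deliberately few observations per asset
returns = rng.normal(0.0005, 0.01, size=(n_obs, n_assets))

# Noisy sample covariance of asset returns.
sample = np.cov(returns, rowvar=False)

# Shrink toward a scaled-identity target. The intensity delta is an assumed
# constant here; Ledoit-Wolf estimate it from the data.
delta = 0.3                            # assumed shrinkage intensity
target = (np.trace(sample) / n_assets) * np.eye(n_assets)
sigma = delta * target + (1.0 - delta) * sample

# Global minimum-variance Markowitz weights: w = S^-1 1 / (1' S^-1 1).
ones = np.ones(n_assets)
inv_sigma = np.linalg.inv(sigma)
w = inv_sigma @ ones / (ones @ inv_sigma @ ones)

print(w.round(3))                      # weights sum to 1
```

Shrinkage guarantees a well-conditioned, positive-definite matrix even when the number of observations is small relative to the number of assets, which is exactly the regime in which the raw sample covariance misleads the optimizer.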
23

Identify influential observations in the estimation of covariance matrix.

January 2000 (has links)
Wong Yuen Kwan Virginia. Thesis (M.Phil.), Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 85-86). Abstracts in English and Chinese.
Contents:
Chapter 1: Introduction (p.1)
Chapter 2: Deletion and Distance Measure (p.6)
  2.1 Mahalanobis and Cook's Distances (p.6)
  2.2 Defining New Measure Di (p.8)
  2.3 Derivation of cov(s(i) - s) (p.10)
Chapter 3: Procedures for Detecting Influential Observations (p.18)
  3.1 The One-Step Method (p.18)
    3.1.1 The Method (p.18)
    3.1.2 Design of Simulation Studies (p.19)
    3.1.3 Results of Simulation Studies (p.21)
    3.1.4 Higher Dimensional Cases (p.24)
  3.2 The Forward Search Procedure (p.24)
    3.2.1 Idea of the Forward Search Procedure (p.25)
    3.2.2 The Algorithm (p.26)
Chapter 4: Examples and Observations (p.29)
  4.1 Example 1: Brain and Body Weight Data (p.29)
  4.2 Example 2: Stack Loss Data (p.34)
  4.3 Example 3: Percentage of Cloud Cover (p.40)
  4.4 Example 4: Synthetic data of Hawkins et al. (1984) (p.46)
  4.5 Observations and Comparison (p.52)
Chapter 5: Discussion and Conclusion (p.54)
Tables (p.56); Figures (p.77); Bibliography (p.85)
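The deletion-based influence idea behind this thesis can be illustrated roughly as follows (simulated data with one planted outlier; the Frobenius norm of S_(i) - S is used here as a simple stand-in for the thesis's Di measure):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
X[0] = [8.0, 8.0, 8.0]                 # plant one gross outlier

mean = X.mean(axis=0)
S = np.cov(X, rowvar=False)
S_inv = np.linalg.inv(S)

# Squared Mahalanobis distance of each observation from the sample mean.
d2 = np.einsum('ij,jk,ik->i', X - mean, S_inv, X - mean)

# Deletion-based influence: how far the covariance matrix moves when
# observation i is left out (Frobenius norm of S_(i) - S).
influence = np.array([
    np.linalg.norm(np.cov(np.delete(X, i, axis=0), rowvar=False) - S)
    for i in range(len(X))
])

# Both measures single out the planted outlier at index 0.
print(int(np.argmax(d2)), int(np.argmax(influence)))
```

A one-step method like this can be fooled by masking when several outliers inflate the covariance together, which is the motivation for the forward search procedure in Chapter 3.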
24

Methods for handling measurement error and sources of variation in functional data models

Cai, Xiaochen January 2015 (has links)
The overall theme of this thesis is the handling of measurement error and sources of variation in functional data models. The first part introduces a wavelet-based sparse principal component analysis approach for characterizing the variability of multilevel functional data that exhibit spatial heterogeneity and local features. The total covariance of the data can be decomposed into three hierarchical levels: between subjects, between sessions and measurement error. Sparse principal component analysis in the wavelet domain allows for reducing dimension and deriving the main directions of the random effects, which may vary at each hierarchical level. The method is illustrated by application to data from a study of human vision. The second part considers scalar-on-function regression when the functional regressors are observed with measurement error. We develop a simulation-extrapolation (SIMEX) method for scalar-on-function regression, which first estimates the error variance, then establishes the relationship between a sequence of added error variances and the corresponding estimates of the coefficient function, and finally extrapolates back to the zero-error case. We introduce three methods for extrapolating the sequence of estimated coefficient functions. In a simulation study, we compare the performance of the simulation-extrapolation method with two pre-smoothing methods based on smoothing splines and functional principal component analysis. The third part discusses several extensions of the simulation-extrapolation method developed in the second part; some are illustrated by application to diffusion tensor imaging data.
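A minimal scalar (non-functional) sketch of the simulation-extrapolation idea, under the assumption that the measurement-error variance is known; the functional version in the thesis extrapolates whole coefficient functions rather than a single slope:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, sigma_u = 5000, 2.0, 0.5      # sigma_u: measurement-error sd (assumed known)

x = rng.normal(size=n)
y = beta * x + rng.normal(0.0, 0.1, n)
w = x + rng.normal(0.0, sigma_u, n)    # error-contaminated regressor

def slope(reg, resp):
    return np.cov(reg, resp)[0, 1] / np.var(reg)

# Simulation step: add extra error with variance zeta * sigma_u**2 and record
# the increasingly attenuated slope estimates.
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [
    np.mean([slope(w + rng.normal(0.0, np.sqrt(z) * sigma_u, n), y)
             for _ in range(20)])
    for z in zetas
]

# Extrapolation step: fit a quadratic in zeta and evaluate at zeta = -1,
# the hypothetical error-free case.
beta_simex = np.polyval(np.polyfit(zetas, slopes, 2), -1.0)

naive = slope(w, y)
print(naive, beta_simex)               # beta_simex recovers beta = 2.0 more closely
```

The quadratic extrapolant used here is only one choice; the thesis compares three extrapolation methods for the sequence of estimated coefficient functions.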
25

The effects and detection of collinearity in an analysis of covariance

Giacomini, Jo Jane January 2011 (has links)
Typescript (photocopy). Digitized by Kansas Correctional Industries.
26

Energy fluxes at the air-snow interface

Helgason, Warren Douglas 11 March 2010
Modelling the energy exchange between the snowpack and the atmosphere is critical for many hydrological applications. Of the terms in the snow energy balance, the turbulent fluxes of sensible and latent heat are the most challenging to estimate, particularly within mountain environments, where their hydrological importance is greatest. Many flux estimation techniques, such as the bulk transfer method, are poorly adapted for use in complex terrain. To characterize the turbulence and assess the suitability of flux estimation techniques, eddy covariance flux measurements and supporting meteorological data were collected from two mountain valley forest openings in Kananaskis Country, AB. These sites were generally calm; however, wind gusts were frequently observed, which markedly affected the turbulence characteristics and increased the rates of momentum and heat transfer. To apply the bulk transfer technique successfully at these sites, it was necessary to use environment-specific transfer coefficients to account for the effect of the surrounding complex terrain. These observations were compared with data collected on a treeless alpine ridge near Whitehorse, YT, where many of the turbulence characteristics were found to be similar to those at flat sites. However, the boundary layer formed over the alpine ridge was very thin, and the site was poorly suited for estimating surface fluxes. The mountain results were further contrasted with data collected over a homogeneous, flat prairie site near Saskatoon, SK. This site included measurement of all of the snow energy terms, permitting an estimate of the energy balance closure obtainable over snow surfaces. The observed energy balance residual was very large, indicating that the eddy covariance technique was unable to capture all of the turbulent energy.
It was concluded that an unmeasured transfer of sensible heat was occurring which was strongly correlated with the long-wave radiation balance. Mechanisms for this relationship were hypothesized. Two snow energy balance models were used to investigate the energy imbalance, where it was observed that the flux terms could be suitably simulated if effective parameters were used to augment the sensible heat transfer rate. The results from this thesis contribute to the understanding of heat transfer processes over snow surfaces during mid-winter conditions and improve the ability to model turbulent heat and mass fluxes from snow surfaces in complex environments.
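The bulk transfer estimate of sensible heat flux mentioned above can be sketched in a few lines. All numeric values, including the transfer coefficient, are illustrative assumptions rather than the thesis's site-specific estimates:

```python
# Bulk transfer estimate of the sensible heat flux over snow.
rho = 1.25          # air density, kg m^-3 (cold air, assumed)
cp = 1005.0         # specific heat of air, J kg^-1 K^-1
C_H = 2.0e-3        # bulk transfer coefficient for heat (assumed)
U = 3.0             # wind speed at reference height, m s^-1
T_air = -5.0        # air temperature, deg C
T_surf = -8.0       # snow surface temperature, deg C

# Positive flux is directed toward the snow surface (air warmer than snow).
H = rho * cp * C_H * U * (T_air - T_surf)
print(H, "W m^-2")
```

The thesis's finding that environment-specific transfer coefficients were needed amounts to saying that C_H in this formula cannot be taken from flat-terrain theory at sheltered mountain sites.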
28

Initial Member Selection and Covariance Localization Study of Ensemble Kalman Filter based Data Assimilation

Yip, Yeung May 2011 (has links)
Petroleum engineers build reservoir simulation models to optimize production and maximize recovery. History matching is one of the methods used to calibrate these models. In traditional history matching, individual model parameters (permeability, relative permeability, initial water saturation, etc.) are adjusted until the production history is matched by the updated reservoir model. However, using only one model does not capture the full range of system uncertainty; a further drawback is that the entire model must be re-matched from the initial time whenever new observation data arrive. The ensemble Kalman filter (EnKF) is a data assimilation technique that has gained increasing interest for petroleum history matching in recent years. The basic EnKF methodology consists of a forecast step and an update step. The method maintains a collection of state vectors, known as an ensemble, which are simulated forward in time; each ensemble member represents a reservoir model (realization). During the update step, the sample covariance is computed from the ensemble, and the state vectors are updated using formulations that involve this sample covariance. When a small ensemble is used for a large, field-scale model, poor estimates of the covariance matrix can result (Anderson and Anderson 1999; Devegowda and Arroyo 2006). To mitigate this problem, various covariance conditioning schemes have been proposed to improve the performance of the EnKF without the large ensemble sizes that would require enormous computational resources. In this study, we implemented the EnKF coupled with several covariance localization schemes (distance-based, streamline trajectory-based, and streamline sensitivity-based localization, as well as the hierarchical EnKF) on a synthetic reservoir field case study. We describe the methodology of each covariance localization scheme, along with its characteristics and limitations.
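A toy sketch of the EnKF update step with distance-based (Schur product) covariance localization; the state dimensions, observation and the Gaussian taper are illustrative assumptions (compactly supported Gaspari-Cohn tapers are more common in practice):

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_ens = 40, 10                # small ensemble -> noisy sample covariance
obs_idx, obs_val, obs_var = 20, 1.0, 0.01

# Ensemble forecast (stand-in for an ensemble of reservoir-model states).
ens = rng.normal(0.0, 1.0, size=(n_state, n_ens))

# Sample covariance from the ensemble (rank-deficient when n_ens << n_state).
anom = ens - ens.mean(axis=1, keepdims=True)
P = anom @ anom.T / (n_ens - 1)

# Distance-based localization: damp spurious long-range covariances via a
# Schur (element-wise) product with a decaying taper around the observation.
dist = np.abs(np.arange(n_state) - obs_idx)
taper = np.exp(-(dist / 5.0) ** 2)
P_loc = P * np.outer(taper, taper)

# Update step for a single direct observation of state component obs_idx,
# using perturbed observations for each ensemble member.
K = P_loc[:, obs_idx] / (P_loc[obs_idx, obs_idx] + obs_var)   # Kalman gain
for j in range(n_ens):
    perturbed = obs_val + rng.normal(0.0, np.sqrt(obs_var))
    ens[:, j] += K * (perturbed - ens[obs_idx, j])

print(ens[obs_idx].mean())             # analysis mean pulled toward obs_val
```

Streamline-based schemes replace the physical-distance taper with one derived from flow connectivity, which the study compares against this distance-based form.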
29

Variability of the hydrological processes involved in the mechanism of flood generation in fast-response catchments

Le, Xuan Kham. Dartus, Denis. January 2008 (has links)
Reproduction of: doctoral thesis: Hydrology and Hydraulics: Toulouse, INPT: 2008. Title from the title screen. Bibliography: 126 references.
30

Uniqueness theorems for non-symmetric convex bodies

Shane, Christopher, Koldobsky, Alexander, January 2009 (has links)
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on March 29, 2010). Thesis advisor: Dr. Alexander Koldobsky. Vita. Includes bibliographical references.
