About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Nowcasting by the BSTS-U-MIDAS Model

Duan, Jun 23 September 2015 (has links)
Using high frequency data for forecasting or nowcasting, we have to deal with three major problems: the mixed frequency problem, the high dimensionality (fat regression, parameter proliferation) problem, and the unbalanced data problem (missing observations, ragged-edge data). We propose a BSTS-U-MIDAS model (Bayesian Structural Time Series-Unlimited-Mixed-Data Sampling model) to handle these problems. This model consists of four parts. First, a structural time series model with regressors (STM) is used to capture the dynamics of the target variable, with the regressors chosen to boost forecast accuracy. Second, a MIDAS specification is adopted to handle the mixed frequencies of the regressors in the STM. Third, spike-and-slab regression is used to implement variable selection. Fourth, Bayesian model averaging (BMA) is used for nowcasting. We use this model to nowcast quarterly GDP for Canada and find that it outperforms the benchmark models, an ARIMA model and a Boosting model, in terms of MAE (mean absolute error) and MAPE (mean absolute percentage error). / Graduate / 0501 / 0508 / 0463 / jonduan@uvic.ca
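The MIDAS step above aligns high-frequency regressors with the quarterly target; in the unrestricted (U-MIDAS) variant, each monthly lag simply enters as its own regressor with no lag-polynomial restriction. A minimal sketch of that alignment step, with the function name and the three-month lag window as illustrative assumptions:

```python
# Hypothetical sketch of U-MIDAS frequency alignment: each quarterly
# observation of the target is paired with the individual monthly lags
# of a high-frequency regressor (no lag-polynomial restriction).

def umidas_align(monthly, n_lags=3):
    """Stack `n_lags` monthly values per quarter into one regressor row.

    monthly : monthly observations, oldest first, length a multiple of 3.
    Returns one row of monthly lags (newest first) per quarter.
    """
    rows = []
    for q_end in range(2, len(monthly), 3):   # index of each quarter's last month
        rows.append([monthly[q_end - k] for k in range(n_lags)])
    return rows

# 6 months -> 2 quarters, 3 monthly lags each
X = umidas_align([1, 2, 3, 4, 5, 6])
# X == [[3, 2, 1], [6, 5, 4]]
```

Each row of `X` would then join the quarterly regressors of the structural time series model, where spike-and-slab priors decide which lags survive.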
22

Forecasting using high-frequency data: a comparison of asymmetric financial duration models

Zhang, Q., Cai, Charlie X., Keasey, K. January 2009 (has links)
The first purpose of this paper is to assess the short-run forecasting capabilities of two competing financial duration models. The forecast performance of the Autoregressive Conditional Multinomial–Autoregressive Conditional Duration (ACM-ACD) model is better than that of the Asymmetric Autoregressive Conditional Duration (AACD) model. However, the ACM-ACD model is more complex in its computational setting and more sensitive to starting values. The second purpose is to examine the effects of market microstructure on the forecasting performance of the two models. The results indicate that the forecast performance of the models generally decreases as the liquidity of the stock increases, with the exception of the most liquid stocks. Furthermore, a simple filter applied to the raw data improves the performance of both models. Finally, the results suggest that both models capture the characteristics of the micro data very well with a minimum sample length of 20 days.
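The duration models compared above share a GARCH-like recursion for the conditional expected duration between trades. A minimal sketch of the linear ACD(1,1) recursion (the function name, parameter values, and initial value are illustrative assumptions, not the paper's estimates):

```python
def acd_psi(durations, omega=0.1, alpha=0.2, beta=0.7, psi0=1.0):
    """Conditional expected durations under a linear ACD(1,1):
        psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    where x_i is the observed duration between trades i-1 and i.
    Parameter values here are placeholders, not fitted estimates."""
    psi = [psi0]
    for x in durations[:-1]:
        psi.append(omega + alpha * x + beta * psi[-1])
    return psi

psi = acd_psi([1.0, 2.0, 3.0])
# one conditional expectation per observed duration
```

A one-step forecast of the next duration is simply the next `psi` value; the asymmetric (AACD) and ACM-ACD variants extend this recursion with direction-dependent terms.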
23

A Multiscale Analysis of the Factors Controlling Nutrient Dynamics and Cyanobacteria Blooms in Lake Champlain

Isles, Peter D. F. 01 January 2016 (has links)
Cyanobacteria blooms have increased in Lake Champlain due to excessive nutrient loading, resulting in negative impacts on the local economy and environmental health. While climate warming is expected to promote increasingly severe cyanobacteria blooms globally, predicting the impacts of complex climate changes on individual lakes is complicated by the many physical, chemical, and biological processes which mediate nutrient dynamics and cyanobacteria growth across time and space. Furthermore, processes influencing bloom development operate on a variety of temporal scales (hourly, daily, seasonal, decadal, episodic), making it difficult to identify important factors controlling bloom development using traditional methods or coarse temporal resolution datasets. To resolve these inherent problems of scale, I use 4 years of high-frequency biological, hydrodynamic, and biogeochemical data from Missisquoi Bay, Lake Champlain; 23 years of lake-wide monitoring data; and integrated process-based climate-watershed-lake models driven by regional climate projections to answer the following research questions: 1) To what extent do external nutrient inputs or internal nutrient processing control nutrient concentrations and cyanobacteria blooms in Lake Champlain; 2) how do internal and external nutrient inputs interact with meteorological drivers to promote or suppress bloom development; and 3) how is climate change likely to impact these drivers and the risk of cyanobacteria blooms in the future? I find that cyanobacteria blooms are driven by specific combinations of meteorological and biogeochemical conditions in different areas of the lake, and that in the absence of strong management actions cyanobacteria blooms are likely to become more severe in the future due to climate change.
24

Modelling Conditional Quantiles of CEE Stock Market Returns

Tóth, Daniel January 2015 (has links)
Correctly specified models for forecasting index returns are important for investors seeking to minimize risk on financial markets. This thesis focuses on conditional Value at Risk modeling, employing a flexible quantile regression framework and hence avoiding assumptions about the return distribution. We apply semiparametric linear quantile regression (LQR) models with realized variance, as well as models with positive and negative semivariance, which allow for direct modelling of the quantiles. Four European stock price indices are considered: the Czech PX, the Hungarian BUX, the German DAX, and the London FTSE 100. The objective is to investigate how the use of realized variance influences VaR accuracy and the correlation between the Central & Eastern and Western European indices. The main contribution is the application of LQR models to the modelling of conditional quantiles and a comparison of the correlation between European indices using realized measures. Our results show that linear quantile regression models provide a better fit and more accurate one-step-ahead forecasts than the classical VaR model with its assumption of normally distributed returns. LQR models with realized variance can therefore be used as an accurate tool by investors. Moreover, we show that diversification benefits are...
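Quantile regression avoids distributional assumptions by minimizing the check (pinball) loss at the chosen probability level; the VaR at level tau is then the fitted conditional tau-quantile. A minimal sketch for the simplest case of a constant predictor, whose minimizer is the empirical quantile (function name and toy data are illustrative):

```python
def pinball_loss(q, y, tau):
    """Average check (pinball) loss of the constant predictor q at level tau.
    Quantile regression minimizes this loss; for a constant predictor the
    minimizer is the empirical tau-quantile of y."""
    return sum((yi - q) * (tau - (yi < q)) for yi in y) / len(y)

# Toy demonstration: the minimizer over candidate values sits at the
# 5% quantile of the sample, i.e. a simple empirical VaR estimate.
y = list(range(1, 101))
q_hat = min(y, key=lambda q: pinball_loss(q, y, 0.05))
```

In the LQR models of the thesis, the constant `q` is replaced by a linear function of realized variance (or semivariances), but the loss being minimized is the same.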
25

Realized Jump GARCH model: Can decomposition of volatility improve its forecasting?

Poláček, Jiří January 2014 (has links)
The present thesis explores the applicability of realized measures in volatility modeling and forecasting. We provide a first comprehensive study of the impact of jump variation on the future volatility of Central and Eastern European stock markets. As the main workhorse, the recently proposed Realized Jump GARCH model, which enables a study of the impact of jump variation on future volatility forecasts, is used. In addition, we estimate Realized GARCH and heterogeneous autoregressive (HAR) models using one-minute and five-minute high frequency data. We find that jumps are important for future volatility, but only to a limited extent due to the high level of information aggregation within the stock market index. Moreover, Realized (Jump) GARCH models outperform the standard GARCH model in terms of data fit and forecasting performance. Comparison with HAR models reveals that Realized (Jump) GARCH models capture a higher portion of the volatility variation. Finally, the Realized Jump GARCH model provides forecasting performance comparable to, or even better than, the other Realized GARCH models.
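The HAR benchmark mentioned above regresses future realized volatility on daily, weekly, and monthly averages of past realized variance. A sketch of how one design row is built (the function name is illustrative; the 5- and 22-day windows follow the usual HAR convention and are an assumption here):

```python
def har_features(rv, t):
    """HAR-RV regressors for forecasting day t's realized variance:
    yesterday's RV, and its averages over the past 5 and 22 trading days."""
    daily = rv[t - 1]
    weekly = sum(rv[t - 5:t]) / 5
    monthly = sum(rv[t - 22:t]) / 22
    return daily, weekly, monthly
```

Stacking these rows for every forecastable day gives the design matrix of a plain OLS regression, which is what makes HAR a popular, easily estimated benchmark against the Realized (Jump) GARCH family.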
26

HPA MODEL FOR MODELING HIGH FREQUENCY DATA: APPLICATION TO FORECAST HOURLY ELECTRIC LOAD

SCHAIANE NOGUEIRA OUVERNEY BARROSO 28 December 2010 (has links)
Short-term forecasting, which involves high frequency data, is essential for reliable and efficient operation of the electricity sector, enabling efficient load allocation and indicating possible distortions in the coming periods (days, hours, or fractions of an hour). To ensure efficient operation, several approaches have been employed for short-term load forecasting. Among them are hybrid models combining Time Series, Fuzzy Logic, and Neural Networks, and the Holt-Winters method with multiple cycles, which is the main tool in use today. The HPA (Hierarchical Profiling Approach) model decomposes the variability of time series data into three components: deterministic, stochastic, and noise. The model can handle single, periodic, and aperiodic observations, and at the same time functions as a pre-whitening technique. This work implements the HPA and applies it to 15-minute electric load data from a state in the southeastern region of Brazil, since the predictive ability of the HPA is still unknown for Brazilian series. The short-term forecasts for the series considered yield a Theil U coefficient of 0.36 and a Mean Absolute Percentage Error (MAPE) of 5.46%, well below the value given by the Naive model used for comparison (15.08%).
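The two accuracy measures reported above are straightforward to compute. Note that Theil's U has several variants; the U1 form below is an assumption rather than necessarily the one used in the thesis:

```python
import math

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actuals must be nonzero)."""
    n = len(actual)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

def theil_u1(actual, forecast):
    """Theil's U1 coefficient: RMSE scaled to lie in [0, 1], with 0 for a
    perfect forecast. (One of several Theil-U variants; an assumption here.)"""
    n = len(actual)
    rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    denom = (math.sqrt(sum(a * a for a in actual) / n)
             + math.sqrt(sum(f * f for f in forecast) / n))
    return rmse / denom
```

With these definitions, the thesis's comparison amounts to checking that the HPA forecasts give a much smaller MAPE (5.46%) than the Naive model (15.08%).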
27

Statistical Models of Market Reactions to Influential Trades

Guo, Yi-Ting 16 July 2007 (has links)
In this study, we consider high frequency transaction data from the NYSE and apply statistical methods to characterize each trade into one of two classes: influential trades and ordinary liquidity trades. First, a median-based approach is used to establish a high R-square price-volume model for high frequency data. Next, transactions are classified into four states based on the trade price, trade volume, quotes, and quoted depth. Volume-weighted transition probabilities among the four states are investigated and shown to be distinct for informed trades and ordinary liquidity trades. Furthermore, four market reaction factors are introduced and studied. Logistic regression models of the influential trades are established based on the four factors, and odds ratios are used to select the cutoff points.
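The volume-weighted transition probabilities described above can be sketched as a weighted frequency matrix over consecutive trade states. The four-state coding and the choice to weight each transition by the arriving trade's volume are illustrative assumptions, not necessarily the paper's exact construction:

```python
def vw_transition(states, volumes, n_states=4):
    """Volume-weighted transition probabilities between the states of
    consecutive trades: each transition s0 -> s1 contributes the volume
    of the arriving trade, and rows are normalized to sum to one."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for s0, s1, v in zip(states, states[1:], volumes[1:]):
        counts[s0][s1] += v
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

P = vw_transition([0, 1, 0, 1], [5, 2, 3, 4])
```

Comparing such matrices estimated separately on suspected informed trades versus ordinary liquidity trades is what reveals the distinct dynamics reported above.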
28

On autocorrelation estimation of high frequency squared returns

Pao, Hsiao-Yung 14 January 2010 (has links)
In this paper, we investigate the problem of estimating the autocorrelation of squared returns modeled by diffusion processes, with data observed at non-equispaced discrete times. Throughout, we suppose that the stock price processes evolve in continuous time as Heston-type stochastic volatility processes and that transactions arrive randomly according to a Poisson process. In order to estimate the autocorrelation at a fixed delay, the original non-equispaced data are first synchronized. When imputing missing data, we adopt the previous-tick interpolation scheme. Asymptotic properties of the sample autocorrelation of squared returns based on the previous-tick synchronized data are investigated. Simulation studies are performed and applications to real examples are illustrated.
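Previous-tick interpolation, used above to synchronize the non-equispaced transactions, takes at each point of a regular grid the most recent observed price. A minimal sketch (the function name is illustrative):

```python
import bisect

def previous_tick(times, prices, grid):
    """Previous-tick synchronization: at each grid time, return the last
    price observed at or before it, or None if no earlier tick exists.
    `times` must be sorted ascending, with prices[i] observed at times[i]."""
    out = []
    for t in grid:
        i = bisect.bisect_right(times, t) - 1   # index of last tick <= t
        out.append(prices[i] if i >= 0 else None)
    return out
```

Squared returns computed from the synchronized series then admit the usual sample autocorrelation at a fixed delay, which is the estimator whose asymptotics the paper studies.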
29

High frequency data aggregation and Value-at-Risk

Pranckevičiūtė, Milda 20 September 2011 (has links)
The Value-at-Risk (VaR) model, as a tool for estimating market risk, is considered in the thesis. It is a statistical model defined as the maximum future loss, with a certain probability, due to likely changes in the value of a financial asset portfolio during a certain period. A new definition of aggregated VaR is given, and an empirical study is provided of how VaR estimates for positions in different currencies depend on the data aggregation function (pointwise, maximum, minimum, and average value). A functional ρ−GARCH(1,1) model is introduced, and theorems on the existence of a stationary solution and on the consistency of the maximum likelihood estimators of the model parameters are proved. Additionally, some examples of the model are given for known density functions of the aggregated observations. Next, a general Hilbert space valued time series is presented and a GARCH(1,1) model with univariate volatility is investigated. Theorems on the existence of a stationary solution and on the consistency and asymptotic normality of the maximum likelihood estimators are proved, and an analysis of the residuals is provided. The last chapter of the thesis presents an empirical study of how the intraday value of the Hurst index depends on data aggregation, using the absolute returns of different foreign currencies.
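The four aggregation rules studied in the thesis (pointwise, maximum, minimum, average) and an empirical VaR estimate can be sketched as follows. The function names, the non-overlapping block scheme, and the quantile convention are illustrative assumptions:

```python
def aggregate(returns, k, rule):
    """Aggregate a high-frequency return series over non-overlapping
    blocks of k observations under one of four rules; 'pointwise' keeps
    the last value of each block. Any ragged tail is dropped."""
    rules = {
        "pointwise": lambda b: b[-1],
        "max": max,
        "min": min,
        "mean": lambda b: sum(b) / len(b),
    }
    f = rules[rule]
    n = len(returns) - len(returns) % k
    return [f(returns[i:i + k]) for i in range(0, n, k)]

def empirical_var(losses, alpha=0.95):
    """Empirical VaR sketch: an order statistic near the alpha-quantile
    of the loss distribution (quantile convention is an assumption)."""
    s = sorted(losses)
    return s[min(int(alpha * len(s)), len(s) - 1)]
```

Estimating `empirical_var` on each aggregated series makes concrete the thesis's question of how the VaR estimate depends on the chosen aggregation function.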
