1 |
Online Monitoring Systems of Market Reaction to Realized Return VolatilityLiu, Chi-chin 23 July 2008 (has links)
Volatility is an important measure of stock market performance. Competing securities market makers keep pace with volatility changes by adjusting their bid-ask spreads and bid/ask quotes promptly and efficiently. For intradaily high-frequency transaction data, the observed volatility of stock returns can be decomposed into the sum of two components: the realized volatility and the volatility due to microstructure noise. The quote adjustments of the market makers form part of the microstructure noise. In this study, we define the ratio of the realized integrated volatility to the observed squared returns as the proportion of realized integrated volatility (PIV). Time series models with generalized-error-distributed innovations are fitted to the PIV data based on 70-minute returns of NYSE tick-to-tick transaction data. Both retrospective and dynamic online control charts for the PIV data are established from the fitted time series models. The McNemar test supports that the dynamic online control charts have the same power to detect out-of-control events as the retrospective control charts. The Wilcoxon signed-rank test is adopted to test the differences between the changes of the market maker volatility and the realized volatility for in-control and out-of-control periods, respectively. The results reveal that points above the upper control limit correspond to situations in which the market makers cannot keep up with the realized integrated volatility, whereas points below the lower control limit indicate excessive reaction by the market makers.
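As a rough illustration of the charting idea described above, the following sketch computes a PIV-style ratio from simulated intraday returns and flags points outside empirical control limits. All data, variable names, and parameter values here are assumptions for illustration, not the thesis' NYSE sample or its fitted time series model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: 390 one-minute log returns per day over 100 days.
# Realized variance is the sum of squared intraday returns; the observed
# squared return adds a microstructure-noise component on top of it.
n_days, n_intra = 100, 390
intra = rng.normal(0.0, 0.0005, size=(n_days, n_intra))
noise = rng.normal(0.0, 0.0002, size=n_days)      # microstructure component

realized_var = (intra ** 2).sum(axis=1)
observed_sq = realized_var + noise ** 2           # stylized decomposition
piv = realized_var / observed_sq                  # proportion of realized vol

# Retrospective Shewhart-style limits from empirical quantiles of the PIV
# series (0.135% / 99.865%, the analogue of Gaussian 3-sigma limits)
lcl, ucl = np.quantile(piv, [0.00135, 0.99865])
out_of_control = (piv < lcl) | (piv > ucl)
print(out_of_control.sum(), "points flagged")
```

In the thesis the limits come from a fitted time series model with generalized error innovations rather than raw empirical quantiles; the sketch only shows the charting mechanics.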
2. Heavy-tail statistical monitoring charts of the active managers' performance
Chen, Chun-Cheng, 3 August 2006
Many performance measurement algorithms can only evaluate active managers' performance after a period of operating time. However, most investors are interested in monitoring active managers' performance at any time, especially when performance is declining, so that they can adjust the targets and contents of their portfolios to reduce their risks. Yashchin, Thomas and David (1997) proposed using a statistical quality control (SQC) procedure to monitor active managers' performance. In particular, they established IR (Information Ratio) control charts under a normality assumption to monitor the dynamic performance of active managers.
However, the distribution of the IR statistic usually has fat tails. Since the underlying distribution of the IR is a key assumption in building the control chart, we consider heavy-tail distributions, such as the mixture normal and the generalized error distribution, to fit the IR data. Based on the fitted distributions, the IR control charts are rebuilt. Simulations and empirical studies show that the remedial control charts detect shifts in active managers' performance more sensitively.
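A minimal sketch of the rebuilding step, assuming simulated IR data (the shape, location, and scale values are illustrative, not the thesis' fitted values): fit a generalized error distribution with `scipy.stats.gennorm` and place control limits at its extreme quantiles instead of at normal-theory 3-sigma limits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical IR series: heavy-tailed, simulated here from a generalized
# error distribution with shape beta < 2 (beta = 2 would be Gaussian).
ir = stats.gennorm.rvs(beta=1.2, loc=0.3, scale=0.8, size=2000, random_state=rng)

# Fit the GED and place control limits at the 0.135% / 99.865% quantiles,
# the heavy-tailed analogue of Gaussian 3-sigma limits.
beta, loc, scale = stats.gennorm.fit(ir)
lcl, ucl = stats.gennorm.ppf([0.00135, 0.99865], beta, loc=loc, scale=scale)

# Normal-theory limits for comparison: thinner assumed tails tend to
# produce limits that are too tight for fat-tailed data.
mu, sd = ir.mean(), ir.std(ddof=1)
print(f"GED limits:    ({lcl:.2f}, {ucl:.2f})")
print(f"Normal limits: ({mu - 3 * sd:.2f}, {mu + 3 * sd:.2f})")
```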
3. Non-normal Bivariate Distributions: Estimation and Hypothesis Testing
Qumsiyeh, Sahar Botros, 1 November 2007
When using data to estimate the parameters of a bivariate distribution, the tradition is to assume that the data come from a bivariate normal distribution. If the distribution is not bivariate normal, which is often the case, the maximum likelihood (ML) estimators are intractable and the least squares (LS) estimators are inefficient. Here, we consider two independent sets of bivariate data which come from non-normal populations. We consider two distinct distributions: one in which the marginal and conditional distributions are both Generalized Logistic, and one in which the marginal and conditional distributions both belong to the Student's t family. We use the method of modified maximum likelihood (MML) to find estimators of the various parameters in each distribution. We perform a simulation study to show that our estimators are more efficient and robust than the LS estimators, even for small sample sizes.
We develop hypothesis testing procedures using the LS and the MML estimators and show that the latter are more powerful and robust. Moreover, we compare our tests with another well-known robust test due to Tiku and Singh (1982) and show that our test is more powerful. The latter is based on censored normal samples and is quite prominent (Lehmann, 1986). We also use our MML estimators to find a more efficient estimator of the Mahalanobis distance. We give real-life examples.
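The efficiency gap between LS and likelihood-based estimators under heavy tails can be seen in a small Monte Carlo experiment. This is an illustration of the general phenomenon only, not the thesis' MML formulas: the sample mean (the LS location estimator) is compared with a likelihood-based location estimate when errors follow a Student's t distribution with few degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated experiment: n observations from a t(3) with known location 5.0.
# LS estimator = sample mean; likelihood estimator = t location fit with the
# degrees of freedom held fixed (fdf).
n, reps, df = 50, 300, 3
ls_est, ml_est = [], []
for _ in range(reps):
    x = stats.t.rvs(df, loc=5.0, scale=1.0, size=n, random_state=rng)
    ls_est.append(x.mean())                       # least squares
    _, loc_hat, _ = stats.t.fit(x, fdf=df)        # likelihood-based
    ml_est.append(loc_hat)

var_ls, var_ml = np.var(ls_est), np.var(ml_est)
print(f"variance ratio ML/LS: {var_ml / var_ls:.2f}")  # well below 1
```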
4. GARCH models applied on Swedish Stock Exchange Indices
Blad, Wiktor and Nedic, Vilim, January 2019
In the financial industry, it has become increasingly popular to measure risk. One of the most common quantitative measures for assessing risk is Value-at-Risk (VaR), which captures the extreme risks an investor is exposed to. Beyond providing information about the expected loss, VaR was introduced in the regulatory frameworks of Basel I and II as a standardized measure of market risk. Given the need to measure VaR accurately, this thesis aims to contribute to the research field by applying GARCH models to financial time series in order to forecast the conditional variance and find accurate VaR estimations. The findings of this thesis are that GARCH models which incorporate the asymmetric effect of positive and negative returns perform better than a standard GARCH, and that leptokurtic distributions outperform the normal distribution. In addition to the various models and distributions, several rolling-window lengths have been used to examine how the forecasts differ with window length.
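The GARCH-to-VaR pipeline can be sketched in a few lines. The parameter values and the simulated return series below are stand-ins (not estimates from the Swedish indices), and the recursion is a plain GARCH(1,1) rather than the asymmetric variants the thesis favours; the point is only how a conditional variance forecast becomes a parametric VaR under different innovation distributions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Assumed GARCH(1,1) parameters (alpha + beta < 1 for stationarity)
omega, alpha, beta = 0.02, 0.08, 0.90
r = rng.normal(0, 1, 1000)                     # stand-in daily returns (%)

# Filter the conditional variance through the GARCH recursion
sigma2 = np.empty_like(r)
sigma2[0] = omega / (1 - alpha - beta)         # unconditional variance
for t in range(1, len(r)):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]

# One-step-ahead volatility forecast and 1% parametric VaR under normal
# and unit-variance Student-t(5) innovations (the leptokurtic case)
sigma_next = np.sqrt(omega + alpha * r[-1] ** 2 + beta * sigma2[-1])
var99_norm = -stats.norm.ppf(0.01) * sigma_next
var99_t = -stats.t.ppf(0.01, df=5) * np.sqrt(3 / 5) * sigma_next
print(f"1% VaR: normal {var99_norm:.3f}, t(5) {var99_t:.3f}")
```

The t(5) VaR is larger than the normal one at the same volatility forecast, which is the mechanism by which leptokurtic distributions avoid underestimating tail risk.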
5. Metody ditheringu obrazu / Methods of image dithering
Pelc, Lukáš, January 2014
This master's thesis discusses image dithering methods. It begins by explaining the theory of digital images, color models, color depth and color gamut. It then covers the basic dithering methods: thresholding, random dithering and ordered (matrix) dithering. Advanced error-diffusion dithering methods are then discussed, the best known being Floyd-Steinberg. A comparison of the different methods is included, among them a subjective comparison using a questionnaire. The practical part is a Java applet that demonstrates generating images with the various dithering methods.
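The Floyd-Steinberg method mentioned above diffuses each pixel's quantization error to its unprocessed neighbours with fixed weights 7/16, 3/16, 5/16, and 1/16. A compact sketch of the standard algorithm (in Python rather than the thesis' Java, for brevity):

```python
import numpy as np

def floyd_steinberg(gray):
    """1-bit Floyd-Steinberg error diffusion on a grayscale array in [0, 1]."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # threshold to black/white
            out[y, x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch dithers to roughly half black, half white pixels,
# preserving the average intensity.
out = floyd_steinberg(np.full((64, 64), 0.5))
mean_level = out.mean()
print(f"mean output level: {mean_level:.3f}")
```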
6. Data Fusion for the Problem of Protein Sidechain Assignment
Lei, Yang, 1 January 2010
In this thesis, we study the problem of protein side chain assignment (SCA) given multiple sources of experimental and modeling data. In particular, the mechanism of X-ray crystallography (X-ray) is re-examined using Fourier analysis, and a novel probabilistic model of X-ray is proposed for SCA's decision making. The relationship between the measurements in X-ray and the desired structure is reformulated in terms of the Discrete Fourier Transform (DFT). The decision making is performed by developing a new resolution-dependent electron density map (EDM) model and applying Maximum Likelihood (ML) estimation, which simply reduces to the Least Squares (LS) solution. Calculation of the confidence probability associated with this decision making is also given. One possible extension of this novel model is real-space refinement when the continuous conformational space is used.

Furthermore, we present a data fusion scheme combining multiple sources of data to solve the SCA problem. The merit of our framework is its capability to exploit multiple sources of information to make decisions from a probabilistic perspective based on Bayesian inference. Although our approach aims at the SCA problem, it can easily be transplanted to solving for the entire protein structure.
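The core fusion idea can be illustrated with a toy scalar case (not the thesis' X-ray/EDM model): two independent Gaussian measurements of the same quantity combine by inverse-variance weighting, which is simultaneously the Bayesian posterior mean under a flat prior and the weighted least squares solution, mirroring how ML reduces to LS above.

```python
# Fuse two independent Gaussian measurements (mu_i, var_i) of one quantity.
# Precision (inverse variance) adds; the posterior mean is precision-weighted.
def fuse(mu1, var1, mu2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# The more precise second source (variance 1 vs 4) dominates the estimate
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # 11.6 0.8
```

Note how the fused variance (0.8) is smaller than either input variance, which is the quantitative payoff of combining multiple sources.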
7. Corrected LM goodness-of-fit tests with application to stock returns
Percy, Edward Richard, Jr., 5 January 2006
No description available.
8. Analyzing value at risk and expected shortfall methods: the use of parametric, non-parametric, and semi-parametric models
Huang, Xinxin, 25 August 2014
Value at Risk (VaR) and Expected Shortfall (ES) are methods often used to measure market risk. Inaccurate and unreliable Value at Risk and Expected Shortfall models can lead to underestimation of the market risk that a firm or financial institution is exposed to, and therefore may jeopardize the well-being or survival of the firm or financial institution during adverse markets. The objective of this study is therefore to examine various Value at Risk and Expected Shortfall models, including fatter tail models, in order to analyze the accuracy and reliability of these models.
Thirteen VaR and ES models under three main approaches (parametric, non-parametric and semi-parametric) are examined in this study. The results show that the proposed model (ARMA(1,1)-GJR-GARCH(1,1)-SGED) gives the most balanced Value at Risk results, while the semi-parametric model (Extreme Value Theory, EVT) is the most accurate Value at Risk model in this study for the S&P 500.
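The simplest of the non-parametric approaches above, historical simulation, can be sketched in a few lines. The simulated heavy-tailed returns are a stand-in for the S&P 500 series, and the level and scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in daily returns: Student-t(4) scaled to ~1% daily volatility
returns = rng.standard_t(df=4, size=5000) * 0.01

# Historical-simulation VaR and ES at the 99% level:
# VaR is the empirical loss quantile; ES is the mean loss beyond VaR.
alpha = 0.99
losses = -returns
var99 = np.quantile(losses, alpha)
es99 = losses[losses >= var99].mean()
print(f"VaR 99%: {var99:.4f}  ES 99%: {es99:.4f}")
```

ES always exceeds VaR at the same level, since it averages the tail losses that VaR only bounds.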
9. Estimation and Hypothesis Testing in Stochastic Regression
Sazak, Hakan Savas, 1 December 2003
Regression analysis is very popular among researchers in various fields, but almost all researchers use the classical methods, which assume that X is nonstochastic and the error is normally distributed. However, in real-life problems X is generally stochastic and the error can be nonnormal. The maximum likelihood (ML) estimation technique, which is known to have optimal features, is very problematic when the distribution of X (the marginal part) or of the error (the conditional part) is nonnormal.

The modified maximum likelihood (MML) technique, which asymptotically gives estimators equivalent to the ML estimators, makes it possible to conduct estimation and hypothesis testing under nonnormal marginal and conditional distributions. In this study we show that the MML estimators are highly efficient and robust. Moreover, test statistics based on the MML estimators are much more powerful and robust than the test statistics based on the least squares (LS) estimators that dominate the literature. Theoretically, the MML estimators are asymptotically minimum variance bound (MVB) estimators, but simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration, and the results given are based on these distributions.

As a future study, the MML technique can be applied to other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
10. Distribution of Particle Image Velocimetry (PIV) Errors in a Planar Jet
Howell, Jaron A., 1 May 2018
Particle Image Velocimetry (PIV) is an optical fluid measurement technique used to obtain velocity measurements. Two PIV systems were used to capture data simultaneously, and the measurement error of the MS PIV system was calculated. The error distributions were examined to determine when uncertainty estimations fail for the CS PIV-UQ method. The number of multi-pass PIV processing passes required for the CS method to produce reliable uncertainty estimations was also investigated. A further investigation determined that error distributions in PIV systems are correlated with flow shear and particle seeding density. The spatial correlation of random errors was also examined at the jet core and in the shear regions of the flow.
It was found that in flow regions with large shear the error distributions were non-Gaussian, and that in these regions the CS uncertainty results did not match the error. For multi-pass PIV processing with 50% and 75% interrogation window (IW) overlap, it was found that 4 and 6 passes, respectively, should be used for the CS uncertainty estimations to be reliable. It was also found that the spatial correlation of random errors is much larger in the shear regions of the jet flow than in the jet core.
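A quick screen for the non-Gaussianity reported in shear regions is the excess kurtosis of the error sample. The sketch below uses simulated stand-ins (a Laplace sample for shear-region errors, a Gaussian for the jet core), not the actual jet data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Stand-in error samples: Gaussian for the jet core, heavy-tailed Laplace
# for the high-shear region (purely illustrative choices)
core_err = rng.normal(0, 0.1, 20000)
shear_err = rng.laplace(0, 0.1, 20000)

# Excess kurtosis: ~0 for a Gaussian, ~3 for a Laplace
k_core = stats.kurtosis(core_err)
k_shear = stats.kurtosis(shear_err)
print(f"excess kurtosis: core {k_core:.2f}, shear {k_shear:.2f}")
```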