1.
A non-parametric test procedure based on range statistics to identify causes of non-Normality in speculative price change distributions. Abrahamson, Allen Arnold. January 1982
Most models of asset pricing or market equilibrium require the assumption of stationary price change generation; that is, the mean and/or variance of the price change is hypothesized to be constant over time. On the other hand, the widely accepted models of speculative price change generation, such as the subordinated stochastic process models, are based on mixtures of random variables. These mixtures, or compositions, define non-stationary, non-Normally distributed forms, so the mixture-based models cannot be reconciled with the requirement of stationarity. A contaminated process, such as that suggested by Mandelbrot, implies continuously changing mean and/or variance. However, an alternative concept of mixture exists that is consistent with models requiring stationary moments. This process is referred to as slippage. Slippage defines a state in which moments are constant over intervals of time but occasionally change value. If speculative price changes were found to be characterized by slippage rather than by contamination, such a finding would still be consistent with the empirical distributions of price changes; more importantly, slippage would meet the requirement of stationarity imposed by the capital market and options models. This work advances a methodology that discriminates between contamination-based and slippage-based non-stationarity in speculative price changes. Such a technique is necessary inasmuch as curve fitting or estimation of moments cannot make this discrimination. The technique employs non-parametric range estimators: any given form of non-Normality induces an identifiable pattern of bias in these estimators, and once the pattern induced by a time series of price changes is identified, it indicates whether contamination or slippage generated the series. Owing to the composition and technique of the procedure developed here, it is referred to as a "Range Spectrum." The results find that individual stocks display contamination, as hypothesized by the subordinated stochastic process models, while a broad-based index of price change displays the characteristics of slippage. This finding has implications, and suggests avenues for further research, in the areas of diversification, securities and options pricing, and market timing.
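As a rough illustration of the distinction the abstract draws (not a reconstruction of its Range Spectrum), the sketch below simulates a contaminated series, whose variance drifts continuously, and a slipped series, whose variance is constant within blocks but jumps between them, then compares a simple studentized range statistic across window lengths. The variance paths, block size, and window lengths are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, block = 5000, 250

# Contamination: the variance drifts continuously, so no interval of
# the series has constant moments.
sigma_contam = 1.0 + 0.5 * np.sin(np.linspace(0.0, 20.0 * np.pi, n))
contaminated = rng.normal(0.0, sigma_contam)

# Slippage: moments are constant within blocks of time but "slip" to
# new values between blocks.
sigma_slip = np.repeat(rng.choice([0.5, 1.0, 2.0], size=n // block), block)
slipped = rng.normal(0.0, sigma_slip)

def mean_studentized_range(x, window):
    """Average (range / sample std) over non-overlapping windows --
    a simple non-parametric range statistic."""
    x = x[: len(x) // window * window].reshape(-1, window)
    r = x.max(axis=1) - x.min(axis=1)
    return float(np.mean(r / x.std(axis=1, ddof=1)))

# Scanning across window lengths gives a crude "spectrum" of range
# behaviour; the two mixture types distort it in different ways.
for window in (10, 50, 250):
    print(f"window={window:4d}  "
          f"contaminated={mean_studentized_range(contaminated, window):.3f}  "
          f"slipped={mean_studentized_range(slipped, window):.3f}")
```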
2.
Statistical analysis of some technical trading rules in financial markets. Yam, Hon-chuen (任漢全). January 1996
Statistics / Master of Philosophy
3.
The statistical properties and effectiveness of filter trading rule. Xin, Ling (辛聆). January 2013
The filter trading rule is a technical trading strategy that has long been popular among practitioners and has been used extensively to test market efficiency. It has been shown that the filter trading rule is mathematically equivalent to the CUSUM quality control test, as both are based on change point detection theory via sequential probability ratio tests (SPRT). To study the operating characteristics of the filter trading rule, many results from the CUSUM literature can be applied. However, some operating characteristics of interest for a technical trading rule, such as the expected profit per day, have no counterpart in the quality control setting. In this thesis, we derive formulae for computing these operating characteristics.
It is well known that, like any other technical trading rule, the filter trading rule is not effective when the asset price follows a random walk. In this thesis, we study the statistical properties and effectiveness of the filter trading rule under different asset price models, including the Markov regime switching model and the conditional heteroskedasticity model. The properties considered include the waiting time for the first signal in filter trading, the duration of a long or short cycle, the profit return derived from a long or short cycle, and the unit time return of long term filter trading. Building on these results, we consider the problem of optimizing the performance of a filter trading rule by choosing a suitable filter size.
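For concreteness, here is a minimal sketch of the long side of an ordinary f-filter rule, run on a simulated random walk where the rule is expected to have no edge. The filter size, price model, and cycle bookkeeping are simplified assumptions; the thesis's short cycles and analytic formulae are not reproduced.

```python
import numpy as np

def filter_rule_long_cycles(prices, f):
    """Run the long side of an f-filter rule: go long after the price
    rises by a fraction f above the running minimum, close the long
    after it falls by f below the running maximum. Returns a list of
    (entry_index, exit_index, cycle_return) tuples."""
    cycles, state, entry = [], "flat", None
    run_min = run_max = prices[0]
    for t, p in enumerate(prices):
        if state == "flat":
            run_min = min(run_min, p)
            if p >= run_min * (1.0 + f):            # buy signal
                state, entry, run_max = "long", t, p
        else:
            run_max = max(run_max, p)
            if p <= run_max * (1.0 - f):            # sell signal ends the cycle
                cycles.append((entry, t, p / prices[entry] - 1.0))
                state, run_min = "flat", p
    return cycles

# On a random walk the rule should have no edge; cycle counts, durations
# and returns are the kinds of operating characteristics studied above.
rng = np.random.default_rng(1)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 2500)))
cycles = filter_rule_long_cycles(prices, f=0.05)
if cycles:
    durations = [exit_ - entry for entry, exit_, _ in cycles]
    returns = [r for *_, r in cycles]
    print(len(cycles), "cycles; mean duration",
          round(float(np.mean(durations)), 1), "steps; mean cycle return",
          round(float(np.mean(returns)), 4))
```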
For the filter trading rule under the conditional heteroskedasticity model, the change point detection methods lead to a new technical trading rule, called the generalized filter trading rule in this thesis. The generalized filter trading rule is shown to outperform the ordinary filter trading rule when applied to the trading of Hang Seng Index futures contracts. Finally, we apply the filter trading rule to intraday trading on high frequency Hang Seng Index futures data.
Statistics and Actuarial Science / Doctor of Philosophy
4.
Mining optimal technical trading rules with genetic algorithms. Shen, Rujun (沈汝君). January 2011
In recent years technical trading rules have become widely known; not only academics but also many investors apply them in financial markets. One approach to constructing technical trading rules is to use technical indicators, such as moving averages (MA) and filter rules. These rules are widely used, possibly because the indicators are simple to compute and easy to program. An alternative approach is to rely on chart patterns. However, the patterns and signals detected by such rules have traditionally been identified by visual inspection, and as far as I know there is no universally accepted method of constructing chart patterns. In 2000, Prof. Andrew Lo and his colleagues were the first to define five pairs of chart patterns mathematically: Head-and-Shoulders (HS) and Inverted Head-and-Shoulders (IHS), Broadening tops (BTOP) and bottoms (BBOT), Triangle tops (TTOP) and bottoms (TBOT), Rectangle tops (RTOP) and bottoms (RBOT), and Double tops (DTOP) and bottoms (DBOT).
The basic formulation of a chart pattern consists of two steps: detection of (i) the extreme points of a price series and (ii) the shape of the pattern. In Lo et al. (2000), kernel smoothing was used to identify the extreme points. Lo et al. (2000) admitted that the optimal bandwidth used in the kernel method is not the best choice and that expert judgement is needed in selecting the bandwidth. In addition, their work considered chart pattern detection only, not buy/sell signal detection; it is possible for a chart pattern to form without a signal being detected, in which case no transaction is made. In this thesis, I propose a new class of technical trading rules that aims to resolve these problems. More specifically, each chart pattern is parameterized by a set of parameters that governs the shape of the pattern and the entry and exit signals of trades. The optimal set of parameters can then be determined by genetic algorithms (GAs). The advantage of GAs is that they can handle high-dimensional optimization problems whether the parameters to be optimized are continuous or discrete. In addition, GAs are convenient when the fitness function is not differentiable or has a multi-modal surface.
Statistics and Actuarial Science / Master of Philosophy
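To make the GA machinery concrete, the sketch below runs a basic real-coded genetic algorithm (tournament selection, blend crossover, Gaussian mutation) over the two window lengths of a simple moving-average crossover rule on simulated prices. The crossover rule is a stand-in for the thesis's parameterized chart patterns, and the operators, population size, and fitness function are illustrative assumptions, not the variant actually used.

```python
import numpy as np

def moving_average(x, w):
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[w:] - c[:-w]) / w

def rule_return(params, prices):
    """Fitness: in-sample log return of a two-moving-average crossover
    rule. A stand-in for the parameterized chart-pattern rules in the
    thesis; the GA machinery is the same either way."""
    short, long_ = sorted(max(2, int(round(p))) for p in params)
    if short == long_:
        return -np.inf
    n = len(prices)
    ma_s = moving_average(prices, short)[-(n - long_):]
    ma_l = moving_average(prices, long_)[-(n - long_):]
    pos = (ma_s > ma_l).astype(float)      # long when the short MA is on top
    rets = np.diff(np.log(prices[long_:]))
    return float(np.sum(pos[:-1] * rets))  # yesterday's position, today's return

def genetic_search(fitness, prices, pop_size=40, n_gen=60,
                   bounds=(2.0, 200.0), mut_sd=10.0, seed=2):
    """A basic real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(n_gen):
        fit = np.array([fitness(ind, prices) for ind in pop])
        new = [pop[np.argmax(fit)]]                  # keep the best as-is
        while len(new) < pop_size:
            parents = []
            for _ in range(2):                       # two tournaments of two
                i, j = rng.integers(pop_size, size=2)
                parents.append(pop[i] if fit[i] > fit[j] else pop[j])
            w = rng.uniform(size=2)                  # blend crossover
            child = w * parents[0] + (1.0 - w) * parents[1]
            child += rng.normal(0.0, mut_sd, size=2) # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([fitness(ind, prices) for ind in pop])
    return pop[np.argmax(fit)], float(fit.max())

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 2000)))
best, best_fit = genetic_search(rule_return, prices)
print("best (short, long):", sorted(int(round(p)) for p in best),
      " in-sample log return:", round(best_fit, 4))
```

Note that the fitness function is neither differentiable nor unimodal in the window lengths, which is exactly the situation in which the abstract argues GAs are convenient.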
5.
Spatial autocorrelation and liquidity in Hong Kong's real estate market. Li, Chun-wah (李振華). January 2010
Spatial autocorrelation is commonly found in hedonic pricing models of real estate prices, but little attention has been paid to identifying its causes. The primary objective of this research is to examine the causes of spatial autocorrelation in housing prices. Observed autocorrelation is often attributed to the omission of important location characteristics in the modelling process: since it is practically impossible to include every location characteristic, some variables are inevitably omitted, leaving spatially autocorrelated residuals in the hedonic pricing model. This thesis proposes a new source of spatial autocorrelation: real estate market liquidity. We hypothesize that liquidity affects the geographical boundary within which buyers and sellers search for price information. When the "immediate vicinity" of a property has few transactions, buyers and sellers may have to search for price information from more distant locations. Low liquidity in the vicinity of a property should therefore strengthen the spatial autocorrelation of real estate prices.
A Spatial-Liquidity Hedonic Pricing (SLHP) model is proposed to test this hypothesis. The SLHP model generalizes traditional spatial autoregressive models by making the spatial process liquidity dependent. When applied to the apartment market in Hong Kong, the model is operationalized by defining the "immediate vicinity" as the building in which the subject unit is located. Furthermore, the SLHP model recognizes that past transactions may affect current transactions but not vice versa, so the spatial weight matrix is simply lower triangular. Under this condition, we show that Maximum Likelihood Estimation is equivalent to Ordinary Least Squares Estimation, which greatly simplifies the estimation procedure and reduces the empirical analysis to a feasible scale.
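The equivalence claimed here can be checked directly: for a strictly lower triangular weight matrix W, det(I - rho*W) = 1, so the Jacobian term of the spatial ML log-likelihood vanishes and the model can be estimated by regressing y on Wy and X. The sketch below verifies this on synthetic data; the weight construction, sparsity, and parameter values are illustrative assumptions, not the thesis's specification.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 500, 0.4
beta = np.array([1.0, 0.5, -0.3])

# Hypothetical transactions ordered in time. W[i, j] > 0 only for j < i,
# so a deal can be influenced by earlier deals but never by later ones:
# W is strictly lower triangular.
W = np.tril(rng.uniform(size=(n, n)), k=-1)
W *= rng.uniform(size=(n, n)) < 0.02              # keep a sparse neighbour set
row_sums = W.sum(axis=1, keepdims=True)
W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
eps = rng.normal(0.0, 0.1, n)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)  # y = rho*W y + X b + e

# det(I - rho*W) = 1 for strictly lower triangular W, so the Jacobian
# term of the spatial ML log-likelihood vanishes and ML reduces to
# least squares of y on [W y, X]. The spatial lag W y for deal i depends
# only on earlier deals, hence is uncorrelated with eps_i.
Z = np.column_stack([W @ y, X])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("rho_hat:", round(coef[0], 3), " beta_hat:", np.round(coef[1:], 3))
```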
Based on 15 500 transactions of residential units in Taikooshing, Hong Kong from 1992 to 2006, we conclude that while positive spatial autocorrelation is present in housing prices, its magnitude decreases when liquidity, as measured by the past transaction volume in the immediate vicinity of a subject unit, is high. In addition, we find that current prices are spatially correlated only with transactions that occurred within the last three months, reflecting the relatively high information efficiency of Hong Kong's residential market. These results are generally robust across a variety of distance, liquidity, and time weight specifications.
This study establishes liquidity as a determinant of spatial autocorrelation in real estate prices. This is a new finding that contributes to the economic literature on liquidity effects and to the technical literature on spatial estimation. The results not only reveal the spatially dependent price formation process in the real estate market but also have practical applications in the hedonic modelling of real estate prices for mass valuation and index construction.
Real Estate and Construction / Doctor of Philosophy
6.
The heteroscedastic structure of some Hong Kong price series. Ma, Po-yee, Pauline (馬寶兒). January 1989
Statistics / Master of Social Sciences
7.
Scanner data and the construction of price indices. Ivancic, Lorraine. Economics, Australian School of Business, UNSW. January 2007
This thesis explores whether scanner data can be used to inform Consumer Price Index (CPI) construction, with particular reference to substitution bias and the choice of aggregation dimensions. The potential costs and benefits of using scanner data are reviewed. Existing estimates of substitution bias are found to show considerable variation. An Australian scanner data set is used to estimate substitution bias for six different aggregation methods and for fixed base and superlative indexes; direct and chained indexes are also calculated. Estimates of substitution bias prove highly sensitive both to the method of aggregation and to whether direct or chained indexes are used. The ILO (2004) recommends the use of dissimilarity indexes to resolve the question of when to chain, and this thesis provides the first empirical study of dissimilarity indexes in this context. The results indicate that dissimilarity indexes may not be sufficient to resolve the issue.

A Constant Elasticity of Substitution (CES) index provides an approximate estimate of substitution-bias-free price change without the need for current period expenditure weights; however, an elasticity parameter is needed. Two methods, referred to as the algebraic and econometric methods, are used to estimate the elasticity parameter. The econometric approach involves the estimation of a system of equations proposed by Diewert (2002a) that has not been estimated previously. The results show a relatively high level of substitution at the elementary aggregate level, which supports the use of a Jevons index, rather than a Carli or Dutot index, at this level. Elasticity parameter estimates vary considerably across time, and statistical testing shows that they differ significantly across estimation methods.

Aggregation is an extremely important issue in the compilation of the CPI, yet little information exists about 'appropriate' aggregation methods. Aggregation is typically recommended over 'homogeneous' units. A hedonic framework is used to test for item homogeneity across four supermarket chains and across all stores within each chain, which is a novel approach. The results show that treating the same good as homogeneous across stores belonging to the same chain may be recommended.
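As a hedged illustration of the elementary index formulas discussed above, the sketch below computes Carli, Dutot, and Jevons indexes, plus a Lloyd-Moulton CES index, which, like the CES index described in the abstract, needs only base-period expenditure shares and an elasticity parameter. The items, prices, shares, and sigma are all invented for the example.

```python
import numpy as np

def carli(p0, p1):
    """Carli index: arithmetic mean of price relatives."""
    return np.mean(p1 / p0)

def dutot(p0, p1):
    """Dutot index: ratio of average prices."""
    return np.mean(p1) / np.mean(p0)

def jevons(p0, p1):
    """Jevons index: geometric mean of price relatives."""
    return np.exp(np.mean(np.log(p1 / p0)))

def lloyd_moulton(p0, p1, s0, sigma):
    """Lloyd-Moulton CES index: uses only base-period expenditure
    shares s0 and an elasticity of substitution sigma (sigma != 1)."""
    return np.dot(s0, (p1 / p0) ** (1.0 - sigma)) ** (1.0 / (1.0 - sigma))

# Made-up prices for five items in two periods, with base-period shares.
p0 = np.array([2.00, 3.50, 1.20, 4.80, 0.99])
p1 = np.array([2.10, 3.40, 1.45, 4.80, 1.05])
s0 = np.array([0.30, 0.25, 0.15, 0.20, 0.10])

print("Carli :", round(carli(p0, p1), 4))
print("Dutot :", round(dutot(p0, p1), 4))
print("Jevons:", round(jevons(p0, p1), 4))
print("CES   :", round(lloyd_moulton(p0, p1, s0, sigma=1.5), 4))
```

By the arithmetic-geometric mean inequality the Carli index is never below the Jevons index on the same data, which is one reason the choice of elementary formula matters for measured substitution bias.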