1 |
Identifying the relative importance of stock characteristics in the UK market. French, D., Wu, Yuliang, Li, Y., 2016.
There is no consensus in the literature as to which stock characteristic best explains returns. In this study, we employ a novel econometric approach better suited than the traditional characteristic sorting method to answer this question for the UK market. We evaluate the relative explanatory power of market, size, momentum, volatility, liquidity and book-to-market factors in a semiparametric characteristic-based factor model which does not require constructing characteristic portfolios. We find that momentum is the most important factor and liquidity is the least important based on their relative contribution to the fit of the model and the proportion of sample months for which factor returns are significant. Our evidence supports the view that irrational investor behaviour may drive stock returns.
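Rough illustration of the idea of ranking characteristics by their contribution to fit: run a cross-sectional regression each month and measure the average drop in R-squared when a characteristic is excluded. In the sketch below, plain OLS stands in for the semiparametric estimator used in the study, and the DataFrame `panel` and all column names are hypothetical.

```python
# Rough illustration only: proxy the relative importance of each characteristic
# by the average drop in monthly cross-sectional R-squared when it is excluded.
# Plain OLS stands in for the semiparametric estimator; `panel` and all column
# names are hypothetical.
import numpy as np
import pandas as pd

CHARS = ["market_beta", "size", "momentum", "volatility", "liquidity", "book_to_market"]

def cross_sectional_r2(month_df, cols):
    """R-squared of one month's cross-sectional OLS of returns on `cols`."""
    X = np.column_stack([np.ones(len(month_df))] + [month_df[c].values for c in cols])
    y = month_df["excess_return"].values
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

def average_importance(panel):
    """Average loss of fit from dropping each characteristic, highest first."""
    drops = {c: [] for c in CHARS}
    for _, month_df in panel.groupby("month"):
        full = cross_sectional_r2(month_df, CHARS)
        for c in CHARS:
            reduced = cross_sectional_r2(month_df, [k for k in CHARS if k != c])
            drops[c].append(full - reduced)
    return pd.Series({c: np.mean(v) for c, v in drops.items()}).sort_values(ascending=False)
```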
|
2 |
Evolutionary factor analysis. Motta, Giovanni, 06 February 2009.
Linear factor models have attracted considerable interest over recent years especially in the econometrics literature. The intuitively appealing idea to explain a panel of economic variables by a few common factors is one of the reasons for their popularity. From a statistical viewpoint, the need to reduce the cross-section
dimension to a much smaller factor space dimension is obvious considering the large data sets available in economics and finance.
One of the characteristics of the traditional factor model is that the process is stationary in the time dimension. This appears restrictive, given the fact that over long time periods it is unlikely that e.g. factor loadings remain constant. For example, in the capital asset pricing model (CAPM) of Sharpe (1964) and
Lintner (1965), typical empirical results show that factor loadings are time-varying, which in the CAPM is caused by time-varying second moments.
In this thesis we generalize the tools of factor analysis for the study of stochastic processes whose behavior evolves over time. In particular, we introduce a new class of factor models with loadings that are allowed to be smooth functions of time. To estimate the resulting nonstationary factor model we generalize the properties of the principal components technique to the time-varying framework. We mainly consider separately two classes of Evolutionary Factor Models: Evolutionary Static Factor Models (Chapter 2) and Evolutionary Dynamic Factor Models (Chapter 3).
In Chapter 2 we propose a new approximate factor model where the common components are static but
nonstationary. The nonstationarity is introduced by the time-varying factor loadings, that are estimated by the eigenvectors of a nonparametrically estimated covariance matrix. Under simultaneous asymptotics
(cross-section and time dimension go to infinity simultaneously), we give conditions for consistency of our estimators of the time varying covariance matrix, the loadings and the factors. This paper generalizes to the locally stationary case the results given by Bai (2003) in the stationary framework. A simulation study
illustrates the performance of these estimators.
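As a rough illustration of the Chapter 2 estimation strategy (not the thesis's exact estimator), one can smooth the panel's second moments over rescaled time with a kernel and take the leading eigenvectors of each local covariance matrix as time-varying loadings. The sketch below assumes the panel has already been centred; the kernel, bandwidth, and identification conditions are placeholders.

```python
# Sketch of the Chapter 2 idea (not the thesis's exact estimator): smooth the
# panel's second moments over rescaled time with a Gaussian kernel and take the
# leading eigenvectors of each local covariance matrix as time-varying loadings.
# X is a T x N array assumed to be centred; the bandwidth and the number of
# factors r are placeholders.
import numpy as np

def local_pca(X, r, bandwidth=0.1):
    T, N = X.shape
    u = np.arange(T) / T                                  # rescaled time in [0, 1)
    loadings = np.empty((T, N, r))
    factors = np.empty((T, r))
    for t in range(T):
        w = np.exp(-0.5 * ((u - u[t]) / bandwidth) ** 2)  # kernel weights around t
        w /= w.sum()
        Sigma_t = (X * w[:, None]).T @ X                  # local covariance estimate
        _, vecs = np.linalg.eigh(Sigma_t)
        L = vecs[:, ::-1][:, :r]                          # r leading eigenvectors
        loadings[t] = L
        factors[t] = L.T @ X[t]                           # local factor estimate (up to scale)
    common = np.einsum("tnr,tr->tn", loadings, factors)   # estimated common components
    return loadings, factors, common
```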
The estimators proposed in Chapter 2 are based on a nonparametric estimator of the covariance matrix
whose entries are computed with the same smoothing parameter. This approach has the advantage of
guaranteeing a positive definite estimator but it does not adapt to the different degrees of smoothness of the different entries of the covariance matrix. In Chapter 5 we give an additional theoretical result which explains how to construct a positive definite estimate of the covariance matrix while permitting different
smoothing parameters. This estimator is based on the Cholesky decomposition of a pre-estimator of the covariance matrix.
In Chapter 3 we introduce the dynamics in our modeling. This model generalizes the dynamic (but
stationary) factor model of Forni et al. (2000), as well as the nonstationary (but static) factor model of Chapter 2. In the stationary (dynamic) case, Forni et al. (2000) show that the common components are estimated by the eigenvectors of a consistent estimator of the spectral density matrix, which is a matrix depending only on the frequency. In the evolutionary framework the dynamics of the model is explained by a time-varying spectral density matrix. This operator is a function of time as well as of the frequency.
In this chapter we show that the common components of a locally stationary dynamic factor model can be estimated consistently by the eigenvectors of a consistent estimator of the time-varying spectral density matrix.
In Chapter 4 we apply our theoretical results to real data and compare the performance of our approach with that based on standard techniques. Chapter 6 concludes and mentions the main questions for future research.
|
3 |
Pratt's importance measures in factor analysis: a new technique for interpreting oblique factor models. Wu, Amery Dai Ling.
This dissertation introduces a new method, Pratt's measure matrix, for interpreting multidimensional oblique factor models in both exploratory and confirmatory contexts. Overall, my thesis, supported by empirical evidence, refutes the currently recommended and practiced methods for understanding an oblique factor model; that is, interpreting the pattern matrix or structure matrix alone or juxtaposing both without integrating the information.
Chapter Two reviews the complexities of interpreting a multidimensional factor solution due to factor correlation (i.e., obliquity). Three major complexities are highlighted: (1) the inconsistency between the pattern and structure coefficients, (2) the distortion of additive properties, and (3) the inappropriateness of the traditional cut-off rules for judging which coefficients are "meaningful".
Chapter Three provides the theoretical rationale for adapting Pratt's importance measures from their use in multiple regression to that of factor analysis. The new method is demonstrated and tested with both continuous and categorical data in exploratory factor analysis. The results show that Pratt's measures are applicable to factor analysis and are able to resolve three interpretational complexities arising from factor obliquity.
In the context of confirmatory factor analysis, Chapter Four warns researchers that a structure coefficient could be entirely spurious due to factor obliquity as well as zero constraint on its corresponding pattern coefficient. Interpreting such structure coefficients as Graham et al. (2003) suggested can be problematic. The mathematically more justified method is to transform the pattern and structure coefficients into Pratt's measures.
The last chapter describes eight novel contributions of this dissertation. The new method is the first attempt at ordering the importance of latent variables for multivariate data. It is also the first attempt at demonstrating and explicating the existence, mechanism, and implications of the suppression effect in factor analyses. Specifically, the new method resolves the three interpretational problems due to factor obliquity, assists in identifying a better-fitting exploratory factor model, proves that a structure coefficient in a confirmatory factor analysis with a zero pattern constraint is entirely spurious, avoids the debate over the choice between oblique and orthogonal factor rotation, and, last but not least, provides a tool for consolidating the role of factors as the underlying causes.
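For concreteness, a small sketch of how a Pratt measure matrix can be computed from an oblique solution: treating each observed variable's regression on the factors, the pattern coefficient acts as the standardized weight, the structure coefficient as the zero-order correlation, and the communality as the R-squared. The implementation below is my own reading of the adaptation described above, so treat its details as an assumption rather than the dissertation's code.

```python
# Sketch: turn a pattern matrix P and structure matrix S (variables x factors,
# from an oblique rotation) into a Pratt importance matrix. Per variable, the
# pattern coefficient plays the role of the standardized regression weight, the
# structure coefficient the zero-order correlation, and their cross-products sum
# to the communality (the R-squared), so each row of measures sums to one.
# Negative cells flag suppression.
import numpy as np

def pratt_measures(P, S):
    P = np.asarray(P, dtype=float)
    S = np.asarray(S, dtype=float)
    communality = (P * S).sum(axis=1, keepdims=True)
    return P * S / communality

# Hypothetical two-factor example:
P = [[0.70, 0.05], [0.65, -0.10], [0.10, 0.80]]
S = [[0.72, 0.30], [0.60, 0.15], [0.38, 0.83]]
print(pratt_measures(P, S))
```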
|
4 |
Performance evaluation of latent factor models for rating prediction. Zheng, Lan, 24 April 2015.
Since the Netflix Prize competition, latent factor models (LFMs) have become the comparison "staples" for many recent recommender methods. Yet the impact of data preprocessing and updating algorithms on LFMs remains unclear. The performance improvement of LFMs over baseline approaches, moreover, hovers at only low percentage numbers. It is therefore time for a better understanding of their real power beyond the overall root mean square error (RMSE), which lies in a very compressed range and offers little room for deeper insight.
We introduce an experiment-based handbook of LFMs that reveals the influence of data preprocessing and updating algorithms. We perform a detailed experimental study of the performance of classical staple LFMs on a classical dataset, MovieLens 1M, which sheds light on a much stronger performance of LFMs for particular categories of users and items, for RMSE and other measures. In particular, LFMs show surprisingly large advantages when handling several difficult user and item categories. By comparing the distributions of test ratings and predicted ratings, we show that the performance of LFMs is influenced by the rating distribution. We then propose a method to estimate the performance of LFMs for a given rating dataset. Finally, we provide a very simple, open-source library that implements staple LFMs, achieves performance similar to some very recent (2013) developments in LFMs, and is more transparent than some other libraries in wide use.
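For readers unfamiliar with the models being benchmarked, a bare-bones latent factor model trained by stochastic gradient descent looks roughly like the sketch below; user and item biases, the preprocessing and updating variants studied in the thesis, and sensible hyperparameters are all omitted or assumed.

```python
# Bare-bones matrix factorization for rating prediction, trained by SGD on
# (user, item, rating) triples. A sketch of the "staple" LFM, not the library
# released with the thesis; hyperparameters are arbitrary.
import numpy as np

def train_lfm(ratings, n_users, n_items, k=20, lr=0.01, reg=0.05, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            pu, qi = P[u].copy(), Q[i].copy()
            e = r - pu @ qi                        # prediction error
            P[u] += lr * (e * qi - reg * pu)
            Q[i] += lr * (e * pu - reg * qi)
    return P, Q

def rmse(ratings, P, Q):
    errors = [(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]
    return float(np.sqrt(np.mean(errors)))
```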
|
5 |
Financial Mathematics Project. Li, Jiang, 24 April 2012.
This project describes the underlying principles of Modern Portfolio Theory, the Capital Asset Pricing Model (CAPM), and multi-factor models in detail; explores the process of constructing optimal portfolios using Modern Portfolio Theory; estimates the expected returns and covariance matrix of assets using the CAPM and multi-factor models; and finally applies these models in real markets to analyze our portfolios and compare their performance.
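A compressed sketch of the two building blocks named above, under textbook assumptions (a known risk-free rate, sample moments taken as given, no short-sale constraints); the closed-form weights are the unconstrained Markowitz solution, and all inputs are placeholders.

```python
# Sketch: CAPM expected returns from betas, and the unconstrained Markowitz
# weights minimizing variance for a target expected return. Inputs are
# hypothetical; real use needs estimated betas and a well-conditioned
# covariance matrix.
import numpy as np

def capm_expected_returns(rf, market_premium, betas):
    """E[R_i] = rf + beta_i * (E[R_m] - rf)."""
    return rf + np.asarray(betas) * market_premium

def mean_variance_weights(mu, Sigma, target):
    """Minimize w' Sigma w subject to w' mu = target and sum(w) = 1."""
    mu = np.asarray(mu)
    Sigma_inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    A = ones @ Sigma_inv @ ones
    B = ones @ Sigma_inv @ mu
    C = mu @ Sigma_inv @ mu
    D = A * C - B ** 2
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return Sigma_inv @ (lam * ones + gam * mu)
```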
|
6 |
Structural Models for Macroeconomics and Forecasting. De Antonio Liedo, David, 03 May 2010.
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.
Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data
revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008). That is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the “regression approach” followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.
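For reference, the classical regression approach mentioned above boils down to two OLS regressions of the revision: a significant slope on the preliminary announcement points to noise, a significant slope on the final figure points to news. The sketch below is that textbook test, not the encompassing model proposed in the chapter; the series names are hypothetical.

```python
# Textbook news/noise tests in the spirit of Mankiw, Runkle and Shapiro (1984),
# not the encompassing model of the chapter. Regress the revision (final minus
# preliminary) on the preliminary figure (significant slope: evidence of noise)
# and on the final figure (significant slope: evidence of news). Series names
# are placeholders for real-time GDP growth vintages.
import statsmodels.api as sm

def news_noise_tests(preliminary, final):
    revision = final - preliminary
    noise_reg = sm.OLS(revision, sm.add_constant(preliminary)).fit()
    news_reg = sm.OLS(revision, sm.add_constant(final)).fit()
    return noise_reg, news_reg
```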
The second and third chapters acknowledge the possibility that macroeconomic data is measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process that we describe in the first chapter.
Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to the VAR based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable
to that of the reduced form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful at forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.
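The formal test of predictive accuracy referred to here is, in this literature, typically of the Diebold-Mariano type; a bare sketch under squared-error loss is given below, with the caveat that the exact statistic used in the chapter may differ.

```python
# Generic Diebold-Mariano comparison of two forecast error series under
# squared-error loss, with a Bartlett/Newey-West long-run variance for h-step
# forecasts. Not necessarily the exact statistic used in the chapter.
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2     # loss differential
    T = len(d)
    d_bar = d.mean()
    dc = d - d_bar
    lrv = dc @ dc / T                                  # autocovariance at lag 0
    for lag in range(1, h):
        gamma = dc[lag:] @ dc[:-lag] / T
        lrv += 2 * (1 - lag / h) * gamma               # Bartlett-weighted terms
    dm = d_bar / np.sqrt(lrv / T)
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p_value
```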
The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE
models. The recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and
Sala, 2005, Uhlig, 2004). In this Chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which
models explicitly the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE
modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that
resulting from the original specification.
|
7 |
Essays on monetary policy, saving and investment. Lenza, Michele, 04 June 2007.
This thesis addresses three relevant macroeconomic issues: (i) why
Central Banks behave so cautiously compared to optimal theoretical
benchmarks, (ii) whether monetary variables add information about
future Euro Area inflation beyond that contained in a large amount of
non monetary variables and (iii) why national saving and investment are so
correlated in OECD countries in spite of the high degree of
integration of international financial markets.
The process of innovation in the elaboration of economic theory
and statistical analysis of the data witnessed in the last thirty
years has greatly enriched the toolbox available to
macroeconomists. Two aspects of such a process are particularly
noteworthy for addressing the issues in this thesis: the
development of macroeconomic dynamic stochastic general
equilibrium models (see Woodford, 1999b for an historical
perspective) and of techniques that make it possible to handle large data
sets in a parsimonious and flexible manner (see Reichlin, 2002 for
an historical perspective).
Dynamic stochastic general equilibrium models (DSGE) provide the
appropriate tools to evaluate the macroeconomic consequences of
policy changes. These models, by exploiting modern intertemporal
general equilibrium theory, aggregate the optimal responses of
individuals as consumers and firms in order to identify the
aggregate shocks and their propagation mechanisms through the
restrictions imposed by optimizing individual behavior. Such a
modelling strategy, uncovering economic relationships invariant to
a change in policy regimes, provides a framework to analyze the
effects of economic policy that is robust to the Lucas critique
(see Lucas, 1976). The early attempts of explaining business
cycles by starting from microeconomic behavior suggested that
economic policy should play no role since business cycles
reflected the efficient response of economic agents to exogenous
sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982
and, more recently, King and Rebelo, 1999). This view was challenged by
several empirical studies showing that the adjustment mechanisms
of variables at the heart of macroeconomic propagation mechanisms
like prices and wages are not well represented by efficient
responses of individual agents in frictionless economies (see, for
example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al., 2004). Hence, macroeconomic models currently incorporate
some sources of nominal and real rigidities in the DSGE framework
and allow the study of the optimal policy reactions to inefficient
fluctuations stemming from frictions in macroeconomic propagation
mechanisms.
Against this background, the first chapter of this thesis sets up
a DSGE model in order to analyze optimal monetary policy in an
economy with sectorial heterogeneity in the frequency of price
adjustments. Price setters are divided into two groups: those
subject to Calvo type nominal rigidities and those able to change
their prices at each period. Sectorial heterogeneity in price
setting behavior is a relevant feature in real economies (see, for
example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro
Area). Hence, neglecting it would lead to an understatement of the
heterogeneity in the transmission mechanisms of economy wide
shocks. In this framework, Aoki (2001) shows that a Central
Bank maximizing social welfare should stabilize only inflation in
the sector where prices are sticky (hereafter, core inflation).
Since complete stabilization is the only true objective of the
policymaker in Aoki (2001) and, hence, is not only desirable
but also implementable, the equilibrium real interest rate in the
economy is equal to the natural interest rate irrespective of the
degree of heterogeneity that is assumed. This would lead to the
conclusion that stabilizing core inflation rather than overall
inflation does not imply any observable difference in the
aggressiveness of the policy behavior. While maintaining the
assumption of sectorial heterogeneity in the frequency of price
adjustments, this chapter adds non negligible transaction
frictions to the model economy in Aoki (2001). As a
consequence, the social welfare maximizing monetary policymaker
faces a trade-off among the stabilization of core inflation,
economy wide output gap and the nominal interest rate. This
feature reflects the trade-offs between conflicting objectives
faced by actual policymakers. The chapter shows that the existence
of this trade-off makes the aggressiveness of the monetary policy
reaction dependent on the degree of sectorial heterogeneity in the
economy. In particular, in the presence of sectorial heterogeneity in
price adjustments, Central Banks are much more likely to behave
less aggressively than in an economy where all firms face nominal
rigidities. Hence, the chapter concludes that the excessive
caution in the conduct of monetary policy shown by actual Central
Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not
represent a sub-optimal behavior but, on the contrary, might be
the optimal monetary policy response in the presence of a relevant
sectorial dispersion in the frequency of price adjustments.
DSGE models are proving useful also in empirical applications and
recently efforts have been made to incorporate large amounts of
information in their framework (see Boivin and Giannoni, 2006). However, the
typical DSGE model still relies on a handful of variables. Partly,
this reflects the fact that, increasing the number of variables,
the specification of a plausible set of theoretical restrictions
identifying aggregate shocks and their propagation mechanisms
becomes cumbersome. On the other hand, several questions in
macroeconomics require the study of a large amount of variables.
Among others, two examples related to the second and third chapter
of this thesis can help to understand why. First, policymakers
analyze a large quantity of information to assess the current and
future stance of their economies and, because of model
uncertainty, do not rely on a single modelling framework.
Consequently, macroeconomic policy can be better understood if the
econometrician relies on a large set of variables without imposing
too much a priori structure on the relationships governing their
evolution (see, for example, Giannone et al., 2004 and Bernanke et al., 2005).
Moreover, the process of integration of goods and financial markets
implies that the source of aggregate shocks is increasingly global
requiring, in turn, the study of their propagation through cross
country links (see, among others, Forni and Reichlin, 2001 and Kose et al., 2003). A
priori, country specific behavior cannot be ruled out and many of
the homogeneity assumptions that are typically embodied in open
macroeconomic models for keeping them tractable are rejected by
the data. Summing up, in order to deal with such issues, we need
modelling frameworks able to treat a large amount of variables in
a flexible manner, i.e. without pre-committing on too many
a-priori restrictions more likely to be rejected by the data. The
large extent of comovement among wide cross sections of economic
variables suggests the existence of few common sources of
fluctuations (Forni et al., 2000 and Stock and Watson, 2002) around which
individual variables may display specific features: a shock to the
world price of oil, for example, hits oil exporters and importers
with different sign and intensity or global technological advances
can affect some countries before others (Giannone and Reichlin, 2004). Factor
models mainly rely on the identification assumption that the
dynamics of each variable can be decomposed into two orthogonal
components - common and idiosyncratic - and provide a parsimonious
tool allowing the analysis of the aggregate shocks and their
propagation mechanisms in a large cross section of variables. In
fact, while the idiosyncratic components are poorly
cross-sectionally correlated, driven by shocks specific to a
variable or a group of variables or measurement error, the common
components capture the bulk of cross-sectional correlation, and
are driven by few shocks that affect, through variable specific
factor loadings, all items in a panel of economic time series.
Focusing on the latter components allows useful insights on the
identity and propagation mechanisms of aggregate shocks underlying
a large amount of variables. The second and third chapter of this
thesis exploit this idea.
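In its simplest static form, the common/idiosyncratic decomposition described above can be approximated by principal components on a standardized panel, as in the sketch below; the generalized dynamic refinements of Forni et al. (2000) are deliberately left out.

```python
# Simplest static approximation to the decomposition described above: split a
# standardized panel X (T observations x N series) into common and idiosyncratic
# parts using the first r principal components. The dynamic/generalized
# refinements of Forni et al. (2000) are deliberately omitted.
import numpy as np

def common_idiosyncratic(X, r):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each series
    _, eigvec = np.linalg.eigh(Z.T @ Z / len(Z))       # eigenvectors of sample covariance
    loadings = eigvec[:, ::-1][:, :r]                  # r leading eigenvectors
    factors = Z @ loadings                             # estimated common factors (T x r)
    common = factors @ loadings.T
    return factors, common, Z - common                 # factors, common, idiosyncratic
```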
The second chapter deals with the issue of whether monetary variables
help to forecast inflation in the Euro Area harmonized index of
consumer prices (HICP). Policymakers form their views on the
economic outlook by drawing on large amounts of potentially
relevant information. Indeed, the monetary policy strategy of the
European Central Bank acknowledges that many variables and models
can be informative about future Euro Area inflation. A peculiarity
of such strategy is that it assigns to monetary information the
role of providing insights for the medium - long term evolution of
prices while a wide range of alternative non monetary variables
and models are employed in order to form a view on the short term
and to cross-check the inference based on monetary information.
However, both the academic literature and the practice of the
leading Central Banks other than the ECB do not assign such a
special role to monetary variables (see Gali et al., 2004 and
references therein). Hence, the debate whether money really
provides relevant information for the inflation outlook in the
Euro Area is still open. Specifically, this chapter addresses the
issue of whether money provides useful information about future
inflation beyond what is contained in a large amount of non monetary
variables. It shows that a few aggregates of the data explain a
large amount of the fluctuations in a large cross section of Euro
Area variables. This allows us to postulate a factor structure for
the large panel of variables at hand and to aggregate it into a few
synthetic indexes that still retain the salient features of the
large cross section. The database is split into two blocks of
variables: non monetary (baseline) and monetary variables. Results
show that baseline variables provide a satisfactory predictive
performance improving on the best univariate benchmarks in the
period 1997 - 2005 at all horizons between 6 and 36 months.
Remarkably, monetary variables provide an appreciable improvement on
the performance of baseline variables at horizons above two years.
However, the analysis of the evolution of the forecast errors
reveals that most of the gains obtained relative to univariate
benchmarks of non forecastability with baseline and monetary
variables are realized in the first part of the prediction sample
up to the end of 2002, which casts doubts on the current
forecastability of inflation in the Euro Area.
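Schematically, the forecasting exercise amounts to extracting a few factors from the large panel and projecting inflation h months ahead on them and on its own current value. The sketch below is a simplified, non-real-time version; variable names and the lag structure are assumptions.

```python
# Simplified, non-real-time version of a factor-augmented h-step inflation
# forecast: r principal-component factors from the large panel plus current
# inflation as predictors. Names and lag choices are illustrative.
import numpy as np

def factor_forecast(panel, inflation, h, r=3):
    Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    _, eigvec = np.linalg.eigh(Z.T @ Z / len(Z))
    F = Z @ eigvec[:, ::-1][:, :r]                      # common factors
    X = np.column_stack([np.ones(len(F)), F, inflation])
    y = np.roll(inflation, -h)                          # inflation h months ahead
    beta, *_ = np.linalg.lstsq(X[:-h], y[:-h], rcond=None)
    return X[-1] @ beta                                 # forecast for period T + h
```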
The third chapter is based on a joint work with Domenico Giannone
and gives empirical foundation to the general equilibrium
explanation of the Feldstein - Horioka puzzle. Feldstein and Horioka (1980) found
that domestic saving and investment in OECD countries strongly
comove, contrary to the idea that high capital mobility should
allow countries to seek the highest returns in global financial
markets and, hence, imply a correlation among national saving and
investment closer to zero than one. Moreover, capital mobility has
strongly increased since the publication of Feldstein - Horioka's
seminal paper while the association between saving and investment
does not seem to have decreased comparably. Through general equilibrium
mechanisms, the presence of global shocks might rationalize the
correlation between saving and investment. In fact, global shocks,
affecting all countries, tend to create imbalance on global
capital markets causing offsetting movements in the global
interest rate and can generate the observed correlation across
national saving and investment rates. However, previous empirical
studies (see Ventura, 2003) that have controlled for the effects
of global shocks in the context of saving-investment regressions
failed to give empirical foundation to this explanation. We show
that previous studies have neglected the fact that global shocks
may propagate heterogeneously across countries, failing to
properly isolate components of saving and investment that are
affected by non pervasive shocks. We propose a novel factor
augmented panel regression methodology that allows us to isolate
idiosyncratic sources of fluctuations under the assumption of
heterogeneous transmission mechanisms of global shocks. Remarkably,
by applying our methodology, the association between domestic
saving and investment decreases considerably over time,
consistently with the observed increase in international capital
mobility. In particular, in the last 25 years the correlation
between saving and investment disappears.
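Reduced to its core, the proposed methodology purges saving and investment rates of heterogeneously loaded global factors before running the Feldstein - Horioka regression. The sketch below is a stylized rendering under strong simplifications (principal-component factors, country-by-country loadings), not the chapter's exact estimator.

```python
# Stylized rendering of the factor-augmented Feldstein-Horioka regression:
# estimate global factors from the stacked saving/investment panel, remove each
# country's heterogeneous exposure to them, and regress the residual investment
# rate on the residual saving rate. Not the chapter's exact estimator.
import numpy as np

def factor_augmented_fh(saving, investment, r=2):
    """saving, investment: T x C arrays (time by country)."""
    panel = np.hstack([saving, investment])
    Z = panel - panel.mean(axis=0)
    _, eigvec = np.linalg.eigh(Z.T @ Z / len(Z))
    F = Z @ eigvec[:, ::-1][:, :r]                     # global factors (T x r)
    G = np.column_stack([np.ones(len(F)), F])

    def residualize(X):
        beta, *_ = np.linalg.lstsq(G, X, rcond=None)   # country-specific loadings
        return X - G @ beta

    s_idio, i_idio = residualize(saving), residualize(investment)
    b, *_ = np.linalg.lstsq(s_idio.reshape(-1, 1), i_idio.reshape(-1, 1), rcond=None)
    return float(b[0, 0])                               # saving-retention coefficient
```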
|
8 |
Algoritmické fundamentové obchodování / Algorithmic fundamental trading. Pižl, Vojtěch, January 2016.
This thesis aims to apply methods of value investing to the developing field of algorithmic trading. Firstly, we investigate the effect of several fundamental variables on stock returns using the fixed effects model and the portfolio approach. The results confirm that size and book-to-market ratio explain some variation in stock returns that the market alone does not capture. Moreover, we observe a significant positive effect of the book-to-market ratio and a negative effect of size on future stock returns. Secondly, we try to utilize those variables in a trading algorithm. Using common performance evaluation tools, we test several fundamentally based strategies and discover that investing in small stocks with a high book-to-market ratio beats the market in the tested period between 2009 and 2015. Although we have to be careful with conclusions, as our dataset has some limitations, we believe there is a market anomaly in the testing period which may be caused by market participants' preference for technical strategies over value investing.
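A stylized version of the strategy tested in the thesis: each year, hold an equal-weighted portfolio of stocks in the smallest size tercile and the highest book-to-market tercile, and compare it with the market. The DataFrame layout, column names, and tercile breakpoints are assumptions rather than the author's exact rules.

```python
# Stylized yearly small-value strategy: hold an equal-weighted portfolio of
# stocks in the smallest size tercile and highest book-to-market tercile, and
# compare cumulative returns with the market. Columns and breakpoints are
# assumptions, not the thesis's exact rules.
import pandas as pd

def small_value_backtest(data, market_returns):
    """data: rows (year, ticker) with 'size', 'book_to_market', 'next_year_return';
    market_returns: yearly market return Series indexed by year."""
    yearly = {}
    for year, df in data.groupby("year"):
        small = df["size"] <= df["size"].quantile(1 / 3)
        value = df["book_to_market"] >= df["book_to_market"].quantile(2 / 3)
        yearly[year] = df.loc[small & value, "next_year_return"].mean()
    strategy = pd.Series(yearly).sort_index()
    both = pd.concat({"strategy": strategy, "market": market_returns}, axis=1)
    return (1 + both).cumprod()                         # cumulative growth of 1 unit invested
```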
|