
Essays on Macroeconomics and Asset Pricing:

Eiermann, Alexander January 2017 (has links)
Thesis advisor: Peter Ireland / A significant theoretical literature suggests that the effects of open market operations and large-scale asset purchases are limited when short-term interest rates are constrained by the zero-lower-bound (ZLB). This view is supported by a growing body of empirical evidence that points to the tepid response of the U.S. economy to extraordinary policy measures implemented by the Federal Reserve (Fed) during the past several years. In the first essay, Effective Monetary Policy at the Zero-Lower-Bound, I show that permanent open market operations (POMOs), defined as financial market interventions that permanently increase the supply of money, remain relevant at the ZLB and can increase output and inflation. Consequently, I argue that the limited success of Fed policy in recent years may be due in part to the fact that it failed to generate sufficient money creation to support economic recovery following the Great Recession. I then demonstrate that conducting POMOs at the ZLB may improve welfare when compared to a broad range of policy regimes, and conclude with a robustness exercise illustrating that money creation remains relevant at the ZLB even when it is not necessarily permanent. With these results in hand, I explore the consequences of Fed QE more directly in a framework in which asset purchases are an independent instrument of monetary policy. In the second essay, Effective Quantitative Easing at the Zero-Lower-Bound, I show that the observed lack of transmission from U.S. monetary policy to output and economic activity is a consequence of the fact that the Fed engaged in what I define as sterilized QE: temporary asset purchases that have a limited effect on the money supply. Conversely, I show that asset purchase programs geared towards generating sustained increases in the money supply may significantly attenuate output and inflation losses associated with adverse economic shocks and the ZLB constraint. Furthermore, these equilibrium outcomes may be achieved with a smaller volume of asset purchases. My results imply that Fed asset purchase programs designed to offset the observed declines in the U.S. money supply could have been a more effective and efficient means of providing economic stimulus during the recovery from the Great Recession. In the third essay, Buyout Gold: MIDAS Estimators and Private Equity, which is joint work with Apollon Fragkiskos, Harold Spilker, and Russ Wermers, we develop a new approach to studying private equity returns using a data set first introduced in Fragkiskos et al. (2017). Our innovation is that we adopt a mixed data sampling (MIDAS) framework and model quarterly private equity returns as a function of high-frequency factor prices. This approach allows us to endogenize time aggregation and use within-period information that may be relevant to pricing private equity returns in a single, parsimonious framework. We find that our MIDAS framework offers superior performance in terms of generating economically meaningful factor loadings and in-sample and out-of-sample fit using index and vintage-level returns when compared with other methods from the literature. Results using fund-level data are mixed, but MIDAS does display a slight edge. Concerning appropriate time aggregation, we show that there is significant heterogeneity at the vintage level. This implies that highly aggregated private equity data may not properly reflect underlying performance in the cross section.
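A minimal sketch of the MIDAS idea described above, using simulated data and exponential Almon lag weights chosen purely for illustration (the essay's exact specification and data are not shown): quarterly returns are regressed on a parametrically weighted sum of the daily factor returns within each quarter.

```python
# Sketch of a MIDAS regression: quarterly returns on an Almon-weighted sum of
# the daily factor returns inside each quarter. Data, weights, and starting
# values are illustrative assumptions, not the essay's specification.
import numpy as np
from scipy.optimize import minimize

def almon_weights(theta1, theta2, n_lags):
    j = np.arange(1, n_lags + 1)
    z = theta1 * j + theta2 * j ** 2
    w = np.exp(z - z.max())          # subtract the max for numerical stability
    return w / w.sum()

def midas_fit(y_q, x_daily, n_lags=63):
    """y_q: (T,) quarterly returns; x_daily: (T, n_lags) daily factor returns."""
    def sse(params):
        beta0, beta1, t1, t2 = params
        w = almon_weights(t1, t2, n_lags)
        resid = y_q - beta0 - beta1 * (x_daily @ w)
        return np.sum(resid ** 2)
    return minimize(sse, x0=[0.0, 1.0, 0.05, -0.002], method="Nelder-Mead").x

# Simulated example: 120 quarters, 63 trading days per quarter.
rng = np.random.default_rng(0)
T, L = 120, 63
x = rng.normal(0.0, 0.01, size=(T, L))                    # daily factor returns
y = 0.02 + 1.3 * (x @ almon_weights(0.05, -0.002, L)) + rng.normal(0, 0.002, T)
print(np.round(midas_fit(y, x), 3))                       # [beta0, beta1, theta1, theta2]
```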

Risk premia estimation in Brazil: wait until 2041 / Estimação de prêmios de risco no Brasil: aguarde até 2041

Cavalcante Filho, Elias 20 June 2016 (has links)
The estimation results for Brazilian risk premia are not robust in the literature. For instance, among the 133 market risk premium estimates reported in the literature, 41 are positive, 18 are negative and the remainder are not significant. In this study, we investigate the grounds for this lack of consensus. First, we analyze the sensitivity of US risk premia estimation to two relevant constraints present in the Brazilian market: the small number of assets (137 eligible stocks) and the short time-series sample available for estimation (14 years). We conclude that the second constraint, small T, has the greater impact on the results. Next, we evaluate the two potential causes of problems for risk premia estimation with small T: i) small-sample bias in betas; ii) divergence between ex-post and ex-ante risk premia. Through Monte Carlo simulations, we conclude that, for the T available for Brazil, the beta estimates are no longer a problem. However, it is necessary to wait until 2041 to be able to estimate ex-ante risk premia with Brazilian data. / Os resultados das estimações de prêmios de risco brasileiros não são robustos na literatura. Por exemplo, dentre 133 estimativas de prêmio de risco de mercado documentadas, 41 são positivas, 18 negativas e o restante não é significante. No presente trabalho, investigamos os motivos da falta de consenso. Primeiramente, analisamos a sensibilidade da estimação dos prêmios de risco norte-americanos a duas restrições presentes no mercado brasileiro: o baixo número de ativos (137 ações elegíveis) e a pequena quantidade de meses disponíveis para estimação (14 anos). Concluímos que a segunda restrição, T pequeno, tem maior impacto sobre os resultados. Em seguida, avaliamos as duas potenciais causas de problemas para a estimação de prêmios de risco em amostras com T pequeno: i) viés de pequenas amostras nas estimativas dos betas; e ii) divergência entre prêmio de risco ex-post e ex-ante. Através de exercícios de Monte Carlo, concluímos que para o T disponível no Brasil, a estimativa dos betas já não é mais um problema. No entanto, ainda precisamos esperar até 2041 para conseguirmos estimar corretamente os prêmios ex-ante com os dados brasileiros.
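A rough back-of-the-envelope illustration of why the short sample is the binding constraint (the numbers below are assumptions, not the thesis's calibration): the sample mean of excess returns estimates the ex-ante premium with standard error sigma / sqrt(T), so decades of data are needed before the estimate becomes informative.

```python
# Back-of-the-envelope illustration with assumed numbers (mu and sigma are not
# taken from the thesis): the t-statistic of the estimated premium grows only
# slowly with the number of years T.
import numpy as np

mu, sigma = 0.06, 0.25        # assumed ex-ante premium and annual return volatility
for T in (14, 25, 40, 70):
    se = sigma / np.sqrt(T)
    print(f"T = {T:2d} years: std. error = {se:.3f}, t-stat of sample premium ~ {mu / se:.2f}")
# With roughly 14 years the ex-post mean is far too noisy to pin down the
# ex-ante premium; only after several more decades does it become informative.
```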

O modelo de projeção de lucros de Hou, Dijk e Zhang (2012) e o custo de capital implícito: metodologia para aplicação em empresas brasileiras / The earnings projection model of Hou, Dijk and Zhang (2012) and the implied cost of capital: a study on the Brazilian market

Pereira, Bruna Losada 03 August 2016 (has links)
A teoria sobre o custo de capital das empresas estudada desde a década de 1950 trouxe amplas contribuições aos estudos de finanças corporativas, alocação de carteiras de investimento, fusões e aquisições, ciências contábeis, entre outras aplicações. Os modelos clássicos de custo de capital compreendem modelos como o CAPM (Capital Asset Pricing Model), de Sharpe, Lintner e Mossin; o APT (Arbitrage Pricing Theory), de Ross; o modelo de 3-fatores, de Fama e French, e de 4-fatores, de Carhart, entre outros. Em virtude das diversas críticas feitas aos modelos clássicos (ELTON, 1999; FAMA; FRENCH, 2004; GRINBLATT; TITMAN, 2005; ASHTON; WANG, 2012; HOU et al., 2012), há então espaço para o surgimento de uma metodologia alternativa para estimativa do custo de capital das empresas, contexto em que surgem os modelos de Custo de Capital Implícito (ICC - Implied Cost of Capital). São cinco os principais modelos de ICC estudados e testados na literatura: Gordon e Gordon, modelo FHERM (1997), Claus e Thomas, modelo CT (2001), Gebhardt, Lee e Swaminathan, modelo GLS (2001), Ohlson e Juettner-Nauroth, modelo OJ (2005) e Easton, modelo de EASTON (2004). Todos se baseiam em expectativas sobre resultados futuros projetados, e as pesquisas que os aplicam fundamentam-se majoritariamente em projeções de analistas. Há, no entanto, diversos problemas levantados pela literatura quanto ao uso de dados produzidos por analistas (GUAY et al., 2011; HOU et al., 2012; KARAMANOU, 2012). Hou et al. (2012) propõem então uma metodologia cross-sectional de projeções de resultados das empresas, com base em dados contábeis, alternativa às projeções dos analistas e aplicável aos modelos de ICC, a qual se mostrou eficiente para os testes desenvolvidos. O objetivo desta tese foi verificar se a metodologia de projeção de lucros proposta por Hou et al. (2012), com as devidas considerações e ajustes, é válida para aplicação no mercado brasileiro e, em caso positivo, verificar qual a magnitude do Custo de Capital Implícito esperado pelos investidores para aplicação de recursos no Brasil, através da aplicação dos cinco principais modelos de ICC. Analisou-se também se os modelos de ICC podem ser considerados eficientes como ferramenta para prever os ativos que terão maiores ou menores retornos futuros e, por fim, verificou-se como o prêmio pelo risco implícito se compara com o prêmio pelo risco do CAPM, e qual das duas abordagens é mais eficiente como ferramenta de precificação de ativos. Para tanto, foi analisada uma janela de dados de 1994 a 2014. As principais conclusões obtidas foram: (i) o modelo de Hou et al. 
(2012) ajustado tem desempenho muito positivo para fins de projeção de lucros no Brasil, com capacidade de prever 69,8% dos lucros futuros; (ii) o prêmio pelo risco implícito apurado para o Brasil para o período de 1994 a 2014 é da magnitude de 7,5% a.a., em linha com a literatura internacional e nacional; (iii) identificou-se a importância de se efetuarem ajustes e controles inflacionários, em especial, para aplicar os modelos GLS e CT, sob risco de subestimar o ICC nesses modelos; (iv) verificou-se que o único modelo, entre os testados, contraindicado para aplicação no Brasil é o FHERM, cujas simplificações teóricas e de premissas levam a resultados muito voláteis e pouco capazes de prever os retornos futuros das ações; e (v) na análise comparativa entre os modelos de ICC e os modelos clássicos de custo de capital, concluiu-se que as metodologias de ICC testadas são eficientes como ferramenta para previsão da performance futura dos ativos, diferentemente do CAPM tradicional que apresentou resultados inferiores e não conclusivos para tais fins. Por fim, salienta-se a potencial contribuição dos modelos de ICC para análises relacionadas às finanças comportamentais e prêmios de liquidez. / The theory on companies' cost of capital has been studied since the 1950s, bringing extensive contributions to the study of corporate finance, allocation of investment portfolios, mergers and acquisitions, and accounting, among several other applications. The classical models of the cost of capital include, for example, the CAPM (Capital Asset Pricing Model) of Sharpe, Lintner and Mossin, the APT (Arbitrage Pricing Theory) of Ross, the 3-factor model of Fama and French and the 4-factor model of Carhart. Due to the several criticisms directed at the classical models and their limitations (ELTON, 1999; FAMA; FRENCH, 2004; GRINBLATT; TITMAN, 2005; ASHTON; WANG, 2012; HOU et al., 2012), this context made room for the emergence of an alternative methodology for estimating firms' cost of capital, represented by the Implied Cost of Capital (ICC) models. There are five main ICC models studied and tested in the literature: Gordon and Gordon, FHERM Model (1997); Claus and Thomas, CT Model (2001); Gebhardt, Lee and Swaminathan, GLS Model (2001); Ohlson and Juettner-Nauroth, OJ Model (2005); and Easton, EASTON Model (2004). All such models are based on expectations about projected future earnings, and studies that apply these methods rely predominantly on analysts' estimates. There are, however, several problems raised by the literature regarding the use of analysts' projections (GUAY et al., 2011; HOU et al., 2012; KARAMANOU, 2012). Hou et al. (2012) then proposed a cross-sectional approach to estimating firms' future earnings, as an alternative methodology to apply in the ICC models, which proved very efficient. Given this context, the objective of this thesis is to verify whether the cross-sectional methodology proposed by Hou et al. (2012) to estimate future earnings, with due adjustments to the local market's characteristics, is valid for application in Brazil. If so, we then verify the magnitude of the ICC expected by investors in Brazil, estimated using the five main ICC models. This thesis also analyses whether the ICC models can be considered efficient as a tool to predict which assets should have larger or smaller future returns. 
Finally, we compare the risk premium estimated by the ICC models with the risk premium estimated by the CAPM, and identify which of the two approaches is more efficient for asset pricing. In order to achieve these goals, a window of data from 1994 to 2014 was analysed. The main results were: (i) the adjusted model of Hou et al. (2012) shows very positive performance for projecting earnings in Brazil, with power to predict 69.8% of future earnings; (ii) the implied risk premium for the Brazilian market from 1994 to 2014 is about 7.5% per year, which corroborates the national and international literature; (iii) we identified the need to control for inflation effects, especially when implementing the GLS and CT models, at the risk of underestimating the ICC if due precautions are not taken; (iv) the only model among those tested that was identified as unfit for application to the Brazilian market was FHERM, since its theoretical simplifications lead to highly volatile results that are poorly capable of predicting future returns; and (v) when comparing the ICC to the classical models, it was concluded that the ICC methodologies are efficient as a tool to infer future asset performance, while the traditional CAPM presents poor and inconclusive results for such purposes. Lastly, we stress the potential contribution of the ICC models to the study of behavioral finance and liquidity premiums.
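A minimal sketch of the implied-cost-of-capital idea the thesis tests, using a generic residual-income formulation rather than any one of the five named models, with made-up inputs: solve for the discount rate that equates the present value of projected residual income to the current price.

```python
# Generic residual-income ICC sketch (not the GLS/CT/OJ/EASTON/FHERM formulas
# themselves): solve for the discount rate r that equates price to book value
# plus the present value of projected residual income. Inputs are made up.
from scipy.optimize import brentq

def icc_residual_income(price, book0, eps_forecast, payout=0.4, g_terminal=0.03):
    def pricing_error(r):
        book, pv, ri = book0, 0.0, 0.0
        for t, eps in enumerate(eps_forecast, start=1):
            ri = eps - r * book                    # residual income in year t
            pv += ri / (1 + r) ** t
            book += eps * (1 - payout)             # clean-surplus book value update
        tv = ri * (1 + g_terminal) / ((r - g_terminal) * (1 + r) ** len(eps_forecast))
        return book0 + pv + tv - price
    return brentq(pricing_error, 0.04, 0.60)       # search for r between 4% and 60%

print(icc_residual_income(price=20.0, book0=10.0, eps_forecast=[1.5, 1.7, 1.9]))
```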

Arbitrage pricing theory in international markets / Teoria de apreçamento por arbitragem aplicada a mercados internacionais

Bernat, Liana Oliveira 05 September 2011 (has links)
This dissertation studies the impact of multiple pre-specified sources of risk on the returns of three non-overlapping groups of countries, through an Arbitrage Pricing Theory (APT) model. The groups are composed of emerging and developed markets. Emerging markets have become important players in the world economy, especially as capital receptors, but they were not included in the majority of previous related works. Two strategies are used to choose two sets of risk factors. The first is to use macroeconomic variables, as prescribed by most of the literature, such as the world excess return, exchange rates, variation in the spread between the Eurodollar deposit rate and the U.S. Treasury bill rate (TED spread) and changes in the oil price. The second strategy is to extract factors by using a principal component analysis, designated as statistical factors. The first important result is a great resemblance between the first statistical factor and the world excess return. We estimate the APT model using two statistical methodologies: Iterated Nonlinear Seemingly Unrelated Regression (ITNLSUR) by McElroy and Burmeister (1988) and the Generalized Method of Moments (GMM) by Hansen (1982). The results from both methods are very similar. With macroeconomic variables, only the world excess return is priced in the three groups, with a premium varying from 4.4% to 6.3% per year and, in the model with statistical variables, only the first statistical factor is priced in all groups, with a premium varying from 6.2% to 8.5% per year. / Essa dissertação estuda o impacto de múltiplas fontes de riscos pré-especificados nos retornos de três grupos de países não sobrepostos, através de um modelo de Teoria de Precificação por Arbitragem (APT). Os grupos são compostos por mercados emergentes e desenvolvidos. Mercados emergentes tornaram-se importantes na economia mundial, especialmente como receptores de capital, mas não foram inclusos na maioria dos trabalhos correlatos anteriores. Duas estratégias foram adotadas para a escolha de dois conjuntos de fatores de risco. A primeira foi utilizar variáveis macroeconômicas, descritas na maior parte da literatura, como o excesso de retorno da carteira mundial, taxas de câmbio, variação da diferença entre a taxa de depósito em Eurodólar e a U.S. Treasury Bill (TED Spread) e mudanças no preço do petróleo. A segunda estratégia foi extrair fatores de risco através de uma análise de componentes principais, denominados fatores estatísticos. O primeiro resultado importante é a grande semelhança entre o primeiro fator estatístico e o retorno da carteira mundial. Nós estimamos o modelo APT usando duas metodologias estatísticas: Regressões Aparentemente não Correlacionadas Iteradas (ITNLSUR) de McElroy e Burmeister (1988) e o Método dos Momentos Generalizados (GMM) de Hansen (1982). Os resultados de ambas as metodologias são muito similares. Utilizando variáveis macroeconômicas, apenas o excesso de retorno da carteira mundial é precificado nos três grupos com prêmios variando de 4,4% a 6,3% ao ano e, no modelo com variáveis estatísticas, apenas o primeiro fator estatístico é precificado em todos os grupos com prêmios que variam entre 6,2% e 8,5% ao ano.
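A small illustration of the "statistical factors" strategy, assuming simulated country index returns driven by a single world factor (not the dissertation's data): the first principal component of the panel closely tracks the world excess return, echoing the resemblance reported above.

```python
# Illustration of extracting statistical factors by principal components from a
# simulated panel of country excess returns (assumptions mine, not the sample
# used in the dissertation).
import numpy as np

def principal_components(returns, k=1):
    demeaned = returns - returns.mean(axis=0)
    cov = np.cov(demeaned, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    return demeaned @ eigvec[:, ::-1][:, :k]        # scores on the k largest components

rng = np.random.default_rng(1)
world = rng.normal(0.005, 0.04, 240)                # simulated world excess return
panel = np.outer(world, rng.uniform(0.7, 1.3, 20)) + rng.normal(0, 0.03, (240, 20))
f1 = principal_components(panel, k=1)[:, 0]
print(abs(np.corrcoef(f1, world)[0, 1]))            # typically close to 1
```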

Extreme downside risk : implications for asset pricing and portfolio management

Nguyen, Linh Hoang January 2015 (has links)
This thesis investigates different aspects of the impact of extreme downside risk on stock returns. We first investigate the impact at the market level, where the return of the stock market index is expected to be positively correlated with its tail risk. More specifically, we incorporate a Markov switching mechanism into the framework of Bali et al. (2009) to analyse the relationship between risk and returns under different market regimes. Interestingly, although highly significant in calm periods, the tail risk-return relationship cannot be captured during turbulent times. This is puzzling since this is the time when distress risk is most prominent. We show that this pattern persists under different modifications of the framework, including expanding the set of state variables and accounting for the non-iid feature of the return process. We suggest that this result is due to the leverage and volatility feedback effects. To better filter out these effects, we propose a simple but effective modification to the risk measures which reinstates the positive extreme risk-return relationship under any state of market volatility. The success of our method provides insights into how extreme downside risk is factored into expected returns. In the second investigation, this thesis explores the impact of extreme downside risk on returns in a security-level analysis. We demonstrate that a stock with higher tail risk exposure tends to experience higher average returns. Motivated by the limitations of systematic extreme downside risk measures in the literature, we propose two groups of new ‘co-tail-risk’ measures constructed from two different approaches. The first group is the natural development of canonical downside beta and comoment measures, while the second group is based on the sensitivity of stock returns to innovations in market systematic crash risk. We utilise our new measures to investigate the asset pricing implications of extreme downside risk and show that they can capture a significant positive relationship between this risk and expected stock returns. Moreover, our second group of ‘co-tail-risk’ measures shows highly consistent performance even in extreme settings such as a low tail threshold and monthly sample estimation. The ability of these measures to generate a sufficient number of observations from limited return data addresses one of the most challenging problems in the tail risk literature. In the last investigation, this thesis examines the influence of extreme downside risk on portfolio optimisation. It is motivated by the evidence in Chapter 4 regarding the size pattern of the extreme downside risk impact on stock returns, where the impact is larger for small stocks. Accordingly, portfolio optimisation practice that focuses on tail risk should be more effective when applied to small stocks. In comparing the performance of mean-Expected Tail Loss against that of mean-variance across size groups of Fama and French's (1993) sorted portfolios, we confirm this conjecture. Moreover, we further investigate the performance of different switching approaches between mean-variance and mean-Expected Tail Loss to exploit the suitability of these optimisation methods for specific market conditions. However, our results reject the use of any switching method. We demonstrate that the reason switching could not enhance performance is the invalidity of the argument that a given optimisation method suits a specific market regime.
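A compact sketch of mean-Expected Tail Loss optimisation in the spirit described above, using an empirical tail-loss objective on simulated fat-tailed returns (the data and parameters are my own assumptions, not the thesis's implementation).

```python
# Sketch: minimise the empirical Expected Tail Loss (average loss beyond the
# alpha-quantile of portfolio returns) subject to full investment, long-only
# weights, and a target mean return. Data and parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

def expected_tail_loss(weights, returns, alpha=0.05):
    port = returns @ weights
    cutoff = np.quantile(port, alpha)              # empirical Value-at-Risk cutoff
    return -port[port <= cutoff].mean()            # average loss beyond the cutoff

def min_etl_portfolio(returns, target_mean, alpha=0.05):
    n = returns.shape[1]
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: returns.mean(axis=0) @ w - target_mean}]
    res = minimize(expected_tail_loss, np.full(n, 1.0 / n), args=(returns, alpha),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

rng = np.random.default_rng(2)
rets = rng.standard_t(df=4, size=(1000, 5)) * 0.01 + 0.0004    # fat-tailed daily returns
print(np.round(min_etl_portfolio(rets, target_mean=0.0004), 3))
```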

Essays in insider trading, informational efficiency, and asset pricing

Clark, Stephen Rhett 01 July 2014 (has links)
In this dissertation, I consider a range of topics related to the role played by information in modern asset pricing theory. The primary research focus is twofold. First, I synthesize existing research in insider trading and seek to stimulate an expansion of the literature at the intersection of work in the insider trading and financial economics areas. Second, I present the case for using Peter Bossaerts's (2004) Efficiently Learning Markets (ELM) methodology to empirically test asset pricing models. The first chapter traces the development of domestic and international insider trading regulations and explores the legal issues surrounding the proprietary nature of information in financial markets. I argue that, practically, the reinvigoration of the insider trading debate is unfortunate because, in spite of seemingly unending efforts to settle the debate, we are no closer to answering whether insider trading is even harmful, much less worthy of legal action. In doing so, I challenge the conventional wisdom of framing insider trading research as a quest for resolution to the debate. By adopting an agnostic perspective on the desirability of insider trading regulations, I am able to clearly identify nine issues in this area that are fruitful topics for future research. The second chapter studies prices and returns for movie-specific Arrow-Debreu securities traded on the Iowa Electronic Markets. The payoffs to these securities are based on the movies' initial 4-week U.S. box office receipts. We employ a unique data set for which we have traders' pre-opening forecasts to provide the first direct test of Bossaerts's (2004) ELM hypothesis. We supplement the forecasts with estimated convergence rates to examine whether the prior forecast errors affect market price convergence. Our results support the ELM hypothesis. While significant deviations between initial forecasts and actual box-office outcomes exist, prices nonetheless evolve in accordance with efficient updating. Further, convergence rates appear independent of both the average initial forecast error and the level of disagreement in forecasts. Lastly, the third chapter revisits the theoretical justifications for Bossaerts's (2004) ELM, with the goal of providing clear, intuitive proofs of the key results underlying the methodology. The biggest apparent hurdle to garnering more widespread adoption of the ELM methodology is the confusion that surrounds the use of weighted modified returns when testing for rational asset pricing restrictions. I attack this hurdle by offering a transparent justification for this approach. I then establish how and why Bossaerts's results extend from the case of digital options to the more practically relevant class of all limited-liability securities, including equities. I conclude by showing that the ELM restrictions naturally lend themselves to estimation and testing of asset pricing models, using weighted modified returns, in a Generalized Method of Moments (GMM) framework.
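A bare-bones sketch of the GMM machinery referred to in the third chapter, applied to a generic consumption-based moment condition rather than Bossaerts's ELM restrictions or weighted modified returns (the example, data, and numbers are assumptions made for illustration).

```python
# Bare-bones first-step GMM sketch for the generic moment condition
# E[delta * (C_{t+1}/C_t)^(-gamma) * R_{t+1} - 1] = 0, estimated on simulated data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
T, N, delta, true_gamma = 2000, 3, 0.99, 4.0
growth = np.exp(rng.normal(0.02, 0.02, T))                       # consumption growth
R = 1.0 / (delta * growth[:, None] ** -true_gamma) + rng.normal(0, 0.02, (T, N))

def gmm_objective(gamma, W=np.eye(N)):
    m = delta * growth[:, None] ** -gamma                        # SDF realisations
    gbar = (m * R - 1.0).mean(axis=0)                            # sample moment vector
    return gbar @ W @ gbar                                       # quadratic form, identity weighting

est = minimize_scalar(gmm_objective, bounds=(0.1, 20.0), method="bounded")
print(round(est.x, 2))                                           # should land near 4
```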

Controversia del CAPM con relación al riesgo y rentabilidad de activos financieros frente a otros modelos alternativos y derivados / Controversy CAPM in relation to the risk and return of financial assets compared to other alternative models and derivatives

Laurente García, María Marisol, Saldaña Villalobos, Leyla del Milagro 06 July 2019 (has links)
El presente trabajo tiene como objetivo analizar el uso y aplicación del modelo de valoración de activos de capital, CAPM, como herramienta de planificación y evaluación financiera, comparándolo con otros modelos alternativos. El CAPM propone una relación entre el riesgo y rendimiento de un activo. El riesgo está representado por el coeficiente beta, que mide la sensibilidad del instrumento financiero en relación con el riesgo sistemático, ya sea en un portafolio de activos o en la valoración de una empresa. Debido a que existen críticas sobre la validez del CAPM, en este estudio se busca conocer la efectividad que tiene el uso y la aplicación del modelo. Para ello, se han buscado evidencias empíricas, en diferentes países, y sectores económicos en las que se compara el CAPM con otros modelos alternativos, tales como el APT o el de Tres Factores Fama y French que, según la investigación realizada, serían los más utilizados. Los resultados de esta investigación muestran que el CAPM no ofrece necesariamente resultados positivos significativos en los estudios revisados. Sin embargo, ello no quiere decir que el CAPM no sea un modelo suficiente para predecir la relación riesgo – rentabilidad en los casos en los que se aplica. Se concluye por ello que, a pesar de que existen modelos alternativos tratando de superar las limitaciones del CAPM, hoy en día este modelo sigue siendo el más utilizado fundamentalmente por su sencillez y por su capacidad de explicar y predecir, de manera suficiente, en la mayoría de las aplicaciones generales. / The objective of this paper is to analyze the use and application of the capital asset pricing model, CAPM, as a planning and financial evaluation tool, comparing it with other alternative models. The CAPM proposes a relationship between the risk and return of an asset. The risk is represented by a coefficient called beta, which measures the sensitivity of the financial asset in relation to systematic risk, either within a portfolio of assets or in the valuation of a company. Given that there are controversies about the validity of the CAPM, the goal of this study is to understand the effectiveness of the use and application of the model. To that end, empirical evidence from different countries and economic sectors is presented in which the CAPM is compared with other alternative models, such as the APT or the Fama and French three-factor model, which, according to this investigation, are the most widely used. The results of this investigation show that the CAPM does not necessarily offer significant positive results in the studies reviewed. However, this does not mean that the CAPM is not a sufficient model for predicting the risk-return relationship in the cases where it is applied. It is therefore concluded that, although alternative models try to overcome the limitations of the CAPM, this model remains the most widely used today, fundamentally because of its simplicity and its ability to explain and predict, sufficiently well, in most general applications. / Trabajo de Suficiencia Profesional
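A minimal illustration of the CAPM relation discussed above, with simulated returns and assumed risk-free rate and market premium (the numbers are not from the paper): beta is estimated by regressing asset excess returns on market excess returns, and the expected return follows from E[r] = rf + beta * (E[rm] - rf).

```python
# Minimal CAPM illustration with simulated monthly excess returns and assumed
# annual risk-free rate and market premium.
import numpy as np

rng = np.random.default_rng(4)
rm = rng.normal(0.008, 0.045, 120)                   # market excess returns
ri = 0.001 + 1.2 * rm + rng.normal(0, 0.03, 120)     # asset with true beta of 1.2
beta = np.polyfit(rm, ri, 1)[0]                      # OLS slope = beta estimate
rf, market_premium = 0.03, 0.06                      # assumed annual figures
print(round(beta, 2), round(rf + beta * market_premium, 3))   # E[r] = rf + beta * premium
```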

Essays in cross-sectional asset pricing

Cederburg, Scott Hogeland 01 May 2011 (has links)
In this dissertation, I study the performance of asset-pricing models in explaining the cross section of expected stock returns. The finance literature has uncovered several potential failings of the Capital Asset Pricing Model (CAPM). I investigate the ability of additional risk factors, which are not considered by the CAPM, to explain these problems. In particular, I examine intertemporal risk and long-run risk in the cross section of returns. In addition, I develop a firm-level test to refine and reassess the cross-sectional evidence against the CAPM. In the first chapter, I test the cross-sectional implications of the Intertemporal CAPM (ICAPM) of Merton (1973) and Campbell (1993, 1996) using a new firm-level approach. I find that the ICAPM performs well in explaining returns. Consistent with theoretical predictions, investors require a large positive premium for taking on market risk and zero-beta assets earn the risk-free rate. Moreover, investors accept lower returns on assets that hedge against adverse shifts in the investment opportunity set. The ICAPM explains more cross-sectional variation in average returns than either the CAPM or Fama-French (1993) model. I also investigate whether the SMB and HML factors of the Fama-French model proxy for intertemporal risk and find little evidence in favor of this conjecture. In the second chapter, we propose an intertemporal asset-pricing model that simultaneously resolves the puzzling negative relations between expected stock return and analysts' forecast dispersion, idiosyncratic volatility, and credit risk. All three effects emerge in a long-run risk economy accommodating a formal cross section of firms characterized by mean-reverting expected dividend growth. Higher cash flow duration firms exhibit higher exposure to economic growth shocks while they are less sensitive to firm-specific news. Such firms command higher risk premiums but exhibit lower measures of idiosyncratic risk. Empirical evidence broadly supports our model's predictions, as higher dispersion, idiosyncratic volatility, and credit risk firms display lower exposure to long-run risk along with higher firm-specific risk. Lastly, in the third chapter, we examine asset-pricing anomalies at the firm level. Portfolio-level tests linking CAPM alphas to a large number of firm characteristics suggest that the CAPM fails across multiple dimensions. There are, however, concerns that underlying firm-level associations may be distorted at the portfolio level. In this paper we use a hierarchical Bayes approach to model conditional firm-level alphas as a function of firm characteristics. Our empirical results indicate that much of the portfolio-based evidence against the CAPM is overstated. Anomalies are primarily confined to small stocks, few characteristics are robustly associated with CAPM alphas out of sample, and most firm characteristics do not contain unique information about abnormal returns.

A closer examination of the book-tax difference pricing anomaly

Hepfer, Bradford Fitzgerald 01 May 2016 (has links)
In this study, I examine whether the pricing of book-tax differences reflects mispricing or a priced risk factor. I provide new evidence that temporary book-tax differences are mispriced by developing portfolios that trade on the information in book-tax differences for future accruals and cash flows. I develop and test predictions on whether book-tax difference mispricing is the value-glamour anomaly in disguise. Both signals of mispricing relate to firm growth and, thus, both may capture mispricing due to over-extrapolation of realized growth to future growth. I find that the book-tax difference pricing anomaly is subsumed by the value-glamour anomaly. Specifically, trading on the information in book-tax differences does not yield incremental returns relative to a value-glamour trading strategy. Hence, mispricing associated with book-tax differences relates more generally to the mispricing of expected growth as extrapolated from past growth.

波動度預測模型之探討 / The research on forecast models of volatility

吳佳貞, Wu, Chia-Chen Unknown Date (has links)
期望波動度在投資組合的選擇、避險策略、資產管理，以及金融資產的評價上是關鍵性因素，因此，在波動度變化甚巨的金融市場中，找出具有良好預測波動度能力的模型，是絕對必要的。過去從事資產價格行為的相關研究都假設資產的價格過程是隨機的，且呈對數常態分配、變異數固定。然而實證結果一再顯示：變異數是隨時間而變動的（如 Mandelbrot(1963)、Fama(1965)）。為預測波動度（或變異數），Engle(1982)首先提出了 ARCH 模型，允許預期條件變異數作為過去殘差的函數，因此變異數能隨時間而改變。此後 Bollerslev(1986)提出 GARCH 模型，修正 ARCH 模型線性遞減遞延結構，將過去的殘差及變異數同時納入條件變異數方程式中。Nelson(1991)則提出 EGARCH 模型以改進 GARCH 模型的三大缺點，此模型對具有高度波動性的金融資產提供更成功的另一估計模式。除上列之 ARCH-type 模型外，Hull and White(1987)提出連續型隨機波動模型（continuous time stochastic volatility model），用以評價股價選擇權，此模型不僅將過去的變異數納入條件變異數的方程式中，同時該條件變異數也會因隨機噪音（random noise）而變動。近年來，上述模型均被廣泛運用在模擬金融資產的波動性，均是相當實用的模型。本文以隨機漫步（random walk）、GARCH(1,1)、EGARCH(1,1)及隨機波動模型（stochastic volatility）進行不同期間下，股價指數與外匯波動度之預測，並以實證結果判斷上述四種模型在預測外匯及股價指數波動度的能力表現。實證結果顯示：隨機波動模型不論在股價指數或外匯、長期或短期的波動度預測上，都是最佳的波動度預測模型，因此建議各大金融機構可採隨機波動模型預測金融資產未來的波動度。 / Volatility forecasting is an extremely important factor in portfolio choice, hedging strategies, asset management, asset pricing and option pricing. Identifying a good forecast model of volatility is absolutely necessary, especially for the highly volatile Taiwan stock market. Due to increasing attention to the impact of market risk on asset returns, academic researchers and practitioners have developed ways to control risk and methodologies to forecast return volatility. Past research on asset price behavior usually assumed that asset prices follow a random walk, and that their probability distribution is log-normal with a constant variance (or constant volatility). This assumption is in fact in violation of empirical evidence showing that volatility tends to vary over time (e.g., Mandelbrot (1963) and Fama (1965)). To forecast volatility (or variance), Engle (1982) was the first scholar to propose a forecast model, now well known as ARCH, whose conditional variance is a function of past squared return residuals. Accordingly, the forecast variance (or volatility) varies over time. Bollerslev (1986) proposed a generalized model, called GARCH, which allows the current conditional variance to depend not only on past squared residuals but also on past conditional variances. Nelson (1991) later proposed a new model, called EGARCH, which attempts to remove the weaknesses of the GARCH model. The EGARCH model has been shown to be successful in forecasting volatility and in describing stock price behavior. In addition, Hull and White (1987) employed a continuous-time stochastic volatility model to develop an option pricing model. Their stochastic volatility model not only incorporates past variance but also allows the conditional variance to be driven by random noise. The above-mentioned models have been widely implemented in practice to simulate and forecast asset return volatility. This thesis investigates whether the random walk, GARCH(1,1), EGARCH(1,1) and stochastic volatility models differ in their ability to predict the volatility of stock index and currency returns over short-term and long-term horizons. The results strongly support that the best volatility predictions are generated by the stochastic volatility model. Therefore, it is recommended that financial institutions adopt the stochastic volatility model to predict asset return volatility.
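A compact sketch of the GARCH(1,1) forecasting step discussed above, fitted by maximum likelihood to simulated returns (illustrative only, not the thesis's code or data): the one-step-ahead variance forecast is sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t.

```python
# GARCH(1,1) sketch on simulated returns: fit omega, alpha, beta by maximum
# likelihood, then form the one-step-ahead conditional variance forecast.
import numpy as np
from scipy.optimize import minimize

def garch_filter(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def negloglik(params, r):
    sigma2 = garch_filter(params, r)
    return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

rng = np.random.default_rng(5)
T, w0, a0, b0 = 2000, 1e-6, 0.08, 0.90
r, s2 = np.empty(T), w0 / (1 - a0 - b0)
for t in range(T):                                   # simulate a GARCH(1,1) path
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = w0 + a0 * r[t] ** 2 + b0 * s2

res = minimize(negloglik, x0=[1e-6, 0.05, 0.90], args=(r,),
               bounds=[(1e-8, None), (1e-4, 0.999), (1e-4, 0.999)])
sigma2 = garch_filter(res.x, r)
forecast = res.x[0] + res.x[1] * r[-1] ** 2 + res.x[2] * sigma2[-1]
print(np.round(res.x, 4), float(np.sqrt(forecast)))  # parameters and next-period volatility
```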
