1 |
Functional approximation methods for solving stochastic control problems in finance. Yang, Chunyu, 1979-. 02 December 2010 (has links)
I develop a numerical method that combines functional approximations and dynamic programming to solve high-dimensional discrete-time stochastic control problems under general constraints. The method relies on three building blocks: first, a quasi-random grid and the radial basis function method are used to discretize and interpolate the high-dimensional state space; second, to incorporate constraints, the method of Lagrange multipliers is applied to obtain the first-order optimality conditions; third, the conditional expectation of the value function is approximated by a second-order polynomial basis, estimated using ordinary least squares regressions. To reduce the approximation error, I introduce the test region iterative contraction (TRIC) method to shrink the approximation region around the optimal solution. I apply the method to two finance applications: a) dynamic portfolio choice with constraints, a continuous control problem; b) dynamic portfolio choice with capital gain taxation, a high-dimensional singular control problem.
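As a rough illustration of the first building block, the sketch below interpolates a stand-in value function on a two-dimensional Halton (quasi-random) grid with Gaussian radial basis functions. The Halton bases, kernel, shape parameter and test function are illustrative choices, not taken from the thesis.

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# quasi-random grid on the unit square (bases 2 and 3)
n = 60
grid = np.column_stack([halton(n, 2), halton(n, 3)])

def gaussian_rbf(r, shape=8.0):
    return np.exp(-(shape * r) ** 2)

# solve for RBF weights so the interpolant matches a stand-in value function
values = np.sin(grid[:, 0]) + grid[:, 1] ** 2
dist = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=2)
weights = np.linalg.solve(gaussian_rbf(dist), values)

def interpolate(points):
    d = np.linalg.norm(points[:, None, :] - grid[None, :, :], axis=2)
    return gaussian_rbf(d) @ weights

# the interpolant reproduces the function at the grid nodes
max_node_err = np.max(np.abs(interpolate(grid) - values))
```

In the thesis the interpolated quantity would be the value function at each backward-induction step; here a fixed test function stands in for it.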
|
2 |
Deep Learning for Dynamic Portfolio Optimization / Djupinlärning för dynamisk portföljoptimering. Molnö, Victor, January 2021 (has links)
This thesis considers a deep learning approach to a dynamic portfolio optimization problem. A proposed deep learning algorithm is tested on a simplified version of the problem with promising results, which suggest continued, larger-scale testing of the algorithm on the original problem. First the dynamics and objective function of the problem are presented, and the existence of a no-trade region is explained via the Hamilton-Jacobi-Bellman equation. The no-trade region dictates the optimal trading strategy. Solving the Hamilton-Jacobi-Bellman equation to find the no-trade region is not computationally feasible in high dimension with a classic finite-difference approach. Therefore a new algorithm that iteratively updates and improves an estimate of the no-trade region is derived. This is a deep learning algorithm that uses neural-network function approximation. The algorithm is tested on the one-dimensional version of the problem, for which the true solution is known. Testing in one dimension alone does not show whether the algorithm scales to higher dimensions better than a finite-difference approach, but the learnt solution comes fairly close to the true solution, with a relative score of 0.72, which is why continued research on this algorithm for the multidimensional version of the problem is suggested.
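The role of the no-trade region as a trading rule can be illustrated with a one-line policy: do nothing while the portfolio weight stays inside the band, and trade back to the nearest boundary when it drifts outside. The band limits below are purely illustrative; the thesis's contribution is estimating this boundary, which is not attempted here.

```python
def rebalance(weight, lower, upper):
    """No-trade-region policy: inside [lower, upper] do nothing;
    outside, trade back to the nearest boundary."""
    return min(max(weight, lower), upper)

# a drifting risky-asset weight gets pushed back to the band edges
lower, upper = 0.55, 0.70          # hypothetical band around a target weight
path = [0.60, 0.66, 0.73, 0.69, 0.52]
adjusted = [rebalance(w, lower, upper) for w in path]
```

The third and fifth observations breach the band and are clipped to 0.70 and 0.55 respectively; the others trigger no trade.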
|
3 |
熵風險值約當測度的動態資產組合理論及實證研究 / Dynamic Portfolio Theory and Empirical Research Based on EVaR Equivalent Measure. 張佳誠, Unknown Date (has links)
In portfolio optimization, investors seek stable returns while avoiding unnecessary risk, so risk measurement is central to portfolio theory. A. Ahmadi-Javid (2011) proved that the Entropic Value-at-Risk (EVaR), which is based on relative entropy, is an upper bound on the widely used Conditional Value-at-Risk (CVaR), and that EVaR is more efficient to use and has markedly superior properties. This thesis uses an EVaR equivalent measure to modify the traditional mean-variance model and, taking the Taiwan stock market as an example, applies a hybrid genetic simulated-annealing algorithm to verify the model's properties and performance in a dynamic framework. The results show that the modified model tracks the efficient frontier more closely than the traditional one.
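The EVaR-dominates-CVaR relation cited above (Ahmadi-Javid, 2011) can be checked numerically on a loss sample: CVaR is the mean of the worst alpha-fraction of losses, and EVaR is the infimum over t > 0 of (1/t) log(MGF(t)/alpha), here approximated on a grid. The sample, tail level and grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.standard_normal(5000)     # stand-in loss sample
alpha = 0.05                           # tail probability

# CVaR: average of the worst alpha-fraction of losses
k = int(np.ceil(alpha * losses.size))
cvar = np.sort(losses)[-k:].mean()

# EVaR: inf over t > 0 of (1/t) * log(MGF(t) / alpha), with the
# moment generating function estimated from the sample
ts = np.logspace(-2, 1, 400)
mgf = np.array([np.exp(t * losses).mean() for t in ts])
evar = np.min(np.log(mgf / alpha) / ts)
```

Since the grid minimum only upper-bounds the true infimum, the inequality EVaR >= CVaR is preserved in the sketch; both values land close to their theoretical Gaussian counterparts.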
|
4 |
Essays in asset pricing and portfolio choice. Illeditsch, Philipp Karl, 15 May 2009
In the first essay, I decompose inflation risk into (i) a part that is correlated with real returns on the market portfolio and with factors that determine investors' preferences and investment opportunities and (ii) a residual part. I show that only the first part earns a risk premium. All nominal Treasury bonds, including the nominal money-market account, are equally exposed to the residual part; inflation-protected Treasury bonds provide a means to hedge it. Every investor should put 100% of his wealth in the market portfolio and inflation-protected Treasury bonds and hold a zero-investment portfolio of nominal Treasury bonds and the nominal money-market account.
In the second essay, I solve the dynamic asset allocation problem of finite-lived, constant-relative-risk-averse investors who face inflation risk and can invest in cash, nominal bonds, equity, and inflation-protected bonds when the investment opportunity set is determined by the expected inflation rate. I estimate the model with nominal bond, inflation, and stock market data and show that if expected inflation increases, then investors should substitute inflation-protected bonds for stocks and they should borrow cash to buy long-term nominal bonds.
In the last essay, I discuss how heterogeneity in preferences among investors with external non-addictive habit-forming preferences affects the equilibrium nominal term structure of interest rates in a pure continuous-time exchange economy with complete securities markets. Aggregate real consumption growth and inflation are exogenously specified and contain stochastic components that affect their means and volatilities. There are two classes of investors who have external habit-forming preferences and different local curvatures of their utility functions. The effects of time-varying risk aversion and different inflation regimes on the nominal short rate and the nominal market price of risk are explored, and simple formulas for nominal bonds, real bonds, and inflation risk premia that can be numerically evaluated using Monte Carlo simulation techniques are provided.
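Mechanically, the first essay's split of inflation risk into a priced component and an orthogonal residual is a projection; a minimal sketch with simulated data (the factor structure and coefficients are invented for illustration, not estimated from the essay's data):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
market = 0.04 * rng.standard_normal(T)                 # stand-in real market returns
inflation = 0.002 + 0.3 * market + 0.01 * rng.standard_normal(T)

# project inflation on a constant and the market return:
# the fitted part is the component correlated with market returns,
# the residual is the orthogonal (unpriced) component
X = np.column_stack([np.ones(T), market])
beta, *_ = np.linalg.lstsq(X, inflation, rcond=None)
priced = X @ beta
residual = inflation - priced

orth = abs(residual @ market)   # residual is orthogonal to the market factor
```

By construction the least-squares residual is orthogonal to every regressor, which is the sense in which the residual part carries no market-correlated risk.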
|
5 |
Gestão dinâmica do risco de mercado com modelo Cópula-GARCH / Dynamic market risk management with Copula-GARCH model. Righi, Marcelo Brutti, 28 January 2013
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The present work aims to analyze the efficiency of the copula-GARCH approach to market risk management. To that end we use daily price data from the North American, German, Australian, Brazilian, Hong Kong and South African markets over the period July 2002 to June 2012, totaling ten years of observations. The results indicate volatility clusters along the series during the sub-prime and Eurozone debt crises. Developed markets present lower general oscillation levels than emerging ones. There was a gradual increase in the pairwise dynamic correlation levels of the analyzed markets, with general levels between 0.3 and 0.6. The computed dynamic VaRs followed the evolution of returns and did not exceed the expected number of violations, unlike the static VaR estimates. Developed markets show rising optimal hedge ratios starting with the sub-prime crisis, while for emerging markets many ratios remained at the same levels. Static ratios did not follow the evolution of the markets. The Student t copula predominates in the risk-return relationships, although it is not possible to infer the existence of an explicit association. Structural change tests indicated breaks in volatility at the beginning of the sub-prime crisis, while for correlations there is no homogeneity of breaks or dates. There are patterns in the participations that are not tracked by the static weights of the assets composing the portfolio. Over the whole sample period the volatility of the dynamic portfolio was lower than that of the static one, especially in turbulent periods, with reductions of up to 50%. Tests reveal that the volatility obtained with the Copula-GARCH-based strategy is lower than that of the static and dynamic DCC-GARCH approaches.
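The dynamic-VaR violation counting described in the abstract can be sketched with the RiskMetrics EWMA filter (decay 0.94) on simulated returns; the data and the 95% level are illustrative, and the thesis's copula-GARCH machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_normal(2500)     # stand-in daily returns

lam = 0.94                                     # RiskMetrics decay factor
sigma2 = np.empty_like(returns)
sigma2[0] = returns[:50].var()                 # warm-up estimate
for t in range(1, returns.size):
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2

var95 = 1.645 * np.sqrt(sigma2)                # one-sided 95% dynamic VaR
violations = np.mean(returns < -var95)         # fraction of days breaching VaR
```

A well-calibrated dynamic 95% VaR should be breached on roughly 5% of days; a back-test of this kind is what "not exceeding the expected number of violations" refers to.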
|
6 |
Allocation dynamique de portefeuille avec profil de gain asymétrique : risk management, incitations financières et benchmarking / Dynamic asset allocation with asymmetric payoffs: risk management, financial incentives, and benchmarking. Tergny, Guillaume, 31 May 2011 (has links)
It is common practice to judge third-party asset managers by their financial performance relative to a benchmark portfolio. For this reason, they often rely on internal risk-management models to control the downside risk of their portfolio relative to the benchmark. Moreover, an increasing number are adopting an incentive-based scheme, charging an over-performance commission relative to the benchmark. Including this variable component in their global remuneration allows them to increase their revenue in case of over-performance without any penalty in the event of under-performing the benchmark. However, such practices have recently been at the heart of several polemics: the recent global financial crisis uncovered shortcomings in internal risk control as well as excessive risk-taking and compensation levels at several financial players. Nevertheless, analyzing the impact of these practices remains a relatively new issue in continuous-time dynamic asset allocation theory. This thesis analyses, in this theoretical framework, the implications of these "benchmarking" practices for the asset manager's investment behavior. The first part examines the properties of the optimal dynamic strategy for an asset manager who is concerned with the difference in return between their portfolio and a fixed or stochastic benchmark (over- or under-performance). Several types of asset manager are considered, defined by different utility functions and different downside-risk constraints. In particular, the link between investment problems with aversion to under-performance and those with explicit risk-management constraints is shown. In the second part, the case of an asset manager who benefits from an incentive compensation scheme (variable asset management fees, over-performance bonuses, or an additional commission on assets under management) is investigated. We study how, depending on the financial incentive structure and the degree of loss aversion, the asset manager's strategy differs from that of the investor (or from that of an asset manager receiving no incentive remuneration). This study shows that the change in the asset manager's investment behavior can lead either to a reduction in the risk taken relative to the strategy without financial incentives or, conversely, to an increase in it. Finally, we show that downside-risk constraints, imposed on the asset manager or reflecting their aversion to under-performance, can be beneficial to the investor who delegates financial management.
|
7 |
Dynamic portfolio construction and portfolio risk measurement. Mazibas, Murat, January 2011 (has links)
The research presented in this thesis addresses different aspects of dynamic portfolio construction and portfolio risk measurement, bringing together research on dynamic portfolio optimization, replicating portfolio construction, dynamic portfolio risk measurement and volatility forecasting. The overall aim of this research is threefold: first, to examine the portfolio construction and risk measurement performance of a broad set of volatility forecasting and portfolio optimization models; second, to propose new models, or new formulations of available models, in an effort to improve their forecast accuracy and portfolio construction performance; third, to introduce a replication approach for hedge fund returns that has the potential to be used in numerous applications in investment management. In order to achieve these aims, Chapter 2 addresses risk measurement in dynamic portfolio construction. In this chapter, further evidence on the use of multivariate conditional volatility models in hedge fund risk measurement and portfolio allocation is provided, using monthly returns of hedge fund strategy indices for the period 1990 to 2009. Building on Giamouridis and Vrontos (2007), a broad set of multivariate GARCH models, as well as the simpler exponentially weighted moving average (EWMA) estimator of RiskMetrics (1996), are considered. It is found that, while multivariate GARCH models provide some improvements in portfolio performance over static models, they are generally dominated by the EWMA model. In particular, in addition to providing better risk-adjusted performance, the EWMA model leads to dynamic allocation strategies that have substantially lower turnover and could therefore be expected to involve lower transaction costs. Moreover, these results are shown to be robust across the low-volatility and high-volatility sub-periods.
Chapter 3 addresses optimization in dynamic portfolio construction. In this chapter, the advantages of introducing alternative optimization frameworks over the mean-variance framework in constructing hedge fund portfolios for a fund of funds are examined. Using monthly return data of hedge fund strategy indices for the period 1990 to 2011, the standard mean-variance approach is compared with approaches based on CVaR, CDaR and Omega, for both conservative and aggressive hedge fund investors. In order to estimate portfolio CVaR, CDaR and Omega, a semi-parametric approach is proposed, in which the marginal density of each hedge fund index is first modelled using extreme value theory and the joint density of hedge fund index returns is constructed using a copula-based approach. Hedge fund returns are then simulated from this joint density in order to compute CVaR, CDaR and Omega. The semi-parametric approach is compared with the standard non-parametric approach, in which the quantiles of the marginal density of portfolio returns are estimated empirically and used to compute CVaR, CDaR and Omega. Two main findings are reported. The first is that CVaR-, CDaR- and Omega-based optimization offers a significant improvement in risk-adjusted portfolio performance over mean-variance optimization. The second is that, for all three risk measures, semi-parametric estimation of the optimal portfolio offers a very significant improvement over non-parametric estimation. The results are robust to the choice of target return and estimation period. Chapter 4 seeks improvements in portfolio risk measurement by addressing volatility forecasting. In this chapter, two new univariate Markov regime-switching models based on intraday range are introduced. A regime-switching conditional volatility model is combined with a robust measure of volatility based on intraday range, in a framework for volatility forecasting.
This chapter proposes a one-factor and a two-factor model that combine useful properties of range, regime-switching, nonlinear filtration, and GARCH frameworks. Incremental improvement in volatility forecasting performance is sought by employing regime switching in a conditional volatility setting with enhanced information content on true volatility. Weekly S&P 500 index data for 1982-2010 are used. The models are evaluated using a number of volatility proxies that approximate true integrated volatility. The forecast performance of the proposed models is compared to renowned return-based and range-based models, namely the EWMA of RiskMetrics, the hybrid EWMA of Harris and Yilmaz (2009), the GARCH of Bollerslev (1986), the CARR of Chou (2005), the FIGARCH of Baillie et al. (1996) and the MRSGARCH of Klaassen (2002). It is found that the proposed models produce more accurate out-of-sample forecasts, contain more information about true volatility and exhibit similar or better performance when used for value-at-risk comparison. Chapter 5 seeks improvements in risk measurement for better dynamic portfolio construction. This chapter proposes multivariate versions of the one- and two-factor MRSACR models introduced in the fourth chapter. In these models, useful properties of regime-switching models, nonlinear filtration and the range-based estimator are combined in a multivariate setting, based on static and dynamic correlation estimates. In comparing the out-of-sample forecast performance of these models, eminent return- and range-based volatility models are employed as benchmark models. A hedge fund portfolio construction is conducted in order to investigate the out-of-sample portfolio performance of the proposed models, and the out-of-sample performance of each model is tested using a number of statistical tests.
In particular, a broad range of statistical tests and loss functions are utilized in evaluating the forecast performance of the variance-covariance matrix of each portfolio. It is found that, in terms of statistical test results, the proposed models offer significant improvements in forecasting the true volatility process and, in terms of the risk and return criteria employed, perform better than the benchmark models. The proposed models construct hedge fund portfolios with higher risk-adjusted returns and lower tail risks, and offer superior risk-return tradeoffs and better active management ratios. However, in most cases these improvements come at the expense of higher portfolio turnover and rebalancing expenses. Chapter 6 addresses dynamic portfolio construction for better hedge fund return replication and proposes a new approach. In this chapter, a method for hedge fund replication is proposed that uses a factor-based model supplemented with a series of risk and return constraints that implicitly target all the moments of the hedge fund return distribution. The approach is used to replicate the monthly returns of ten broad hedge fund strategy indices, using long-only positions in ten equity, bond, foreign exchange, and commodity indices, all of which can be traded using liquid, investible instruments such as futures, options and exchange-traded funds. In out-of-sample tests, the proposed approach provides an improvement over the pure factor-based model, offering a closer match to both the return performance and the risk characteristics of the hedge fund strategy indices.
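The three alternative risk measures used in Chapter 3 can each be computed directly from a return sample; the sketch below uses simulated returns and an illustrative 10% tail level, and does not reproduce the thesis's semi-parametric EVT/copula estimation.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = 0.0005 + 0.01 * rng.standard_normal(1000)   # stand-in monthly returns
alpha = 0.10

# CVaR: mean loss over the worst alpha-fraction of returns
k = int(np.ceil(alpha * returns.size))
cvar = -np.sort(returns)[:k].mean()

# CDaR: mean of the worst alpha-fraction of drawdown observations
wealth = np.cumprod(1 + returns)
drawdown = 1 - wealth / np.maximum.accumulate(wealth)
cdar = np.sort(drawdown)[-k:].mean()

# Omega ratio at threshold tau: expected gain above tau over expected loss below tau
tau = 0.0
omega = np.maximum(returns - tau, 0).mean() / np.maximum(tau - returns, 0).mean()
```

In the thesis these quantities enter the objective of the portfolio optimizer in place of variance; here they are only evaluated for a single return series.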
|
9 |
Optimal portfolio selection with transaction costs. Koné, N'Golo, 05 1900
Optimal portfolio selection has long been, and continues to be, a subject of major interest in finance. The main objective is to find the best way to allocate financial resources across the set of assets available on the financial market, in order to reduce portfolio fluctuation risk and achieve high returns. The portfolio choice literature advanced considerably from the 20th century onward, with numerous strategies motivated essentially by the pioneering work of Markowitz (1952), which offers a solid foundation for portfolio analysis on financial markets. This thesis, divided into three chapters, contributes to this vast literature by proposing various econometric tools to improve the portfolio selection process on the financial market, in order to help market participants.
The first chapter, a joint paper with Marine Carrasco, addresses a portfolio selection problem with transaction costs on the financial market. More precisely, we develop a simple test procedure based on GMM-type estimation to evaluate the effect of transaction costs in the economy, regardless of the form assumed for transaction costs in the model. In fact, most studies in the literature on the effect of transaction costs depend heavily on the form assumed for these frictions, as shown in numerous studies (Dumas and Luciano (1991), Lynch and Balduzzi (1999), Lynch and Balduzzi (2000), Liu and Loewenstein (2002), Liu (2004), Lesmond et al. (2004), Buss et al. (2011), Gârleanu and Pedersen (2013), Heaton and Lucas (1996)). To address this problem, we develop a statistical procedure, whose outcome is independent of the form of the transaction costs, to test the significance of these costs in the investment process on the financial market. This test procedure relies on the hypothesis that the model estimated by the generalized method of moments (GMM) is correctly specified. A common test of this hypothesis is the J-test proposed by Hansen (1982). However, when the parameter of interest lies on the boundary of the parameter space, the standard J-test suffers from over-rejection. We therefore propose a two-step procedure to test over-identification when the parameter of interest is on the boundary of the parameter space. Empirically, we apply our test procedures to the class of anomalies used by Novy-Marx and Velikov (2016). We show that transaction costs have a significant effect on investor behavior for most of these anomalies. Consequently, investors substantially improve out-of-sample performance by taking transaction costs into account in the investment process.
The second chapter addresses a large-scale dynamic portfolio selection problem. With an exponential utility function, the optimal solution turns out to be a function of the inverse of the covariance matrix of asset returns. However, as the number of assets grows, this inverse becomes unreliable, generating a solution that drifts away from the optimal portfolio and performs poorly. We propose two solutions to this problem. First, we penalize the norm of the optimal portfolio weights in the dynamic problem and show that the selected strategy is asymptotically efficient. However, this method only partially controls the estimation error in the optimal solution, because it ignores the estimation error in the mean asset return, which can also be large when the number of assets on the financial market increases considerably. We propose an alternative method that penalizes the norm of the difference between successive portfolio weights in the dynamic problem, to guarantee that the optimal portfolio composition does not fluctuate excessively between periods. We show that, under suitable regularity conditions, this new procedure gives better control of the estimation error in the optimal portfolio. This second method helps investors avoid high transaction costs on the financial market by selecting strategies that are stable over time. Simulations as well as an empirical analysis confirm that our procedures considerably improve the performance of the dynamic portfolio.
In the third chapter, we use different regularization (or stabilization) techniques borrowed from the inverse-problems literature to estimate the diversified portfolio as defined by Choueifaty (2011). Indeed, the diversified portfolio depends on the vector of asset volatilities and on the inverse of the covariance matrix of asset returns. In practice, these two quantities must be replaced by their sample counterparts. This generates an estimation error, amplified by the fact that the sample covariance matrix is close to singular for a large portfolio, degrading the performance of the selected portfolio. To address this problem, we study the three most commonly used regularization techniques: ridge, which adds a diagonal matrix to the covariance matrix; spectral cut-off, which excludes the eigenvectors associated with the smallest eigenvalues; and Landweber-Fridman, an iterative method, to stabilize the inverse of the covariance matrix in the estimation of the diversified portfolio. These regularization methods involve a regularization parameter that must be chosen, so we propose a data-driven method for selecting the stabilization parameter optimally. The resulting solutions are compared with several strategies, such as the most diversified portfolio, the target portfolio, the minimum-variance portfolio, and the naive 1/N strategy, using in-sample and out-of-sample Sharpe ratios.
Nonetheless, the literature on the optimal allocation of financial resources has advanced considerably since the 20th century, with several portfolio selection strategies proposed, motivated essentially by the pioneering work of Markowitz (1952), which provides a solid basis for portfolio analysis on the financial market. This thesis, divided into three chapters, contributes to this vast literature by proposing various econometric tools to improve the portfolio selection process on the financial market in order to help its stakeholders.
The first chapter, a joint paper with Marine Carrasco, addresses a portfolio selection problem with trading costs on the stock market. More precisely, we develop a simple GMM-based test procedure to assess the significance of the effect of trading costs in the economy, regardless of the form assumed for the transaction costs. In fact, most studies in the literature on the effect of trading costs depend heavily on the form of the frictions assumed in the model (Dumas and Luciano (1991), Lynch and Balduzzi (1999), Lynch and Balduzzi (2000), Liu and Loewenstein (2002), Liu (2004), Lesmond et al. (2004), Buss et al. (2011), Gârleanu and Pedersen (2013), Heaton and Lucas (1996)). To overcome this problem, we develop a simple test procedure that allows us to test the significance of the effect of trading costs on a given asset in the economy without any assumption about the form of these frictions. Our test procedure relies on the assumption that the model estimated by the generalized method of moments (GMM) is correctly specified. A common test used to evaluate this assumption is the standard J-test proposed by Hansen (1982). However, when the true parameter is close to the boundary of the parameter space, the standard J-test based on the chi-squared critical value suffers from overrejection. To overcome this problem, we propose a two-step procedure to test overidentifying restrictions when the parameter of interest approaches the boundary of the parameter space. In an empirical analysis, we apply our test procedures to the class of anomalies used in Novy-Marx and Velikov (2016). We show that transaction costs have a significant effect on investors' behavior for most anomalies. In that case, investors significantly improve out-of-sample performance by accounting for trading costs.
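To fix ideas, the standard J-test that this two-step procedure refines can be sketched in a few lines (a minimal illustration with simulated moments; the function name and data are hypothetical, and this is not the thesis's boundary-robust procedure):

```python
import numpy as np

def j_statistic(moments):
    """Hansen's J-statistic, J = T * gbar' S^{-1} gbar, where gbar is the
    mean of the (T, m) matrix of moment contributions and S its covariance."""
    T, m = moments.shape
    gbar = moments.mean(axis=0)
    S = np.cov(moments, rowvar=False)          # long-run variance (iid sketch)
    return T * gbar @ np.linalg.solve(S, gbar)

# correctly specified toy moments: mean zero, so J behaves like a chi-squared draw
rng = np.random.default_rng(0)
J = j_statistic(rng.normal(size=(500, 3)))
# reference distribution is chi2(m - k); with no estimated parameters, chi2(3),
# whose 95% critical value is about 7.815
```

The overrejection the chapter documents arises because this chi-squared approximation fails when the estimated parameter sits near the boundary of its space.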
The second chapter addresses a multi-period portfolio selection problem when the number of assets in the financial market is large. With an exponential utility function, the optimal solution is shown to be a function of the inverse of the covariance matrix of asset returns. However, when the number of assets grows, this inverse becomes unreliable, yielding a selected portfolio that is far from the optimal one and performs poorly. We propose two solutions to this problem. First, we penalize the norm of the portfolio weights in the dynamic problem and show that the selected strategy is asymptotically efficient. However, this method only partially controls the estimation error in the optimal solution because it ignores the estimation error in the expected returns, which may also be large when the number of assets in the financial market increases considerably. We therefore propose an alternative method that penalizes the norm of the difference between successive portfolio weights in the dynamic problem, guaranteeing that the optimal portfolio composition does not fluctuate widely between periods. We show, under appropriate regularity conditions, that this new procedure gives better control of the estimation error in the optimal portfolio. This second method also helps investors avoid high trading costs by selecting strategies that are stable over time. Extensive simulations and empirical results confirm that our procedures considerably improve the performance of the dynamic portfolio.
In the third chapter, we use various regularization (or stabilization) techniques borrowed from the literature on inverse problems to estimate the maximum diversification portfolio as defined by Choueifaty (2011). This portfolio depends on the vector of asset volatilities and on the inverse of the covariance matrix of asset returns. In practice, these two quantities must be replaced by their sample counterparts. This introduces estimation error, amplified by the fact that the sample covariance matrix may be close to singular in a large financial market, yielding a selected portfolio far from the optimal one with very poor performance. To address this problem, we investigate the three most widely used regularization techniques: the ridge, which adds a diagonal matrix to the covariance matrix; the spectral cut-off, which discards the eigenvectors associated with the smallest eigenvalues; and Landweber-Fridman, an iterative method. Each is used to stabilize the inverse of the covariance matrix in the investment process. These regularization schemes involve a tuning parameter that must be chosen, so we propose a data-driven method for selecting it optimally. The resulting regularized rules are compared with several strategies, such as the most diversified portfolio, the target portfolio, the global minimum variance portfolio, and the naive 1/N strategy, in terms of in-sample and out-of-sample Sharpe ratio.
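The three regularization schemes admit a short numerical sketch (the tuning parameter alpha is chosen in a data-driven way in the chapter but fixed by hand here; the weight formula w ∝ Σ_reg⁻¹σ is a stylized version of the most diversified portfolio, and all names are mine):

```python
import numpy as np

def regularized_inverse(Sigma, alpha, scheme):
    """Three classic regularized inverses of a covariance matrix (sketch);
    alpha plays the role of the tuning parameter."""
    n = Sigma.shape[0]
    if scheme == "ridge":                    # Tikhonov: (Sigma + alpha I)^{-1}
        return np.linalg.inv(Sigma + alpha * np.eye(n))
    lam, V = np.linalg.eigh(Sigma)
    if scheme == "cutoff":                   # drop eigenvalues below alpha
        inv_lam = np.zeros_like(lam)
        keep = lam > alpha
        inv_lam[keep] = 1.0 / lam[keep]
        return V @ np.diag(inv_lam) @ V.T
    if scheme == "landweber":                # ~1/alpha Landweber-Fridman steps
        c = 0.9 / lam.max()                  # step size below 1/||Sigma||
        B = np.zeros((n, n))
        for _ in range(int(round(1.0 / alpha))):
            B = B + c * (np.eye(n) - B @ Sigma)   # fixed point: B = Sigma^{-1}
        return B
    raise ValueError(f"unknown scheme {scheme!r}")

def diversified_weights(Sigma, alpha, scheme):
    """Stylized most-diversified-portfolio weights, proportional to
    Sigma_reg^{-1} times the vector of asset volatilities."""
    sigma = np.sqrt(np.diag(Sigma))
    w = regularized_inverse(Sigma, alpha, scheme) @ sigma
    return w / w.sum()

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])             # toy 2-asset covariance matrix
w = diversified_weights(Sigma, 1e-6, "ridge")
```

On a well-conditioned toy matrix all three schemes recover the exact inverse as alpha shrinks; their differences only matter when the sample covariance is near-singular, which is precisely the large-portfolio setting of the chapter.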
|
10 |
Quantitative Portfolio Construction Using Stochastic Programming / Kvantitativ portföljkonstruktion med användning av stokastisk programmering : En studie inom portföljoptimeringAshant, Aidin, Hakim, Elisabeth January 2018 (has links)
In this study within quantitative portfolio optimization, stochastic programming is investigated as an investment decision tool. This research takes the direction of scenario-based Mean-Absolute Deviation and is compared with the traditional Mean-Variance model and the widely used Risk Parity portfolio. Furthermore, this thesis is done in collaboration with the First Swedish National Pension Fund, AP1, and the implemented multi-asset portfolios are thus tailored to match their investment style. The models are evaluated on two different fund management levels, in order to study whether portfolio performance benefits from a more restricted feasible domain. This research concludes that stochastic programming over the investigated time period is inferior to Risk Parity, but outperforms the Mean-Variance model. The biggest flaw of the model is its poor performance during periods of market stress. However, the model showed superior results during normal market conditions.
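The scenario-based Mean-Absolute Deviation model is, in its classic Konno-Yamazaki form, a linear program; a minimal sketch with simulated scenarios follows (the thesis's actual constraints, e.g. AP1-specific restrictions, are not reproduced, and all names and data are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(returns, target=None):
    """Scenario-based Mean-Absolute Deviation portfolio as a linear program
    (classic Konno-Yamazaki formulation; long-only, fully invested sketch).

    Variables are [w (n weights), d (T deviations)]; we minimize the mean
    of d subject to d_t >= |(r_t - mu)' w| for every scenario t.
    """
    T, n = returns.shape
    mu = returns.mean(axis=0)
    A = returns - mu                                   # centred scenarios
    c = np.concatenate([np.zeros(n), np.ones(T) / T])  # objective: mean deviation
    A_ub = np.block([[A, -np.eye(T)],                  #  (A w)_t - d_t <= 0
                     [-A, -np.eye(T)]])                # -(A w)_t - d_t <= 0
    b_ub = np.zeros(2 * T)
    if target is not None:                             # optional return floor
        A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
        b_ub = np.append(b_ub, -target)
    A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]  # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T), method="highs")
    return res.x[:n]

rng = np.random.default_rng(7)
R = rng.normal(0.01, 0.05, size=(250, 5))   # 250 toy scenarios, 5 assets
w = mad_portfolio(R)
```

Because the objective and constraints are linear in the scenarios, the model scales to large scenario sets, which is what makes it attractive as a stochastic-programming alternative to Mean-Variance.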
|