1. Treatment of Market Risks under Solvency II and its Market Implications. Lorent, Benjamin, 21 June 2016.
The three chapters all address solvency regulation issues, with a focus on market risks under the Solvency II framework. Chapter 1 deals with “high-level” aspects of Solvency II, such as its main principles and general structure. Chapters 2 and 3 are devoted to quantitative issues.

Chapter 1 describes the main evolutions that led to the development of Solvency II. The insurance sector has evolved dramatically during the last two decades. Among other developments, we stress the new risks faced by the sector, such as natural catastrophes, changing demographics and market risks. Insurers have become international companies, investing almost 10 trillion € of assets in Europe at the end of 2014 and being increasingly intertwined with banks and other financial sectors. Financial innovation and the refinement of risk management techniques and models developed by companies have gained momentum among the major European insurance companies. Have these evolutions changed the needs for the supervision of insurance companies? The economic foundation for regulation is based on the presence of market failures, including severe asymmetric information problems and principal-agent conflicts. Insurance consumers, particularly individuals and households, face significant challenges in judging the financial risk of insurers. But the importance of the insurance sector for financial stability has been increasing. A sound regulatory and supervisory system is necessary to maintain efficient, safe, fair and stable financial markets and to promote growth and competition in the insurance sector. The difficult conditions experienced by the industry and the shortcomings of the previous regulatory and supervisory framework have forced regulators to take action to change the way in which they regulate insurance companies’ solvency. Recognizing the shortcomings of Solvency I, EU policy-makers undertook the Solvency II project. Solvency I was not consistently applied throughout the EU, as the directive allowed countries to implement insurance regulation in different ways. Moreover, Solvency I did not consider risks fully or in detail. In life business, the major criticism was the lack of consideration of asset risks. Allowances for the latest developments in risk management were also inadequate, and companies could not use an internal model to calculate the solvency capital. Finally, the increasing presence of conglomerates and groups forced the insurance regulator to align some requirements with the banking regulation, Basel II/III. Due to the differences in their core business activities, the goal of banking and insurance regulators does not imply comparability of the overall capital charges. However, considering the asset side of the balance sheets, the investment portfolios of banks and insurers contain the same asset classes. In order to avoid regulatory arbitrage, the capital charges for the same amount and type of asset risk should be similar.

Chapter 2 compares the main regulatory frameworks in Europe: Solvency II and the Swiss Solvency Test (SST) in Switzerland, with a focus on potential market implications. Both systems are quite advanced, but some key differences need to be highlighted, including the treatment of assets, in particular sovereign bonds, the consideration of diversification, and the risk measure applied. Solvency II uses a Value at Risk at 99.5% whereas the SST is based on a Tail Value at Risk at 99%. Our approach is both qualitative and quantitative.
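To make the difference between the two risk measures concrete, here is a minimal Monte Carlo sketch; the heavy-tailed loss distribution and all parameters are illustrative assumptions, not the calibration used in the chapter:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative one-year portfolio losses (heavy-tailed Student-t, scaled)
losses = rng.standard_t(df=4, size=1_000_000) * 10

# Solvency II-style risk measure: Value at Risk at 99.5%
var_995 = np.quantile(losses, 0.995)

# SST-style risk measure: Tail Value at Risk (expected shortfall) at 99%
var_99 = np.quantile(losses, 0.99)
tvar_99 = losses[losses >= var_99].mean()

print(f"VaR 99.5%: {var_995:.2f}")
print(f"TVaR 99%:  {tvar_99:.2f}")
```

For a sufficiently heavy-tailed loss distribution, as in this sketch, the Tail Value at Risk at 99% exceeds the Value at Risk at 99.5%, which illustrates why the two frameworks can prescribe different capital for the same portfolio.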
In particular, based on a numerical example, we aim at quantifying the level of regulatory capital prescribed by the standard models. The numerical analysis reveals large differences between capital charges assigned to the same asset class under Solvency II and the SST. Solvency II penalizes investment in stocks, mainly due to a lower diversification benefit under the standard formula. On the other hand, the SST model requires more capital for bonds, primarily due to a more stringent risk measure and confidence level. The treatment of EU sovereign bonds under Solvency II is another area of concern, as it does not require any capital for spread risk. The question arises as to what extent an internal model leads to different capital requirements compared to the SST and Solvency II standard models. We therefore apply an internal approach based on Monte Carlo simulation to derive the necessary capital based on the Value at Risk at 99.5% (in line with the Solvency II standard model) and on the Tail Value at Risk at 99% (in line with the SST standard model). Internal models calculate capital requirements that more closely match the risks of insurers and promote a culture of risk management. To develop internal models, companies need incentives to properly manage their risks, i.e. lower capital requirements. One potential benefit of the standard model is that insurers who use it can be compared to one another, whereas internal models are by definition specific to individual insurers. One argument against the standard model is the possibility of some systemic risk: an unusual event in the capital or insurance market could encourage all insurers to take the exact same response, thereby causing a run in the market. The analysis shows that standard and internal models still display large discrepancies in their results, suggesting a long way ahead to achieve a harmonized view between the regulators and the insurance sector. The choice of a statistical model and the refinement of parameters are key concepts when setting up an internal model and appear to be critical in the Solvency Capital Requirement calculation. By calculating and comparing the market risk capital charges for a representative insurer under the Solvency II and SST standard approaches as well as an internal model, we are able to provide evidence that the regulatory framework might have an impact on asset portfolios. The main impacts would be a shift from long-term to shorter-term debt, an increase in the attractiveness of higher-rated corporate debt and government bonds, in particular EU sovereign bonds as a consequence of their special treatment under Solvency II, as well as low levels of equity holdings. But it is unlikely that large-scale reallocations will happen in the short term, as transitional arrangements are likely to phase in the implementation of Solvency II over several years. The likely impact on asset portfolios may also already have been anticipated by insurers.

Chapter 3 studies the effectiveness of the Solvency II reform in limiting the default probability of a life insurance company. Default risk means that policyholders might not get back their initial investment if the insurance company defaults. Policyholders are therefore concerned with issues such as the probability that the insurance company will go bankrupt and the amount they can expect to recover after taking account of the insurer's default risk.
Starting from a theoretical life insurance company which sells a participating insurance policy containing only a savings component and a single premium inflow, we simulate a life insurance company over an eight-year time horizon. We focus only on market risks, as there is no mortality risk attached to the insurance contract. Finally, several policies and investment strategies are analysed. The purpose of the chapter is to evaluate how Solvency II can prevent the company from collapsing. The papers discussing Solvency II effectiveness are qualitative in nature; in particular, there is little research on the accuracy of the standard formula with regard to the proclaimed ruin probability of 0.5% per year. To do so, we compare the probability of default at maturity of the life insurance policy, i.e. whether the company has enough assets to pay what was promised to the policyholders, with the early default probability forced by Solvency II based on standard and internal models. We first have to calculate the Solvency Capital Requirement as laid down in the directive. One crucial point is the evaluation of liabilities. To do so, we use an approach recently applied by the insurance sector called Least-squares Monte Carlo (LSMC). The aim of Solvency II is to monitor insurers on an annual basis. The SCR level can then be interpreted as a regulatory barrier, consistent with a model developed by Grosen and Jørgensen (2002). Key drivers of the ruin probability at maturity include interest rate parameters, portfolio riskiness and investment strategies in bonds. The continuous decrease of interest rates creates a challenge for insurers, especially life insurers, which suffer a double impact on their balance sheet: a valuation effect and decreasing reinvestment returns on premiums and maturing bonds. The latter also explains the riskiness of rolling-bond strategies compared to duration-matching strategies. By setting the confidence level to 99.5% per year, the regulator wants to ensure that the annual ruin probability equals 0.5%. Since the SCR from our internal model equals the 0.5% quantile of the distribution, it exactly matches the targeted ruin probability. Our analysis reveals that the set-up and calibration of the Solvency II standard model are inadequate, as the solvency capital derived by the standard formula overestimates the results of the internal model. This is mainly the consequence of an overestimated equity capital charge and a lower diversification benefit. The proclaimed 0.5% goal under Solvency II is not reached; the standard formula is too conservative. One declared goal of the directive is to decrease the duration gap between assets and liabilities; Solvency II thus penalizes rolling-bond strategies. The long-term feature of our policy should impact the level of regulatory capital. As Solvency II is based on a quantile measurement, we define the solvency capital using the default probability objective for different horizons. The SCR is not systematically a decreasing function of the time horizon, even if a decreasing pattern appears over the long term. This shows clearly that a horizon effect exists in the measurement of solvency. As the standard model overestimates the internal-model capital, we expect a forced default probability higher than 0.5% under the Solvency II framework. The SCR barrier stops the company more often than it should. This can be interpreted as one cost of regulation, i.e. closing down companies that would have been financially sound at maturity.
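The Least-squares Monte Carlo step can be sketched as follows. Everything below (asset dynamics, contract terms, polynomial basis) is an illustrative assumption rather than the chapter's actual set-up; the point is only to show how a regression on simulated paths approximates the conditional liability value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: 8-year horizon, lognormal assets, simple
# participating contract with a guaranteed rate
n_paths, T, r, sigma = 100_000, 8, 0.02, 0.15
premium, g_rate, participation = 100.0, 0.01, 0.9

# Simulate asset paths under geometric Brownian motion
z = rng.standard_normal((n_paths, T))
assets = premium * np.exp(np.cumsum((r - 0.5 * sigma**2) + sigma * z, axis=1))

# Payoff at maturity: the guarantee or a participation in the assets
guaranteed = premium * (1 + g_rate) ** T
payoff = np.maximum(guaranteed, participation * assets[:, -1])

# LSMC: regress the discounted terminal payoff on a polynomial basis of the
# year-1 asset value to approximate the liability value at the solvency horizon
basis = np.vander(assets[:, 0], 4)                 # columns x^3, x^2, x, 1
disc_payoff = payoff * np.exp(-r * (T - 1))
coef, *_ = np.linalg.lstsq(basis, disc_payoff, rcond=None)
liability_t1 = basis @ coef                        # conditional expectation

print(f"Mean liability value at t=1: {liability_t1.mean():.2f}")
```

The fitted values stand in for a full nested simulation of the liability at the one-year horizon, which is what makes an annual SCR computation tractable.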
The analysis of the evolution of default probabilities as a function of the time horizon reveals that ruin probabilities at maturity always lie below the Solvency II objective. Furthermore, the gap between the observed default at maturity and the Solvency II objective increases over time; the situation is even worse for longer-term insurance products. Finally, stakeholders are more interested in their expected return than in the default probability. A cost of regulation, defined as the difference between stakeholders' returns with and without the regulatory framework, exists, particularly for shareholders. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
2. Essays on Complexity in the Financial System. Geraci, Marco Valerio, 15 September 2017.
The goal of this thesis is to study two key aspects of the complexity of the financial system: interconnectedness and nonlinear relationships.

In Chapter 1, I contribute to the literature that focuses on modelling the nonlinear relationship between variables at the extremes of their distribution. In particular, I study the nonlinear relationship between stock prices and short selling. Whereas most of the academic literature has focused on measuring the relationship between short selling and asset returns on average, in Chapter 1 I focus on the relationship that arises in the extremes of the two variables. I show that the association between financial stock prices and short selling can become extremely strong under exceptional circumstances, while at the same time being weak in normal times. The tail relationship is stronger for small-cap firms, a result that is intuitively in line with the empirical finding that stocks with lower liquidity are more price-sensitive to short selling. Finally, results show that the adverse tail correlation between increases in short selling and declines in stock prices was not always lower during the short-selling ban periods, but had declined markedly towards the end of the analysis window. Such results cast doubt on the effectiveness of bans as a way to prevent self-reinforcing downward price spirals during the crisis.

In Chapter 2, I propose a measure of interconnectedness that takes into account the time-varying nature of connections between financial institutions. Here, the parameters underlying comovement are allowed to evolve continually over time through permanent shifts at every period. The result is an extremely flexible measure of interconnectedness, which uncovers new dynamics of the US financial system and can be used to monitor financial stability for regulatory purposes. Various studies have combined statistical measures of association (e.g. correlation, Granger causality, tail dependence) with network techniques in order to infer financial interconnectedness (Billio et al. 2012; Barigozzi and Brownlees, 2016; Hautsch et al. 2015). However, these standard statistical measures presuppose that the inferred relationships are time-invariant over the sample used for the estimation. To retrieve a dynamic measure of interconnectedness, the usual approach has been to divide the original sample period into multiple subsamples and calculate these statistical measures over rolling windows of data. I argue that this is potentially unsuitable if the system studied is time-varying. By relying on short subsamples, rolling windows lower the power of inference and induce dimensionality problems. Moreover, the rolling window approach is known to be susceptible to outliers because, in small subsamples, these have a larger impact on estimates (Zivot and Wang, 2006). On the other hand, choosing longer windows leads to estimates that are less reactive to change, biasing results towards time-invariant connections. Thus, the rolling window approach requires the researcher to choose the window size, which involves a trade-off between precision and flexibility (Clark and McCracken, 2009). The choice of window size is critical and can lead to different results regarding interconnectedness. The major novelty of the framework is that I recover a network of financial spillovers that is entirely dynamic. To do so, I make the modelling assumption that the connection between any two institutions evolves smoothly through time.
I consider this assumption reasonable for three main reasons. First, since connections are the result of many financial contracts, it seems natural that they evolve smoothly rather than abruptly. Second, the assumption implies that the best forecast of a connection in the future is the state of that connection today. This is consistent with the notion of forward-looking prices. Third, the assumption allows for high flexibility and lets the data speak for itself. The empirical results show that financial interconnectedness peaked around two main events: the Long-Term Capital Management crisis of 1998 and the great financial crisis of 2008. During these two events, I find that large banks and broker/dealers were among the most interconnected sectors and that real estate companies were the most vulnerable to financial spillovers. At the individual level, I find that Bear Stearns was the most vulnerable financial institution; however, it was not a major propagator, which might explain why its default did not trigger a systemic crisis. Finally, I rank financial institutions according to their interconnectedness and find that rankings based on the time-varying approach are more stable than rankings based on other market-based measures (e.g. the marginal expected shortfall of Acharya et al. (2012) and Brownlees and Engle (2016)). This aspect is significant for policy makers because highly unstable rankings are unlikely to be useful to motivate policy action (Danielsson et al. 2015; Dungey et al. 2013).

In Chapter 3, rather than treating interconnectedness as an exogenous process that has to be inferred, as is done in Chapter 2, I model interconnectedness as an endogenous function of market dynamics. Here, I take interconnectedness as the realized correlation of asset returns. I seek to understand how short selling can induce higher interconnectedness by increasing the negative price pressure on pairs of stocks. It is well known that realized correlation varies continually through time and becomes higher during market events, such as the liquidation of large funds. Most studies model correlation as an exogenous stochastic process, as is done, for example, in Chapter 2. However, recent studies have proposed to interpret correlation as an endogenous function of the supply and demand of assets (Brunnermeier and Pedersen, 2005; Brunnermeier and Oehmke, 2014; Cont and Wagalath, 2013; Yang and Satchell, 2007). Following these studies, I analyse the relationship between short selling and correlation between assets. First, thanks to new data on public short selling disclosures for the United Kingdom, I connect stocks based on the number of common short sellers actively shorting them. I then analyse the relationship between common short selling and the excess correlation of those stocks. To this end, I measure excess correlation as the monthly realized correlation of daily returns adjusted with the four-factor model of Fama and French (1993) and Carhart (1997). I show that common short selling can predict one-month-ahead excess correlation, controlling for similarities in size, book-to-market, momentum, and several other common characteristics. I confirm the predictive ability of common short selling out-of-sample, which could prove useful for risk and portfolio managers attempting to forecast the future correlation of assets. Moreover, I show that this predictive ability can be used to establish a trading strategy that yields positive cumulative returns over 12 months.
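A minimal sketch of the excess-correlation construction described above; the column names and the exact factor set are assumptions for illustration, and the chapter's own implementation may differ:

```python
import pandas as pd
import statsmodels.api as sm

def four_factor_residuals(stock_ret: pd.Series, factors: pd.DataFrame) -> pd.Series:
    """Idiosyncratic daily returns from a four-factor regression.

    `factors` is assumed to hold columns ['mkt_rf', 'smb', 'hml', 'mom']
    (the Fama-French factors plus Carhart momentum); the names are hypothetical.
    """
    X = sm.add_constant(factors[["mkt_rf", "smb", "hml", "mom"]])
    return sm.OLS(stock_ret, X, missing="drop").fit().resid

def monthly_excess_correlation(res_a: pd.Series, res_b: pd.Series) -> pd.Series:
    """Monthly realized correlation of two daily residual return series."""
    df = pd.concat({"a": res_a, "b": res_b}, axis=1).dropna()
    return df.groupby(df.index.to_period("M")).apply(lambda g: g["a"].corr(g["b"]))
```

The monthly series produced this way would then be regressed on the lagged common-short-selling measure, together with the firm-characteristic controls listed above.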
In the second part of the chapter I concentrate on possible mechanisms that could give rise to this effect. I focus on three non-exclusive mechanisms. First, short selling can induce higher correlation in asset prices through the price-impact mechanism (Brunnermeier and Oehmke, 2014; Cont and Wagalath, 2013). According to this mechanism, short sellers can contribute to price declines by creating sell-order imbalances, i.e. by increasing the excess supply of an asset. Thus, short selling across several stocks should increase the realized correlation of those stocks. Second, common short selling can be associated with higher correlation if short sellers are acting as voluntary liquidity providers. According to this mechanism, short sellers might act as liquidity providers in times of high buy-order imbalances (Diether et al. 2009b). In this case, the low returns observed after short sales might be compensation to short sellers for providing liquidity. In a multi-asset setting, this mechanism would result in short selling being associated with higher correlation. Both above-mentioned mechanisms deliver a testable hypothesis that I verify: both posit that the association between short selling and correlation should be stronger for stocks with low liquidity. For the price-impact mechanism, the effect should be stronger for illiquid stocks and stocks with low market depth. For the liquidity provision mechanism, the compensation for providing liquidity should be higher for illiquid stocks. The empirical results cannot confirm that the uncovered association between short selling and correlation is stronger for illiquid stocks, thus supporting neither the price-impact nor the liquidity provision hypothesis. I therefore examine a third possible mechanism that could explain the uncovered association between short selling and correlation, i.e. the informative trading mechanism. Short sellers have been found to be sophisticated market agents who can predict future returns (Dechow et al. 2001). If this is indeed the case, then short selling should be associated with higher future correlation. I find that informed common short selling, i.e. common short selling linked to informative trading, is strongly associated with future excess correlation. This evidence supports the informative trading mechanism as an explanation for the association between short selling and correlation. To further verify this mechanism, I check whether informed short selling takes place in the data, whilst controlling for several determinants of short selling, including short selling costs. The results show evidence of both informed and momentum-based non-informed short selling taking place. Overall, the results have several policy implications for regulators. They suggest that the relationship between short selling and future excess correlation is driven by informative short selling, thus confirming the sophistication of short sellers and their proven importance for market efficiency and price informativeness (Boehmer and Wu, 2013). On the other hand, I cannot dismiss that non-informative, momentum-based short selling also takes place in the sample. The good news is that I do not find evidence of a potentially detrimental price-impact effect of common short selling for illiquid stocks, which is the sort of predatory effect that regulators often fear. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
3. Empirical evidence on time-varying risk attitudes. Gilson, Matthieu, 05 September 2019.
My thesis focuses on the risk-taking behavior of financial agents, aiming particularly at better understanding how risk attitudes can change over time. It also explores the implications that these changes have for financial markets and for the economy as a whole.

The first paper, which is joint work with Kim Oosterlinck and Andrey Ukhov, studies how the risk aversion of financial market participants was affected by the Second World War. The literature links extreme events to changes in risk aversion but fails to find a consensus on the direction of this change. Moreover, due to data limitations and difficulties in estimating risk aversion, the speed of the change in risk aversion has seldom been analyzed. This paper develops an original methodology to overcome the latter limitation. To estimate changes in attitude toward risk, we rely on the daily market prices of lottery bonds issued by Belgium. We provide evidence on the dynamics of risk attitudes before, during and after the Second World War. We find substantial variations between 1938 and 1946. Risk aversion increased at the outbreak of the war, decreased dramatically during the occupation, and increased again after the war. To our knowledge, this finding of a reversal in risk attitudes is unique in the literature. We discuss several potential explanations for this pattern, namely changes in economic perspectives, mood, prospect theory, and background risk. While they might all have played a role, we argue that habituation to background risk most consistently explains the observed behavior over the whole period. Living continuously exposed to war-related risks gradually changed the risk-taking behavior of investors.

In the second paper, I derive a measure of risk aversion from asset prices and analyze its main drivers. Given the complexity of eliciting risk aversion from asset prices, few papers provide empirical evidence on the dynamics of risk aversion from a long-term perspective. This paper tries to fill the gap. First, I provide a measure of risk aversion that is original, both because of the length of its sample period (1958-1991) and because of the methodology used. I study the relationship between this new measure of risk aversion and several key economic variables in a structural vector autoregression. Results show that risk aversion varies over the period. A worsening of economic conditions, a decrease in stock prices or a tighter monetary policy leads to an increase in risk aversion. Conversely, an increase in risk aversion is linked to a larger corporate bond credit spread and has an adverse effect on stock prices.

The third paper explores the impact of asset price bubbles on the riskiness of financial institutions. I investigate the effect of a real estate boom on the financial stability of commercial banks in the United States using exogenous variations in their exposure to real estate prices. I find that the direction of the effect depends on bank characteristics. Although higher real estate prices have a positive impact on bank stability on average, small banks and banks that operate in competitive banking markets experience a negative effect. I reconcile these findings by providing evidence that higher real estate prices benefit commercial banks by raising the value of collateral pledged by borrowers, but at the cost of an increase in local banking competition. This increase in competition affects banks that have low market power more severely, which explains why small banks and banks facing a high degree of competition display relatively lower stability during a real estate boom.
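Purely to illustrate the general idea of backing risk attitudes out of lottery-bond prices, here is a sketch that inverts a CRRA risk-aversion coefficient from a single hypothetical bond price; the thesis's actual methodology is richer, and every number below is invented for the example:

```python
import numpy as np
from scipy.optimize import brentq

def certainty_equivalent(payoffs, probs, gamma, wealth=1000.0):
    """Certainty equivalent of a lottery for a CRRA investor with given wealth."""
    if np.isclose(gamma, 1.0):
        eu = np.sum(probs * np.log(wealth + payoffs))
        return np.exp(eu) - wealth
    eu = np.sum(probs * (wealth + payoffs) ** (1 - gamma) / (1 - gamma))
    return ((1 - gamma) * eu) ** (1 / (1 - gamma)) - wealth

# Hypothetical lottery bond: face value repaid for sure, small chance of a prize
payoffs = np.array([100.0, 10_100.0])   # redemption; redemption plus prize
probs = np.array([0.999, 0.001])

observed_price = 105.0                   # hypothetical market price

# The implied risk aversion is the gamma at which the certainty equivalent
# of the lottery equals the observed price
gamma_implied = brentq(
    lambda g: certainty_equivalent(payoffs, probs, g) - observed_price,
    a=0.01, b=50.0,
)
print(f"Implied CRRA coefficient: {gamma_implied:.2f}")
```

Tracking such an implied coefficient day by day, as lottery-bond prices move, is the gist of how a dynamic series of risk attitudes can be recovered.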
/ Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
4. Essays on Bank Opaqueness. D'Udekem D'Acoz, Benoit, 02 September 2020.
Opaqueness is inherent to financial institutions but contributes to the fragility of the banking system. The archetypal assets held by banks, loans, have a value that cannot be properly communicated outside of a banking relationship (Sharpe 1990; Rajan 1992). Because they are relationship-specific and raise adverse selection concerns, these assets are illiquid (Diamond and Rajan 2001). However, these assets are financed with liquid deposits; uncertainty about their value can cause depositors to withdraw their funds and banks to topple (Calomiris and Kahn 1991; Chen 1999). Additionally, the combination of opaqueness and leverage creates moral hazard incentives, exacerbated by government guarantees, as well as other agency conflicts that are detrimental to stability (Jensen and Meckling 1976).

This dissertation presents three original contributions on the consequences of bank opaqueness. The first contribution concerns financial analysts. We show that, unlike in other industries, the most talented sell-side analysts are no more likely than their peers to issue recommendation revisions that influence bank stock prices. However, star analysts appear to maintain influence by uncovering firm-specific bad news that induces sharp negative revaluations of bank stock prices. In the second contribution, we find that the persistence of bank dividend policies increases with agency conflicts between shareholders and managers and decreases in the presence of large institutional shareholders who have an incentive to monitor banks and to mitigate agency conflicts. Our third contribution assesses the competitive distortions in bond markets since the recent reforms of the European Union bank safety net. We find that nationalized systemic banks, and those that benefit from high bailout expectations, do not enjoy funding advantages compared to their peers. Our findings also suggest that bailout expectations for these banks have diminished, consistent with the new regulatory frameworks enacted after the financial crisis being effective.

Overall, our findings suggest that opaqueness presents formidable challenges for public authorities but that its consequences can be mitigated by credible regulation. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
5. Essays in Financial Economics. Koulischer, Francois, 24 March 2016.
The financial crisis that started in 2007 has seen central banks play an unprecedented role, both to ensure financial stability and to support economic activity. While the importance of the central bank in ensuring financial stability is well known (see e.g. Padoa-Schioppa (2014)), the unprecedented nature of the financial crisis led central banks to resort to new instruments for which the literature offered little guidance. This thesis aims to bridge this gap, using both theory and data to better understand one of the main instruments used by central banks: collateralized loans. The general contribution of the thesis is thus both retrospective and forward-looking. From a retrospective point of view, it helps in understanding the actions of the central bank during the crisis and the mechanisms involved. Looking forward, a better understanding of the tools used during the crisis helps inform future policies.

The first chapter starts from the observation that the literature, starting with Bagehot (1873), has generally assumed that the central bank should lend against high-quality collateral. However, in the 2007-2013 crisis central banks lent mostly against low-quality collateral. In this chapter, we explore when it is efficient for the central bank to relax its collateral policy. In our model, a commercial bank funds projects in the real economy by borrowing against collateral from the interbank market or the central bank. While collateral prevents the bank from shirking (in the spirit of Holmstrom and Tirole (2011)), it is costly to use, as its value is lower for investors and the central bank than for the bank. We find that when the bank has high levels of available collateral, it borrows in the interbank market against low collateral requirements, so that the collateral policy of the central bank has no impact on banks' borrowing. However, when the amount of available collateral falls below a threshold, the lack of collateral prevents borrowing. In this case, the collateral policy of the central bank can affect lending, and it can therefore be optimal for the central bank to relax its collateral requirements to avoid a credit crunch.

The second chapter focuses on collateralized loans in the context of the euro area. According to the literature on optimum currency areas, one of the main drawbacks of currency unions is the inability of the central bank to accommodate asymmetric shocks with its interest rate policy. Suppose that there are two countries in an economy and one suffers a negative shock while the other has a positive shock. Theory would suggest an accommodative policy (low interest rates) in the first country and a restrictive policy (high interest rates) in the second one. This is, however, impossible in a currency union because the interest rate must be the same for both countries (Mundell 1961, McKinnon 1963, de Grauwe 2012). In this chapter I show that collateral policy can accommodate asymmetric shocks. I extend the model of collateralized lending of the first chapter to two banks, A and B, and two collateral types, 1 and 2. I also introduce a central bank deposit facility, which allows the interest rate instrument to be compared with the collateral policy instrument in the context of a currency area hit by asymmetric shocks. Macroeconomic shocks impact the investment opportunities available to banks and the value of their collateral, and the central bank seeks to steer rates in the economy towards a target level. I show that when banks have different collateral portfolios (as in a monetary union where banks invest in the local economy), an asymmetric shock to the quality and value of their collateral can increase interest rates in the country hit by the negative shock while keeping them unchanged in the country with a positive shock.
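The mechanism can be made concrete with a stylized sketch: by relaxing the haircut only on the collateral type concentrated in the shocked country, the central bank eases funding conditions for the affected bank while leaving the other almost unchanged. All figures are illustrative assumptions, not the model's calibration:

```python
def borrowing_capacity(portfolio: dict, haircuts: dict) -> float:
    """Maximum collateralized borrowing: collateral value net of haircuts."""
    return sum(value * (1 - haircuts[kind]) for kind, value in portfolio.items())

# Bank A (in the country hit by the negative shock) holds mostly type-1
# collateral; bank B holds mostly type-2 collateral.
bank_a = {"type1": 80.0, "type2": 20.0}
bank_b = {"type1": 20.0, "type2": 80.0}

baseline = {"type1": 0.10, "type2": 0.10}
relaxed  = {"type1": 0.02, "type2": 0.10}   # haircut cut only on type 1

for name, bank in [("A", bank_a), ("B", bank_b)]:
    print(f"Bank {name}: capacity {borrowing_capacity(bank, baseline):.1f} "
          f"-> {borrowing_capacity(bank, relaxed):.1f}")
```

Here bank A's borrowing capacity rises from 90 to 96.4 while bank B's barely moves, the collateral-policy analogue of a country-specific interest rate cut.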
The third chapter provides an empirical illustration of this “collateral channel” of open market operations. We use data on assets pledged by banks to the ECB from 2009 to 2011 to quantify the “collateral substitution / smoother transmission of monetary policy” trade-off faced by the central bank. We build an empirical model of collateral choice that is similar in spirit to the model of institutional demand for financial assets of Koijen (2014). We show how the haircut of the central bank can affect the relative cost of pledging collateral to the central bank, and how this cost can be estimated using the amount of assets pledged by banks. Our model allows us to perform a broad set of policy counterfactuals. For example, we use the recovered coefficients to assess how a 5% haircut increase on all collateral belonging to a specific asset class (e.g. government bonds or ABS) would affect the type of collateral used at the central bank.

The final chapter focuses on the use of loans as collateral by banks in the euro area. While collateral is generally viewed as consisting of liquid and safe assets such as government bonds, we show that banks in Europe do use bank loans as collateral. We identify two purposes of bank loan collateral: funding and liquidity. The main distinction between the two purposes concerns the maturity of the instruments involved: the liquidity purpose refers to the use of bank loans as collateral to obtain short-term liquidity and manage unexpected liquidity shocks; in practice, the central bank is the main acceptor of this collateral. The second type of use is for funding purposes, in which case bank loans are used as collateral in ABSs or covered bonds. The collateral in these transactions allows banks to obtain a lower long-term funding cost. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
6. Essays on the economics, politics and finance of infrastructure. Bertomeu, Salvador, 21 January 2021.
The main idea of this thesis is to study three different issues (economic, political, and financial) related to three different public infrastructure sectors (transport, water and sewerage, and electricity), using three different methodological approaches. In the first chapter, I make creative use of a non-parametric technique traditionally used to measure the relative efficiency of a set of similar firms, data envelopment analysis, to identify the most likely objective, economic vs. political, behind a specific policy. In the second chapter, I empirically investigate the effects of the increasing private financial ownership of the water and sewerage utilities in England and Wales on key outcome variables such as leverage levels and consumer bills. Finally, in the third chapter, I evaluate an equity-aimed policy introduced in the electricity sector in Spain in 2009 by measuring the effect of its introduction on the probability of a household being energy poor.

Chapter One – Unbundling political and economic rationality: a non-parametric approach tested on transport infrastructure in Spain

This paper suggests a simple quantitative method to assess the extent to which public investment decisions are dominated by political or economic motivations. The true motivation can be identified by modeling each policy goal as the focus of the optimization anchoring a data envelopment analysis of the efficiency of the observed implementation. In other words, we rank performance based on how far observed behavior is from the efficient frontier under each possible goal, and the goal for which the distance is smaller reveals the specific motivation of the investment, or of any policy decision for that matter. Traditionally, data envelopment analysis is used to measure the relative efficiency of a set of firms having a similar productive structure; in this case, each "firm" corresponds to a different policy year, the policy being the determinant of the investment made.

The approach is tested on Spain's land transport infrastructure policy, since it is argued by many observers to be driven more by political than economic concerns, resulting in a mismatch between capacity investment and traffic demand. History has shown that when the source of financing has been private, the network has been developed in areas with high demand, i.e. the Northern and Mediterranean corridors. When the source has been public, the network has been developed following a radial pattern, converging towards Madrid. The method clearly shows that public investments in land transport infrastructure have generally been more consistent with a political objective (the centralization of economic power) than with an economic objective (maximizing mobility).
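A minimal sketch of the data envelopment analysis step, assuming the standard input-oriented, constant-returns-to-scale formulation; the chapter's exact specification (with policy years as decision-making units and one run per candidate objective) may differ:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Input-oriented CCR efficiency score for each unit.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). For each unit o, solve
    min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # inputs: X^T lam - theta x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # outputs: -Y^T lam <= -y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.fun
    return scores

# Each "unit" is a policy year: input = investment, output = traffic served
# (illustrative numbers only)
X = np.array([[100.0], [120.0], [90.0], [150.0]])
Y = np.array([[80.0], [85.0], [80.0], [90.0]])
print(dea_efficiency(X, Y).round(3))   # e.g. [0.9, 0.797, 1.0, 0.675]
```

Running the same program once with mobility-related outputs and once with politically motivated ones, and comparing which frontier the observed years sit closer to, is the gist of the identification strategy described above.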
Chapter Two – On the effects of the private financial ownership of regulated utilities: lessons from the UK water sector

This paper analyzes the quantitative impact of the growing role of non-traditional financial actors in the financing structure and consumer pricing of regulated private utilities. The focus is on the water sector in England and Wales, where the effect of the firms' corporate financing and ownership strategies on key outcome variables may have been underestimated. The sector was privatized in 1989, the year in which the 10 regional monopolies became 10 water and sewerage companies, listed and publicly traded on UK stock exchanges. Since then, six of the ten have been de-listed, bought out by private equity (investment and infrastructure funds). I make use of this variation in ownership to measure the effect on leverage levels and consumer bills.

I develop a theoretical framework allowing me to derive two hypotheses: first, the buyout of a company increases its leverage level, and second, the buyout of a company increases the consumer bill through higher leverage levels. The empirical analysis is based on two sequential steps: a staggered difference-in-differences estimation shows that private equity buyouts increase the leverage levels of water utilities. An instrumental variable, two-stage least squares estimation then shows that these higher leverage levels increase the average consumer bills of bought-out utilities more than if they had not been bought out. The estimated impact of private equity buyouts in the sector in England and Wales on the annual average consumer bill ranges from 13.5 to 32.6 GBP, for a sample average bill of about 427 GBP.

Chapter Three – Understanding the effectiveness of the electricity social rate in reducing energy poverty in Spain

This paper analyzes the causal impact of the introduction of a social subsidy, the bono social de electricidad, in Spain's electricity market in 2009. The measure was introduced following the surge in energy poverty, which increased particularly after the financial crisis. Using data from the family budget survey from 2006 to 2017, we evaluate the social policy in its fight against energy poverty. We proceed in two steps. First, we use a difference-in-differences approach to measure the causal impact and to analyze how the introduction of the measure directly affected eligible households. We find that the introduction of the subsidy reduced the likelihood of energy poverty for eligible households; the bono social de electricidad has therefore reached its equity objective of increasing the affordability of electricity. The second step aims at understanding how, specifically, the introduction of the subsidy affects consumers. We find that, in reaction to lower effective prices, households do not increase their consumption of electricity, resulting in lower total electricity expenditure. We are therefore able to show that this policy did not induce a change in consumption behavior and that the increased affordability resulted entirely in a decrease in electricity expenditure. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
7. Essays on tail risk in macroeconomics and finance: measurement and forecasting. Ricci, Lorenzo, 13 February 2017.
This thesis is composed of three chapters that propose novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part of this dissertation focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR, while the second contribution addresses the issue of identifying the non-normal structural shocks in vector autoregressions that are common in finance. The third part belongs to the vast literature on predicting economic growth; the problem is tackled using a Bayesian dynamic factor model to predict Norwegian GDP.

Chapter I: TailCoR

The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent at the tails of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments, ii) it is very general, as it applies with no specific distributional assumption, and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data, and several extensions useful for practitioners are presented, such as downside and upside tail correlations. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. On the overall sample, results show that both the linear and non-linear contributions are relevant. The results suggest that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis.
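To fix ideas, here is a quantile-based sketch in the spirit of TailCoR; it is an illustrative reconstruction from the description above (standardize by quantiles, project the pair on the diagonal, compare the tail interquantile range with an independence benchmark), not necessarily the paper's exact formula:

```python
import numpy as np
from scipy.stats import norm

def tailcor_sketch(x: np.ndarray, y: np.ndarray, tau: float = 0.95) -> float:
    """Quantile-based tail co-movement of two return series (illustrative)."""
    def standardize(v):
        q75, q25 = np.quantile(v, [0.75, 0.25])
        return (v - np.median(v)) / (q75 - q25)

    # Project the standardized pair on the 45-degree line
    z = (standardize(x) + standardize(y)) / np.sqrt(2)
    itqr = np.quantile(z, tau) - np.quantile(z, 1 - tau)

    # Benchmark: the same construction under independent Gaussian marginals
    iqr_gauss = norm.ppf(0.75) - norm.ppf(0.25)
    itqr_indep = 2 * norm.ppf(tau) / iqr_gauss
    return itqr / itqr_indep
```

Values around one indicate no co-movement beyond independence, while larger values signal dependence; because only sample quantiles are used and no distributional assumption is imposed on the data, the three advantages listed above carry through even to this toy version.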
Chapter II: On the identification of non-normal shocks in structural VAR

The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals, and ii) rotate the uncorrelated residuals in order to get a vector of independent shocks, using a tail dependency matrix. We do not label the shocks a priori, but post-estimation, on the basis of economic judgement. Furthermore, we show, using a Monte Carlo study, how our approach allows all the shocks to be identified. In some cases, the method can turn out to be more effective when tail events are frequent; the frequency of the series and the degree of non-normality are therefore relevant to achieve accurate identification. Finally, we apply our method to two different VARs, both estimated on US data: i) a monthly trivariate model that studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme based on a combination of short- and long-run restrictions that is used in part of the empirical literature.

Chapter III: Nowcasting Norway

The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without knowledge of current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension of the dynamic factor model (DFM) to the Bayesian framework. The difference is that, compared with a DFM, there is more dynamics in the BDFM, introduced in order to accommodate the dynamic heterogeneity of different variables. However, in order to introduce more dynamics, the BDFM requires estimating a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the literature on DFMs is the use of large datasets; however, a strand of the literature has shown how, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well in terms of both point forecasts and density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished