331 |
Analysis of the relationship between business cycles and bank credit extension: evidence from South Africa. Chakanyuka, Goodman.
This study provides evidence of the relationship between bank-granted credit and
business cycles in South Africa. The study is conducted in three phases, namely
qualitative research (Phase I), quantitative research (Phase II) and econometric analysis
(Phase III). A sequential (connected data) mixed methodology (Phase I and II) is used to
collect and analyze primary data from market participants. The qualitative research
(Phase I) involves structured interviews with influential or well informed people on the
subject matter. Phase I of the study is used to understand the key determinants of bank
credit in South Africa and to appreciate how each of the credit aggregates behaves during
alternating business cycles. The qualitative survey results suggest that the key determinants of commercial bank credit in South Africa are economic growth, collateral value, bank competition, money supply, deposit liabilities, capital requirements, bank lending rates
and inflation. The qualitative results are used to formulate the questions of the structured survey questionnaire (quantitative research, Phase II). ANOVA and Pearson's product-moment correlation analysis are used to assess the relationships between variables.
The quantitative results show a direct and positive relationship between bank lending behavior and the determinants economic growth, collateral value, bank competition and money supply. In contrast, they show a negative relationship between credit growth and both bank capital and lending rates. Overall,
the quantitative findings show that bank lending in South Africa is procyclical. The
survey results indicate that the case for the demand-following hypothesis is stronger than that for the supply-leading hypothesis in South Africa.
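The Phase II techniques named above (Pearson product-moment correlation and one-way ANOVA) can be sketched directly; the Likert-scale responses and respondent groups below are invented for illustration and are not the study's data.

```python
import numpy as np

# Invented Likert-scale survey responses (1-5): perceived economic growth
# vs. reported credit growth; not the study's data.
econ_growth = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5], dtype=float)
credit_growth = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5], dtype=float)

# Pearson product-moment correlation between the two response sets
r = np.corrcoef(econ_growth, credit_growth)[0, 1]

# One-way ANOVA: do mean credit-growth responses differ across three
# hypothetical respondent groups (e.g. banks, regulators, academics)?
groups = [credit_growth[:3], credit_growth[3:6], credit_growth[6:]]
grand = credit_growth.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_b, df_w = len(groups) - 1, len(credit_growth) - len(groups)
F = (ss_between / df_b) / (ss_within / df_w)
```

A large positive `r` corresponds to the "direct and positive relationship" the survey reports, and a large `F` indicates group means that differ significantly.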
The econometric methodology is used to augment the results of the survey study. Phase III of the study re-examines the econometric relationship between bank lending and business cycles. The study employs cointegration and vector error correction model (VECM) techniques to test for the existence of a long-run relationship between the selected variables, and Granger causality tests to establish the direction of causation. The study uses quarterly data for the period 1980Q1 to 2013Q4. Business cycles are determined and measured by Gross Domestic
Product at market prices while bank-granted credit is proxied by credit extension to the
private sector. The econometric test results show that there is a significant long-run
relationship between economic growth and bank credit extension. The Granger causality test provides evidence of a unidirectional causal relationship running from economic growth to credit extension. The results thus indicate that the case for the demand-following hypothesis is stronger than that for the supply-leading hypothesis in South Africa.
Economic growth spurs credit market development in South Africa.
Overall, the results show that there is a stable long-run relationship between macroeconomic
business cycles and real credit growth in South Africa. The results show that
economic growth significantly causes and stimulates bank credit. The study therefore recommends that South Africa give policy priority to the promotion and development of the real sector of the economy to propel and accelerate credit extension. Economic growth is considered the key policy variable for stimulating credit extension. The findings hold important implications for both theory and policy. / Business Management / D.B.L.
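The Granger-causality logic of Phase III can be sketched on simulated quarterly data, not the study's series; the one-lag specification and all coefficients below are assumptions chosen so that growth drives credit (a demand-following economy).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated quarterly series, 1980Q1-2013Q4 (136 observations): GDP growth
# drives credit growth with a one-quarter lag.
n = 136
gdp = np.zeros(n)
credit = np.zeros(n)
for t in range(1, n):
    gdp[t] = 0.5 * gdp[t - 1] + rng.normal(0.0, 1.0)
    credit[t] = 0.3 * credit[t - 1] + 0.8 * gdp[t - 1] + rng.normal(0.0, 0.5)

def rss(y, X):
    """Residual sum of squares of the OLS fit y = X b + e."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    return e @ e

def granger_F(cause, effect):
    """F statistic for 'cause' Granger-causing 'effect' with one lag."""
    y = effect[1:]
    const = np.ones(len(y))
    restricted = np.column_stack([const, effect[:-1]])            # own lag only
    unrestricted = np.column_stack([const, effect[:-1], cause[:-1]])
    r, u = rss(y, restricted), rss(y, unrestricted)
    return (r - u) / (u / (len(y) - 3))   # one restriction

F_growth_to_credit = granger_F(gdp, credit)   # should be large
F_credit_to_growth = granger_F(credit, gdp)   # should be small
```

The asymmetry between the two F statistics mirrors the thesis's unidirectional finding: lagged growth helps predict credit, while lagged credit adds little to a growth equation.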
|
332 |
Essays on the economics of risk and uncertainty. Berger, Loïc, 22 June 2012.
In the first chapter of this thesis, I use the smooth ambiguity model developed by Klibanoff, Marinacci, and Mukerji (2005) to define the concepts of ambiguity and uncertainty premia in a way analogous to what Pratt (1964) did in the risk theory literature. I show that these concepts may be useful to quantify the effect ambiguity has on the welfare of economic agents. I also define several other concepts, such as the unambiguous probability equivalent and the ambiguous utility premium, provide local approximations of these different premia, and show the link that exists between them when comparing different degrees of ambiguity aversion, not only in the small but also in the large.

In the second chapter, I analyze the effect of ambiguity on self-insurance and self-protection, tools used to deal with the uncertainty of facing a monetary loss when market insurance is not available (in the self-insurance model, the decision maker can exert effort to reduce the size of the loss occurring in the bad state of the world, while in the self-protection, or prevention, model, the effort reduces the probability of being in the bad state).

In a short note, in the context of a two-period model, I first examine the links between risk aversion, prudence and self-insurance/self-protection activities under risk. Contrary to the results obtained in the static one-period model, I show that the impacts of prudence and of risk aversion go in the same direction and generate a higher level of prevention in the more usual situations. I also show that the results concerning self-insurance in a single-period framework may easily be extended to a two-period context.

I then consider two-period self-insurance and self-protection models in the presence of ambiguity and analyze the effect of ambiguity aversion. I show that in most common situations, ambiguity prudence is a sufficient condition to observe an increase in the level of effort. I propose an interpretation of the model in the context of climate change, so that self-insurance and self-protection are seen, respectively, as the adaptation and mitigation efforts a policy-maker should provide to deal with an uncertain catastrophic event, and I interpret the results obtained as an expression of the Precautionary Principle.

In the third chapter, I introduce the economic theory developed to deal with ambiguity into the context of medical decision-making. I show that, under diagnostic uncertainty, an increase in ambiguity aversion always leads a physician whose goal is to act in the best interest of his patient to choose a higher level of treatment. In the context of a dichotomous choice (treatment versus no treatment), this result implies that taking into account the attitude agents generally manifest towards ambiguity may induce a physician to change his decision by opting for treatment more often. I further show that under therapeutic uncertainty the opposite happens, i.e. an ambiguity-averse physician may eventually choose not to treat a patient who would have been treated under ambiguity neutrality. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
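The smooth ambiguity evaluation in chapter one can be illustrated numerically. In the Klibanoff-Marinacci-Mukerji setup, an act is valued as phi^(-1)(E_mu[phi(E_pi u)]); the log utility, exponential phi, and the two candidate loss models below are common textbook choices, not the thesis's calibration.

```python
import math

# Wealth gamble with an ambiguous loss probability: two candidate models,
# equally weighted by the second-order prior (all numbers illustrative).
wealth, loss = 100.0, 50.0
loss_probs = [0.1, 0.5]
weights = [0.5, 0.5]

def u(w):                      # risk attitude (concave)
    return math.log(w)

alpha = 2.0                    # ambiguity-aversion parameter

def phi(x):                    # ambiguity attitude (concave => averse)
    return -math.exp(-alpha * x)

def phi_inv(y):
    return -math.log(-y) / alpha

# Expected utility under each candidate model
eus = [(1 - p) * u(wealth) + p * u(wealth - loss) for p in loss_probs]

# Ambiguity-neutral value vs. smooth ambiguity-averse value
v_neutral = sum(w * eu for w, eu in zip(weights, eus))
v_averse = phi_inv(sum(w * phi(eu) for w, eu in zip(weights, eus)))

ambiguity_premium = v_neutral - v_averse   # welfare cost of ambiguity, in utils
```

Because phi is strictly concave, the averse value lies strictly below the neutral one, and the gap is a simple numerical counterpart of the ambiguity premium the chapter defines.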
|
333 |
Essays on the macroeconomic implications of information asymmetries. Malherbe, Frédéric, 02 September 2010.
Throughout this dissertation I propose to walk the reader through several macroeconomic implications of information asymmetries, with a special focus on financial issues. This exercise is mainly theoretical: I develop stylized models that aim at capturing macroeconomic phenomena such as self-fulfilling liquidity dry-ups, the rise and fall of securitization markets, and the creation of systemic risk.

The dissertation consists of three chapters. The first proposes an explanation for self-fulfilling liquidity dry-ups. The second proposes a formalization of the concept of market discipline and an application to securitization markets as risk-sharing mechanisms. The third offers a complementary analysis to the second, as the rise of securitization is presented as bankers' optimal response to strict capital constraints.

Two concepts that do not have unique definitions in economics play a central role in these models: liquidity and market discipline. The liquidity of an asset refers to the ability of its owner to transform it into current consumption goods. Secondary markets for long-term assets thus play an important role in that respect. However, such markets might be illiquid due to adverse selection.

In the first chapter, I show that: (1) when agents expect a liquidity dry-up on such markets, they optimally choose to self-insure through the hoarding of non-productive but liquid assets; (2) this hoarding behavior worsens adverse selection and dries up market liquidity; (3) such liquidity dry-ups are Pareto-inefficient equilibria; (4) the government can rule them out. Additionally, I show that idiosyncratic liquidity shocks à la Diamond and Dybvig have stabilizing effects, which is at odds with the banking literature. The main contribution of the chapter is to show that market breakdowns due to adverse selection are highly endogenous to past balance-sheet decisions.

I consider that agents are under market discipline when their current behavior is influenced by future market outcomes. A key ingredient for market discipline to be at play is that the market outcome depends on information that is observable but not verifiable (that is, information that cannot be proved in court and, consequently, upon which enforceable contracts cannot be based). In the second chapter, after introducing this novel formalization of market discipline, I ask whether securitization really contributes to better risk-sharing: I compare it with other mechanisms that differ in the timing of risk transfer. I find that for securitization to be an efficient risk-sharing mechanism, it requires market discipline to be strong and adverse selection not to be severe. This seems to seriously restrict the set of assets that should be securitized for risk-sharing motives. Additionally, I show how ex-ante leverage may mitigate interim adverse selection in securitization markets and therefore enhance ex-post risk-sharing. This is interesting because high leverage is usually associated with "excessive" risk-taking.

In the third chapter, I consider risk-neutral bankers facing strict capital constraints; their capital is required to cover worst-case-scenario losses. In such a set-up, I find that: (1) bankers' optimal autarky response is to diversify lower-tail risk and maximize leverage; (2) securitization helps to free up capital and increase leverage, but distorts incentives to screen loan applicants properly; (3) market discipline mitigates this problem, but if it is overestimated by the supervisor, it leads to excess leverage, which creates systemic risk. Finally, I consider opaque securitization and show that the supervisor (4) faces uncertainty about the trade-off between the size of the economy and the probability and severity of a systemic crisis, and (5) can generally not set capital constraints at the socially efficient level. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
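The adverse-selection breakdown behind chapter one's dry-ups can be caricatured with an Akerlof-style toy market (all numbers invented): asset quality q is uniform on [0, 1], an owner sells iff q <= p, so the traded pool has mean quality min(p, 1)/2, and competitive buyers bid m times that mean. Iterating this price map shows when the market unravels.

```python
# Akerlof-style toy version of a secondary-market dry-up (numbers invented).
def market_price(m, iters=200, p0=1.0):
    """Iterate the price map p -> m * E[q | q <= p] from an optimistic start.

    Quality q ~ U[0, 1]; an owner sells iff q <= p, so the traded pool has
    mean quality min(p, 1) / 2, and competitive buyers bid m times that.
    """
    p = p0
    for _ in range(iters):
        p = m * min(p, 1.0) / 2.0
    return p

dried_up = market_price(1.5)   # m < 2: each round worsens selection; p -> 0
liquid = market_price(2.5)     # m > 2: all assets trade at the pool's value
```

When the buyers' valuation edge m is below 2, every candidate price attracts only a pool worth less than that price, and the market collapses to zero trade; the chapter's point is that balance-sheet choices made in anticipation of such a collapse make it more likely.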
|
334 |
Essays on macroeconomics and finance. Emiris, Marina, January 2006.
Doctorat en Sciences politiques et sociales / info:eu-repo/semantics/nonPublished
|
335 |
An econometric analysis of the impact of imports on inflation in Namibia. Shilongo, Fillemon.
This study investigated the impact of import prices on inflation in Namibia, using quarterly time series data over the period 1998Q2-2017Q4. The variables used in the study are the inflation rate, M2, real GDP and import prices. The study found that all the variables are integrated of order one, I(1), and a Johansen test found no cointegration among them. The model was therefore analysed as a vector autoregression (VAR) estimated by ordinary least squares (OLS), together with Granger causality tests and impulse response functions. The results revealed that import prices Granger-cause inflation at the 1% level of significance, and that inflation is also Granger-caused by real GDP, while broad money supply (M2) does not Granger-cause inflation. The study further revealed that shocks to import prices are significant in explaining the variation in inflation in both the short run and the long run. / Economics / M. Com. (Economics)
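The impulse-response logic can be sketched with a stylized bivariate VAR(1); the coefficient matrix below is invented for illustration and is not the study's estimate. For y_t = A y_{t-1} + e_t, the response at horizon h to a shock vector s is simply A^h s.

```python
import numpy as np

# Stylized VAR(1) for [import-price inflation, CPI inflation]; the coefficient
# matrix A is illustrative, not the study's estimate.
A = np.array([[0.5, 0.0],
              [0.4, 0.3]])
shock = np.array([1.0, 0.0])   # one-unit import-price shock at horizon 0

# Impulse responses at horizons 0..8: irf[h] = A^h @ shock
irf = [shock]
for _ in range(8):
    irf.append(A @ irf[-1])
irf = np.array(irf)

pass_through = irf[:, 1]   # CPI-inflation response at each horizon
```

With these numbers the CPI response is zero on impact, peaks one quarter after the import-price shock, and then dies out as the (stable) eigenvalues of A shrink the responses toward zero.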
|
336 |
Specification and estimation of the price responsiveness of alcohol demand: a policy analytic perspective. Devaraj, Srikant, 13 January 2016.
Indiana University-Purdue University Indianapolis (IUPUI) / Accurate estimation of alcohol price elasticity is important for policy analysis, e.g., determining optimal taxes and projecting revenues generated from proposed tax changes. Several approaches to specifying and estimating the price elasticity of demand for alcohol can be found in the literature. There are two keys to policy-relevant specification and estimation of alcohol price elasticity. First, the underlying demand model should take account of alcohol consumption decisions at the extensive margin, i.e., individuals' decisions to drink or not, because the price of alcohol may affect the drinking initiation decision, and the decision to drink is likely to be structurally different from the decision of how much to drink (the intensive margin). Second, the modeling of alcohol demand elasticity should yield both theoretical and empirical results that are causally interpretable.

The elasticity estimates obtained from the existing two-part model take into account the extensive margin, but are not causally interpretable. The elasticity estimates obtained using aggregate-level models, by contrast, are causally interpretable, but do not explicitly take the extensive margin into account. There currently exists no specification and estimation method for alcohol price elasticity that both accommodates the extensive margin and is causally interpretable. I explore additional sources of bias in the extant approaches to elasticity specification and estimation: (1) the use of logged (vs. nominal) alcohol prices; and (2) implementation of unnecessarily restrictive assumptions underlying the conventional two-part model. I propose a new approach to elasticity specification and estimation that meets the two key requirements for policy relevance and remedies all such biases. I find evidence of substantial divergence between the new and extant methods using both simulated and real data. Such differences are profound when placed in the context of alcohol tax revenue generation.
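The two-part decomposition of a price elasticity can be sketched as follows; the functional forms and every coefficient are assumptions for illustration, not the dissertation's estimates. The total elasticity of expected consumption is the sum of the extensive-margin (participation) and intensive-margin (conditional consumption) elasticities.

```python
import math

# Illustrative two-part demand model (all coefficients are assumptions):
#   extensive margin:  P(drink)            = logistic(a + b * price)
#   intensive margin:  E[drinks | drinker] = exp(c) * price ** g
a, b = 2.0, -0.15     # participation falls with price
c, g = 1.5, -0.5      # constant conditional (intensive-margin) elasticity

def price_elasticity(price):
    """Total elasticity = extensive-margin + intensive-margin components."""
    p_drink = 1.0 / (1.0 + math.exp(-(a + b * price)))
    extensive = b * price * (1.0 - p_drink)   # d ln P(drink) / d ln price
    intensive = g                             # d ln E[y | y > 0] / d ln price
    return extensive + intensive, extensive, intensive

total, ext, inten = price_elasticity(10.0)
```

At this price point the participation margin contributes roughly as much as the conditional-demand margin, which is why ignoring the extensive margin understates how price-responsive total consumption is.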
|
337 |
The development of optimal composite multiples models for the performance of equity valuations of listed South African companies: an empirical investigation. Nel, Willem Soon, 09 October 2014.
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: The practice of combining single-factor multiples (SFMs) into composite multiples
models is underpinned by the theory that various SFMs carry incremental information,
which, if encapsulated in a superior value estimate, largely eliminates biases and
errors in individual estimates. Consequently, the chief objective of this study was to establish whether combining single value estimates into an aggregate estimate provides a superior value estimate vis-à-vis single value estimates.
It is envisaged that this dissertation will provide a South African perspective, as an
emerging market, to composite multiples modelling and the multiples-based equity
valuation theory on which it is based. To this end, the study included 16 SFMs, based
on value drivers representing all of the major value driver categories, namely
earnings, assets, dividends, revenue and cash flows.
The validation of the research hypothesis hinged on the results obtained from the
initial cross-sectional empirical investigation into the factors that complicate the
traditional multiples valuation approach. The main findings from the initial analysis,
which subsequently directed the construction of the composite multiples models, were
the following: Firstly, the evidence suggested that, when constructing multiples, multiples whose
peer groups are based on a combination of valuation fundamentals perform more
accurate valuations than multiples whose peer groups are based on industry
classifications. Secondly, the research results confirmed that equity-based multiples
produce more accurate valuations than entity-based multiples. Thirdly, the research
findings suggested that multiples models that are constructed on earnings-based
value drivers, especially headline earnings (HE), offer higher degrees of valuation accuracy than multiples models that are constructed on dividend-, asset-, revenue- or cash flow-based value drivers.
The results from the initial cross-sectional analysis were also subjected to an industry
analysis, which both confirmed and contradicted the initial cross-sectional-based
evidence. The industry-based research findings suggested that both the choice of optimal Peer Group Variable (PGV) and the choice of optimal value driver are
industry-specific.
As with the initial cross-sectional analysis, earnings-based value drivers dominated
the top positions in all 28 sectors that were investigated, while HE was again
confirmed as the most accurate individual driver.
However, the superior valuation performance of multiples whose peer groups are based on a combination of valuation fundamentals, as deduced from the cross-sectional analysis conducted earlier, did not hold when subjected to an industry analysis, suggesting that peer group selection methods are industry-specific.
From this evidence, it was possible to construct optimal industry-specific SFM models, which could then be compared to industry-specific composite models. The
evidence suggested that composite-based modelling offered, on annual average,
between 20.21% and 44.59% more accurate valuations than optimal SFMs modelling
over the period 2001 to 2010.
The research results suggest that equity-based composite modelling may offer substantial gains in precision over SFM modelling. These gains are, however, industry-specific, and a carte blanche application thereof is ill-advised. Therefore, since investment practitioners' reports typically include various multiples, it seems prudent to consider the inclusion of composite models as a more accurate alternative. / AFRIKAANSE OPSOMMING: Die praktyk om Enkelfaktor Veelvoude (EFVe) te kombineer in saamgestelde
veelvoudmodelle word ondersteun deur die teorie dat verskillende EFVe oor
inkrementele inligting beskik, wat, indien dit in ’n superieure waardeskatting
opgeneem word, grootliks vooroordele en foute in individuele skattings elimineer.
Gevolglik was die hoofdoel van hierdie studie om vas te stel of die kombinering van
verskeie enkelfaktor waardeskattings in ’n totale waardeskatting ’n superieure waardeskatting sal verskaf vis-à-vis enkelfaktor waardeskattings.
Dit word voorsien dat hierdie proefskrif ’n Suid-Afrikaanse perspektief, as ’n
ontluikende mark, sal bied aangaande saamgestelde veelvoudmodellering en die
veelvoud-gebaseerde ekwiteitswaardasie-teorie waarop dit gebaseer is. Hiermee ten
doel, sluit hierdie studie 16 EFVe in, gebaseer op waardedrywers wat al die
vernaamste waardedrywerkategorieë, naamlik verdienste, bates, dividende, omset en
kontantvloeie, verteenwoordig.
Die bevestiging van die navorsingshipotese is afhanklik van die resultate soos bekom
vanuit die aanvanklike dwarsdeursnee-empiriese ondersoek na die faktore wat die
tradisionele veelvoudwaardasieproses kompliseer. Die hoofbevindinge van die
aanvanklike ontleding, wat daarna rigtinggewend was vir die komposisie van die
saamgestelde veelvoudmodelle, was die volgende: Eerstens, dui die bewyse daarop dat, wanneer veelvoude saamgestel word,
veelvoude waarvan die portuurgroepe op ’n kombinasie van fundamentele waardasie-veranderlikes
gebaseer is, meer akkurate waardasies lewer as veelvoude waarvan
die portuurgroepe op industrie-klassifikasies gebaseer is. Tweedens, het die
navorsingsresultate bevestig dat ekwiteitsgebaseerde veelvoude meer akkurate
waardasies lewer as entiteitsgebaseerde veelvoude. Derdens, toon die
navorsingsbevindinge dat veelvoudmodelle wat saamgestel word uit verdienste-gebaseerde
waardedrywers, veral wesensverdienste (WV), hoër grade van
waardasie-akkuraatheid bied in vergelyking met veelvoudmodelle wat saamgestel
word uit dividend-, bate-, omset- of kontantvloei-gebaseerde waardedrywers. Die resultate van die aanvanklike dwarsdeursnee-ontleding is ook onderwerp aan ’n
industrie-ontleding, wat die aanvanklike bevindinge van die dwarsdeursnee-ontleding
beide bevestig en weerspreek het. Die bevindinge vanaf die industrie-ontleding dui
daarop dat beide die keuse van optimale Portuurgroepveranderlike (PGV) en die
optimale keuse van waardedrywer, industrie-spesifiek is.
Soos met die aanvanklike dwarsdeursnee-ontleding, het verdienste-gebaseerde
waardedrywers die top posisies by al 28 sektore wat ondersoek is, gedomineer, terwyl
WV weer as die akkuraatste individuele waardedrywer bevestig is.
Die superieure waardasie-resultate van veelvoude waarvan die portuurgroepe
gebaseer was op ’n kombinasie van fundamentele waardasie-veranderlikes, soos
afgelei uit die aanvanklike dwarsdeursnee-ontleding, het egter nie dieselfde resultate
gelewer op ’n per sektor basis nie, wat aandui dat portuurgroep seleksiemetodes
industrie-spesifiek is.
Vanuit hierdie bevindinge was dit moontlik om optimale EFV-modelle saam te stel,
wat dan vergelyk kon word met industrie-spesifieke saamgestelde veelvoudmodelle.
Die bevindinge het voorgestel dat saamgestelde modellering gemiddeld jaarliks,
tussen 20.21% en 44.59% meer akkurate waardasies gelewer het as optimale EFV-modellering
oor die tydperk 2001 tot 2010. Die navorsingsresultate dui aan dat ekwiteitsgebaseerde saamgestelde modellering
aansienlike toenames in waardasie-akkuraatheid mag bewerkstellig bo dié van EFV-modellering.
Hierdie toenames is egter industrie-spesifiek en ’n carte blanche
toepassing daarvan is nie aan te beveel nie. Gevolglik, aangesien
beleggingspraktisyns se verslae tipies verskeie veelvoude insluit, blyk dit redelik om
die insluiting van saamgestelde modelle as ’n meer akkurate alternatief te oorweeg.
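The composite-multiples idea in this abstract can be sketched in a few lines; the peer multiples and per-share fundamentals below are invented, and the equal-weighted mean is only one possible combination scheme (the thesis searches for optimal industry-specific combinations).

```python
import statistics

# Hypothetical per-share value drivers for the target firm and peer-group
# multiples (all numbers invented for illustration).
target = {"HE": 12.0, "sales": 150.0, "book": 80.0}
peer_multiples = {
    "P/HE":    [8.0, 9.0, 10.0],    # price / headline earnings
    "P/Sales": [0.9, 1.0, 1.1],
    "P/Book":  [1.4, 1.5, 1.6],
}
driver_for = {"P/HE": "HE", "P/Sales": "sales", "P/Book": "book"}

# Single-factor value estimate: median peer multiple times the target's driver
single_estimates = {
    name: statistics.median(ms) * target[driver_for[name]]
    for name, ms in peer_multiples.items()
}

# Composite estimate: here an equal-weighted mean of the single-factor
# estimates; the thesis's models weight and select these per industry.
composite = statistics.mean(single_estimates.values())
```

The intuition from the abstract is that each single-factor estimate carries incremental information, so averaging them tends to dampen the idiosyncratic error of any one multiple.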
|
338 |
Macroeconomic variables and the stock market: an empirical comparison of the US and Japan. Humpe, Andreas, January 2008.
In this thesis, extensive research regarding the relationship between macroeconomic variables and the stock market is carried out. For this purpose the two largest stock markets in the world, namely the US and Japan, are chosen. As a proxy for the US stock market we use the S&P 500 and for Japan the Nikkei 225. Although there are many empirical investigations of the US stock market, research on Japan has lagged behind. In particular, the severe boom-and-bust sequence in Japan is unique in the developed world in recent economic history, and it is important to shed more light on the causes of this development. First, we investigate the long-run relationship between selected macroeconomic variables and the stock market in a cointegration framework. As expected, we can support existing findings for the US, whereas Japan does not follow the same relationships. Further econometric analysis reveals a structural break in Japan in the early 1990s. Before that break, the long-run relationship is comparable to the US, whereas after the break the relationship breaks down. We believe that a liquidity trap in a deflationary environment might have caused the normal relationship to break down. Second, we enlarge the variable set and apply a non-linear estimation technique to investigate non-linear behaviour between macroeconomic variables and the stock market. We find the non-linear models to have better in-sample and out-of-sample performance than the corresponding linear models. Third, we test a particular non-linear model in which noise traders interact with arbitrage traders in the dividend yield for the US and Japanese stock markets. A two-regime switching model is supported, with an inner random-walk or momentum regime and an outer mean-reversion regime. Overall, we recommend that investors and policymakers be aware that a liquidity trap in a deflationary environment could also cause a severe downturn in the US if appropriate measures are not implemented.
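The two-regime dividend-yield idea can be caricatured with a hard threshold rule: (near) random-walk behaviour inside an inner band around the long-run yield, mean reversion outside it. The parameters and the sharp threshold are illustrative assumptions; the thesis's own specification differs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-regime dividend-yield process (all parameter values invented).
mu, band = 3.0, 0.5    # long-run dividend yield and inner-regime half-width
phi = 0.8              # persistence of deviations in the outer regime

def step(y, eps):
    if abs(y - mu) <= band:
        return y + eps                  # inner regime: momentum / random walk
    return mu + phi * (y - mu) + eps    # outer regime: arbitrageurs dominate

y = 6.0                # start far above fundamental value
path = [y]
for _ in range(50):
    y = step(y, rng.normal(0.0, 0.05))
    path.append(y)
```

Starting from a large mispricing, the outer regime pulls the yield back toward the band, after which the series wanders locally: the qualitative pattern the switching model is meant to capture.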
|
339 |
Optimal monetary and fiscal policy in economies with multiple distortions. Horvath, Michal, January 2008.
This thesis aims to contribute towards a better understanding of the optimal coordination of monetary and fiscal policy in complex economic environments. We analyze the characteristics of optimal dynamics in an economy in which neither prices nor wages adjust instantaneously and lump-sum taxes are unavailable as a source of government finance. We then propose that monetary and fiscal policy should be coordinated to satisfy a pair of simple 'specific targeting rules': a rule for inflation and a rule for the growth of real wages. We show that such simple rule-based conduct of policy can do remarkably well in replicating the dynamics of the economy under optimal policy following a given shock. We study optimal policy coordination in the context of an economy where a constant proportion of agents lacks access to the asset market. We find that the optimal economy moves along an analogue of a conventional inflation-output variance frontier in response to a government spending shock, as the population share of non-Ricardian agents rises: the optimal output response rises, while inflation volatility subsides. There is little evidence that increased government spending would crowd in private consumption in the optimal economy. We investigate the optimal properties and wider implications of a macroeconomic policy framework aimed at meeting an unconditional debt target. We show that the best stationary policy in terms of an unconditional welfare measure is characterized by highly persistent debt dynamics, less history-dependence in the conduct of policy, less reliance on debt finance and more short-term volatility following a government spending shock compared with the non-stationary 'timelessly optimal' plan.
|
340 |
The Price of Uranium: an Econometric Analysis and Scenario Simulations. Kroén, Johannes, January 2019.
The purpose of this thesis is to analyze (a) the determinants of the global price of uranium, and (b) how this price could be affected by different nuclear power generation scenarios for 2030. A multivariable regression analysis is used: the price of uranium is the dependent variable, and the independent variables are generated nuclear electricity (GWh), representing demand; the price of coal, as a substitute for nuclear electricity; and the price of oil, representing uranium production costs. The empirical results show that generated nuclear electricity and the oil price are statistically significant at the 5 percent level; the coal price, however, is not statistically significant. The scenarios for 2030 comprise three possible nuclear power generation demand cases: high, medium and low. The high-demand scenario generated a price of 255 US$/kg and the medium-demand scenario 72 US$/kg.
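A minimal sketch of the regression-then-scenario exercise on synthetic data; all numbers, including the data-generating coefficients, are invented and are not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic annual data standing in for the thesis's variables.
n = 40
gen = rng.normal(2500.0, 200.0, n)    # nuclear generation (demand proxy)
oil = rng.normal(60.0, 15.0, n)       # oil price (production-cost proxy)
coal = rng.normal(80.0, 10.0, n)      # coal price (substitute fuel)
noise = rng.normal(0.0, 5.0, n)
price = 0.04 * gen + 0.5 * oil + 0.0 * coal + noise   # uranium price, US$/kg

# Multivariable OLS regression: price on a constant and the three drivers
X = np.column_stack([np.ones(n), gen, oil, coal])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

def scenario_price(gen_s, oil_s, coal_s):
    """Fitted uranium price under a hypothetical 2030 scenario."""
    return float(beta @ np.array([1.0, gen_s, oil_s, coal_s]))

high_demand = scenario_price(3200.0, 60.0, 80.0)
low_demand = scenario_price(2000.0, 60.0, 80.0)
```

Holding the fuel prices fixed and varying only generated nuclear electricity reproduces the structure of the thesis's high/medium/low demand scenarios: a positive demand coefficient maps higher 2030 generation into a higher projected price.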
|