31

Estudo comparativo dos modelos de value-at-risk para instrumentos pré-fixados. / A comparative study of value-at-risk models for fixed rate instruments.

Paulo Kwok Shaw Sain 07 August 2001
Value-at-Risk (VaR) has become the primary tool for systematically measuring and monitoring market risk in most financial institutions, including in Brazil. Among its advantages, VaR summarizes in a single number the market risks incurred, incorporating both the institution's exposure and the market's volatility. The main purpose of this work is to evaluate the performance of the two best-known value-at-risk models, RiskMetrics(TM) and Historical Simulation, in measuring the market risk of fixed-income portfolios composed of fixed-rate instruments denominated in Brazilian reais. In the scope of capital allocation for regulatory compliance, the study also extends briefly to the model adopted by the Central Bank of Brazil. The work further discusses the advantages and disadvantages of each model, as well as the impact that the peculiarities of the Brazilian market have on the assumptions underlying them.
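As a rough sketch of the two models compared in this thesis (not its actual implementation), the following Python computes a one-day VaR for a return series under both the RiskMetrics-style EWMA approach, with the conventional decay factor of 0.94, and plain Historical Simulation; the return series and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def riskmetrics_var(returns, confidence=0.95, lam=0.94):
    """Parametric one-day VaR with an EWMA volatility forecast."""
    var_t = returns[0] ** 2                        # seed the recursion
    for r in returns[1:]:
        var_t = lam * var_t + (1 - lam) * r ** 2   # EWMA variance update
    z = norm.ppf(1 - confidence)                   # e.g. -1.645 at 95%
    return -z * np.sqrt(var_t)                     # VaR as a positive fraction

def historical_var(returns, confidence=0.95):
    """Non-parametric VaR: empirical quantile of the return history."""
    return -np.percentile(returns, 100 * (1 - confidence))

rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, 500)                    # illustrative daily returns
print(riskmetrics_var(rets), historical_var(rets))
```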
32

Can Duration -- Interest Rate Risk -- and Convexity Explain the Fractional Price Change and Market Risk of Equities?

Cheney, David L. 01 May 1993
In the last two decades, duration analysis has been applied largely to fixed-income securities. However, since rising and falling interest rates have been found to be a major cause of stock price movements, equity duration has received a great deal of attention. The duration of an equity is a measure of its interest rate risk: it is the sensitivity of the equity's price with respect to the interest rate, while convexity is the sensitivity of duration with respect to the interest rate. The analysis revealed that the fractional price change and market risk of equities can be explained by duration and convexity.
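The price-change relation the abstract invokes is the standard second-order Taylor approximation; a minimal sketch with illustrative numbers, not the study's data:

```python
def fractional_price_change(duration, convexity, dy):
    """Second-order approximation: dP/P ~= -D*dy + 0.5*C*dy**2."""
    return -duration * dy + 0.5 * convexity * dy ** 2

# Example: duration 7, convexity 90, rates rise by 50 basis points
print(fractional_price_change(7.0, 90.0, 0.005))   # about -0.0339, i.e. -3.39%
```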
33

Exploring the Feasibility of Replicating SPAN-Model's Required Initial Margin Calculations using Machine Learning : A Master Thesis Project for Intraday Margin Call Investigation in the Commodities Market

Branestam, Clara, Sandgren, Amanda January 2023
Machine learning is a rapidly growing field within artificial intelligence that an increasing number of individuals and corporations are beginning to utilize, and the financial sector has recently started to recognize the potential of these techniques and methods. Nasdaq Clearing manages the clearing business for the clearinghouse's members, and the objective of this thesis has been to explore the possibility of using machine learning to replicate a subpart of the SPAN model's margin call calculations, known as initial margin, in the commodities market. Replicating SPAN's initial margin calculations opens up possibilities for creating transparency and understanding of how the input variables affect the output; in the long run, we hope to broaden the insights into how machine learning can be used within the margin call process. Various machine learning algorithms, primarily for regression tasks but also a few for classification, were employed to replicate the initial margin size. The primary objective of the methodology was to determine which algorithm performed best, that is, which one predicted values closest to the actual initial margin values. The findings revealed that a model combining classification and regression, built on non-parametric algorithms such as Random Forest and KNN, performed best in both cases. Our conclusion is that the developed model can effectively compute the size of the initial margin and thus accomplishes its objective.
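A minimal sketch of the combined classification-plus-regression setup the thesis describes, using synthetic data; the features, the bucket split, and all names are illustrative assumptions, not Nasdaq's actual SPAN inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # placeholder position features
y = np.abs(X @ rng.normal(size=6)) + 0.1        # placeholder initial margin

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: classify margins into a coarse size bucket (here: above/below median)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr > np.median(y_tr))
# Stage 2: regress the margin size itself
reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

pred = reg.predict(X_te)
print(np.mean(np.abs(pred - y_te)))             # mean absolute replication error
```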
34

Impact of Corporate Governance Mechanisms on Total, Systematic, Market, and Insolvency Risk of Fintech

Randombage, Sandun, Ramesh, Sudharshani January 2023
Corporate governance practices of fintech companies can either increase or decrease risk. This study focuses on identifying the impact of corporate governance mechanisms, especially board structure and ownership structure, on the market-based risk of fintech companies. We employ several corporate governance mechanisms, such as board size, board independence, board fintech expertise, CEO duality, risk committee functioning, institutional ownership, and managerial ownership. Total risk, systematic risk, market risk, and insolvency risk serve as the dependent variables. We selected 46 fintech companies listed on stock markets around the world, collected data for the 2012-2022 period, and conducted the analysis on an unbalanced panel of 369 observations. Our purpose was to emphasize the importance of sound corporate governance mechanisms for risk management in fintech companies: from the point of view of managers, investors, or directors, what changes should be made to improve the company's risk management? In the recent past, two blue-chip fintech companies have gone bankrupt due to corporate governance malpractice and risk management failures. Our results show that corporate governance is one of the key factors determining the risk of fintech companies: good practices tend to decrease risk, while poor practices increase it.
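A sketch of the kind of panel regression such a study typically runs; the data file, variable names, and clustering choice are illustrative assumptions, not taken from the thesis:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fintech_panel.csv")  # hypothetical 2012-2022 firm-year panel
model = smf.ols(
    "total_risk ~ board_size + board_independence + ceo_duality"
    " + risk_committee + institutional_ownership + managerial_ownership"
    " + C(year)",                      # year fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.summary())
```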
35

Is Value-at-Risk (VaR) a Fair Proxy for Market Risk Under Conditions of Market Leverage?

Lang, Todd M. 29 December 2000
Ex-post intraday market-risk extrema are compared with ex-ante standard RiskMetrics parametric Value-at-Risk (VaR) limits for three foreign currency futures markets (British Pound, Japanese Yen, Swiss Franc) to determine whether forecasted volatility of market returns based on settlement price data provides a valid proxy for short-term market risk independent of market leverage. Intraday violations of ex-ante one-day VaR limits at the 95% confidence level should occur on less than 5% of market days. Violation frequencies for each of the markets tested are shown to occur well in excess of this 5% tolerance level: 9.54% for the British Pound, 7.09% for the Japanese Yen, and 7.79% for the Swiss Franc futures markets. Thus, it is empirically demonstrated that VaR is a poor proxy for short-term market risk under conditions of market leverage. Implications for managing (measuring, monitoring, controlling), reporting, and regulating financial market risk are discussed. / Master of Arts
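The backtest logic here is a simple exceedance count; the sketch below uses a one-sided binomial test to judge whether a violation rate such as 9.54% is significantly above the 5% tolerance. The test choice is an assumption for illustration, since the thesis may assess significance differently.

```python
import numpy as np
from scipy.stats import binomtest

def violation_rate(losses, var_limits):
    """Fraction and count of days whose realized loss exceeds the VaR limit."""
    hits = np.asarray(losses) > np.asarray(var_limits)
    return hits.mean(), int(hits.sum())

# e.g. 95 violations in 1000 days (9.5%) against a 5% expected rate
print(binomtest(95, 1000, 0.05, alternative="greater").pvalue)  # tiny p-value
```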
36

Two Essays on Asset Pricing

Hur, Jungshik 01 May 2007
This dissertation consists of two chapters. The first chapter shows that the measurement errors in betas for stocks induce corresponding measurement errors in alphas and a spurious negative covariance between the estimated betas and alphas across stocks. This negative covariance results in a violation of the independence assumption between the independent variable (betas) and the error terms in the Fama-MacBeth regressions used to test the CAPM, thereby creating a downward bias in the estimated market risk premiums. The procedure of using portfolio returns and betas does not necessarily eliminate this bias: depending upon the grouping variable used to form portfolios, the negative covariance between estimated betas and alphas can be increased, decreased, or even made positive. This paper proposes two methods for correcting the downward bias in the estimated market risk premium; after the proposed corrections, the estimated market risk premiums are consistent with the CAPM. The second chapter provides evidence that when the ex-post market risk premium is positive (up markets), the relation between returns and betas is positive, significant, and consistent with the CAPM. However, when the ex-post market risk premium is negative (down markets), the negative relation between betas and returns is significant, but stronger than what is implied by the CAPM. This strong negative relation offsets the positive relation, resulting in an insignificant relation between returns and betas for the overall period. The negative relation between size and returns, after controlling for beta differences, is present only when the ex-post market risk premium is negative, and is responsible for the negative relation for the overall period. This paper decomposes the negative relation between size and returns after controlling for beta differences into the intercept size effect (relation between alphas of stocks and their size) and the residual size effect (relation between residuals of stocks and their size). The asymmetrical size effect between up and down markets is driven by the residual size effect. Long-term mean reversion in returns explains, in part, the negative relation between size and returns during down markets. / Ph. D.
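A small Monte Carlo makes the first chapter's attenuation argument concrete; this is an illustrative sketch with made-up parameters, not the dissertation's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks, true_premium = 500, 0.06
beta = rng.uniform(0.5, 1.5, n_stocks)             # true betas
beta_hat = beta + rng.normal(0, 0.3, n_stocks)     # betas measured with error
ret = 0.02 + true_premium * beta + rng.normal(0, 0.05, n_stocks)

# Cross-sectional (second-pass) regression of returns on estimated betas
slope = np.cov(beta_hat, ret)[0, 1] / np.var(beta_hat, ddof=1)
print(slope)   # attenuated well below the true 6% premium
```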
37

Development of value at risk measures: towards an extreme value approach

Ganief, Moegamad Shahiem 12 1900
Thesis (MBA)--Stellenbosch University, 2001. / Commercial banks, investment banks, insurance companies, non-financial firms, and pension funds hold portfolios of assets that may include stocks, bonds, currencies, and derivatives. Each institution needs to quantify the amount of risk its portfolio is exposed to in the course of a day, week, month, or year. Extreme events in financial markets, such as the stock market crash of October 1987, are central issues in finance, particularly in risk management and financial regulation. A method called value at risk (VaR) can be used to estimate market risk. VaR is a powerful measure of risk that is gaining wide acceptance among institutions for the management of market risk: it is an estimate of the largest loss that a portfolio is likely to suffer during all but truly exceptional periods. More precisely, VaR is the loss that an institution can be confident will be exceeded only a certain fraction of the time over a particular horizon. The power of the concept lies in its generality: VaR measures are applicable to entire portfolios, encompassing many asset categories and multiple sources of risk. As with its power, the challenge of calculating VaR also stems from its generality: to measure risk in a portfolio using VaR, some means must be found for determining a return distribution for the portfolio. A wide range of literature exists on different methods of implementing VaR, yet when one attempts to apply the results, several questions remain open. For example, given a VaR measure, how can the risk manager test that the particular measure at hand is appropriately specified? And given two different VaR measures, how can the risk manager pick the better one? Despite the popularity of VaR for measuring market risk, no consensus has yet been reached as to the best method of implementing this risk measure, in part because each method currently in use has significant drawbacks. The aim of this project is threefold: to introduce the reader to the concept of VaR; to present the theoretical basis for the general approaches to VaR computation; and to introduce and apply extreme value theory to VaR calculations. The general approaches to VaR computation fall into three categories: the analytic (parametric) approach, the historical simulation approach, and the Monte Carlo simulation approach. Each of these approaches has strengths and weaknesses, which are studied more closely. The extreme value approach to VaR calculation is relatively new. Since most observed returns are central ones, traditional VaR methods tend to ignore extreme events and focus on risk measures that accommodate the whole empirical distribution of central returns. The danger of this approach is that these models are prone to fail just when they are needed most: in large market moves, when institutions can suffer very large losses. The extreme value approach attempts to provide the user with the best possible estimate of the tail area of the distribution. Even in the absence of useful historical data, extreme value theory provides guidance on the kind of distribution that should be selected so that extreme risks are handled conservatively. As an illustration, the extreme value method is applied to a foreign exchange futures contract. The validity of EVT for VaR calculations is tested by examining data on Rand/Dollar one-year futures contracts. An extended worked example is provided which attempts to highlight the considerable strengths of the method as well as its pitfalls and limitations. These results are compared to VaR measures calculated using a GARCH(1,1) model.
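A sketch of the peaks-over-threshold calculation at the heart of the extreme value approach, using scipy's Generalized Pareto fit; the threshold, confidence level, and data are illustrative, and the quantile formula assumes a non-zero shape parameter:

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(losses, confidence=0.99, threshold_q=0.95):
    """Peaks-over-threshold VaR: fit a GPD to excesses over a high threshold."""
    losses = np.asarray(losses)
    u = np.quantile(losses, threshold_q)             # high threshold
    excesses = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(excesses, floc=0)   # shape and scale (loc fixed)
    n, n_u = len(losses), len(excesses)
    # Standard POT quantile formula (valid for xi != 0)
    return u + (sigma / xi) * ((n / n_u * (1 - confidence)) ** (-xi) - 1)

rng = np.random.default_rng(2)
sample = rng.standard_t(df=4, size=2000) * 0.01      # heavy-tailed loss proxy
print(evt_var(sample))
```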
38

A model to investigate the impact of flooding on the vulnerability of value of commercial properties

Bhattacharya, Namrata January 2014
Flooding can have a significant impact on the value of properties, depending on their level of inherent vulnerability. Experts argue that it is not the actual risk but the perception of risk among property holders that influences the vulnerability of value. The hypothesis that a changing perception of flood risk can make property value vulnerable in the market is the main focus of the research. This dimension has received little attention in the commercial property literature; the existing knowledge base on flooding and property value has been largely concerned with residential properties. A conceptual understanding of the extent and scale of the effect of flooding on the vulnerability of commercial property value would be worthwhile for relevant stakeholders. The research methodology follows a quantitative approach applied sequentially: literature review, conceptual model generation, and data collection from primary and secondary sources via a remote questionnaire survey of selected study areas in the UK. The conceptual model was operationalised through analysis and interpretation of the collected data and finally cross-validated with secondary data obtained from commercial real estate experts. The strength of this research lies in its conceptualisation of property value in the context of flood vulnerability, providing innovative conceptual insight into business vulnerability and the vulnerability of value. The variables contributing to vulnerability were ranked hierarchically using both the collected data and deductive methods. The analysis of impact and recovery patterns emphasized that, within the commercial sector, the indirect effects of flooding should be given equal importance to direct damages. The implication of perception for the vulnerability of property value showed a slightly different picture from business vulnerability in the chosen study areas when differentiated by flood experience. In a nutshell, the study found that the commercial property sector does not treat flooding as a priority, in part because of differential attitudes toward risk among the flood-plain population, based on their knowledge and experience of flooding. The perception of stakeholders toward the vulnerability of value can change with increasing magnitude and severity of floods, and it is possible that the implications for the market value of commercial properties will become visible in the future. Practitioners and researchers will find this study useful in developing an understanding of the vulnerability of commercial property value in the context of changing flood risk.
39

Corporate Social Responsibility och riskpåverkan : En studie av det sociala ansvarstagandets effekt på risk i Svenska börsbolag / Corporate social responsibility and risk impact: a study of the effect of social responsibility on risk in Swedish listed companies

Elman, Beatrice, Pers, Sebastian January 2016
This study uses a quantitative method to investigate the relationship between corporate social responsibility (CSR) and firm risk in Swedish public companies. Despite previous research on Anglo-Saxon companies with consistent results, the authors found cause for further investigation: differences in the Swedish context could affect the previously found negative relation between CSR and firm risk, legitimizing further examination. The research is built on secondary data collected from Nasdaq, Morningstar, Orbis, and the CSRhub database. Drawing on relevant theory and current research, it develops the hypothesis that as CSR increases, firm risk is reduced, in accordance with previous research. Testing was done with Pearson's bivariate correlation and a multivariate regression analysis controlling for various firm characteristics. The study found no connection between market risk and CSR, but could not determine whether a relationship between CSR and total risk exists within the population, thus only partly rejecting the hypothesis. The study draws attention to how the relation between CSR and risk can differ in a context outside the typical Anglo-Saxon population. It can also serve as a basis for further research on the cause of the missing relation between CSR and market risk in this particular population.
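A sketch of the study's two statistical steps, a bivariate correlation followed by a regression with firm-level controls; the data file and column names are hypothetical stand-ins for the authors' actual variable definitions:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

df = pd.read_csv("swedish_firms.csv")   # hypothetical CSRhub/Nasdaq sample
print(pearsonr(df["csr_score"], df["total_risk"]))   # correlation and p-value
model = smf.ols("market_risk ~ csr_score + firm_size + leverage + book_to_market",
                data=df).fit()
print(model.summary())
```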
40

Stress Testing of the Banking Sector in Emerging Markets: A Case of the Selected Balkan Countries

Vukelić, Tatjana January 2011
Stress testing is a macro-prudential analytical method for assessing the financial system's resilience to adverse events. This thesis describes the methodology of stress tests and illustrates stress testing for credit and market risks on real bank-by-bank data in two Balkan countries: Croatia and Serbia. Credit risk is captured by macroeconomic credit risk models that estimate the default rates of the corporate and household sectors. Setting up the framework for countries that have received little coverage in previous studies and that face limited data availability has been the main challenge of the thesis. The outcome can help reveal possible risks to financial stability, and the methods described can be further developed and applied to other emerging markets that suffer from similar data limitations. JEL Classification: E37, G21, G28. Keywords: banking, credit risk, default rate, macro stress testing, market risk
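A sketch of a macroeconomic credit-risk model of the type described: the logit of sectoral default rates is regressed on macro drivers, then default rates are projected under an adverse scenario. All file names, variables, and scenario values are hypothetical assumptions, not the thesis's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sector_defaults.csv")              # hypothetical quarterly data
df["logit_dr"] = np.log(df["default_rate"] / (1 - df["default_rate"]))
model = smf.ols("logit_dr ~ gdp_growth + interest_rate + fx_depreciation",
                data=df).fit()

# Adverse scenario: GDP contraction, rate spike, currency depreciation
stress = pd.DataFrame({"gdp_growth": [-0.04], "interest_rate": [0.12],
                       "fx_depreciation": [0.20]})
stressed_dr = 1 / (1 + np.exp(-model.predict(stress)))   # back-transform to a rate
print(stressed_dr)
```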
