101

Portfolio Opportunity Distributions (PODs) for the South African market : based on regulation requirements

Nortje, Hester Maria 04 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: In this study Portfolio Opportunity Distributions (PODs) are applied as an alternative performance evaluation method. Traditionally, broad-market indices or peer-group comparisons are used for performance evaluation. These methods, however, have various biases and other problems related to their use. These biases and problems include composition bias, classification bias, concentration, etc. R.J. Surz (1994) introduced PODs in order to eliminate some of these problems. Each fund has its own opportunity set based on its style mandate and constraints. The style mandate of the fund is determined by calculating the fund’s exposure to the nine Surz Style Indices through the use of Returns-Based Style Analysis (RBSA). The indices are created based on the style classification proposed by R.J. Surz (1994). Some adjustments were made to incorporate the unique nature of the South African equity market. The combination of the fund’s exposures to the indices best explains the return that the fund generated. In this study the fund’s constraints are based on the regulation requirements imposed on funds in South Africa by the Collective Investment Schemes Control Act No. 45 of 2002 (CISCA). Thousands of random portfolios are then generated based on the fund’s opportunity set. The return and risk of the simulated portfolios represent the possible investment outcomes that the manager could have achieved given this opportunity set. Together the return and risk of the simulated portfolios represent a range of possible outcomes against which the performance of the fund is compared. It is also possible to assess the skill of the manager: a manager who consistently outperforms most of the simulated portfolios shows skill in selecting the shares to be included in the portfolio and in assigning appropriate weights to these shares. The South African rand depreciated considerably during the period under evaluation, and funds therefore invested large portions of their assets in foreign investments. These investments mostly yielded very high or very low returns compared to the returns available in the domestic equity market, which affected the application of PODs. Although the PODs methodology shows great potential, it is impossible to conclude with certainty from the current data whether it is superior to the traditional methods. / AFRIKAANSE OPSOMMING: In hierdie studie word Portefeulje Geleentheids Verdelings (“PODs”) bekendgestel as ‘n alternatiewe manier om die opbrengste van bestuurders te evalueer. Gewoonlik word indekse en die vergelyking van die fonds met soortgelyke fondse gebruik om fondse te evalueer. Die metodes het egter verskeie probleme wat met die gebruik daarvan verband hou. Die probleme sluit onder andere in: die samestelling en klassifikasie van soortgelyke fondse, die konsentrasie in die mark, ens. R.J. Surz (1994) het dus Portefeulje Geleentheids Verdelings (“PODs”) bekendgestel in ‘n poging om sommige van die probleme te elimineer. Elke fonds het sy eie unieke geleentheids versameling wat gebaseer is op die fonds se styl en enige beperkings wat op die fonds van toepassing is. Die fonds se styl word bepaal deur die fonds se blootstelling aan die nege Surz Styl Indekse te meet met behulp van opbrengs-gebaseerde styl analise (“RBSA”). Die indekse is geskep gebaseer op die metode wat deur R.J. Surz (1994) voorgestel is. 
Daar is egter aanpassings gemaak om die unieke aard van die Suid-Afrikaanse aandele mark in ag te neem. Die kombinasie van die fonds se blootstelling aan die indekse verduidelik waar die fonds se opbrengs vandaan kom. In die navorsingstuk is die beperkings wat van toepassing is op die fonds afkomstig uit die regulasie vereistes wat deur die “Collective Investment Schemes Control Act No. 45 of 2002 (CISCA)” in Suid-Afrika op fondse van toepassing is. Duisende ewekansige portefeuljes word dan gegenereer gebaseer op die fonds se unieke groep aandele waarin die fonds kan belê. Die opbrengs en risiko van die gesimuleerde portefeuljes verteenwoordig al die moontlike beleggings uitkomste wat die fonds bestuurder kon gegenereer het gegewe die fonds se unieke groep aandele waarin dit kon belê. Die opbrengs en risiko van al die gesimuleerde portefeuljes skep saam ‘n verdeling van moontlike beleggings uitkomste waarteen die opbrengs en risiko van die fonds vergelyk word. Hierdie proses maak dit moontlik om die fonds bestuurder se vermoë om beter as meeste van die gesimuleerde portefeuljes te presteer te bepaal. Die aanname kan gemaak word dat ‘n bestuurder wat konsekwent oor tyd beter as meeste van die gesimuleerde portefeuljes presteer oor die vermoë beskik om die regte aandele te kies om in die portefeulje in te sluit en ook die regte gewigte aan die aandele toe te ken. Die Suid-Afrikaanse Rand het heelwat gedepresieer tydens die evaluasie periode en daarom het fondse groot porsies van hul beleggings oorsee belê. Die beleggings het dus of heelwat groter of heelwat kleiner opbrengste gehad in vergelyking met die opbrengste beskikbaar in die plaaslike aandelemark en dit het die toepassing van PODs beïnvloed. PODs toon baie potensiaal, maar dit is egter onmoontlik om met die huidige data stel vas te stel of dit ‘n beter metode is.
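The simulation step described in the abstract above lends itself to a brief illustration. The sketch below is not the author's implementation: it draws random long-only portfolios from a hypothetical opportunity set, applies an illustrative 10% concentration cap as a stand-in for a regulatory constraint (actual CISCA limits differ by instrument and fund type), and reports where a fund's return falls within the resulting Portfolio Opportunity Distribution. All function names, parameters and data are assumptions.

```python
import numpy as np

def simulate_pod(asset_returns, n_sim=10_000, max_weight=0.10, seed=0):
    """Build a Portfolio Opportunity Distribution by drawing random
    long-only portfolios from a fund's opportunity set.

    max_weight is a hypothetical concentration cap standing in for a
    regulatory constraint; actual CISCA limits differ per instrument.
    """
    rng = np.random.default_rng(seed)
    n_assets = len(asset_returns)
    sims = []
    while len(sims) < n_sim:
        weights = rng.dirichlet(np.ones(n_assets))  # random long-only weights
        if weights.max() <= max_weight:             # keep only compliant draws
            sims.append(weights @ asset_returns)
    return np.array(sims)

# illustrative opportunity set: 40 shares with hypothetical annual returns
rng = np.random.default_rng(1)
opportunity_set = rng.normal(0.12, 0.20, size=40)
pod = simulate_pod(opportunity_set)
fund_return = 0.15  # hypothetical fund return over the same period
print(f"Fund percentile within the POD: {(pod < fund_return).mean():.1%}")
```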
102

Interest rate model theory with reference to the South African market

Van Wijck, Tjaart 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2006. / An overview of modern and historical interest rate model theory is given with the specific aim of derivative pricing. A variety of stochastic interest rate models are discussed within a South African market context. The various models are compared with respect to characteristics such as mean reversion, positivity of interest rates, the volatility structures and yield curve shapes they can represent, and whether analytical bond and derivative prices can be found. The distributions of the interest rates implied by some of these models are also derived under various measures. The calibration of these models receives attention with respect to instruments available in the South African market, and problems associated with the calibration of the modern models are also discussed.
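As a small illustration of the mean-reversion characteristic on which the models above are compared, the following is a minimal Euler simulation of a Vasicek short rate, dr = kappa*(theta - r) dt + sigma dW. The abstract does not single out this model, and all parameter values below are assumptions rather than calibrated South African inputs.

```python
import numpy as np

def simulate_vasicek(r0, kappa, theta, sigma, T=1.0, n_steps=252, seed=0):
    """Euler simulation of the Vasicek short rate
    dr = kappa * (theta - r) dt + sigma dW.

    Mean reversion pulls the rate back toward theta at speed kappa; note
    that this model does not guarantee positive rates.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    rates = np.empty(n_steps + 1)
    rates[0] = r0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        rates[i + 1] = rates[i] + kappa * (theta - rates[i]) * dt + sigma * dw
    return rates

# illustrative parameters, not calibrated to South African instruments
path = simulate_vasicek(r0=0.07, kappa=0.8, theta=0.08, sigma=0.02)
print(path[:5], path[-1])
```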
103

South African security market imperfections

Jooste, Dirk 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2006. / In recent times many theories have surfaced posing serious challenges to the Efficient Market Hypothesis. We are entering an exciting era of financial economics, fueled by the urge to gain a better understanding of the intricate workings of financial markets. Many studies are emerging that investigate the relationship between stock market predictability and efficiency. This paper studies the existence of calendar-based patterns in equity returns, price momentum and earnings momentum in the South African securities market. These phenomena are commonly referred to in the literature as security market imperfections, financial market puzzles and market anomalies. We provide evidence suggesting that they do exist in the South African context, which is consistent with findings in various international markets. A vast number of papers on the subject exist in the international arena, yet very few empirical studies on the South African market can be found in the public domain. We aim to contribute to the literature by investigating the South African case.
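One of the anomalies studied, price momentum, can be illustrated with a conventional signal construction. The sketch below computes a 12-month-minus-1-month momentum ranking from a panel of daily prices; the lookback and skip windows, the synthetic data and the pandas-based implementation are assumptions, not the paper's own methodology.

```python
import numpy as np
import pandas as pd

def momentum_ranks(prices: pd.DataFrame, lookback=252, skip=21):
    """Classic price-momentum signal: past-year return, skipping the most
    recent month to avoid short-term reversal. Rows are daily prices,
    columns are shares; window lengths are illustrative choices.
    """
    past = prices.shift(skip)                                   # drop the last month
    mom = past.pct_change(lookback - skip, fill_method=None)    # ~11-month return
    return mom.iloc[-1].rank(ascending=False)                   # 1 = strongest momentum

# illustrative usage with random prices for five hypothetical shares
rng = np.random.default_rng(0)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(600, 5)), axis=0)),
    columns=["A", "B", "C", "D", "E"],
)
print(momentum_ranks(prices))
```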
104

Aspects of some exotic options

Theron, Nadia 12 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2007. / The use of options on various stock markets around the world has introduced a unique opportunity for investors to hedge, speculate, create synthetic financial instruments and reduce funding and other costs in their trading strategies. The power of options lies in their versatility. They enable an investor to adapt or adjust her position according to any situation that arises. Another benefit of using options is that they provide leverage. Since options cost less than stock, they provide a high-leverage approach to trading that can significantly limit the overall risk of a trade, or provide additional income. This versatility and leverage, however, come at a price. Options are complex securities and can be extremely risky. In this document several aspects of trading and valuing some exotic options are investigated. The aim is to give insight into their uses and the risks involved in their trading. Two volatility-dependent derivatives, namely compound and chooser options; two path-dependent derivatives, namely barrier and Asian options; and lastly binary options, are discussed in detail. The purpose of this study is to provide a reference that contains both the mathematical derivations and the detail involved in valuing these exotic options, as well as an overview of their applicability and use for students and other interested parties.
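Of the path-dependent contracts mentioned, the arithmetic Asian option is a natural candidate for a short numerical illustration. The Monte Carlo sketch below prices an arithmetic-average Asian call under geometric Brownian motion; the model choice and all parameter values are assumptions made for illustration and do not reproduce the valuation derivations in the thesis.

```python
import numpy as np

def asian_call_mc(s0, strike, r, sigma, T, n_steps=252, n_paths=20_000, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion; all inputs are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # cumulative log-returns along each path, then the price paths themselves
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    averages = (s0 * np.exp(log_paths)).mean(axis=1)   # arithmetic average price
    payoffs = np.maximum(averages - strike, 0.0)
    return np.exp(-r * T) * payoffs.mean()

print(asian_call_mc(s0=100, strike=100, r=0.07, sigma=0.25, T=1.0))
```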
105

Non-parametric volatility measurements and volatility forecasting models

Du Toit, Cornel 03 1900 (has links)
Assignment (MComm)--Stellenbosch University, 2005. / ENGLISH ABSTRACT: Volatility was originally seen to be constant and deterministic, but it was later realised that return series are non-stationary. Owing to this non-stationary nature of returns, there were no reliable ex-post volatility measurements. Subsequently, researchers focussed on ex-ante volatility models. It was only then realised that before good volatility models can be created, reliable ex-post volatility measurements need to be defined. In this study we examine non-parametric ex-post volatility measurements in order to obtain approximations of the variances of non-stationary return series. A detailed mathematical derivation and discussion of the already developed volatility measurements, in particular the realised volatility and DST measurements, are given. In theory, the higher the sample frequency of returns, the more accurate the measurements are. These volatility measurements, however, all have shortcomings: realised volatility fails if the sample frequency becomes too high, owing to microstructure effects, while the DST measurement cannot handle changing instantaneous volatility. In this study we introduce a new volatility measurement, termed microstructure realised volatility, that overcomes these shortcomings. This measurement, as with realised volatility, is based on quadratic variation theory, but the underlying return model is more realistic. / AFRIKAANSE OPSOMMING: Volatiliteit is oorspronklik as konstant en deterministies beskou; dit was eers later dat besef is dat opbrengste nie-stasionêr is. Betroubare volatiliteits metings was nie beskikbaar nie weens die nie-stasionêre aard van opbrengste. Daarom het navorsers gefokus op vooruitskattingvolatiliteits modelle. Dit was eers op hierdie stadium dat navorsers besef het dat die definiëring van betroubare volatiliteit metings 'n voorvereiste is vir die skepping van goeie vooruitskattings modelle. Nie-parametriese volatiliteit metings word in hierdie studie ondersoek om sodoende benaderings van die variansies van die nie-stasionêre opbrengste reeks te beraam. 'n Gedetailleerde wiskundige afleiding en bespreking van bestaande volatiliteits metings, spesifiek gerealiseerde volatiliteit en DST-metings, word gegee. In teorie sal opbrengste wat meer dikwels waargeneem word tot beter akkuraatheid lei. Bogenoemde volatiliteits metings het egter tekortkominge aangesien gerealiseerde volatiliteit faal wanneer die steekproeffrekwensie te hoog raak, weens mikrostruktuur effekte. Aan die ander kant kan die DST meting nie veranderlike oombliklike volatiliteit hanteer nie. Ons stel in hierdie studie 'n nuwe volatiliteits meting bekend, naamlik mikro-struktuur gerealiseerde volatiliteit, wat nie hierdie tekortkominge het nie. Net soos met gerealiseerde volatiliteit sal hierdie meting gebaseer wees op kwadratiese variasie teorie, maar die onderliggende opbrengste model is meer realisties.
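As a brief illustration of the ex-post measurement idea discussed above, the sketch below computes annualised realised volatility as the square root of the summed squared log-returns, i.e. the empirical quadratic variation. It is the generic textbook estimator, not the microstructure realised volatility measurement introduced in the study, and the simulated data and annualisation factor are assumptions.

```python
import numpy as np

def realised_volatility(prices, periods_per_year=252):
    """Annualised realised volatility: the square root of the summed
    squared log-returns (empirical quadratic variation). The sampling
    frequency matters: too coarse loses accuracy, too fine picks up
    microstructure noise, as discussed in the abstract."""
    log_returns = np.diff(np.log(prices))
    realised_var = np.sum(log_returns ** 2)             # quadratic variation
    return np.sqrt(realised_var * periods_per_year / len(log_returns))

# illustrative daily prices for one year of trading
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.012, size=252)))
print(realised_volatility(prices))   # roughly 0.012 * sqrt(252), about 0.19
```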
106

Improving the accuracy of prediction using singular spectrum analysis by incorporating internet activity

Badenhorst, Dirk Jakobus Pretorius 03 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Researchers and investors have been attempting to predict stock market activity for years. The possible financial gain that accurate predictions would offer lit a flame of greed and drive that would inspire all kinds of researchers. However, after many of these researchers had failed, they started to hypothesize that a goal such as this is not only improbable, but impossible. Previous predictions were based on historical data of the stock market activity itself and would often incorporate different types of auxiliary data. This auxiliary data ranged as far as imagination allowed in an attempt to find some correlation and some insight into the future, which could in turn lead to the figurative pot of gold. More often than not, the auxiliary data would not prove helpful. However, with the birth of the internet, endless amounts of new sources of auxiliary data presented themselves. In this thesis I propose that the near-infinite amount of data available on the internet could provide us with information that would improve stock market predictions. With this goal in mind, the different sources of information available on the internet are considered. Previous studies on similar topics presented possible ways in which we can measure internet activity, which might relate to stock market activity. These studies also gave some insights into the advantages and disadvantages of using some of these sources. These considerations are investigated in this thesis. Since much of this work is therefore based on the prediction of a time series, it was necessary to choose a prediction algorithm. Previously used linear methods seemed too simple for the prediction of stock market activity, and a relatively new non-linear method, called Singular Spectrum Analysis (SSA), is therefore considered. A detailed study of this algorithm is done to ensure that it is an appropriate prediction methodology to use. Furthermore, since we will be including auxiliary information, multivariate extensions of this algorithm are considered as well. Some of the inaccuracies and inadequacies of these current multivariate extensions are studied, and an alternative multivariate technique that addresses these inadequacies is proposed and tested. With the appropriate methodology and sources of auxiliary information chosen, a concluding chapter examines whether predictions that include auxiliary information (obtained from the internet) improve on baseline predictions based solely on historical stock market data. / AFRIKAANSE OPSOMMING: Navorsers en beleggers is vir jare al op soek na maniere om aandeelpryse meer akkuraat te voorspel. Die moontlike finansiële implikasies wat akkurate vooruitskattings kan inhou het 'n vlam van geldgierigheid en dryf wakker gemaak binne navorsers regoor die wêreld. Nadat baie van hierdie navorsers onsuksesvol was, het hulle begin vermoed dat so 'n doel nie net onwaarskynlik is nie, maar onmoontlik. Vorige vooruitskattings was bloot gebaseer op historiese aandeelprys data en sou soms verskillende tipes bykomende data inkorporeer. Die tipes data wat gebruik was het gestrek so ver soos wat die verbeelding toegelaat het, in 'n poging om korrelasie en inligting oor die toekoms te kry wat na die figuurlike pot goud sou lei. 
Navorsers het gereeld gevind dat hierdie verskillende tipes bykomende inligting nie van veel hulp was nie, maar met die geboorte van die internet het 'n oneindige hoeveelheid nuwe bronne van bykomende inligting bekombaar geraak. In hierdie tesis stel ek dus voor dat die data beskikbaar op die internet dalk vir ons inligting kan gee wat verwant is aan toekomstige aandeelpryse. Met hierdie doel in die oog, is die verskillende bronne van inligting op die internet bestudeer. Vorige studies op verwante werk het sekere spesifieke maniere voorgestel waarop ons internet aktiwiteit kan meet. Hierdie studies het ook insig gegee oor die voordele en die nadele wat sommige bronne inhou. Hierdie oorwegings word ook in hierdie tesis bespreek. Aangesien 'n groot gedeelte van hierdie tesis dus gebaseer word op die vooruitskatting van 'n tydreeks, is dit nodig om 'n toepaslike vooruitskattings algoritme te kies. Baie navorsers het verkies om eenvoudige lineêre metodes te gebruik. Hierdie metodes het egter te eenvoudig voorgekom en 'n relatiewe nuwe nie-lineêre metode (met die naam "Singular Spectrum Analysis") is oorweeg. 'n Deeglike studie van hierdie algoritme is gedoen om te verseker dat die metode van toepassing is op aandeelprys data. Verder, aangesien ons gebruik wou maak van bykomende inligting, is daar ook 'n studie gedoen op huidige multivariaat uitbreidings van hierdie algoritme en die probleme wat dit inhou. 'n Alternatiewe multivariaat metode is toe voorgestel en getoets wat hierdie probleme aanspreek. Met 'n gekose vooruitskattingsmetode en gekose bronne van bykomende data is 'n gevolgtrekkende hoofstuk geskryf oor of vooruitskattings, wat die bykomende internet data inkorporeer, werklik in staat is om te verbeter op die eenvoudige vooruitskattings, wat slegs gebaseer is op die historiese aandeelprys data.
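The prediction algorithm at the centre of the thesis can be outlined with a short decomposition sketch. The code below implements only the basic SSA steps (embedding into a trajectory matrix, SVD, rank truncation and diagonal averaging) to extract a smooth component from a noisy series; the window length, number of components and synthetic data are assumptions, and the thesis's multivariate extension is not shown.

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Basic Singular Spectrum Analysis: embed the series in a trajectory
    (Hankel) matrix, take its SVD, keep the leading components and
    reconstruct by diagonal averaging."""
    n = len(series)
    k = n - window + 1
    traj = np.column_stack([series[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):                       # diagonal averaging (Hankelisation)
        recon[col:col + window] += approx[:, col]
        counts[col:col + window] += 1
    return recon / counts

# illustrative use: recover a smooth oscillation from a noisy series
rng = np.random.default_rng(0)
t = np.arange(300)
noisy = np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.3, size=300)
trend = ssa_reconstruct(noisy, window=60, n_components=2)
print(trend[:5])
```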
107

PCA and CVA biplots : a study of their underlying theory and quality measures

Brand, Hilmarie 03 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: The main topics of study in this thesis are the Principal Component Analysis (PCA) and Canonical Variate Analysis (CVA) biplots, with the primary focus falling on the quality measures associated with these biplots. A detailed study of different routes along which PCA and CVA can be derived precedes the study of the PCA biplot and CVA biplot respectively. Different perspectives on PCA and CVA highlight different aspects of the theory that underlies PCA and CVA biplots respectively and so contribute to a more solid understanding of these biplots and their interpretation. PCA is studied via the routes followed by Pearson (1901) and Hotelling (1933). CVA is studied from the perspectives of Linear Discriminant Analysis, Canonical Correlation Analysis as well as a two-step approach introduced in Gower et al. (2011). The close relationship between CVA and Multivariate Analysis of Variance (MANOVA) also receives some attention. An explanation of the construction of the PCA biplot is provided subsequent to the study of PCA. Thereafter follows an in-depth investigation of quality measures of the PCA biplot as well as the relationships between these quality measures. Specific attention is given to the effect of standardisation on the PCA biplot and its quality measures. Following the study of CVA is an explanation of the construction of the weighted CVA biplot as well as two different unweighted CVA biplots based on the two-step approach to CVA. Specific attention is given to the effect that accounting for group sizes in the construction of the CVA biplot has on the representation of the group structure underlying a data set. It was found that larger groups tend to be better separated from other groups in the weighted CVA biplot than in the corresponding unweighted CVA biplots. Similarly, it was found that smaller groups tend to be separated to a greater extent from other groups in the unweighted CVA biplots than in the corresponding weighted CVA biplot. A detailed investigation of previously defined quality measures of the CVA biplot follows the study of the CVA biplot. It was found that the accuracy with which the group centroids of larger groups are approximated in the weighted CVA biplot is usually higher than that in the corresponding unweighted CVA biplots. Three new quality measures that assess the accuracy of the Pythagorean distances in the CVA biplot are also defined. These quality measures assess the accuracy of the Pythagorean distances between the group centroids, the Pythagorean distances between the individual samples and the Pythagorean distances between the individual samples and group centroids in the CVA biplot respectively. / AFRIKAANSE OPSOMMING: Die hoofonderwerpe van studie in hierdie tesis is die Hoofkomponent Analise (HKA) bistipping asook die Kanoniese Veranderlike Analise (KVA) bistipping met die primêre fokus op die kwaliteitsmaatstawwe wat daarmee geassosieer word. ’n Gedetailleerde studie van verskillende roetes waarlangs HKA en KVA afgelei kan word, gaan die studie van die HKA en KVA bistippings respektiewelik vooraf. Verskillende perspektiewe op HKA en KVA belig verskillende aspekte van die teorie wat onderliggend is tot die HKA en KVA bistippings respektiewelik en dra sodoende by tot ’n meer breedvoerige begrip van hierdie bistippings en hulle interpretasies. HKA word bestudeer volgens die roetes wat gevolg is deur Pearson (1901) en Hotelling (1933). 
KVA word bestudeer vanuit die perspektiewe van Lineêre Diskriminantanalise, Kanoniese Korrelasie-analise sowel as ’n twee-stap-benadering soos voorgestel in Gower et al. (2011). Die noue verwantskap tussen KVA en Meerveranderlike Analise van Variansie (MANOVA) kry ook aandag. ’n Verduideliking van die konstruksie van die HKA bistipping word voorsien na afloop van die studie van HKA. Daarna volg ’n indiepte-ondersoek van die HKA bistipping kwaliteitsmaatstawwe sowel as die onderlinge verhoudings tussen hierdie kwaliteitsmaatstawwe. Spesifieke aandag word gegee aan die effek van die standaardisasie op die HKA bistipping en sy kwaliteitsmaatstawwe. Opvolgend op die studie van KVA is ’n verduideliking van die konstruksie van die geweegde KVA bistipping sowel as twee verskillende ongeweegde KVA bistippings gebaseer op die twee-stap-benadering tot KVA. Spesifieke aandag word gegee aan die effek wat die inagneming van die groepsgroottes in die konstruksie van die KVA bistipping op die voorstelling van die groepstruktuur onderliggend aan ’n datastel het. Daar is gevind dat groter groepe beter geskei is van ander groepe in die geweegde KVA bistipping as in die ooreenstemmende ongeweegde KVA bistipping. Soortgelyk daaraan is gevind dat kleiner groepe tot ’n groter mate geskei is van ander groepe in die ongeweegde KVA bistipping as in die ooreenstemmende geweegde KVA bistipping. ’n Gedetailleerde ondersoek van voorheen gedefinieerde kwaliteitsmaatstawwe van die KVA bistipping volg op die studie van die KVA bistipping. Daar is gevind dat die akkuraatheid waarmee die groepsgemiddeldes van groter groepe benader word in die geweegde KVA bistipping, gewoonlik hoër is as in die ooreenstemmende ongeweegde KVA bistippings. Drie nuwe kwaliteitsmaatstawwe wat die akkuraatheid van die Pythagoras-afstande in die KVA bistipping meet, word gedefinieer. Hierdie kwaliteitsmaatstawwe beskryf onderskeidelik die akkuraatheid van die voorstelling van die Pythagoras-afstande tussen die groepsgemiddeldes, die Pythagoras-afstande tussen die individuele observasies en die Pythagoras-afstande tussen die individuele observasies en groepsgemiddeldes in die KVA bistipping.
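For readers unfamiliar with the construction, a minimal PCA biplot of a standardised data matrix is sketched below: samples are plotted as principal component scores and variables as arrows. The scaling of the arrows is an ad hoc assumption, and none of the quality measures developed in the thesis are computed here.

```python
import numpy as np
import matplotlib.pyplot as plt

def pca_biplot(X, labels=None):
    """Minimal two-dimensional PCA biplot of a standardised data matrix:
    samples as principal component scores, variables as arrows."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardise columns
    u, s, vt = np.linalg.svd(Z, full_matrices=False)
    scores = u[:, :2] * s[:2]                          # sample coordinates
    arrows = vt[:2].T * scores.std(axis=0)             # ad hoc arrow scaling
    fig, ax = plt.subplots()
    ax.scatter(scores[:, 0], scores[:, 1], s=10)
    for j, (x, y) in enumerate(arrows):
        ax.arrow(0, 0, x, y, head_width=0.05, color="red")
        ax.annotate(labels[j] if labels else f"V{j + 1}", (x, y))
    ax.set_xlabel("PC1")
    ax.set_ylabel("PC2")
    return fig

# illustrative correlated toy data with four variables
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))
pca_biplot(X, labels=["x1", "x2", "x3", "x4"]).savefig("pca_biplot.png")
```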
108

Variable selection for kernel methods with application to binary classification

Oosthuizen, Surette 03 1900 (has links)
Thesis (PhD (Statistics and Actuarial Science))--University of Stellenbosch, 2008. / The problem of variable selection in binary kernel classification is addressed in this thesis. Kernel methods are fairly recent additions to the statistical toolbox, having originated approximately two decades ago in machine learning and artificial intelligence. These methods are growing in popularity and are already frequently applied in regression and classification problems. Variable selection is an important step in many statistical applications. Thereby a better understanding of the problem being investigated is achieved, and subsequent analyses of the data frequently yield more accurate results if irrelevant variables have been eliminated. It is therefore clearly important to investigate aspects of variable selection for kernel methods. Chapter 2 of the thesis is an introduction to the main part presented in Chapters 3 to 6. In Chapter 2 some general background material on kernel methods is firstly provided, along with an introduction to variable selection. Empirical evidence is presented substantiating the claim that variable selection is a worthwhile enterprise in kernel classification problems. Several aspects which complicate variable selection in kernel methods are discussed. An important property of kernel methods is that the original data are effectively transformed before a classification algorithm is applied to them. The space in which the original data reside is called input space, while the transformed data occupy part of a feature space. In Chapter 3 we investigate whether variable selection should be performed in input space or rather in feature space. A new approach to selection, so-called feature-to-input space selection, is also proposed. This approach has the attractive property of combining information generated in feature space with easy interpretation in input space. An empirical study reveals that effective variable selection requires utilisation of at least some information from feature space. Having confirmed in Chapter 3 that variable selection should preferably be done in feature space, the focus in Chapter 4 is on two classes of selection criteria operating in feature space: criteria which are independent of the specific kernel classification algorithm and criteria which depend on this algorithm. In this regard we concentrate on two kernel classifiers, viz. support vector machines and kernel Fisher discriminant analysis, both of which are described in some detail in Chapter 4. The chapter closes with a simulation study showing that two of the algorithm-independent criteria are very competitive with the more sophisticated algorithm-dependent ones. In Chapter 5 we incorporate a specific strategy for searching through the space of variable subsets into our investigation. Evidence in the literature strongly suggests that backward elimination is preferable to forward selection in this regard, and we therefore focus on recursive feature elimination. Zero- and first-order forms of the new selection criteria proposed earlier in the thesis are presented for use in recursive feature elimination and their properties are investigated in a numerical study. It is found that some of the simpler zero-order criteria perform better than the more complicated first-order ones. Up to the end of Chapter 5 it is assumed that the number of variables to select is known. 
We do away with this restriction in Chapter 6 and propose a simple criterion which uses the data to identify this number when a support vector machine is used. The proposed criterion is investigated in a simulation study and compared to cross-validation, which can also be used for this purpose. We find that the proposed criterion performs well. The thesis concludes in Chapter 7 with a summary and several suggestions for further research.
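Recursive feature elimination, as discussed above, is easy to illustrate with standard tooling. The sketch below uses scikit-learn's RFE wrapped around a linear support vector machine (RFE requires coefficient-based rankings, hence the linear kernel); the synthetic data set and the default ranking criterion are assumptions, not the zero- and first-order criteria proposed in the thesis.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# synthetic binary classification data: 20 variables, 5 of them informative
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# recursive feature elimination with a linear support vector machine
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)

print(selector.support_)   # boolean mask of the retained variables
print(selector.ranking_)   # elimination order (1 = selected)
```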
109

Completion of an incomplete market by quadratic variation assets.

Mgobhozi, S. W. January 2011 (has links)
It is well known that the general geometric Lévy market models are incomplete, except for the geometric Brownian and the geometric Poissonian, but such a market can be completed by enlarging it with power-jump assets, as Corcuera and Nualart [12] did in their paper. With the knowledge that an incomplete market due to jumps can be completed, we look at other cases of incompleteness. We consider incompleteness due to more sources of randomness than tradable assets, transaction costs and stochastic volatility, show that such markets are incomplete, and propose ways to complete them. In the case of incompleteness due to more sources of randomness than tradable assets, we enlarge the market using the market’s underlying quadratic variation assets and show that the market can then be completed. Looking at a market paying transaction costs, which is also an incomplete market model owing to the difference between the buyer’s and the seller’s price, we show that a market paying transaction costs such as the one given by Cvitanic and Karatzas [13] can be completed. Empirical findings have shown that the Black and Scholes assumption of constant volatility is inaccurate (see Tompkins [40] for empirical evidence). Volatility is in some sense stochastic, and stochastic volatility models are divided into two broad classes. The first class consists of single-factor models, which have only one source of randomness and are complete market models. The other class consists of multi-factor models, in which other random elements are introduced, and which are therefore incomplete market models. In this project we look at some commonly used multi-factor models and attempt to complete one of them. / Thesis (M.Sc.)-University of KwaZulu-Natal, Durban, 2011.
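The stochastic-volatility source of incompleteness mentioned above can be visualised with a small simulation. The thesis does not tie the discussion to one particular model, so the sketch below uses a Heston-type two-factor model purely as an example of more sources of randomness than tradable assets: two correlated Brownian shocks drive a single tradable asset and its non-traded variance. All parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_heston(s0=100.0, v0=0.04, mu=0.05, kappa=1.5, theta=0.04,
                    xi=0.3, rho=-0.6, T=1.0, n_steps=252, seed=0):
    """Euler simulation of a Heston-type stochastic-volatility model.

    Two correlated Brownian shocks drive the price and its variance, so
    there are more sources of randomness than tradable assets and the
    market is incomplete. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    s[0], v[0] = s0, v0
    for i in range(n_steps):
        z1, z2 = rng.standard_normal(2)
        w2 = rho * z1 + np.sqrt(1 - rho ** 2) * z2      # correlated variance shock
        v[i + 1] = max(v[i] + kappa * (theta - v[i]) * dt
                       + xi * np.sqrt(v[i] * dt) * w2, 0.0)
        s[i + 1] = s[i] * np.exp((mu - 0.5 * v[i]) * dt + np.sqrt(v[i] * dt) * z1)
    return s, v

prices, variances = simulate_heston()
print(prices[-1], variances[-1])
```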
110

Notions of Dependence with Applications in Insurance and Finance

Wei, Wei January 2013 (has links)
Many insurance and finance activities involve multiple risks. Dependence structures between different risks play an important role in both theoretical models and practical applications. However, stochastic and actuarial models with dependence are very challenging research topics. In most of the literature, only special dependence structures have been considered, yet most of these existing special dependence structures can be integrated into more general contexts. This thesis is motivated by the desire to develop more general dependence structures and to consider their applications. This thesis systematically studies different dependence notions and explores their applications in the fields of insurance and finance. It contributes to the current literature in the following three main respects. First, it introduces some dependence notions to actuarial science and initiates a new approach to studying optimal reinsurance problems. Second, it proposes new notions of dependence and provides a general context for the studies of optimal allocation problems in insurance and finance. Third, it builds the connections between copulas and the proposed dependence notions, thus enabling the construction of the proposed dependence structures and enhancing their applicability in practice. The results derived in the thesis not only unify and generalize the existing studies of optimization problems in insurance and finance, but also admit promising applications in other fields, such as operations research and risk management.
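The third contribution, linking copulas to the proposed dependence notions, invites a brief constructive example. The sketch below samples two dependent insurance-style losses through a Gaussian copula with arbitrary marginals; the copula family, correlation value and marginal distributions are assumptions chosen for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.stats import norm, expon, lognorm

def gaussian_copula_sample(corr, marginals, n=10_000, seed=0):
    """Sample dependent risks: push correlated normal draws through the
    normal CDF (the Gaussian copula) and then through arbitrary marginal
    inverse CDFs."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)
    u = norm.cdf(z)                         # uniforms with Gaussian dependence
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# two hypothetical insurance losses with positive dependence
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
losses = gaussian_copula_sample(corr, [expon(scale=1000), lognorm(s=1.0, scale=500)])
print(np.corrcoef(losses.T)[0, 1])          # induced linear correlation
```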
