191 |
Pricing multi-asset options with Lévy copulas / Dushimimana, Jean Claude
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2011.
ENGLISH ABSTRACT: In this thesis, we propose to use Lévy processes to model the dynamics of asset prices. In
the first part, we deal with single-asset options and model the log stock prices with a Lévy process. We employ pure-jump Lévy processes of infinite activity, in particular the variance gamma and CGMY processes. We fit the log-returns of six stocks to the variance gamma and CGMY distributions and check the goodness of fit using statistical tests. We observe that the variance gamma and CGMY distributions fit the financial market data much better than the normal distribution. Calibration shows that, at a given maturity, the two models fit the option prices very well.
In the second part, we investigate the effect of the dependence structure on multivariate option pricing. We use the concept of a Lévy copula, introduced in the literature by Tankov [40]. Lévy copulas allow us to separate the dependence structure from the behaviour of the marginal components. We consider bivariate variance gamma and bivariate CGMY models. To model the dependence structure between the underlying assets we use the Clayton Lévy copula. The empirical results on six stocks indicate a strong dependence between different stock prices. Subsequently, we compute bivariate option prices that take the dependence structure into account. We observe that option prices are highly sensitive to the dependence structure between the underlying assets, and that neglecting tail dependence leads to errors in option pricing.
AFRIKAANSE OPSOMMING: In hierdie proefskrif word Lévy prosesse voorgestel om die bewegings van batepryse te
modelleer. Lévy prosesse besit die vermoë om die risiko van spronge in ag te neem, asook om die implisiete volatiliteite, wat in finansiële opsiepryse voorkom, te reproduseer. Ons
gebruik suiwer–sprong Lévy prosesse met oneindige aktiwiteit, in besonder die gamma–
variansie (Eng. variance gamma) en CGMY–prosesse. Ons pas die log–opbrengste van ses
aandele op die gamma–variansie en CGMY distribusies, en kontroleer die resultate met
behulp van statistiese pasgehaltetoetse. Die resultate bevestig dat die gamma–variansie en
CGMY modelle die finansiële data beter pas as die normaalverdeling. Kalibrasie toon ook
aan dat vir ’n gegewe verstryktyd die twee modelle ook die opsiepryse goed pas.
Ons ondersoek daarna die gebruik van Lévy prosesse vir opsies op meervoudige bates. Ons gebruik die nuwe konsep van Lévy copulas, wat deur Tankov [40] ingelei is. Lévy copulas laat toe om die onderlinge afhanklikheid tussen bateprysspronge te skei van die randkomponente. Ons bespreek daarna die simulasie van meerveranderlike Lévy prosesse met behulp van Lévy copulas. Daarna bepaal ons die pryse van opsies op meervoudige bates in multi–dimensionele eksponensiële Lévy modelle met behulp van Monte Carlo–metodes. Ons beskou die tweeveranderlike gamma–variansie en CGMY modelle en modelleer die afhanklikheidsstruktuur tussen onderliggende bates met 'n Clayton Lévy copula. Daarna bereken ons tweeveranderlike opsiepryse. Kalibrasie toon aan dat hierdie opsiepryse baie sensitief is vir die afhanklikheidsstruktuur, en dat prysbepaling foutief is as die afhanklikheid tussen die sterte van die onderliggende verdelings verontagsaam word.
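The variance gamma dynamics used in this thesis can be simulated by the standard time-change construction: Brownian motion with drift evaluated at a gamma subordinator. A minimal sketch of that construction follows; the parameter values and function names are illustrative, not taken from the thesis.

```python
import numpy as np

def simulate_variance_gamma(theta, sigma, nu, T, n_steps, n_paths, seed=0):
    """Simulate variance gamma paths as drifted Brownian motion evaluated
    at a gamma time change (standard construction; parameters illustrative)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Gamma subordinator increments: mean dt, variance nu * dt.
    dG = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
    # Conditional on dG, each increment is Gaussian with drift theta*dG.
    dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal((n_paths, n_steps))
    return np.cumsum(dX, axis=1)

paths = simulate_variance_gamma(theta=-0.1, sigma=0.2, nu=0.2, T=1.0,
                                n_steps=250, n_paths=5000)
```

The sample mean of the terminal value should be close to theta*T, and the sample variance close to sigma^2*T + theta^2*nu*T, which gives a quick sanity check on the simulator.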
|
192 |
Perturbation methods in derivatives pricing under stochastic volatility / Kateregga, Michael
Thesis (MSc)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: This work employs perturbation techniques to price and hedge financial derivatives in a
stochastic volatility framework. Fouque et al. [44] model volatility as a function of two processes operating on different time-scales. One process is responsible for the fast-fluctuating feature of volatility and corresponds to the fast time-scale; the second drives the slow fluctuations and corresponds to the slow time-scale. The former is an ergodic Markov process and the latter is a strong solution to a Lipschitz stochastic differential equation. This work mainly involves
modelling, analysis and estimation techniques, exploiting the concept of mean reversion of
volatility. The approach used is robust in the sense that it does not assume a specific volatility
model. Using singular and regular perturbation techniques on the resulting PDE, a first-order price correction to the Black-Scholes option pricing model is derived. Vital groupings of market parameters are identified, and their estimation from market data is extremely efficient and stable. The implied volatility is expressed as a linear (affine) function of the log-moneyness-to-maturity ratio, and can be easily calibrated by estimating the grouped market parameters
from the observed implied volatility surface. Importantly, the same grouped parameters
can be used to price other complex derivatives beyond the European and American options,
which include barrier, Asian, basket and forward options. However, this semi-analytic perturbative approach is effective for longer maturities and unstable when pricing is done close to maturity. As a result, a more accurate technique, the decomposition pricing approach, which gives explicit analytic first- and second-order pricing and implied volatility formulae, is discussed as one of the current alternatives. Here, the method is only employed for European options, but an extension to other options could be an idea for further research. The only requirements for this method are integrability and regularity of the stochastic volatility process. Corrections to the remarkable work of [3] are discussed here.
AFRIKAANSE OPSOMMING: Hierdie werk gebruik steuringstegnieke om finansiële afgeleide instrumente in 'n stogastiese
wisselvalligheid raamwerk te prys en te verskans. Fouque et al. [44] modelleer wisselvalligheid as 'n funksie van twee prosesse wat op verskillende tyd-skale werk. Een proses is verantwoordelik vir die vinnig-wisselende eienskap van die wisselvalligheid en stem ooreen met die vinniger tyd-skaal, en die tweede is vir stadig-wisselende fluktuasies op die stadiger tyd-skaal. Die voormalige is 'n ergodiese Markov-proses en die laasgenoemde is
’n sterk oplossing vir ’n Lipschitz stogastiese differensiaalvergelyking. Hierdie werk behels
hoofsaaklik modellering, analise en skattingstegnieke, wat die konsep van terugkeer
tot die gemiddelde van die wisselvalligheid gebruik. Die benadering wat gebruik word is robuust in die sin dat dit nie 'n aanname van 'n spesifieke wisselvalligheidsmodel maak nie. Deur
singulêre en reëlmatige steuringstegnieke te gebruik op die PDV kan ’n eerste-orde pryskorreksie
aan die Black-Scholes opsie-waardasiemodel afgelei word. Belangrike groeperings
van mark parameters is geïdentifiseer en hul geskatte waardes van mark data is uiters
doeltreffend en stabiel. Die geïmpliseerde onbestendigheid word uitgedruk as ’n lineêre
(affiene) funksie van die log-geldkarakter-tot-verval verhouding, en kan maklik gekalibreer
word deur gegroepeerde mark parameters te beraam van die waargenome geïmpliseerde
wisselvalligheids vlak. Wat belangrik is, is dat dieselfde gegroepeerde parameters gebruik
kan word om ander komplekse afgeleide instrumente buite die Europese en Amerikaanse
opsies te prys; dié sluit in Barrier-, Asiatiese, Basket- en termynopsies. Hierdie semi-analitiese
steurings benadering is effektief vir langer termyne en onstabiel wanneer pryse naby aan
die vervaldatum beraam word. As gevolg hiervan is ’n meer akkurate tegniek, die ontbinding
prys benadering wat eksplisiete analitiese eerste- en tweede-orde pryse en geïmpliseerde
wisselvalligheid formules gee as een van die huidige alternatiewe bespreek. Hier
word slegs die metode vir Europese opsies gebruik, maar ’n uitbreiding na ander opsies
kan’n idee vir verdere navorsing wees. Die enigste vereistes vir hierdie metode is integreerbaarheid
en reëlmatigheid van die stogastiese wisselvalligheid proses. Korreksies tot [3] se
noemenswaardige werk word ook hier bespreek.
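The affine implied-volatility approximation described in this abstract (implied volatility linear in the log-moneyness-to-maturity ratio, in the spirit of Fouque et al.) can be calibrated by ordinary least squares on an observed surface. A sketch with synthetic data; the function and data are illustrative, not the thesis's calibration.

```python
import numpy as np

def fit_affine_smile(log_moneyness, maturities, implied_vols):
    """Fit I ~ b + a * LMMR, with LMMR = log(K/S) / (T - t),
    by ordinary least squares. Returns the grouped parameters (b, a)."""
    lmmr = np.asarray(log_moneyness) / np.asarray(maturities)
    A = np.column_stack([np.ones_like(lmmr), lmmr])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(implied_vols), rcond=None)
    b, a = coeffs
    return b, a

# Synthetic noiseless smile generated from b = 0.2, a = -0.05:
lm = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
T = np.array([0.5, 0.5, 1.0, 1.0, 2.0])
vols = 0.2 + (-0.05) * (lm / T)
b_hat, a_hat = fit_affine_smile(lm, T, vols)
```

With exact affine data the fit recovers the grouped parameters; on real surfaces the residuals indicate how far the first-order approximation holds.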
|
193 |
Bivariate box splines and surface subdivision / Kelil, Abey Sherif
Thesis (MSc)--Stellenbosch University, 2013. / Please refer to full text to view abstract.
|
194 |
Energy and related graph invariants / Andriantiana, Eric Ould Dadah
Thesis (PhD)--Stellenbosch University, 2013. / Please refer to full text to view abstract.
|
195 |
Random walks on graphs / Oosthuizen, Joubert
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: We study random walks on finite graphs. The reader is introduced to general Markov chains before we move on, more specifically, to random walks on graphs.
A random walk on a graph is just a Markov chain that is time-reversible. The
main parameters we study are the hitting time, commute time and cover time.
We find novel formulas for the cover time of the subdivided star graph and the broom graph, before looking at the trees with extremal cover times.
Lastly we look at a connection between random walks on graphs and electrical
networks, where the hitting time between two vertices of a graph is expressed
in terms of a weighted sum of effective resistances. This expression in turn proves useful when we study the cover cost, a parameter related to the cover time.
AFRIKAANSE OPSOMMING: Ons bestudeer toevallige wandelings op eindige grafieke in hierdie tesis. Eers
word algemene Markov-kettings beskou voordat ons meer spesifiek aanbeweeg na toevallige wandelings op grafieke. 'n Toevallige wandeling is net 'n Markov-ketting wat tyd-omkeerbaar is. Die hoofparameters wat ons bestudeer is die treftyd, pendeltyd en dektyd. Ons vind oorspronklike formules vir die dektyd van die verdeelde stergrafiek sowel as die besemgrafiek en kyk daarna na die bome met uiterste dektye.
Laastens kyk ons na 'n verband tussen toevallige wandelings op grafieke en elektriese netwerke, waar die treftyd tussen twee punte op 'n grafiek uitgedruk word in terme van 'n geweegde som van effektiewe weerstande. Hierdie uitdrukking is op sy beurt weer nuttig wanneer ons die dekkoste bestudeer, waar die dekkoste 'n parameter is wat verwant is aan die dektyd.
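The electrical-network connection mentioned in this abstract can be checked numerically on a small example. For the path graph 0-1-2 with unit resistors, the commute-time identity C(u,v) = 2m R_eff(u,v) gives C(0,2) = 2*2*2 = 8, so by symmetry the hitting time H(0,2) = 4. A simulation sketch (graph and names illustrative, not from the thesis):

```python
import random

def mean_hitting_time(adj, start, target, trials=20000, seed=1):
    """Estimate the expected hitting time from `start` to `target`
    by averaging over independent random walks on adjacency list `adj`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        v = start
        while v != target:
            v = rng.choice(adj[v])  # step to a uniformly random neighbour
            total += 1
    return total / trials

# Path graph 0-1-2: m = 2 edges, R_eff(0, 2) = 2.
path = {0: [1], 1: [0, 2], 2: [1]}
estimate = mean_hitting_time(path, 0, 2)
```

The simulated average should be close to the value 4 predicted by the effective-resistance formula.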
|
196 |
The risk parity approach to asset allocation / Galane, Lesiba Charles
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: We consider the problem of portfolio asset allocation characterised by risk
and return. Prior to the 2007-2008 financial crisis, this important problem
was tackled using mainly the Markowitz mean-variance framework. However,
throughout the past decade of challenging markets, particularly for equities,
this framework has exhibited multiple drawbacks.
Today many investors approach this problem with a 'safety first' rule that
puts risk management at the heart of decision-making. Risk-based strategies
have gained a lot of popularity since the recent financial crisis. One of the
'trendiest' of the modern risk-based strategies is the Risk Parity model, which
puts diversification in terms of risk, but not in terms of dollar values, at the
core of portfolio risk management.
Inspired by the works of Maillard et al. (2010), Bruder and Roncalli (2012),
and Roncalli and Weisang (2012), we examine the reliability and relationship
between the traditional mean-variance framework and risk parity. We emphasise,
through multiple examples, the non-diversification of the traditional
mean-variance framework. The central focus of this thesis is on examining the
main Risk-Parity strategies, i.e. the Inverse Volatility, Equal Risk Contribution
and the Risk Budgeting strategies.
Lastly, we turn our attention to the problem of maximizing the absolute
expected value of the logarithmic portfolio wealth (sometimes called the drift
term) introduced by Oderda (2013). The drift term of the portfolio is given by
the sum of the expected price logarithmic growth rate, the expected cash flow,
and half of its variance. The solution to this problem is a linear combination
of three famous risk-based strategies and the high cash flow return portfolio.
AFRIKAANSE OPSOMMING: Ons kyk na die probleem van batetoewysing in portefeuljes wat gekenmerk word deur risiko en wins. Voor die 2007-2008 finansiële krisis was hierdie belangrike probleem deur die Markowitz gemiddelde-variansie raamwerk aangepak.
Gedurende die afgelope dekade van uitdagende markte, veral vir aandele, het
hierdie raamwerk verskeie nadele getoon.
Vandag, benader baie beleggers hierdie probleem met 'n 'veiligheid eerste'
reël wat risikobestuur in die hart van besluitneming plaas. Risiko-gebaseerde
strategieë het baie gewild geword sedert die onlangse finansiële krisis. Een
van die gewildste van die moderne risiko-gebaseerde strategieë is die Risiko-
Gelykheid model wat diversifikasie in die hart van portefeulje risiko bestuur
plaas.
Geïnspireer deur die werke van Maillard et al. (2010), Bruder and Roncalli
(2012), en Roncalli and Weisang (2012), ondersoek ons die betroubaarheid en
verhouding tussen die tradisionele gemiddelde-variansie raamwerk en Risiko-
Gelykheid. Ons beklemtoon, deur middel van verskeie voorbeelde, die nie-diversifikasie van die tradisionele gemiddelde-variansie raamwerk. Die sentrale
fokus van hierdie tesis is op die behandeling van Risiko-Gelykheid strategieë,
naamlik, die Omgekeerde Volatiliteit, Gelyke Risiko-Bydrae en Risiko Begroting
strategieë.
Ten slotte, fokus ons aandag op die probleem van maksimering van absolute
verwagte waarde van die logaritmiese portefeulje welvaart (soms genoem die
drif term) bekendgestel deur Oderda (2013). Die drif term van die portefeulje
word gegee deur die som van die verwagte prys logaritmiese groeikoers, die
verwagte kontantvloei, en die helfte van die variansie. Die oplossing vir hierdie
probleem is 'n lineêre kombinasie van drie bekende risiko-gebaseerde strategieë
en die hoë kontantvloei wins portefeulje.
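Of the risk-parity strategies this abstract examines, the inverse-volatility rule admits a two-line implementation, and for uncorrelated assets it coincides with equal risk contribution. A sketch with made-up numbers, not data from the thesis:

```python
import numpy as np

def inverse_volatility_weights(cov):
    """Naive risk parity: weight each asset proportionally to 1/sigma_i."""
    inv_vol = 1.0 / np.sqrt(np.diag(cov))
    return inv_vol / inv_vol.sum()

def risk_contributions(w, cov):
    """Decompose portfolio volatility into per-asset contributions
    RC_i = w_i * (Sigma w)_i / sigma_p; the RC_i sum to sigma_p."""
    sigma_p = np.sqrt(w @ cov @ w)
    return w * (cov @ w) / sigma_p

cov = np.diag([0.04, 0.01, 0.09])   # uncorrelated assets: vols 20%, 10%, 30%
w = inverse_volatility_weights(cov)
rc = risk_contributions(w, cov)
```

For a diagonal covariance matrix each asset ends up contributing the same share of portfolio risk, which is exactly the equal-risk-contribution condition; with correlations present the two strategies diverge and ERC requires a numerical solver.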
|
197 |
Analysing ranking algorithms and publication trends on scholarly citation networks / Dunaiski, Marcel Paul
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Citation analysis is an important tool in the academic community. It can aid universities,
funding bodies, and individual researchers to evaluate scientific work and direct resources
appropriately. With the rapid growth of the scientific enterprise and the increase of online
libraries that include citation analysis tools, the need for a systematic evaluation of these
tools becomes more important.
The research presented in this study deals with scientific research output, i.e., articles
and citations, and how they can be used in bibliometrics to measure academic success.
More specifically, this research analyses algorithms that rank academic entities such as
articles, authors and journals to address the question of how well these algorithms can
identify important and high-impact entities.
A consistent mathematical formulation is developed on the basis of a categorisation
of bibliometric measures such as the h-index, the Impact Factor for journals, and ranking
algorithms based on Google’s PageRank. Furthermore, the theoretical properties of each
algorithm are laid out.
The ranking algorithms and bibliometric methods are computed on the Microsoft
Academic Search citation database which contains 40 million papers and over 260 million
citations that span across multiple academic disciplines.
We evaluate the ranking algorithms by using a large test data set of papers and authors
that won renowned prizes at numerous Computer Science conferences. The results show
that using citation counts is, in general, the best ranking metric. However, for certain
tasks, such as ranking important papers or identifying high-impact authors, algorithms
based on PageRank perform better. As a secondary outcome of this research, publication
trends across academic disciplines are analysed to show changes in publication behaviour
over time and differences in publication patterns between disciplines.
AFRIKAANSE OPSOMMING: Sitasieanalise is 'n belangrike instrument in die akademiese omgewing. Dit kan universiteite, befondsingsliggame en individuele navorsers help om wetenskaplike werk te evalueer
en hulpbronne toepaslik toe te ken. Met die vinnige groei van wetenskaplike uitsette
en die toename in aanlynbiblioteke wat sitasieanalise insluit, word die behoefte aan ’n
sistematiese evaluering van hierdie gereedskap al hoe belangriker.
Die navorsing in hierdie studie handel oor die uitsette van wetenskaplike navorsing,
dit wil sê, artikels en sitasies, en hoe hulle gebruik kan word in bibliometriese studies
om akademiese sukses te meet. Om meer spesifiek te wees, hierdie navorsing analiseer
algoritmes wat akademiese entiteite soos artikels, outeurs en joernale gradeer. Dit wys
hoe doeltreffend hierdie algoritmes belangrike en hoë-impak entiteite kan identifiseer.
’n Breedvoerige wiskundige formulering word ontwikkel uit ’n versameling van bibliometriese
metodes soos byvoorbeeld die h-indeks, die Impak Faktor vir joernale en die
rang-algoritmes gebaseer op Google se PageRank. Verder word die teoretiese eienskappe
van elke algoritme uitgelê.
Die rang-algoritmes en bibliometriese metodes gebruik die sitasiedatabasis van Microsoft
Academic Search vir berekeninge. Dit bevat 40 miljoen artikels en meer as 260
miljoen sitasies, wat oor verskeie akademiese dissiplines strek.
Ons gebruik 'n groot stel toetsdata van dokumente en outeurs wat bekende pryse op
talle rekenaarwetenskaplike konferensies gewen het om die rang-algoritmes te evalueer.
Die resultate toon dat die gebruik van sitasietellings, in die algemeen, die beste rangmetode
is. Vir sekere take, soos die gradeering van belangrike artikels, of die identifisering
van hoë-impak outeurs, presteer algoritmes wat op PageRank gebaseer is egter beter. 'n
Sekondêre resultaat van hierdie navorsing is die ontleding van publikasie tendense in
verskeie akademiese dissiplines om sodoende veranderinge in publikasie gedrag oor tyd
aan te toon en ook die verskille in publikasie patrone uit verskillende dissiplines uit te
wys.
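The PageRank-based rankings evaluated in this thesis rest on a simple power iteration over the citation graph. A bare-bones sketch on a toy citation dictionary; the damping factor and graph are illustrative, not the thesis setup:

```python
def pagerank(citations, d=0.85, tol=1e-10):
    """Power iteration on a citation dict {paper: [cited papers]}.
    Dangling papers (no outgoing citations) distribute rank uniformly."""
    nodes = sorted(set(citations) | {c for v in citations.values() for c in v})
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    while True:
        new = {v: (1 - d) / n for v in nodes}
        for v in nodes:
            out = citations.get(v, [])
            if out:
                share = d * rank[v] / len(out)
                for c in out:
                    new[c] += share
            else:
                for u in nodes:  # dangling node: spread rank uniformly
                    new[u] += d * rank[v] / n
        if sum(abs(new[v] - rank[v]) for v in nodes) < tol:
            return new
        rank = new

ranks = pagerank({'a': ['c'], 'b': ['c'], 'c': []})
```

In this toy graph the paper cited by both others ends up with the highest rank, illustrating how PageRank rewards incoming citations from the whole graph rather than raw counts alone.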
|
198 |
A no-arbitrage macro-finance approach to the term structure of interest rates / Thafeni, Phumza
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: This work analyses the main macro-finance models of the term structure of
interest rates, which determine the joint dynamics of the term structure and the macroeconomic fundamentals under a no-arbitrage approach. There has been a long effort during the past decades to study the relationship between the term structure of interest rates and the economy, to the extent that much recent research has combined elements of finance, monetary economics and macroeconomics to analyse the term structure.
The central interest of the thesis is based on two important notions. Firstly,
it picks up from the important work of the Ang and Piazzesi (2003) model, which suggested a joint macro-finance strategy in a discrete-time affine setting, also imposing the classical Taylor (1993) rule to determine the association between yields and macroeconomic variables through monetary policy. There
is a strong intuition from the Taylor rule literature that macroeconomic variables such as inflation and real activity should matter for the interest rate, which is the monetary policy instrument. Since this important framework appeared, the no-arbitrage macro-finance approach to the term structure of
interest rates has become an active field of cross-disciplinary research between
financial economics and macroeconomics.
Secondly, the importance of forecasting the yield curve using the variations
on the Nelson and Siegel (1987) exponential components framework to capture
the dynamics of the entire yield curve into three dimensional parameters evolving
dynamically. Nelson-Siegel approach is a convenient and parsimonious
approximation method which has been trusted to work best for fitting and
forecasting the yield curve. The work that has attracted much interest under this framework is the generalized arbitrage-free Nelson-Siegel macro-finance term structure model with macroeconomic fundamentals (Li et al. (2012)), which characterises the joint dynamic interaction between yields and the macroeconomy and the dynamic relationship between bond risk premia and the economy. According to Li et al. (2012), risk premia are found to be closely linked to macroeconomic activities and their variations can be analysed. The approach improves the estimation of risk parameters and addresses the identification challenges that have been faced in the recent macro-finance literature.
AFRIKAANSE OPSOMMING: Hierdie werk ontleed die makro-finansiële modelle van die term struktuur van
rentekoerse wat die gesamentlike dinamika bepaal van die term struktuur en die makroekonomiese fundamentele faktore in 'n geen-arbitrage wêreld. Daar was in die afgelope dekades 'n lang soektog om die verhouding tussen die term struktuur van rentekoerse en die ekonomie te bestudeer, met die gevolg dat baie onlangse navorsing elemente van finansies,
monetêre ekonomie en die makroekonomie gekombineer het om die term struktuur
te analiseer.
Die sentrale belang van hierdie proefskrif is gebaseer op twee belangrike
begrippe. Eerstens tel dit op by die belangrike werk van die Ang and Piazzesi (2003) model, wat 'n gesamentlike makro-finansiering strategie voorstel in 'n diskrete-tyd affiene raamwerk, deur ook die klassieke Taylor (1993) reël toe te pas om die assosiasie tussen opbrengste en makroekonomiese veranderlikes deur middel van monetêre beleid te bepaal. Daar is 'n sterk aanvoeling van die
Taylor reël literatuur wat daarop dui dat makroekonomiese veranderlikes soos inflasie en werklike aktiwiteit saak moet maak vir die rentekoers, wat die monetêre beleidsinstrument is. Sedert hierdie belangrike raamwerk het die geen-arbitrage makro-finansies benadering tot die term struktuur van rentekoerse 'n aktiewe gebied van kruis-dissiplinêre navorsing tussen finansiële ekonomie en makroekonomie geword.
Tweedens, die belangrikheid van voorspelling van opbrengskromme met
behulp van variasies op die Nelson and Siegel (1987) eksponensiële komponente
raamwerk om dinamika van die hele opbrengskromme te vang in drie
dimensionele parameters wat dinamies ontwikkel. Die Nelson-Siegel benadering is 'n gerieflike en spaarsamige benaderingsmetode wat reeds vertrou word om die beste passing en voorspelling van die opbrengskromme te bewerkstellig.
Die werk wat baie belangstelling onder hierdie raamwerk ontvang het, is die algemene arbitrage-vrye Nelson-Siegel makro-finansiële term struktuur model met makroekonomiese grondbeginsels (Li et al. (2012)), wat kenmerkend is van die gesamentlike dinamiese interaksie tussen die opbrengs en die makroekonomie en die dinamiese verhouding tussen band risiko-premies en die ekonomie. Volgens Li et al. (2012) word risiko-premies bevind om nou gekoppel te wees aan makroekonomiese aktiwiteite, en hul variasies kan ontleed word. Die benadering verbeter die skatting en spreek die uitdagings aan van identifisering van risiko parameters wat in die afgelope makro-finansiële literatuur teegekom is.
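The Nelson-Siegel (1987) curve this abstract builds on has a closed form in three factors (level, slope, curvature). A sketch evaluating it; the parameter values are illustrative, not estimates from the thesis:

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau: beta0 is the level,
    beta1 the slope, beta2 the curvature; lam sets where the
    curvature loading peaks."""
    x = lam * np.asarray(tau, dtype=float)
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

taus = np.array([0.25, 1.0, 5.0, 10.0, 30.0])
curve = nelson_siegel(taus, beta0=0.06, beta1=-0.02, beta2=0.01, lam=0.5)
```

The factor interpretation follows from the limits: as maturity goes to zero the yield tends to beta0 + beta1 (the short rate), and as maturity grows it tends to beta0 (the long-run level).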
|
199 |
Bivariate wavelet construction based on solutions of algebraic polynomial identities / Van der Bijl, Rinske
Thesis (PhD)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Multi-resolution analysis (MRA) has become a very popular field of mathematical study
in the past two decades, being not only an area rich in applications but one that remains filled with open problems. Building on the foundation of refinability of functions, MRA seeks to filter through levels of ever-increasing detail components in data sets, a concept enticing to an age where the development of digital equipment (to name but one example) needs to capture more and more information and then store this information at different levels of detail. Besides the design of digital objects such as animation movies, one of the most popular recent research areas in which MRA is applied is inpainting, where "lost" data (for example, in a photograph) is repaired by using boundary values of the data set and "smudging" these values into the empty entries. Two main branches of application
in MRA are subdivision and wavelet analysis. The former uses refinable functions to develop algorithms with which digital curves are created from a finite set of initial points as input, the resulting curves (or drawings) possessing certain levels of smoothness (or, mathematically speaking, continuous derivatives). Wavelets, on the other hand, yield filters with which certain levels of detail components (or noise) can be edited out of a data set. One of the greatest advantages of using wavelets is that the detail data is never lost, and the user can re-insert it into the original data set by merely applying the wavelet algorithm in reverse. This opens up a wonderful application for wavelets, namely that an existing data set can be edited, using such a wavelet algorithm, by inserting detail components that were never there. In the recent book by Chui and De Villiers (see [2]), algorithms for both subdivision and wavelet applications were developed
without using Fourier analysis as foundation, as had been done by researchers in earlier years, which had left such algorithms inaccessible to end users such as computer programmers. The fundamental result of Chapter 9 of [2], on wavelets, was that feasibility of wavelet decomposition is equivalent to the solvability of a certain set of identities consisting of Laurent polynomials, referred to as Bezout identities, and it was shown how such a system of identities can be solved in a systematic way. The work in [2] was done in the univariate case only, and it is the purpose of this thesis to develop similar results in the bivariate case, where such a generalization is entirely non-trivial. After introducing MRA in Chapter 1, as well as discussing the refinability of functions and introducing box splines as prototype examples of functions that are refinable in the bivariate setting, our fundamental result will also be that wavelet decomposition is equivalent to solving a set of Bezout identities; this will be shown rigorously in Chapter 2. In Chapter 3, we give
a set of Laurent polynomials of shortest possible length satisfying the system of Bezout identities in Chapter 2, for the particular case of the Courant hat function, which will have been introduced as a linear box spline in Chapter 1. In Chapter 4, we investigate an application of our result in Chapter 3 to bivariate interpolatory subdivision. With a view to establishing a general class of wavelets corresponding to the Courant hat function, we proceed in the subsequent Chapters 5 to 8 to develop a general theory for solving the Bezout identities of Chapter 2 separately, before suggesting strategies for reconciling these solution classes into a simultaneous solution of the system.
AFRIKAANSE OPSOMMING: Multi-resolusie analise (MRA) het in die afgelope twee dekades toenemende gewildheid
geniet as 'n veld in wiskundige wetenskappe. Nie net is dit 'n area wat ryklik toepaslik
is nie, maar dit bevat ook steeds vele oop vraagstukke. MRA bou op die grondleggings
van verfynbare funksies en poog om deur vlakke van data-komponente te sorteer, of te filter, 'n konsep wat aanloklik is in 'n era waar die ontwikkeling van digitale toestelle (om maar 'n enkele voorbeeld te noem) sodanig moet wees dat meer en meer inligting vasgelê en gestoor moet word. Behalwe vir die ontwerp van digitale voorwerpe, soos
animasie-films, word MRA ook toegepas in 'n mees vername navorsingsgebied genaamd inverwing, waar "verlore" data (soos byvoorbeeld in 'n foto) herwin word deur data te neem uit aangrensende gebiede en dit dan oor die leë data-dele te "smeer". Twee hooftakke in die toepassing van MRA is subdivisie en golfie-analise. Die eerste gebruik verfynbare funksies om algoritmes te ontwikkel waarmee digitale krommes ontwerp kan word vanuit 'n eindige aantal aanvanklike gegewe punte. Die verkrygde krommes (of sketse) kan voldoen aan verlangde vlakke van gladheid (of verlangde grade van kontinue afgeleides, wiskundig gesproke). Golfies word op hul beurt gebruik om filters te bou waarmee gewensde data- of geraas-komponente verwyder kan word uit datastelle. Een van die grootste voordele van die gebruik van golfies bo ander soortgelyke instrumente om datafilters mee te bou, is dat die geraas-komponente wat uitgetrek word nooit verlore gaan nie, sodat die proses omkeerbaar is deurdat die gebruiker die sodanige geraas-komponente in die groter datastel kan terugbou deur die golfie-algoritme in trurat toe te pas. Hierdie eienskap van golfies open 'n wonderlike toepassingsmoontlikheid daarvoor, naamlik dat 'n bestaande datastel
verander kan word deur data-komponente daartoe te voeg wat nooit daarin was nie, deur so 'n golfie-algoritme te gebruik. In die onlangse boek deur Chui and De Villiers (sien [2]) is algoritmes ontwikkel vir die toepassing van subdivisie sowel as golfies, sonder om staat te maak op die grondlegging van Fourier-analise, soos wat die gebruik was in vroeëre navorsing en waardeur algoritmes wat ontwikkel is minder effektief was vir eindgebruikers. Die fundamentele resultaat oor golfies in Hoofstuk 9 in [2] verduidelik hoe suksesvolle golfie-ontbinding ekwivalent is aan die oplosbaarheid van 'n sekere versameling van identiteite bestaande uit Laurent-polinome, bekend as Bezout-identiteite, en dit is bewys hoedat sodanige stelsels van identiteite opgelos kan word in 'n sistematiese proses. Die werk in [2] is gedoen in die eenveranderlike geval, en dit is die doelwit van hierdie tesis om soortgelyke resultate te ontwikkel in die tweeveranderlike geval, waar sodanige veralgemening absoluut nie-triviaal is. Nadat 'n inleiding tot MRA in Hoofstuk 1 aangebied word, terwyl die verfynbaarheid van funksies, met boks-latfunksies as prototipes van verfynbare funksies in die tweeveranderlike geval, bespreek word, word ons fundamentele resultaat gegee en bewys in Hoofstuk 2, naamlik dat golfie-ontbinding in die tweeveranderlike geval ook ekwivalent is aan die oplos van 'n sekere stelsel van Bezout-identiteite. In Hoofstuk 3 word 'n versameling van Laurent-polinome van kortste moontlike lengte gegee as illustrasie van 'n oplossing van 'n sodanige stelsel van Bezout-identiteite in Hoofstuk 2, vir die besondere geval van die Courant hoedfunksie, wat in Hoofstuk 1 gedefinieer word. In Hoofstuk 4 ondersoek ons 'n toepassing van die resultaat in Hoofstuk 3 tot tweeveranderlike interpolerende subdivisie. Met die oog op die ontwikkeling van 'n algemene klas van golfies verwant aan die Courant hoedfunksie, brei ons vervolgens in Hoofstukke 5 tot 8 'n algemene teorie uit om die oplossing van die stelsel van Bezout-identiteite te ondersoek, elke identiteit apart, waarna ons moontlike strategieë voorstel vir die versoening van hierdie klasse van oplossings tot 'n gelyktydige oplossing van die Bezout-stelsel.
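Refinability, the property on which this construction rests, is easy to verify numerically in the univariate analogue of the Courant hat function: the hat function satisfies the refinement relation phi(x) = 0.5*phi(2x+1) + phi(2x) + 0.5*phi(2x-1). A sketch of this check, for illustration only (the thesis itself works with the bivariate Courant hat on the three-direction mesh):

```python
def hat(x):
    """Univariate hat function (linear B-spline) supported on [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

def refined_hat(x):
    """Right-hand side of the refinement relation for the hat function."""
    return 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)

# Compare the two sides on a grid covering and exceeding the support.
samples = [i / 100.0 for i in range(-150, 151)]
max_err = max(abs(hat(x) - refined_hat(x)) for x in samples)
```

The maximum discrepancy is at floating-point level, confirming that the hat function reproduces itself from half-scale translates with mask coefficients (1/2, 1, 1/2).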
|
200 |
Modelling the control of tsetse and African trypanosomiasis through application of insecticides on cattle in Southeastern Uganda / Kajunguri, Damian
Thesis (PhD)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: In Uganda, cattle are an important reservoir of Trypanosoma brucei rhodesiense, a parasite
that causes human African trypanosomiasis or sleeping sickness. We developed mathematical
models to examine the transmission of T. b. rhodesiense by tsetse vector species,
Glossina fuscipes fuscipes in a host population that consists of humans, domestic and wild
mammals, and reptiles. The models were developed and analysed based on the situation in
Tororo district in Southeastern Uganda, where sleeping sickness is endemic and which has a
cattle and human population of 40,000 and 500,000, respectively. Assuming populations of
cattle and humans only, the impact of mass chemoprophylaxis and vector control through
insecticide-treated cattle (ITC) is evaluated. Keeping 12% or 82% of the cattle population
on insecticides that have an insecticidal killing effect of 100% at all times or trypanocides
that have 100% efficacy, respectively, can lead to the control of T. b. rhodesiense in both
humans and cattle. Optimal control of T. b. rhodesiense is shown to be achieved through
ITC alone or a combination of chemoprophylaxis and ITC, the former being the cheapest
control strategy. Allowing for the waning effect of insecticides and including wild hosts,
T. b. rhodesiense control can be achieved by keeping 21% or 27% of the cattle population
on insecticides through whole-body or restricted application, respectively. Restricting
the treatment of insecticides to adult cattle only would require 24% or 33% of the adult
cattle population to be kept on insecticides through whole-body or restricted application,
respectively, to control T. b. rhodesiense. A cost-effectiveness and benefit-cost analysis of
using ITC to control T. b. rhodesiense show that restricted application of insecticides is
a cheaper and more beneficial strategy compared to whole-body treatment. The results of
the study show that the restricted application of insecticides on cattle provides a cheap,
safe and farmer-based strategy for controlling tsetse and trypanosomiasis.
AFRIKAANSE OPSOMMING: In Uganda is beeste 'n belangrike reservoir van Trypanosoma brucei rhodesiense, 'n parasiet
wat tripanosomiase of slaapsiekte in mense veroorsaak. Ons het wiskundige modelle ontwikkel
wat die oordrag van T. b. Rhodesiense deur tesetse vektor spesies, Glossina fuscipes
fuscipes in ’n draer populasie wat bestaan uit mense, mak en wilde diere en reptiele, ondersoek.
Die modelle was ontwikkel en geanaliseer gebaseer op die oordrag situasie in die
Tororo distrik in Suidoostelike Uganda, ’n gebied waar slaapsiekte endemies is en wat ’n
populasie van 40,000 beeste en 500,000 mense het. Die impak van massa chemoprofilakse
en vektorbeheer deur insekdoder-behandelde beeste is geëvalueer onder die aanname van
bees- en menspopulasies alleenlik. Beheer oor T. b. rhodesiense in beide mense en beeste kan verkry word deur óf 12% van die beespopulasie te behandel met 'n insekdoder wat 100% effektief is ten alle tye, óf 82% van die beespopulasie te behandel met tripanosiedes wat 100% effektief is. Daar is aangetoon dat optimale beheer van T. b. rhodesiense bereik kan word deur die gebruik van insekdoders alleenlik of 'n kombinasie van insekdoders en chemoprofilakse, hoewel eersgenoemde die goedkoopste strategie is. Wanneer die kwynende effek van insekdoders asook wilde diere as draers in ag geneem word, kan T. b. rhodesiense beheer verkry word deur 21% van beeste se hele liggaam met insekdoders te behandel of 27% gedeeltelik te behandel. As slegs volwasse beeste met insekdoders behandel word, moet 24% se hele liggaam of 33% gedeeltelik behandel word vir beheer van T. b. rhodesiense. 'n Koste-effektiwiteit en voordeel-koste analise van insekdoders as beheermaatstaf vir T. b. rhodesiense toon aan dat gedeeltelike behandeling van die bees se liggaam die goedkoper en meer voordelige strategie is in vergelyking met behandeling van die hele liggaam. Die resultate van die studie wys dat gedeeltelike behandeling van beeste met insekdoders 'n goedkoop, veilige en landbouer-gebaseerde strategie is om tsetse en tripanosomiase te beheer.
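The qualitative mechanism behind insecticide-treated cattle can be illustrated with a toy host-vector model: treated cattle add tsetse mortality in proportion to coverage, lowering the basic reproduction number until, above a threshold coverage, prevalence collapses. All parameter values below are invented for illustration and are not the calibrated values of the thesis:

```python
def cattle_prevalence(coverage, days=2000, dt=0.1):
    """Toy host-vector SIS sketch: a fraction `coverage` of cattle carries
    insecticide, adding tsetse mortality a * f_c * coverage (illustrative)."""
    a = 0.33          # tsetse biting rate per day (assumed)
    f_c = 0.7         # fraction of bites taken on cattle (assumed)
    b_c, b_v = 0.2, 0.065   # transmission probabilities (assumed)
    gamma = 0.005     # cattle recovery rate per day (assumed)
    mu_v = 0.03       # baseline tsetse death rate per day (assumed)
    mu = mu_v + a * f_c * coverage   # extra deaths from biting treated cattle
    i_c, i_v = 0.1, 0.01             # infected fractions: cattle, tsetse
    for _ in range(int(days / dt)):  # forward Euler integration
        di_c = a * f_c * b_c * i_v * (1 - i_c) - gamma * i_c
        di_v = a * f_c * b_v * i_c * (1 - i_v) - mu * i_v
        i_c += dt * di_c
        i_v += dt * di_v
    return i_c
```

With these made-up rates the disease persists at high prevalence with no coverage, while full coverage pushes the vector death rate high enough to drive infection out, mirroring the threshold behaviour the thesis quantifies for the calibrated Tororo model.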
|