  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

News media, asset prices and capital flows: evidence from a small open economy

Sher, Galen January 2017 (has links)
Objectives: This work investigates the role for the content of print news media in determining asset prices and capital flows in a small open economy (South Africa). Specifically, it examines how much of the daily variation in stock prices, bond prices, trading volume and capital flows can be explained by phrases in the print news media. Furthermore, this work links such evidence to the existing theoretical and empirical literature. Methods: This work employs natural language processing techniques for counting words and phrases within articles published in national newspapers. Variance decompositions of the resulting word and phrase counts summarise the information extracted from national newspapers in this way. Following previous studies of the United States, least squares regression relates stock returns to single positive or negative 'sentiment' factors. New in this study, support vector regression relates South African stock returns, bond returns and capital flows to the high-dimensional word and phrase counts from national newspapers. Results: I find that domestic asset prices and capital flows between residents and non-residents reflect the content of domestic print news media. In particular, I find that the contents of national newspapers can predict 9 percent of the variation in daily stock returns one day ahead and 7 percent of the variation in the daily excess return of long-term bonds over short-term bonds three days ahead. This predictability in stocks and bonds coincides with predictability of the content of domestic print news media for net equity and debt portfolio capital inflows, suggesting that the domestic print news media affects foreign residents' demand for domestic assets. Moreover, predictability of domestic print news media for near future stock returns is driven by emotive language, suggesting a role for 'sentiment', while such predictability for stock returns further ahead and the premium on long-term bonds is driven by non-emotive language, suggesting a role for other media factors in determining asset prices. These results do not seem to reflect a purely historical phenomenon, finite-sample biases, reverse causality, serial correlation, volatility or day-of-the-week effects. The results support models where foreign agents' short-run beliefs or preferences respond to the content of domestic print news media heterogeneously from those of domestic agents, while becoming more homogeneous in the medium term.
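No code accompanies this record. As a rough, hedged illustration of the kind of pipeline the abstract describes (counting words and phrases in daily newspaper text and relating them to next-day stock returns with support vector regression), the Python sketch below uses scikit-learn; the file names, column names and parameter values are hypothetical and not taken from the thesis.

```python
# A minimal sketch, not the author's pipeline: regress next-day returns on
# daily word/phrase counts with support vector regression.  File and column
# names ('daily_news.csv', 'text', 'return') are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

news = pd.read_csv("daily_news.csv", parse_dates=["date"]).set_index("date")
returns = pd.read_csv("daily_returns.csv", parse_dates=["date"]).set_index("date")["return"]

# Count single words and two-word phrases in each day's concatenated articles.
vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=20)
X = vectorizer.fit_transform(news["text"]).toarray()

# Target: the next day's return, aligned to each day's news.
y = returns.shift(-1).reindex(news.index)
mask = y.notna().to_numpy()
X, y = X[mask], y[mask]

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
model.fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```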
182

Analysing the structure and nature of medical scheme benefit design in South Africa

Kaplan, Josh Tana January 2015 (has links)
Includes bibliographical references / This dissertation intends to shed light on open-membership medical scheme benefit design in South Africa. This will be done by analysing the benefit design of 118 benefit options, so as to provide an overview of the structure and nature of the benefit offerings available in the market in 2014. In addition, affordability of these benefit options was analysed in order to identify whether there are connections between the benefits on offer and the price of cover. This paper will argue that at present, the large number of benefit options available in the market and the lack of standardisation between benefit options, together with the mosaic of confusing terminology employed in scheme brochures, create a highly complex environment that hampers consumer decision-making. However, this implicit complexity was found to be necessary owing to the incomplete regulatory environment surrounding medical schemes. The findings of this investigation show that benefit design requires significant attention in order to facilitate equitable access to cover in South Africa.
183

Initial Security Classification in Canadian Prisons: A Qualitative Content Analysis Examining Actuarial Risk Assessment Tools as Reproducing a Settler Colonial Logic of Elimination

Malalla, Sahr 06 January 2022 (has links)
Actuarial risk assessment tools have been part of the initial security classification process in Canadian prisons since the 1990s. Developed initially on a white, homogenous male prison population (Hannah-Moffat, 2015b), actuarial instruments have been championed by researchers in the fields of corrections and psychology as “objective” instruments that can standardize the classification procedure (Andrews et al., 1990; Barnum & Gobeil, 2012). However, the universal application of such tools has been met with resistance, criticized for not having been validated on an Indigenous prison population and thus being culturally inappropriate for use (Martel et al., 2011; Monture-Angus, 1999; Webster & Doob, 2004). This thesis examines how Correctional Service Canada (CSC) has legitimated the use of actuarial tools in its initial security classification and penitentiary placement procedure. Guided by the theoretical framework of governmentality (Foucault, 1991) and the logic of elimination (Wolfe, 1994; 2006), this study undertakes a qualitative content analysis of seven CSC research documents that evaluated the empirical validity and reliability of the Custody Rating Scale (CRS), a 12-item structured instrument that calculates a prisoner’s recommended security classification level. I argue that, in the process of legitimating actuarial instruments by appealing to justifications grounded in an actuarial rationality, CSC simultaneously facilitates the ontological erasure of Indigenous people in prison that is consistent with a logic of elimination inherent in settler colonial societies.
184

A General Approach to Buhlmann Credibility Theory

Yan, Yujie 08 1900 (has links)
Credibility theory is widely used in insurance. It is included in the Society of Actuaries examinations and in the construction and evaluation of actuarial models. In particular, the Buhlmann credibility model has played a fundamental role in both actuarial theory and practice. It provides a mathematically rigorous procedure for deciding how much credibility should be given to the actual experience rating of an individual risk relative to the manual rating common to a particular class of risks. However, for any selected risk, the Buhlmann model assumes that the outcome random variables in both experience periods and future periods are independent and identically distributed. In addition, the Buhlmann method uses sample mean-based estimators to insure the selected risk, which may be poor estimators of future costs if only a few observations of past events (costs) are available. We present an extension of the Buhlmann model and propose a general method based on a linear combination of both robust and efficient estimators in a dependence framework. The performance of the proposed procedure is demonstrated by Monte Carlo simulations.
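The thesis's extension is not reproduced here, but the classical Buhlmann structure it builds on is standard: the credibility factor is Z = n/(n + k), where k is the ratio of the expected process variance to the variance of the hypothetical means, and the credibility premium blends the individual mean with the collective mean. A minimal Python sketch, using made-up claims data rather than anything from the thesis:

```python
# Hedged sketch of the classical Buhlmann credibility premium, estimated
# nonparametrically from a rectangular claims array (rows = risks,
# columns = experience years).  The data below are invented.
import numpy as np

claims = np.array([
    [12.0, 15.0, 11.0, 14.0],   # risk 1, four years of experience
    [30.0, 28.0, 35.0, 33.0],   # risk 2
    [20.0, 22.0, 18.0, 21.0],   # risk 3
])
r, n = claims.shape

risk_means = claims.mean(axis=1)              # individual experience ratings
grand_mean = claims.mean()                    # collective (manual) rate

# v: expected process variance, averaged over risks.
v = claims.var(axis=1, ddof=1).mean()
# a: variance of hypothetical means (method-of-moments estimate).
a = max(risk_means.var(ddof=1) - v / n, 0.0)

k = v / a                                     # Buhlmann's k
Z = n / (n + k)                               # credibility factor

premiums = Z * risk_means + (1 - Z) * grand_mean
print("Z =", round(Z, 3))
print("credibility premiums:", np.round(premiums, 2))
```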
185

Actuarial Risk Assessment and Ontario Child Welfare Transformation: A Paradox of Purpose

McVeigh, Palmer E Mary 10 1900 (has links)
When Ontario Child Welfare Transformation was initiated, a new risk assessment document was implemented. This study explored the opinions of protection staff on the present risk assessment. Participants’ negative opinions about the document contrast sharply with their enthusiasm over transformation initiatives and illustrate a paradox between the document and transformation goals. Their concerns about the document and its standardized approach centre on the theme of what is lost in that approach. In particular, they spoke to the present risk assessment missing an assessment of the strengths and protective factors that could mitigate risk, and to its failure to capture the spirit and goals of transformation. They expressed concern that it misses the whole picture, the social context and the ambiguity of many situations. It fails to account for progress, adaptation and flexibility, focusing instead on static, fixed and unchangeable factors. They also expressed concern about the lack of written space for workers to explain context. Workers noted that families’ views are absent from the fixed categories, and they questioned claims about the document’s accuracy. Workers also noted that the standardized and rigid approach of the document failed to take into account structural and anti-oppressive considerations. As a result, workers felt much was lost from the document.

Workers found ways to resist having their assessments constrained. Their attempts to assess risk outside the box were varied and creative and made use of a wide range of tools, paradigms and experiences. Workers’ own assessments of risk were more consistent with transformation initiatives than that of the document. Workers offered suggestions as to how they would improve risk assessments in child protection.

The shifts in paradigms in child protection, the debate over how to assess for risk, and the factors presently absent from the risk assessment highlight possibilities of other ways of assessing risk and intervening. This study offers an interpretive account of the present actuarial risk assessment document from the perspective of protection staff. The concerns raised with the document infuse the paper with critical questions regarding the objectivity and validity of positivistic approaches, the utility and futility of standardized approaches, and the existence of competing claims and discourses regarding risks to children. Their concerns beg the question: could there be other possibilities to assess risk and intervene in a manner more consistent with transformation initiatives and anti-oppressive practice? / Master of Social Work (MSW)
186

The valuation of no-negative equity guarantees and equity release mortgages

Dowd, K., Buckner, D., Blake, D., Fry, John 05 January 2020 (has links)
Yes / We outline the valuation process for a No-Negative Equity Guarantee in an Equity Release Mortgage loan and for an Equity Release Mortgage that has such a guarantee. Illustrative valuations are provided based on the Black ’76 put pricing formula and mortality projections based on the M5, M6 and M7 mortality versions of the Cairns–Blake–Dowd (CBD) family of mortality models. Results indicate that the valuations of No-Negative Equity Guarantees are high relative to loan amounts and subject to considerable model risk but that the valuations of Equity Release Mortgage loans are robust to the choice of mortality model. Results have significant ramifications for industry practice and prudential regulation.
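The paper's full valuation is not shown in this record; the Black '76 put formula it builds on is standard, however, and the Python sketch below prices a single put with illustrative inputs, without the mortality-weighted decrement structure the paper applies:

```python
# Hedged sketch: Black '76 put value, the building block used for the NNEG.
# Parameter values are illustrative only; the mortality-weighted summation
# over possible exit dates described in the paper is not reproduced.
from math import exp, log, sqrt
from statistics import NormalDist

def black76_put(forward, strike, vol, rate, maturity):
    """Discounted expected payoff max(K - F_T, 0) under Black '76."""
    N = NormalDist().cdf
    d1 = (log(forward / strike) + 0.5 * vol ** 2 * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return exp(-rate * maturity) * (strike * N(-d2) - forward * N(-d1))

# Example: forward property value 100, rolled-up loan balance 90 at a 15-year
# horizon, 13% volatility, 1.5% risk-free rate (all illustrative).
print(round(black76_put(forward=100.0, strike=90.0, vol=0.13, rate=0.015, maturity=15.0), 3))
```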
187

Evaluating the Predictive Power and suitability of Mortality Rate Models : A Comparison of Makeham and Lee-Carter for Life Insurance Applications

Ljunggren, Carl January 2024 (has links)
Life insurance companies rely on mortality rate models to set appropriate premiums for their services. Over the past century, average life expectancy has increased and continues to do so, necessitating more accurate models. Two commonly used models are the Gompertz-Makeham law of mortality and the Lee-Carter model. The Gompertz-Makeham model depends solely on an age variable, while the Lee-Carter model incorporates a time-varying aspect that accounts for the increase in life expectancy over time. This paper constructs both models using training data acquired from Skandia Mutual Life Insurance Company and compares them against validation data from the same set. The study suggests that the Lee-Carter model may be able to offer some improvements compared to the Gompertz-Makeham law of mortality in terms of predicting future mortality rates. However, due to a lack of high-quality data, creating a competitive Lee-Carter model through Singular Value Decomposition (SVD) proved to be problematic. Switching from the current Gompertz-Makeham model to the Lee-Carter model should, therefore, be explored further when more high-quality data becomes available.
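As a rough illustration of the SVD step the abstract mentions, the Python sketch below fits the Lee-Carter decomposition log m(x,t) = a_x + b_x k_t to a synthetic log-mortality matrix; the Skandia data and the thesis's actual fitting choices are not reproduced here.

```python
# A minimal sketch of the Lee-Carter fit via singular value decomposition:
# log m(x,t) = a_x + b_x * k_t.  The mortality surface below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
ages, years = 60, 30
true_ax = np.linspace(-7.0, -1.5, ages)              # log-rates rising with age
true_kt = np.linspace(2.0, -2.0, years)              # mortality improving over time
log_m = (true_ax[:, None] + 0.02 * true_kt[None, :]
         + 0.01 * rng.standard_normal((ages, years)))

a_x = log_m.mean(axis=1)                             # average age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)

b_x, k_t = U[:, 0], s[0] * Vt[0, :]                  # leading rank-1 component
# Impose the usual constraints sum(b_x) = 1 and sum(k_t) = 0 without
# changing the fitted surface a_x + b_x * k_t.
k_t *= b_x.sum()
b_x /= b_x.sum()
a_x += b_x * k_t.mean()
k_t -= k_t.mean()

fitted = a_x[:, None] + np.outer(b_x, k_t)
print("mean absolute fit error:", round(float(np.abs(fitted - log_m).mean()), 4))
```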
188

A framework for estimating risk

Kroon, Rodney Stephen 03 1900 (has links)
Thesis (PhD (Statistics and Actuarial Sciences))--Stellenbosch University, 2008. / We consider the problem of model assessment by risk estimation. Various approaches to risk estimation are considered in a unified framework. This framework is an extension of a decision-theoretic framework proposed by David Haussler. Point and interval estimation based on test samples and training samples is discussed, with interval estimators being classified based on the measure of deviation they attempt to bound. The main contribution of this thesis is in the realm of training sample interval estimators, particularly covering number-based and PAC-Bayesian interval estimators. The thesis discusses a number of approaches to obtaining such estimators. The first type of training sample interval estimator to receive attention is estimators based on classical covering number arguments. A number of these estimators were generalized in various directions. Typical generalizations included: extension of results from misclassification loss to other loss functions; extending results to allow arbitrary ghost sample size; extending results to allow arbitrary scale in the relevant covering numbers; and extending results to allow arbitrary choice of parameter in the use of symmetrization lemmas. These extensions were applied to covering number-based estimators for various measures of deviation, as well as for the special cases of misclassification loss estimators, realizable case estimators, and margin bounds. Extended results were also provided for stratification by (algorithm- and data-dependent) complexity of the decision class. In order to facilitate application of these covering number-based bounds, a discussion of various complexity dimensions and approaches to obtaining bounds on covering numbers is also presented. The second type of training sample interval estimator discussed in the thesis is Rademacher bounds. These bounds use advanced concentration inequalities, so a chapter discussing such inequalities is provided. Our discussion of Rademacher bounds leads to the presentation of an alternative, slightly stronger, form of the core result used for deriving local Rademacher bounds, by avoiding a few unnecessary relaxations. Next, we turn to a discussion of PAC-Bayesian bounds. Using an approach developed by Olivier Catoni, we develop new PAC-Bayesian bounds based on results underlying Hoeffding's inequality. By utilizing Catoni's concept of "exchangeable priors", these results allowed the extension of a covering number-based result to averaging classifiers, as well as its corresponding algorithm- and data-dependent result. The last contribution of the thesis is the development of a more flexible shell decomposition bound: by using Hoeffding's tail inequality rather than Hoeffding's relative entropy inequality, we extended the bound to general loss functions, allowed the use of an arbitrary number of bins, and introduced between-bin and within-bin "priors". Finally, to illustrate the calculation of these bounds, we applied some of them to the UCI spam classification problem, using decision trees and boosted stumps.
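No formulas survive in this record. As a minimal illustration of the simplest test-sample interval estimator covered by such a framework (a textbook Hoeffding bound, not the thesis's covering-number, Rademacher or PAC-Bayesian results), consider the Python sketch below.

```python
# Hedged sketch: a one-sided Hoeffding bound for a loss bounded in [0, 1],
# the simplest test-sample interval estimator.  Not taken from the thesis.
from math import log, sqrt

def hoeffding_upper_bound(empirical_risk: float, n: int, delta: float) -> float:
    """With probability >= 1 - delta over an i.i.d. test sample of size n,
    the true risk is at most empirical_risk + sqrt(ln(1/delta) / (2n))."""
    return empirical_risk + sqrt(log(1.0 / delta) / (2.0 * n))

# Example: 8% observed error on 2,000 held-out points, 95% confidence.
print(round(hoeffding_upper_bound(0.08, 2000, 0.05), 4))  # ~0.1074
```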
189

An analysis of income and poverty in South Africa

Malherbe, Jeanine Elizabeth 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2007. / The aim of this study is to assess the welfare of South Africa in terms of poverty and inequality. This is done using the Income and Expenditure Survey (IES) of 2000, released by Statistics South Africa, and reviewing the distribution of income in the country. A brief literature review of similar studies is given along with a broad definition of poverty and inequality. A detailed description of the dataset used is given together with aspects of concern surrounding the dataset. An analysis of poverty and income inequality is made using datasets containing the continuous income variable, as well as a created grouped income variable. Results from these datasets are compared and conclusions made on the use of continuous or grouped income variables. Covariate analysis is also applied in the form of biplots. A brief overview of biplots is given and it is then used to obtain a graphical description of the data and identify any patterns. Lastly, the conclusions made in this study are put forward and some future research is mentioned.
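The abstract does not name the specific poverty and inequality measures computed; as one standard choice for the inequality side, the Python sketch below computes a Gini coefficient on made-up incomes rather than the IES 2000 data.

```python
# Hedged sketch: a Gini coefficient from individual incomes.  The incomes
# below are invented; this is only one standard inequality measure.
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient via the rank-based formulation on sorted incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n, with i = 1..n.
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

incomes = [1200, 800, 15000, 3000, 2500, 600, 40000, 5000]
print(round(gini(incomes), 3))
```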
190

Value at risk and expected shortfall : traditional measures and extreme value theory enhancements with a South African market application

Dicks, Anelda 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Accurate estimation of Value at Risk (VaR) and Expected Shortfall (ES) is critical in the management of extreme market risks. These risks occur with small probability, but the financial impacts could be large. Traditional models to estimate VaR and ES are investigated. Following usual practice, 99% 10-day VaR and ES measures are calculated. A comprehensive theoretical background is first provided and then the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed (i.i.d.) models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory (EVT) models that focus especially on extreme market returns are also investigated. For this, the Peaks Over Threshold (POT) approach to EVT is followed. For the calculation of VaR, various scaling methods from one day to ten days are considered and their performance evaluated. The GARCH models fail to converge during periods of extreme returns. During these periods, EVT forecast results may be used. As a novel approach, this study considers the augmentation of the GARCH models with EVT forecasts. The two-step procedure of pre-filtering with a GARCH model and then applying EVT, as suggested by McNeil (1999), is also investigated. This study identifies some of the practical issues in model fitting. It is shown that no single forecasting model is universally optimal and the choice will depend on the nature of the data. For this data series, the best approach was to augment the GARCH stochastic volatility models with EVT forecasts during periods where the former do not converge. Model performance is judged by the actual number of VaR and ES violations compared to the expected number. The expected number is taken as the number of return observations over the entire sample period, multiplied by 0.01 for 99% VaR and ES calculations. / AFRIKAANSE OPSOMMING: Accurate estimation of Value at Risk and Expected Shortfall is critical for the management of extreme market risks. These risks occur with small probability, but the financial impacts are potentially large. Traditional models for estimating Value at Risk and Expected Shortfall are investigated. In line with common practice, 99% 10-day measures are calculated. A comprehensive theoretical background is given first, after which the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory models, which focus specifically on extreme market returns, are also investigated. In this regard, the Peaks Over Threshold (POT) approach to Extreme Value Theory is followed. For the calculation of Value at Risk, various scaling methods from one day to ten days are considered and the performance of each is evaluated. The GARCH models do not converge during periods of extreme returns. During these periods, Extreme Value Theory models may be used. As a new approach, this study considers the augmentation of the GARCH models with Extreme Value Theory forecasts. The so-called two-step procedure of pre-filtering with a GARCH model, followed by the application of Extreme Value Theory (as suggested by McNeil, 1999), is also investigated. This study identifies some of the practical issues in model fitting. It is shown that no single forecasting model is universally optimal and the choice of model depends on the nature of the data. The best approach for the data series used in this study was to augment the GARCH stochastic volatility models with Extreme Value Theory forecasts where the former do not converge. Model performance is judged by comparing the actual number of Value at Risk and Expected Shortfall violations with the expected number. The expected number is taken as the number of return observations over the entire sample period, multiplied by 0.01 for the 99% Value at Risk and Expected Shortfall calculations.
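No code accompanies this record. The Python sketch below illustrates only the peaks-over-threshold step named in the abstract, on simulated losses with an illustrative threshold and confidence level; the GARCH pre-filtering and the GARCH/EVT augmentation the study performs are not reproduced.

```python
# A minimal POT sketch: fit a generalised Pareto distribution to losses above
# a threshold and read off VaR and ES.  Losses are simulated (Student-t), the
# 95% threshold and 99% level are illustrative, and no GARCH filtering is done.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = 0.01 * rng.standard_t(4, size=5000)         # synthetic daily losses

u = np.quantile(losses, 0.95)                        # threshold
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0.0)   # shape xi, scale beta

p = 0.99
n, n_u = losses.size, exceedances.size
var_p = u + beta / xi * ((n / n_u * (1 - p)) ** (-xi) - 1.0)
es_p = (var_p + beta - xi * u) / (1.0 - xi)          # valid for xi < 1

print("99% one-day VaR:", round(float(var_p), 4))
print("99% one-day ES :", round(float(es_p), 4))
```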
