81

Optimization with random error

Booth, Robin Geoffrey January 1968 (has links)
A new evolutionary operation called the complicial method is presented. The main criterion, which is adhered to, is that changes in the independent variables are restricted to a small step-size from a previous best trial. The complicial method is essentially a modification of the simplicial method proposed by Spendley, Hext and Himsworth, in which these authors employ regular arrays in a sequential search for the optimum. The complicial method differs from the simplicial method in that an irregular array is formed when (and only when) the last trial is proven to be the best of those previously tested. The design of this irregular array is such that a regular array can be formed when the last trial is proven not to be the best so far. The complicial method is compared to the simplicial method for a wide variety of response surfaces in both the absence and presence of random error. It is found that the complicial method is much more effective (i.e. the relative effectiveness is very large) for almost all the test response surfaces involving a small number of variables. Although an increase in the amount of random error decreases the effectiveness of both methods, the relative effectiveness generally remains unchanged. However, as the number of variables is increased the relative effectiveness is found to decrease markedly. This is explained by considerations of the basic design of the regular and irregular arrays. Because the complicial method sacrifices some of the simplicity characteristic of the simplicial method, it is recommended that the complicial method be applied only in situations where the relative effectiveness is very large. Therefore, this method is best used for all types of response surfaces involving a small number of variables. / Applied Science, Faculty of / Chemical and Biological Engineering, Department of / Graduate
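The simplicial search on which this work builds proceeds by reflecting the worst vertex of a fixed-size simplex through the centroid of the remaining vertices, so the array "rolls" toward the optimum in fixed steps. The Python sketch below illustrates that basic fixed-step search on a noisy response surface; it is an illustrative reconstruction only — the complicial modification with irregular arrays is not implemented, and the function and variable names are assumptions, not the thesis's code.

```python
import numpy as np

def simplicial_search(f, simplex, n_steps=200):
    """Minimise f by repeatedly reflecting the worst vertex of a fixed-size
    simplex through the centroid of the remaining vertices; the step size is
    fixed by the simplex, as in the Spendley, Hext and Himsworth search."""
    simplex = np.array(simplex, dtype=float)            # shape (n+1, n)
    values = np.array([f(v) for v in simplex])
    for _ in range(n_steps):
        worst = np.argmax(values)                        # vertex with the worst response
        centroid = (simplex.sum(axis=0) - simplex[worst]) / (len(simplex) - 1)
        reflected = 2.0 * centroid - simplex[worst]      # reflect the worst vertex
        f_reflected = f(reflected)
        if f_reflected >= values[worst]:                 # no improvement: stop this sketch
            break
        simplex[worst], values[worst] = reflected, f_reflected
    best = np.argmin(values)
    return simplex[best], values[best]

rng = np.random.default_rng(0)

def response(x):
    # Noisy quadratic response surface with its optimum near (1, -2).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.01 * rng.standard_normal()

start = [[0.0, 0.0], [0.5, 0.0], [0.25, 0.43]]           # roughly regular starting simplex
print(simplicial_search(response, start))
```

On this test surface the search settles within roughly one step-size of the optimum at (1, -2) despite the small added noise, which is the behaviour the thesis compares across regular and irregular arrays.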
82

The study of intermetallic particles in aluminium alloy AA3104 can-body stock during homogenisation

Magidi, Livhuwani Tessa January 2017 (has links)
Aluminium alloy AA3104 is commonly used for the manufacture of beverage can bodies. This alloy has excellent formability and strength properties. The evolution of the AA3104 microstructure and intermetallic particles during thermo-mechanical processing (TMP) has a direct impact on quality parameters, which influence the formability of the material during beverage can deep drawing and wall ironing. These parameters are earing, tear-off and galling resistance. During homogenisation of AA3104 direct chill (DC) ingot, there is a phase transformation from the β-Al₆(Fe,Mn) orthorhombic phase to the harder α-Alₓ(Fe,Mn)₃Si₂ cubic phase. Phase transformation occurs by diffusion of Si and Mn, where diffusion of Mn determines the rate of transformation. The presence of the α-phase intermetallic particles is crucial for galling resistance, thus improving the formability of the material. Ideal galling resistance requires 1-3% total volume fraction (VF) of intermetallic particles, 50% of which should be the harder α-phase. The homogenisation treatment variables, such as temperature, as well as the effect of the intermetallic particle VFs with the correct β to α ratio, are investigated. The aim of this research is to characterise intermetallic particles in the as-cast condition and investigate the evolution of particles as a result of a two-step homogenisation treatment, where the primary step temperature was varied between 560°C and 580°C, and the secondary step was performed at 520°C. The characterisation process involves particle phase identification using compositional and morphological analysis. A particle extraction setup is then used to extract intermetallic particles from the bulk specimen by dissolving the Al matrix in dry butanol, and those particles are analysed. The evolution of the volume fraction of particles and their distribution is then investigated using light microscopy, image analysis, XRD and the Rietveld method. The SEM micrographs show a larger quantity of smaller, more closely dispersed intermetallic particles at the edge of the ingot, compared to those at the centre. The β-Al₆(Fe,Mn) phase is more geometric in shape, while the α-Alₓ(Fe,Mn)₃Si₂ phase comprises isolated areas of Al matrix within the particle centres (Chinese-script-like). The phases are distinguished based on morphological identification using SEM and compositional identification using EDS, where Si content within the α-phase is used to differentiate between the phases. XRD patterns with the Rietveld method show the presence of β and α as the major phases within the homogenised specimens near the edge and at the centre. Phase quantification using 2-D analysis and particle extraction shows more α-phase near the edge and less α-phase at the centre. The two techniques agree in trend but differ in values. The particle extraction analysis is considered more trustworthy than 2-D particle analysis, where error is suggested to arise during thresholding in the 2-D microstructural analysis. Additionally, homogenisation at 580°C/520°C yields more α-phase than homogenisation at 560°C/520°C both near the edge and at the centre of the ingot.
Important observations emerge from this study: (i) microstructural [two-dimensional (2-D)] and particle extraction [three-dimensional (3-D)] techniques agree on microstructural qualification and differ slightly on particle quantification (the values obtained from the two techniques), (ii) both techniques show the presence of the α and β phases, as well as reveal the morphological differences within the particles, (iii) both techniques show similar trends of a high amount of β-phase in the as-cast condition and an increase in α-phase after homogenisation due to phase transformation; additionally, phase quantification reveals more α-phase near the edge and less at the centre, and (iv) homogenisation at 560°C/520°C yields an α-phase VF that is closer to the desired 50% α-phase fraction than homogenisation at 580°C/520°C. Therefore, homogenisation at 560°C/520°C is the better homogenisation treatment temperature option. Furthermore, both 2-D microstructural analysis and particle extraction analysis are reliable techniques that complement each other when qualitatively and quantitatively studying the evolution of intermetallic particles in aluminium alloy AA3104 can-body stock during homogenisation. However, particle extraction analysis has been shown to have higher accuracy and is thus deemed more reliable.
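The 2-D quantification referred to above rests on thresholding a micrograph and measuring the area fraction of particle pixels, which by stereology approximates the volume fraction; the abstract notes that the thresholding step is the likely source of error. A minimal Python sketch of that step is given below; the synthetic image, threshold value and function name are illustrative assumptions, not the thesis's actual analysis pipeline.

```python
import numpy as np

def area_fraction(micrograph, threshold):
    """Fraction of pixels brighter than `threshold`, taken as particle rather than
    Al matrix; by stereology this 2-D area fraction approximates the volume fraction."""
    mask = np.asarray(micrograph) > threshold
    return mask.mean()

# Synthetic micrograph: dark matrix with two bright rectangular "particles".
img = np.zeros((200, 200))
img[50:60, 50:60] = 1.0
img[120:130, 80:95] = 1.0
print(f"area fraction = {area_fraction(img, threshold=0.5):.4f}")
```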
83

Estimating dynamic affine term structure models

Pitsillis, Zachry Steven January 2015 (has links)
Duffee and Stanton (2012) demonstrated some pointed problems in estimating affine term structure models when the price of risk is dynamic, that is, risk-factor dependent. The risk-neutral parameters are estimated with precision, while the price-of-risk parameters are not. For the Gaussian models they investigated, these problems are replicated and are shown to stem from a lack of curvature in the log-likelihood function. This geometric difficulty of identifying the maximum of an essentially flat log-likelihood has statistical meaning: the Fisher information for the price-of-risk parameters is multiple orders of magnitude smaller than that of the risk-neutral parameters. Prompted by the recent results of Christoffersen et al. (2014), a remedy to the lack of curvature is attempted. An unscented Kalman filter is used to estimate models where the observations are portfolios of FRAs, swaps and zero-coupon bond options. While the unscented Kalman filter performs admirably in identifying the unobserved risk factor processes, there is little improvement in the Fisher information.
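The curvature argument can be made concrete by computing the observed Fisher information — the negative Hessian of the log-likelihood at the estimate — where directions in which the likelihood is nearly flat show up as very small entries. The Python sketch below does this by finite differences for a toy likelihood; it is illustrative only and does not reproduce the Gaussian term structure models or the unscented Kalman filter used in the dissertation.

```python
import numpy as np

def observed_fisher_information(loglik, theta_hat, h=1e-4):
    """Approximate the observed Fisher information as the negative Hessian of
    the log-likelihood at the estimate, using central finite differences."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    k = theta_hat.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (loglik(theta_hat + e_i + e_j) - loglik(theta_hat + e_i - e_j)
                       - loglik(theta_hat - e_i + e_j) + loglik(theta_hat - e_i - e_j)) / (4 * h * h)
    return -H

# Toy example: i.i.d. normal log-likelihood in (mu, log sigma).
rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.2, size=500)

def ll(theta):
    mu, log_s = theta
    s = np.exp(log_s)
    return np.sum(-0.5 * np.log(2 * np.pi) - log_s - 0.5 * ((x - mu) / s) ** 2)

info = observed_fisher_information(ll, [x.mean(), np.log(x.std())])
print(np.diag(info))   # very small diagonal entries would signal a flat likelihood direction
```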
84

Variable selection in logistic regression, with special application to medical data

Joubert, Georgina January 1994 (has links)
Bibliography: pages 121-126. / In this thesis, the various methods of variable selection which have been proposed in the statistical, epidemiological and medical literature for prediction and estimation problems in logistic regression will be described. The procedures will be applied to medical data sets. On the basis of the literature review as well as the applications to examples, strengths and weaknesses of the approaches will be identified. The procedures will be compared on the basis of the results obtained, their appropriateness for the specific aim of the analysis, and the demands they place on the analyst and researcher, intellectually and computationally. In particular, certain selection procedures using bootstrap samples, which have not been used before, will be investigated, and the partial Gauss discrepancy will be extended to the case of logistic regression. Recommendations will be made as to which approaches are the most suitable or most practical in different situations. Most statistical texts deal with issues regarding prediction, whereas the epidemiological literature focuses on estimation. It is therefore hoped that the thesis will be a useful reference for those, whether statistically or epidemiologically trained, who have to deal with issues regarding variable selection in logistic regression. When fitting models in general, and logistic regression models in particular, it is standard practice to determine the goodness of fit of models, and to ascertain whether outliers or influential observations are present in a data set. These aspects will not be discussed in this thesis, although they were considered when fitting the models.
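Bootstrap-based selection typically asks how often a variable is chosen when the selection step is repeated on resampled data. The Python sketch below illustrates the idea with an L1-penalised logistic regression as the selection step; this is a generic illustration, not the thesis's procedures (which include the partial Gauss discrepancy), and the simulated data, penalty strength and function names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_inclusion_frequencies(X, y, n_boot=200, seed=0):
    """For each bootstrap resample, fit an L1-penalised logistic regression and
    record which coefficients survive; the inclusion frequency across resamples
    is one simple measure of how stable a variable-selection result is."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # sample rows with replacement
        model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        model.fit(X[idx], y[idx])
        counts += (np.abs(model.coef_.ravel()) > 1e-8)
    return counts / n_boot

# Toy data: only the first two of five predictors matter.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 5))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)
print(bootstrap_inclusion_frequencies(X, y))         # informative variables show the highest frequencies
```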
85

Implementation of numerical Fourier method for second order Taylor schemes

Mashalaba, Qaphela 29 January 2020 (has links)
The problem of pricing contingent claims in a complete market has received a significant amount of attention in the literature since the seminal work of Black and Scholes (1973). It was also in 1973 that the theory of backward stochastic differential equations (BSDEs) was developed by Bismut (1973), but it was much later in the literature that BSDEs developed links to contingent claim pricing. This dissertation is a thorough exposition of the survey paper of Ruijter and Oosterlee (2016), in which a highly accurate and efficient Fourier pricing technique compatible with BSDEs is developed and implemented. We demonstrate our understanding of this technique by reproducing some of the numerical experiments and results in Ruijter and Oosterlee (2016) and outlining some key implementation considerations.
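The Fourier technique in question builds on the Fourier-cosine (COS) expansion, which recovers a density on a truncated interval from its characteristic function. A minimal Python sketch of that building block is shown below, checked against the standard normal density; it does not implement the BSDE scheme itself, and the truncation interval, series length and function names are illustrative choices.

```python
import numpy as np

def cos_density(phi, x, a, b, N=128):
    """Recover a density on [a, b] from its characteristic function `phi`
    via the Fourier-cosine (COS) expansion of Fang and Oosterlee."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))   # cosine coefficients
    F[0] *= 0.5                                                 # first term weighted by one half
    return F @ np.cos(np.outer(u, x - a))

# Check against the standard normal density.
phi_normal = lambda u: np.exp(-0.5 * u ** 2)
x = np.linspace(-3, 3, 7)
approx = cos_density(phi_normal, x, a=-10.0, b=10.0)
exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))                            # should be very small
```

The rapid decay of the cosine coefficients is what gives the method the high accuracy and efficiency referred to in the abstract.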
86

A post-crisis investigation into the performance of GARCH-based historical & analytical value-at-risk on the FTSE

De Alessi, Alessando January 2013 (has links)
Includes abstract. / Includes bibliographical references. / This paper is an investigation into the performance of GARCH-based VaR models on the South African FTSE/JSE Top 40 Index. Specifically, this paper investigates whether stability has returned to the VaR measure following its poor performance during the latest global financial crisis (2007). GARCH models are used in both an analytic and a historical approach for modelling 1%, 2.5% and 5% daily VaR over a three-year backtest period (2010-2012). Four distributions are used: the normal, the generalised error distribution, the t-distribution and the skewed t-distribution. A particular question asked by this paper is whether the data from the latest financial crisis (2007) should be used in estimating VaR in a post-crisis market. To investigate this, all models are re-estimated using data that has the financial crisis and/or high-volatility period removed, and the results across the two data sets are then compared. The take-away point from this research is that the volatility-clustering mechanism inherent in every GARCH model is capable of producing accurate VaR estimates in a post-downturn/lower-volatility market even when the data on which the model was estimated contains financial downturn/volatile data. There is strong evidence suggesting stability has returned to this measure; however, caution remains over the use of over-simplified models.
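In the analytic approach, a one-day VaR is obtained from the GARCH forecast of tomorrow's variance and a quantile of the assumed innovation distribution. The Python sketch below shows this for a GARCH(1,1) with normal innovations; the parameter values are illustrative and would in practice come from maximum-likelihood estimation, and the generalised-error and skewed-t distributions and the historical-simulation variant used in the paper are not shown.

```python
import numpy as np
from scipy.stats import norm

def garch_var(returns, omega, alpha, beta, level=0.01):
    """One-day-ahead parametric VaR from a GARCH(1,1) variance recursion with
    normal innovations: sigma2_{t+1} = omega + alpha*r_t**2 + beta*sigma2_t.
    Parameters are assumed to have been estimated elsewhere (e.g. by MLE)."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.var(r)                          # initialise at the sample variance
    for r_t in r:
        sigma2 = omega + alpha * r_t ** 2 + beta * sigma2
    return -norm.ppf(level) * np.sqrt(sigma2)   # reported as a positive loss threshold

# Toy example with simulated returns and illustrative parameter values.
rng = np.random.default_rng(3)
rets = 0.01 * rng.standard_normal(1000)
print(f"1% one-day VaR: {garch_var(rets, omega=1e-6, alpha=0.08, beta=0.90):.4%}")
```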
87

Interaction between firm-level variables and stock betas: a South African perspective

Yang, Yanni January 2011 (has links)
Includes abstract. / Includes bibliographical references (leaves 42-44). / This paper aims to determine the existence of the interaction between firm-level variables and stock betas in the South African equity market and, if it exists, to use this relationship to aid market participants in the investment process. This paper looks at the use of the Kalman filter in estimating stock betas that vary over time. A brief overview of the Kalman filter method is provided. In particular, this paper examines the impact of sub-sector betas and firm-specific variables on stock betas over the full period under study and over two market regimes to determine if the impact is dependent on the direction of the market.
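A common state-space formulation treats beta as a random walk observed through the market-model regression and filters it with a scalar Kalman recursion. The Python sketch below illustrates this on simulated data; the random-walk specification, noise variances and function names are assumptions for illustration and may differ from the specification used in the paper.

```python
import numpy as np

def kalman_beta(r_stock, r_market, Q=1e-5, R=1e-4, beta0=1.0, P0=1.0):
    """Filtered time-varying beta under the state-space model
    beta_t = beta_{t-1} + w_t (variance Q) and r_stock_t = beta_t * r_market_t + e_t (variance R)."""
    betas = np.empty(len(r_stock))
    beta, P = beta0, P0
    for t, (y, x) in enumerate(zip(r_stock, r_market)):
        P = P + Q                              # predict: random-walk state
        K = P * x / (x * x * P + R)            # Kalman gain
        beta = beta + K * (y - x * beta)       # update with the pricing error
        P = (1.0 - K * x) * P
        betas[t] = beta
    return betas

# Toy example: the true beta drifts from 0.8 to 1.2 over the sample.
rng = np.random.default_rng(4)
n = 500
r_m = 0.01 * rng.standard_normal(n)
true_beta = np.linspace(0.8, 1.2, n)
r_s = true_beta * r_m + 0.002 * rng.standard_normal(n)
print(kalman_beta(r_s, r_m, Q=1e-5, R=4e-6)[-5:])   # filtered betas track the upward drift
```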
88

The effect of security return dispersion on performance measurement in a South African context

Gething, Bryce A January 2014 (has links)
Includes bibliographical references. / This work replicates a study by de Silva et al. (2001), applied here to the South African market. De Silva et al. (2001) studied the effect of cross-sectional volatility (CSV) on fund managerial skill measurement. This led to the conjecture that increased fund performance dispersion was primarily due to higher CSV, and not to changes in informational efficiency or the range of managerial talent. In this dissertation we first critique the CSV-adjusted alpha as a measure of fund performance and show that it can only be used as a means of normalising fund performance, yet reveals very little with regard to managerial talent. Since fund performance is intrinsically linked to CSV, we find it difficult to disentangle the effects of CSV and managerial talent dispersion. Adjusting for CSV therefore also implies adjustment for managerial talent, and we conclude with ideas for how a CSV-adjusted alpha may be used to assess manager talent.
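Cross-sectional volatility is usually taken to be the dispersion of returns across securities at each date, and a CSV-adjusted alpha scales a fund's active return by that dispersion before averaging. The Python sketch below uses one common definition of each; the exact estimators used by de Silva et al. (2001) and in this dissertation may differ, and the simulated data and function names are illustrative.

```python
import numpy as np

def cross_sectional_volatility(returns):
    """Per-period cross-sectional volatility: the standard deviation of returns
    across securities at each date. `returns` has shape (n_periods, n_stocks)."""
    return np.std(returns, axis=1, ddof=1)

def csv_adjusted_alpha(fund_active_returns, csv):
    """Scale each period's active return by that period's cross-sectional volatility
    before averaging, one simple way of normalising performance for the opportunity
    set available to managers in that period."""
    return np.mean(fund_active_returns / csv)

# Toy example: 60 months, 100 stocks, one fund with a small constant active return.
rng = np.random.default_rng(5)
R = 0.02 * rng.standard_normal((60, 100))
csv = cross_sectional_volatility(R)
fund_active = 0.001 + 0.005 * rng.standard_normal(60)
print(csv_adjusted_alpha(fund_active, csv))
```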
89

Bid-Ask Spread Modelling in the South African Bond Market

Shaw, Matthew 11 February 2019 (has links)
Pitsillis and Taylor (2014) calculate bid-ask spread estimates of South African government bonds over a single year, using the models of De Jong and Rindi (2009) and Huang and Stoll (1997). This dissertation tests the effectiveness of both models by comparing the modelled equity spread estimates against the actual equity spread estimates. Furthermore, this dissertation investigates the stability of the De Jong and Rindi (2009) and Huang and Stoll (1997) models in the bond market by extending the spread estimate dataset to run annually over five years. The final section of this dissertation proposes a new method of estimating the bond spread through the use of a Kalman filter, as it can be used to leverage information from an on-screen market (albeit a different market) to imply bid-ask spread estimates in an off-screen market. The results indicate that the Huang and Stoll (1997) model consistently outperforms the De Jong and Rindi (2009) model. Furthermore, the yield estimate results of Pitsillis and Taylor (2014) align with the results obtained in this dissertation. The spread estimate results are stable over the five-year period, indicating a strong provision of liquidity by the Primary Dealers.
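Covariance-based spread models of this kind descend from the classic Roll (1984) estimator, which infers the effective spread from the negative first-order autocovariance of successive price changes. The Python sketch below shows that simpler relative on simulated trades; it is not an implementation of the De Jong and Rindi (2009), Huang and Stoll (1997) or Kalman-filter models examined in the dissertation.

```python
import numpy as np

def roll_spread(prices):
    """Roll (1984) estimator: the effective bid-ask spread implied by the negative
    first-order autocovariance of price changes, s = 2*sqrt(-cov(dp_t, dp_{t-1})).
    Returns nan when the autocovariance is positive and the estimator is undefined."""
    dp = np.diff(np.asarray(prices, dtype=float))
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-cov) if cov < 0 else float("nan")

# Toy example: a slowly drifting mid price observed alternately at bid and ask.
rng = np.random.default_rng(6)
true_spread = 0.10
mid = 100 + np.cumsum(0.01 * rng.standard_normal(2000))
side = rng.choice([-0.5, 0.5], size=2000)          # trade at bid or ask with equal probability
trades = mid + side * true_spread
print(roll_spread(trades))                          # should be close to 0.10
```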
90

Volatility derivatives in the Heston framework

Kriel, Hiltje January 2014 (has links)
Includes bibliographical references. / A volatility derivative is a financial contract whose payoff depends on the realised variance of a specified asset's returns. As volatility is in reality a stochastic variable, not deterministic as assumed in the Black-Scholes model, market participants may well find volatility derivatives useful for hedging and speculation. This study explores the construction and calibration of the Heston stochastic volatility model and the pricing of some volatility derivatives within this framework.
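One of the simplest volatility derivatives is a variance swap, whose fair strike under Heston dynamics has a closed form, K_var = θ + (v₀ − θ)(1 − e^{−κT})/(κT). The Python sketch below computes it and cross-checks the value with a crude Euler Monte Carlo simulation; the parameter values are illustrative, and the dissertation's calibration and the pricing of other volatility derivatives are not reproduced.

```python
import numpy as np

def heston_variance_swap_strike(v0, kappa, theta, T):
    """Fair strike of a continuously sampled variance swap under Heston dynamics
    dv_t = kappa*(theta - v_t)dt + sigma*sqrt(v_t)dW_t, using
    K_var = theta + (v0 - theta)*(1 - exp(-kappa*T))/(kappa*T)."""
    return theta + (v0 - theta) * (1.0 - np.exp(-kappa * T)) / (kappa * T)

# Cross-check against a Monte Carlo estimate of the time-averaged expected variance.
v0, kappa, theta, sigma, T = 0.04, 2.0, 0.09, 0.3, 1.0
n_paths, n_steps = 20000, 500
dt = T / n_steps
rng = np.random.default_rng(7)
v = np.full(n_paths, v0)
avg_var = np.zeros(n_paths)
for _ in range(n_steps):
    avg_var += v * dt / T
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    v = np.maximum(v + kappa * (theta - v) * dt + sigma * np.sqrt(np.maximum(v, 0.0)) * dW, 0.0)
print(heston_variance_swap_strike(v0, kappa, theta, T), avg_var.mean())
```

The two numbers should agree to within Monte Carlo and discretisation error, which is a useful sanity check before moving to derivatives without closed-form prices.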
