  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Active fund management and cross-sectional variance of returns

Chan, Ching Yee 16 February 2013 (has links)
In active portfolio management, fund managers follow an investment strategy with the objective of outperforming a benchmark index. Opportunities to outperform a benchmark in active fund management are made possible by cross-sectional dispersion of returns in the market: it is cross-sectional volatility of returns that allows fund managers to identify changing trends in market relationships and to take advantage of market opportunities. Quarterly active share and active return data for Domestic General Equity funds were used to determine whether the level of active share and active return is correlated with volatility measures such as the cross-sectional variance of returns or the South African Volatility Index (SAVI). The actively managed funds’ outperformance of the benchmark index during periods of differing cross-sectional variance was also examined, as was the possibility of using market volatility to inform fund investment decisions. The findings of this study are that there is no significant relationship between the cross-sectional variance of returns, active share and active returns. When fund performance is measured in intervals of differing cross-sectional dispersion, rather than as a continuous time series, active funds outperform the benchmark index during periods of low and moderate cross-sectional variance. The SAVI can be used as a fairly accurate and readily available approximation of cross-sectional variance. / Dissertation (MBA)--University of Pretoria, 2012. / Gordon Institute of Business Science (GIBS) / unrestricted
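The cross-sectional variance this abstract refers to can be computed per period as the dispersion of individual fund returns around the equal-weighted cross-sectional mean. A minimal sketch of that calculation (the fund returns below are made-up numbers, not data from the study):

```python
def cross_sectional_variance(returns_by_period):
    """For each period, the population variance of returns across funds,
    measured around the equal-weighted cross-sectional mean."""
    out = []
    for returns in returns_by_period:
        mean = sum(returns) / len(returns)
        out.append(sum((r - mean) ** 2 for r in returns) / len(returns))
    return out

# Hypothetical quarterly returns for four funds over two quarters.
quarters = [
    [0.02, 0.05, -0.01, 0.04],
    [0.03, 0.03, 0.03, 0.03],  # no dispersion: variance is zero
]
csv = cross_sectional_variance(quarters)
```

High values of this series signal quarters in which stock-picking opportunities were plentiful, which is the quantity the study correlates with active share and active return.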
222

Coupled Sampling Methods For Filtering

Yu, Fangyuan 13 March 2022 (has links)
More often than not, we cannot directly measure many phenomena that are crucial to us. However, we usually have access to certain partial observations of the phenomena of interest, as well as a mathematical model of them. The filtering problem seeks estimation of the phenomena given all the accumulated partial information. In this thesis, we study several topics concerning the numerical approximation of the filtering problem. First, we study the continuous-time filtering problem. Given high-frequency observations in discrete time, we perform a double discretization of the non-linear filter to allow filter estimation with a particle filter. By using the multilevel strategy, given any ε > 0, our algorithm achieves an MSE level of O(ε²) at a cost of O(ε⁻³), while the particle filter requires a cost of O(ε⁻⁴). Second, we propose a de-biasing scheme for the particle filter under the partially observed diffusion model. The novel scheme is free of the innate particle filter bias and discretization bias, through a double randomization method of [14]. Our estimator is perfectly parallel and achieves a cost reduction similar to the multilevel particle filter. Third, we look at a high-dimensional linear Gaussian state-space model in continuous time. We propose a novel multilevel estimator which requires a cost of O(ε⁻² log(ε)²), compared to ensemble Kalman–Bucy filters (EnKBFs), which require O(ε⁻³) for an MSE target of O(ε²). Simulation results verify our theory for models of dimension ∼ 10⁶. Lastly, we consider model estimation through learning an unknown parameter that characterizes the partially observed diffusions. We propose algorithms that provide unbiased estimates of the Hessian and the inverse Hessian, which allows second-order optimization for parameter learning in the model.
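The multilevel and de-biased estimators in this thesis build on the basic bootstrap particle filter. A minimal sketch of that baseline for a one-dimensional linear-Gaussian state-space model (model, parameters, and particle count are illustrative, not the thesis's setup):

```python
import math
import random

def bootstrap_particle_filter(ys, n_particles=500, a=0.9, sx=1.0, sy=0.5, seed=0):
    """Estimate E[X_t | y_{1:t}] for X_t = a*X_{t-1} + N(0, sx^2),
    Y_t = X_t + N(0, sy^2), via sequential importance resampling."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate particles through the state dynamics
        xs = [a * x + rng.gauss(0.0, sx) for x in xs]
        # weight each particle by the observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in xs]
        total = sum(ws)
        ws = [w / total for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling to avoid weight degeneracy
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return means

est = bootstrap_particle_filter([1.0, 1.2, 0.8, 1.1])
```

For a partially observed diffusion, the propagation step would itself be a time-discretized simulation of the SDE, which is the source of the discretization bias the thesis's schemes remove.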
223

Estimating the Variance of the Sample Median

Price, Robert M., Bonett, Douglas G. 01 January 2001 (has links)
The small-sample bias and root mean squared error of several distribution-free estimators of the variance of the sample median are examined. A new estimator is proposed that is easy to compute and tends to have the smallest bias and root mean squared error.
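One standard distribution-free baseline against which such estimators are compared is the bootstrap: resample the data with replacement, recompute the median, and take the variance of the replicates. A sketch of that baseline (not the paper's proposed closed-form estimator), under assumed toy data:

```python
import random
import statistics

def bootstrap_median_variance(sample, n_boot=2000, seed=0):
    """Distribution-free bootstrap estimate of Var(sample median):
    resample with replacement, recompute the median, and take the
    population variance of the bootstrap replicates."""
    rng = random.Random(seed)
    n = len(sample)
    meds = [statistics.median(rng.choices(sample, k=n)) for _ in range(n_boot)]
    return statistics.pvariance(meds)

# Toy sample of 50 standard-normal draws; the asymptotic variance of the
# median here is pi/(2n), roughly 0.031.
rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(50)]
var_hat = bootstrap_median_variance(sample)
```

The small-sample bias the paper studies is visible in exactly this regime: for small n, resampled medians take few distinct values, so simple estimators of this kind can be noticeably biased.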
224

Pricing and hedging variance swaps using stochastic volatility models

Bopoto, Kudakwashe January 2019 (has links)
In this dissertation, the pricing of variance swaps under the stochastic volatility models of Barndorff-Nielsen and Shephard (2001) and Heston (1993) is discussed. The choice of these models is a result of properties they possess which position them as an improvement on the traditional Black-Scholes (1973) model. Furthermore, the popularity of these models in the literature makes them particularly attractive. A lot of work has been done in the area of pricing variance swaps since their inception in the late 1990s. The growth in the number of variance contracts written came as a result of investors’ increasing need to be hedged against exposure to future variance fluctuations. The task at the core of this dissertation is to derive closed or semi-closed form expressions for the fair price of variance swaps under the two stochastic models. Although various researchers have shown that stochastic models produce close-to-market results, it is more desirable to obtain the fair price of variance derivatives using models under which no assumptions about the dynamics of the underlying asset are made. This is achieved through a useful analytical formula derived by Demeterfi, Derman, Kamal and Zou (1999), in which a variance swap is hedged through a finite portfolio of European call and put options of different strike prices. This scheme is practically explored in an example. Lastly, conclusions on pricing using each of the methodologies are given. / Dissertation (MSc)--University of Pretoria, 2019. / Mathematics and Applied Mathematics / MSc (Financial Engineering) / Unrestricted
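The model-free replication idea credited here to Demeterfi, Derman, Kamal and Zou weights out-of-the-money options by the inverse square of their strikes. A simplified sketch of one common discretization, on a uniform strike grid with made-up option prices (the published scheme handles non-uniform grids and a forward-dependent split more carefully):

```python
import math

def fair_variance_strike(strikes, otm_prices, dK, r, T):
    """Model-free fair variance from a strip of OTM options, in the
    Demeterfi-Derman-Kamal-Zou style, simplified to a uniform strike grid:
        K_var ~ (2 / T) * exp(r*T) * sum_i (dK / K_i**2) * Q(K_i)
    where Q(K_i) is the price of the OTM put (strike below the forward)
    or call (strike above it)."""
    return (2.0 / T) * math.exp(r * T) * sum(
        dK / k ** 2 * q for k, q in zip(strikes, otm_prices))

# Toy strip around a forward of 100; prices are illustrative, not market data.
strikes = [80, 90, 100, 110, 120]
prices = [0.5, 1.8, 4.0, 1.5, 0.4]
k_var = fair_variance_strike(strikes, prices, dK=10, r=0.02, T=0.5)
```

The result is an annualized variance (here around 0.04, i.e. roughly 20% implied volatility), which is the strike at which a variance swap has zero value at inception under this approximation.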
225

Unbalanced Analysis of Variance Comparing Standard and Proposed Approximation Techniques for Estimating the Variance Components

Pugsley, James P. 01 May 1984 (has links)
This paper considers the estimation of the components of variation for a two-factor unbalanced nested design and compares standard techniques with proposed approximation procedures. Current procedures are complicated and assume the unbalanced sample size to be fixed. This paper tests some simpler techniques, assuming sample sizes are random variables. Monte Carlo techniques were used to generate data for testing of these new procedures.
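The "sample sizes are random variables" setup can be sketched by generating random-effects data whose per-group counts are themselves drawn at random. A one-factor toy version for brevity (the paper's design is two-factor nested; all numeric settings below are illustrative, not the paper's):

```python
import random

def simulate_unbalanced_design(n_groups=5, var_a=2.0, var_e=1.0, seed=0):
    """Generate one unbalanced random-effects data set,
    y_ij = mu + a_i + e_ij, where a_i ~ N(0, var_a), e_ij ~ N(0, var_e),
    and the per-group sample size n_i is itself random (uniform on 2..8)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_groups):
        a_i = rng.gauss(0.0, var_a ** 0.5)   # group effect
        n_i = rng.randint(2, 8)              # random, unbalanced group size
        data.append([10.0 + a_i + rng.gauss(0.0, var_e ** 0.5)
                     for _ in range(n_i)])
    return data

groups = simulate_unbalanced_design()
```

Repeating this generator many times and applying a variance-component estimator to each replicate is the Monte Carlo pattern the paper uses to compare standard and approximate procedures.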
226

Mesospheric Gravity Wave Climatology and Variances Over the Andes Mountains

Pugmire, Jonathan Rich 01 December 2018 (has links)
Look up! Travelling over your head in the air are waves. They are present all the time in the atmosphere all over the Earth. Now imagine throwing a small rock in a pond and watching the ripples spread out around it. The same thing happens in the atmosphere except the rock is a thunderstorm, the wind blowing over a mountain, or another disturbance. As the wave (known as a gravity wave) travels upwards the thinning air allows the wave to grow larger and larger. Eventually the gravity wave gets too large – and like waves on the beach – it crashes causing whitewater or turbulence. If you are in the shallow water when the ocean wave crashes or breaks, you would feel the energy and momentum from the wave as it pushes or even knocks you over. In the atmosphere, when waves break they transfer their energy and momentum to the background wind changing its speed and even direction. This affects the circulation of the atmosphere. These atmospheric waves are not generally visible to the naked eye but by using special instruments we can observe their effects on the wind, temperature, density, and pressure of the atmosphere. This dissertation discusses the use of a specialized camera to study gravity waves as they travel through layers of the atmosphere 50 miles above the Andes Mountains and change the temperature. First, we introduce the layers of the atmosphere, the techniques used for observing these waves, and the mathematical theory and properties of these gravity waves. We then discuss the camera, its properties, and its unique feature of acquiring temperatures in the middle layer of the atmosphere. We introduce the observatory high in the Andes Mountains and why it was selected. We will look at the nightly fluctuations (or willy-nillyness) and long-term trends from August 2009 until December 2017. 
We compare measurements from the camera with similar measurements obtained from a satellite taken at the same altitude and measurements from the same camera when it was used at a different location, over Hawaii. Next, we measure the amount of change in the temperature and compare it to a nearby location on the other side of the Andes Mountains. Finally, we look for a specific type of gravity wave caused by wind blowing over the mountains called a mountain wave and perform statistics of those observed events over a period of six years. By understanding the changes in atmospheric properties caused by gravity waves we can learn more about their possible sources. By knowing their sources, we can better understand how much energy is being transported in the atmosphere, which in turn helps with better weather and climate models. Even now –all of this is going on over your head!
227

On Applications of Semiparametric Methods

Li, Zhijian 01 October 2018 (has links)
No description available.
228

Generalized Semiparametric Approach to the Analysis of Variance

Pathiravasan, Chathurangi Heshani Karunapala 01 August 2019 (has links) (PDF)
The one-way analysis of variance (ANOVA) is based on several assumptions and can be used to compare the means of two or more independent groups of a factor. To relax the normality assumption in one-way ANOVA, recent studies have considered an exponential distortion, or tilt, of a reference distribution. The reason for the exponential distortion was not investigated before; thus, the main objective of this study is to closely examine the reason behind it. In doing so, a new generalized semiparametric approach for one-way ANOVA is introduced. The proposed method compares not only the means but also the variances of distributions of any type. Simulation studies show that the proposed method performs more favorably than classical ANOVA. The method is demonstrated on meteorological radar data and credit-limit data. The asymptotic distribution of the proposed estimator was determined in order to test the hypothesis of equality of one-sample multivariate distributions. The power comparison for one-sample multivariate distributions reveals a significant power improvement in the proposed chi-square test compared to Hotelling's T-square test for non-normal distributions. A bootstrap paradigm is incorporated for testing the equidistribution of multiple samples. As far as power-comparison simulations for multiple large samples are concerned, the proposed test outperforms existing parametric, nonparametric and semiparametric approaches for non-normal distributions.
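A numerical illustration of why a tilt with both a linear and a quadratic term can capture variance differences, not just mean differences: tilting a standard normal reference by exp(b·x + c·x²) yields another normal, with mean b/(1 − 2c) and variance 1/(1 − 2c). This is a generic fact about exponential tilts, not the study's estimator; the quadrature below checks it directly:

```python
import math

def tilted_moments(b, c, grid_n=20001, lim=10.0):
    """Mean and variance of the density proportional to
    exp(b*x + c*x**2) * phi(x), phi the standard normal pdf,
    computed by simple Riemann-sum quadrature on [-lim, lim]."""
    h = 2 * lim / (grid_n - 1)
    xs = [-lim + i * h for i in range(grid_n)]
    w = [math.exp(b * x + c * x * x - 0.5 * x * x) for x in xs]
    z = sum(w)
    mean = sum(wi * x for wi, x in zip(w, xs)) / z
    var = sum(wi * (x - mean) ** 2 for wi, x in zip(w, xs)) / z
    return mean, var

# With b = 0.5, c = 0.25: predicted mean 0.5/0.5 = 1, variance 1/0.5 = 2.
m, v = tilted_moments(b=0.5, c=0.25)
```

Dropping the quadratic term (c = 0) shifts only the mean, which is why a purely linear tilt compares means alone while the quadratic tilt enables variance comparisons as well.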
229

The Robustness of O'Brien's r Transformation to Non-Normality

Gordon, Carol J. (Carol Jean) 08 1900 (has links)
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population were 1.414 or less and (b) an inflated Type I error rate when the index of kurtosis was three. Third, the r transformation should not be used if sample size is smaller than twelve. Fourth, the r transformation is more robust in all instances to non-normality, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known to have a non-normal distribution, and the size of the equal samples is at least twelve.
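The Monte Carlo design described above can be sketched in miniature for the Bartlett side of the comparison: generate many null (equal-variance, normal) data sets, compute Bartlett's chi-square statistic on each, and record how often it exceeds the nominal .05 critical value. Settings below are illustrative, far smaller than the study's 1,000 replications per condition:

```python
import math
import random
import statistics

def bartlett_stat(groups):
    """Bartlett's chi-square statistic for homogeneity of variances."""
    k = len(groups)
    ns = [len(g) for g in groups]
    N = sum(ns)
    vs = [statistics.variance(g) for g in groups]
    sp2 = sum((n - 1) * v for n, v in zip(ns, vs)) / (N - k)
    num = (N - k) * math.log(sp2) - sum(
        (n - 1) * math.log(v) for n, v in zip(ns, vs))
    c = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
    return num / c

def type1_rate(n=12, k=3, reps=500, crit=5.991, seed=0):
    """Estimated Type I error: fraction of null normal data sets where
    Bartlett's statistic exceeds the chi-square(k-1) .05 critical value
    (5.991 for k = 3)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        groups = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]
        if bartlett_stat(groups) > crit:
            hits += 1
    return hits / reps

rate = type1_rate()
```

Under normality the estimated rate sits near the nominal .05; rerunning the same loop with heavy-tailed or skewed generators reproduces the inflation of Type I error that motivates the r transformation.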
230

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

Hanzely, Filip 20 August 2020 (has links)
Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increase of the volume of data and the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to cope with these challenges. In this thesis, we deal with each of these sources of difficulty in a different way. To efficiently address the big data issue, we develop new methods which in each iteration examine a small random subset of the training data only. To handle the big model issue, we develop methods which in each iteration update a random subset of the model parameters only. Finally, to deal with ill-conditioned problems, we devise methods that incorporate either higher-order information or Nesterov’s acceleration/momentum. In all cases, randomness is viewed as a powerful algorithmic tool that we tune, both in theory and in experiments, to achieve the best results. Our algorithms have their primary application in training supervised machine learning models via regularized empirical risk minimization, which is the dominant paradigm for training such models. However, due to their generality, our methods can be applied in many other fields, including but not limited to data science, engineering, scientific computing, and statistics.
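The "examine a small random subset of the training data in each iteration" idea can be sketched in its simplest form as minibatch SGD on L2-regularized least squares, the kind of regularized empirical risk minimization named above. A toy sketch in plain Python; data, step size, and all other hyperparameters are illustrative:

```python
import random

def sgd_ridge(X, y, lam=0.1, lr=0.05, epochs=200, batch=2, seed=0):
    """Minibatch SGD for ridge regression: each step estimates the gradient
    of (1/n) * sum_i (w.x_i - y_i)^2 / 2 + (lam/2) * ||w||^2 from a small
    random subset of the data, then takes a gradient step."""
    rng = random.Random(seed)
    d = len(X[0])
    n = len(X)
    w = [0.0] * d
    for _ in range(epochs):
        idx = rng.sample(range(n), batch)          # random data subset
        grad = [lam * wj for wj in w]              # regularizer gradient
        for i in idx:
            err = sum(wj * xj for wj, xj in zip(w, X[i])) - y[i]
            for j in range(d):
                grad[j] += err * X[i][j] / batch   # minibatch loss gradient
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

# Toy data generated from y = 2*x1 - x2 (no noise); regularization pulls
# the recovered weights slightly toward zero.
X = [[1, 0], [0, 1], [1, 1], [2, 1], [1, 2], [2, 3]]
y = [2 * a - b for a, b in X]
w = sgd_ridge(X, y)
```

The thesis's other theme, updating a random subset of the parameters instead of the data, would change only which coordinates of `grad` are computed and applied in each step.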
