About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Population projection: a demographic procedural manual for planning practitioners

Hedeen, John Erik. January 1973 (has links)
Call number: LD2668 .P7 1973 H53
102

Can forward interest rates predict future spot rates in South Africa? A test of the pure expectations hypothesis and market efficiency in the South African government bond market

Loukakis, Andrea 04 July 2012 (has links)
The pure expectations hypothesis says that forward rates, implied off a yield curve, are unbiased predictors of future spot rates; according to the hypothesis, forward rates should therefore provide reliable forecasts of future spot rates. This study set out to test whether the theory behind the pure expectations hypothesis holds in a South African context. If it does hold, it can have an impact on real-world applications such as bond trading strategies and the setting of monetary policy. To test the theory, South African government bond data for the short end of the yield curve was used. Various regression tests were run, testing mainly for forward rate forecast accuracy, the relationship between forecast errors and changes in the spot rate, the presence of liquidity premiums, and market efficiency. The results indicated that forecast accuracy and the relationship between forecast errors and changes in the spot rate were contrary to the theory behind the pure expectations hypothesis. A liquidity premium was found to exist and there appeared to be weak-form market efficiency. These results led to the conclusion that there is very little evidence to support the theory behind the pure expectations hypothesis, mainly because of the presence of a liquidity premium. The pure expectations hypothesis does not seem to be of any significant use within real-world applications.
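For readers who want a concrete picture of the relation being tested, the sketch below (not taken from the thesis) shows how a one-period forward rate is implied from two spot rates under annual compounding, and a toy unbiasedness regression of realised spot rates on implied forwards. All data, variable names and the simple least-squares setup are illustrative assumptions; under the pure expectations hypothesis one would expect an intercept near zero and a slope near one.

```python
import numpy as np

# Implied one-period forward rate from two spot rates (annual compounding):
# (1 + s2)^2 = (1 + s1) * (1 + f12)  =>  f12 = (1 + s2)^2 / (1 + s1) - 1
def implied_forward(s1, s2):
    return (1.0 + s2) ** 2 / (1.0 + s1) - 1.0

# Toy unbiasedness regression: future spot on today's implied forward.
rng = np.random.default_rng(0)
s1 = 0.06 + 0.01 * rng.standard_normal(200)          # hypothetical 1-year spot rates
s2 = s1 + 0.002 + 0.005 * rng.standard_normal(200)   # hypothetical 2-year spot rates
fwd = implied_forward(s1, s2)
future_spot = fwd - 0.001 + 0.003 * rng.standard_normal(200)  # simulated realisations

X = np.column_stack([np.ones_like(fwd), fwd])
alpha, beta = np.linalg.lstsq(X, future_spot, rcond=None)[0]
print(f"alpha={alpha:.4f}, beta={beta:.4f}")
```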
103

Effective impact prediction: how accurate are predicted impacts in EIAs?

Molefe, Noella Madalo January 2017 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2017. / An Environmental Impact Assessment (EIA) is an instrument used to limit unexpected and negative effects of proposed developments on the environment. Much experience has been gained internationally, but the lack of follow-up after the EIA is prepared is one of the major weak spots of the assessments. It is therefore very important to follow up on development projects and observe their effects on the environment after the go-ahead has been given, so that EIA quality may be improved. There is often a significant difference between predicted impacts and actual impacts. Sometimes the predicted impacts do not occur, or new impacts which were not predicted in the Environmental Impact Assessment Reports (EIRs) arise. The aim of this study was to assess the accuracy of the impacts predicted in the EIRs compiled for three large-scale Eskom projects currently under execution, situated in the Mpumalanga, Limpopo and KwaZulu-Natal provinces, by comparing them to the actual impacts that occurred on site. The EIA follow-up process was used to assess the influence that the EIA may have on large-scale projects and ultimately to assess the effectiveness of the EIA process as a whole. A procedure developed by Wilson (1998) was used to follow up on the selected projects because the method allowed comparisons between the actual and predicted impacts to be made and discrepancies in the EIRs to be identified. Recent audit reports, aerial photographs and interviews were all used to identify actual impact occurrence. Of the impacts which actually occurred, 91% occurred as predicted (OP) and 9% occurred but were not predicted (ONP). The majority of impacts omitted from the reports were hydrological (27%) and air quality impacts (25%). These unexpected impacts were most probably overlooked because they are site-specific, temporary in nature and would not cause any significant environmental damage. Of all the impacts predicted in the reports, 85% were accurately predicted and 15% were not. The impacts inaccurately predicted were hydrological impacts (27%), flora and fauna impacts (7%) and 30% other impacts, which included soil pollution, fires and loss of agricultural potential. The inaccuracies could be a result of Environmental Impact Assessment Practitioners (EAPs) predicting a large number of impacts in the hope of lowering the risk of omitting impacts; however, sometimes the impacts predicted do not occur in reality. Overall it can be concluded that the impact prediction accuracy of the three EIRs compiled for Eskom exceeds that found in previous studies conducted nationally. Eskom EIRs are highly accurate with regard to impact prediction, with minor discrepancies which can easily be rectified. Key words: Environmental Impact Assessment (EIA), Environmental Impact Assessment Reports (EIRs), Environmental Impact Assessment Practitioners (EAPs), EIA follow-up, discrepancies. / LG2017
104

Virtual wind sensors: improving wind forecasting using big data analytics

Gray, Kevin Alan January 2016 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of requirements for the degree of Master of Science. Johannesburg, 2016. / Wind sensors provide very accurate measurements; however, it is not feasible to have a network of wind sensors large enough to provide these accurate readings everywhere. A “virtual” wind sensor uses existing weather forecasts, as well as historical weather station data, to predict the readings a physical wind sensor would provide. This study attempts to develop a method using Big Data Analytics to predict wind readings for use in “virtual” wind sensors. The study uses Random Forests and linear regression to estimate wind direction and magnitude from various transformations of a Digital Elevation Model, as well as data from the European Centre for Medium-Range Weather Forecasts. The model is evaluated on its accuracy against existing high-resolution weather station data, showing a slight improvement in the estimation of wind direction and magnitude over the forecast data. / LG2017
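As a rough illustration of the kind of supervised setup described (terrain features plus coarse forecast values regressed onto station observations), here is a minimal Python sketch using scikit-learn's RandomForestRegressor. The feature list, synthetic data and target definition are assumptions made for illustration and do not reproduce the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per "virtual sensor" location/time,
# mixing terrain descriptors with coarse forecast wind components
# interpolated to that location.
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(0, 2000, n),   # elevation (m)
    rng.uniform(0, 30, n),     # slope (degrees)
    rng.uniform(0, 360, n),    # aspect (degrees)
    rng.normal(0, 5, n),       # forecast u-wind (m/s)
    rng.normal(0, 5, n),       # forecast v-wind (m/s)
])
# Synthetic target: station-observed u-wind, loosely tied to the forecast value.
y = 0.9 * X[:, 3] + 0.001 * X[:, 0] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```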
105

Modelling and control of birth and death processes

Getz, Wayne Marcus 29 January 2015 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy, February 1976 / This thesis treats systems of ordinary differential equations that are extracted from the Kolmogorov forward equations of a class of Markov processes, known generally as birth and death processes. In particular we extract and analyze systems of equations which describe the dynamic behaviour of the second-order moments of the probability distribution of a population governed by birth and death processes. We show that these systems form an important class of stochastic population models and conclude that they are superior to those stochastic models derived by adding a noise term to a deterministic population model. We also show that these systems are readily used in population control studies, in which the cost of uncertainty in the population mean size is taken into account. The first chapter formulates the univariate linear birth and death process in its most general form. The probability distribution for the constant parameter case is obtained exactly, which allows one to state, as special cases, results on the simple birth and death, Poisson, Pascal, Polya, Palm and Arley processes. Control of a population, modelled by the linear birth and death process, is considered next. Particular attention is paid to system performance indices which take into account the cost associated with non-zero variance and the cost of improving initial estimates of the size of the population under control.
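For orientation, a standard textbook form of the Kolmogorov forward (master) equations for the linear birth and death process, together with the moment equations they induce, is sketched below. This is the commonly cited formulation rather than a quotation of the thesis's own derivation.

```latex
% Kolmogorov forward (master) equations for a linear birth--death process
% with per-capita birth rate \lambda and death rate \mu (with p_{-1}(t) \equiv 0):
\begin{align*}
  \frac{dp_n(t)}{dt} &= \lambda (n-1)\, p_{n-1}(t)
                      + \mu (n+1)\, p_{n+1}(t)
                      - (\lambda + \mu)\, n\, p_n(t), \qquad n \ge 0,\\[4pt]
  % ODEs for the first two moments that follow from the master equation:
  \frac{dm(t)}{dt} &= (\lambda - \mu)\, m(t),\\
  \frac{dV(t)}{dt} &= 2(\lambda - \mu)\, V(t) + (\lambda + \mu)\, m(t),
\end{align*}
% where m(t) and V(t) denote the mean and variance of the population size.
```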
106

Essays in forecasting financial markets with predictive analytics techniques

Alroomi, Azzam J. M. A. H. January 2018 (has links)
This PhD dissertation comprises four essays on forecasting financial markets with unsupervised predictive analytics techniques, most notably time series extrapolation methods and artificial neural networks. Key objectives of the research were reproducibility and replicability, which are fundamental principles in management science; as such, the implementation of all of the suggested algorithms has been fully automated and completely unsupervised in R. As with any predictive analytics exercise, computational intensiveness is a significant challenge and criterion of performance, and thus both forecasting accuracy and uncertainty as well as computational times are reported in all essays. Multiple horizons, multiple methods and benchmarks, and multiple metrics are employed as dictated by good practice in empirical forecasting exercises. The essays build on one another, each testing one more condition than the last: which method wins overall in a very extensive evaluation over five frequencies (yearly, quarterly, monthly, weekly and daily data) of 18 time series of the largest-capitalisation stocks in the FTSE 100 over the last 20 years (first essay); the impact of horizon in this exercise and how different horizons promote different winners (second essay); the impact of using uncertainty in the form of maximum-minimum values per period, while still forecasting the mean expected value for the next period (third essay); and the introduction of a second variable capturing other aspects of the behavioural nature of the financial environment – the trading volume – and an evaluation of whether this improves forecasting performance (fourth essay). The whole endeavour required the use of High Performance Computing Wales (HPC Wales) for a significant amount of time, incurring computational costs that ultimately paid off in terms of increased forecasting accuracy for the AI approaches; the whole exercise for one series can be repeated on a fast laptop device (i7 with 16 GB of memory). Overall, forecasting horses for data courses were once again shown to perform best, and the fact that no single method can win under all conditions was once more evidenced. The introduction of uncertainty (in terms of a range for every period), as well as volume as a second variable capturing environmental aspects, was beneficial with regard to forecasting accuracy and, overall, the research provided empirical evidence that predictive analytics approaches have a future in such a forecasting context. Given that this was a predictive analytics exercise, the focus was placed on forecasting levels (monetary values) rather than log-returns, and out-of-sample forecasting accuracy, rather than causality, was a primary objective; multiple regression models were therefore not considered as benchmarks. As in any empirical predictive analytics exercise, more time series, more artificial intelligence methods, more metrics and more data could be employed so as to allow for full generalization of the results, as long as all of these can be fully automated and forecast unsupervised in a freeware environment – in this thesis, R.
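To make the evaluation design concrete, the sketch below shows a generic rolling-origin comparison of two simple extrapolation benchmarks across several horizons using mean absolute error. The benchmarks, the synthetic price series and the error metric are illustrative assumptions and are not the methods or metrics evaluated in the dissertation.

```python
import numpy as np

def naive_forecast(history, horizon):
    # Repeat the last observed value over the horizon.
    return np.repeat(history[-1], horizon)

def mean_forecast(history, horizon):
    # Repeat the in-sample mean over the horizon.
    return np.repeat(history.mean(), horizon)

def rolling_origin_mae(series, method, horizon, min_train=50):
    # Walk the forecast origin forward one step at a time and
    # accumulate the absolute error at the given horizon.
    errors = []
    for origin in range(min_train, len(series) - horizon):
        forecast = method(series[:origin], horizon)
        errors.append(abs(series[origin + horizon - 1] - forecast[-1]))
    return float(np.mean(errors))

# Synthetic stand-in for a daily price series.
rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))

for h in (1, 5, 20):
    for name, method in [("naive", naive_forecast), ("mean", mean_forecast)]:
        print(f"h={h:2d}  {name:5s}  MAE={rolling_origin_mae(prices, method, h):.3f}")
```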
107

A study to determine the number of annual entry opportunities in production agriculture for Kansas

Hyle, Dwight E. January 2010 (has links)
Digitized by Kansas Correctional Industries
108

New possibilities for planning in medium-sized and large companies (original title: NOVÉ MOŽNOSTI PLÁNOVÁNÍ PRO STŘEDNÍ A VELKÉ SPOLEČNOSTI)

Přibyslavský, Jiří January 2007 (has links)
The aim, and at the same time the content, of this dissertation is the development of the discipline of planning and budgeting through the design and application of a method for planning support, grounded in the current theoretical and practical state of knowledge in the field of planning. By combining this theoretical and practical knowledge, an entirely new methodology for exploiting the possibilities of planning is created. The methodology was named 5M, after the five basic pillars of planning support. The 5M method has a place both in academia, thanks to its summary of today's theoretical foundations of planning extended with the author's own findings, and in real-world practice, where a comparable concept is still missing and where it could serve as a useful guide and an alternative for the reorganisation and redefinition of corporate planning and forecasting systems.
109

Essays in financial econometrics and forecasting

Smetanina, Ekaterina January 2018 (has links)
This dissertation deals with issues of forecasting in financial markets. The first part of my dissertation is motivated by the observation that most parametric volatility models follow Engle's (1982) original idea of modelling the volatility of asset returns as a function of only past information. However, current returns are potentially quite informative for forecasting, yet are excluded from these models. The first and second chapters of this dissertation address this question from both a theoretical and an empirical perspective. The second part of this dissertation deals with the important issue of forecast evaluation and selection in unstable environments, where it is known that the existing methodology can generate spurious and potentially misleading results. In my third chapter, I develop a new methodology for forecast evaluation and selection in such an environment. In the first chapter, $\textit{Real-time GARCH}$, I propose a new parametric volatility model, which retains the simple structure of GARCH models but models the volatility process as a mixture of past and current information, in the spirit of Stochastic Volatility (SV) models. This therefore provides a link between GARCH and SV models. I show that with this new model I am able to obtain better volatility forecasts than the standard GARCH-type models; improve the empirical fit of the data, especially in the tails of the distribution; and make the model faster in its adjustment to a new unconditional level of volatility. Further, the new model offers a much-needed framework for specification testing, as it nests the standard GARCH models. This chapter has been published in the $\textit{Journal of Financial Econometrics}$ (Smetanina E., 2017, Real-time GARCH, $\textit{Journal of Financial Econometrics}$, 15(4), 561-601). In chapter 2, $\textit{Asymptotic Inference for Real-time GARCH(1,1) model}$, I investigate the asymptotic properties of the Gaussian Quasi-Maximum-Likelihood estimator (QMLE) for the Real-time GARCH(1,1) model developed in the first chapter of this dissertation. I establish the ergodicity and $\beta$-mixing properties of the joint process for squared returns and the volatility process. I also prove strong consistency and asymptotic normality for the parameter vector at the usual $\sqrt{T}$ rate. Finally, I demonstrate how the developed theory can be viewed as a generalisation of the QMLE theory for the standard GARCH(1,1) model. In chapter 3, $\textit{Forecast Evaluation Tests in Unstable Environments}$, I develop a new methodology for forecast evaluation and selection in situations where the relative performance between models changes over time in an unknown fashion. Out-of-sample tests are widely used for evaluating models' forecasts in economics and finance. Underlying these tests is often the assumption of constant relative performance between competing models; however, this is invalid for many practical applications. In a world of changing relative performance, previous methodologies give rise to spurious and potentially misleading results, an example of which is the well-known ``splitting point problem''. I propose a new two-step methodology designed specifically for forecast evaluation in a world of changing relative performance. In the first step I estimate the time-varying mean and variance of the series of forecast loss differences, and in the second step I use these estimates to construct new rankings for models in a changing world.
I show that the new tests have high power against a variety of fixed and local alternatives.
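Since the Real-time GARCH model is described as nesting standard GARCH, a minimal sketch of the ordinary GARCH(1,1) variance recursion and the Gaussian quasi-log-likelihood used by QMLE may help orient readers. This is the standard model, not the Real-time variant proposed in the thesis, and the parameter values in the toy usage are assumptions.

```python
import numpy as np

def garch11_filter(returns, omega, alpha, beta):
    # Standard GARCH(1,1) recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()          # a common initialisation choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def gaussian_quasi_loglik(returns, sigma2):
    # Gaussian quasi-log-likelihood maximised by the QMLE (up to a constant).
    return -0.5 * np.sum(np.log(sigma2) + returns ** 2 / sigma2)

# Toy usage with simulated returns and plausible parameter values.
rng = np.random.default_rng(7)
r = 0.01 * rng.standard_normal(1000)
s2 = garch11_filter(r, omega=1e-6, alpha=0.05, beta=0.90)
print("quasi-log-likelihood:", gaussian_quasi_loglik(r, s2))
```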
110

Analysts' forecasts and future stock return volatility: a firm-level analysis for NYSE Firms

Shan, Yaowen, School of Banking & Finance, UNSW, January 2006 (has links)
This study demonstrates that financial analysts significantly affect short-term stock prices by examining how non-accounting information, particularly that contained in analysts' forecasts, contributes to the fluctuation of future stock returns. If current non-accounting information about future earnings is more unfavourable or more volatile, a larger shift in the current stock return can be observed. The empirical evidence strongly supports these theoretical predictions, which stem from the combination of the accounting version of the Campbell-Shiller model (Campbell and Shiller (1988) and Vuolteenaho (2002)) and Ohlson's (1995) information dynamics. In addition, the results are also valid for measures of both systematic and idiosyncratic volatilities.
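For context on the framework cited, a commonly quoted form of the log-linear decomposition of unexpected returns in the Campbell-Shiller tradition is reproduced below. This is the standard reference form, not necessarily the exact specification used in the study.

```latex
% Log-linear decomposition of the unexpected stock return (standard form):
\[
  r_{t+1} - \mathrm{E}_t\, r_{t+1}
  \;=\;
  (\mathrm{E}_{t+1} - \mathrm{E}_t) \sum_{j=0}^{\infty} \rho^{j}\, \Delta d_{t+1+j}
  \;-\;
  (\mathrm{E}_{t+1} - \mathrm{E}_t) \sum_{j=1}^{\infty} \rho^{j}\, r_{t+1+j},
\]
% i.e. unexpected return = cash-flow news minus discount-rate news, where
% \rho < 1 is the log-linearisation constant and \Delta d denotes log dividend growth.
```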
