671 |
A share trading strategy : the JSE using 50 and 200 day moving averages
Burlo, Adrian Vincent, 27 August 2012
M.B.A. / The aim of this dissertation is to determine whether there is evidence supporting a 50-day and 200-day moving average share trading strategy for selecting, buying and selling shares quoted on the Johannesburg Securities Exchange (JSE) Main Board, and whether such a strategy is appropriate for generating share trading profits in excess of the return of the JSE Overall Index. The objectives are: to evaluate fundamental analysis in respect of the quality of information (mainly at company level) available to investors as the basis on which decisions to buy and sell shares are made; to evaluate previous research on technical analysis with respect to the use and application of moving averages as a trading strategy for making share selections as well as buy and sell decisions; and to analyse historic price data on individual, randomly selected shares from the total population of all Main Board listed shares quoted on the Johannesburg Securities Exchange.
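As an illustration of the kind of decision rule at issue here, the following is a minimal sketch of a 50- and 200-day moving average crossover signal compared with buy-and-hold; the simulated price series, the long/flat reading of the crossover, and the next-day execution convention are assumptions for illustration, not the dissertation's own trading rules or data.

```python
import numpy as np
import pandas as pd

def moving_average_signal(prices: pd.Series, short_window: int = 50, long_window: int = 200) -> pd.DataFrame:
    """Generate long/flat signals from a 50/200-day moving average crossover.

    A long position is held while the 50-day average is above the
    200-day average; otherwise the position is flat (no short selling).
    """
    out = pd.DataFrame({"price": prices})
    out["ma_short"] = prices.rolling(short_window).mean()
    out["ma_long"] = prices.rolling(long_window).mean()
    out["signal"] = (out["ma_short"] > out["ma_long"]).astype(int)
    # Apply yesterday's signal to today's return to avoid look-ahead bias.
    out["strategy_return"] = out["signal"].shift(1) * prices.pct_change()
    out["buy_hold_return"] = prices.pct_change()
    return out

# Example with simulated daily closing prices (random walk with drift).
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 1500))))
result = moving_average_signal(prices)
print(result[["strategy_return", "buy_hold_return"]].sum())
```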
|
672 |
Regularized Numerical Algorithms For Stable Parameter Estimation In Epidemiology And Implications For Forecasting
DeCamp, Linda, 08 August 2017
When an emerging outbreak occurs, stable parameter estimation and reliable projections of future incidence using limited (early) data can play an important role in the optimal allocation of resources and in the development of effective public health intervention programs. However, the inverse parameter identification problem is ill-posed and cannot be solved with classical tools of computational mathematics. In this dissertation, various regularization methods are employed to incorporate stability in parameter estimation algorithms. The recovered parameters are then used to generate future incidence curves and to estimate the carrying capacity of the epidemic and the turning point of the outbreak.
For the nonlinear generalized Richards model of disease progression, we develop a novel iteratively regularized Gauss-Newton-type algorithm to reconstruct major characteristics of an emerging infection. This problem-oriented numerical scheme takes full advantage of a priori information available for our specific application in order to stabilize the iterative process. Another important aspect of our research is a reliable estimation of time-dependent transmission rate in a compartmental SEIR disease model. To that end, the ODE-constrained minimization problem is reduced to a linear Volterra integral equation of the first kind, and a combination of regularizing filters is employed to approximate the unknown transmission parameter in a stable manner. To justify our theoretical findings, extensive numerical experiments have been conducted with both synthetic and real data for various infectious diseases.
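As a rough illustration of the second part of this approach, the sketch below discretizes a generic linear Volterra integral equation of the first kind and stabilizes its inversion with Tikhonov regularization, one simple choice of regularizing filter; the kernel, grid, noise level, and regularization parameter are assumptions for illustration and do not reproduce the dissertation's specific SEIR reduction or filter combination.

```python
import numpy as np

def solve_volterra_first_kind(kernel, g, t, lam=1e-2):
    """Approximate f in  int_0^t kernel(t, s) f(s) ds = g(t)  on a uniform grid.

    The integral is discretized with a simple rectangle rule, giving a
    lower-triangular system A f = g.  Because the inversion is ill-posed,
    the solution is stabilized with Tikhonov regularization, i.e.
    f = argmin ||A f - g||^2 + lam ||f||^2.
    """
    n = len(t)
    h = t[1] - t[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            A[i, j] = h * kernel(t[i], t[j])
    # Regularized normal equations (a simple spectral filter).
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)

# Synthetic test: recover f(s) = sin(s) from its noisy running integral (kernel = 1).
t = np.linspace(0.01, 5.0, 200)
g_exact = 1.0 - np.cos(t)                      # int_0^t sin(s) ds
g_noisy = g_exact + 0.01 * np.random.default_rng(1).normal(size=t.size)
f_est = solve_volterra_first_kind(lambda ti, s: 1.0, g_noisy, t, lam=1e-2)
print("max reconstruction error:", np.max(np.abs(f_est - np.sin(t))))
```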
|
673 |
The consolidation of forecasts with regression models
Venter, Daniel Jacobus Lodewyk, January 2014
The primary objective of this study was to develop a dashboard for the consolidation of multiple forecasts utilising a range of multiple linear regression models. The term dashboard describes the forecast consolidation application, which provides the required functionality via a graphical user interface structured as a series of interlinked screens. Microsoft Excel was used as the platform to develop the dashboard, named ConFoRM (an acronym for Consolidate Forecasts with Regression Models). The major steps of the consolidation process incorporated in ConFoRM are:
1. Input historical data.
2. Select appropriate analysis and holdout samples.
3. Specify regression models to be considered as candidates for the final model to be used for the consolidation of forecasts.
4. Perform regression analysis and holdout analysis for each of the models specified in step 3.
5. Perform post-holdout testing to assess the performance of the model with the best holdout validation results on out-of-sample data.
6. Consolidate forecasts.
Two data transformations are available: the removal of growth and time-period effects from the time series, and a translation of the time series by subtracting the mean of all the forecasts for data record i from the variable being predicted and its related forecasts for that record. The pre-defined ordinary least squares linear regression models (LRMs) are:
a. a set of k simple LRMs, one for each of the k forecasts;
b. a multiple LRM that includes all the forecasts;
c. a multiple LRM that includes all the forecasts and as many of the first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors provided by the dashboard, with the interactions included in the model being those with the highest individual correlation with the variable being predicted;
d. a multiple LRM that includes as many of the forecasts and first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors provided by the dashboard, with the forecasts and interactions included in the model being those with the highest individual correlation with the variable being predicted;
e. a simple LRM with the predictor variable being the mean of the forecasts;
f. a set of simple LRMs with the predictor variable in each case being the weighted mean of the forecasts, with different formulas for the weights.
Also available is an ad hoc user-specified model in terms of the forecasts and the predictor variables generated by the dashboard for the pre-defined models. Provision is made in the regression analysis for both forward entry and backward removal regression. Weighted least squares (WLS) regression can optionally be performed based on the age of the forecasts, with smaller weights for older forecasts.
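As a language-neutral illustration of the core regression step (roughly model b above, a multiple LRM on all the forecasts), the following minimal sketch fits an ordinary least squares consolidation model on an analysis sample and scores it on a holdout sample; ConFoRM itself is an Excel dashboard, and the data and sample split here are assumptions for illustration only.

```python
import numpy as np

def consolidate_forecasts(F, y):
    """Fit a multiple linear regression of the target on k input forecasts.

    F : (n, k) array of the k individual forecasts for n data records.
    y : (n,) array of realized values for the analysis sample.
    Returns the intercept and weights of y ≈ b0 + b1*f1 + ... + bk*fk (OLS).
    """
    X = np.column_stack([np.ones(len(y)), F])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, F_new):
    X = np.column_stack([np.ones(F_new.shape[0]), F_new])
    return X @ coef

# Toy example: three forecasters with different biases and noise levels.
rng = np.random.default_rng(2)
y = rng.normal(100, 10, size=60)
F = np.column_stack([y + rng.normal(5, 3, 60),    # biased high
                     y + rng.normal(-2, 5, 60),   # biased low, noisier
                     0.9 * y + rng.normal(0, 4, 60)])
coef = consolidate_forecasts(F[:48], y[:48])      # analysis sample
holdout_pred = predict(coef, F[48:])              # holdout sample
print("holdout MAE:", np.mean(np.abs(holdout_pred - y[48:])))
```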
|
674 |
Shopping centre location analysis : sales volume estimating
Currie, David Gordon, January 1973
This thesis is concerned with that part of retail location analysis which involves estimating the sales volume potential for a proposed shopping centre. It examines the practised methods and available models employed in the prediction of potential sales volumes.
A survey of the literature dealing with techniques of sales volume estimation revealed that the theory behind sales volume estimating was somewhat disjointed, since the models and methods available emphasized different approaches and factors, and ignored or inadequately accounted for others. Furthermore, it was apparent that predictive accuracy was far from satisfactory with the presently available tools of analysis. It was felt that the problem revolved around the assumptions and factors inherent or absent in each model or method.
Since estimating a potential sales volume for a proposed centre involves estimating the number of consumers who will patronize that centre, it becomes clear that an accurate estimate of expected consumer patronage necessitates an understanding of the factors and influences which motivate consumers in their choice of a particular retail outlet in which to purchase desired merchandise. It was felt that by examining these determinants of consumer behaviour, some light could be shed on those factors which are inadequately recognized or represented in the various methods and models examined in this thesis.
This thesis, then, first examines the validity and limitations of the many arguments, assumptions, concepts, and factors considered to be important in a discussion of the determinants of consumer patronage behaviour. It then examines the various models and methods in order to a) determine how adequately they recognize and incorporate these arguments, assumptions, concepts, and factors in their formulae or procedures, and b) evaluate their ability to produce theoretically sound, consistent predictions.
The models and methods are found to be largely incapable of accurate and consistent predictions owing to their oversimplified and imprecise construction. Inadequately represented consumer patronage factors are presented which, if they were more explicitly recognized, would tend to improve the predictive capabilities of the models and methods. These factors are shown to be additional factors of attraction and resistance which influence the consumer in his choice of a shopping destination.
The main conclusion presented is that if these factors were more precisely defined and quantified, and more explicitly recognized in the formulae, either through restructuring the parameters or through expanding the number of variables in the formulae, the descriptive and predictive capabilities of these models and methods might be improved with a corresponding decrease in the necessity for subjective judgment. / Business, Sauder School of / Graduate
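As a concrete illustration of the class of model under discussion, the sketch below implements a Huff-type gravity formulation in which patronage probability rises with a centre's attraction and falls with travel resistance; the attraction measure, exponents, and figures are assumptions for illustration and are not presented as the thesis's own formulation.

```python
def huff_patronage_probabilities(attraction, travel_time, alpha=1.0, beta=2.0):
    """Huff-type probability that a consumer zone patronizes each centre.

    P_j = (A_j**alpha / T_j**beta) / sum_k (A_k**alpha / T_k**beta),
    where A_j is a measure of attraction (e.g. retail floor area) and
    T_j is the travel resistance (e.g. travel time) to centre j.
    """
    utilities = [(a ** alpha) / (t ** beta) for a, t in zip(attraction, travel_time)]
    total = sum(utilities)
    return [u / total for u in utilities]

# Three competing centres as seen from one consumer zone (illustrative numbers).
floor_area = [40000, 25000, 60000]   # square feet
travel_min = [10, 5, 20]             # minutes
probs = huff_patronage_probabilities(floor_area, travel_min)
zone_budget = 2_000_000              # annual retail expenditure of the zone
expected_sales = [p * zone_budget for p in probs]
print(probs, expected_sales)
```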
|
675 |
Canada 1980: methodology, trends, and forecast
McCombs, Arnold Martin, January 1967
The basic objective of this thesis is to identify some of the basic trends tending to shape the Canadian economy. The procedure followed was to examine economic theory and previous forecasting studies to determine methodological principles, and to apply these principles in estimating the possible future course of the Canadian economy between 1965 and 1980.

No comprehensive economic theory appears to have been developed that explains, and therefore forms a complete basis for predicting, the economic growth of a nation. In an effort to make economic theory manageable, many variables affecting economic growth and development, such as those of sociology, tend to be ignored in quantitative terms. Beyond these unquantifiable variables, it is not known how many non-economic factors affect economic growth. It would seem to be these many unknown factors that tend to cause errors in the results of long range economic forecasts.

Economic growth, defined as the expansion of a nation's capacity to produce, is in an already advanced industrial economy heavily dependent on the quantity and quality of the nation's labour force, natural resources, real capital, and the technological level of the society. These basic determinants are tempered by the sociological, institutional, and consumption trends or factors within the economy.

Although many articles have been written on various aspects of economic growth, the present state of knowledge does not appear to be appreciably past the theorizing stage. As no complete theory of economic growth and development appears to exist, the long range economic forecaster may gain some insights from economic theory but must depend very much on his own resources to make various forecasts.

The most common method of determining output appears to require a population forecast, from which a labour force estimate is made; then, with assumptions regarding per-man productivity, an estimate of total output can be made.
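As a worked illustration of this population-to-labour-force-to-output chain, the sketch below compounds assumed growth rates for the labour force and per-man productivity forward fifteen years; all figures are assumptions chosen only to show the arithmetic, not the thesis's estimates.

```python
# Population -> labour force -> productivity -> total output, with simple
# compound growth projections (illustrative figures only).

def project(value, annual_growth_rate, years):
    """Compound a value forward at a constant annual growth rate."""
    return value * (1 + annual_growth_rate) ** years

population_1965 = 19_600_000     # assumed base-year population
participation_rate = 0.38        # assumed labour force share of population
output_per_worker = 9_000        # assumed annual output per worker (dollars)

labour_force_1965 = population_1965 * participation_rate
output_1965 = labour_force_1965 * output_per_worker

# Project output to 1980 under assumed growth in the labour force
# and in per-worker productivity.
labour_force_1980 = project(labour_force_1965, 0.025, 15)
productivity_1980 = project(output_per_worker, 0.02, 15)
output_1980 = labour_force_1980 * productivity_1980

implied_growth = (output_1980 / output_1965) ** (1 / 15) - 1
print(f"implied annual real output growth: {implied_growth:.1%}")
```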
Sophisticated population and labour force forecasts tend to divide the population into age and sex specific cohorts and then analyze the trends within each of these cohorts. The methodology used in this thesis was based on broad estimates of various trends per thousand population. Due mainly to an expected high birth rate in Canada, the population is anticipated to increase at about 3.8 percent per year to about 25,800,000 by 1980. Of this figure, about 10,000,000 are expected to make up the labour force. The two significant trends expected in the labour force are a large influx of young people and a greater participation of women.

In this thesis, the total output was separated into agriculture, government and public administration, and commercial non-agricultural sectors. This enabled the trends in the work force, productivity, and output in each sector to be examined.

The significant trends expected in output are: in agriculture, an increase in per-man productivity but a declining labour force; in the government and public administration sector, rather constant productivity per man but an increase in the total labour force; and in the commercial non-agricultural sector, an increase in both the labour force and per-man productivity. The real increase in output of the combined sectors is estimated to approximate 4.6 percent per year between 1965 and 1980 for the Canadian economy.

With the total output estimated, an estimate was made of the division of the output between capital accumulation, government expenditures, consumer expenditures, imports and exports. It was found that the division of output between these broad sectors tended to be rather stable in relation to the gross national product. Because of this stability, future estimates of the broad spending categories were based mainly on simple trend projections. From the historical spending patterns, it would appear difficult to justify any drastic changes in the basic spending patterns. / Business, Sauder School of / Graduate
|
676 |
An empirical test of the theory of random walks in stock market prices : the moving average strategy
Yip, Garry Craig, January 1971
This study investigates the independence assumption of the theory of random walks in stock market prices through the simulation of the moving average strategy. In the process of doing so, three related questions are examined: (1) Does the past relative volatility of a stock furnish a useful indication of its future behavior? (2) Is the performance of the decision rule improved by applying it to those securities which are likely to be highly volatile? (3) Does positive dependence in successive monthly price changes exist?
The purpose of Test No. 1 was to gauge the tendency for a stock's relative volatility to remain constant over two adjacent intervals of time. As measured by the coefficient of variation, the volatility of each of the 200 securities was computed over the 1936 to 1945 and 1946 to 1955 decades. In order to evaluate the strength of the relationship between these paired observations, a rank correlation analysis was performed. The results indicated a substantial difference in relative volatility for each security over the two ten-year periods.
In Test No. 2 a different experimental design was employed to determine whether the relative volatility of a stock tended to remain within a definite range over time. According to their volatility in the 1936 to 1945 period, the 200 securities were divided into ten groups. Portfolio No. 1 contained the twenty most volatile securities, while Portfolio No. 2 consisted of the next twenty most volatile, and so on. An average coefficient of variation was calculated for each group over the periods 1936 to 1945 and 1946 to 1955. The rank correlation analysis on these ten paired observations revealed that the most volatile securities, as a group, tended to remain the most volatile.
Test No. 3 consisted of the application of the moving average strategy (for long positions only) to forty series of month-end prices covering the interval, 1956 to 1966. These securities had demonstrated a high relative volatility over the previous decade and, on the basis of the findings reported in Test No. 2, it was forecasted that they would be the most volatile of the sample of 200 in the period under investigation.
Four different moving averages ranging from three to six months, and thirteen different thresholds ranging from 2 to 50 per cent, were simulated. The results of the simulation showed the moving average strategy to be much inferior to the two buy-and-hold models. Every threshold, regardless of the length of the moving average, yielded a negative return. In addition, the losses per threshold were spread throughout the majority of stocks. Altogether, therefore, considerable evidence was found in favour of the random walk theory of stock price behavior. / Business, Sauder School of / Graduate
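For illustration, the following sketch implements a long-only moving average rule with a percentage threshold filter of the kind simulated in Test No. 3 and compares it with buy-and-hold on simulated month-end prices; the exact execution and reinvestment conventions of the study are not reproduced, and the data are artificial.

```python
import numpy as np
import pandas as pd

def filtered_ma_rule(prices: pd.Series, window: int = 5, threshold: float = 0.05) -> float:
    """Long-only moving average rule with a percentage threshold filter.

    Buy when the price closes more than `threshold` above its trailing
    moving average; sell (move to cash) when it closes more than
    `threshold` below it.  Returns the total return of the rule.
    """
    ma = prices.rolling(window).mean()
    position = 0          # 0 = cash, 1 = long
    wealth = 1.0
    entry = None
    for price, avg in zip(prices.iloc[window:], ma.iloc[window:]):
        if position == 0 and price > avg * (1 + threshold):
            position, entry = 1, price
        elif position == 1 and price < avg * (1 - threshold):
            wealth *= price / entry
            position = 0
    if position == 1:                       # close any open position at the end
        wealth *= prices.iloc[-1] / entry
    return wealth - 1.0

# Simulated month-end prices for one security (random walk, for illustration).
rng = np.random.default_rng(3)
monthly = pd.Series(50 * np.exp(np.cumsum(rng.normal(0.004, 0.08, 132))))
print("rule return:", filtered_ma_rule(monthly, window=5, threshold=0.05))
print("buy-and-hold return:", monthly.iloc[-1] / monthly.iloc[0] - 1.0)
```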
|
677 |
Short-Term Irradiance Forecasting Using an Irradiance Monitoring Network, Satellite Imagery, and Data Assimilation
Lorenzo, Antonio Tomas, January 2017
Solar and other renewable power sources are becoming an integral part of the electrical grid in the United States. In the Southwest US, solar and wind power plants already serve over 20% of the electrical load during the daytime on sunny days in the Spring. While solar power produces fewer emissions and has a lower carbon footprint than burning fossil fuels, solar power is only generated during the daytime and it is variable due to clouds blocking the sun. Electric utilities that are required to maintain a reliable electricity supply benefit from anticipating the schedule of power output from solar power plants. Forecasting the irradiance reaching the ground, the primary input to a solar power forecast, can help utilities understand and respond to the variability. This dissertation will explore techniques to forecast irradiance that make use of data from a network of sensors deployed throughout Tucson, AZ. The design and deployment of inexpensive sensors used in the network will be described. We will present a forecasting technique that uses data from the sensor network and outperforms a reference persistence forecast for one minute to two hours in the future. We will analyze the errors of this technique in depth and suggest ways to interpret these errors. Then, we will describe a data assimilation technique, optimal interpolation, that combines estimates of irradiance derived from satellite images with data from the sensor network to improve the satellite estimates. These improved satellite estimates form the base of future work that will explore generating forecasts while continuously assimilating new data.
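For illustration, the sketch below shows a generic optimal interpolation analysis step of the kind referred to here, blending a background field with point observations through the standard gain formula; the covariance models, observation operator, and numbers are assumptions for illustration rather than the dissertation's configuration.

```python
import numpy as np

def optimal_interpolation(background, obs, H, B, R):
    """One optimal interpolation (OI) analysis step.

    background : (n,) prior field (e.g. satellite-derived irradiance).
    obs        : (m,) ground-sensor observations.
    H          : (m, n) operator mapping the field to observation locations.
    B, R       : background and observation error covariance matrices.
    Returns the analysis  x_a = x_b + K (y - H x_b),
    with gain              K  = B H^T (H B H^T + R)^{-1}.
    """
    innovation = obs - H @ background
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return background + K @ innovation

# Tiny illustration: a 4-point irradiance field with sensors at points 0 and 2.
x_b = np.array([600.0, 620.0, 580.0, 640.0])     # satellite estimate (W/m^2)
y = np.array([650.0, 560.0])                     # sensor measurements (W/m^2)
H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])
B = 400.0 * np.exp(-np.abs(np.subtract.outer(range(4), range(4))) / 2.0)  # correlated background errors
R = 25.0 * np.eye(2)                              # uncorrelated sensor errors
print(optimal_interpolation(x_b, y, H, B, R))
```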
|
678 |
Adaptive learning for applied macroeconomics
Galimberti, Jaqueson Kingeski, January 2013
The literature on bounded rationality and learning in macroeconomics has often used recursive algorithms to depict the evolution of agents' beliefs over time. In this thesis we assess this practice from an applied perspective, focusing on the use of such algorithms for the computation of forecasts of macroeconomic variables. Our analysis develops around three issues we find to have been previously neglected in the literature: (i) the initialization of the learning algorithms; (ii) the determination and calibration of the learning gains, which are key parameters of the algorithms' specifications; and (iii) the choice of a representative learning mechanism. In order to approach these issues we establish an estimation framework under which we unify the two main algorithms considered in this literature, namely the least squares and the stochastic gradient algorithms. We then propose an evaluation framework that mimics the real-time process of expectation formation through learning-to-forecast exercises. To analyze the quality of the forecasts associated with the learning approach, we evaluate their forecasting accuracy and their resemblance to surveys, the latter taken as a proxy for agents' expectations. Although we take these two criteria as mutually desirable, it is not clear whether they are compatible with each other: whilst forecasting accuracy represents the goal of optimizing agents, resemblance to surveys is indicative of actual agents' behavior. We carry out these exercises using real-time quarterly data on US inflation and output growth covering a broad post-WWII period. Our main contribution is to show that a proper assessment of the adaptive learning approach requires going beyond the previous views in the literature on these issues. For the initialization of the learning algorithms, we argue that the initial estimates need to be coherent with the ongoing learning process that was already in place at the beginning of our sample of data. We find that the initialization methods previously proposed in the literature are vulnerable to this requirement, and we propose a new smoothing-based method that is not prone to this criticism. Regarding the learning gains, we distinguish between two possible rationales for their determination: as a choice of agents, or as a primitive parameter of agents' learning-to-forecast behavior. Our results provide strong evidence in favor of the gain-as-a-primitive approach, hence favoring the use of survey data for their calibration. On the third issue, the choice of a representative algorithm, we challenge the view that learning should be represented by only one of the above algorithms; on the basis of our two evaluation criteria, our results suggest that using a single algorithm represents a misspecification. That motivates us to propose the use of hybrid forms of the LS and SG algorithms, for which we find favorable evidence as representatives of how agents learn. Finally, our analysis concludes with an optimistic assessment of the plausibility of adaptive learning, though conditioned on an appropriate treatment of the above issues. We hope our results provide some guidance in that respect.
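For illustration, the sketch below writes the two algorithms in their constant-gain recursive forms and applies them to a toy AR(1) law of motion; the gain value, initial beliefs, and simulated data are assumptions for illustration only (and the thesis's point is precisely that such initialization and gain choices matter).

```python
import numpy as np

def constant_gain_ls(X, y, gain, theta0, R0):
    """Constant-gain recursive least squares (LS) updating of beliefs.

    R_t     = R_{t-1} + gain * (x_t x_t' - R_{t-1})
    theta_t = theta_{t-1} + gain * R_t^{-1} x_t (y_t - x_t' theta_{t-1})
    """
    theta, R = theta0.copy(), R0.copy()
    path = []
    for x_t, y_t in zip(X, y):
        R = R + gain * (np.outer(x_t, x_t) - R)
        theta = theta + gain * np.linalg.solve(R, x_t) * (y_t - x_t @ theta)
        path.append(theta.copy())
    return np.array(path)

def constant_gain_sg(X, y, gain, theta0):
    """Constant-gain stochastic gradient (SG) updating of beliefs.

    theta_t = theta_{t-1} + gain * x_t (y_t - x_t' theta_{t-1})
    """
    theta = theta0.copy()
    path = []
    for x_t, y_t in zip(X, y):
        theta = theta + gain * x_t * (y_t - x_t @ theta)
        path.append(theta.copy())
    return np.array(path)

# Agents learning the coefficients of an AR(1) law of motion with drift.
rng = np.random.default_rng(4)
T, true_theta = 300, np.array([0.5, 0.8])
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_theta[0] + true_theta[1] * y[t - 1] + rng.normal(0, 0.3)
X = np.column_stack([np.ones(T - 1), y[:-1]])
ls_path = constant_gain_ls(X, y[1:], 0.02, np.zeros(2), np.eye(2))
sg_path = constant_gain_sg(X, y[1:], 0.02, np.zeros(2))
print("final LS estimate:", ls_path[-1], " final SG estimate:", sg_path[-1])
```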
|
679 |
Housing demand : an empirical intertemporal model
Schwann, Gregory Michael, January 1987
I develop an empirical model of housing demand which is based as closely as possible on a theoretical intertemporal model of consumer demand. In the empirical model, intertemporal behavior by households is incorporated in two ways. First, a household's expected length of occupancy in a dwelling is a parameter in the model; thus, households are able to choose when to move. Second, a household's decision to move and its choice of dwelling are based on the same intertemporal utility function. The parameters of the utility function are estimated using a switching regression model in which the decision to move and the choice of housing quantity are jointly determined. The model has four other features: (1) a characteristics approach to housing demand is taken, (2) the transaction costs of changing dwellings are incorporated in the model, (3) sample data on household mortgages are employed in computing the user cost of owned dwellings, and (4) demographic variables are incorporated systematically into the household utility function.

Rosen's two-step procedure is used to estimate the model. Cragg's technique for estimating regressions in the presence of heteroscedasticity of unknown form is used to estimate the hedonic regressions in step one of the procedure. In the second step, the switching regression model is estimated by maximum likelihood.

A micro data set of 2,513 Canadian households is used in the estimations.
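As an illustrative sketch of the first step only, the code below fits a log-linear hedonic regression by ordinary least squares and derives implicit characteristic prices; it deliberately omits Cragg's heteroscedasticity correction and the second-step switching regression, and the characteristics and data are assumptions for illustration.

```python
import numpy as np

def hedonic_step_one(characteristics, log_prices):
    """Step one of a Rosen-type two-step procedure: an OLS log-linear
    hedonic regression of dwelling price on housing characteristics."""
    X = np.column_stack([np.ones(len(log_prices)), characteristics])
    beta, *_ = np.linalg.lstsq(X, log_prices, rcond=None)
    return beta

def implicit_prices(beta, price):
    """Marginal implicit prices dP/dz_j = beta_j * P under the log-linear form."""
    return beta[1:] * price

# Toy data: log price driven by floor area (in 100 m^2) and number of rooms.
rng = np.random.default_rng(5)
area = rng.uniform(0.5, 2.5, 400)
rooms = rng.integers(2, 7, 400).astype(float)
log_price = 11.0 + 0.45 * area + 0.08 * rooms + rng.normal(0, 0.1, 400)

beta = hedonic_step_one(np.column_stack([area, rooms]), log_price)
print("hedonic coefficients (const, area, rooms):", beta)
print("implicit prices at a $250,000 dwelling:", implicit_prices(beta, 250_000))
```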
The stage one hedonic regressions indicate that urban housing markets are not in long run equilibrium, that the errors of the hedonic regressions are heteroscedastic, and that simple functional forms for hedonic regressions may perform as well as more complex forms. The stage two estimates establish that a tight link between the theoretical and empirical models of housing demand produces a better model. My results show that conventional static models of housing demand are misspecified. They indicate that households have vastly different planned lengths of dwelling occupancy. They also indicate that housing demand is determined to a great extent by demographic factors. / Arts, Faculty of / Vancouver School of Economics / Graduate
|
680 |
Psychopathy and recidivism in adolescence: a ten-year retrospective follow-up
Gretton, Heather Margaret, 11 1900
Violent and aggressive behavior is a subset of antisocial behavior that is of particular concern to the criminal justice system and to the general public. A challenge facing mental health professionals and the criminal justice system is to assess, with a reasonable degree of accuracy, the likelihood that a young offender will recidivate and to arrange appropriate interventions. Because of its psychometric properties and high predictive validity, the Hare Psychopathy Checklist-Revised (PCL-R) is being incorporated into risk assessment batteries for use with adults. The purpose of the study was to extend the risk paradigm to adolescent offenders, investigating the predictive validity of the Psychopathy Checklist: Youth Version (PCL:YV) from adolescence to adulthood. Subjects were 157 admissions, ages 12-18, referred to Youth Court Services for psychological or psychiatric assessment. Archival data were used to complete the PCL:YV retrospectively and to code criminal history and demographic data on each of the subjects. Follow-up criminal record data were collected, with an average follow-up time of ten years. Over the follow-up period psychopaths demonstrated a greater risk for committing violent offences than nonpsychopaths. They committed violent offences at a higher rate, earlier following their release from custody, and were more likely to escape from custody than nonpsychopaths. Further, results indicate that PCL:YV score, a discrepancy between performance and verbal intellectual functioning (P > V Index), and history of self-harm contributed significantly to the prediction of violent outcome, over and above the contribution of a combination of criminal-history and demographic variables. Finally, background and demographic characteristics were compared between violent and nonviolent psychopaths. Findings are discussed in the context of current conceptualizations of psychopathy and adolescent antisocial behavior. / Arts, Faculty of / Psychology, Department of / Graduate
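As a purely illustrative sketch of an incremental-validity check of this kind, the code below compares the in-sample discrimination of a logistic model with and without a psychopathy score added to criminal-history and demographic predictors; the data are simulated and the modelling choices (logistic regression, AUC) are assumptions for illustration, not the study's actual analyses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulate a cohort: prior offences, age at first offence, and a psychopathy score.
rng = np.random.default_rng(6)
n = 157
prior_offences = rng.poisson(3, n)
age_at_first = rng.normal(14, 1.5, n)
pcl_score = rng.normal(25, 8, n).clip(0, 40)
logit = -3.0 + 0.25 * prior_offences - 0.1 * (age_at_first - 14) + 0.08 * pcl_score
violent = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated violent recidivism

base = np.column_stack([prior_offences, age_at_first])          # history/demographics only
full = np.column_stack([base, pcl_score])                        # plus psychopathy score

auc_base = roc_auc_score(violent, LogisticRegression(max_iter=1000).fit(base, violent).predict_proba(base)[:, 1])
auc_full = roc_auc_score(violent, LogisticRegression(max_iter=1000).fit(full, violent).predict_proba(full)[:, 1])
print(f"AUC, history/demographics only: {auc_base:.2f}; with psychopathy score added: {auc_full:.2f}")
```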
|