261 |
Bankruptcy : a proportional hazard approach / Betton, Sandra Ann, January 1987
The recent dramatic increase in the corporate bankruptcy rate, coupled with a similar rate of increase in the bank failure rate, has re-awakened investor, lender and government interest in the area of bankruptcy prediction. Bankruptcy prediction models are of particular value to a firm's current and future creditors who often do not have the benefit of an actively traded market in the firm's securities from which to make inferences about the debtor's viability. The models commonly used by many experts in an endeavour to predict the possibility of disaster are outlined in this paper.
The proportional hazard model, pioneered by Cox [1972], assumes that the hazard function, the risk of failure given that failure has not already occurred, is a function of various explanatory variables and estimated coefficients multiplied by an arbitrary and unknown function of time. The Cox proportional hazard model is usually applied in medical studies, but has recently been applied to the bank failure question [Lane, Looney & Wansley, 1986]. The model performed well in the narrowly defined, highly regulated banking industry. The principal advantage of this approach is that the model incorporates both the observed survival times and any censoring of the data, thereby using more of the available information in the analysis. Unlike many bankruptcy prediction models, such as logit- and probit-based regression models, the Cox model estimates the probability distribution of survival times. The proportional hazard model would, therefore, appear to be a useful addition to the more traditional bankruptcy prediction models mentioned above.
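For reference, the hazard specification described above can be written compactly in the standard Cox [1972] notation; the symbols below follow the usual textbook presentation rather than the thesis's own notation.

```latex
% h_0(t): arbitrary, unknown baseline hazard; x: explanatory variables; beta: estimated coefficients
h(t \mid x) = h_0(t)\, e^{\beta' x},
\qquad
S(t \mid x) = S_0(t)^{\exp(\beta' x)}
```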
This paper evaluates the applicability of the Cox proportional hazard model in a more diverse industrial environment. In order to test the model, a sample of 109 firms was selected from the Compustat Industrial and Research Industrial data tapes. Forty-one of these firms filed petitions under the various bankruptcy acts applicable between 1972 and 1985 and were matched to 67 firms which had not filed petitions for bankruptcy during the same period. In view of the dramatic changes in the bankruptcy regulatory environment caused by the Bankruptcy Reform Act of 1978, the legal framework of the bankruptcy process was also examined.
The performance of the estimated Cox model was then evaluated by comparing its classification and descriptive capabilities to those of an estimated discriminant analysis based model. The results of this study indicate that while the classification capability of the Cox model was less than that of discriminant analysis, the model provides additional information beyond that available from the discriminant analysis. / Business, Sauder School of / Graduate
|
262 |
Geostatistics applied to forecasting metal prices / Faulkner, Reginald Lloyd, January 1988
The objective of this thesis was to investigate the effectiveness of kriging as a predictor of future prices for copper, lead and zinc on the London Metal Exchange. The annual average metal prices from 1884 to 1986 were deflated into constant price series with reference to a base of 1984 prices. Analysis of the data showed that the requirement of stationarity was satisfied if the price series were divided into three distinct time periods: 1884 to 1917, 1918 to 1953, and 1954 to 1986. For copper, each of the three time periods was studied in detail, but for lead and zinc only the most recent period was included in this thesis. Spherical models gave a good fit to the experimental semi-variograms computed for each metal-time period and were used to predict future prices by ordinary kriging. Universal kriging was applied to the most recent time period for each metal by fitting a polynomial curve to the price-time series, computing experimental semi-variograms from the residuals and then fitting spherical models which were used to predict future prices. Within the most recent price-time series, a further subdivision was made by taking the portion of the period from the highest price to 1986. Experimental semi-variograms of the residuals from the fitted polynomial curves showed a pure nugget effect, and consequently extrapolation of the polynomial was used as the price predictor. The kriged and extrapolated future price estimates were compared to future prices estimated by a simple random walk using residual sums of squared differences.
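The spherical semi-variogram model referred to above has a standard closed form, reproduced here for reference with the conventional nugget (c_0), sill (c_0 + c) and range (a) parameters; these symbols are the textbook convention, not values or notation taken from the thesis.

```latex
% Spherical semi-variogram: nugget c_0, sill c_0 + c, range a
\gamma(h) =
\begin{cases}
c_0 + c\left(\dfrac{3h}{2a} - \dfrac{h^{3}}{2a^{3}}\right), & 0 < h \le a,\\[4pt]
c_0 + c, & h > a,
\end{cases}
\qquad \gamma(0) = 0
```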
For four of the five time series analyzed, ordinary kriging produced the best future price estimates. For copper from 1918 to 1953, the simple random walk was marginally better than ordinary kriging. This was probably due to the low price variability in this period resulting from the Great Depression and the government price controls associated with the Second World War and the Korean War. Specific forecasts for 1985 and 1986 were most accurate for copper and lead by universal kriging and most accurate for zinc by ordinary kriging.
The results are encouraging, and future investigations should include applying other kriging methods, analyzing daily and monthly prices, and comparing results with more sophisticated time series analysis techniques. / Applied Science, Faculty of / Mining Engineering, Keevil Institute of / Graduate
|
263 |
Application of the Kalman filter to iceberg motion forecasting / Simon, Christophe, January 1990
The objective of this study is to develop an application of the Kalman filter for filtering and forecasting iceberg positions and velocities in order to calculate the risk of impact against a fixed structure or stationary vessel.
Existing physical and probabilistic models are reviewed. Physical models are essentially based on the response of the iceberg to the forces acting on it. Statistical models forecast the motion of the iceberg based on past observations of the trajectory. A probabilistic iceberg trajectory model is used in this study so that uncertainties in the trajectory forecast can be explicitly included. The technique of Kalman filtering is described and applied to forecast future positions and velocities of an ice feature, including uncertainties. The trajectory forecast, combined with a risk calculation, yields the probability of impact and the probability distribution of the time until impact.
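As an illustration of the filtering and forecasting steps described above, the following is a minimal sketch of a constant-velocity Kalman filter for a two-dimensional iceberg state (position and velocity). The time step, noise covariances and state layout are illustrative assumptions and do not reproduce the probabilistic trajectory model used in the study.

```python
import numpy as np

# State: [x, y, vx, vy]; constant-velocity motion with time step dt (assumed units).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)      # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # only position is observed
Q = np.eye(4) * 0.05                           # process noise (assumed)
R = np.eye(2) * 0.5                            # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given state x, covariance P, and position fix z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

def forecast(x, P, steps):
    """Propagate the state forward without new observations; P grows with lead time."""
    for _ in range(steps):
        x, P = F @ x, F @ P @ F.T + Q
    return x, P

# Example: filter a few noisy position fixes, then forecast six steps ahead.
x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.1, 0.0]), np.array([1.2, 0.4]), np.array([2.1, 0.9])]:
    x, P = kalman_step(x, P, z)
x_fc, P_fc = forecast(x, P, 6)   # forecast mean and its growing uncertainty
```

The growing forecast covariance is what a subsequent risk calculation would integrate over a structure's location to obtain an impact probability and a time-to-impact distribution.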
Numerical simulations are provided to demonstrate and validate the procedure. / Applied Science, Faculty of / Mechanical Engineering, Department of / Graduate
|
264 |
Psychopathy, criminal history, and recidivism / Hemphill, James Franklin, 11 1900
This dissertation has three main parts. In the first part, the construct of psychopathy is described, its theoretical relevance for predicting recidivism is examined, and the literature on the Hare Psychopathy Checklist-Revised (PCL-R; Hare, 1980, 1991) and recidivism is briefly reviewed. The association between psychopathy and recidivism (general, violent) was examined in five samples (N > 800 inmates) of provincial and federal male inmates who were incarcerated in British Columbia between 1964 and 1995. Results were consistent across samples and across measures and indicated that psychopathy was positively associated with recidivism. These findings indicate that psychopathy is important for identifying inmates who are at risk of being reconvicted.
In the second part of the dissertation, a comprehensive and empirically based set of crime categories was developed. Crimes were sorted into 200 descriptive categories and then collapsed into broader categories using frequency counts and factor analysis. Results indicated that the four most frequently occurring crime categories (break and enter, fraud, theft, possession of illegal property) accounted for more than half of all convictions, whereas the remaining 25 crime categories accounted for less than half of all convictions.
In the third part of the dissertation, PCL-R scores, frequency counts for the crime categories, and basic demographic variables were entered into a stepwise discriminant function analysis to predict general recidivism (yes, no) and into another discriminant function analysis to predict violent recidivism. The percentage of general recidivists who were correctly classified (81.3%) was similar in magnitude to the base rate of general recidivism (81.1%). In terms of violent recidivism, five variables (PCL-R scores, two age variables, and previous convictions for robbery and for assault) emerged as important predictors. Scores on each of these five predictors were assigned weights, and the weights were summed to form a violence risk score. Higher scores on the violence risk scale identified inmates who were at higher risk of being convicted of violent recidivism. Scores on the risk instrument correctly classified 62.2% of inmates into violent (yes, no) recidivism groups. These results held up under cross-validation; in an independent sample of 124 inmates, 64.5% of inmates were correctly classified. The findings indicate that the violence risk scale has promise as a measure for identifying inmates who are at risk of being convicted of future violence. / Arts, Faculty of / Psychology, Department of / Graduate
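To make the classification step concrete, the sketch below shows how predictors of the kind described (a PCL-R total, age variables and selected conviction counts) could be entered into a linear discriminant analysis with scikit-learn; the variable layout and toy data are hypothetical and do not reproduce the dissertation's stepwise analysis or its reported weights.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical predictor matrix: [PCL-R total, age at release, age at first
# conviction, prior robbery convictions, prior assault convictions].
X = np.array([
    [31, 24, 16, 2, 3],
    [12, 41, 25, 0, 0],
    [27, 29, 17, 1, 2],
    [ 8, 38, 30, 0, 0],
    [25, 22, 15, 3, 1],
    [15, 35, 21, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = violent recidivist, 0 = non-recidivist (toy labels)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The discriminant coefficients play the role of weights summed into a risk
# score; higher scores indicate higher estimated risk of violent recidivism.
risk_scores = X @ lda.coef_.ravel() + lda.intercept_
predicted = lda.predict(X)
print(risk_scores, predicted)
```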
|
265 |
DEEP LEARNING FOR CRIME PREDICTION / Unknown Date
In this research, we propose to use deep learning to predict crimes in small neighborhoods (regions) of a city, using historical crime data collected in the past. The motivation for crime prediction is that if we can predict the number of crimes that will occur in a certain week, then city officials and law enforcement can prepare resources and manpower more effectively. Due to inherent connections between geographic regions and crime activities, the crime numbers in different regions (with respect to different time periods) are often correlated. Such correlation brings challenges and opportunities to employ deep learning to learn features from historical data for accurate prediction of the future crime numbers for each neighborhood. To leverage crime correlations between different regions, we convert crime data into a heat map to show the intensity of crime numbers and the geographical distributions. After that, we design a deep learning framework to learn from such heat maps for prediction.
In our study, we look at the crimes reported in twenty different neighborhoods in Vancouver, Canada, over a twenty-week period and predict the total crime count that will occur in the future. We look at the number of crimes per week over a span of ten weeks and predict the crime count for the following weeks.
The locations where the crimes occur are extracted from a database and plotted onto a heat map. The model we use to predict the crime count consists of a CNN (Convolutional Neural Network) and an LSTM (Long Short-Term Memory) network attached to the CNN. The purpose of the CNN is to train the model spatially and understand where crimes occur in the images. The LSTM is used to train the model temporally and to capture in which week the crimes occur. By feeding heat map images of crime hot spots into the CNN and LSTM network, we are able to predict the crime count and the most likely locations of crimes for future weeks. / Includes bibliography. / Thesis (MS)--Florida Atlantic University, 2021. / FAU Electronic Theses and Dissertations Collection
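The following is a minimal sketch of the kind of architecture described, with a small CNN applied to each weekly heat map and an LSTM reading the resulting sequence of feature vectors; the image size, layer widths and ten-week input window are assumptions for illustration, not the exact network used in this thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

WEEKS, H, W = 10, 64, 64   # ten weekly heat maps of 64x64 pixels (assumed size)

model = models.Sequential([
    layers.Input(shape=(WEEKS, H, W, 1)),
    # Apply the same small CNN to each weekly heat map to learn spatial patterns.
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()),
    # The LSTM reads the sequence of weekly feature vectors to learn temporal trends.
    layers.LSTM(64),
    # Regression head: predicted crime count for the following week.
    layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(heatmap_sequences, next_week_counts, epochs=..., batch_size=...)
```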
|
266 |
Integration experiments using an energy conserving diabatic model in isentropic coordinates / Marx, Lawrence, January 1977
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1977 / Bibliography: leaves 125-126.
|
267 |
From on-air to online : an integrated framework for television viewership prediction / Tai, Samson Kin Hon, 05 December 2018
The unprecedented expansion of online video has dramatically changed patterns of media consumption. More and more viewers are watching serialized television drama across multiple digital platforms, leading advertisers to buy across multiple channels and television broadcasters to refine forecasting models for near-term television ratings of program content so as to provide more accurate and timelier predictions. Using traditional television ratings prediction methods with data from a single source is no longer appropriate. An integrated television ratings model incorporating data from multiple sources is proposed in this study to tackle the complexities of the consumption of television content by today's audiences in a multi-platform television environment. Using 611 episodes from 26 unique local drama serials aired in prime-time slots from December 2014 to August 2016, with viewership data obtained from traditional television and catch-up television platforms and associated online word of mouth (WOM) data from a popular local social media site, I tested five hypotheses derived from the theories of status quo bias, the Zeigarnik effect and the WOM effect. The dataset was subjected to regression analysis, and the results supported four of the five hypotheses, with the following findings. First, significant relationships were found between the television ratings of a drama serial episode, the ratings of its preceding episode, the volume of WOM and online catch-up viewing. Second, viewership via the online catch-up television platform complemented rather than cannibalized viewership of a subsequent episode of a prime-time television drama broadcast on traditional television, with a positive impact on television ratings.
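Schematically, the kind of ratings regression described can be written as follows; the symbols, and the timing of the WOM and catch-up terms, are illustrative and are not the exact specification estimated in the thesis.

```latex
% Illustrative specification: episode ratings regressed on the lagged rating,
% word-of-mouth volume, and online catch-up viewing for serial i, episode t
\mathit{Rating}_{i,t} = \beta_0
  + \beta_1 \,\mathit{Rating}_{i,t-1}
  + \beta_2 \,\mathit{WOM}_{i,t}
  + \beta_3 \,\mathit{CatchUp}_{i,t}
  + \varepsilon_{i,t}
```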
|
268 |
FORTEL, a telecommunications forecasting system / Black, James Leslie, 01 January 1981
The telecommunications industry is undergoing a metamorphic change. New technologies have made possible a vast panorama of new services. Electronic newspapers, videotelephones, and computer-mediated conferences are but a few of the possibilities. In contrast to technology, most of the other factors affecting the industry (and its infrastructure) have not changed. Government regulations, for example, are based on the 1934 Communications Act. These factors (e.g., consumer behavior, corporate philosophies, energy, and competition) are beginning to change. Congress is considering a rewrite of the 1934 Communications Act. Competition has become a dominant factor (note that the industry by and large has been a government-regulated monopoly). The outcome of these changes will determine which services will be offered and the extent to which they will be demanded. The problem is to determine which changes will actually occur and then forecast their effect. The problem is further complicated by the fact that those individuals and institutions involved in making changes do not know which changes to make. For them, a forecasting system is needed to evaluate the outcome of various changes.
A review of the literature was conducted to determine the extent to which the problem was solved. Several forecasts have been generated, but none address the problem in its entirety. Some have examined, in great detail, one or two of the pertinent factors, but ignored the others. Other forecasts have been more global. These assume certain changes will be made, and forecasts are then developed. The problem is that users do not have the option of changing the assumptions. Moreover, most of these forecasters have not specified their assumptions.
Following the literature search, an overview of the FORTEL System is given. FORTEL is designed to forecast service and terminal equipment demand. The system synthesizes forecasters' assumptions about industry change with quantitative demand models. The system also provides the forecaster with the ability to change assumptions and evaluate the outcome, i.e., test alternative scenarios. There are three major components: the model, the user assumptions, and the computer program. The model component consists of four demand models. For each market (residential and business), there are service and terminal models. User assumptions reflect the forecaster's outlook on the future of the telecommunications industry. The assumptions are divided into five categories. The general category reflects assumptions about the industry in general, the economy, societal change, and energy. Technology assumptions deal with the future of a variety of new (and existing) technologies. There are several assumptions about services and terminals. These cover a broad range of factors which will directly affect the implementation and demand of a particular service or terminal. The final category includes assumptions about the forecasting technique (quantitative techniques) to be used. The computer program is the synthesizing component of the system. It combines the user assumptions with the models to produce the forecast. It also contains the scenario-testing facet of the system.
Following a detailed description of each component, an output comparison is given. The author has selected three different sets of user assumptions, and the forecasts generated from these sets are compared. The three range from a pessimistic outlook (economic recession and energy crisis) to a very prosperous outlook (exponential growth).
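As a rough illustration of the synthesis performed by the computer program (combining a set of user assumptions with a demand model and re-running it under alternative scenarios), the sketch below uses a toy compound-growth demand model; every name, value and functional form here is invented for illustration and is not FORTEL's actual model.

```python
from dataclasses import dataclass

@dataclass
class Assumptions:
    """A tiny stand-in for one category of user assumptions (illustrative only)."""
    base_demand: float      # current service demand, in arbitrary units
    growth_rate: float      # assumed annual growth under this scenario
    years: int              # forecast horizon

def demand_forecast(a: Assumptions) -> list[float]:
    """Toy demand model: compound growth from the assumed base."""
    return [a.base_demand * (1 + a.growth_rate) ** t for t in range(a.years + 1)]

# Three alternative scenarios, from pessimistic to prosperous.
scenarios = {
    "recession":  Assumptions(base_demand=100.0, growth_rate=-0.02, years=5),
    "baseline":   Assumptions(base_demand=100.0, growth_rate=0.04,  years=5),
    "prosperous": Assumptions(base_demand=100.0, growth_rate=0.12,  years=5),
}
for name, a in scenarios.items():
    print(name, [round(d, 1) for d in demand_forecast(a)])
```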
The thesis closes with a chapter on follow-on research and a conclusion.
|
269 |
Time series analysis using fractal theory and online ensemble classifiers with application to stock portfolio optimization / Lunga, Wadzanai Dalton, 10 October 2007
The neural network method is a heavily researched technique used in engineering applications for purposes ranging from process control to biomedical applications. The success of neural networks (NN) in engineering applications, e.g. object tracking and face recognition, has motivated their application to the finance industry. In the financial industry, time series data are used to model economic variables. As a result, finance researchers, portfolio managers and stockbrokers have taken an interest in applying NN to model the non-linear problems they face in their practice. NN facilitate stock prediction through their ability to accurately and intuitively learn complex patterns and to characterize these patterns as simple equations. In this research, a methodology that uses fractal theory and an NN framework to model stock market behaviour is proposed and developed. The time series analysis is carried out using the proposed approach, with application to modelling the future directional movement of the Dow Jones Average Index.
A methodology is developed to establish the self-similarity of a time series and the long-memory effects that result in classifying the time series signal as persistent, random or non-persistent, using the rescaled range analysis technique. A linear regression technique is used for the estimation of the required parameters, and an incremental online NN algorithm is implemented to predict the directional movement of the stock. An iterative fractal analysis technique is used to select the required signal intervals using the approximated parameters. The selected data are then combined to form a signal of interest and passed to the ensemble of classifiers. The classifiers are modelled using a neural-network-based algorithm. The performance of the final algorithm is measured on the accuracy of predicting the direction of movement and also on the algorithm's confidence in its decision-making.
The improvement within the final algorithm is assessed by comparing results from two models: the first implemented without fractal analysis and the second implemented with the aid of a strong fractal analysis technique. The results of the first NN model were published in the Lecture Notes in Computer Science 2006 by Springer. The second NN model incorporated a fractal theory technique, and the results from this model show a great deal of improvement when classifying the next day's direction of stock movement. A summary of these results was submitted to the Australian Joint Conference on Artificial Intelligence 2006 for publication. Limitations on the sample size, as well as problems encountered with the proposed approach, are also outlined in the next sections. This document also outlines recommendations that can be implemented as further steps to advance and improve the proposed approach in future work.
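A minimal sketch of the rescaled range step described above is given below: it estimates the Hurst exponent by regressing log(R/S) on log(window size) and labels the series accordingly. The window sizes and the classification bands around 0.5 are conventional illustrative choices, not parameters taken from this research.

```python
import numpy as np

def rescaled_range(window: np.ndarray) -> float:
    """R/S statistic for one window: range of cumulative deviations over the std dev."""
    x = window - window.mean()
    z = np.cumsum(x)
    r = z.max() - z.min()
    s = window.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_exponent(series: np.ndarray, window_sizes=(8, 16, 32, 64, 128)) -> float:
    """Estimate H from the slope of log(R/S) against log(window size)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, n)]   # non-overlapping windows
        rs_vals = [v for v in rs_vals if np.isfinite(v)]
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)    # linear regression in log-log space
    return slope

rng = np.random.default_rng(0)
series = rng.standard_normal(2048)             # toy white-noise series (H roughly 0.5)
H = hurst_exponent(series)
# Illustrative classification bands; the thesis's own terminology is persistent /
# random / non-persistent.
if H > 0.55:
    label = "persistent"
elif H < 0.45:
    label = "non-persistent"
else:
    label = "approximately random"
print(f"H = {H:.2f} ({label})")
```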
|
270 |
Development of statistical downscaling methods for the daily precipitation process at a local site / Pharasi, Sid, January 2006
No description available.
|