1 |
Minimum description length, regularisation and multi-modal data
Van der Rest, John C. January 1995 (has links)
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic that can be roughly interpreted as the number of data points fit by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience, and is therefore called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the minimum of the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data, obtained by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, and on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
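The mode-versus-mean distinction at the heart of this abstract can be pictured with a toy cost comparison. The sketch below is illustrative only: the names and the simple window-counting cost are ours, not the Naive Description Length cost function itself.

```python
def sum_squares_cost(c, data):
    # minimised by the mean of the data
    return sum((x - c) ** 2 for x in data)

def points_described_cost(c, data, width=0.5):
    # negative count of points "described" (fit) by a model centred at c;
    # minimised near the densest cluster, i.e. the mode
    return -sum(1 for x in data if abs(x - c) <= width)

# a tight cluster near 0 plus one outlying point at 10
data = [0.0, 0.1, -0.1, 10.0]
candidates = [i / 10 for i in range(-20, 110)]

best_sq = min(candidates, key=lambda c: sum_squares_cost(c, data))
best_dl = min(candidates, key=lambda c: points_described_cost(c, data))
# best_sq lands at the mean (2.5); best_dl lands near the cluster around 0
```

The outlier drags the sum-of-squares minimum to the mean, while the description-style cost ignores it and settles on the cluster, mirroring the mode-seeking behaviour described above.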
|
2 |
Approximating differentiable relationships between delay embedded dynamical systems with radial basis functions
Potts, Michael Alan Sherred January 1996 (has links)
This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. 
The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
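The time delay embeddings this thesis works with are simple to construct. A minimal sketch follows; the function name and the plain list representation are illustrative, not the thesis software.

```python
def delay_embed(series, dim, tau):
    """Build a time-delay embedding of a scalar series.

    Each embedded point is (x[t], x[t - tau], ..., x[t - (dim - 1) * tau]),
    following the standard delay-coordinate construction.
    """
    start = (dim - 1) * tau          # earliest index with a full history
    return [
        [series[t - k * tau] for k in range(dim)]
        for t in range(start, len(series))
    ]

# embedding a toy ramp series into 3 dimensions with lag 2
emb = delay_embed(list(range(10)), dim=3, tau=2)
```

Maps between two such embeddings (of different sensors, or of raw and filtered series) are then fitted with radial basis functions, as the abstract describes.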
|
3 |
Neural networks for machine fault diagnosis and life span prediction
Tse, Peter W. January 1997 (has links)
No description available.
|
4 |
Weather Radar Image Based Forecasting using Joint Series Prediction
Kattekola, Sravanthi 17 December 2010 (has links)
Accurate rainfall forecasting using weather radar imagery has always been a crucial and predominant task in the field of meteorology [1], [2], [3], [4]. Competitive Radial Basis Function Neural Networks (CRBFNN) [5] are one of the methods used for weather radar image based forecasting. Recently, an alternative CRBFNN-based approach [6] was introduced to model precipitation events. The difference between the techniques presented in [5] and [6] lies in the approach used to model the rainfall image. Overall, it was shown that the modified CRBFNN approach [6] is more computationally efficient than the CRBFNN approach [5]. However, both techniques [5] and [6] share the same prediction stage. In this thesis, a different GRBFNN approach is presented for forecasting Gaussian envelope parameters. The proposed method investigates the concept of parameter dependency among Gaussian envelopes. Experimental results are also presented to illustrate the advantage of joint parameter prediction over independent series prediction.
|
5 |
Forecasting Global Temperature Variations by Neural Networks
Miyano, Takaya, Girosi, Federico 01 August 1994 (has links)
Global temperature variations between 1861 and 1984 are forecast using regularization networks, multilayer perceptrons and linear autoregression. The regularization network, optimized by stochastic gradient descent associated with colored noise, gives the best forecasts. For all the models, prediction errors noticeably increase after 1965. These results are consistent with the hypothesis that the climate dynamics is characterized by low-dimensional chaos and that it may have changed at some point after 1965, which is also consistent with the recent idea of climate change.
|
6 |
Self-Learning Prediction System for Optimisation of Workload Management in a Mainframe Operating System
Bensch, Michael, Brugger, Dominik, Rosenstiel, Wolfgang, Bogdan, Martin, Spruth, Wilhelm 06 November 2018 (has links)
We present a framework for extraction and prediction of online workload data from a workload manager of a mainframe operating system. To boost overall system performance, the prediction will be incorporated into the workload manager to take preventive action before a bottleneck develops. Model and feature selection automatically create a prediction model based on given training data, thereby keeping the system flexible. We tailor data extraction, preprocessing and training to this specific task, keeping in mind the non-stationarity of business processes. Using error measures suited to our task, we show that our approach is promising. To conclude, we discuss our first results and give an outlook on future work.
|
7 |
Time series prediction using supervised learning and tools from chaos theory
Edmonds, Andrew Nicola January 1996 (has links)
In this work, methods for performing time series prediction on complex real world time series are examined, in particular series exhibiting non-linear or chaotic behaviour. A range of methodologies based on Takens' embedding theorem are considered and compared with more conventional methods. A novel combination of methods for determining the optimal embedding parameters is employed and tried out on multivariate financial time series data and on a complex series derived from an experiment in biotechnology. The results show that this combination of techniques provides accurate results while dramatically reducing the time required to produce predictions and analyses, and eliminating a range of parameters that had hitherto been fixed empirically. The architecture and methodology of the prediction software developed are described, along with design decisions and their justification. Sensitivity analyses are employed to justify the use of this combination of methods, and comparisons are made with more conventional predictive techniques and with trivial predictors, showing the superiority of the results generated by the work detailed in this thesis.
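A predictor of the simple kind such work is benchmarked against can be sketched as a nearest-neighbour forecast in delay-embedding space: find the past state closest to the current one and predict the value that followed it. This is an illustrative baseline under our own naming, not the thesis' method.

```python
def nearest_neighbour_forecast(series, dim, tau):
    """One-step forecast: locate the historical delay vector closest to
    the current one and return the value that followed it."""
    def embed(t):
        # delay vector (x[t], x[t - tau], ..., x[t - (dim - 1) * tau])
        return [series[t - k * tau] for k in range(dim)]

    horizon = (dim - 1) * tau            # earliest fully-defined state
    current = embed(len(series) - 1)     # the state we forecast from
    best_t, best_d = None, float("inf")
    for t in range(horizon, len(series) - 1):
        d = sum((a - b) ** 2 for a, b in zip(embed(t), current))
        if d < best_d:
            best_d, best_t = d, t
    return series[best_t + 1]            # what followed the nearest state
```

On a periodic toy series the matched state recurs exactly, so the forecast reproduces the cycle.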
|
8 |
Machine Learning for Decision-Support in Distributed Networks
Setati, Makgopa Gareth 14 November 2006 (has links)
Student Number : 9801145J -
MSc dissertation -
School of Electrical and Information Engineering -
Faculty of Engineering / In this document, a paper is presented that reports on the optimisation of a system that assists in time series prediction. Daily closing prices of a stock are used as the time series for which the system is optimised. Concepts of machine learning, Artificial Neural Networks, Genetic Algorithms, and Agent-Based Modeling are used as tools for this task. Neural networks serve as the prediction engine, and genetic algorithms are used for optimisation tasks as well as the simulation of a multi-agent trading environment. The simulated trading environment is used to identify and optimise the best-quality data to use as inputs to the neural network. The results achieved were positive, and a large portion of this work concentrates on the refinement of the predictive capability. From this study it is concluded that AI methods bring a sound scientific approach to time series prediction, regardless of the phenomenon being predicted.
|
9 |
A forecasting of indices and corresponding investment decision making application
Patel, Pretesh Bhoola 01 March 2007 (has links)
Student Number : 9702018F -
MSc(Eng) Dissertation -
School of Electrical and Information Engineering -
Faculty of Engineering and the Built Environment / Due to the volatile nature of the world economies, investing is crucial in ensuring an individual is prepared for future financial necessities. This research proposes an application, which employs computational intelligence methods that could assist investors in making financial decisions. This system consists of two components. The Forecasting Component (FC) is employed to predict the closing index price performance. Based on these predictions, the Stock Quantity Selection Component (SQSC) recommends that the investor purchase stocks, hold the current investment position or sell stocks in possession. The development of the FC module involved the creation of Multi-Layer Perceptron (MLP) as well as Radial Basis Function (RBF) neural network classifiers. The categories that these networks classify are based on a profitable trading strategy that outperforms the long-term “Buy and hold” trading strategy. The Dow Jones Industrial Average, Johannesburg Stock Exchange (JSE) All Share, Nasdaq 100 and the Nikkei 225 Stock Average indices are considered. It has been determined that the MLP neural network architecture is particularly suited to the prediction of closing index price performance. Accuracies of 72%, 68%, 69% and 64% were obtained for the prediction of closing price performance of the Dow Jones Industrial Average, JSE All Share, Nasdaq 100 and Nikkei 225 Stock Average indices, respectively. Three designs of the Stock Quantity Selection Component were implemented and compared in terms of their complexity as well as scalability. Complexity is defined as the number of classifiers employed by the design. Scalability is defined as the ability of the design to accommodate the classification of additional investment recommendations. Designs that utilized 1, 4 and 16 classifiers, respectively, were developed. These designs were implemented using MLP neural networks, RBF neural networks, Fuzzy Inference Systems as well as Adaptive Neuro-Fuzzy Inference Systems. The design that employed 4 classifiers achieved low complexity and high scalability. As a result, this design is most appropriate for the application of concern. It has also been determined that the neural network architecture as well as the Fuzzy Inference System implementation of this design performed equally well.
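The decision side can be pictured as a mapping from a classifier's direction prediction to one of the three recommendations named in the abstract. This is a toy rule for illustration only; the actual SQSC designs use 1, 4 or 16 classifiers and richer category structures.

```python
def recommend(predicted_up, holding):
    """Toy mapping from an 'index will close up' prediction to a
    purchase / hold / sell recommendation, given whether stocks
    are currently in possession."""
    if predicted_up:
        return "hold" if holding else "purchase"
    return "sell" if holding else "hold"
```

For example, a predicted rise with no current position maps to a purchase, while a predicted fall while holding maps to a sale.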
|
10 |
Predictive Maintenance of NOx Sensor using Deep Learning: Time series prediction with encoder-decoder LSTM
Kumbala, Bharadwaj Reddy January 2019 (has links)
In the automotive industry there is a growing need to predict the failure of a component, to achieve cost savings and customer satisfaction, since failure of a component leads to a work breakdown for the customer. This paper describes an effort to build a failure prediction monitoring model for the NOx sensor in trucks, a component used to measure the level of nitrogen oxide emissions from the truck. The NOx sensor was chosen because its failure leads to a slowdown in engine efficiency, and because it is fragile and costly to replace. Data from good and contaminated NOx sensors, collected from test rigs, is used as input to the model. The work in this paper shows an approach of complementing Deep Learning models with a Machine Learning algorithm to achieve the results. LSTMs are used to detect the gain in the NOx sensor, an encoder-decoder LSTM is used to predict the variables, and on top of that a Multiple Linear Regression model is used to obtain the end results. The performance of the monitoring model is promising. The approach described in this paper is general rather than specific to this component, and can also be applied to other sensors.
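Training data for an encoder-decoder model is typically built from sliding windows over the sensor series: the encoder sees a stretch of past readings and the decoder is trained to emit the next few. A minimal sketch follows; the window sizes and function name are illustrative, since the abstract does not specify the preprocessing.

```python
def make_seq2seq_windows(series, n_in, n_out):
    """Slice a time series into (input, target) window pairs for
    encoder-decoder training: `n_in` past readings in, the next
    `n_out` readings out."""
    pairs = []
    for i in range(len(series) - n_in - n_out + 1):
        pairs.append((series[i : i + n_in],
                      series[i + n_in : i + n_in + n_out]))
    return pairs

# e.g. 3 readings in, 2 readings out, over a short toy series
pairs = make_seq2seq_windows([1, 2, 3, 4, 5, 6], n_in=3, n_out=2)
```

Each pair then becomes one training example: the first element feeds the encoder, the second is the decoder's target sequence.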
|