
Projection of a Markov Process with Neural Networks

In this work we examine an application from the insurance industry. We first reformulate it as the problem of projecting a Markov process, and then develop a method for carrying the projection many steps into the future using a combination of neural networks trained with a maximum-entropy principle. This methodology improves on the current industry-standard solution in four key areas: variance, bias, confidence-level estimation, and the use of inhomogeneous data. The neural-network aspects of the methodology include a generalization-error estimate that does not rely on a validation set. We also develop our own approximation to the Hessian matrix, which appears to be significantly better than assuming the Hessian to be diagonal and much faster than computing it exactly; this approximation is used in the network-pruning algorithm. The parameters of a conditional probability distribution are generated by a neural network trained to maximize the log-likelihood plus a regularization term. In preparing the data for training the neural networks, we devise a scheme that completely decorrelates the input dimensions, including non-linear correlations, which should be of general interest in its own right. Our results indicate that the bias inherent in the current industry-standard projection technique is very significant; this work may be the only accurate measurement of this important source of error.
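The training objective described above can be sketched in a few lines. The following is a hypothetical minimal example, not the thesis's actual code: a small network maps a state x to the parameters (mu, log_sigma) of a conditional Gaussian p(y | x) and is trained by gradient descent on the negative log-likelihood plus an L2 regularization term. All sizes, learning rates, and the toy transition data are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a one-hidden-layer network outputs the parameters
# (mu, log_sigma) of a conditional Gaussian, trained to maximize
# log-likelihood minus a weight-decay regularization term.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))                    # current state samples
Y = 0.5 * X + 0.2 * rng.normal(size=(200, 1))    # toy one-step transition

H, lam, lr = 8, 1e-4, 0.02                       # hidden size, decay, step
W1 = rng.normal(scale=0.3, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=(H, 2)); b2 = np.zeros(2)

def regularized_nll():
    """Mean negative log-likelihood plus the L2 penalty."""
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    mu, log_s = out[:, :1], out[:, 1:]
    nll = np.mean(0.5 * (Y - mu) ** 2 / np.exp(2 * log_s)
                  + log_s + 0.5 * np.log(2 * np.pi))
    return nll + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

initial_loss = regularized_nll()
for _ in range(2000):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    mu, log_s = out[:, :1], out[:, 1:]
    s2 = np.exp(2 * log_s)
    d_mu = -(Y - mu) / s2                        # d nll / d mu
    d_logs = 1.0 - (Y - mu) ** 2 / s2            # d nll / d log_sigma
    d_out = np.hstack([d_mu, d_logs]) / len(X)
    dW2 = h.T @ d_out + 2 * lam * W2
    db2 = d_out.sum(axis=0)
    d_pre = (d_out @ W2.T) * (1 - h ** 2)        # backprop through tanh
    dW1 = X.T @ d_pre + 2 * lam * W1
    db1 = d_pre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
final_loss = regularized_nll()
```

Because the network predicts log_sigma rather than sigma directly, the positivity of the standard deviation is enforced automatically and the likelihood gradients stay simple.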

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:kth-183498
Date January 2001
Creators Folkesson, John
Publisher KTH, Centrum för Autonoma System, CAS; KTH, Datorseende och robotik, CVAP
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
