101 |
動態徑向基底函數網路與混沌預測 / Dynamical Radial Basis Function Networks and Chaotic Forecasting. 蔡炎龍 (Tsai, Yen Lung). Unknown Date (has links)
Forecasting techniques are important in many research areas and applications. In this thesis, we construct a new neural network model, the dynamical radial basis function (DRBF) network, and use DRBF networks as "function approximators" to solve forecasting problems. We also design several different learning algorithms to test the capability of DRBF networks.
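The abstract does not spell out the DRBF architecture, so the following is only a minimal sketch, under assumed settings, of how a plain (non-dynamical) Gaussian RBF network can serve as a function approximator for one-step-ahead forecasting of a chaotic series; the logistic-map data, the number of centres, and the width parameter are illustrative choices, not the thesis's.

```python
# Sketch: a plain Gaussian RBF network used as a one-step-ahead function
# approximator on a chaotic series (logistic map). Centres, width and the
# regularisation constant are illustrative choices, not the thesis's DRBF setup.
import numpy as np

rng = np.random.default_rng(0)

# Chaotic test series: logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

d = 3                                                        # embedding dimension
X = np.array([x[t:t + d] for t in range(len(x) - d)])        # inputs x_t .. x_{t+d-1}
y = x[d:]                                                    # target x_{t+d}

centers = X[rng.choice(len(X), size=30, replace=False)]      # randomly chosen centres
eps = 2.0                                                    # RBF width parameter

def phi(A, C, eps):
    # Gaussian RBF design matrix: phi_ij = exp(-eps^2 * ||A_i - C_j||^2)
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-(eps ** 2) * d2)

Phi = phi(X[:400], centers, eps)
w = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(len(centers)), Phi.T @ y[:400])

pred = phi(X[400:], centers, eps) @ w                        # out-of-sample forecasts
print("test RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)))
```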
|
102 |
Predicting Stock Price Index. Gao, Zhiyuan, Qi, Likai. January 2010 (has links)
This study is based on three models: the Markov model, the Hidden Markov model, and the radial basis function neural network. A considerable amount of work has been done previously on applying these three models to the stock market, though individual researchers have developed their own techniques for designing and testing the radial basis function neural network. This paper aims to show the different ways of applying these three models to predict price processes of the stock market, and the precision each achieves. By comparing the models on the same group of data, the authors obtain different results. Based on the Markov model, the authors find a tendency of the stock market in the future, while the Hidden Markov model behaves better in the financial market. When the fluctuation of the stock price index is not drastic, the radial basis function neural network gives good predictions.
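As a rough illustration of the first of the three models only, the sketch below estimates a Markov transition matrix over up/flat/down moves of an index and reads off the most likely next state; the state definition and the synthetic price series are assumptions, and the Hidden Markov model and RBF network parts of the study are not reproduced.

```python
# Toy sketch of the Markov-model part: estimate a transition matrix over
# up/flat/down daily moves of an index and read off the most likely next state.
# The state definition and the synthetic data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, 1000))   # stand-in index
returns = np.diff(prices) / prices[:-1]

def state(r, band=0.002):
    return 0 if r < -band else (2 if r > band else 1)           # down / flat / up

s = np.array([state(r) for r in returns])
T = np.zeros((3, 3))
for a, b in zip(s[:-1], s[1:]):                                 # count transitions
    T[a, b] += 1
T = T / T.sum(axis=1, keepdims=True)                            # row-normalise

today = s[-1]
print("transition matrix:\n", T.round(3))
print("most likely next state:", ["down", "flat", "up"][int(T[today].argmax())])
```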
|
103 |
Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers. Schoelkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V. 01 December 1996 (has links)
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold such as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
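A compact sketch of the classical baseline described above, with centres chosen by k-means and output weights trained by gradient descent (the output-layer part of error backpropagation); synthetic 2-D data stands in for the US postal service digits, and all hyper-parameters are illustrative assumptions.

```python
# Sketch of the classical RBF baseline: centres from k-means, a sigmoid output
# unit whose weights are trained by gradient descent. Synthetic 2-D data stands
# in for the USPS digits; k, gamma and the learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 0.7, (200, 2)), rng.normal(+1, 0.7, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]

def kmeans(X, k, iters=50):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return C

C = kmeans(X, k=10)
gamma = 1.0
Phi = np.exp(-gamma * ((X[:, None] - C[None]) ** 2).sum(-1))     # N x k features

w, b = np.zeros(Phi.shape[1]), 0.0
for _ in range(2000):                                            # gradient descent
    p = 1 / (1 + np.exp(-(Phi @ w + b)))                         # sigmoid output
    g = p - y                                                    # cross-entropy gradient
    w -= 0.1 * Phi.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((1 / (1 + np.exp(-(Phi @ w + b))) > 0.5) == y).mean()
print("training accuracy:", acc)
```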
|
104 |
Mesh free methods for differential models in financial mathematics. Sidahmed, Abdelmgid Osman Mohammed. January 2011 (has links)
Many problems in the financial world are modeled by means of differential equations. These problems are time dependent, highly nonlinear, stochastic, and depend heavily on their previous time history. A variety of financial products exists in the market, such as forwards, futures, swaps and options. Our main focus in this thesis is to use numerical analysis tools to solve some option pricing problems. Depending upon the inter-relationship of the financial derivatives, the dimension of the associated problem increases drastically, and hence conventional methods (for example, finite difference or finite element methods) do not provide satisfactory results. To resolve this issue, we use a special class of numerical methods, namely, mesh free methods. These methods are often better suited to cope with changes in the geometry of the domain of interest than classical discretization techniques. In this thesis, we apply these methods to solve problems that price standard and non-standard options. We then extend the proposed approach to solve Heston's volatility model. The methods in each of these cases are analyzed for stability, and thorough comparative numerical results are provided.
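The abstract does not specify the particular mesh free scheme, so the following is only a minimal Kansa-type Gaussian RBF collocation sketch on a 1-D two-point boundary value problem, standing in for the option-pricing equations; the shape parameter and node layout are illustrative assumptions.

```python
# Minimal Kansa-type RBF collocation sketch on a 1-D model problem
# u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0, exact solution u = sin(pi x).
# This stands in for the option-pricing PDEs of the thesis; the Gaussian
# shape parameter and node layout are illustrative choices.
import numpy as np

eps = 5.0
nodes = np.linspace(0.0, 1.0, 20)                     # mesh-free centres/collocation pts

def rbf(x, c):          # Gaussian RBF
    return np.exp(-(eps * (x - c)) ** 2)

def rbf_xx(x, c):       # its second derivative in x
    r = x - c
    return (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)

X, C = np.meshgrid(nodes, nodes, indexing="ij")       # collocation points x centres
A = rbf_xx(X, C)                                      # interior rows: apply u''
b = -np.pi**2 * np.sin(np.pi * nodes)
A[0, :], A[-1, :] = rbf(nodes[0], nodes), rbf(nodes[-1], nodes)   # boundary rows
b[0], b[-1] = 0.0, 0.0

lam = np.linalg.solve(A, b)                           # expansion coefficients

xe = np.linspace(0, 1, 200)
u = rbf(xe[:, None], nodes[None, :]) @ lam            # evaluate the RBF expansion
print("max error vs exact solution:", np.abs(u - np.sin(np.pi * xe)).max())
```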
|
106 |
Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models. Razavi, Seyed Saman. January 2013 (has links)
Environmental simulation models have been playing a key role in civil and environmental engineering decision-making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically performed in an automated form where the simulation model is linked to a search mechanism (e.g., an optimization algorithm) such that the search mechanism iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them by running the model in an attempt to minimize differences between observed data and the corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as then any automatic calibration attempt imposes a large computational burden. Such a challenge may force model users to accept sub-optimal solutions and forgo the best model performance.
The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is developing a strategy called “deterministic model preemption” which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (even as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that further running the model would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied.
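A schematic sketch of the preemption idea described above: because a sum-of-squared-errors objective only grows as the simulation proceeds, a candidate whose partial error already exceeds the best value found so far can be terminated early without changing which candidate the search ultimately selects. The toy model, the SSE objective, and the random-search loop are stand-ins, not the thesis's implementation.

```python
# Schematic of deterministic model preemption: while a candidate parameter set
# is being simulated, the partially accumulated error is checked against a
# preemption threshold (here, the best SSE found so far); once it is exceeded,
# the rest of the simulation cannot change the search outcome and is skipped.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
observed = np.sin(0.05 * t) + 0.05 * rng.normal(size=t.size)

def simulate_step(params, step):
    a, w = params
    return a * np.sin(w * step)             # one time step of a toy model

def preemptive_sse(params, threshold):
    sse = 0.0
    for step in t:                          # SSE accumulates monotonically
        sse += (simulate_step(params, step) - observed[step]) ** 2
        if sse > threshold:                 # further steps can only increase it
            return np.inf, step + 1         # pre-empt: report steps actually run
    return sse, t.size

best = np.inf
saved = 0
for trial in range(500):                    # stand-in for the search mechanism
    cand = (rng.uniform(0.5, 1.5), rng.uniform(0.01, 0.1))
    sse, steps = preemptive_sse(cand, threshold=best)
    saved += t.size - steps
    if sse < best:
        best = sse

print(f"best SSE: {best:.3f}, model time steps avoided: {saved}")
```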
One other main contribution of this thesis is developing and utilizing the concept of “surrogate data” which is basically a reasonably small but representative proportion of a full set of calibration data. This concept is inspired by the existing surrogate modelling strategies where a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute of an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational saving. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on surrogate data. This framework can be applicable to the calibration of any environmental model where appropriate surrogate data and mapping relationships can be identified.
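A small sketch of the surrogate-data idea under assumed details: most candidates are scored on a short, representative slice of the calibration period, and a simple fitted mapping translates that score into an estimate of the full-period score; the linear mapping, the toy model, and the slice choice are assumptions rather than the framework developed in the thesis.

```python
# Sketch of the surrogate-data idea: most candidates are scored on a short,
# representative slice of the calibration period, and a fitted mapping
# translates that score into an estimate of the full-period score.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(1000)
obs = np.sin(0.02 * t) + 0.1 * rng.normal(size=t.size)       # full calibration data
sur = slice(0, 200)                                          # surrogate data slice

def sse(params, idx):
    a, w = params
    sim = a * np.sin(w * t[idx])
    return ((sim - obs[idx]) ** 2).sum()

# Fit the mapping full-data SSE ~ f(surrogate SSE) from a handful of full runs.
probe = [(rng.uniform(0.5, 1.5), rng.uniform(0.01, 0.03)) for _ in range(15)]
xs = np.array([sse(p, sur) for p in probe])
ys = np.array([sse(p, slice(None)) for p in probe])          # expensive full runs
slope, intercept = np.polyfit(xs, ys, 1)                     # simple linear mapping

# The majority of candidates are evaluated on surrogate data only.
cands = [(rng.uniform(0.5, 1.5), rng.uniform(0.01, 0.03)) for _ in range(500)]
est_full = [slope * sse(c, sur) + intercept for c in cands]
best = cands[int(np.argmin(est_full))]
print("best candidate (estimated from surrogate data):", np.round(best, 3))
```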
As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed which presents a clear, computational-budget-dependent definition of the success/failure of surrogate modelling strategies. Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling strategies, which develop and utilize simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they might be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model.
Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and elaborate the discussions. However, the strategies developed are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models while providing some guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
|
107 |
Predictor development for controlling real-time applications over the Internet. Kommaraju, Mallik. 25 April 2007 (has links)
Over the past decade there has been a growing demand for interactive multimedia applications deployed over public IP networks. To achieve acceptable Quality of Service (QoS) without significantly modifying the existing infrastructure, the end-to-end applications need to optimize their behavior and adapt according to network characteristics. Most existing application optimization techniques are based on reactive strategies, i.e. reacting to occurrences of congestion. We propose the use of predictive control to address the problem in an anticipatory manner. This research deals with developing models to predict end-to-end single flow characteristics of Wide Area Networks (WANs).

A novel signal, in the form of single flow packet accumulation, is proposed for feedback purposes. This thesis presents a variety of effective predictors for the above signal using Auto-Regressive (AR) models, Radial Basis Functions (RBF) and Sparse Basis Functions (SBF). The study consists of three sections. We first develop time-series models to predict the accumulation signal. Since encoder bit-rate is the most logical and generic control input, a statistical analysis is conducted to analyze the effect of input bit-rate on end-to-end delay and the accumulation signal. Finally, models are developed using this bit-rate as an input to predict the resulting accumulation signal. The predictors are evaluated based on Noise-to-Signal Ratio (NSR) along with their accuracy with increasing accumulation levels. In the time-series models, RBF gave the best NSR, closely followed by AR models. Analysis based on accuracy with increasing accumulation levels showed AR to be better in some cases. The study on the effect of bit-rate revealed that bit-rate may not be a good control input on all paths. Models such as Auto-Regressive with Exogenous input (ARX) and RBF were used to predict the accumulation signal with bit-rate as a modeling input. ARX and RBF models were found to give comparable accuracy, with RBF being slightly better.
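A small sketch of the time-series part only, under assumed details: an AR(p) one-step-ahead predictor is fitted to an accumulation-like signal by least squares and scored with a noise-to-signal ratio; the synthetic signal, the order p, and the NSR definition (prediction-error variance over signal variance) are illustrative assumptions.

```python
# Sketch of the time-series part: fit an AR(p) one-step-ahead predictor to an
# accumulation-like signal by least squares and report a noise-to-signal ratio.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
acc = np.zeros(n)                                   # stand-in accumulation signal
for k in range(1, n):
    acc[k] = 0.95 * acc[k - 1] + 0.3 * np.sin(0.01 * k) + 0.1 * rng.normal()

p = 4                                               # AR order (assumed)
Y = acc[p:]
X = np.column_stack([acc[p - i - 1:n - i - 1] for i in range(p)])  # lagged values

split = 1500
coef, *_ = np.linalg.lstsq(X[:split], Y[:split], rcond=None)       # fit AR(p)

pred = X[split:] @ coef                             # one-step-ahead predictions
err = Y[split:] - pred
nsr = err.var() / Y[split:].var()                   # noise-to-signal ratio
print(f"AR({p}) noise-to-signal ratio on held-out data: {nsr:.4f}")
```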
|
108 |
Komplexität und Stabilität von kernbasierten Rekonstruktionsmethoden / Complexity and Stability of Kernel-based Reconstructions. Müller, Stefan. 21 January 2009 (has links)
No description available.
|
109 |
Tiesioginio sklidimo neuroninių tinklų sistemų lyginamoji analizė / Feedforward Neural Network Systems: A Comparative Analysis. Ignatavičienė, Ieva. 01 August 2012 (has links)
The main aim of this work is to perform a comparative analysis of several feedforward neural network systems in order to evaluate their functionality.
The work reviews biological and artificial neuron models, the classification of neural networks by connection structure (feedforward and recurrent neural networks), and training strategies for artificial neural networks (supervised, unsupervised, and hybrid learning). The main feedforward neural network methods are analyzed: the single-layer perceptron, the multilayer perceptron implemented with the error backpropagation algorithm, and the radial basis function neural network.
Fourteen different feedforward neural network systems were examined. The programs were classified by price, by the feedforward network training methods they support, by the user's ability to change parameters before training the network, and by a technical assessment of the program. The programs were rated on a ten-point scale according to the variety of training methods, the parameter-setting options, program stability, quality, and the price-to-quality ratio. "Matlab" received the highest score (10 points) and "Sharky NN" the lowest (2 points).
Four programs ("Matlab", "DTREG", "PathFinder", "Cortex") were selected for a more detailed analysis. These were the programs rated highest; they could train a feedforward neural network with the multilayer perceptron method, as well as at least two radial basis function networks. "Matlab" and... [to full text]
|