51 |
RBFNN-based Minimum Entropy Filtering for a Class of Stochastic Nonlinear Systems / Yin, X., Zhang, Qichun, Wang, H., Ding, Z. 03 October 2019 (has links)
This paper presents a novel minimum entropy filter design for a class of stochastic nonlinear systems subject to non-Gaussian noise. Motivated by stochastic distribution control, an output entropy model is developed using an RBF neural network, whose parameters can be identified from collected data. Based on this model, the filtering problem is investigated while the system dynamics are represented. Since the model output is the entropy of the estimation error, the optimal nonlinear filter is obtained by a Lyapunov design that minimises the model output. Moreover, the entropy assignment problem is discussed as an extension of the presented approach. To verify the design procedure, a numerical example is given which illustrates the effectiveness of the algorithm. The contributions of this paper can be summarised as follows: 1) an output entropy model is presented using a neural network; 2) a nonlinear filter design algorithm is developed as the main result; and 3) a solution of the entropy assignment problem is obtained as an extension of the presented framework.
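The abstract does not give the model equations, but the core idea of measuring the entropy of the estimation error can be sketched with a Gaussian-kernel (RBF) density estimate. This is an illustrative sketch only: the function name, bandwidth and leave-one-out Parzen construction below are our assumptions, not the paper's RBFNN model.

```python
import numpy as np

def rbf_entropy_estimate(errors, bandwidth=0.3):
    """Estimate the differential entropy of a sample of estimation
    errors via a Gaussian (RBF) kernel density estimate:
    H ~ -(1/n) * sum_i log p_hat(e_i)  (leave-one-out Parzen window)."""
    e = np.asarray(errors, dtype=float)
    n = e.size
    # pairwise squared distances between error samples
    d2 = (e[:, None] - e[None, :]) ** 2
    k = np.exp(-d2 / (2.0 * bandwidth ** 2)) / (bandwidth * np.sqrt(2 * np.pi))
    # leave-one-out density at each sample point
    np.fill_diagonal(k, 0.0)
    p = k.sum(axis=1) / (n - 1)
    return -np.mean(np.log(p))

rng = np.random.default_rng(0)
h_narrow = rbf_entropy_estimate(rng.normal(0.0, 0.5, 2000))
h_wide = rbf_entropy_estimate(rng.normal(0.0, 2.0, 2000))
# a filter that shrinks the error spread also lowers the entropy estimate
assert h_narrow < h_wide
```

A minimum entropy filter would drive such an estimate down over time; the paper does this with a trained neural-network model rather than a raw kernel estimate.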
|
52 |
Abordagem clássica e bayesiana para os modelos de séries temporais da família GARMA com aplicações para dados contínuos [Classical and Bayesian approaches to GARMA-family time series models with applications to continuous data] / Cascone, Marcos Henrique 24 March 2011 (has links)
Financiadora de Estudos e Projetos / In this work, the aim was to analyse, in both the classical and the Bayesian context, the GARMA model with three different continuous distributions: Gaussian, inverse Gaussian and gamma. We analysed the performance and goodness of fit of the three models, as well as the coverage of their percentile intervals. In the classical analysis we considered the maximum likelihood estimator and, through a simulation study, verified the consistency, bias and mean squared error of the estimates. For the Bayesian approach we proposed a non-informative prior distribution for the parameters of the model, obtaining the posterior distribution from which the Bayesian estimates of the parameters are computed; this study had not yet appeared in the literature. We observed that Bayesian inference performed well in the analysis of the series, as confirmed in the last section of this work, which analyses a real data set: the rate of tuberculosis cases in the metropolitan area of São Paulo. The results show that both the classical and the Bayesian approach are good alternatives for describing the behaviour of the real time series.
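As a rough illustration of the class of models studied, here is a minimal sketch of a Gaussian GARMA(1,1) recursion with identity link. The exact parameterisation and link function used in the thesis may differ; the helper name and parameter values are our own.

```python
import numpy as np

def simulate_garma11(n, beta0, phi, theta, sigma, seed=0):
    """Simulate a Gaussian GARMA(1,1) series with identity link:
    mu_t = beta0 + phi*(y_{t-1} - beta0) + theta*(y_{t-1} - mu_{t-1}),
    y_t ~ Normal(mu_t, sigma^2)."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    mu = np.empty(n)
    mu[0] = beta0
    y[0] = rng.normal(mu[0], sigma)
    for t in range(1, n):
        mu[t] = beta0 + phi * (y[t - 1] - beta0) + theta * (y[t - 1] - mu[t - 1])
        y[t] = rng.normal(mu[t], sigma)
    return y, mu

y, mu = simulate_garma11(5000, beta0=10.0, phi=0.6, theta=0.2, sigma=1.0)
# with identity link this reduces to an ARMA(1,1) around beta0,
# so the series fluctuates around the long-run mean
assert abs(y.mean() - 10.0) < 0.5
```

Replacing the Gaussian draw with an inverse Gaussian or gamma draw (and the identity link with a log link to keep the mean positive) gives the other two family members considered in the work.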
|
53 |
On the calibration of Lévy option pricing models / Visagie, Izak Jacobus Henning January 2015 (has links)
In this thesis we consider the calibration of models based on Lévy processes to option prices observed in some market. This means that we choose the parameters of the option pricing models such that the prices calculated using the models correspond as closely as possible to these option prices. We demonstrate the ability of relatively simple Lévy option pricing models to nearly perfectly replicate option prices observed in financial markets. We specifically consider calibrating option pricing models to barrier option prices and we demonstrate that the option prices obtained under one model can be very accurately replicated using another. Various types of calibration are considered in the thesis.
We calibrate a wide range of Lévy option pricing models to option price data. We consider exponential Lévy models, under which the log-return process of the stock is assumed to follow a Lévy process. We also consider linear Lévy models; under these models the stock price itself follows a Lévy process. Further, we consider time-changed models. Under these models time does not pass at a constant rate, but follows some non-decreasing Lévy process. We model the passage of time using the lognormal, Pareto and gamma processes. In the context of time-changed models we consider linear as well as exponential models.
The normal inverse Gaussian (NIG) model plays an important role in the thesis. The numerical problems associated with the NIG distribution are explored and we propose ways of circumventing these problems. Parameter estimation for this distribution is discussed in detail.
Changes of measure play a central role in option pricing. We discuss two well-known changes of measure: the Esscher transform and the mean-correcting martingale measure. We also propose a generalisation of the latter and consider the use of the resulting measure in the calculation of arbitrage-free option prices under exponential Lévy models. / PhD (Risk Analysis), North-West University, Potchefstroom Campus, 2015
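The calibration idea (choose model parameters so that model prices match observed prices as closely as possible in a least-squares sense) can be sketched compactly. A full Lévy pricer is beyond a short example, so we use the Black-Scholes formula as a stand-in one-parameter pricer; that substitution, and the golden-section search, are assumptions of this sketch, not the thesis's method.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (a stand-in for a Levy pricer)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def calibrate(market, S, T, r, lo=0.01, hi=2.0, iters=200):
    """Least-squares calibration of a single model parameter (here sigma)
    to observed option prices, via golden-section search on the loss."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    loss = lambda s: sum((bs_call(S, K, T, r, s) - p) ** 2 for K, p in market)
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - gr * (b - a), a + gr * (b - a)
        if loss(c) < loss(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# synthetic "market": prices generated with sigma = 0.25
true_sigma = 0.25
market = [(K, bs_call(100.0, K, 1.0, 0.02, true_sigma)) for K in (80, 90, 100, 110, 120)]
sigma_hat = calibrate(market, S=100.0, T=1.0, r=0.02)
assert abs(sigma_hat - true_sigma) < 1e-3
```

For a multi-parameter Lévy model (e.g. the four-parameter NIG), the same loss function would be minimised with a multivariate optimiser instead of a one-dimensional search.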
|
55 |
Classification in high dimensional feature spaces / Van Dyk, Hendrik Oostewald January 2009 (has links)
In this dissertation we developed theoretical models to analyse Gaussian and multinomial distributions. The analysis is focused on classification in high dimensional feature spaces and provides a basis for dealing with issues such as data sparsity and feature selection (for Gaussian and multinomial distributions, two frequently used models for high dimensional applications). A Naïve Bayesian philosophy is followed to deal with issues associated with the curse of dimensionality. The core treatment of Gaussian and multinomial models consists of finding analytical expressions for classification error performance. Exact analytical expressions were found for calculating error rates of binary class systems with Gaussian features of arbitrary dimensionality and using any type of quadratic decision boundary (except for degenerate paraboloidal boundaries).
Similarly, computationally inexpensive (and approximate) analytical error rate expressions were derived for classifiers with multinomial models. Additional issues with regard to the curse of dimensionality that are specific to multinomial models (feature sparsity) were dealt with and tested on a text-based language identification problem for all eleven official languages of South Africa. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
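The dissertation derives exact error rates for arbitrary dimensionality and quadratic boundaries. The simplest special case, two equal-prior univariate Gaussian classes with equal variance, already shows the flavour of such analytical expressions and can be checked against simulation; the function names and parameter values below are ours.

```python
import math
import random

def analytic_error(mu0, mu1, sigma):
    """Bayes error rate for two equal-prior univariate Gaussian classes
    with equal variance: Phi(-|mu1 - mu0| / (2 * sigma))."""
    z = -abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def monte_carlo_error(mu0, mu1, sigma, n=200_000, seed=1):
    """Empirical error of the optimal midpoint-threshold classifier."""
    rng = random.Random(seed)
    thr = 0.5 * (mu0 + mu1)
    errs = 0
    for _ in range(n):
        if rng.random() < 0.5:                 # draw from class 0
            errs += rng.gauss(mu0, sigma) > thr
        else:                                  # draw from class 1
            errs += rng.gauss(mu1, sigma) <= thr
    return errs / n

exact = analytic_error(0.0, 2.0, 1.0)          # Phi(-1), about 0.1587
approx = monte_carlo_error(0.0, 2.0, 1.0)
assert abs(approx - exact) < 5e-3
```

The dissertation's expressions generalise this to multivariate Gaussians and general quadratic decision boundaries, where no such simple closed form exists and the derivation is substantially harder.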
|
57 |
Aplicação da distribuição espectral normal em leito fluidizado gás-sólido [Application of the Gaussian spectral distribution in a gas-solid fluidized bed] / Parise, Maria Regina 14 September 2007 (has links)
Orientador: Osvaldir Pecora Taranto / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Química / Abstract: Partial or complete bed defluidization is an undesirable phenomenon in industrial applications involving fluidized bed operations. If changes in the hydrodynamics of the fluidized bed are detected early enough, defluidization may be prevented by increasing the gas velocity and/or, in some cases, changing the solids feed rate to the system. A technique that can quickly identify the region where the bed is tending towards defluidization is very important, because the operator can then act on the process and avoid a loss of efficiency, or even the need to shut down production. The objective of this work was to develop a methodology capable of identifying this region in a gas-solid fluidized bed, based on pressure fluctuation measurements analysed using the Fourier transform together with an exponential Gaussian distribution. To verify the proposed methodology, experimental tests were carried out using microcrystalline cellulose and sand, varying the fixed bed height and the particle mean diameter. The results showed that the method clearly identifies the region where the bed is tending towards defluidization, and that it has great potential for on-line process control of gas-solid fluidized beds in industrial applications. The methodology can also be useful for detecting regime changes at bed aspect ratios (H/D) greater than unity. Additionally, drying tests were carried out using microcrystalline cellulose particles to investigate whether the critical drying point (the moisture content at the end of the constant-rate period) could be identified by the following techniques: the methodology proposed in the present work, the dominant frequency, and the standard deviation of the pressure fluctuations. For the solid material used it was not possible to detect the critical drying point with any of these three analyses. However, the proposed methodology can be used to identify the moment at which the dryer is no longer in the required fluidization regime and is tending towards defluidization. / Doutorado / Engenharia de Processos / Doutor em Engenharia Química
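The first step of such a spectral methodology, computing the power spectrum of a pressure-fluctuation signal and locating its dominant frequency, can be sketched as follows. The sampling rate and the synthetic signal are invented stand-ins, and the subsequent fit of an exponential Gaussian distribution to the spectrum is not shown.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the dominant frequency (Hz) plus the normalised power
    spectrum of a pressure-fluctuation signal, via the FFT. A building
    block for spectral defluidization indicators."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec = spec / spec.sum()
    return freqs[np.argmax(spec)], freqs, spec

fs = 400.0                                # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# synthetic pressure signal: a 4 Hz bubbling peak plus broadband noise
p = np.sin(2 * np.pi * 4.0 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
f_dom, freqs, spec = dominant_frequency(p, fs)
assert abs(f_dom - 4.0) < 0.2
```

Tracking how the dominant frequency and the spread of the spectrum drift over time is what would flag a bed tending towards defluidization.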
|
58 |
On the modeling of asset returns and calibration of European option pricing models / Robbertse, Johannes Lodewickes 07 July 2008 (has links)
Prof. F. Lombard
|
59 |
Conflict detection and resolution for autonomous vehicles / Van Daalen, Corne Edwin 03 1900 (has links)
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: Autonomous vehicles have recently received much attention from researchers. The prospect of safe and reliable autonomous vehicles for general, unregulated environments promises several advantages over human-controlled vehicles, including increased efficiency, reliability and capability, with the associated decrease in danger to humans and reduction in operating costs. A critical requirement for the safe operation of fully autonomous vehicles is their ability to avoid collisions with obstacles and other vehicles. In addition, they are often required to maintain a minimum separation from obstacles and other vehicles, which is called conflict avoidance. The research presented in this thesis focuses on methods for effective conflict avoidance.
Existing conflict avoidance methods either make limiting assumptions or cannot execute in real-time due to computational complexity. This thesis proposes methods for real-time conflict avoidance in uncertain, cluttered and dynamic environments. These methods fall into the category of non-cooperative conflict avoidance. They allow very general vehicle and environment models, with the only notable assumption being that the position and velocity states of the vehicle and obstacles have a jointly Gaussian probability distribution.
Conflict avoidance for fully autonomous vehicles consists of three functions, namely modelling and identification of the environment, conflict detection and conflict resolution. We present an architecture for such a system that ensures stable operation.
The first part of this thesis comprises the development of a novel and efficient probabilistic conflict detection method. This method processes the predicted vehicle and environment states to compute the probability of conflict for the prediction period. During the method derivation, we introduce the concept of the flow of probability through the boundary of the conflict region, which enables us to significantly reduce the complexity of the problem. The method also assumes Gaussian distributed states and defines a tight upper bound to the conflict probability, both of which further reduce the problem complexity, and then uses adaptive numerical integration for efficient evaluation. We present the results of two simulation examples which show that the proposed method can calculate in real-time the probability of conflict for complex and cluttered environments and complex vehicle maneuvers, offering a significant improvement over existing methods.
The second part of this thesis adapts existing kinodynamic motion planning algorithms for conflict resolution in uncertain, dynamic and cluttered environments. We use probabilistic roadmap methods and suggest three changes to them, namely using probabilistic conflict detection methods, sampling the state-time space instead of the state space, and batch generation of samples. In addition, we propose a robust and adaptive way to choose the size of the sampling space using a maximum least connection cost bound. We then put all these changes together in a proposed motion planner for conflict resolution. We present the results of two simulation examples which show that the proposed motion planner can only find a feasible path in real-time for simple and uncluttered environments. However, the manner in which we handle uncertainty
and the sampling space bounds offer significant contributions to the conflict resolution field.
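The conflict probability that the thesis computes efficiently via probability flow through the conflict-region boundary can also be estimated by brute-force Monte Carlo, which makes a useful baseline sketch under the same jointly-Gaussian assumption. The numbers and the circular conflict region below are invented for illustration.

```python
import numpy as np

def conflict_probability(mu, cov, d_min, n=100_000, seed=0):
    """Monte Carlo estimate of the probability that the vehicle-obstacle
    relative position (jointly Gaussian with mean mu and covariance cov)
    lies inside the conflict region ||x|| < d_min. A brute-force baseline
    for the efficient boundary-flow method developed in the thesis."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n)
    dist = np.linalg.norm(samples, axis=1)
    return float(np.mean(dist < d_min))

# relative position: 3 m mean separation, isotropic 1 m^2 uncertainty,
# 1 m minimum separation distance
p = conflict_probability(mu=[3.0, 0.0], cov=np.eye(2), d_min=1.0)
assert 0.0 < p < 0.05
```

The brute-force estimate needs many samples per prediction step, which is exactly the cost the boundary-flow formulation with its tight upper bound is designed to avoid.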
|
60 |
Effective and efficient estimation of distribution algorithms for permutation and scheduling problems / Ayodele, Mayowa January 2018 (links)
Estimation of Distribution Algorithms (EDAs) are a branch of evolutionary computation that learns a probabilistic model of good solutions. Probabilistic models are used to represent relationships between solution variables, which may give useful, human-understandable insights into real-world problems. Developing an effective probabilistic model has also been shown to significantly reduce the number of function evaluations needed to reach good solutions. This is useful for real-world problems because their representations are often complex, needing more computation to arrive at good solutions. In particular, many real-world problems are naturally represented as permutations and have expensive evaluation functions. EDAs can, however, be computationally expensive when models are too complex, so there has been much recent work on developing suitable EDAs for permutation representations. EDAs can now produce state-of-the-art performance on some permutation benchmark problems, but the models are still complex and computationally expensive, making them hard to apply to real-world problems. This study investigates some limitations of EDAs in solving permutation and scheduling problems. The focus of this thesis is on addressing redundancies in the Random Key representation, preserving diversity in EDAs, simplifying the complexity attributed to the use of multiple local improvement procedures, and transferring knowledge from solving a benchmark project scheduling problem to a similar real-world problem. In this thesis, we achieve state-of-the-art performance on Permutation Flowshop Scheduling Problem benchmarks while significantly reducing both the computational effort required to build the probabilistic model and the number of function evaluations. We also achieve competitive results on project scheduling benchmarks, and the methods adapted for solving a real-world project scheduling problem show significant improvements.
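A minimal sketch of a univariate EDA for permutations, using a position-by-item frequency model, illustrates the learn-and-sample loop. This is far simpler than the models developed in the thesis; the toy objective, parameter values and all names are our own.

```python
import numpy as np

def sample_perm(model, rng):
    """Sample a permutation position by position, renormalising the
    model row over the items not yet used."""
    n = model.shape[0]
    remaining = list(range(n))
    perm = []
    for pos in range(n):
        w = model[pos, remaining]
        item = int(rng.choice(remaining, p=w / w.sum()))
        perm.append(item)
        remaining.remove(item)
    return perm

def permutation_eda(n, fitness, pop=100, elite=20, iters=50, seed=0):
    """Univariate EDA for permutations: re-estimate a position-by-item
    frequency matrix from the elite solutions each generation."""
    rng = np.random.default_rng(seed)
    model = np.full((n, n), 1.0 / n)      # P(item j at position i)
    best, best_fit = None, -np.inf
    for _ in range(iters):
        population = [sample_perm(model, rng) for _ in range(pop)]
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) > best_fit:
            best, best_fit = population[0], fitness(population[0])
        # learn the model from the elite, with light smoothing
        counts = np.full((n, n), 0.1)
        for perm in population[:elite]:
            for pos, item in enumerate(perm):
                counts[pos, item] += 1.0
        model = counts / counts.sum(axis=1, keepdims=True)
    return best, best_fit

# toy objective: reward items placed at their own index (optimum: identity)
n = 8
fit = lambda p: sum(1 for i, v in enumerate(p) if i == v)
best, best_fit = permutation_eda(n, fit)
assert best_fit >= n - 2
```

This position-frequency model captures no dependencies between positions, which is exactly the limitation that motivates the richer (and costlier) permutation models the thesis sets out to make more efficient.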
|