51

Integrated Analysis of Temporal and Morphological Features Using Machine Learning Techniques for Real-Time Diagnosis of Arrhythmia and Irregular Beats

Gawde, Purva R. 06 December 2018 (has links)
No description available.
52

Interference Analysis and Mitigation in a Cellular Network with Femtocells

Dalal, Avani 26 September 2011 (has links)
No description available.
53

RBFNN-based Minimum Entropy Filtering for a Class of Stochastic Nonlinear Systems

Yin, X., Zhang, Qichun, Wang, H., Ding, Z. 03 October 2019 (has links)
This paper presents a novel minimum entropy filter design for a class of stochastic nonlinear systems subject to non-Gaussian noise. Motivated by stochastic distribution control, an output entropy model is developed using an RBF neural network whose parameters can be identified from the collected data. Based on this model, the filtering problem is investigated while the system dynamics are represented. Since the model output is the entropy of the estimation error, the optimal nonlinear filter is obtained by a Lyapunov-based design that minimizes the model output. Moreover, the entropy assignment problem is discussed as an extension of the presented approach. A numerical example is given to verify the design procedure and illustrate the effectiveness of the algorithm. The contributions of this paper are: 1) an output entropy model built with a neural network; 2) a nonlinear filter design algorithm as the main result; and 3) a solution to the entropy assignment problem as an extension of the presented framework.
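The minimum-error-entropy idea in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it replaces the paper's RBF-network output entropy model and Lyapunov design with a Gaussian-kernel (RBF) density estimate of the estimation error, a closed-form Rényi quadratic entropy, and a plain grid search over a constant observer gain, all for an assumed scalar toy system with Laplace (non-Gaussian) noise.

```python
import numpy as np

def quadratic_renyi_entropy(errors, sigma=0.5):
    # Gaussian (RBF) kernel density estimate: the "information potential"
    # (integral of p^2) reduces to a double sum of Gaussians evaluated at
    # pairwise error differences; H2 = -log(integral of p^2).
    e = np.asarray(errors)
    diff = e[:, None] - e[None, :]
    ip = np.exp(-diff**2 / (4 * sigma**2)).mean() / np.sqrt(4 * np.pi * sigma**2)
    return -np.log(ip)

rng = np.random.default_rng(0)

def run_filter(K, n=2000):
    # Assumed toy plant: x_{k+1} = 0.9 x_k + w_k, y_k = x_k + v_k,
    # with Laplace process and measurement noise (non-Gaussian).
    x, xhat, errs = 0.0, 0.0, []
    for _ in range(n):
        x = 0.9 * x + rng.laplace(scale=0.1)
        y = x + rng.laplace(scale=0.2)
        xhat = 0.9 * xhat + K * (y - 0.9 * xhat)  # fixed-gain observer
        errs.append(x - xhat)
    return np.array(errs)

# Pick the gain whose error sample has minimum entropy (grid search
# standing in for the paper's Lyapunov-based update law).
gains = np.linspace(0.05, 0.95, 19)
best = min(gains, key=lambda K: quadratic_renyi_entropy(run_filter(K)))
print(f"entropy-minimizing gain ~ {best:.2f}")
```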
54

Classical and Bayesian approaches for GARMA family time series models with applications to continuous data

Cascone, Marcos Henrique 24 March 2011 (has links)
In this work the aim was to analyze, in both the classical and the Bayesian context, the GARMA model with three different continuous distributions: Gaussian, inverse Gaussian, and gamma. We analyzed the performance and goodness of fit of the three models, as well as the coverage of their percentile intervals. In the classical analysis we considered the maximum likelihood estimator, and through a simulation study we verified the consistency, bias, and mean squared error of the estimates. For the Bayesian approach we proposed a non-informative prior distribution for the parameters of the model, yielding a posterior distribution from which the Bayesian estimates of the parameters were obtained; such a study had not yet been found in the literature. We observed that Bayesian inference performed well in the analysis of the series, as confirmed in the last section of this work, which analyzes a real data set on the rate of tuberculosis cases in the metropolitan area of São Paulo. The results show that both the classical and the Bayesian approaches are good alternatives for describing the behavior of the real time series. / Financiadora de Estudos e Projetos
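As a rough illustration of the classical (maximum likelihood) side of this abstract, the sketch below fits a Gaussian GARMA(1,1)-type model with identity link by conditional maximum likelihood on simulated data. The model orders, parameter values, and initialization are assumptions for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def garma11_mu(params, y):
    # Conditional mean recursion of a GARMA(1,1) model with identity
    # link: mu_t = beta0 + phi*y_{t-1} + theta*(y_{t-1} - mu_{t-1}).
    beta0, phi, theta = params
    mu = np.empty_like(y)
    mu[0] = y[0]
    for t in range(1, len(y)):
        mu[t] = beta0 + phi * y[t-1] + theta * (y[t-1] - mu[t-1])
    return mu

def negloglik(params, y):
    beta0, phi, theta, log_sigma = params
    sigma = np.exp(log_sigma)  # enforce sigma > 0
    resid = (y - garma11_mu((beta0, phi, theta), y))[1:]
    return (0.5 * len(resid) * np.log(2 * np.pi * sigma**2)
            + 0.5 * np.sum(resid**2) / sigma**2)

# Simulate from the Gaussian GARMA(1,1) model, then refit it.
rng = np.random.default_rng(1)
beta0, phi, theta, sigma = 0.5, 0.6, -0.3, 1.0
y, mu_prev = np.zeros(500), 0.0
for t in range(1, 500):
    mu_t = beta0 + phi * y[t-1] + theta * (y[t-1] - mu_prev)
    y[t] = mu_t + sigma * rng.standard_normal()
    mu_prev = mu_t

fit = minimize(negloglik, x0=np.array([0.0, 0.1, 0.0, 0.0]), args=(y,))
print("beta0, phi, theta, sigma =",
      np.round([*fit.x[:3], np.exp(fit.x[3])], 3))
```

With the identity link and Gaussian errors the recursion reduces to an ARMA-type model; the GARMA framework generalizes this same recursion to other exponential-family distributions through a link function.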
55

On the calibration of Lévy option pricing models / Izak Jacobus Henning Visagie

Visagie, Izak Jacobus Henning January 2015 (has links)
In this thesis we consider the calibration of models based on Lévy processes to option prices observed in some market. This means that we choose the parameters of the option pricing models such that the prices calculated using the models correspond as closely as possible to these option prices. We demonstrate the ability of relatively simple Lévy option pricing models to nearly perfectly replicate option prices observed in financial markets. We specifically consider calibrating option pricing models to barrier option prices and we demonstrate that the option prices obtained under one model can be very accurately replicated using another. Various types of calibration are considered in the thesis. We calibrate a wide range of Lévy option pricing models to option price data. We consider exponential Lévy models under which the log-return process of the stock is assumed to follow a Lévy process. We also consider linear Lévy models; under these models the stock price itself follows a Lévy process. Further, we consider time changed models. Under these models time does not pass at a constant rate, but follows some non-decreasing Lévy process. We model the passage of time using the lognormal, Pareto and gamma processes. In the context of time changed models we consider linear as well as exponential models. The normal inverse Gaussian (NIG) model plays an important role in the thesis. The numerical problems associated with the NIG distribution are explored and we propose ways of circumventing these problems. Parameter estimation for this distribution is discussed in detail. Changes of measure play a central role in option pricing. We discuss two well-known changes of measure; the Esscher transform and the mean correcting martingale measure. We also propose a generalisation of the latter and we consider the use of the resulting measure in the calculation of arbitrage free option prices under exponential Lévy models. / PhD (Risk Analysis), North-West University, Potchefstroom Campus, 2015
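Since NIG parameter estimation features prominently in this abstract, here is a minimal sketch of maximum likelihood fitting for NIG-distributed log-returns using SciPy's parametrization (a, b, loc, scale). The synthetic data and parameter values are assumptions, and this is distributional fitting only, not the full risk-neutral calibration to option prices treated in the thesis.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(42)
# Synthetic heavy-tailed, skewed daily log-returns standing in for
# market data (parameter values are illustrative assumptions).
true_a, true_b, true_loc, true_scale = 2.0, -0.3, 0.001, 0.02
returns = norminvgauss.rvs(true_a, true_b, loc=true_loc,
                           scale=true_scale, size=5000, random_state=rng)

# Maximum likelihood fit of the four NIG parameters.
a_hat, b_hat, loc_hat, scale_hat = norminvgauss.fit(returns)
print(f"a={a_hat:.2f} (true {true_a}), b={b_hat:.2f} (true {true_b}), "
      f"loc={loc_hat:.4f}, scale={scale_hat:.4f}")
```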
57

Classification in high dimensional feature spaces / by H.O. van Dyk

Van Dyk, Hendrik Oostewald January 2009 (has links)
In this dissertation we developed theoretical models to analyse Gaussian and multinomial distributions. The analysis focuses on classification in high dimensional feature spaces and provides a basis for dealing with issues such as data sparsity and feature selection (for Gaussian and multinomial distributions, two frequently used models for high dimensional applications). A Naïve Bayesian philosophy is followed to deal with issues associated with the curse of dimensionality. The core treatment of Gaussian and multinomial models consists of finding analytical expressions for classification error performance. Exact analytical expressions were found for calculating error rates of binary class systems with Gaussian features of arbitrary dimensionality and using any type of quadratic decision boundary (except for degenerate paraboloidal boundaries). Similarly, computationally inexpensive (and approximate) analytical error rate expressions were derived for classifiers with multinomial models. Additional issues with regard to the curse of dimensionality that are specific to multinomial models (feature sparsity) were dealt with and tested on a text-based language identification problem for all eleven official languages of South Africa. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
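To make the "exact analytical error rate" idea concrete, the sketch below handles only the simplest special case: two Gaussian classes with equal priors and a shared covariance, where the optimal boundary is linear and the Bayes error has the closed form Φ(−Δ/2), with Δ the Mahalanobis distance between the class means. The dimensionality and means are assumptions; the dissertation's general quadratic-boundary expressions are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d = 20                                   # assumed feature dimensionality
mu0, mu1 = np.zeros(d), np.full(d, 0.3)  # assumed class means
Sigma = np.eye(d)                        # shared covariance -> linear boundary

# Closed-form Bayes error for equal priors: Phi(-Delta/2).
delta = np.sqrt((mu1 - mu0) @ np.linalg.solve(Sigma, mu1 - mu0))
print("analytic error:", norm.cdf(-delta / 2))

# Monte Carlo check with the optimal linear discriminant w'x + b.
n = 100_000
X0 = rng.multivariate_normal(mu0, Sigma, n)
X1 = rng.multivariate_normal(mu1, Sigma, n)
w = np.linalg.solve(Sigma, mu1 - mu0)
b = -0.5 * w @ (mu0 + mu1)
err = 0.5 * np.mean(X0 @ w + b > 0) + 0.5 * np.mean(X1 @ w + b < 0)
print("simulated error:", err)
```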
59

Application of the normal spectral distribution in a gas-solid fluidized bed

Parise, Maria Regina 14 September 2007 (has links)
Advisor: Osvaldir Pecora Taranto / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Química / Partial or complete bed defluidization is an undesired phenomenon in industrial applications involving fluidized bed operations. If changes in the hydrodynamics of the fluidized bed are detected early enough, defluidization can be prevented by increasing the gas velocity and/or, in some cases, changing the solids feed rate into the system. A technique that can quickly identify the region where the bed is tending toward defluidization is very important, because one can then act on the process, avoiding loss of efficiency or even the need to shut the process down. The objective of this work was to develop a methodology capable of identifying this region in a gas-solid fluidized bed from pressure fluctuation measurements analyzed using the Fourier transform together with an exponential Gaussian distribution. To verify the proposed methodology, experimental tests were carried out using microcrystalline cellulose and sand, with the fixed bed height and particle mean diameter varied.
Results showed that the method clearly identifies the region where the bed is tending toward defluidization, and that it has great potential in industrial applications for online control of gas-solid fluidized bed processes. The methodology can also be important for detecting regime changes at bed aspect ratios (H/D) greater than one. Additionally, experimental drying tests were carried out using microcrystalline cellulose particles to verify the possibility of identifying the critical drying point (the moisture content at the end of the constant-rate period) by three techniques: the methodology proposed in this work, the dominant frequency, and the standard deviation of the pressure fluctuations. For the solid material used, it was not possible to detect the critical drying point with any of the three. However, the proposed methodology can be used to identify the moment at which drying leaves the desired fluidization regime and tends toward defluidization. / Doctorate in Chemical Engineering (Process Engineering)
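As a hedged sketch of the spectral side of this methodology: the code below computes the power spectrum of a synthetic pressure-fluctuation signal with NumPy's FFT and summarizes it by its dominant frequency plus its spectral mean and spread, treating the normalized spectrum as a distribution over frequency. The sampling rate, signal model, and summary statistics are illustrative assumptions, not the thesis's exact exponential-Gaussian spectral fit.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 400.0                        # assumed sampling frequency, Hz
t = np.arange(0, 30, 1 / fs)
# Synthetic pressure signal: a ~4 Hz bubbling peak plus broadband noise;
# in a defluidizing bed this peak would collapse toward low frequencies.
p = np.sin(2 * np.pi * 4.0 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(p - p.mean()))**2   # remove DC, take power spectrum

# Treat the normalized spectrum as a distribution over frequency and
# track its mean and spread online as defluidization indicators.
w = psd / psd.sum()
f_mean = np.sum(freqs * w)
f_std = np.sqrt(np.sum((freqs - f_mean)**2 * w))
print(f"dominant ~ {freqs[psd.argmax()]:.2f} Hz, "
      f"spectral mean {f_mean:.2f} Hz, spread {f_std:.2f} Hz")
```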
60

On the modeling of asset returns and calibration of European option pricing models

Robbertse, Johannes Lodewickes 07 July 2008 (has links)
Prof. F. Lombard
