191

Bayesian estimation of discrete signals with local dependencies. / Estimation bayésienne de signaux discrets à dépendances locales

Majidi, Mohammad Hassan, 24 June 2014
The aim of this thesis is to study the problem of data detection in wireless communication systems, for both perfect and imperfect channel state information at the receiver. As is well known, the complexity of MLSE is exponential in the channel memory and the symbol alphabet cardinality, which quickly becomes unmanageable and forces a resort to sub-optimal approaches. We therefore first propose a new iterative equalizer for the case where the channel is unknown at the transmitter and perfectly known at the receiver. This receiver is based on a continuation approach: it approximates the original optimization cost function by a sequence of more tractable functions, thus reducing the receiver's computational complexity. Second, for data detection over a linear dynamic channel that is unknown at the receiver, the receiver must perform joint equalization and channel estimation. We formulate a combined state-space representation of the communication system, under which the Kalman filter is the best estimator of the channel parameters. The aim of this part is to motivate rigorously the introduction of the Kalman filter in the estimation of Markov sequences through Gaussian dynamical channels; in doing so, we interpret and make explicit the approximations underlying the heuristic approaches. Finally, for a more general, nonlinear dynamic channel, the Kalman filter is no longer the best estimator. Here we use a switching state-space model (SSSM), a nonlinear state-space model that combines a hidden Markov model (HMM) with a linear state-space model (LSSM). For channel estimation and data detection, the expectation-maximization (EM) procedure is the natural approach; in this way the extended Kalman filter (EKF) and particle filters are avoided.
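The joint equalization/estimation idea above rests on running a Kalman filter over a state-space model of the channel. A minimal scalar sketch of that filtering step, with an illustrative AR(1) channel-tap model and parameter values of my own choosing rather than the thesis's actual system:

```python
import numpy as np

def kalman_filter(y, a, c, q, r, m0, p0):
    """Scalar Kalman filter for h_t = a*h_{t-1} + w_t, y_t = c*h_t + v_t.

    Returns filtered means and variances of the hidden state h_t
    (here standing in for a single time-varying channel tap)."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        # Predict step: propagate the state estimate through the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update step: correct the prediction with the new observation.
        s = c * c * p_pred + r          # innovation variance
        k = p_pred * c / s              # Kalman gain
        m = m_pred + k * (obs - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Simulate a slowly varying channel tap and noisy observations of it.
rng = np.random.default_rng(0)
T, a, c, q, r = 200, 0.99, 1.0, 0.01, 0.5
h = np.zeros(T)
for t in range(1, T):
    h[t] = a * h[t - 1] + rng.normal(scale=np.sqrt(q))
y = c * h + rng.normal(scale=np.sqrt(r), size=T)

m, p = kalman_filter(y, a, c, q, r, m0=0.0, p0=1.0)
# Filtering should track the tap better than the raw observations do.
print(np.mean((m - h) ** 2), np.mean((y - h) ** 2))
```

The same predict/update recursion extends to vector-valued channels by replacing the scalars with matrices.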
192

含遺失值之列聯表最大概似估計量及模式的探討 / Maximum Likelihood Estimation in Contingency Tables with Missing Data

黃珮菁 (Huang, Pei-Ching), date unknown
Traditionally, observations missing some of the variables, which therefore cannot be cross-classified into a contingency table, are simply excluded from the analysis. It is generally agreed, however, that this practice hurts both the accuracy and the precision of the results, especially when such observations make up a large share of the data: discarding them inflates the variance of the estimates and can even change the final decision. The purpose of this study is to bring together some of the sound alternatives available in the literature and provide a comprehensive review. Four methods for handling data missing at random are discussed: the single-sample method, the multiple-sample method, factorization of the likelihood, and the EM algorithm. In addition, one way of handling data missing not at random is also reviewed.
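The EM algorithm mentioned above takes a particularly clean form for a 2x2 table with partially classified counts: the E-step allocates each partially classified count across cells in proportion to the current conditional probabilities, and the M-step re-estimates the cell probabilities from the completed table. A sketch under the missing-at-random assumption, with counts invented for illustration:

```python
import numpy as np

def em_contingency(full, row_only, col_only, n_iter=200):
    """EM for a 2x2 table where some observations are classified on only
    one margin.

    full:     2x2 array of completely classified counts
    row_only: length-2 counts classified by row only (column missing)
    col_only: length-2 counts classified by column only (row missing)
    Assumes the data are missing at random."""
    p = np.full((2, 2), 0.25)  # initial cell probabilities
    for _ in range(n_iter):
        # E-step: allocate partially classified counts to cells in
        # proportion to the current conditional probabilities.
        expected = full.astype(float).copy()
        expected += row_only[:, None] * p / p.sum(axis=1, keepdims=True)
        expected += col_only[None, :] * p / p.sum(axis=0, keepdims=True)
        # M-step: maximum likelihood estimate from the completed table.
        p = expected / expected.sum()
    return p

full = np.array([[30, 10], [5, 25]])
row_only = np.array([8, 6])      # row known, column missing
col_only = np.array([4, 9])      # column known, row missing
p_hat = em_contingency(full, row_only, col_only)
print(np.round(p_hat, 3))
```

Note how the partially classified counts pull the estimates away from what the fully classified table alone would give, which is exactly the extra information that discarding incomplete observations throws away.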
193

遺漏值存在時羅吉斯迴歸模式分析之研究 / Logistic Regression Analysis with Missing Value

劉昌明 (Liu, Chang Ming), date unknown
194

狀態轉換跳躍相關模型下選擇權定價:股價指數選擇權之實證 / Option pricing under regime-switching jump model with dependent jump sizes: evidence from stock index option

李家慶 (Lee, Jia-Ching), date unknown
Black and Scholes (1973) proposed the B-S model for asset returns, but it cannot capture several well-documented return properties: asymmetric leptokurtosis, volatility smile, volatility clustering, and long memory. Merton (1976) argued that abnormal information arrivals cause discontinuous jumps in the stock price and developed the jump-diffusion model (JDM), which adds a jump-risk term to the B-S model and can describe leptokurtosis and the volatility smile simultaneously. Charles, Fuh and Lin (2011) extended the JDM to a regime-switching jump independent model (RSJIM), in which the jump rate depends on the market state; the RSJIM retains the JDM's properties while also describing volatility clustering and long memory. In this thesis we extend the RSJIM to a regime-switching jump dependent model (RSJDM), in which both the jump size and the jump rate depend on the market state. Using 1999-2010 data on the Dow Jones Industrial Average and the S&P 500, we estimate the parameters and their covariance matrix with the EM and SEM algorithms. A likelihood-ratio test shows that the RSJDM fits index returns better than the RSJIM, and we verify that the RSJDM captures all four return properties listed above. Finally, we derive an index option pricing formula via the Esscher transform, perform a sensitivity analysis of the pricing with respect to the model parameters, and show through market validation that the RSJDM yields the smallest pricing errors.
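The likelihood that the EM algorithm climbs in such regime-switching models is computed with the forward algorithm. A stripped-down two-state Gaussian sketch, with no jump component and parameter values invented for illustration, showing that a switching model out-scores a single Gaussian on volatility-clustered returns:

```python
import numpy as np
from scipy.stats import norm

def rs_loglik(returns, trans, mus, sigmas):
    """Log-likelihood of a two-state Gaussian regime-switching model via
    the forward algorithm (rescaled each step to avoid underflow).

    trans[i, j] = P(S_t = j | S_{t-1} = i); mus/sigmas hold the per-state
    return mean and volatility. A toy stand-in for the richer
    regime-switching jump models estimated in the thesis."""
    p01, p10 = trans[0, 1], trans[1, 0]
    # Stationary distribution of the two-state chain as the initial law.
    alpha = np.array([p10, p01]) / (p01 + p10)
    ll = 0.0
    for r in returns:
        dens = norm.pdf(r, loc=mus, scale=sigmas)
        alpha = (alpha @ trans) * dens   # predict states, weight by emission
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c
    return ll

# Simulate returns with a calm state 0 and a turbulent state 1.
rng = np.random.default_rng(1)
trans = np.array([[0.97, 0.03], [0.05, 0.95]])
mus = np.array([0.05, -0.02])
sigmas = np.array([0.7, 2.5])
T, s = 500, 0
rets = np.empty(T)
for t in range(T):
    rets[t] = rng.normal(mus[s], sigmas[s])
    s = rng.choice(2, p=trans[s])

ll_switch = rs_loglik(rets, trans, mus, sigmas)
# One-state Gaussian benchmark fitted by maximum likelihood.
ll_single = norm.logpdf(rets, rets.mean(), rets.std()).sum()
print(ll_switch, ll_single)
```

The gap between the two log-likelihoods is what the likelihood-ratio tests in these theses formalize when comparing nested model variants.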
195

Modélisation des bi-grappes et sélection des variables pour des données de grande dimension : application aux données d’expression génétique / Biclustering and variable selection for high-dimensional data: application to gene expression data

Chekouo Tekougang, Thierry, 08 1900
Clustering is a classical method for analysing gene expression matrices. When clustering is applied to the rows (genes), each column (experimental condition) belongs to every resulting cluster. It is often observed, however, that subsets of genes are co-regulated (i.e., similarly expressed) only under a subset of conditions. Biclustering techniques have therefore been proposed to reveal such sub-matrices of genes and conditions; a biclustering is a simultaneous clustering of the rows and columns of a data matrix. Most biclustering algorithms proposed in the literature have no statistical foundation, yet it is worthwhile to examine their underlying models and to develop statistical models that yield significant biclusters. In this thesis we review the algorithms that appear to be the most popular, grouping them by the type of homogeneity within a bicluster and the type of overlap that may occur, and we highlight statistical models that can justify them; some techniques turn out to be justifiable in a Bayesian framework. We develop a Bayesian extension of the plaid biclustering model and propose a measure of biclustering complexity, using the deviance information criterion (DIC) to choose the number of biclusters. Studies on gene expression data and simulated data give satisfactory results. To our knowledge, existing biclustering algorithms treat genes and experimental conditions as independent entities and do not incorporate prior biological information about them. We introduce a new Bayesian plaid model for gene expression data that integrates biological knowledge and accounts for pairwise interactions between genes and between conditions through a Gibbs field. Dependence between these entities is encoded by two relational graphs, one for genes and one for conditions, each built by k-nearest neighbours, which lets us define the prior distribution of the labels as auto-logistic models; gene similarities are computed using the Gene Ontology (GO). Estimation uses a hybrid procedure that mixes MCMC with a variant of the Wang-Landau algorithm, and experiments on simulated and real data show the performance of our approach. Finally, microarray data may contain many noise variables, i.e., variables unable to discriminate between groups, which can mask the true clustering structure. Inspired by the plaid model, we propose a model that simultaneously recovers the true clustering structure and identifies the discriminating variables; it assumes an additive superposition of clusters, so that an observation can be explained by more than one cluster. This problem is handled with a binary latent vector, and estimation is obtained via the Monte Carlo EM algorithm, with importance sampling used to reduce the computational cost of the Monte Carlo sampling at each EM step. Numerical examples demonstrate the usefulness of these methods for variable selection and clustering. The simulations were implemented in Java.
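The plaid model's additive superposition can be made concrete in a few lines: the mean of each cell is a background value plus the sum of the effects of every bicluster layer the cell belongs to. A toy sketch, with layer effects and memberships invented for illustration (the full model also carries row/column effects and noise):

```python
import numpy as np

def plaid_reconstruction(mu0, layers):
    """Additive plaid-model mean: background mu0 plus a sum of layers.

    Each layer is (mu_k, rho_k, kappa_k), where rho_k and kappa_k are
    binary row and column membership vectors. Overlapping layers add up,
    which is the 'additive superposition' of biclusters."""
    rows, cols = len(layers[0][1]), len(layers[0][2])
    m = np.full((rows, cols), mu0, dtype=float)
    for mu_k, rho, kappa in layers:
        m += mu_k * np.outer(rho, kappa)
    return m

# Two overlapping biclusters in a 6x6 expression matrix.
rho1 = np.array([1, 1, 1, 0, 0, 0]); kap1 = np.array([1, 1, 1, 1, 0, 0])
rho2 = np.array([0, 0, 1, 1, 1, 0]); kap2 = np.array([0, 0, 1, 1, 1, 1])
mean = plaid_reconstruction(0.0, [(2.0, rho1, kap1), (3.0, rho2, kap2)])

# Cell (2, 2) belongs to both layers, so the effects add: 2 + 3 = 5.
print(mean[2, 2])  # → 5.0
```

Fitting then amounts to inferring the membership vectors and layer effects from a noisy matrix, which is where the Bayesian machinery (Gibbs field priors, MCMC, Wang-Landau) described above comes in.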
196

跳躍相關風險下狀態轉換模型之選擇權定價:股價指數選擇權實證分析 / Option pricing of a stock index under regime switching model with dependent jump size risks: empirical analysis of the stock index option

林琮偉 (Lin, Tsung Wei), date unknown
Using the Esscher transform, we derive option pricing formulas for a regime-switching model, a regime-switching model with independent jump risks, and a regime-switching model with dependent jump risks. Using Dow Jones Industrial Average data from 1999 to 2011, we estimate the model parameters by the EM algorithm, and a likelihood-ratio test shows that the regime-switching model with dependent jumps describes the return data best. A sensitivity analysis then shows that the probability of the high-volatility state, the overall volatility of returns, and the jump frequency are all positively related to the call option value. Finally, market validation shows that the regime-switching model with dependent jumps has the smallest pricing errors at and out of the money, and only a slightly higher error than the independent-jump model in the remaining moneyness category.
197

Analyse statistique de données fonctionnelles à structures complexes / Statistical analysis of functional data with complex structures

Adjogou, Adjobo Folly Dzigbodi, 05 1900
No description available.
198

Família composta Poisson-Truncada: propriedades e aplicações / The compound Poisson-truncated family: properties and applications

ARAÚJO, Raphaela Lima Belchior de, 31 July 2015
This work analyses properties of the compound-N family of probability distributions and proposes the compound Poisson-truncated sub-family as a means of composing probability distributions. Its properties are studied and a new distribution is investigated: the compound Poisson-truncated normal distribution. This three-parameter distribution is flexible enough to model multimodal data. We show that its density is an infinite mixture of normal densities whose weights are given by the truncated-Poisson probability mass function. Among the properties explored are the characteristic function and expressions for computing the moments. Three estimation methods for the parameters of the compound Poisson-truncated normal distribution are analysed: the method of moments, the empirical characteristic function (ECF), and maximum likelihood (ML) via the EM algorithm. Simulations comparing these three methods are carried out and, finally, numerical results modelling real data illustrate the potential of the proposed distribution.
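The infinite-mixture representation of the density lends itself directly to numerical evaluation: if X = Y_1 + ... + Y_N with N zero-truncated Poisson and Y_i iid normal, the density is a truncated-Poisson-weighted sum of normal densities. A sketch with invented parameter values, the series truncated at n_max terms, and a Monte Carlo sanity check:

```python
import math
import numpy as np
from scipy.stats import norm

def ctpn_pdf(x, lam, mu, sigma, n_max=60):
    """Density of X = Y_1 + ... + Y_N, N ~ zero-truncated Poisson(lam),
    Y_i iid N(mu, sigma^2): an infinite mixture of N(n*mu, n*sigma^2)
    densities weighted by the truncated-Poisson pmf, cut at n_max terms."""
    total = np.zeros_like(np.asarray(x, dtype=float))
    norm_const = 1.0 - math.exp(-lam)       # P(N > 0)
    for n in range(1, n_max + 1):
        w = math.exp(-lam) * lam ** n / (math.factorial(n) * norm_const)
        total += w * norm.pdf(x, loc=n * mu, scale=sigma * math.sqrt(n))
    return total

lam, mu, sigma = 2.0, 1.0, 0.5

# Monte Carlo check: simulate draws and compare a histogram estimate
# of the density near x = 2 with the analytic mixture value.
rng = np.random.default_rng(2)
n = rng.poisson(lam, size=200_000)
n = n[n > 0][:50_000]                       # zero-truncation by rejection
draws = rng.normal(mu * n, sigma * np.sqrt(n))
hist_density = np.mean(np.abs(draws - 2.0) < 0.25) / 0.5
print(ctpn_pdf(2.0, lam, mu, sigma), hist_density)
```

Multimodality appears when mu is large relative to sigma, since the component means n*mu then separate cleanly, which is the flexibility the abstract refers to.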
199

Essays on multivariate generalized Birnbaum-Saunders methods

MARCHANT FUENTES, Carolina Ivonne, 31 October 2016
In recent decades, univariate Birnbaum-Saunders models have received considerable attention in the literature. They have been widely studied and applied to fatigue, as well as to other areas of knowledge where it is often necessary to model several variables simultaneously; if those variables are correlated, analysing each one separately can lead to erroneous results. Multivariate regression models are a useful multivariate-analysis tool that accounts for the correlation between variables, and diagnostic analysis is an important aspect of statistical modelling. Multivariate quality control charts, in turn, are powerful yet simple visual tools for determining whether a multivariate process is in or out of control, showing how several variables jointly affect a process. First, we propose, derive, and characterize multivariate generalized logarithmic Birnbaum-Saunders distributions, and we propose new multivariate generalized Birnbaum-Saunders regression models. We estimate their parameters by maximum likelihood through the expectation-maximization algorithm, carry out a Monte Carlo simulation study to evaluate the performance of the estimators, and validate the proposed models with a regression analysis of real-world multivariate fatigue data. Second, we conduct a diagnostic analysis for multivariate generalized Birnbaum-Saunders regression models. We use the Mahalanobis distance as a global influence measure to detect multivariate outliers and to evaluate the adequacy of the distributional assumption, and we apply the local influence method to study how a perturbation may affect the estimation of the model parameters; the results are implemented in the R software and illustrated with real-world multivariate biomaterials data. Third and finally, we develop a robust methodology based on multivariate quality control charts for generalized Birnbaum-Saunders distributions with the Hotelling statistic, using the parametric bootstrap method to obtain the distribution of this statistic. A Monte Carlo simulation study shows that the proposed methodology provides early alerts of out-of-control conditions, and an illustration with real-world air-quality data from Santiago, Chile shows that it can be useful for signalling episodes of extreme air pollution.
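The parametric-bootstrap control limit for the Hotelling statistic can be sketched as follows, with a multivariate normal standing in for the generalized Birnbaum-Saunders model fitted in the thesis (all parameter values invented):

```python
import numpy as np

def hotelling_t2(x, mean, cov_inv):
    """Hotelling statistic T^2 = (x - mean)' Sigma^{-1} (x - mean)."""
    d = x - mean
    return d @ cov_inv @ d

def bootstrap_ucl(mean, cov, n_boot=20_000, alpha=0.0027, seed=3):
    """Parametric-bootstrap upper control limit: simulate in-control
    observations from the fitted model and take the (1 - alpha) quantile
    of their T^2 values."""
    rng = np.random.default_rng(seed)
    cov_inv = np.linalg.inv(cov)
    sims = rng.multivariate_normal(mean, cov, size=n_boot)
    d = sims - mean
    t2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # quadratic form per row
    return np.quantile(t2, 1.0 - alpha)

mean = np.zeros(2)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
cov_inv = np.linalg.inv(cov)
ucl = bootstrap_ucl(mean, cov)

# A point far from the in-control mean should plot above the limit ...
print(hotelling_t2(np.array([3.0, -3.0]), mean, cov_inv) > ucl)
# ... while in-control points should exceed it at roughly the alpha rate.
rng = np.random.default_rng(4)
new = rng.multivariate_normal(mean, cov, size=50_000)
d = new - mean
rate = np.mean(np.einsum('ij,jk,ik->i', d, cov_inv, d) > ucl)
print(rate)
```

For a non-normal in-control model such as the generalized Birnbaum-Saunders, the bootstrap step is what replaces the chi-square/F limits of the classical chart, since the exact distribution of T^2 is no longer available in closed form.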
200

Imputação de dados faltantes via algoritmo EM e rede neural MLP com o método de estimativa de máxima verossimilhança para aumentar a acurácia das estimativas / Imputation of missing data via the EM algorithm and an MLP neural network with maximum-likelihood estimation to increase the accuracy of the estimates

Ribeiro, Elisalvo Alves, 14 August 2015
Databases with missing values are common in the real world, the gaps arising for many reasons (failure of the equipment that transmits and stores the data, operator error, non-response by the information provider, etc.). Missing values can make the data inconsistent and unfit for analysis, leading to heavily biased conclusions. This dissertation explores the use of multilayer perceptron artificial neural networks (MLP ANNs) with new activation functions for imputation, under both single-imputation and multiple-imputation approaches. First, we propose using maximum likelihood estimation (MLE) in the activation function of every neuron of the network, in contrast with current practice, which either does not use MLE at all or uses it only in the cost function (at the network output). We then compare the results of these approaches with the expectation-maximization (EM) algorithm, the state of the art for handling missing data. The results, evaluated with metrics such as MAE (mean absolute error) and RMSE (root mean square error), indicate that the MLP ANN with maximum likelihood estimation, whether applied in all neurons or only in the output function, yields imputations with lower error in most experiments, for both single and multiple imputation.
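The MAE/RMSE evaluation protocol used above, masking known values and scoring the imputations against them, can be sketched with a simple linear-regression imputer standing in for the MLP and EM imputers (data and parameters invented for illustration):

```python
import numpy as np

def impute_and_score(x1, x2, mask):
    """Impute the masked entries of x2 two ways and score each against
    the held-out truth with MAE and RMSE."""
    obs = ~mask
    # (a) unconditional mean imputation as a baseline
    mean_imp = np.full(mask.sum(), x2[obs].mean())
    # (b) regression imputation: predict x2 from x1 on the observed rows,
    # a crude linear stand-in for the learned imputers compared above
    slope, intercept = np.polyfit(x1[obs], x2[obs], 1)
    reg_imp = slope * x1[mask] + intercept
    truth = x2[mask]
    scores = {}
    for name, imp in (("mean", mean_imp), ("regression", reg_imp)):
        mae = np.mean(np.abs(imp - truth))
        rmse = np.sqrt(np.mean((imp - truth) ** 2))
        scores[name] = (mae, rmse)
    return scores

rng = np.random.default_rng(5)
n = 2_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.4, size=n)  # correlated covariate pair
mask = rng.random(n) < 0.2                     # 20% missing at random
scores = impute_and_score(x1, x2, mask)
print(scores)
```

Any imputer that exploits the correlation between variables should beat the unconditional mean on both metrics, which is the kind of gap the dissertation's experiments quantify for the MLP and EM approaches.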
