11

Estratégias para tratamento de variáveis com dados faltantes durante o desenvolvimento de modelos preditivos / Strategies for treatment of variables with missing data during the development of predictive models

Fernando Assunção 09 May 2012
Predictive models are increasingly used by industry to support risk mitigation, portfolio growth, customer retention, and fraud prevention, among other goals. During model development, however, some of the predictive variables typically contain unfilled (missing) values, so a procedure for treating those variables must be adopted. Given this scenario, this study discusses methodologies for handling missing data in predictive models, encouraging the use of approaches already known in academia but not yet adopted by practitioners. Seven methodologies are described, and all were submitted to an empirical application on a Credit Score dataset: one model was developed for each methodology, and the results were evaluated and compared through performance measures widely used in industry (KS, Gini, ROC curve, and approval curve). In this application, the best-performing techniques were the one that treats missing data as a separate category (already used in industry) and the one that groups missing data into the conceptually most similar category. The worst performer was the methodology that simply drops variables with missing data, another procedure commonly seen in practice.
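As a concrete illustration of the two best-performing strategies, the sketch below applies them to a toy categorical predictor in Python. The column names and category labels are hypothetical, not taken from the thesis' Credit Score dataset, and the choice of the "conceptually most similar" category is the modeller's judgment call.

```python
import numpy as np
import pandas as pd

# Toy applicant data; the column and category names are hypothetical,
# not taken from the thesis' Credit Score dataset.
df = pd.DataFrame({
    "employment_type": ["salaried", np.nan, "self_employed", np.nan, "salaried"],
    "defaulted":       [0, 1, 0, 1, 0],
})

# Best performer in the study: treat missing values as a category of their own.
df["emp_missing_as_category"] = df["employment_type"].fillna("MISSING")

# Runner-up: merge missing values into the conceptually most similar category
# (which category that is remains a judgment call by the modeller).
df["emp_merged"] = df["employment_type"].fillna("self_employed")

print(pd.crosstab(df["emp_missing_as_category"], df["defaulted"]))
```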
12

Análise do número de grupos em bases de dados incompletas utilizando agrupamentos nebulosos e reamostragem Bootstrap / Analysis of the number of clusters in incomplete datasets using fuzzy clustering and bootstrap resampling

Selma Terezinha Milagre 18 July 2008
Data clustering is widely used in exploratory analysis, which is often required in research areas such as medicine, biology, and statistics to evaluate potential hypotheses for subsequent studies. In real datasets, incomplete records, in which the values of one or more attributes are unknown, are very common. This work presents a method for identifying the number of clusters present in incomplete datasets by combining fuzzy clustering with bootstrap resampling. Classification quality is assessed with traditional comparison measures such as F1, cross-classification, and the Hubert statistic. The studies were carried out on eight datasets: the first four are artificial, the fifth and sixth are the wine and iris datasets, and the seventh and eighth are built from a Brazilian collection of 119 Bradyrhizobium strains. To use all available information without introducing estimates, the Fuzzy C-Means (FCM) algorithm was modified with an attribute index vector, indicating whether each attribute value is observed, that changes the center and distance-to-center computations accordingly. Simulations ran from 2 to 8 clusters using 100 sub-samples, with missing value percentages of 2%, 5%, 10%, 20%, and 30%. The results show that the method identifies relevant partitions even in the presence of high rates of missing data, without requiring any assumptions about the dataset. The Hubert statistic and the adjusted Rand index produced the best experimental results.
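The modification described, restricting the center and distance computations to observed attributes via an index vector, closely resembles the partial-distance strategy for FCM. Below is a minimal sketch of that idea under stated assumptions (random initialization, Euclidean distances, a fixed iteration count); it is not the thesis' implementation.

```python
import numpy as np

def fcm_incomplete(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means for data with NaNs: an index matrix of observed
    attributes restricts the center and distance computations, in the
    spirit of the modification described in the abstract."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = ~np.isnan(X)                      # True where the attribute is observed
    Xf = np.nan_to_num(X)                   # zero-filled copy; masked out below
    U = rng.dirichlet(np.ones(c), size=n)   # (n, c) random initial memberships
    for _ in range(n_iter):
        W = (U ** m).T                                   # (c, n) fuzzy weights
        centers = (W @ Xf) / np.maximum(W @ obs, 1e-12)  # means over observed cells
        d2 = np.empty((n, c))
        for k in range(c):
            diff = (Xf - centers[k]) * obs               # ignore missing attributes
            # partial distance, rescaled by the fraction of observed attributes
            d2[:, k] = (diff ** 2).sum(axis=1) * p / np.maximum(obs.sum(axis=1), 1)
        inv = np.maximum(d2, 1e-12) ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)         # standard FCM update
    return centers, U

X = np.array([[1.0, 2.0], [1.1, np.nan], [8.0, 9.0], [np.nan, 9.2]])
centers, U = fcm_incomplete(X, c=2)
print(np.round(U, 2))
```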
13

Substituição de valores ausentes: uma abordagem baseada em um algoritmo evolutivo para agrupamento de dados / Missing value substitution: an approach based on an evolutionary algorithm for data clustering

Jonathan de Andrade Silva 29 April 2010
The substitution of missing values, also known as imputation, is an important data preparation task in data mining applications. This work proposes and evaluates an imputation algorithm based on an evolutionary algorithm for clustering, built on the assumption that (previously unknown) clusters in the data can provide useful information for the imputation process. To assess the proposed method experimentally, missing values were simulated on six classification datasets under the two missingness mechanisms most widely used in controlled experiments: MCAR and MAR. Imputation algorithms have traditionally been assessed by measures of prediction capability alone, but such measures do not estimate the influence of the imputed values on the ultimate modeling task (e.g., classification). This work therefore reports experimental results from both the prediction and the inserted-bias perspectives in classification problems. Across different scenarios, the proposed algorithm performs, in general, similarly to six other imputation algorithms reported in the literature. Finally, the statistical analyses suggest that better prediction results do not necessarily imply less classification bias.
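For readers unfamiliar with the two mechanisms, the following sketch shows one common way to simulate MCAR and MAR missingness for a controlled experiment; the rates and the logistic dependence are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))

# MCAR: each cell of column 0 goes missing with a fixed probability,
# independent of any data values.
X_mcar = X.copy()
X_mcar[rng.random(200) < 0.2, 0] = np.nan

# MAR: missingness in column 0 depends only on the *observed* column 1
# (here, higher values of column 1 make column 0 more likely to be missing).
X_mar = X.copy()
p_miss = 1 / (1 + np.exp(-X[:, 1]))      # logistic in an observed covariate
X_mar[rng.random(200) < 0.4 * p_miss, 0] = np.nan
```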
14

Evaluation verschiedener Imputationsverfahren zur Aufbereitung großer Datenbestände am Beispiel der SrV-Studie von 2013 / Evaluation of different imputation methods for the preparation of large datasets, using the 2013 SrV study as an example

Meister, Romy 09 March 2016
Missing values are a serious problem in surveys. The literature suggests replacing them with realistic values generated by imputation methods. This master's thesis examines four imputation techniques with respect to their ability to handle missing data: mean imputation, conditional mean imputation, the Expectation-Maximization (EM) algorithm, and the Markov chain Monte Carlo (MCMC) method. The first three were additionally evaluated in a simulation using a large real dataset. To analyse the quality of the techniques, a metric variable of the original dataset was chosen and missing values were generated in it at different rates and under common missing data mechanisms. After replacing the simulated missing values, several statistical parameters, such as quantiles, the arithmetic mean, and the variance, were computed for each completed dataset and compared with those of the original data. The empirical results show that the EM algorithm estimates all considered statistical parameters of the complete dataset far better than the other analysed imputation methods, even though the assumption of a multivariate normal distribution could not be met. Both mean and conditional mean imputation produce usable estimates of the arithmetic mean under the assumption of missing completely at random, whereas other parameters, such as the variance, are not recovered well. In general, the accuracy of all estimators from the three simulated methods decreases as the share of missing values increases. The results lead to the conclusion that the EM algorithm should be preferred over mean and conditional mean imputation.
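A minimal sketch of the two simplest techniques compared here, mean and conditional mean imputation, is shown below; the variable names are hypothetical stand-ins for SrV-style travel-survey data, and grouping by a single observed covariate stands in for a full regression-based conditional mean.

```python
import numpy as np
import pandas as pd

# Hypothetical travel-survey-like data; the names are illustrative only.
df = pd.DataFrame({
    "age":      [23, 35, 41, 29, 52, 38],
    "sex":      ["f", "m", "m", "f", "m", "f"],
    "distance": [4.2, np.nan, 12.1, np.nan, 8.0, 5.5],
})

# Unconditional mean imputation: one grand mean fills every gap
# (known to understate the variance of the completed variable).
df["distance_mean"] = df["distance"].fillna(df["distance"].mean())

# Conditional mean imputation: group means within strata of an observed
# covariate (here, sex) approximate a regression-based conditional mean.
df["distance_cond"] = df["distance"].fillna(
    df.groupby("sex")["distance"].transform("mean")
)
print(df)
```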
15

The use of weights to account for non-response and drop-out

Höfler, Michael, Pfister, Hildegard, Lieb, Roselind, Wittchen, Hans-Ulrich 19 February 2013
Background: Empirical studies in psychiatric research and other fields often show substantially high refusal and drop-out rates. Non-participation and drop-out may introduce a bias whose magnitude depends on how strongly its determinants are related to the respective parameter of interest. Methods: When most information is missing, the standard approach is to estimate each respondent’s probability of participating and assign each respondent a weight that is inversely proportional to this probability. This paper contains a review of the major ideas and principles regarding the computation of statistical weights and the analysis of weighted data. Results: A short software review for weighted data is provided and the use of statistical weights is illustrated through data from the EDSP (Early Developmental Stages of Psychopathology) Study. The results show that disregarding different sampling and response probabilities can have a major impact on estimated odds ratios. Conclusions: The benefit of using statistical weights in reducing sampling bias should be balanced against increased variances in the weighted parameter estimates.
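The core recipe, estimating each unit's participation probability and weighting respondents by its inverse, can be sketched as follows; the synthetic data, covariates, and logistic response model are illustrative assumptions, not the EDSP analysis.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
frame = pd.DataFrame({
    "age":   rng.integers(18, 80, n),
    "urban": rng.integers(0, 2, n),
})
# Simulate response depending on observed covariates, so that naive
# respondent-only estimates are biased.
p_true = 1 / (1 + np.exp(-(-1.0 + 0.02 * frame["age"] - 0.5 * frame["urban"])))
frame["responded"] = rng.random(n) < p_true

# Estimate each unit's participation probability, then weight respondents
# inversely proportional to that probability.
model = LogisticRegression().fit(frame[["age", "urban"]], frame["responded"])
p_hat = model.predict_proba(frame[["age", "urban"]])[:, 1]
frame["weight"] = np.where(frame["responded"], 1.0 / p_hat, 0.0)

resp = frame[frame["responded"]]
print("unweighted mean age:", resp["age"].mean().round(2))
print("weighted mean age:  ",
      np.average(resp["age"], weights=resp["weight"]).round(2))
print("frame mean age:     ", frame["age"].mean().round(2))
```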
16

Identifying Induced Bias in Machine Learning

Chowdhury Mohammad Rakin Haider (18414885) 22 April 2024
The last decade has witnessed an unprecedented rise in the application of machine learning in high-stakes automated decision-making systems such as hiring, policing, bail sentencing, and medical screening. The long-lasting impact of these intelligent systems on human life has drawn attention to their fairness implications. A majority of subsequent studies targeted the existing historically unfair decision labels in the training data as the primary source of bias and strove either to remove them from the dataset (de-biasing) or to avoid learning discriminatory patterns from them during training. In this thesis, we show that label bias is not a necessary condition for unfair outcomes from a machine learning model. We develop theoretical and empirical evidence showing that biased model outcomes can be introduced by a range of data properties and components of the machine learning development pipeline.

We first prove that machine learning models are expected to introduce bias even when the training data does not include label bias, using a proof-by-construction technique in our formal analysis. We demonstrate that models trained to optimize joint accuracy introduce bias even when the underlying training data is free of label bias but includes other forms of disparity. We identify two such data properties: group-wise disparity in feature predictivity and group-wise disparity in the rates of missing values. The experimental results suggest that a wide range of classifiers trained on synthetic or real-world datasets are prone to introducing bias under feature disparity and missing-value disparity, independently of or in conjunction with label bias. We further analyze the trade-off between fairness and established techniques for improving the generalization of machine learning models, such as adversarial training and increasing model complexity. We report that adversarial training sacrifices fairness to achieve robustness against noisy (typically adversarial) samples, and we propose a fair re-weighted adversarial training method that improves the fairness of adversarially trained models while sacrificing minimal adversarial robustness. Finally, we observe that although increasing model complexity typically improves generalization accuracy, it does not linearly reduce the disparities in prediction rates.

This thesis unveils a vital limitation of machine learning that has yet to receive significant attention in the FairML literature. Conventional FairML work reduces the fairness task to de-biasing or avoiding the learning of discriminatory patterns, but the reality is far from that simple: everything from deciding which features to collect up to algorithmic choices such as optimizing for robustness can act as a source of bias in model predictions. This calls for detailed investigation of the fairness implications of machine learning development practices. In addition, identifying sources of bias can facilitate pre-deployment fairness audits of machine-learning-driven automated decision-making systems.
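A toy reconstruction of the missing-value-disparity effect is sketched below: the labels are free of group bias and the feature is equally predictive for both groups, yet a higher missingness rate in one group produces a group accuracy gap. The rates and the mean-imputation choice are assumptions for illustration, not the thesis' experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # protected group indicator
x = rng.normal(size=n)                   # equally predictive feature for both groups
y = x + 0.3 * rng.normal(size=n) > 0     # labels free of group bias

# Missing value disparity: group 1 loses the feature far more often,
# and gaps are filled with the (zero) feature mean.
miss_rate = np.where(group == 1, 0.6, 0.05)
x_obs = np.where(rng.random(n) < miss_rate, 0.0, x)

clf = LogisticRegression().fit(x_obs.reshape(-1, 1), y)
pred = clf.predict(x_obs.reshape(-1, 1))
for g in (0, 1):
    acc = (pred[group == g] == y[group == g]).mean()
    print(f"group {g}: accuracy {acc:.3f}")
```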
17

Imputação de dados em experimentos multiambientais: novos algoritmos utilizando a decomposição por valores singulares / Data imputation in multi-environment trials: new algorithms using the singular value decomposition

Alarcon, Sergio Arciniegas 02 February 2016
Biplot analyses using additive main effects and multiplicative interaction (AMMI) models require complete data matrices, but multi-environment trials frequently contain missing values. This thesis proposes new single and multiple imputation methodologies that can be used to analyze unbalanced data in experiments with genotype-by-environment (G×E) interaction. The first is a new extension of the cross-validation-by-eigenvector method (Bro et al., 2008). The second is a new non-parametric algorithm obtained by modifying the single imputation method developed by Yan (2013). A further study considers imputation systems recently reported in the literature and compares them with the classic procedure recommended for imputation in G×E trials, namely the combination of the Expectation-Maximization algorithm with AMMI models (EM-AMMI). Finally, generalizations are given of the single imputation described by Arciniegas-Alarcón et al. (2010), which combines regression with a lower-rank approximation of a matrix. All the methodologies are based on the singular value decomposition (SVD) and are therefore free of distributional or structural assumptions. To determine the performance of the new imputation schemes, simulations were run on real datasets from different species, with values deleted at random at different percentages, and the quality of the imputations was evaluated with various statistics. It was concluded that the SVD is a useful and flexible tool for building efficient techniques that circumvent the problem of missing information in experimental matrices.
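A generic member of the SVD-based family the thesis builds on is the iterative low-rank imputation loop sketched below (fill the gaps, refit a rank-r SVD, overwrite only the missing cells, repeat); it is a simplified stand-in under stated assumptions, not one of the thesis' new algorithms.

```python
import numpy as np

def svd_impute(X, rank=2, n_iter=100, tol=1e-8):
    """Iterative low-rank imputation: fill gaps with column means, refit a
    rank-r SVD, overwrite only the missing cells, and repeat until stable."""
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        new = np.where(mask, approx, X)               # keep observed cells fixed
        if np.linalg.norm(new - filled) <= tol * np.linalg.norm(filled):
            return new
        filled = new
    return filled

rng = np.random.default_rng(0)
G = rng.normal(size=(10, 3)) @ rng.normal(size=(3, 6))  # rank-3 G×E-style table
G_missing = G.copy()
G_missing[2, 4] = G_missing[7, 1] = np.nan
print(np.round(svd_impute(G_missing, rank=3) - G, 3))   # residuals near zero
```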
18

Analysis of Longitudinal Surveys with Missing Responses

Carrillo Garcia, Ivan Adolfo January 2008
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. The National Longitudinal Survey of Children and Youth (NLSCY), a large-scale survey with a complex sampling design conducted by Statistics Canada, follows a large group of children and youth over time and collects measurements on various indicators related to their educational, behavioral and psychological development. One of the major objectives of the study is to explore how such development is related to or affected by familial, environmental and economic factors. The generalized estimating equation approach, better known as the GEE method, is the most popular statistical inference tool for longitudinal studies. The vast majority of the existing literature on the GEE method, however, uses it in non-survey settings, and issues related to complex sampling designs are ignored. This thesis develops methods for the analysis of longitudinal surveys when the response variable contains missing values. Our methods are built within the GEE framework, with a major focus on using the GEE method when missing responses are handled through hot-deck imputation. We first argue why, and further show how, the survey weights can be incorporated into the so-called pseudo-GEE method under a joint randomization framework, and we establish the consistency of the resulting pseudo-GEE estimators with complete responses. The main focus of this research is to extend the pseudo-GEE method to cases where missing responses are imputed through the hot-deck method. Both weighted and unweighted hot-deck imputation procedures are considered, and the consistency of the pseudo-GEE estimators under imputation is established for both. Linearization variance estimators are developed under the assumption that the finite population sampling fraction is small or negligible, a scenario that often holds for large-scale population surveys. Finite-sample performance of the proposed estimators is investigated through an extensive simulation study; the results show that the pseudo-GEE estimators and the linearization variance estimators perform well under several sampling designs and for both continuous and binary responses.
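A minimal sketch of hot-deck imputation within imputation classes, in both weighted and unweighted flavors, is given below; the province-based classes, variable names, and weights are hypothetical, and the snippet ignores the survey-design details the thesis is actually about.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical extract: a wave-2 response with gaps, imputation classes
# defined by province, and survey weights for the weighted variant.
df = pd.DataFrame({
    "province": ["ON", "ON", "ON", "QC", "QC", "QC"],
    "score_w2": [10.0, 12.0, np.nan, 9.0, np.nan, 11.0],
    "weight":   [1.2, 0.8, 1.0, 2.0, 1.5, 0.5],
})

def hot_deck(group, weighted=True):
    """Fill each missing response with a donor value drawn from the same
    imputation class, optionally with probability proportional to weight."""
    donors = group.dropna(subset=["score_w2"])
    w = donors["weight"] if weighted else None
    out = group.copy()
    for i in out.index[out["score_w2"].isna()]:
        pick = donors["score_w2"].sample(
            n=1, weights=w, random_state=int(rng.integers(1 << 31)))
        out.loc[i, "score_w2"] = pick.iloc[0]
    return out

print(df.groupby("province", group_keys=False).apply(hot_deck))
```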
20

Σχεδιασμός και υλοποίηση πολυκριτηριακής υβριδικής μεθόδου ταξινόμησης βιολογικών δεδομένων με χρήση εξελικτικών αλγορίθμων και νευρωνικών δικτύων / Design and implementation of a multi-objective hybrid method for classifying biological data using evolutionary algorithms and neural networks

Σκρεπετός, Δημήτριος 09 October 2014
Hard classification problems in Bioinformatics, such as microRNA gene prediction and protein-protein interaction (PPI) prediction, demand powerful classifiers that achieve good prediction accuracy, handle missing values, remain interpretable, and do not suffer from the class imbalance problem. Neural networks are a widely used classifier, but their architecture and other parameters must be specified, and their training algorithms usually converge to local minima. For these reasons, a multi-objective evolutionary method is proposed that uses evolutionary algorithms to optimize many of the aforementioned performance criteria of a neural network and to find both an optimized architecture and a global minimum for its synaptic weights. The entire final population is then used as an ensemble classifier to perform the classification.
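The distinctive final step, using the whole evolved population as an ensemble rather than keeping only the single best network, can be sketched as follows; the tiny network architecture and the randomly initialized "population" are placeholders for individuals an actual multi-objective EA would produce.

```python
import numpy as np

def nn_forward(x, W1, b1, W2, b2):
    """Tiny one-hidden-layer network: tanh hidden layer, sigmoid output."""
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def ensemble_predict(x, population):
    """Average the outputs of every individual in the final EA population,
    then threshold, instead of predicting with one selected network."""
    probs = np.mean([nn_forward(x, *ind) for ind in population], axis=0)
    return (probs > 0.5).astype(int)

# Placeholder "final population": randomly initialized networks standing in
# for the individuals a multi-objective EA would actually evolve.
rng = np.random.default_rng(3)
population = [(rng.normal(size=(4, 6)), rng.normal(size=6),
               rng.normal(size=(6, 1)), rng.normal(size=1))
              for _ in range(10)]
X = rng.normal(size=(5, 4))
print(ensemble_predict(X, population).ravel())
```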
