1

Classification and Sequential Pattern Mining From Uncertain Datasets

Hooshsadat, Metanat Unknown Date
No description available.
2

Association Rule Based Classification

Palanisamy, Senthil Kumar 03 May 2006 (has links)
In this thesis, we focused on the construction of classification models based on association rules. Although association rules have been predominantly used for data exploration and description, interest in using them for prediction has rapidly increased in the data mining community. In order to mine only rules that can be used for classification, we modified the well-known association rule mining algorithm Apriori to handle user-defined input constraints. We considered constraints that require the presence/absence of particular items, or that limit the number of items, in the antecedents and/or the consequents of the rules. We developed a characterization of those itemsets that can potentially form rules satisfying the given constraints. This characterization allows us to prune, during itemset construction, itemsets such that neither they nor any of their supersets will form valid rules, which improves the time performance of itemset construction. Using this characterization, we implemented a classification system based on association rules and compared the performance of several model construction methods, including CBA, and several model deployment modes used to make predictions. Although the data mining community has dealt only with the classification of single-valued attributes, there are several domains in which the classification target is set-valued. Hence, we enhanced our classification system with a novel approach to handle the prediction of set-valued class attributes. Since the traditional classification accuracy measure is inappropriate in this context, we developed an evaluation method for set-valued classification based on the E-Measure. Furthermore, we enhanced our algorithm so that it does not rely on the typical support/confidence framework, but instead mines for the best possible rules above a user-defined minimum confidence and within a desired range for the number of rules. This avoids long mining times that might produce large collections of rules with low predictive power. For this purpose, we developed a heuristic function to determine an initial minimum support and then adjusted it using a binary search strategy until a number of rules within the given range was obtained. We implemented all of the techniques described above in WEKA, an open-source suite of machine learning algorithms, and used several datasets from the UCI Machine Learning Repository to test and evaluate them.
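
The abstract above describes adjusting the minimum support with a binary search until the number of mined rules falls inside a user-given range. The following is a minimal sketch of that idea under simplifying assumptions: `mine_rules` stands in for any rule miner whose output count shrinks as the support threshold rises (e.g. constrained Apriori with a fixed minimum confidence); it is not the thesis implementation.

```python
# Hypothetical sketch: tune the minimum support so that the number of mined
# rules lands in [lo_rules, hi_rules]. `mine_rules(support)` is an assumed
# placeholder for an association-rule miner.

def tune_min_support(mine_rules, lo_rules, hi_rules,
                     init_support=0.1, max_iters=20):
    low, high = 0.0, 1.0            # search bounds on the support threshold
    support = init_support          # heuristic initial guess
    for _ in range(max_iters):
        rules = mine_rules(support)
        n = len(rules)
        if lo_rules <= n <= hi_rules:   # rule count within the desired range
            return support, rules
        if n > hi_rules:                # too many rules -> raise the threshold
            low = support
        else:                           # too few rules -> lower the threshold
            high = support
        support = (low + high) / 2.0
    return support, mine_rules(support)

# Toy usage with a fake miner whose rule count shrinks as support grows.
if __name__ == "__main__":
    fake_miner = lambda s: [("rule", i) for i in range(int(1000 * (1.0 - s)))]
    s, rules = tune_min_support(fake_miner, lo_rules=100, hi_rules=200)
    print(f"chosen support={s:.3f}, rules={len(rules)}")
```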
3

Associative classification, linguistic entity relationship extraction, and description-logic representation of biomedical knowledge applied to MEDLINE

Rak, Rafal 11 1900 (has links)
MEDLINE, a large and constantly growing collection of biomedical article references, has been the source of numerous investigations related to textual information retrieval and knowledge capture, including article categorization, bibliometric analysis, semantic query answering, and biological concept recognition and relationship extraction. This dissertation discusses the design and development of novel methods that contribute to the tasks of document categorization and relationship extraction. The two investigations result in a fast tool for building descriptive models capable of categorizing documents under multiple labels, and a highly effective method able to extract a broad range of relationships between entities embedded in text. Additionally, an application is presented that represents the extracted knowledge in the strictly defined yet highly expressive structure of an ontology. The classification of documents is based on the idea of building association rules that consist of frequent patterns of words appearing in documents and the classes these patterns are likely to be assigned to. The process of building the models relies on a tree enumeration technique and dataset projection. The resulting algorithm offers two tree-traversal strategies, breadth-first and depth-first. The classification scenario involves two alternative thresholding strategies based either on the document-independent confidence of the rules or on a similarity measure between a rule and a document. The presented classification tool is shown to perform faster than other methods and is the first associative-classification solution to incorporate multiple classes and information about the recurrence of words in documents. The extraction of relations between entities embedded in text uses the output of a constituent parser and a set of manually developed tree-like patterns. Both serve as the input of a novel algorithm that solves the newly formulated problem of constrained constituent tree inclusion with regular expression matching. The proposed relation extraction method is demonstrated to be parser-independent and to outperform dependency-parser-based and machine-learning-based solutions in terms of effectiveness. The extracted knowledge is further embedded in an existing ontology, which, together with a structure-driven modification of the ontology, results in a comprehensible, inference-consistent knowledge base constituting a tangible representation of knowledge and a potential component of applications such as semantically enhanced query answering systems.
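
To make the rule-based categorization step above concrete, here is a small hedged sketch of confidence-thresholded associative classification for multi-label documents: rules pair frequent word patterns with class labels, and every class backed by a matching rule above the threshold is returned. The rules, words, and threshold below are illustrative only, not the dissertation's model.

```python
# Hedged sketch of multi-label associative classification:
# rules = (antecedent word set, class label, confidence).

def classify(doc_words, rules, min_confidence=0.6):
    """Return every class supported by a matching rule above the threshold.

    A rule matches when all of its antecedent words occur in the document.
    """
    doc = set(doc_words)
    labels = {}
    for antecedent, label, conf in rules:
        if set(antecedent) <= doc and conf >= min_confidence:
            labels[label] = max(labels.get(label, 0.0), conf)
    # multi-label output: all classes that survived thresholding, best-first
    return sorted(labels, key=labels.get, reverse=True)

# Illustrative rules; the third one falls below the confidence threshold.
rules = [
    ({"protein", "binding"}, "molecular_biology", 0.82),
    ({"gene", "expression"}, "genetics", 0.74),
    ({"patient", "trial"}, "clinical", 0.55),
]
print(classify("the gene expression of a binding protein".split(), rules))
# -> ['molecular_biology', 'genetics']
```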
4

Associative classification, linguistic entity relationship extraction, and description-logic representation of biomedical knowledge applied to MEDLINE

Rak, Rafal Unknown Date
No description available.
5

System Complexity Reduction via Feature Selection

January 2011 (has links)
This dissertation transforms a set of system complexity reduction problems into feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree models. Associative classifiers can achieve high accuracy, but the combination of many rules is difficult to interpret. Rule condition subset selection (RCSS) methods for associative classification are considered. RCSS aims to prune the rule conditions into a subset via feature selection; the subset can then be summarized into rule-based classifiers. Experiments show that classifiers obtained after RCSS can substantially improve classification interpretability without loss of accuracy. An ensemble feature selection method is proposed to learn Markov blankets for either discrete or continuous networks (without linear or Gaussian assumptions). The method is compared to a Bayesian local structure learning algorithm and to alternative feature selection methods on the causal structure learning problem. Feature selection is also used to enhance the interpretability of time series classification. Existing time series classification algorithms (such as nearest-neighbor with dynamic time warping measures) are accurate but difficult to interpret. This research leverages the time ordering of the data to extract features and generates an effective and efficient classifier referred to as a time series forest (TSF). The computational complexity of TSF is only linear in the length of the time series, and interpretable features can be extracted. These features can be further reduced and summarized for even better interpretability. Lastly, two variable importance measures are proposed to reduce the feature selection bias in tree-based ensemble models. It is well known that bias can occur when predictor attributes have different numbers of values. Two methods are proposed to solve the bias problem: one uses an out-of-bag sampling method, called OOBForest, and the other, based on the new concept of a partial permutation test, is called pForest. Experimental results show that existing methods are not always reliable for multi-valued predictors, while the proposed methods have advantages. / Dissertation/Thesis / Ph.D. Industrial Engineering 2011
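
As an illustration of the interval-feature idea behind a time series forest mentioned above, the sketch below summarizes intervals of a series by their mean, standard deviation, and least-squares slope, so that those summaries can feed an ordinary tree ensemble. The random interval sampling and all parameters are assumptions for the demo; this is not the published TSF code.

```python
# Hedged sketch: interval features (mean, std, slope) for time series.
import random
import statistics

def interval_features(series, start, end):
    window = series[start:end]
    n = len(window)
    mean = statistics.fmean(window)
    std = statistics.pstdev(window)
    # least-squares slope of the window against the time index 0..n-1
    t_mean = (n - 1) / 2.0
    denom = sum((t - t_mean) ** 2 for t in range(n)) or 1.0
    slope = sum((t - t_mean) * (x - mean) for t, x in enumerate(window)) / denom
    return mean, std, slope

def random_interval_features(series, n_intervals=3, min_len=4, seed=0):
    rng = random.Random(seed)
    feats = []
    for _ in range(n_intervals):
        start = rng.randrange(0, len(series) - min_len)
        end = rng.randrange(start + min_len, len(series) + 1)
        feats.extend(interval_features(series, start, end))
    return feats

print(random_interval_features([0.0, 0.5, 1.2, 1.9, 2.1, 2.0, 1.4, 0.8]))
```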
6

Amélioration des procédures adaptatives pour l'apprentissage supervisé des données réelles / Improving adaptive methods of supervised learning for real data

Bahri, Emna 08 December 2010 (has links)
Machine learning faces a range of difficulties when confronted with real data. Such data are generally complex, voluminous, heterogeneous in nature, drawn from varied sources, and often acquired automatically. Among the best-known problems are the sensitivity of learning algorithms to noisy data and the handling of data whose class distribution is imbalanced. Overcoming these problems is a genuine challenge for improving the effectiveness of the learning process on real data. In this thesis, we focus on adaptive (boosting-type) procedures that remain effective in the presence of noise or imbalanced data. We are first interested in making boosting robust to noise. Boosting procedures have contributed greatly to improving the predictive power of classifiers in data mining, except in the presence of noisy data. In that case, two problems arise: (1) over-fitting of the noisy examples and (2) a slowdown in the convergence of boosting. Against this double problem, we propose AdaBoost-Hybrid, an adaptation of the AdaBoost algorithm based on smoothing the results of the previous boosting hypotheses, which has given very satisfactory experimental results. We are then interested in another hard problem: prediction when the class distribution is imbalanced. We propose an adaptive boosting-type method based on associative classification, whose interest lies in its ability to focus on small groups of cases, which is well suited to imbalanced data. This method rests on three contributions: (1) FCP-Growth-P, a supervised algorithm for generating frequent class itemsets, derived from FP-Growth, into which a counter-example-based pruning condition is introduced to specialize the rules; (2) W-CARP, an associative classification method intended to give results at least equivalent to those of existing approaches in a much shorter execution time; and (3) CARBoost, an adaptive associative classification method that uses W-CARP as its weak classifier. Finally, in a chapter devoted to the application of intrusion detection, we compared the results of AdaBoost-Hybrid and CARBoost with those of reference methods on the KDD Cup 99 data.
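
For readers unfamiliar with the baseline that AdaBoost-Hybrid modifies, the sketch below shows the standard binary AdaBoost weight update, including the step where noisy examples keep getting up-weighted (the over-fitting problem discussed above). The smoothing over previous hypotheses introduced by the thesis is not reproduced; `weak_learn` is an assumed placeholder for any weak learner.

```python
# Standard binary AdaBoost (labels in {-1, +1}); not the AdaBoost-Hybrid variant.
import math

def adaboost(X, y, weak_learn, n_rounds=10):
    n = len(y)
    w = [1.0 / n] * n                          # uniform example weights
    ensemble = []                              # list of (alpha, hypothesis)
    for _ in range(n_rounds):
        h = weak_learn(X, y, w)
        preds = [h(x) for x in X]
        err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        # misclassified (possibly noisy) examples are up-weighted here,
        # which is the over-fitting behaviour the hybrid variant dampens
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, h))
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy usage: a trivial "weak learner" that thresholds the single feature at 0.
if __name__ == "__main__":
    X, y = [-2.0, -1.0, 0.5, 2.0], [-1, -1, 1, 1]
    stump = lambda X, y, w: (lambda x: 1 if x >= 0 else -1)
    model = adaboost(X, y, stump, n_rounds=3)
    print([model(x) for x in X])   # -> [-1, -1, 1, 1]
```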
7

Finding Patterns in Vehicle Diagnostic Trouble Codes : A data mining study applying associative classification

Fransson, Moa, Fåhraeus, Lisa January 2015 (has links)
In Scania vehicles, Diagnostic Trouble Codes (DTCs) are collected while driving and later loaded into a central database when the vehicle visits a workshop. These DTCs are used statistically to analyse vehicle health status, which is why correctness of the data is desirable. DTCs can, however, also be generated in workshops as a result of work and tests, yet they are loaded into the database without any notification. To perform an accurate analysis of vehicle health status, it would be desirable to find and remove such DTCs. This thesis has examined whether this is possible by searching for patterns in DTCs that indicate whether a DTC was generated in a workshop or not. Because its outcome is easy to interpret, an Associative Classification method was used to categorise the data. The classifier was built using well-known algorithms, and two classification algorithms were then developed to fit the data structure when labelling new data. The final classifier performed with an accuracy above 80 percent, with no distinctive differences between the two algorithms. However, barely 50 percent of all workshop DTCs were found. The conclusion is that either patterns in workshop DTCs occur in only 50 percent of the cases, or the classifier can detect only 50 percent of them. The patterns found could confirm previous knowledge about workshop-generated DTCs as well as provide Scania with new information.
8

Théorie des fonctions de croyance : application des outils de data mining pour le traitement des données imparfaites / Belief function theory : application of data mining tools for imperfect data treatment

Samet, Ahmed 03 December 2014 (has links)
This thesis lies at the intersection of two disciplines: Belief Function Theory (BFT) and data mining. The interaction between the two is studied from two angles. The first concerns the contribution of generic association rules within the BFT. We address the problem of fusing unreliable sources, whose main consequence is the appearance of conflict during combination, and propose a conflict-management approach based on generic association rules, called ACM. The second concerns imperfect databases, in particular evidential databases, in which information is represented by mass (belief) functions. These databases are studied in order to extract hidden knowledge using data mining tools; extracting the relevant, hidden information is made possible by redefining the support and confidence measures. These measures form the foundation of a new associative classifier that we call EDMA.
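
The abstract above builds on mass functions and on redefined support and confidence measures for evidential databases. The sketch below shows the standard belief and plausibility computations from a mass function, plus one simple, assumed way to average beliefs across transactions into an "evidential support"; the thesis defines its own measures, which are not reproduced here.

```python
# Standard Dempster-Shafer belief/plausibility; the support aggregation is an
# illustrative assumption, not the thesis' definition.

def belief(mass, itemset):
    """Bel(A): total mass of non-empty focal elements fully contained in A."""
    a = frozenset(itemset)
    return sum(m for focal, m in mass.items() if frozenset(focal) <= a and focal)

def plausibility(mass, itemset):
    """Pl(A): total mass of focal elements compatible with A."""
    a = frozenset(itemset)
    return sum(m for focal, m in mass.items() if frozenset(focal) & a)

def evidential_support(database, itemset):
    """Assumed demo measure: average belief of the itemset across transactions."""
    return sum(belief(mass, itemset) for mass in database) / len(database)

# Two transactions, each a mass function over subsets of items {x, y, z}.
db = [
    {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4},
    {frozenset({"y"}): 0.5, frozenset({"x", "y", "z"}): 0.5},
]
print(evidential_support(db, {"x", "y"}))   # (0.6 + 0.4 + 0.5) / 2 = 0.75
```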
9

Enhancing fuzzy associative rule mining approaches for improving prediction accuracy : integration of fuzzy clustering, apriori and multiple support approaches to develop an associative classification rule base

Sowan, Bilal Ibrahim January 2011 (has links)
Building an accurate and reliable prediction model for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model in three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database and build a Knowledge Base (KB) for predicting a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model has been further developed by including a diversification method in order to obtain reliable FARs and find the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient filtering of the FARs is performed as a post-processing method. The diversity of the FARs is maintained through clustering of the FARs, based on the sharing function technique used in multi-objective optimization. The best and most diverse FARs are retained as the DFRB for use within a Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm, and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules, FCARs) by employing the improved multiple support threshold, associative classification, and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning and KEEL (Knowledge Extraction based on Evolutionary Learning) repositories, and the results of the proposed FACRM are also compared with other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better than those of the commonly used models for the majority of data sets. A new method for feature selection entitled Weighting Feature Selection (WFS) is also proposed. The WFS method aims to improve the performance of the FACRM model; the prediction performance is improved by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
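
As a hedged illustration of the fuzzy-support computation that fuzzy association rule mining relies on, the sketch below combines per-record membership degrees with a min t-norm and averages them over the records. The memberships, items, and t-norm choice are assumptions for the demo; the thesis' FCM-derived memberships and multiple-support thresholds are not reproduced.

```python
# Hedged sketch of fuzzy support for an itemset of (attribute, term) pairs.

def fuzzy_support(records, itemset):
    """records: list of dicts mapping (attribute, term) -> membership degree.
    itemset: iterable of (attribute, term) pairs."""
    total = 0.0
    for rec in records:
        degrees = [rec.get(item, 0.0) for item in itemset]
        total += min(degrees)                 # min t-norm combines the degrees
    return total / len(records)

# Illustrative memberships, loosely echoing the road-traffic setting above.
records = [
    {("speed", "high"): 0.8, ("flow", "heavy"): 0.7},
    {("speed", "high"): 0.2, ("flow", "heavy"): 0.9},
    {("speed", "high"): 0.0, ("flow", "heavy"): 0.4},
]
itemset = [("speed", "high"), ("flow", "heavy")]
print(fuzzy_support(records, itemset))        # (0.7 + 0.2 + 0.0) / 3 = 0.30
```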
10

Análise de desempenho dos algoritmos Apriori e Fuzzy Apriori na extração de regras de associação aplicados a um Sistema de Detecção de Intrusos. / Performance analysis of algorithms Apriori and Fuzzy Apriori in association rules mining applied to a System for Intrusion Detection.

Ricardo Ferreira Vieira de Castro 20 February 2014 (has links)
The mining of association rules (ARM - Association Rule Mining) from quantitative data has been of great research interest in the area of data mining. With the increasing size of databases, there is substantial investment in research on algorithms that improve the number of rules produced, their relevance, and computational performance. The APRIORI algorithm, traditionally used to extract association rules, was originally created to work with categorical attributes. To use it with continuous (quantitative) attributes, it is generally necessary to discretize them, turning each discrete interval into a category. The most traditional discretization methods produce intervals with sharp boundaries, which may underestimate or overestimate elements near the partition limits and therefore lead to an imprecise semantic representation. One way to address this problem is to create soft partitions with smoothed boundaries. This work uses a fuzzy partition of the continuous variables, based on fuzzy set theory, which transforms the quantitative attributes into partitions of linguistic terms. Algorithms for mining fuzzy association rules (FARM - Fuzzy Association Rule Mining) work on this principle, and the FUZZYAPRIORI algorithm, which belongs to this category, is used here. The extracted rules are expressed in linguistic terms, which is more natural and more easily interpreted by human reasoning. The traditional APRIORI and FUZZYAPRIORI algorithms are compared through associative classifiers built from the rules each one extracts. These classifiers were applied to a database of TCP/IP connection records intended for building an Intrusion Detection System.
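
To illustrate the sharp-versus-smoothed-boundary point made above, the sketch below contrasts a crisp binning, where a value belongs to exactly one interval, with triangular fuzzy sets, where a value near a boundary belongs partially to two linguistic terms. The breakpoints and term names are made-up examples, not the dissertation's partition of the TCP/IP data.

```python
# Crisp discretization vs. triangular fuzzy partition (illustrative values).

def crisp_bin(x, edges=(0.0, 30.0, 70.0, 100.0), labels=("low", "mid", "high")):
    for lo, hi, lab in zip(edges, edges[1:], labels):
        if lo <= x < hi:
            return lab
    return labels[-1]

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with peak b and feet a, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_terms(x):
    return {
        "low":  triangular(x, -1.0, 0.0, 50.0),
        "mid":  triangular(x, 0.0, 50.0, 100.0),
        "high": triangular(x, 50.0, 100.0, 101.0),
    }

# A value just under a crisp boundary is "low" outright, yet almost as "mid".
print(crisp_bin(29.0))      # -> 'low'
print(fuzzy_terms(29.0))    # -> {'low': 0.42, 'mid': 0.58, 'high': 0.0}
```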
