11

SQL Implementation of Value Reduction with Multiset Decision Tables

Chen, Chen 16 May 2014 (has links)
No description available.
12

UNSUPERVISED DATA MINING BY RECURSIVE PARTITIONING

HE, AIJING 16 September 2002 (has links)
No description available.
13

Concept Approximations

Meschke, Christian 05 June 2012 (has links) (PDF)
In this thesis, we present a lattice-theoretical approach to the field of approximations. Given a pair consisting of a kernel system and a closure system on an underlying lattice, one receives a lattice of approximations. We describe the theory of these lattices of approximations. Furthermore, we put a special focus on the case of concept lattices. As it turns out, approximations of formal concepts can be interpreted as traces, which are preconcepts in a subcontext.
14

Concept Approximations: Approximative Notions for Concept Lattices

Meschke, Christian 13 April 2012 (has links)
In this thesis, we present a lattice-theoretical approach to the field of approximations. Given a pair consisting of a kernel system and a closure system on an underlying lattice, one receives a lattice of approximations. We describe the theory of these lattices of approximations. Furthermore, we put a special focus on the case of concept lattices. As it turns out, approximations of formal concepts can be interpreted as traces, which are preconcepts in a subcontext.

Contents: Preface; 1. Preliminaries; 2. Approximations in Complete Lattices; 3. Concept Approximations; 4. Rough Sets; List of Symbols; Index; Bibliography
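The kernel/closure pair described in the abstract specializes, in the rough-set case, to the classical lower and upper approximations of a set under an indiscernibility partition. A minimal illustrative sketch (toy data, not from the thesis):

```python
# Toy sketch: rough-set lower/upper approximations of a target set.
# The lower approximation acts as a kernel operator, the upper as a closure.

def partition_blocks(universe, key):
    """Partition the universe into indiscernibility classes by a key function."""
    blocks = {}
    for x in universe:
        blocks.setdefault(key(x), set()).add(x)
    return list(blocks.values())

def lower_approx(target, blocks):
    """Union of blocks fully contained in the target set."""
    return set().union(*[b for b in blocks if b <= target])

def upper_approx(target, blocks):
    """Union of blocks that intersect the target set."""
    return set().union(*[b for b in blocks if b & target])

universe = {1, 2, 3, 4, 5, 6}
blocks = partition_blocks(universe, key=lambda x: x % 3)  # {1,4}, {2,5}, {3,6}
target = {1, 2, 4}
print(sorted(lower_approx(target, blocks)))  # [1, 4]
print(sorted(upper_approx(target, blocks)))  # [1, 2, 4, 5]
```

The gap between the two approximations (here {2, 5}) is the boundary region: elements indiscernible from both members and non-members of the target.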
15

Classification Models in Clinical Decision Making

Gil-Herrera, Eleazar 01 January 2013 (has links)
In this dissertation, we present a collection of manuscripts describing the development of prognostic models designed to assist clinical decision making. This work is motivated by the limitations of commonly used techniques in producing accessible prognostic models with easily interpretable and clinically credible results; such limitations hinder the widespread utilization of prognostic models in medical practice. Our methodology is based on Rough Set Theory (RST) as a mathematical tool for clinical data analysis. We focus on developing rule-based prognostic models for end-of-life care decision making, in an effort to improve the hospice referral process. The development of the prognostic models is demonstrated using a retrospective data set of 9,103 terminally ill patients containing physiological characteristics, diagnostic information and neurological function values. We develop four RST-based prognostic models and compare them with commonly used classification techniques, including logistic regression, support vector machines, random forests and decision trees, in terms of characteristics related to clinical credibility such as accessibility and accuracy. The RST-based models show accuracy comparable with the other methodologies while providing accessible models with a structure that facilitates clinical interpretation. They offer both more insight into the model process and more opportunity for the model to incorporate personal information of those making and being affected by the decision.
16

A framework of adaptive T-S type rough-fuzzy inference systems (ARFIS)

Lee, Chang Su January 2009 (has links)
[Truncated abstract] Fuzzy inference systems (FIS) are information processing systems that use a fuzzy logic mechanism to represent the human reasoning process and to make decisions in the uncertain, imprecise environments of our daily lives. Since the introduction of fuzzy set theory, fuzzy inference systems have been widely used, mainly for system modeling and industrial plant control in a variety of practical applications, as well as for other decision-making purposes: advanced data analysis in medical research, risk management in business, stock market prediction in finance, data analysis in bioinformatics, and so on. Many approaches have been proposed to address the automatic generation of membership functions and rules, with their subsequent adjustment towards more satisfactory system performance, because one of the most important factors in building a high-quality FIS is the generation of its knowledge base, which consists of membership functions, fuzzy rules, fuzzy logic operators and other components for fuzzy calculations. The design of an FIS comes either from the experience of human experts in the corresponding field of research or from input and output data observations collected from the operation of systems. Therefore, it is crucial to generate a high-quality FIS from a highly reliable design scheme that best models the desired system process. Furthermore, due to the lack of a learning property in fuzzy systems themselves, most of the suggested schemes incorporate hybridization techniques towards better performance within a fuzzy system framework. ... This systematic enhancement is required to update the FIS in order to produce flexible and robust fuzzy systems for unexpected, unknown inputs from real-world environments.
This thesis proposes a general framework of Adaptive T-S (Takagi-Sugeno) type Rough-Fuzzy Inference Systems (ARFIS) for a variety of practical applications in order to resolve the problems mentioned above in the context of a Rough-Fuzzy hybridization scheme. Rough set theory is employed to effectively reduce the number of attributes that pertain to input variables and obtain a minimal set of decision rules based on input and output data sets. The generated rules are examined by checking their validity to use them as T-S type fuzzy rules. Using its excellent advantages in modeling non-linear systems, the T-S type fuzzy model is chosen to perform the fuzzy inference process. A T-S type fuzzy inference system is constructed by an automatic generation of membership functions and rules by the Fuzzy C-Means (FCM) clustering algorithm and the rough set approach, respectively. The generated T-S type rough-fuzzy inference system is then adjusted by the least-squares method and a conjugate gradient descent algorithm towards better performance within a fuzzy system framework. To show the viability of the proposed framework of ARFIS, the performance of ARFIS is compared with other existing approaches in a variety of practical applications; pattern classification, face recognition, and mobile robot navigation. The results are very satisfactory and competitive, and suggest the ARFIS is a suitable new framework for fuzzy inference systems by showing a better system performance with less number of attributes and rules in each application.
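For readers unfamiliar with the T-S (Takagi-Sugeno) model the abstract relies on, a minimal sketch of T-S inference may help: each rule pairs Gaussian input memberships with a linear consequent, and the output is the firing-strength-weighted average of the consequents. The rules and parameters below are illustrative assumptions, not ARFIS itself:

```python
import math

# Minimal sketch of first-order Takagi-Sugeno inference (illustrative only).
# Each rule: (centers, sigmas, coeffs), with coeffs = (a1, ..., an, b) so the
# consequent is y = a·x + b. Output = sum(w_i * y_i) / sum(w_i).

def gaussian(x, center, sigma):
    """Gaussian membership degree of x in a fuzzy set."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def ts_infer(x, rules):
    num = den = 0.0
    for centers, sigmas, coeffs in rules:
        # firing strength: product t-norm over the input memberships
        w = 1.0
        for xi, c, s in zip(x, centers, sigmas):
            w *= gaussian(xi, c, s)
        # linear consequent of the rule
        y = sum(a * xi for a, xi in zip(coeffs[:-1], x)) + coeffs[-1]
        num += w * y
        den += w
    return num / den if den else 0.0

# two rules over a single input: "near 0 -> y = 0", "near 2 -> y = 2"
rules = [((0.0,), (1.0,), (0.0, 0.0)),
         ((2.0,), (1.0,), (0.0, 2.0))]
print(ts_infer((1.0,), rules))  # 1.0 by symmetry
```

In ARFIS the rule antecedents would come from FCM clustering and rough-set rule reduction, and the consequent coefficients would be tuned by least squares, rather than being fixed by hand as here.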
17

A rough set approach to bushings fault detection

Mpanza, Lindokuhle Justice 06 June 2012 (has links)
M. Ing. / Fault detection tools have gained popularity in recent years due to the increasing need for reliable and predictable equipment. Transformer bushings account for the majority of transformer faults. Hence, to uphold the integrity of the power transmission and distribution system, a tool that detects and identifies faults in bushings at their developing stage is necessary. Among the numerous tools for bushing monitoring, dissolved gas analysis (DGA) is the most commonly used. Advances in DGA and data storage capabilities have resulted in large amounts of data and, ultimately, a data analysis crisis. Consequently, computational intelligence methods have advanced to deal with this data analysis problem and to help in the decision-making process. Numerous computational intelligence approaches have been proposed for bushing fault detection. Most of these approaches focus on prediction accuracy, and not much research has investigated the interpretability of the decisions derived from these systems. This work proposes a rough set theory (RST) model for bushing fault detection based on DGA data analyzed using the IEEE C57.104 and IEC 60599 standards. RST is a rule-based technique suitable for analyzing vague, uncertain and imprecise data. RST extracts rules from the data to model the system; these rules are used for prediction and for interpreting the decision process. The fewer the rules, the easier it is to interpret the model. The performance of the RST model depends on the discretization technique employed. Equal frequency binning (EFB), Boolean reasoning (BR) and entropy partitioning (EP) are used to develop RST models. The model trained on EFB data performs better than the models trained using BR and EP: the accuracy achieved is 96.4%, 96.0% and 91.3% for EFB, BR and EP respectively. This work also proposes an ant colony optimization (ACO) approach to discretization.
A model created using ACO-discretized data achieved an accuracy of 96.1%, which is comparable to the three methods above. Considering overall performance, ACO is the better discretization tool, since it produces an accurate model with the fewest rules. The rough set tool proposed in this work is benchmarked against multi-layer perceptron (MLP) and radial basis function (RBF) neural networks. The results show that RST modeling for bushings is as capable as the MLP and better than the RBF. The RST, MLP and RBF models are then combined in an ensemble of classifiers, which performs better than the standalone models.
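Of the discretization schemes compared in this abstract, equal frequency binning is the simplest to state: choose cut points so each bin holds roughly the same number of observations. A hedged sketch with made-up data (the thesis' actual cut points and DGA features are not shown here):

```python
# Illustrative equal-frequency binning (EFB); the data are hypothetical.

def equal_frequency_cuts(values, n_bins):
    """Return cut points so each bin holds roughly the same count."""
    xs = sorted(values)
    n = len(xs)
    cuts = []
    for k in range(1, n_bins):
        i = round(k * n / n_bins)
        cuts.append(xs[min(i, n - 1)])
    return cuts

def discretize(value, cuts):
    """Map a raw value to its bin index (0 .. n_bins - 1)."""
    return sum(value >= c for c in cuts)

data = [1, 2, 2, 3, 7, 8, 9, 15, 20]
cuts = equal_frequency_cuts(data, 3)
print(cuts)                                # [3, 9]
print([discretize(v, cuts) for v in data]) # [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

Boolean reasoning and entropy partitioning instead pick cuts by discernibility and information gain respectively, which is why the three schemes can yield RST models of quite different accuracy on the same data.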
18

Implementation av ett kunskapsbas system för rough set theory med kvantitativa mätningar / Implementation of a Rough Knowledge Base System Supporting Quantitative Measures

Andersson, Robin January 2004 (has links)
This thesis presents the implementation of a knowledge base system for rough sets [Paw92] within the logic programming framework. The combination of rough set theory with logic programming is a novel approach. The presented implementation serves as a prototype system for the ideas presented in [VDM03a, VDM03b]. The system is available at "http://www.ida.liu.se/rkbs".

The presented language for describing knowledge in the rough knowledge base caters for implicit definition of rough sets by combining different regions (e.g. upper approximation, lower approximation, boundary) of other defined rough sets. The rough knowledge base system also provides methods for querying the knowledge base and methods for computing quantitative measures.

We test the implemented system on a medium-sized application example to illustrate the usefulness of the system and the incorporated language. We also provide performance measurements of the system.
19

以規則為基礎的分類演算法:應用粗糙集 / A Rule-Based classification algorithm: a rough set approach

廖家奇, Liao, Chia Chi Unknown Date (has links)
In this thesis, we propose a rule-based classification algorithm named ROUSER (ROUgh SEt Rule), which uses rough set theory as the basis of the search heuristics in its rule generation process. We implement ROUSER using a well-developed and widely used toolkit, evaluate it on several public data sets, and examine its applicability in a real-world case study. The problem addressed in this thesis can be traced back to a real-world application whose goal is to determine whether a data record collected from a sensor corresponds to a machine fault. To assist in the root cause analysis of machine faults, we design and implement a rule-based classification algorithm that generates models consisting of human-understandable decision rules connecting symptoms to causes. Moreover, there are contradictions in the data: for example, two data records collected at different time points may be similar, or identical except for their timestamps, while one corresponds to a machine fault and the other does not. The challenge is to analyze data with such contradictions.
We use rough set theory to overcome this challenge, since it is able to process imperfect knowledge. Researchers have proposed various classification algorithms and practitioners have applied them to various application domains, yet most classification algorithms are designed without a focus on the interpretability or understandability of the models they build. ROUSER is specifically designed to extract human-understandable decision rules from nominal data. What distinguishes ROUSER from most, if not all, other rule-based classification algorithms is that it utilizes a rough set approach to select features. ROUSER also provides several ways to decide an appropriate attribute-value pair for the antecedents of a rule. Moreover, the rule generation method of ROUSER is based on the separate-and-conquer strategy, and hence is more efficient than the indiscernibility matrix method widely adopted in classification algorithms based on rough set theory. We conduct extensive experiments to evaluate the capability of ROUSER. On about half of the nominal data sets considered in the experiments, ROUSER achieves comparable or better accuracy than classification algorithms that generate decision rules or trees. On some of the discretized data sets, ROUSER also achieves comparable or better accuracy. We also present the results of experiments on the embedded feature selection method and on the several ways of deciding an appropriate attribute-value pair for the antecedents of a rule.
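The separate-and-conquer strategy mentioned in the abstract can be sketched generically: greedily grow one rule until it covers no negatives, remove the positives it covers, and repeat. The attribute tests, scoring heuristic and toy data below are illustrative assumptions, not ROUSER's actual method:

```python
# Hedged sketch of separate-and-conquer rule induction over nominal data.
# Examples are dicts of attribute -> value; a rule is a list of (attr, value)
# tests, all of which must match.

def covers(rule, example):
    return all(example.get(attr) == val for attr, val in rule)

def learn_one_rule(pos, neg, attributes):
    """Greedily add attribute-value tests until no negatives are covered."""
    rule = []
    while any(covers(rule, e) for e in neg):
        best = None
        for attr in attributes:
            if any(a == attr for a, _ in rule):
                continue  # each attribute tested at most once
            for val in {e[attr] for e in pos if covers(rule, e)}:
                cand = rule + [(attr, val)]
                p = sum(covers(cand, e) for e in pos)
                n = sum(covers(cand, e) for e in neg)
                score = (p, -n)  # prefer high coverage, then few negatives
                if p and (best is None or score > best[0]):
                    best = (score, cand)
        if best is None:
            break  # no refinement possible
        rule = best[1]
    return rule

def separate_and_conquer(pos, neg, attributes):
    """Learn rules one at a time, removing the positives each rule covers."""
    rules, pos = [], list(pos)
    while pos:
        rule = learn_one_rule(pos, neg, attributes)
        if not rule:
            break
        rules.append(rule)
        pos = [e for e in pos if not covers(rule, e)]
    return rules

pos = [{"color": "red", "size": "big"}, {"color": "red", "size": "small"}]
neg = [{"color": "blue", "size": "big"}]
learned = separate_and_conquer(pos, neg, ["color", "size"])
print(learned)  # [[('color', 'red')]]
```

Unlike the indiscernibility matrix method, which compares every pair of objects, this covering loop touches only the examples still uncovered, which is the efficiency argument the abstract makes.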
20

Rough set-based reasoning and pattern mining for information filtering

Zhou, Xujuan January 2008 (has links)
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use user profiles effectively is one of the most challenging tasks in developing an IF system. With the document selection criteria better defined based on users' needs, filtering large streams of information can be more efficient and effective. To learn user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must deal with low-frequency pattern issues. The measures used by data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy; the most likely relevant documents are assigned higher scores by the ranking function. Because relatively few documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms other IF systems, such as the traditional Rocchio IF model and state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
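The two-stage architecture described above has a simple skeleton: a cheap recall-oriented filter followed by a precision-oriented re-ranker over the survivors. The scoring functions and documents below are placeholder assumptions, not the thesis' rough-set or pattern-taxonomy scores:

```python
# Illustrative two-stage filtering skeleton (scores are stand-ins).
# Stage 1 drops documents below a topic-filter threshold; stage 2 re-ranks
# the survivors with a second, precision-oriented score.

def two_stage_filter(docs, topic_score, pattern_score, threshold):
    survivors = [d for d in docs if topic_score(d) >= threshold]
    return sorted(survivors, key=pattern_score, reverse=True)

docs = ["rough sets in filtering", "football results", "pattern taxonomy mining"]
topic = lambda d: sum(w in d for w in ("rough", "pattern", "filtering"))
pattern = lambda d: len(d.split())  # placeholder for a pattern-based score
print(two_stage_filter(docs, topic, pattern, threshold=1))
```

The efficiency claim in the abstract corresponds to the fact that the expensive `pattern_score` is only ever evaluated on the (much smaller) survivor list.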
