1

A Spam Filter Based on Reinforcement and Collaboration

Yang, Chih-Chin 07 August 2008 (has links)
The growing volume of spam mail has not only decreased people's productivity but has also become a security threat on the Internet. Mail servers should be able to filter out spam mails precisely, even as spam changes over time, and to manage effectively the growing sets of spam rules that servers generate automatically. Most papers focus on a single aspect of spam prevention (especially spam rule generation). In the real world, however, spam prevention is not just a matter of applying a data mining algorithm for rule generation; many other issues must be considered to filter out spam correctly. In this paper, we integrate three modules into a complete anti-spam system: a spam rule generation module, a spam rule reinforcement module, and a spam rule exchange module. A rule-based data mining approach is used to generate exchangeable spam rules, the rules are reinforced using feedback returned by users, and rules are distributed among mail servers in a machine-readable XML format. The experimental results support the following conclusions: (1) the spam filter can filter out Chinese mails by analyzing header characteristics; (2) rules exchanged among mail servers improve spam recall and accuracy; (3) rule reinforcement improves the effectiveness of the spam rules.
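As a concrete illustration of the exchange module's idea, the sketch below round-trips header-based spam rules through a machine-readable XML format. The tag names, fields, and the sample rule are invented for illustration, not the schema used in the thesis.

```python
# A minimal sketch of XML-based spam-rule exchange; the schema is assumed.
import xml.etree.ElementTree as ET

def rules_to_xml(rules):
    """Serialize a list of header-based spam rules to an XML string."""
    root = ET.Element("spam_rules")
    for rule in rules:
        node = ET.SubElement(root, "rule", score=str(rule["score"]))
        for header, pattern in rule["conditions"].items():
            cond = ET.SubElement(node, "condition", header=header)
            cond.text = pattern
    return ET.tostring(root, encoding="unicode")

def xml_to_rules(xml_text):
    """Parse exchanged rules back into dictionaries usable by a filter."""
    rules = []
    for node in ET.fromstring(xml_text).findall("rule"):
        rules.append({
            "score": float(node.get("score")),
            "conditions": {c.get("header"): c.text
                           for c in node.findall("condition")},
        })
    return rules

# Round-trip example: a rule matching a suspicious charset header.
exchanged = rules_to_xml([{"score": 2.5,
                           "conditions": {"Content-Type": "charset=big5"}}])
print(xml_to_rules(exchanged))
```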
2

Using optimisation techniques to granulise rough set partitions

Crossingham, Bodie 26 January 2009 (has links)
Rough set theory (RST) is concerned with the formal approximation of crisp sets and is a mathematical tool for dealing with vagueness and uncertainty. RST can be integrated into machine learning and used both to make forecasts and to determine causal interpretations for a particular data set. The work performed in this research uses various optimisation techniques to granulise the rough set input partitions so as to achieve the highest forecasting accuracy the rough set can produce. Forecasting accuracy is measured by the area under the receiver operating characteristic (ROC) curve (AUC). The four optimisation techniques used are the genetic algorithm, particle swarm optimisation, hill climbing and simulated annealing. The newly proposed method is tested on two data sets, namely the human immunodeficiency virus (HIV) data set and the militarised interstate dispute (MID) data set. The results of this granulisation method are compared to two static granulisation methods, namely equal-width-bin and equal-frequency-bin partitioning. All of the proposed optimised methods produce higher forecasting accuracies than the two static methods. On the HIV data set, the hill climbing approach produced the highest accuracy, reaching 69.02% in 12,624 minutes; on the MID data set, the genetic algorithm approach was best, reaching 95.82% in 420 minutes. The rules generated from the rough set are linguistic and easy to interpret, but this comes at the expense of accuracy lost in the discretisation process, where the granularity of the variables is decreased.
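To make the granulisation idea concrete, here is a small self-contained sketch of hill climbing over the cut points of one input variable, scored by AUC. The synthetic data and the bin-rate scoring model are stand-ins for the rough set forecaster, not the thesis's implementation.

```python
# Hill climbing on cut points, scored by AUC -- a sketch with invented data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)                                     # one input variable
y = (x + rng.normal(scale=0.8, size=500) > 0).astype(int)    # noisy binary labels

def auc(scores, labels):
    """AUC via the rank-sum formulation (ties broken arbitrarily)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def score_with_cuts(cuts):
    """Granulise x at the given cut points and score each bin by its
    positive rate -- a crude stand-in for rule-based forecasting."""
    bins = np.digitize(x, np.sort(cuts))
    rate = np.array([y[bins == b].mean() if (bins == b).any() else 0.5
                     for b in range(len(cuts) + 1)])
    return auc(rate[bins], y)

cuts = np.linspace(-1.0, 1.0, 3)             # start from equal-width cut points
best = score_with_cuts(cuts)
for _ in range(200):                         # hill climbing: keep helpful moves
    candidate = cuts + rng.normal(scale=0.1, size=cuts.shape)
    trial = score_with_cuts(candidate)
    if trial > best:
        cuts, best = candidate, trial
print(f"optimised AUC: {best:.3f}")
```

The genetic algorithm, particle swarm and simulated annealing variants differ only in how the candidate cut points are proposed and accepted.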
3

Uncertainty Management of Intelligent Feature Selection in Wireless Sensor Networks

Mal-Sarkar, Sanchita January 2009 (has links)
No description available.
4

Anti-Spam Study: an Alliance-based Approach

Chiu, Yu-fen 12 September 2006 (has links)
The growing problem of spam has generated a need for reliable anti-spam filters. Many filtering techniques, along with machine learning and data mining, are used to reduce the amount of spam. Such algorithms can achieve very high accuracy, but at the cost of some false positives, which are generally prohibitively expensive in the real world. Much work has been done to improve specific algorithms for detecting spam, but less has been reported on leveraging multiple algorithms in email analysis. This study presents an alliance-based approach to classify spam and to discover and exchange interesting information about it. The spam filter in this study is built on a mixture of rough set theory (RST), a genetic algorithm (GA) and the XCS classifier system: RST can process imprecise and incomplete data such as spam; the GA speeds up the search for an optimal solution (i.e., the rules used to block spam); and the reinforcement learning of XCS is a good mechanism for suggesting the appropriate classification for an email. The results of spam filtering with the alliance-based approach are evaluated with several statistical methods and show strong performance. Two main conclusions can be drawn from this study: (1) rules exchanged from other mail servers indeed help the filter block more spam than before; (2) a combination of algorithms both improves accuracy and reduces false positives in spam detection.
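As an illustration of the alliance idea, the toy sketch below blocks a message only when a quorum of independent detectors agree, trading a little recall for fewer false positives. The three detectors are invented stand-ins for the RST/GA/XCS components described in the abstract.

```python
# A quorum of toy detectors standing in for the alliance's components.
def keyword_filter(msg):
    return any(w in msg.lower() for w in ("viagra", "lottery", "winner"))

def header_filter(msg):
    return "Reply-To: noreply@" in msg          # illustrative heuristic only

def length_filter(msg):
    return len(msg) < 40                        # very short mails look spammy

DETECTORS = (keyword_filter, header_filter, length_filter)

def alliance_verdict(msg, quorum=2):
    """Flag as spam only when at least `quorum` detectors agree."""
    votes = sum(detector(msg) for detector in DETECTORS)
    return votes >= quorum

print(alliance_verdict("You are the lucky LOTTERY winner!"))   # True: 2 votes
print(alliance_verdict("Hi, are we still on for lunch tomorrow at the usual place?"))  # False: 0 votes
```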
5

SQL Implementation of Value Reduction with Multiset Decision Tables

Chen, Chen 16 May 2014 (has links)
No description available.
6

Classification Models in Clinical Decision Making

Gil-Herrera, Eleazar 01 January 2013 (has links)
In this dissertation, we present a collection of manuscripts describing the development of prognostic models designed to assist clinical decision making. This work is motivated by the limitations of commonly used techniques in producing accessible prognostic models with easily interpretable and clinically credible results, limitations that hinder the widespread utilization of prognostic models in medical practice. Our methodology is based on rough set theory (RST) as a mathematical tool for clinical data analysis. We focus on developing rule-based prognostic models for end-of-life care decision making, in an effort to improve the hospice referral process. The development of the prognostic models is demonstrated on a retrospective data set of 9,103 terminally ill patients containing physiological characteristics, diagnostic information and neurological function values. We develop four RST-based prognostic models and compare them with commonly used classification techniques, including logistic regression, support vector machines, random forests and decision trees, in terms of characteristics related to clinical credibility such as accessibility and accuracy. The RST-based models show accuracy comparable to the other methodologies while providing accessible models whose structure facilitates clinical interpretation. They offer both more insight into the modeling process and more opportunity for the model to incorporate personal information of those making, and affected by, the decision.
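For readers unfamiliar with RST, the sketch below shows the core construction that such rule-based models rest on: indiscernibility classes over selected attributes, and the lower and upper approximations of a decision class. The patient records and attribute names are invented, not drawn from the dissertation's data set.

```python
# Lower/upper approximations over invented patient records.
from collections import defaultdict

patients = [
    # (attribute values, decision)
    ({"age": "old",   "gcs": "low"},  "hospice"),
    ({"age": "old",   "gcs": "low"},  "hospice"),
    ({"age": "old",   "gcs": "high"}, "survive"),
    ({"age": "young", "gcs": "low"},  "hospice"),
    ({"age": "old",   "gcs": "low"},  "survive"),   # conflicts with the first two
]

def approximations(records, attrs, target):
    """Lower/upper approximations of {x : decision(x) == target}."""
    classes = defaultdict(list)                     # indiscernibility classes
    for i, (vals, _) in enumerate(records):
        classes[tuple(vals[a] for a in attrs)].append(i)
    lower, upper = set(), set()
    for members in classes.values():
        decisions = {records[i][1] for i in members}
        if decisions == {target}:                   # class certainly in target
            lower.update(members)
        if target in decisions:                     # class possibly in target
            upper.update(members)
    return lower, upper

low, up = approximations(patients, ("age", "gcs"), "hospice")
print("certain:", sorted(low))    # -> [3]: the only unambiguous class
print("possible:", sorted(up))    # -> [0, 1, 3, 4]: includes the boundary
```

Rules read off the lower approximation are certain, which is what gives RST models their clinically interpretable structure.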
7

A framework of adaptive T-S type rough-fuzzy inference systems (ARFIS)

Lee, Chang Su January 2009 (has links)
[Truncated abstract] Fuzzy inference systems (FIS) are information processing systems that use fuzzy logic to represent the human reasoning process and to make decisions in the uncertain, imprecise environments of our daily lives. Since the introduction of fuzzy set theory, fuzzy inference systems have been widely used for system modeling and industrial plant control in a variety of practical applications, as well as for other decision-making purposes: advanced data analysis in medical research, risk management in business, stock market prediction in finance, data analysis in bioinformatics, and so on. Many approaches have been proposed to automatically generate membership functions and rules, and to subsequently adjust them towards more satisfactory system performance, because one of the most important factors in building a high-quality FIS is the generation of its knowledge base, which consists of membership functions, fuzzy rules, fuzzy logic operators and other components for fuzzy calculations. The design of an FIS comes either from the experience of human experts in the corresponding field or from input and output data observations collected during system operation. It is therefore crucial to generate a high-quality FIS from a highly reliable design scheme that models the desired system process as well as possible. Furthermore, because fuzzy systems lack a learning property of their own, most of the suggested schemes incorporate hybridization techniques to achieve better performance within a fuzzy system framework. ... This systematic enhancement is required to update the FIS so as to produce flexible and robust fuzzy systems for unexpected, unknown inputs from real-world environments. This thesis proposes a general framework of Adaptive T-S (Takagi-Sugeno) type Rough-Fuzzy Inference Systems (ARFIS) for a variety of practical applications, resolving the problems mentioned above in the context of a rough-fuzzy hybridization scheme. Rough set theory is employed to effectively reduce the number of attributes pertaining to the input variables and to obtain a minimal set of decision rules from the input and output data sets. The generated rules are checked for validity before being used as T-S type fuzzy rules. The T-S type fuzzy model is chosen to perform the fuzzy inference process because of its excellent advantages in modeling non-linear systems. A T-S type fuzzy inference system is constructed by automatically generating membership functions with the Fuzzy C-Means (FCM) clustering algorithm and rules with the rough set approach. The generated T-S type rough-fuzzy inference system is then adjusted by the least-squares method and a conjugate gradient descent algorithm towards better performance within a fuzzy system framework. To show the viability of the proposed framework, the performance of ARFIS is compared with other existing approaches in a variety of practical applications: pattern classification, face recognition, and mobile robot navigation. The results are very satisfactory and competitive, and suggest that ARFIS is a suitable new framework for fuzzy inference systems, showing better system performance with fewer attributes and rules in each application.
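As a concrete reference point, the following sketch implements plain first-order T-S inference: Gaussian antecedent memberships, product firing strengths, and a firing-strength-weighted average of linear rule consequents. The rule parameters here are invented, not produced by ARFIS's FCM and rough set generation steps.

```python
# First-order Takagi-Sugeno inference with invented rule parameters.
import numpy as np

# Each rule: antecedent (centres, sigmas) and linear consequent y = a.x + b.
RULES = [
    (np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([1.0, -0.5]), 0.0),
    (np.array([2.0, 2.0]), np.array([1.0, 1.0]), np.array([0.2,  0.8]), 1.0),
]

def ts_infer(x):
    """T-S output: sum_i w_i(x) * y_i(x) / sum_i w_i(x)."""
    x = np.asarray(x, dtype=float)
    weights, outputs = [], []
    for centre, sigma, coeffs, bias in RULES:
        w = np.prod(np.exp(-0.5 * ((x - centre) / sigma) ** 2))  # firing strength
        weights.append(w)
        outputs.append(coeffs @ x + bias)                        # local linear model
    weights = np.array(weights)
    return float(np.dot(weights, outputs) / weights.sum())

print(ts_infer([0.0, 0.0]))   # ~0.02: rule 1 dominates (its local output is 0.0)
print(ts_infer([2.0, 2.0]))   # ~2.96: rule 2 dominates (its local output is 3.0)
```

In the thesis's scheme, the consequent parameters would be fitted by least squares and the antecedents tuned by conjugate gradient descent rather than fixed by hand.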
8

A rough set approach to bushings fault detection

Mpanza, Lindokuhle Justice 06 June 2012 (has links)
M. Ing. / Fault detection tools have gained popularity in recent years due to the increasing need for reliable and predictable equipment. Transformer bushings account for the majority of transformer faults; hence, to uphold the integrity of the power transmission and distribution system, a tool that detects and identifies bushing faults at the developing stage is necessary. Among the numerous tools for bushing monitoring, dissolved gas analysis (DGA) is the most commonly used. Advances in DGA and data storage capabilities have produced large amounts of data and, ultimately, a data analysis crisis. Consequently, computational intelligence methods have been advanced to deal with this data analysis problem and to help in the decision-making process. Numerous computational intelligence approaches have been proposed for bushing fault detection. Most of these approaches focus on prediction accuracy, and not much research has investigated the interpretability of the decisions derived from these systems. This work proposes a rough set theory (RST) model for bushing fault detection based on DGA data analyzed using the IEEE C57.104 and IEC 60599 standards. RST is a rule-based technique suitable for analyzing vague, uncertain and imprecise data: it extracts rules from the data to model the system, and these rules are used both for prediction and for interpreting the decision process. The fewer the rules, the easier the model is to interpret. The performance of RST depends on the discretization technique employed. Equal frequency bin (EFB), Boolean reasoning (BR) and entropy partition (EP) discretization are each used to develop an RST model. The model trained on EFB data performs best: the accuracies achieved are 96.4%, 96.0% and 91.3% for EFB, BR and EP respectively. This work also proposes an ant colony optimization (ACO) approach to discretization. A model created from ACO-discretized data achieved an accuracy of 96.1%, comparable with the three methods above; considering overall performance, ACO is the better discretization tool since it produces an accurate model with the fewest rules. The rough set tool proposed in this work is benchmarked against multi-layer perceptron (MLP) and radial basis function (RBF) neural networks. The results show that RST modeling of bushings is as capable as the MLP and better than the RBF. The RST, MLP and RBF models are then combined in an ensemble of classifiers, which performs better than the standalone models.
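For concreteness, here is a minimal sketch of equal-frequency-bin (EFB) discretisation, the scheme that produced the best RST model above. The gas concentration values are invented, not taken from the work's DGA data set.

```python
# Equal-frequency-bin discretisation of one DGA variable (invented values).
import numpy as np

def efb_cuts(values, n_bins):
    """Cut points that place roughly the same number of samples in each bin."""
    interior = np.linspace(0, 1, n_bins + 1)[1:-1]   # interior quantiles only
    return np.quantile(values, interior)

hydrogen_ppm = np.array([12, 15, 18, 40, 55, 70, 120, 300, 650, 1400])
cuts = efb_cuts(hydrogen_ppm, 3)
print("cuts:", cuts)                                 # [ 40. 120.]
print("bins:", np.digitize(hydrogen_ppm, cuts))      # [0 0 0 1 1 1 2 2 2 2]
```

Unlike this fixed quantile rule, the ACO variant proposed in the work searches over candidate cut points, which is how it reaches similar accuracy with fewer rules.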
9

Implementation av ett kunskapsbas system för rough set theory med kvantitativa mätningar / Implementation of a Rough Knowledge Base System Supporting Quantitative Measures

Andersson, Robin January 2004 (has links)
This thesis presents the implementation of a knowledge base system for rough sets [Paw92] within the logic programming framework. The combination of rough set theory with logic programming is a novel approach. The presented implementation serves as a prototype system for the ideas presented in [VDM03a, VDM03b]. The system is available at http://www.ida.liu.se/rkbs.

The presented language for describing knowledge in the rough knowledge base caters for implicit definition of rough sets by combining different regions (e.g. upper approximation, lower approximation, boundary) of other defined rough sets. The rough knowledge base system also provides methods for querying the knowledge base and for computing quantitative measures.

We test the implemented system on a medium-sized application example to illustrate the usefulness of the system and the incorporated language. We also provide performance measurements of the system.
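To illustrate the flavour of region combination and quantitative measures, the sketch below derives a boundary region and Pawlak's approximation accuracy from plain sets. The region contents are invented, and the actual system is a logic-programming implementation rather than the set arithmetic shown here.

```python
# Combining rough regions and computing a quantitative measure (a sketch).
def boundary(lower, upper):
    """Objects possibly, but not certainly, in the concept."""
    return upper - lower

def accuracy(lower, upper):
    """Pawlak's approximation accuracy: |lower| / |upper|."""
    return len(lower) / len(upper) if upper else 1.0

ill_lower, ill_upper = {1, 2}, {1, 2, 3, 4}            # regions of a concept "ill"
at_risk = ill_lower | boundary(ill_lower, ill_upper)   # new concept from regions
print("boundary(ill):", boundary(ill_lower, ill_upper))        # {3, 4}
print("at_risk:", at_risk)                                      # {1, 2, 3, 4}
print(f"accuracy(ill) = {accuracy(ill_lower, ill_upper):.2f}")  # 0.50
```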