About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cellular associative neural networks for pattern recognition

Orovas, Christos January 2000
No description available.
2

Analysis of neural network mapping functions : generating evidential support

Howes, Peter John January 1999
No description available.
3

An Automated Rule Refinement System

Andrews, Robert January 2003
Artificial neural networks (ANNs) are essentially a 'black box' technology. The lack of an explanation component prevents the full and complete exploitation of this form of machine learning. During the mid-1990s the field of 'rule extraction' emerged. Rule extraction techniques attempt to derive a human-comprehensible explanation structure from a trained ANN. Andrews et al. (1995) proposed the following reasons for extending the ANN paradigm to include a rule extraction facility:

* provision of a user explanation capability
* extension of the ANN paradigm to 'safety critical' problem domains
* software verification and debugging of ANN components in software systems
* improving the generalization of ANN solutions
* data exploration and induction of scientific theories
* knowledge acquisition for symbolic AI systems

An allied area of research is 'rule refinement'. In rule refinement an initial rule base (i.e. what may be termed 'prior knowledge') is inserted into an ANN by prestructuring some or all of the network architecture, weights, activation functions, learning rates, etc. The rule refinement process then proceeds in the same way as normal rule extraction: (1) train the network on the available data set(s); and (2) extract the 'refined' rules. Very few ANN techniques have the capability to act as a true rule refinement system. Existing techniques, such as KBANN (Towell & Shavlik, 1993), are limited in that the rule base used to initialize the network must be nearly complete, and the refinement process is limited to modifying antecedents. These limitations severely restrict the applicability of existing techniques to real world problem domains. Ideally, a rule refinement technique should be able to deal with incomplete initial rule bases, modify antecedents, remove inaccurate rules, and add new knowledge by generating new rules.
The motivation for this research project was to develop such a rule refinement system and to investigate its efficacy when applied to both nearly complete and incomplete problem domains. The premise behind rule refinement is that the refined rules better represent the actual domain theory than the initial domain theory used to initialize the network. The hypotheses tested in this research include that the utilization of prior domain knowledge will:

* speed up network training,
* produce smaller trained networks,
* produce more accurate trained networks, and
* bias the learning phase towards a solution that 'makes sense' in the problem domain.

Geva, Malmstrom & Sitte (1998) described the Local Cluster (LC) neural net and showed that the LC network was able to learn / approximate complex functions to a high degree of accuracy. The hidden layer of the LC network is comprised of basis functions (the local cluster units) that are composed of sigmoid-based 'ridge' functions. In the general form of the LC network the ridge functions can be oriented in any direction. We describe RULEX, a technique designed to provide an explanation component for its underlying Local Cluster ANN through the extraction of symbolic rules from the weights of the local cluster units of the trained ANN. RULEX exploits a feature of the Restricted Local Cluster (Geva, Andrews & Geva, 2002), namely axis-parallel ridge functions, that allows hyper-rectangular rules of the form

IF ∀ i, 1 ≤ i ≤ n : x_i ∈ [x_i^lower, x_i^upper] THEN pattern belongs to the target class

to be easily extracted from the local functions that comprise the hidden layer of the LC network. RULEX is tested on 14 applications available in the public domain. RULEX results are compared with a leading machine learning technique, See5, with RULEX generally performing as well as See5 and in some cases outperforming it in predictive accuracy.
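The hyper-rectangular rule form above amounts to a conjunction of per-attribute interval tests. A minimal sketch in Python (an illustration only, not the RULEX implementation; the attribute count and interval values are invented):

```python
def matches_rule(x, intervals):
    """True iff every attribute x[i] lies inside its [lower, upper] interval,
    i.e. the input falls within the axis-parallel hyper-rectangle."""
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, intervals))

# Hypothetical extracted rule over two attributes:
# IF x1 in [0.2, 0.8] AND x2 in [1.0, 3.0] THEN pattern belongs to the target class
rule = [(0.2, 0.8), (1.0, 3.0)]

print(matches_rule([0.5, 2.0], rule))  # inside both intervals: True
print(matches_rule([0.5, 4.0], rule))  # second attribute out of range: False
```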
We describe RULEIN, a rule refinement technique that allows symbolic rules to be converted into the parameters that define local cluster functions. RULEIN allows existing domain knowledge to be captured in the architecture of an LC ANN, thus facilitating the first phase of the rule refinement paradigm. RULEIN is tested on a variety of artificial and real world problems. Experimental results indicate that RULEIN satisfies the first requirement of a rule refinement technique by correctly translating a set of symbolic rules into an LC ANN that has the same predictive behaviour as the set of rules from which it was constructed. Experimental results also show that, in cases where a strong domain theory exists, initializing an LC network using RULEIN generally speeds up network training and produces smaller, more accurate trained networks, with the trained network properly representing the underlying domain theory. In cases where a weak domain theory exists the same results are not always apparent. Experiments with the RULEIN / LC / RULEX rule refinement method show that the method is able to remove inaccurate rules from the initial knowledge base, modify rules in the initial knowledge base that are only partially correct, and learn new rules not present in the initial knowledge base. The combination of RULEIN / LC / RULEX is thus shown to be an effective rule refinement technique for use with a Restricted Local Cluster network.
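How a symbolic interval rule might be encoded as the parameters of a sigmoid-based local unit, per the description of RULEIN above, can be sketched as follows. This is a hedged illustration, not RULEIN itself; the product form and the steepness parameter k are assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def local_unit(x, intervals, k=20.0):
    """Axis-parallel 'ridge' response: close to 1 when every attribute lies
    inside its interval, falling towards 0 outside. k (assumed) controls
    how sharply the response drops at the interval boundaries."""
    r = 1.0
    for xi, (lo, hi) in zip(x, intervals):
        r *= sigmoid(k * (xi - lo)) * sigmoid(k * (hi - xi))
    return r

rule = [(0.2, 0.8)]              # IF x1 in [0.2, 0.8] THEN target class
print(local_unit([0.5], rule))   # well inside the interval: near 1
print(local_unit([1.5], rule))   # outside the interval: near 0
```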
4

A Knowledge-Based Modeling Tool for Classification

Gong, Rongsheng 02 October 2006
No description available.
5

Empirical investigation of decision tree extraction from neural networks

Rangwala, Maimuna H. 08 September 2006
No description available.
6

Enhancing genetic programming for predictive modeling

König, Rikard January 2014
Thesis for the degree of Doctor of Technology in Computer Science, to be defended in public on Tuesday 11 March 2014 at 13:15, room M404, Högskolan i Borås. Opponent: Docent Niklas Lavesson, Blekinge Tekniska Högskola, Karlskrona.
7

Enhancing genetic programming for predictive modeling

König, Rikard January 2014
See separate file, "Abstract.png"
8

Obtaining Accurate and Comprehensible Data Mining Models : An Evolutionary Approach

Johansson, Ulf January 2007
When performing predictive data mining, the use of ensembles is claimed to virtually guarantee increased accuracy compared to the use of single models. Unfortunately, the problem of how to maximize ensemble accuracy is far from solved. In particular, the relationship between ensemble diversity and accuracy is not completely understood, making it hard to efficiently utilize diversity for ensemble creation. Furthermore, most high-accuracy predictive models are opaque, i.e. it is not possible for a human to follow and understand the logic behind a prediction. For some domains, this is unacceptable, since models need to be comprehensible. To obtain comprehensibility, accuracy is often sacrificed by using simpler but transparent models; a trade-off termed the accuracy vs. comprehensibility trade-off. With this trade-off in mind, several researchers have suggested rule extraction algorithms, where opaque models are transformed into comprehensible models, keeping an acceptable accuracy. In this thesis, two novel algorithms based on Genetic Programming are suggested. The first algorithm (GEMS) is used for ensemble creation, and the second (G-REX) is used for rule extraction from opaque models. The main property of GEMS is the ability to combine smaller ensembles and individual models in an almost arbitrary way. Moreover, GEMS can use base models of any kind and the optimization function is very flexible, easily permitting inclusion of, for instance, diversity measures. In the experimentation, GEMS obtained accuracies higher than both straightforward design choices and published results for Random Forests and AdaBoost. The key quality of G-REX is the inherent ability to explicitly control the accuracy vs. comprehensibility trade-off. Compared to the standard tree inducers C5.0 and CART, and some well-known rule extraction algorithms, rules extracted by G-REX are significantly more accurate and compact. 
Most importantly, G-REX is thoroughly evaluated and found to meet all relevant evaluation criteria for rule extraction algorithms, thus establishing G-REX as the algorithm to benchmark against.
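The fidelity-driven evolutionary search that rule extractors like G-REX perform can be illustrated with a much-reduced sketch: evolve a simple threshold rule so that it mimics an opaque model's predictions. The opaque model, the rule representation, and the mutation scheme here are all invented stand-ins, not G-REX itself:

```python
import random

random.seed(0)

# Invented stand-in for an opaque model: a weighted sum with a cutoff.
def opaque_model(x):
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

# Candidate rules are single-attribute thresholds: (attribute index, threshold).
def rule_predict(rule, x):
    attr, thr = rule
    return 1 if x[attr] > thr else 0

def fidelity(rule, data):
    """Fraction of inputs on which the rule agrees with the opaque model."""
    return sum(rule_predict(rule, x) == opaque_model(x) for x in data) / len(data)

data = [[random.random(), random.random()] for _ in range(200)]

# Evolutionary loop: mutate the current best rule, keep the fitter candidate.
best = (0, 0.5)
for _ in range(300):
    thr = min(1.0, max(0.0, best[1] + random.gauss(0, 0.1)))
    cand = (random.randrange(2), thr)
    if fidelity(cand, data) >= fidelity(best, data):
        best = cand

print("extracted rule: IF x%d > %.2f, fidelity %.2f"
      % (best[0], best[1], fidelity(best, data)))
```

Real systems evolve full rule trees and weight comprehensibility (rule size) into the fitness alongside fidelity; this sketch keeps only the mimic-the-model core.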
9

Explainable Intrusion Detection Systems using white box techniques

Ables, Jesse 08 December 2023
Artificial Intelligence (AI) has found increasing application in various domains, revolutionizing problem-solving and data analysis. However, in decision-sensitive areas like Intrusion Detection Systems (IDS), trust and reliability are vital, posing challenges for traditional black box AI systems. These black box IDS, while accurate, lack transparency, making it difficult to understand the reasons behind their decisions. This dissertation explores the concept of eXplainable Intrusion Detection Systems (X-IDS), addressing the issue of trust in X-IDS. It examines the limitations of common black box IDS and the complexities of explainability methods, leading to the fundamental question of whether explanations generated by black box explainer modules can be trusted. To address these challenges, this dissertation presents the concept of white box explanations, which are innately explainable. While white box algorithms are typically simpler and more interpretable, they often sacrifice accuracy. However, this work utilizes white box Competitive Learning (CL), which can achieve accuracy competitive with black box IDS. We introduce Rule Extraction (RE) as another white box technique that can be applied to explain black box IDS. It involves training decision trees on the inputs, weights, and outputs of black box models, resulting in human-readable rulesets that serve as global model explanations. These white box techniques offer the benefits of accuracy and trustworthiness, which are challenging to achieve simultaneously. This work aims to address gaps in the existing literature, including the need for highly accurate white box IDS, a methodology for understanding explanations, small testing datasets, and comparisons between white box and black box models. To achieve these goals, the study employs CL and eclectic RE algorithms. CL models offer innate explainability and high accuracy in IDS applications, while eclectic RE enhances trustworthiness.
The contributions of this dissertation include a novel X-IDS architecture featuring Self-Organizing Map (SOM) models that adhere to DARPA’s guidelines for explainable systems, an extended X-IDS architecture incorporating three CL-based algorithms, and a hybrid X-IDS architecture combining a Deep Neural Network (DNN) predictor with a white box eclectic RE explainer. These architectures create more explainable, trustworthy, and accurate X-IDS systems, paving the way for enhanced AI solutions in decision-sensitive domains.
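The decision-tree-based rule extraction described above can be sketched in heavily reduced form, with a one-level decision stump standing in for the tree and a thresholded score standing in for the black box IDS; every name and value here is an invented illustration:

```python
import random

random.seed(1)

# Invented stand-in for a black box IDS: flags an intrusion when the
# first feature (say, a normalized packet rate) exceeds 0.6.
def black_box(x):
    return 1 if x[0] > 0.6 else 0

# Surrogate fitting: a one-level decision stump trained on the black box's
# own predictions, a reduced stand-in for the decision trees described above.
def fit_stump(inputs, labels):
    best = (0, 0.0, 0.0)  # (attribute, threshold, agreement with black box)
    for attr in range(len(inputs[0])):
        for thr in (i / 20 for i in range(21)):
            agree = sum((1 if x[attr] > thr else 0) == y
                        for x, y in zip(inputs, labels)) / len(inputs)
            if agree > best[2]:
                best = (attr, thr, agree)
    return best

inputs = [[random.random(), random.random()] for _ in range(500)]
labels = [black_box(x) for x in inputs]   # explain the model, not the raw data
attr, thr, agree = fit_stump(inputs, labels)
print(f"IF x{attr} > {thr:.2f} THEN intrusion  (fidelity {agree:.2f})")
```

The key move is training the surrogate on the black box's outputs rather than the ground-truth labels, so the resulting ruleset explains the model's behaviour.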
10

Rule Driven Job-Shop Scheduling Derived from Neural Networks through Extraction

Ganduri, Chandrasekhar 18 December 2004
No description available.
