571 |
Σύγκριση μεθόδων δημιουργίας έμπειρων συστημάτων με κανόνες για προβλήματα κατηγοριοποίησης από σύνολα δεδομένων / Comparison of methods for creating rule-based expert systems for classification problems from data sets
Τζετζούμης, Ευάγγελος 31 January 2013 (has links)
Σκοπός της παρούσας εργασίας είναι η σύγκριση διαφόρων μεθόδων κατηγοριοποίησης που στηρίζονται σε αναπαράσταση γνώσης με κανόνες μέσω της δημιουργίας έμπειρων συστημάτων από γνωστά σύνολα δεδομένων. Για την εφαρμογή των μεθόδων και τη δημιουργία και υλοποίηση των αντίστοιχων έμπειρων συστημάτων χρησιμοποιούμε διάφορα εργαλεία όπως: (α) Το ACRES, το οποίο είναι ένα εργαλείο αυτόματης παραγωγής έμπειρων συστημάτων με συντελεστές βεβαιότητας. Οι συντελεστές βεβαιότητος μπορούν να υπολογίζονται κατά δύο τρόπους και επίσης παράγονται δύο τύποι έμπειρων συστημάτων που στηρίζονται σε δύο διαφορετικές μεθόδους συνδυασμού των συντελεστών βεβαιότητας (κατά MYCIN και μιας γενίκευσης αυτής του MYCIN με χρήση βαρών που υπολογίζονται μέσω ενός γενετικού αλγορίθμου). (β) Το WEKA, το οποίο είναι ένα εργαλείο που περιέχει αλγόριθμους μηχανικής μάθησης. Συγκεκριμένα, στην εργασία χρησιμοποιούμε τον αλγόριθμο J48, μια υλοποίηση του γνωστού αλγορίθμου C4.5, που παράγει δένδρα απόφασης, δηλ. κανόνες. (γ) Το CLIPS, το οποίο είναι ένα κέλυφος για προγραμματισμό με κανόνες. Εδώ, εξάγονται οι κανόνες από το δέντρο απόφασης του WEKA και υλοποιούνται στο CLIPS με ενδεχόμενες μετατροπές. (δ) Το FuzzyCLIPS, το οποίο επίσης είναι ένα κέλυφος για την δημιουργία ασαφών ΕΣ. Είναι μια επέκταση του CLIPS που χρησιμοποιεί ασαφείς κανόνες και συντελεστές βεβαιότητος. Εδώ, το έμπειρο σύστημα που παράγεται μέσω του CLIPS μετατρέπεται σε ασαφές έμπειρο σύστημα με ασαφοποίηση κάποιων μεταβλητών. (ε) Το GUI Ant-Miner, το οποίο είναι ένα εργαλείο για την εξαγωγή κανόνων κατηγοριοποίησης από ένα δοσμένο σύνολο δεδομένων, με τη χρήση ενός μοντέλου ακολουθιακής κάλυψης, όπως ο αλγόριθμος AntMiner.
Με βάση τις παραπάνω μεθόδους-εργαλεία δημιουργήθηκαν έμπειρα συστήματα από πέντε σύνολα δεδομένων κατηγοριοποίησης από τη βάση δεδομένων UCI Machine Learning Repository. Τα συστήματα αυτά αξιολογήθηκαν ως προς την ταξινόμηση με βάση γνωστές μετρικές (ορθότητα, ευαισθησία, εξειδίκευση και ακρίβεια). Από τη σύγκριση των μεθόδων και στα πέντε σύνολα δεδομένων, εξάγουμε τα παρακάτω συμπεράσματα: (α) Αν επιθυμούμε αποτελέσματα με μεγαλύτερη ακρίβεια και μεγάλη ταχύτητα, θα πρέπει μάλλον να στραφούμε στην εφαρμογή WEKA. (β) Αν θέλουμε να κάνουμε και παράλληλους υπολογισμούς, η μόνη εφαρμογή που μας παρέχει αυτή τη δυνατότητα είναι το FuzzyCLIPS, θυσιάζοντας όμως λίγη ταχύτητα και ακρίβεια. (γ) Όσον αφορά το GUI Ant-Miner, λειτουργεί τόσο καλά όσο και το WEKA όσον αφορά την ακρίβεια αλλά είναι πιο αργή μέθοδος. (δ) Σχετικά με το ACRES, λειτουργεί καλά όταν δουλεύουμε με υποσύνολα μεταβλητών, έτσι ώστε να παράγεται σχετικά μικρός αριθμός κανόνων και να καλύπτονται σχεδόν όλα τα στιγμιότυπα στο σύνολο έλεγχου. Στα σύνολα δεδομένων μας το ACRES δεν θεωρείται πολύ αξιόπιστο υπό την έννοια ότι αναγκαζόμαστε να δουλεύουμε με υποσύνολο μεταβλητών και όχι όλες τις μεταβλητές του συνόλου δεδομένων. Όσο πιο πολλές μεταβλητές πάρουμε ως υποσύνολο στο ACRES, τόσο πιο αργό γίνεται. / The aim of this thesis is the comparison of several classification methods that are based on knowledge representation with rules via the creation of expert systems from known data sets. For the application of those methods and the creation and implementation of the corresponding expert systems, we use various tools such as: (a) ACRES, which is a tool for automatic production of expert systems with certainty factors. The certainty factors can be calculated via two different methods, and two different types of expert systems can be produced based on two different methods of certainty factor combination (that of MYCIN and a generalization of the MYCIN method that uses weights calculated via a genetic algorithm). (b) WEKA, which is a tool that contains machine learning algorithms. Specifically, we use J48, an implementation of the well-known algorithm C4.5, which produces decision trees, i.e. rules. (c) CLIPS, which is a shell for rule-based programming. Here, the rules encoded in the decision tree produced by WEKA are extracted and implemented in CLIPS with possible changes. (d) FuzzyCLIPS, which is a shell for creating fuzzy expert systems. It is an extension of CLIPS that uses fuzzy rules and certainty factors. Here, the expert system created via CLIPS is converted into a fuzzy expert system by making some variables fuzzy. (e) GUI Ant-Miner, which is a tool for extracting classification rules from a given data set, using a sequential covering model, such as the AntMiner algorithm.
Based on the above methods-tools, expert systems were created from five (5) classification data sets from the UCI Machine Learning Repository. Those systems have been evaluated according to their classification capabilities based on known metrics (accuracy, sensitivity, specificity and precision). From the comparison of the methods on the five data sets, we conclude the following: (a) If we want results with greater accuracy and high speed, we should probably turn to WEKA. (b) If we want to do parallel calculations too, the only tool that provides us with this capability is FuzzyCLIPS, sacrificing a little speed and accuracy. (c) With regards to GUI Ant-Miner, it works as well as WEKA in terms of accuracy, but it is slower. (d) As for ACRES, it works well when we work with subsets of the variables, so that it produces a relatively small number of rules and covers almost all the instances of the test set. For our datasets, ACRES is not considered very reliable in the sense that we are forced to work with subsets of the variables rather than all the variables of the dataset. The more variables we consider as a subset in ACRES, the slower it becomes.
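The MYCIN-style combination of certainty factors and the evaluation metrics named above are standard formulas, and a minimal Python sketch of both is given below. This is an illustration only, not code from the thesis or from ACRES; the function names and the example numbers are assumptions.

```python
# Illustrative sketch: standard MYCIN-style combination of certainty
# factors and the usual classification metrics from a confusion matrix.

def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] using the MYCIN rule."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and precision."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

if __name__ == "__main__":
    print(combine_cf(0.6, 0.4))                 # 0.76
    print(classification_metrics(40, 45, 5, 10))
```

The weighted generalization mentioned in the abstract would adjust how each rule's certainty factor contributes before combination, with the weights tuned by a genetic algorithm.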
|
572 |
Issues of civil liability arising from the use of expert systems
Alheit, Karin 08 1900 (has links)
Computers have become indispensable in all walks of life, causing people to rely
increasingly on their accurate performance. Defective computer programs, the
incorrect use of computer programs and the non-use of computer programs can
cause serious damage. Expert systems are an application of artificial intelligence
techniques whereby the human reasoning process is simulated in a computer system,
enabling the system to act as a human expert when executing a task. Expert
systems are used by professional users as an aid in reaching a decision and by nonprofessional
users to solve a problem or to decide upon a specific course of action.
As such they can be compared to a consumer product through which professional
services are sold. The various parties that may possibly be held liable in the event
of damage suffered by the use of expert systems are identified as consisting of two
main groups, namely the producers and the users. Because of the frequent
exemption of liability for any consequential loss in standard form computer contracts,
the injured user may often have only a delictual action at her disposal. The fault-based
delictual actions in SA law give inadequate protection to unsuspecting software
users who incur personal and property damage through the use of defective expert
systems since it is almost impossible for an unsophisticated injured party to prove the
negligence of the software developer during the technical production process. For
this reason it is recommended that software liability be grounded on strict liability in
analogy to the European Directive on Liability for Defective Products. It is also
pointed out that software standards and quality assurance procedures have a major
role to play in the determination of the elements of wrongfulness and negligence in
software liability and that the software industry should be accorded professional
status to ensure a safe standard of computer programming. / Private Law / LL.D.
|
573 |
KBE I PRODUKTUTVECKLING PÅ SCANIA : En undersökning av potentialen i CATIA Knowledgeware / KBE IN PRODUCT DEVELOPMENT AT SCANIA : An investigation of the potential in CATIA Knowledgeware
Lundin, Jonas; Sköldebrand, Mats January 2008 (has links)
Övergången från CATIA V4 till CATIA V5 innebär nya möjligheter för konstruktörerna på Scania att arbeta med Knowledge Based Engineering, KBE, för att effektivisera och kvalitetssäkra sitt arbete. Då CATIA V5 är en ny plattform som innehåller verktyg med samlingsnamnet knowledgeware, för att bygga in kunskap i modeller ville Scania undersöka potentialen i att arbeta med KBE, och hur detta skulle kunna ske på Scania. Vid traditionell produktutveckling tas en helt ny artikel fram vid behov och ofta innebär detta att arbete som tidigare utförts, görs om igen. Syftet med arbetet är därför att undersöka huruvida KBE i CATIA V5 kan erbjuda möjligheter att återanvända kunskap från tidigare arbete och samtidigt kvalitetssäkra denna, samt utreda vilka knowledgewarelicenser som i så fall kan vara lämpliga för Scania. För att göra detta har en litteraturstudie genomförts för att undersöka vad som har gjorts inom området, och även en intervjustudie har utförts inom R&D på Scania. Vidare har sakkunniga på Linköpings Universitet och Tekniska Högskolan i Jönköping intervjuats. Detta material har sedan sammanställts och analyserats för att sedan resultera i slutsats och rekommendationer. Arbetet har resulterat i en demonstrationsmodell för Scania internt, som baserar sig på den information som framkommit under litteraturstudier och intervjuer. / The transition from CATIA V4 to CATIA V5 opens up new possibilities for designers at Scania to work with Knowledge Based Engineering, KBE, in order to increase efficiency and assure quality. As CATIA V5 is a new platform complete with tools, referred to as knowledgeware, for infusing knowledge into models, Scania wanted to investigate the potential of working with KBE, and how this could be used at Scania. In traditional product development a completely new model is produced when needed, and this often entails performing tasks already undertaken and completed. Therefore, the purpose of this thesis is to ascertain whether or not KBE in CATIA V5 can offer the possibility to reuse knowledge from previous work while assuring its quality, and if so, determine which knowledgeware licenses would be appropriate for Scania. To do this, a literature study was conducted to look into what had been done in this field. Also, an interview study was carried out within Scania's R&D department. In addition to this, interviews were held with experts at Linköping University and Jönköping University. The material was then compiled and analyzed, resulting in conclusions and recommendations. The thesis resulted in a demonstration model for Scania internally, based on the information gathered from literature and interviews.
Working with KBE has its pros and cons, the biggest difficulty being to determine whether or not an article is suitable for KBE-modelling. The benefits of KBE include quality assurance and sizeable reductions in design time. The most useful knowledgeware licenses for Scania are KWA and PKT, which, for example, enable users to implement checks against standards and to easily reuse geometry. The final recommendation of this thesis, based on theory and results, is that Scania should consider introducing KBE as a way of working, and should therefore appoint a group to function as an authority on KBE. This group would provide support and act as a resource in the creation of KBE-models, and also be responsible for their validity and maintenance. Furthermore, work should begin on defining physical interfaces between articles, preferably by the GEO or Layout groups.
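The kind of standards check mentioned above (embedding design rules in a model so that non-compliant parameters are flagged automatically) can be illustrated with a short, generic sketch. This is not CATIA knowledgeware code and the parameters and limits are invented for illustration only.

```python
# Generic illustration of a knowledge-based design-rule check.
# NOTE: hypothetical parameters and limits; real KBE rules would live
# inside the CAD system (e.g. CATIA knowledgeware), not in Python.
from dataclasses import dataclass

@dataclass
class BracketModel:
    thickness_mm: float
    hole_diameter_mm: float
    edge_distance_mm: float

def check_standards(model: BracketModel) -> list[str]:
    """Return a list of rule violations (an empty list means the model passes)."""
    violations = []
    if model.thickness_mm < 2.0:                               # assumed minimum plate thickness
        violations.append("thickness below assumed 2.0 mm minimum")
    if model.edge_distance_mm < 1.5 * model.hole_diameter_mm:  # assumed edge-distance rule
        violations.append("hole too close to edge (< 1.5 x diameter)")
    return violations

print(check_standards(BracketModel(thickness_mm=1.8, hole_diameter_mm=8.0, edge_distance_mm=10.0)))
```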
|
574 |
An analysis of the current nature, status and relevance of data mining tools to enable organizational learning
Hattingh, Martin 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: The use of technological tools has developed rapidly over the past decade or two. As
one of the areas of business technology, Data Mining has been receiving substantial
attention, and thus the study defined the scope and framework for the application of
data mining in the first place.
Because of the wide area of application of data mining, an overview and comparative
analysis was given of the specific data mining tools available to the knowledge
worker.
For the purposes of the study, and because the goals of data mining involve
knowledge extraction, the concept of organizational learning was analysed. The
factors needed to facilitate this learning process were also taken into consideration,
with a view towards enabling the process through their improved availability.
Actual enablement of the learning process, through the improved factor availability
described above, was analysed through the use of each specific tool reviewed.
The salient conclusion of this study was that data mining tools, applied correctly, and
within the correct framework and infrastructure, can enable the organizational
learning process on several levels. Because of the complexity of the learning process,
it was found that there are several factors to consider when implementing a data
mining strategy.
Recommendations were offered for the improved enablement of the organizational
learning process, through establishing more comprehensive technology plans, creating
learning environments, and promoting transparency and accountability. Finally,
suggestions were made for further research on the competitive application of data
mining strategies. / AFRIKAANSE OPSOMMING: Die gebruik van tegnologiese hulpmiddels het gedurende die afgelope dekade of twee
snel toegeneem. As een afdeling van ondernemings tegnologie, is daar aansienlike
belangstelling in 'Data Mining' (die myn van data), en dus het die studie eers die
omvang en raamwerk van 'Data Mining' gedefinieer.
As gevolg van die wye toepassingsveld van 'Data Mining', is daar 'n oorsig en
vergelykende analise gegee van die spesifieke 'Data Mining' hulpmiddels tot
beskikking van die kennis werker.
Vir die doel van die studie, en omdat die doelwitte van 'Data Mining' kennisonttrekking
behels, is die konsep van organisatoriese leer geanaliseer. Die faktore
benodig om hierdie leerproses te fasiliteer is ook in berekening gebring, met die
mikpunt om die proses in staat te stel deur verbeterde beskikbaarheid van hierdie
faktore.
Werklike instaatstelling van die leerproses, deur die verbeterde faktor beskikbaarheid
hierbo beskryf, is geanaliseer deur 'n oorsig van die gebruik van elke spesifieke
hulpmiddel.
Die gevolgtrekking van hierdie studie was dat 'Data Mining' hulpmiddels, indien
korrek toegepas, binne die korrekte raamwerk en infrastruktuur, die organisatoriese
leerproses op verskeie vlakke in staat kan stel. As gevolg van die ingewikkeldheid van
die leerproses, is gevind dat daar verskeie faktore is wat in ag geneem moet word
wanneer 'n 'Data Mining' strategie geïmplementeer word.
Aanbevelings is gemaak vir die verbeterde instaatstelling van die organisatoriese
leerproses, deur die daarstelling van meer omvattende tegnologie planne, die skep van
leer-vriendelike omgewings, en die bevordering van deursigtigheid en rekenskap. In
die laaste plek is daar voorstelle gemaak vir verdere navorsing oor die kompeterende
toepassing van 'Data Mining' strategieë.
|
575 |
A strategic, system-based knowledge management approach to dealing with high error rates in the deployment of point-of-care devices
Khoury, Gregory Robert 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2014. / There is a growing trend towards the use of point-of-care testing in resource-poor settings, in particular in the diagnosis and treatment of infectious diseases such as Human Immunodeficiency Virus (HIV), Tuberculosis (TB) and Malaria. The Alere PIMA CD4 counter is widely used as a point-of-care device in the staging and management of HIV. While the instrument has been extensively validated and shown to be comparable to central laboratory testing, little is known about the error rates of these devices, or about the factors that contribute to those error rates. This research was a retrospective analysis of error rates from 61 PIMA point-of-care devices deployed by Médecins Sans Frontières in nine African countries. The data was collected between January 2011 and June 2013.
The objectives of the study were to determine the overall error rate and, where possible, determine the root cause. Thereafter the study aimed to determine the variables that contribute to the root causes and make recommendations to reduce the error rate.
The overall error rate was determined to be 13.2 percent. The errors were further divided into four root causes, and error rates were assigned to each root cause based on the error codes generated by the instrument. These were found to be operator error (48.4%), instrument error (2.0%), reagent/cartridge error (1%) and sample error (4.3%). A high percentage of the errors were ambiguous (44.3%), meaning that they had more than one possible root cause.
A systems-based knowledge management approach was used to create a qualitative politicised influence diagram, which described the variables that affect each of the root causes. The influence diagram was subjected to loop analysis, where individual loops were described in terms of the knowledge type (tacit or explicit), the knowing type (know-how, know-who, know-what and know-why), and the actors involved with each variable. Where possible, each variable was described as contributing to pre-analytical, analytical or post-analytical error.
Recommendations to reduce the error rates for each of the variables were then made based on the findings.
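The aggregation described above (attributing logged error codes to root causes and computing the overall error rate and the share of each cause) is straightforward to sketch. The snippet below is not the analysis from the thesis; the error-code mapping and the log records are invented for illustration.

```python
# Illustrative aggregation of device error logs into root-cause shares.
# NOTE: the code-to-root-cause mapping below is hypothetical.
from collections import Counter

ROOT_CAUSE = {
    "E100": "operator", "E101": "operator",
    "E200": "instrument",
    "E300": "reagent/cartridge",
    "E400": "sample",
    "E900": "ambiguous",          # codes with more than one possible cause
}

def error_summary(test_results):
    """test_results: list of (passed: bool, error_code or None)."""
    errors = [code for passed, code in test_results if not passed]
    overall_rate = len(errors) / len(test_results)
    by_cause = Counter(ROOT_CAUSE.get(code, "ambiguous") for code in errors)
    shares = {cause: n / len(errors) for cause, n in by_cause.items()}
    return overall_rate, shares

runs = [(True, None)] * 87 + [(False, "E100")] * 6 + [(False, "E900")] * 7
print(error_summary(runs))   # overall error rate 0.13 and shares by root cause
```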
|
576 |
Expert system rules for the classification of road intersections and turns in Hong Kong
Li, Zhijie, 李志杰 January 2005 (has links)
published_or_final_version / abstract / Geography / Master / Master of Philosophy
|
577 |
Sistem za identifikaciju procesnih parametara štampe / The system for processing parameter identification in printing
Zeljković, Željko 25 July 2016 (has links)
Kroz istraživanja je postavljen i razvijen kompleksan model sistema identifikacije procesnih parametara štampe na osnovama savremenih programskih sistema i alata koji omogućuju značajno ubrzanje procesa dolaska do rešenja, čime su se unapredili grafički proizvodni procesi i procesi sticanja i proširivanja znanja. Model je baziran na integrativnim modulima koje čine: sistem identifikacije procesnih parametara štampe na osnovi sistema zasnovanih na algoritamskoj programskoj strukturi, sistem identifikacije procesnih parametara štampe na osnovi sistema zasnovanih na principima gradnje ekspertnih sistema i sistem identifikacije procesnih parametara štampe na osnovi sistema zasnovanih na učenju na daljinu. / The complex model of the printing process parameter identification system has been set up and developed through this research on the basis of modern software systems and tools that significantly speed up the process of reaching solutions, thereby improving graphic production processes and the processes of acquiring and expanding knowledge. The model is based on integrative modules consisting of: a printing process parameter identification system based on an algorithmic program structure, a printing process parameter identification system based on the construction principles of expert systems, and a printing process parameter identification system based on distance-learning principles.
|
578 |
Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm
Civelek, Ferda N. (Ferda Nur) 12 1900 (has links)
Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing the human decision-making process through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, has been introduced. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it extremely practical to define any artificial neural network application. Also, an approach that can be followed to decrease the memory space used by the weight matrix has been introduced. The algorithm was tested using a medical connectionist expert system to show how best to describe not only the disease but also its entire course. The system was first trained using a pattern encoded from the expert system's knowledge-base rules. Then, a series of experiments was carried out using the temporal model and the temporal backpropagation algorithm. The first series of experiments was done to determine whether the training process worked as predicted. In the second series of experiments, the weight matrix in the trained system was defined as a function of time intervals before presenting the system with the learned patterns. The results of the two experiments indicate that both approaches produce correct results. The only difference was that compressing the weight matrix required more training epochs to produce correct results. To measure the correctness of the results, an error measure, the squared error, was summed over all patterns to obtain a total sum of squares.
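The error measure mentioned at the end of the abstract, a sum of squared errors accumulated over all training patterns, is a standard quantity and can be sketched as follows. This is a generic illustration with an ordinary feedforward pass, not the thesis's temporal model or its backpropagation algorithm; the network sizes and data are assumed.

```python
# Generic sketch: total sum-of-squares error over all patterns for a
# small feedforward network (not the thesis's temporal model).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))      # input -> hidden weights (assumed sizes)
W2 = rng.normal(size=(3, 1))      # hidden -> output weights

def forward(x):
    hidden = np.tanh(x @ W1)      # hidden-layer activations
    return np.tanh(hidden @ W2)   # network output

patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets  = np.array([[0], [1], [1], [0]], dtype=float)   # XOR-style targets

# Squared error summed over all patterns gives the total sum of squares.
total_sse = sum(float(np.sum((forward(x) - t) ** 2)) for x, t in zip(patterns, targets))
print(f"total sum of squares over all patterns: {total_sse:.4f}")
```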
|
579 |
Intelligent systems using GMDH algorithms
Unknown Date (has links)
Design of intelligent systems that can learn from the environment and adapt to changes in the environment has been pursued by many researchers in this age of information technology. The Group Method of Data Handling (GMDH) algorithm to be implemented is a multilayered neural network. A neural network consists of neurons, which use information acquired in training to deduce relationships in order to predict future responses. With most software tools, simulating neural-network-based algorithms on a sequential, single-processor machine in languages such as Pascal, C or C++ takes several hours or even days. In this thesis, the GMDH algorithm was modified and implemented in a software tool written in Verilog HDL and tested with a specific application (XOR) to make the simulation faster. The purpose of developing this tool is also to keep it general enough that it can have a wide range of uses, but robust enough that it gives accurate results for all of those uses. Most applications of neural networks are essentially software simulations of the algorithms only, but in this thesis a hardware design of the algorithm is also developed so that it can be easily implemented in hardware using Field Programmable Gate Array (FPGA) type devices. The design is small enough to require a minimum amount of memory, circuit space, and propagation delay. / by Mukul Gupta. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
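In its common form, GMDH builds each layer from small polynomial models fitted to pairs of inputs and keeps only the best candidates, judged on held-out data. The sketch below shows one such layer with quadratic partial descriptions; it is a generic illustration on assumed toy data, not the modified Verilog HDL implementation described in the thesis.

```python
# Generic sketch of one GMDH layer: quadratic partial descriptions fitted
# by least squares for every pair of inputs, ranked by validation error.
import numpy as np
from itertools import combinations

def design(xi, xj):
    """Quadratic partial description: 1, xi, xj, xi*xj, xi^2, xj^2."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = design(X_train[:, i], X_train[:, j])
        coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)   # least-squares fit
        pred_val = design(X_val[:, i], X_val[:, j]) @ coeffs
        err = float(np.mean((pred_val - y_val) ** 2))           # external criterion
        candidates.append((err, i, j, coeffs))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]    # the best partial descriptions feed the next layer

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)   # assumed toy target
best = gmdh_layer(X[:150], y[:150], X[150:], y[150:])
print([(round(err, 4), i, j) for err, i, j, _ in best])
```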
|
580 |
Unifying the conceptual levels of network security through the use of patterns
Unknown Date (has links)
Network architectures are described by the International Organization for Standardization (ISO) reference model, which contains seven layers. The internet uses four of these layers,
of which three are of interest to us. These layers are Internet Protocol (IP) or Network
Layer, Transport Layer and Application Layer. We need to protect against attacks that
may come through any of these layers. In the world of network security, systems are plagued by various attacks, internal and external, which could result in Denial of Service (DoS) and/or other damaging effects. Such attacks and loss of service can be devastating for the users of the system. The implementation of security devices such as Firewalls and Intrusion Detection Systems
(IDS), the protection of network traffic with Virtual Private Networks (VPNs), and the
use of secure protocols for the layers are important to enhance the security at each of
these layers. We have done a survey of the existing network security patterns and we have written the missing patterns. We have developed security patterns for an abstract IDS, a behavior-based IDS and a rule-based IDS, as well as for the Internet Protocol Security (IPSec) and Transport Layer Security (TLS) protocols. We have also identified the need for a VPN pattern and have developed security patterns for an abstract VPN, an IPSec VPN and a TLS VPN. We also evaluated these patterns with respect to several aspects in order to simplify their application by system designers. We have tried to unify the security of the network layers using security patterns by tying together security patterns for network transmission, network protocols and network boundary devices. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
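As a rough illustration of what the rule-based IDS pattern encapsulates (matching observed traffic against explicit signature rules, in contrast to a behavior-based IDS that models normal activity), here is a minimal sketch. The rules and packet fields are invented; this is not code from the dissertation or from any specific IDS product.

```python
# Minimal sketch of the rule-based IDS idea: compare each event against
# explicit signature rules and raise an alert on a match.
# NOTE: the rules and packet fields below are hypothetical.

RULES = [
    {"name": "blocked port", "match": lambda p: p["dst_port"] in {23, 2323}},
    {"name": "oversized payload", "match": lambda p: p["length"] > 65000},
]

def inspect(packet):
    """Return the names of all rules the packet violates (empty list = allow)."""
    return [rule["name"] for rule in RULES if rule["match"](packet)]

packet = {"src": "10.0.0.5", "dst_port": 23, "length": 512}
alerts = inspect(packet)
print(alerts or "no alert")    # ['blocked port']
```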
|