441
The S2 automated agent (S2A2): a training aid for commanders and intelligence officers
Janiszewski, John T. 01 January 1999 (has links)
No description available.
442
Cooperating AIPs in the context-based reasoning paradigm
Johansson, Lars 01 January 1999 (has links)
No description available.
443
Computer integrated machining parameter selection in a job shop using expert systems and algorithms
Gopalakrishnan, B. January 1988 (has links)
The research for this dissertation focuses on the selection of machining parameters for a job shop using expert systems and algorithms. The machining processes are analyzed in detail, and rule-based expert systems are developed for the analysis of process plans based on operation and work-material compatibility, and for the selection of machines, cutting tools, cutting fluids, and tool angles. Database design is examined for this problem. Algorithms are developed to evaluate the selection of machines and cutting tools based on cost considerations. An algorithm for optimizing cutting conditions in turning operations has been developed. Data frameworks and evaluation procedures are developed for other machining operations involving different types of machines and tools. / Ph. D.
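A minimal sketch in Python of the kind of rule-based parameter selection described here; the materials, tool choices, and cutting-speed ranges are illustrative assumptions, not values from the dissertation:

```python
# Toy rule base mapping a job description to machining parameters.
# All rules and numbers below are illustrative assumptions.
RULES = [
    {"work": "mild steel", "op": "turning", "tool": "coated carbide",   "speed_m_min": (90, 200)},
    {"work": "mild steel", "op": "milling", "tool": "HSS end mill",     "speed_m_min": (25, 40)},
    {"work": "aluminium",  "op": "turning", "tool": "uncoated carbide", "speed_m_min": (200, 600)},
]

def select_parameters(work: str, op: str) -> dict:
    """Fire the first rule whose conditions match the job; raise if none covers it."""
    for rule in RULES:
        if rule["work"] == work and rule["op"] == op:
            return {"tool": rule["tool"], "speed_m_min": rule["speed_m_min"]}
    raise LookupError(f"no rule covers ({work!r}, {op!r})")

print(select_parameters("mild steel", "turning"))
# -> {'tool': 'coated carbide', 'speed_m_min': (90, 200)}
```

A full rule base would, as the abstract notes, also check operation/work-material compatibility and weigh machine and tool choices by cost.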
444
The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft
Lombard, Orpha Cornelia 05 1900 (has links)
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft.
Within the military, aircraft represent a significant investment, and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are developed, deployed and evaluated using modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios.
Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in computer science has brought new possibilities for the modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After weighing ontologies and their advantages against the requirements for enhancing the simulation system, an ontology was constructed by following a formal development methodology. Design research, combined with an adaptive development methodology, was conducted in a unique way, thereby contributing to the establishment of design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain.
The research study contributes to better communication among the people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirm that ontologies can be successfully used to support military simulation systems / Computing / M. Tech. (Information Technology)
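The abstract does not reproduce the ontology itself; as a hedged illustration of how such domain knowledge can be captured, here is a small RDF/RDFS fragment in Python using rdflib, with the class names, the property, and the namespace URI all assumed for the example:

```python
from rdflib import Graph, Namespace, RDF, RDFS

CM = Namespace("http://example.org/countermeasure#")  # hypothetical namespace
g = Graph()
g.bind("cm", CM)

# Declare the core classes of the (assumed) countermeasure-evaluation domain.
for name in ("Aircraft", "Threat", "Missile", "Countermeasure", "Flare"):
    g.add((CM[name], RDF.type, RDFS.Class))
g.add((CM.Missile, RDFS.subClassOf, CM.Threat))        # a missile is a threat
g.add((CM.Flare, RDFS.subClassOf, CM.Countermeasure))  # a flare is a countermeasure

# A property linking countermeasures to the threats they are deployed against.
g.add((CM.counters, RDF.type, RDF.Property))
g.add((CM.counters, RDFS.domain, CM.Countermeasure))
g.add((CM.counters, RDFS.range, CM.Threat))

# One individual: an infra-red flare that counters missiles.
g.add((CM.irFlare1, RDF.type, CM.Flare))
g.add((CM.irFlare1, CM.counters, CM.Missile))

print(g.serialize(format="turtle"))
```

A shared vocabulary of this kind is what gives the simulation's users the common terms the study credits with improving communication.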
445
Socio-semantic conversational information access
Sahay, Saurav 15 November 2011 (has links)
The main contributions of this thesis revolve around the development of an integrated conversational recommendation system, combining data and information models with community networks and interactions to leverage multi-modal information access. We have developed a real-time conversational information access community agent that leverages community knowledge by pushing relevant recommendations to users of the community. The recommendations are delivered in the form of web resources, past conversations and people to connect to. The information agent (cobot, for community/collaborative bot) monitors the community conversations and is 'aware' of users' preferences, implicitly capturing their short-term and long-term knowledge models from conversations. The agent draws on health and medical domain knowledge to extract concepts, associations and relationships between concepts; formulates queries for semantic search; and provides socio-semantic recommendations in the conversation after applying various relevance filters to the candidate results. The agent also takes users' verbal intentions in conversations into account when making recommendation decisions.
One of the goals of this thesis is to develop an innovative approach to delivering relevant information using a combination of social networking, information aggregation, semantic search and recommendation techniques. The idea is to facilitate timely and relevant social information access by mixing past community-specific conversational knowledge with web information access to recommend and connect users with relevant information.
Language and interaction create usable memories, useful for making decisions about what actions to take and what information to retain. Cobot leverages these interactions to maintain users' episodic and long-term semantic models. The agent analyzes these memory structures to match and recommend users in conversations according to the contextual information need. The social feedback on the recommendations is registered in the system so that the algorithms can promote community-preferred, contextually relevant resources.
The nodes of the semantic memory are frequent concepts extracted from the user's interactions. The concepts are connected by associations that develop when concepts co-occur frequently. Over time, as the user participates in more interactions, new concepts are added to the semantic memory. Different conversational facets are matched with episodic memories, and a spreading-activation search on the semantic net is performed to generate the top candidate user recommendations for the conversation.
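A compact sketch of a spreading-activation search of the kind described, over a toy semantic network; the graph, association strengths, decay, and threshold are all illustrative assumptions:

```python
import collections

# concept -> [(neighbour, association strength)], built up as concepts co-occur
GRAPH = {
    "diabetes": [("insulin", 0.9), ("diet", 0.6)],
    "insulin":  [("dosage", 0.8)],
    "diet":     [("exercise", 0.7)],
    "dosage":   [],
    "exercise": [],
}

def spread(seeds, decay=0.5, threshold=0.1):
    """Propagate activation outward from seed concepts; return concepts by activation."""
    activation = collections.defaultdict(float)
    frontier = collections.deque((concept, 1.0) for concept in seeds)
    while frontier:
        concept, energy = frontier.popleft()
        if energy < threshold or activation[concept] >= energy:
            continue  # too weak, or already reached with at least this much energy
        activation[concept] = energy
        for neighbour, strength in GRAPH.get(concept, []):
            frontier.append((neighbour, energy * strength * decay))
    return sorted(activation.items(), key=lambda item: -item[1])

print(spread({"diabetes"}))
# [('diabetes', 1.0), ('insulin', 0.45), ('diet', 0.3), ('dosage', 0.18), ('exercise', 0.105)]
```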
The unifying themes in this thesis revolve around the informational and social aspects of a unified information access architecture that integrates semantic extraction and indexing with user modeling and recommendations.
446
Comparison of methods for creating rule-based expert systems for classification problems from data sets
Τζετζούμης, Ευάγγελος 31 January 2013 (has links)
The aim of this thesis is the comparison of several classification methods that are based on knowledge representation with rules, via the creation of expert systems from known data sets. For the application of those methods and the creation and implementation of the corresponding expert systems, we use various tools: (a) ACRES, a tool for the automatic production of expert systems with certainty factors. The certainty factors can be calculated via two different methods, and two different types of expert systems can be produced, based on two different methods of combining certainty factors (that of MYCIN and a generalisation of MYCIN's that uses weights calculated via a genetic algorithm). (b) WEKA, a tool that contains machine learning algorithms. Specifically, we use J48, an implementation of the well-known algorithm C4.5, which produces decision trees, i.e. rules. (c) CLIPS, a shell for rule-based programming. Here, the rules encoded in the decision tree produced by WEKA are extracted and implemented in CLIPS, with possible modifications. (d) FuzzyCLIPS, a shell for creating fuzzy expert systems. It is an extension of CLIPS that uses fuzzy rules and certainty factors. Here, the expert system created via CLIPS is converted into a fuzzy expert system by fuzzifying some variables. (e) GUI Ant-Miner, a tool for extracting classification rules from a given data set using a sequential covering model, such as the AntMiner algorithm.
Based on the above methods and tools, expert systems were created from five classification data sets from the UCI Machine Learning Repository. Those systems were evaluated for their classification capabilities using known metrics (accuracy, sensitivity, specificity and precision). From the comparison of the methods on the five data sets, we draw the following conclusions: (a) If we want results with greater accuracy and high speed, we should probably turn to WEKA. (b) If we also want to do parallel calculations, the only tool that provides this capability is FuzzyCLIPS, at the cost of a little speed and accuracy. (c) GUI Ant-Miner works as well as WEKA in terms of accuracy, but it is slower. (d) ACRES works well when we work with subsets of the variables, so that it produces a relatively small number of rules and covers almost all the instances of the test set. For our data sets, ACRES is not considered very reliable, in the sense that we are forced to work with subsets of variables rather than all the variables of the data set. The more variables we include in the subset, the slower ACRES becomes.
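The MYCIN combination rule that ACRES builds on is compact enough to state directly. A sketch of the classic rule only; ACRES's weighted, genetic-algorithm-tuned generalisation is not reproduced here:

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors in [-1, 1] for the same hypothesis (MYCIN rule)."""
    if cf1 >= 0 and cf2 >= 0:          # both supportive
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:            # both against
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # mixed evidence

print(combine_cf(0.6, 0.4))    # two supporting rules -> 0.76
print(combine_cf(0.6, -0.4))   # conflicting evidence -> ~0.33
```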
447
KBE IN PRODUCT DEVELOPMENT AT SCANIA: An investigation of the potential in CATIA Knowledgeware
Lundin, Jonas; Sköldebrand, Mats January 2008 (has links)
The transition from CATIA V4 to CATIA V5 opens up new possibilities for designers at Scania to work with Knowledge Based Engineering, KBE, in order to increase efficiency and assure quality. As CATIA V5 is a new platform complete with tools, referred to as knowledgeware, for infusing knowledge into models, Scania wanted to investigate the potential of working with KBE, and how this could be used at Scania. In traditional product development a completely new model is produced when needed, and this often entails performing tasks already undertaken and completed. Therefore, the purpose of this thesis is to ascertain whether or not KBE in CATIA V5 can offer the possibility to reuse knowledge from previous work while assuring its quality, and if so, to determine which knowledgeware licenses would be appropriate for Scania. To this end, a literature study was conducted to look into what had been done in this field, and an interview study was carried out within Scania's R&D department. In addition, interviews were held with experts at Linköping University and Jönköping University. The material was then compiled and analyzed, resulting in conclusions and recommendations. The thesis also resulted in an internal demonstration model for Scania, based on the information gathered from the literature and interviews.
Working with KBE has its pros and cons, the biggest difficulty being to determine whether or not an article is suitable for KBE modelling. The benefits of KBE include quality assurance and sizeable reductions in design time. The most useful knowledgeware licenses for Scania are KWA and PKT, which, for example, enable users to implement checks against standards and to easily reuse geometry. The final recommendation of this thesis, based on theory and results, is that Scania should consider introducing KBE as a working method, and should therefore appoint a group to function as an authority on KBE. This group would provide support and act as a resource in the creation of KBE models, and would also be responsible for their validity and maintenance. Furthermore, work should begin on defining physical interfaces between articles, preferably by the GEO or Layout groups.
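Actual Knowledgeware checks are written in CATIA's own rule language; as a language-neutral sketch of the kind of standards check meant here, with parameter names and limits assumed purely for illustration:

```python
# Assumed company standard for hole diameters (mm); not Scania data.
STANDARD_HOLE_DIAMETERS_MM = {4.5, 6.5, 8.5, 10.5}

def check_hole(diameter_mm: float, edge_distance_mm: float) -> list[str]:
    """Return the design-rule violations for one hole feature."""
    violations = []
    if diameter_mm not in STANDARD_HOLE_DIAMETERS_MM:
        violations.append(f"non-standard hole diameter {diameter_mm} mm")
    if edge_distance_mm < 1.5 * diameter_mm:
        violations.append("edge distance below 1.5 x diameter")
    return violations

print(check_hole(7.0, 9.0))
# ['non-standard hole diameter 7.0 mm', 'edge distance below 1.5 x diameter']
```

Embedding such checks in the model is what lets previously captured knowledge be reused and quality-assured automatically.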
448
An analysis of the current nature, status and relevance of data mining tools to enable organizational learning
Hattingh, Martin 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: The use of technological tools has developed rapidly over the past decade or two. As one of the areas of business technology, data mining has been receiving substantial attention, and thus the study first defined the scope and framework for the application of data mining.
Because of the wide area of application of data mining, an overview and comparative analysis was given of the specific data mining tools available to the knowledge worker.
For the purposes of the study, and because the goals of data mining involve knowledge extraction, the concept of organizational learning was analysed. The factors needed to facilitate this learning process were also taken into consideration, with a view towards enabling the process through their improved availability.
Actual enablement of the learning process, through the improved factor availability described above, was analysed through the use of each specific tool reviewed.
The salient conclusion of this study was that data mining tools, applied correctly and within the correct framework and infrastructure, can enable the organizational learning process on several levels. Because of the complexity of the learning process, it was found that there are several factors to consider when implementing a data mining strategy.
Recommendations were offered for the improved enablement of the organizational learning process, through establishing more comprehensive technology plans, creating learning environments, and promoting transparency and accountability. Finally, suggestions were made for further research on the competitive application of data mining strategies.
449
A strategic, system-based knowledge management approach to dealing with high error rates in the deployment of point-of-care devices
Khoury, Gregory Robert 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2014. / There is a growing trend towards the use of point-of-care testing in resource-poor settings, in particular in the diagnosis and treatment of infectious diseases such as Human Immunodeficiency Virus (HIV), tuberculosis (TB) and malaria. The Alere PIMA CD4 counter is widely used as a point-of-care device in the staging and management of HIV. While the instrument has been extensively validated and shown to be comparable to central laboratory testing, little is known about the error rates of these devices, or about the factors that contribute to those error rates. This research was a retrospective analysis of error rates from 61 PIMA point-of-care devices deployed by Médecins Sans Frontières in nine African countries. The data was collected between January 2011 and June 2013.
The objectives of the study were to determine the overall error rate and, where possible, the root cause of each error. Thereafter the study aimed to determine the variables that contribute to the root causes and to make recommendations to reduce the error rate.
The overall error rate was determined to be 13.2 percent. The errors were further divided into four root causes, and error rates were assigned to each root cause based on the error codes generated by the instrument: operator error (48.4%), instrument error (2.0%), reagent/cartridge error (1.0%) and sample error (4.3%). A high percentage of the errors were ambiguous (44.3%), meaning that they had more than one possible root cause.
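The root-cause shares above can be reproduced with a simple tally once each instrument error code is mapped to a cause. A sketch, with a hypothetical code-to-cause mapping (the PIMA's real error codes are not given in the abstract):

```python
from collections import Counter

ERROR_CODE_TO_CAUSE = {   # hypothetical mapping, for illustration only
    "E100": "operator", "E101": "operator",
    "E200": "instrument",
    "E300": "reagent/cartridge",
    "E400": "sample",
}

def error_breakdown(error_codes):
    """Share of each root cause among all recorded errors; unknown codes are ambiguous."""
    causes = Counter(ERROR_CODE_TO_CAUSE.get(code, "ambiguous") for code in error_codes)
    total = sum(causes.values())
    return {cause: count / total for cause, count in causes.items()}

log = ["E100", "E100", "E900", "E400", "E101"]
print(error_breakdown(log))
# {'operator': 0.6, 'ambiguous': 0.2, 'sample': 0.2}
```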
A systems-based knowledge management approach was used to create a qualitative politicised influence diagram describing the variables that affect each of the root causes. The influence diagram was subjected to loop analysis, in which individual loops were described in terms of the knowledge type (tacit or explicit), the knowing type (know-how, know-who, know-what and know-why), and the actors involved with each variable. Where possible, each variable was described as contributing to pre-analytical, analytical or post-analytical error.
Recommendations to reduce the error rates for each of the variables were then made based on the findings.
450
The system for processing parameter identification in printing
Zeljković, Željko 25 July 2016 (has links)
A complex model of a system for identifying printing process parameters is set out and developed through this research, on the basis of modern software systems and tools that significantly speed up the process of reaching solutions, thereby improving graphic production processes and the processes of acquiring and expanding knowledge. The model is based on integrated modules: a printing process parameter identification system built on an algorithmic program structure, one built on the construction principles of expert systems, and one built on distance-learning principles.