41.
Widening the knowledge acquisition bottleneck for intelligent tutoring systems : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in the University of Canterbury / Suraweera, Pramuditha. January 2006
Thesis (Ph. D.)--University of Canterbury, 2006. / Typescript (photocopy). Includes bibliographical references (p. 246-255). Also available via the World Wide Web.
42.
Trust on the semantic web / Cloran, Russell Andrew. January 2006
Thesis (M.Sc. (Computer Science)) - Rhodes University, 2007.
43.
Economic modelling using computational intelligence techniques / Khoza, Msizi Smiso. 09 December 2013
M.Ing. (Electrical & Electronic Engineering Science) / Economic modelling tools have gained popularity in recent years due to the increasing need for greater knowledge to assist policy makers and economists. A number of computational intelligence approaches have been proposed for economic modelling. Most of these approaches focus on the accuracy of prediction, and little research has investigated the interpretability of the decisions derived from these systems. This work proposes the use of computational intelligence techniques (rough set theory (RST) and the multi-layer perceptron (MLP) model) to model the South African economy. RST is a rule-based technique suitable for analysing vague, uncertain and imprecise data. RST extracts rules from the data to model the system. These rules are used for prediction and for interpreting the decision process. The fewer the rules, the easier the model is to interpret. The performance of RST depends on the discretization technique employed. Equal-frequency binning (EFB), Boolean reasoning (BR), entropy partitioning (EP) and the naïve algorithm (NA) were used to develop RST models. The model trained on EFB data performs better than the models trained using BR and EP. RST was used to model South Africa's financial sector, where accuracies of 86.8%, 57.7%, 64.5% and 43% were achieved for EFB, BR, EP and NA respectively. This work also proposes an ensemble of rough set theory and the multi-layer perceptron model to model the South African economy, in which a prediction of the direction of the gross domestic product is presented. It further proposes the use of an auto-associative neural network to impute missing economic data. The auto-associative neural network imputed the ten variables or attributes that were used in the prediction model.
These variables were: Construction contractors rating lack of skilled labour as constraint, Tertiary economic sector contribution to GDP, Income velocity of circulation of money, Total manufacturing production volume, Manufacturing firms rating lack of skilled labour as constraint, Total asset value of banking industry, Nominal unit labour cost, Total mass of Platinum Group Metals (PGMs) mined, Total revenue from sale of PGMs and the Gross Domestic Expenditure (GDE). The level of imputation accuracy achieved varied with the attribute. The accuracy ranged from 85.9% to 98.7%.
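The abstract above names equal-frequency binning (EFB) as the best-performing discretization method for the RST model. A minimal sketch of EFB discretization follows; the growth figures and bin count are illustrative inventions, not data from the study.

```python
def equal_frequency_bins(values, n_bins):
    """Split a numeric attribute into n_bins intervals holding
    roughly equal numbers of observations (EFB discretization)."""
    ordered = sorted(values)
    n = len(ordered)
    # Cut points fall at the observations on each quantile boundary.
    return [ordered[(i * n) // n_bins] for i in range(1, n_bins)]

def discretise(value, cuts):
    """Map a raw value to the index of the interval it falls in."""
    for i, cut in enumerate(cuts):
        if value < cut:
            return i
    return len(cuts)

# Illustrative growth-rate-like figures, not the study's series.
growth = [0.5, 1.2, 1.9, 2.3, 2.8, 3.1, 3.6, 4.0, 4.4, 5.2, 5.9, 6.3]
cuts = equal_frequency_bins(growth, 3)       # two cut points
labels = [discretise(v, cuts) for v in growth]
```

Each of the three resulting bins ends up with the same number of observations, which is the property that distinguishes EFB from equal-width binning.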
44.
Trust on the semantic web / Cloran, Russell Andrew. 07 August 2006
The Semantic Web is a vision to create a “web of knowledge”: an extension of the Web as we know it, creating an information space usable by machines in very rich ways. The technologies which make up the Semantic Web allow machines to reason across information gathered from the Web, presenting only relevant results and inferences to the user. Users of the Web in its current form assess the credibility of the information they gather in a number of different ways. If processing happens without the user being able to check the source and credibility of each piece of information used, the user must be able to trust that the machine has used trustworthy information at each step. The machine should therefore be able to automatically assess the credibility of each piece of information it gathers from the Web. A case study on advanced checks for website credibility is presented; the site examined is found to be credible, despite failing many of the checks presented. A website with a backend based on RDF technologies was constructed, yielding a better understanding of RDF technologies and good knowledge of the RAP and Redland RDF application frameworks. The second aim of constructing the website was to gather information for testing various trust metrics; the website did not gain widespread support, however, and not enough data was gathered for this. Techniques for presenting RDF data to users were also developed during website development, and these are discussed. Experiences in gathering RDF data are presented next. A scutter was successfully developed, and the data smushed to create a database in which uniquely identifiable objects were linked, even where gathered from different sources. Finally, the use of digital signatures as a means of linking an author and content produced by that author is presented.
RDF/XML canonicalisation is discussed as a way to provide cryptographic checking of RDF graphs, rather than simply checking at the document level. The notion of canonicalisation on the semantic, structural and syntactic levels is proposed. A combination of an existing canonicalisation algorithm and a restricted RDF/XML dialect is presented as a solution to the RDF/XML canonicalisation problem. We conclude that a trusted Semantic Web is possible, with buy-in from publishing and consuming parties.
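The "smushing" step mentioned above — merging nodes that denote the same real-world entity across data gathered from different sources — can be sketched as a union-find keyed on an inverse-functional property such as foaf:mbox. The property names and mailbox values below are illustrative assumptions, not data from the dissertation's scutter.

```python
class Smusher:
    """Merge subject nodes that share a value for an inverse-functional
    property (IFP), so triples about one entity coalesce."""

    def __init__(self, ifps):
        self.ifps = set(ifps)
        self.parent = {}      # union-find forest over node ids
        self.ifp_owner = {}   # (property, value) -> first node seen

    def find(self, node):
        """Return the canonical representative of node's merge set."""
        self.parent.setdefault(node, node)
        while self.parent[node] != node:
            # Path halving keeps the forest shallow.
            self.parent[node] = self.parent[self.parent[node]]
            node = self.parent[node]
        return node

    def add(self, subject, prop, value):
        """Record a triple; merge subjects sharing an IFP value."""
        if prop in self.ifps:
            key = (prop, value)
            if key in self.ifp_owner:
                self.parent[self.find(subject)] = self.find(self.ifp_owner[key])
            else:
                self.ifp_owner[key] = subject

# Two sources use different blank-node ids for the same person.
s = Smusher(ifps={"foaf:mbox"})
s.add("_:a", "foaf:mbox", "mailto:russell@example.org")
s.add("_:b", "foaf:mbox", "mailto:russell@example.org")
```

After these calls, `s.find("_:a")` and `s.find("_:b")` return the same representative, so statements about either node can be stored under one identity.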
45.
Ferramenta de aquisição de conhecimento por modelos explícitos [Knowledge acquisition tool using explicit models] / Schiavini, Marcos Melo. 20 February 1991
Advisor: Marcio Luiz de Andrade Netto / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica
Previous issue date: 1991 / Abstract: The thesis presents a contribution to speed up and organise the knowledge acquisition process needed in the development of expert systems. A computer-aided knowledge acquisition and engineering tool - CAKE - that employs a domain model in its interaction with the expert is proposed. The model is constructed and represented with the help of KADS, a methodology for building knowledge-based systems [WIELINGA 89]. The aim of this work is a knowledge acquisition tool that not only has the advantages of using a model, but also is not limited in applicability to a single domain. For this purpose, we have conceived a tool that makes explicit the model used to guide the knowledge acquisition process. The knowledge engineer can modify the tool to adapt it to his needs. / Master's / Master in Electrical Engineering
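CAKE's central idea — an explicit domain model steering the questions put to the expert, editable by the knowledge engineer rather than hard-coded — can be sketched as follows. The model contents, concept names and prompt wording are invented for illustration; they are not taken from the thesis.

```python
# A KADS-style domain model made explicit as plain data: to retarget
# the tool to a new domain, the knowledge engineer edits this
# structure instead of rewriting the acquisition code.
DOMAIN_MODEL = {
    "concepts": ["fault", "symptom"],
    "relations": [("symptom", "indicates", "fault")],
}

def generate_questions(model):
    """Turn each element of the explicit model into an elicitation
    prompt to put to the domain expert."""
    questions = []
    for concept in model["concepts"]:
        questions.append(f"List known instances of '{concept}'.")
    for src, rel, dst in model["relations"]:
        questions.append(f"Which '{src}' {rel} which '{dst}'?")
    return questions
```

Because the model is data, swapping in a different domain (say, concepts from process control) changes the interview without touching `generate_questions`.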
46.
An analysis of the current nature, status and relevance of data mining tools to enable organizational learning / Hattingh, Martin. 12 1900
Thesis (MComm)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: The use of technological tools has developed rapidly over the past decade or two. As one of the areas of business technology, data mining has been receiving substantial attention, and thus the study first defined the scope and framework for the application of data mining. Because of the wide area of application of data mining, an overview and comparative analysis was given of the specific data mining tools available to the knowledge worker. For the purposes of the study, and because the goals of data mining involve knowledge extraction, the concept of organizational learning was analysed. The factors needed to facilitate this learning process were also taken into consideration, with a view towards enabling the process through their improved availability. Actual enablement of the learning process, through the improved factor availability described above, was analysed through the use of each specific tool reviewed. The salient conclusion of this study was that data mining tools, applied correctly and within the correct framework and infrastructure, can enable the organizational learning process on several levels. Because of the complexity of the learning process, it was found that there are several factors to consider when implementing a data mining strategy. Recommendations were offered for the improved enablement of the organizational learning process, through establishing more comprehensive technology plans, creating learning environments, and promoting transparency and accountability. Finally, suggestions were made for further research on the competitive application of data mining strategies.
47.
Automated On-line Diagnosis and Control Configuration in Robotic Systems Using Model Based Analytical Redundancy / Kmelnitsky, Vitaly M. 22 February 2002
Because of the increasingly demanding tasks that robotic systems are asked to perform, there is a need to make them more reliable, intelligent, versatile and self-sufficient. Furthermore, throughout a robotic system's operation, changes in its internal and external environments arise which can distort trajectory tracking, slow down its performance, decrease its capabilities, and even bring it to a total halt. Changes in robotic systems are inevitable. They have diverse characteristics, magnitudes and origins, from the all-familiar viscous friction to Coulomb/stiction friction, and from structural vibrations to air/underwater environmental change. This thesis presents an on-line environmental Change Detection, Isolation and Accommodation (CDIA) scheme that provides a robotic system with the capabilities to meet demanding requirements and manage ever-emerging changes. The CDIA scheme is structured around a priori known dynamic models of the robotic system and of the changes (faults). In this approach, the system monitors its internal and external environments, detects any changes, identifies and learns them, and makes the necessary corrections to its behavior in order to minimize or counteract their effects. A comprehensive study is presented that deals with every stage, aspect, and variation of the CDIA process. One of the novelties of the proposed approach is that the profile of the change may be either time- or state-dependent. The contribution of the CDIA scheme is twofold, as it provides robustness with respect to unmodeled dynamics and with respect to torque-dependent, state-dependent, structural and external environment changes. The effectiveness of the proposed approach is verified by the development of the CDIA scheme for a SCARA robot. Results of this extensive numerical study are included to verify the applicability of the proposed scheme.
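The detection stage of a model-based analytical-redundancy scheme like CDIA typically compares measured joint torque against the torque predicted by the a priori dynamic model, flagging a change when the residual exceeds a threshold. The one-joint model, parameter values and threshold below are illustrative assumptions, not the SCARA model used in the thesis.

```python
def predicted_torque(inertia, accel, viscous, velocity):
    """Torque from a nominal one-joint model: tau = J*a + b*v."""
    return inertia * accel + viscous * velocity

def detect_change(measured, inertia, accel, viscous, velocity, threshold):
    """Residual-based detection: a residual above the threshold
    signals an unmodelled internal or external change (fault)."""
    residual = measured - predicted_torque(inertia, accel, viscous, velocity)
    return abs(residual) > threshold, residual

# Nominal operation: the measurement matches the model, no detection.
ok, r = detect_change(measured=2.05, inertia=1.0, accel=2.0,
                      viscous=0.1, velocity=0.5, threshold=0.2)
# Extra Coulomb-like friction torque appears: the residual trips detection.
hit, r2 = detect_change(measured=2.6, inertia=1.0, accel=2.0,
                        viscous=0.1, velocity=0.5, threshold=0.2)
```

In a full CDIA loop the isolation and accommodation stages would then match the residual's signature against the known fault models and adjust the controller accordingly.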
48.
Data mining with structure adapting neural networks / Alahakoon, Lakpriya Damminda, 1968-. January 2000
Abstract not available
49.
The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft / Lombard, Orpha Cornelia. January 2014
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft. Within the military, aircraft represent a significant investment, and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are deployed, developed and evaluated by utilising modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios. Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in computer science has brought new possibilities for the modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After considering ontologies and their advantages against the requirements for enhancements of the simulation system, an ontology was constructed by following a formal development methodology. Design research, combined with the adaptive methodology of development, was conducted in a unique way, thereby contributing to establishing design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain. The research study contributes to better communication among people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirmed that ontologies can be successfully used to support military simulation systems / Computing / M. Tech. (Information Technology)
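The shared-vocabulary role the abstract describes can be illustrated with a toy class hierarchy queried for subsumption, the basic inference an ontology supports. The class names below are invented stand-ins, not terms from the actual countermeasure ontology.

```python
class TinyOntology:
    """Minimal subsumption reasoning over class/subclass assertions,
    assuming each class has at most one direct superclass."""

    def __init__(self):
        self.parents = {}  # class name -> direct superclass name

    def subclass_of(self, cls, parent):
        """Assert that cls is a direct subclass of parent."""
        self.parents[cls] = parent

    def is_a(self, cls, ancestor):
        """Follow the subclass chain upward (transitive subsumption)."""
        while cls is not None:
            if cls == ancestor:
                return True
            cls = self.parents.get(cls)
        return False

# Illustrative fragment of a countermeasure-evaluation vocabulary.
ont = TinyOntology()
ont.subclass_of("Flare", "InfraRedCountermeasure")
ont.subclass_of("InfraRedCountermeasure", "Countermeasure")
ont.subclass_of("ManPortableAirDefenceSystem", "Threat")
```

A simulation component asking whether a "Flare" counts as a "Countermeasure" gets a consistent answer from the shared hierarchy, which is the communication benefit the study attributes to the ontology.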
50.
Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach / Manaf, Afwarman, 1962-. January 2000
Abstract not available