61

The use of a group decision support system environment for knowledge acquisition.

Liou, Yihwa Irene. January 1989 (has links)
Knowledge acquisition is not only the most important but also the most difficult task knowledge engineers face when they begin to develop expert systems. One of the first problems they encounter is the need to identify at least one individual with appropriate expertise who is able and willing to participate in the development project. They must also be able to use a variety of techniques to elicit the knowledge that they require, including such traditional knowledge acquisition methods as interviewing, thinking-aloud protocol analysis, on-site observation, and repertory grid analysis. As expert system applications have become more complex, knowledge engineers have found that they must work with and tap the domain knowledge of not one but several individuals. They have also discovered that the traditional methods do not work well in eliciting the knowledge residing in a group of individuals. The complexity of the systems, the difficulties inherent in working with multiple experts, and the lack of appropriate tools have combined to make the knowledge acquisition task even more arduous and time-consuming. Group Decision Support Systems (GDSS) have proven to be useful tools for improving the efficiency and effectiveness of a wide range of group activities. It would appear that by bringing experts together in a GDSS environment and using computer-based tools to facilitate group interaction and information exchange, a knowledge engineer could eliminate many of these problems. This research was designed to explore the possibility of using a GDSS environment to facilitate knowledge acquisition from multiple experts. The primary research question was "Does a GDSS environment facilitate the acquisition of knowledge from multiple experts?" The principal contributions of this research are (1) demonstration of the first use of a GDSS environment to elicit knowledge from multiple experts; (2) establishment of a methodology for knowledge acquisition in a GDSS environment; (3) development of process models for acquiring knowledge; (4) development of guidelines for designing and evaluating group support tools; and (5) recognition of some implications of using a computer-supported cooperative approach to extract knowledge from a group of experts. (Abstract shortened with permission of author.)
62

A case-based system for lesson plan construction

Saad, Aslina January 2011 (has links)
Planning for teaching imposes a significant burden on teachers, who need to prepare different lesson plans for different classes according to various constraints. Statistical evidence shows that lesson planning in the Malaysian context is done in isolation and that lesson plan sharing is limited. The purpose of this thesis is to investigate whether a case-based system can reduce the time teachers spend on constructing lesson plans. A case-based system, SmartLP, was designed. In this system a case consists of a problem description and solution pair, and an attribute-value representation is used for the cases. SmartLP is a synthesis-type CBR system, which attempts to create a new solution by combining parts of previous solutions during adaptation. The five activities in the CBR cycle (retrieve, reuse, revise, review and retain) are created via three types of design: application, architectural and user interface. The inputs are the requirements and constraints of the curriculum and the student facilities available; the output is the solution, i.e. the appropriate elements of a lesson plan. The retrieval module offers five types of search: advanced, hierarchical, Boolean, basic and browsing. Solving a problem in this system involves obtaining a problem description, measuring the similarity of the current problem to previous problems stored in a database, retrieving one or more similar cases and attempting to reuse the solution of the retrieved cases, possibly after adaptation. Case adaptation for multiple lesson plans helps teachers to customise the retrieved plan to suit their constraints. This is followed by case revision, which allows users to access and revise their constructed lesson plans in the system. Validation mechanisms, through case verification, ensure that the retained cases are of high quality. A formative study was conducted to investigate the effects of SmartLP on performance. The study revealed that all the lesson plans constructed with SmartLP assistance took significantly less time to produce than the control lesson plans constructed without SmartLP assistance, even though the control group had access to computers and other tools. No significant difference in writing quality, as measured by a scoring system, was observed for the control group, which constructed lesson plans for the same tasks without any assistance. The limitations of SmartLP are indicated and the focus of further research is proposed. Keywords: Case-based system, CBR approach, knowledge acquisition, knowledge representation, case representation, evaluation, lesson planning.
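The retrieval step described above (matching a query against stored problem descriptions by weighted attribute-value similarity) can be sketched briefly in Python. The attributes, weights and cases below are invented for illustration and are not SmartLP's actual data model:

```python
# Minimal sketch of weighted attribute-value case retrieval; all names,
# weights and cases are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    problem: dict    # attribute -> value, e.g. {"subject": "maths"}
    solution: str    # the stored lesson plan (here just a label)

def similarity(query: dict, case: Case, weights: dict) -> float:
    """Weighted fraction of matching attribute values, in [0, 1]."""
    total = sum(weights.values())
    score = sum(w for attr, w in weights.items()
                if query.get(attr) == case.problem.get(attr))
    return score / total

case_base = [
    Case({"subject": "maths", "level": "form 2", "duration": 40}, "plan-17"),
    Case({"subject": "maths", "level": "form 1", "duration": 40}, "plan-03"),
    Case({"subject": "science", "level": "form 2", "duration": 80}, "plan-21"),
]
weights = {"subject": 3, "level": 2, "duration": 1}

query = {"subject": "maths", "level": "form 2", "duration": 80}
best = max(case_base, key=lambda c: similarity(query, c, weights))
print(best.solution)   # plan-17: the closest case, reused after adaptation
```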
63

ASKNet : automatically creating semantic knowledge networks from natural language text

Harrington, Brian January 2009 (has links)
This thesis details the creation of ASKNet (Automated Semantic Knowledge Network), a system for creating large-scale semantic networks from natural language texts. Using ASKNet as an example, we will show that by using existing natural language processing (NLP) tools, combined with a novel use of spreading activation theory, it is possible to efficiently create high quality semantic networks on a scale never before achievable. The ASKNet system takes naturally occurring English texts (e.g., newspaper articles) and processes them using existing NLP tools. It then uses the output of those tools to create semantic network fragments representing the meaning of each sentence in the text. Those fragments are then combined by a spreading activation based algorithm that attempts to decide which portions of the networks refer to the same real-world entity. This allows ASKNet to combine the small fragments into a single cohesive resource, which has more expressive power than the sum of its parts. Systems aiming to build semantic resources have typically either overlooked information integration completely, or else dismissed it as being AI-complete, and thus unachievable. In this thesis we will show that information integration is both an integral component of any semantic resource and achievable through a combination of NLP technologies and novel applications of spreading activation theory. While extraction and integration of all knowledge within a text may be AI-complete, we will show that by processing large quantities of text efficiently, we can compensate for minor processing errors and missed relations with volume and creation speed. If relations are too difficult to extract, or we are unsure which nodes should be integrated at any given stage, we can simply leave them to be picked up later, when we have more information or come across a document that explains the concept more clearly. ASKNet is primarily designed as a proof-of-concept system. However, this thesis will show that it is capable of creating semantic networks larger than any existing similar resource in a matter of days, and furthermore that the networks it creates are of sufficient quality to be used for real-world tasks. We will demonstrate that ASKNet can be used to judge the semantic relatedness of words, achieving results comparable to the best state-of-the-art systems.
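The integration step lends itself to a small illustration. The sketch below is a toy version of spreading activation for node matching, with an invented graph and decay values; ASKNet's actual algorithm and network structure are considerably richer:

```python
# Toy sketch of spreading-activation scoring in the spirit of the
# integration step: activation injected at a node of one fragment spreads
# over weighted edges; a high score on a node from another fragment
# suggests the two may denote the same real-world entity. The graph,
# weights and decay below are invented.
from collections import defaultdict

edges = {  # undirected, weighted semantic links between fragment nodes
    ("george_bush#1", "president"): 0.9,
    ("george_bush#1", "texas"): 0.6,
    ("g_w_bush#2", "president"): 0.8,
    ("g_w_bush#2", "texas"): 0.7,
    ("kangaroo#3", "australia"): 0.9,
}
graph = defaultdict(dict)
for (a, b), w in edges.items():
    graph[a][b] = w
    graph[b][a] = w

def spread(source, decay=0.5, steps=3):
    """Propagate activation from `source` for a few steps."""
    act = defaultdict(float)
    act[source] = 1.0
    for _ in range(steps):
        nxt = defaultdict(float, act)
        for node, a in act.items():
            for nbr, w in graph[node].items():
                nxt[nbr] += a * w * decay
        act = nxt
    return act

act = spread("george_bush#1")
# The shared neighbours make g_w_bush#2 score far above kangaroo#3,
# so the first pair is a merge candidate and the second is not.
print(act["g_w_bush#2"], act["kangaroo#3"])
```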
64

An intelligent system for a telecommunications network domain.

02 June 2008 (has links)
Knowledge is today considered one of the most important assets an organization possesses, and a considerable part of it is the knowledge held by the individuals the organization employs. For an intelligent system to perform some of the tasks its human counterparts perform in an organization, it needs to acquire the knowledge those counterparts possess for the specific task. This knowledge must be extracted from the individuals in the organization via knowledge acquisition and then represented in a form the intelligent system can understand and act on. Developing an intelligent system requires both an ontology representing the domain under consideration and the rules that constitute the reasoning behind the system. In this dissertation a development environment for building intelligent systems, the Collaborative Ontology Builder for Reasoning and Analysis (COBRA), was developed. COBRA provides an environment for developing the ontology and rules of an intelligent system. It was used in this study to develop a Cellular telecommunications Network Consistency Checking Intelligent System (CNCCIS), which was implemented in a cellular telecommunications network. / Prof. E.M. Ehlers
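The abstract does not detail COBRA's rule language, so the following is only a generic, hypothetical illustration of the kind of ontology-instance consistency rules a system like CNCCIS might run over cellular network data:

```python
# Generic illustration (not COBRA's actual API): each rule inspects
# instances of a hypothetical "Cell" ontology class and yields violations.
cells = [
    {"id": "C1", "neighbours": ["C2"], "bcch": 62},
    {"id": "C2", "neighbours": ["C1"], "bcch": 62},   # clashes with C1
    {"id": "C3", "neighbours": ["C9"], "bcch": 70},   # dangling neighbour
]

def rule_bcch_clash(cells):
    """Neighbouring cells should not share a BCCH frequency."""
    by_id = {c["id"]: c for c in cells}
    for c in cells:
        for n in c["neighbours"]:
            if n in by_id and by_id[n]["bcch"] == c["bcch"]:
                yield f"{c['id']}/{n}: BCCH clash on {c['bcch']}"

def rule_dangling_neighbour(cells):
    """Every declared neighbour must exist in the network."""
    ids = {c["id"] for c in cells}
    for c in cells:
        for n in c["neighbours"]:
            if n not in ids:
                yield f"{c['id']}: unknown neighbour {n}"

for rule in (rule_bcch_clash, rule_dangling_neighbour):
    for violation in rule(cells):
        print(violation)
```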
65

Knowledge Elicitation for Design Task Sequencing Knowledge

Burge, Janet E. 13 October 1999 (has links)
"There are many types of knowledge involved in producing a design (the process of specifying a description of an artifact that satisfies a collection of constraints [Brown, 1992]). Of these, one of the most crucial is the design plan: the sequence of steps taken to create the design (or a portion of the design). A number of knowledge elicitation methods can be used to obtain this knowledge from the designer. The success of the elicitation depends on the match between the knowledge elicitation method used and the information being sought. The difficulty with obtaining design plan information is that this information may involve implicit knowledge, i.e. knowledge that can not be expressed explicitly. In this thesis, an approach is used that combines two knowledge elicitation techniques: one direct, to directly request the design steps and their sequence, and one indirect, to refine this knowledge by obtaining steps and sequences that may be implicit. The two techniques used in this thesis were Forward Scenario Simulation (FSS), a technique where the domain expert describes how the procedure followed to solve it, and Card Sort, a technique where the domain expert is asked to sort items (usually entities in the domain) along different attributes. The Design Ordering Elicitation System (DOES) was built to perform the knowledge elicitation. This system is a web-based system designed to support remote knowledge elicitation: KE performed without the presence of the knowledge engineer. This system was used to administer knowledge elicitation sessions to evaluate the effectiveness of these techniques at obtaining design steps and their sequencing. The results indicate that using an indirect technique together with a direct technique obtains more alternative sequences for the design steps than using the direct technique alone."
66

Um método de trabalho para auxiliar a definição de requisitos / A work method to aid the requirements definition

De Bortoli, Lis Angela January 1999 (has links)
Software development suffers from many problems. These problems, which gave rise to the software crisis of the 1960s, persist to this day, and Software Engineering practices have been adopted throughout the life cycle in an attempt to minimize them. Requirements definition is considered the most important, decisive and at the same time critical activity in software development, particularly requirements elicitation. Requirements Engineering is the discipline that seeks to systematize the requirements definition process. Organizational information systems are often complex and/or informal, with characteristics that make them difficult to understand, and most existing methodologies do not emphasize the acquisition of knowledge about the problem to be solved. This work presents a method to support knowledge acquisition about information systems, as well as the representation and validation of the acquired knowledge. The proposed method, which systematizes a task that precedes the definition of software requirements (it is, in other words, a method to support requirements elicitation), comprises three stages: elicitation, modeling and validation. In the elicitation stage, knowledge is acquired about the facts and situations that make up the current information system, using techniques such as interviews, observation and an ethnography-based approach; a systematic procedure combining these techniques was developed to guide this stage. At the end of the elicitation stage, textual representations of the elicited objects are produced, together with the Language Extended Lexicon, which describes the language of the application under study. From these representations the system is modeled using workflow representations. In the validation stage, the representations produced by the elicitation and modeling stages are validated with the actors of the information system. From the resulting representations the requirements engineer can define the functional requirements of the software to be built. The method is suitable for environments that already have a defined information system, whether formal or informal. It was applied to a real-world situation, and part of this case study is presented in this work.
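The Language Extended Lexicon mentioned above has a conventional entry structure (a term with its notions, behavioural responses and synonyms); the sketch below illustrates that structure in Python, with invented example content:

```python
# Minimal sketch of a Language Extended Lexicon (LEL) entry such as the
# elicitation stage produces; the structure follows the usual LEL layout,
# while the example content is invented.
from dataclasses import dataclass, field

@dataclass
class LexiconEntry:
    term: str
    notions: list = field(default_factory=list)        # what the term is
    behavioural_responses: list = field(default_factory=list)  # its effects
    synonyms: list = field(default_factory=list)

entry = LexiconEntry(
    term="enrolment form",
    notions=["document a student fills in to register for a course"],
    behavioural_responses=[
        "secretary checks the form and records the enrolment",
        "an invalid form is returned to the student",
    ],
    synonyms=["registration form"],
)
print(entry.term, "->", entry.notions[0])
```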
67

Incremental knowledge acquisition for natural language processing

Pham, Son Bao, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Linguistic patterns have been used widely in shallow methods to develop numerous NLP applications. Approaches for acquiring linguistic patterns can be broadly categorised into three groups: supervised learning, unsupervised learning and manual methods. In supervised learning approaches, a large annotated training corpus is required for the learning algorithms to achieve decent results. However, annotated corpora are expensive to obtain and usually available only for established tasks. Unsupervised learning approaches usually start with a few seed examples and gather some statistics based on a large unannotated corpus to detect new examples that are similar to the seed ones. Most of these approaches either populate lexicons for predefined patterns or learn new patterns for extracting general factual information; hence they are applicable to only a limited number of tasks. Manually creating linguistic patterns has the advantage of utilising an expert's knowledge to overcome the scarcity of annotated data. In tasks with no annotated data available, the manual way seems to be the only choice. One typical problem that occurs with manual approaches is that the combination of multiple patterns, possibly being used at different stages of processing, often causes unintended side effects. Existing approaches, however, do not focus on the practical problem of acquiring those patterns but rather on how to use linguistic patterns for processing text. A systematic way to support the process of manually acquiring linguistic patterns in an efficient manner is long overdue. This thesis presents KAFTIE, an incremental knowledge acquisition framework that strongly supports experts in creating linguistic patterns manually for various NLP tasks. KAFTIE addresses difficulties in manually constructing knowledge bases of linguistic patterns, or rules in general, often faced in existing approaches by: (1) offering a systematic way to create new patterns while ensuring they are consistent; (2) alleviating the difficulty in choosing the right level of generality when creating a new pattern; (3) suggesting how existing patterns can be modified to improve the knowledge base's performance; (4) making the effort in creating a new pattern, or modifying an existing pattern, independent of the knowledge base's size. KAFTIE, therefore, makes it possible for experts to efficiently build large knowledge bases for complex tasks. This thesis also presents the KAFDIS framework for discourse processing using new representation formalisms: the level-of-detail tree and the discourse structure graph.
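The abstract does not spell out KAFTIE's rule mechanics, but points (1) and (4) are characteristic of incremental acquisition with local exception rules, in the ripple-down-rules style. The sketch below illustrates that general idea with invented regular-expression patterns; KAFTIE's actual pattern language is far richer:

```python
# Generic sketch of incremental pattern acquisition with local exceptions
# (ripple-down-rules style). The patterns and labels are invented.
import re
from dataclasses import dataclass, field

@dataclass
class Rule:
    pattern: str                     # rule fires when the regex matches
    label: str
    exceptions: list = field(default_factory=list)

    def classify(self, text):
        if not re.search(self.pattern, text):
            return None
        for ex in self.exceptions:   # a matching exception overrides us
            verdict = ex.classify(text)
            if verdict is not None:
                return verdict
        return self.label

root = Rule(r"", "no-conclusion")    # default rule: matches everything
root.exceptions.append(Rule(r"\boutperforms\b", "positive-result"))

print(root.classify("X outperforms Y"))           # positive-result
print(root.classify("X barely outperforms Y"))    # positive-result (wrong)

# When the expert sees the misclassified case, the fix is a local
# exception on the rule that fired, so all earlier behaviour is preserved
# and the cost of the change is independent of the knowledge base's size:
root.exceptions[0].exceptions.append(
    Rule(r"\bbarely outperforms\b", "neutral-result"))
print(root.classify("X barely outperforms Y"))    # neutral-result
```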
68

Diagnostics, prognostics and fault simulation for rolling element bearings

Sawalhi, Nader, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Vibration signals generated from spalled elements in rolling element bearings (REBs) are investigated in this thesis. A novel signal-processing algorithm to diagnose localized faults in rolling element bearings has been developed and tested on a variety of signals. The algorithm is based on Spectral Kurtosis (SK), which has special qualities for detecting REB faults. The algorithm comprises three steps. It starts by pre-whitening the signal's power spectral density using an autoregressive (AR) model. The impulses, which are contained in the residual of the AR model, are then enhanced using the minimum entropy deconvolution (MED) technique, which effectively deconvolves the effect of the transmission path and clarifies the impulses. Finally the output of the MED filter is decomposed using complex Morlet wavelets, and the SK is calculated to select the best filter for the envelope analysis. Results show the superiority of the developed algorithm and its effectiveness in extracting fault features from the raw vibration signal. The problem of modelling the vibration signals from a spalled bearing in a gearbox environment is also addressed, through the incorporation of a time-varying, non-linear stiffness bearing model into a previously developed gear model. The combined model has the new capacity of modelling localized and extended faults in the different components of the bearing. The simulated signals were found to have the same basic characteristics as measured signals, including a characteristic seen in the measured signals and also referred to in the literature: double pulses corresponding to entry into and exit from a localized fault, which could be made more evident by the MED technique. The simulation model is useful for producing typical fault signals from gearboxes to test new diagnostic and prognostic algorithms. The thesis thus provides two main tools (the SK algorithm and the gear-bearing simulation model) which could be effectively employed to develop a successful prognostic model.
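The overall shape of the diagnostic pipeline (AR pre-whitening, kurtosis-driven band selection, envelope analysis) can be sketched compactly. The sketch below substitutes a crude Butterworth filter bank for the complex Morlet wavelet decomposition, omits the MED step entirely, and uses a synthetic signal with arbitrary parameters, so it illustrates the shape of the approach rather than the thesis's actual algorithm:

```python
# Simplified sketch: AR pre-whitening (Yule-Walker), a filter bank with
# kurtosis-based band selection (a crude stand-in for the Morlet/SK step),
# then envelope analysis. MED and all thesis parameters are omitted.
import numpy as np
from scipy import signal, stats, linalg

fs = 10_000
t = np.arange(0, 0.5, 1 / fs)
# Toy "bearing" signal: periodic impacts exciting a 4 kHz resonance + noise.
impacts = (np.arange(t.size) % (fs // 107) == 0).astype(float)
carrier = signal.lfilter(*signal.butter(2, [3800, 4200], "bandpass", fs=fs),
                         impacts)
x = carrier + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# 1. AR pre-whitening: the residual retains the impulsive content.
order = 30
r = np.correlate(x, x, "full")[x.size - 1: x.size + order] / x.size
a = linalg.solve_toeplitz(r[:-1], r[1:])       # Yule-Walker AR coefficients
residual = signal.lfilter(np.r_[1, -a], 1, x)

# 2. Pick the band where the filtered residual is most impulsive
#    (highest kurtosis), a rough proxy for spectral-kurtosis selection.
bands = [(low, low + 1000) for low in range(500, 4001, 1000)]
def band_kurtosis(band):
    b, a2 = signal.butter(3, band, "bandpass", fs=fs)
    return stats.kurtosis(signal.lfilter(b, a2, residual))
best = max(bands, key=band_kurtosis)

# 3. Envelope spectrum of the best band: fault frequencies show as peaks,
#    here expected near the simulated impact rate of ~108 Hz.
b, a2 = signal.butter(3, best, "bandpass", fs=fs)
env = np.abs(signal.hilbert(signal.lfilter(b, a2, residual)))
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
print(f"best band {best} Hz, envelope peak near {freqs[spec.argmax()]:.0f} Hz")
```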
69

Cross-language Ontology Learning : Incorporating and Exploiting Cross-language Data in the Ontology Learning Process

Hjelm, Hans January 2009 (has links)
An ontology is a knowledge-representation structure, where words, terms or concepts are defined by their mutual hierarchical relations. Ontologies are becoming ever more prevalent in the world of natural language processing, where we currently see a tendency towards using semantics for solving a variety of tasks, particularly tasks related to information access. Ontologies, taxonomies and thesauri (all related notions) are also used in various variants by humans, to standardize business transactions or for finding conceptual relations between terms in, e.g., the medical domain. The acquisition of machine-readable, domain-specific semantic knowledge is time consuming and prone to inconsistencies. The field of ontology learning therefore provides tools for automating the construction of domain ontologies (ontologies describing the entities and relations within a particular field of interest), by analyzing large quantities of domain-specific texts. This thesis studies three main topics within the field of ontology learning. First, we examine which sources of information are useful within an ontology learning system and how the information sources can be combined effectively. Secondly, we do this with a special focus on cross-language text collections, to see if we can learn more from studying several languages at once, than we can from a single-language text collection. Finally, we investigate new approaches to formal and automatic evaluation of the quality of a learned ontology. We demonstrate how to combine information sources from different languages and use them to train automatic classifiers to recognize lexico-semantic relations. The cross-language data is shown to have a positive effect on the quality of the learned ontologies. We also give theoretical and experimental results, showing that our ontology evaluation method is a good complement to and in some aspects improves on the evaluation measures in use today. / För att köpa boken skicka en beställning till exp@ling.su.se/ To order the book send an e-mail to exp@ling.su.se
70

Diagram-Based Support for Collaborative Learning in Mathematical Exercise

WATANABE, Toyohide, MURASE, Yosuke, KOJIRI, Tomoko 01 April 2009 (has links)
No description available.
