1.
The Design of an Oncology Knowledge Base from an Online Health Forum
Ramadan, Omar, 05 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Knowledge base completion is an important task that allows scientists to reason over knowledge bases and discover new facts. In this thesis, a patient-centric knowledge base is designed and constructed using medical entities and relations extracted from the health forum r/cancer. The knowledge base stores information in binary relation triplets. It is enhanced with an is-a relation that is able to represent the hierarchical relationship between different medical entities. An enhanced Neural Tensor Network that utilizes the frequency of occurrence of relation triplets in the dataset is then developed to infer new facts from the enhanced knowledge base. The results show that when the enhanced inference model uses the enhanced knowledge base, a higher accuracy (73.2%) and recall@10 (35.4%) are obtained. In addition, this thesis describes a methodology for knowledge base and associated inference model design that can be applied to other chronic diseases.
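The abstract does not give the model's equations, so the following is only a rough sketch of the idea: a standard Neural Tensor Network score for a triplet, with the triplet's observed frequency folded in as a simple multiplicative weight. The dimensions, parameter names, and the particular frequency-weighting scheme are assumptions, not the thesis's actual design.

```python
import numpy as np

def ntn_score(h, t, W, V, b, u):
    """Standard Neural Tensor Network score for one relation:
    u^T tanh(h^T W[i] t  +  V [h; t]  +  b)."""
    k = W.shape[0]                       # number of tensor slices
    bilinear = np.array([h @ W[i] @ t for i in range(k)])
    linear = V @ np.concatenate([h, t])
    return float(u @ np.tanh(bilinear + linear + b))

def frequency_weighted_score(h, t, params, freq, alpha=0.5):
    """Hypothetical enhancement: scale the NTN score by how often the
    triplet was observed in the forum-derived dataset."""
    base = ntn_score(h, t, *params)
    return base * (1.0 + alpha * np.log1p(freq))

# Toy usage with random embeddings (illustrative only).
rng = np.random.default_rng(0)
d, k = 8, 4                              # embedding size, tensor slices
h, t = rng.normal(size=d), rng.normal(size=d)
params = (rng.normal(size=(k, d, d)),    # W: bilinear tensor
          rng.normal(size=(k, 2 * d)),   # V: linear layer
          rng.normal(size=k),            # b: bias
          rng.normal(size=k))            # u: output weights
print(frequency_weighted_score(h, t, params, freq=12))
```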
2.
Equipment selection in opencast mining using a hybrid knowledge base system and genetic algorithms
Haidar, Ali Doureid, January 1996
No description available.
3.
Knowledge representation combining algebraic aspects with predicate logic in a fault-diagnosis context (original title: Représentation de la connaissance combinant les aspects de l'algèbre à la logique de prédicats dans un contexte de diagnostic de pannes)
Veillette, Michel, January 1996
Three important questions arise when developing diagnostic support systems. Which elements of knowledge are essential for diagnosis? What form should the knowledge representation take so that the engineer can exploit it easily? How should these elements be organized, and which processing mechanisms are involved that make it easier to adapt the computer system to the various installations encountered? These are the questions this thesis addresses. The objective of this thesis is to develop a mode of knowledge representation that is close to the formalisms and models used by the engineer and that can organize knowledge into entities corresponding to the elements of an installation to be diagnosed. This knowledge representation rests on the notion of a component, which groups into a single entity the knowledge elements relating to that component. The component provides flexibility and makes explicit the functional and structural organization of the physical and conceptual elements of the installation. Each component integrates the description of knowledge relating to its inputs and outputs, internal parameters, behaviours, failure models, functions, and heuristics. To make the representation easier for the engineer to exploit, the formalism expresses the algebraic, qualitative, and descriptive relations of the models the engineer uses. To do so, the formalism combines the algebraic aspects of knowledge with predicate logic, which is one of the original aspects of this thesis. This link with predicate logic provides theoretical support that relates the representation to those presented by other authors in the field. The thesis describes the formalism of the representation and the mechanisms that resolve the logical and algebraic dimensions of the represented knowledge. The mechanisms traverse the links defined between the elements of the installation, while preserving the inference paths they use. A prototype was developed and several examples are solved with it. [Abstract shortened by UMI]
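The thesis itself develops a formalism combining algebra with predicate logic; purely to illustrate the component-centred organization it describes (inputs, outputs, internal parameters, behaviours, failure models, heuristics), here is a minimal Python sketch. The class layout and the toy pump example are invented for illustration and are not the author's formalism.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Component:
    """One physical or conceptual element of the installation, grouping
    all knowledge relevant to its diagnosis in a single entity."""
    name: str
    inputs: Dict[str, float] = field(default_factory=dict)
    outputs: Dict[str, float] = field(default_factory=dict)
    parameters: Dict[str, float] = field(default_factory=dict)
    behaviours: List[Callable[["Component"], bool]] = field(default_factory=list)
    failure_modes: Dict[str, str] = field(default_factory=dict)

    def diagnose(self) -> List[str]:
        """Return the failure modes whose associated behaviour check fails
        (simplification: one behaviour per failure mode)."""
        suspects = []
        for (mode, description), check in zip(self.failure_modes.items(),
                                              self.behaviours):
            if not check(self):          # behaviour violated -> suspect mode
                suspects.append(f"{self.name}: {mode} ({description})")
        return suspects

# Toy example: a pump whose output flow should roughly track its input flow.
pump = Component(
    name="pump-1",
    inputs={"flow_in": 10.0},
    outputs={"flow_out": 2.0},
    parameters={"efficiency": 0.9},
    behaviours=[lambda c: c.outputs["flow_out"] >=
                c.parameters["efficiency"] * c.inputs["flow_in"] * 0.5],
    failure_modes={"clogged": "output flow far below expected value"},
)
print(pump.diagnose())
```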
4.
Productivity prediction model based on Bayesian analysis and productivity console
Yun, Seok Jun, 29 August 2005
Software project management is one of the most critical activities in modern software development projects. Without realistic and objective management, the software development process cannot be managed in an effective way. There are three general problems in project management: effort estimation is not accurate, actual status is difficult to understand, and projects are often geographically dispersed. Estimating software development effort is one of the most challenging problems in project management. Various attempts have been made to solve the problem; so far, however, it remains a complex problem. The error rate of a renowned effort estimation model can be higher than 30% of the actual productivity. Therefore, inaccurate estimation results in poor planning and defies effective control of time and budgets in project management. In this research, we have built a productivity prediction model which uses productivity data from an ongoing project to reevaluate the initial productivity estimate and provides managers a better productivity estimate for project management. The actual status of the software project is not easy to understand due to problems inherent in software project attributes. The project attributes are dispersed across the various CASE (Computer-Aided Software Engineering) tools and are difficult to measure because they are not hard material like building blocks. In this research, we have created a productivity console which incorporates an expert system to measure project attributes objectively and provides graphical charts to visualize project status. The productivity console uses project attributes gathered in the KB (Knowledge Base) of PAMPA II (Project Attributes Monitoring and Prediction Associate) that works with CASE tools and collects project attributes from the databases of the tools. The productivity console and PAMPA II work on a network, so geographically dispersed projects can be managed via the Internet without difficulty.
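The abstract does not spell out the underlying Bayesian model, so the sketch below only illustrates the general idea of revising an initial productivity estimate with measurements from an ongoing project, using a simple conjugate normal update with a known observation variance. The variable names and numbers are hypothetical.

```python
import numpy as np

def update_productivity(prior_mean, prior_var, observations, obs_var):
    """Conjugate normal update: combine the initial productivity estimate
    (prior) with productivity measured on the ongoing project."""
    n = len(observations)
    sample_mean = float(np.mean(observations))
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / obs_var)
    return post_mean, post_var

# Initial estimate: 12 LOC/person-hour, fairly uncertain.
prior_mean, prior_var = 12.0, 9.0
# Productivity measured during the first few iterations (hypothetical).
observed = [8.5, 9.2, 7.8, 8.9]
post_mean, post_var = update_productivity(prior_mean, prior_var, observed,
                                           obs_var=4.0)
print(f"revised productivity estimate: {post_mean:.1f} "
      f"(+/- {post_var ** 0.5:.1f})")
```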
5.
An Automatically Generated Lexical Knowledge Base with Soft Definitions
Scaiano, Martin, January 2016
There is a need for methods that understand and represent the meaning of text for use in Artificial Intelligence (AI). This thesis demonstrates a method to automatically extract a lexical knowledge base from dictionaries for the purpose of improving machine reading. Machine reading refers to a process by which a computer processes natural language text into a representation that supports inference or inter-connection with existing knowledge (Clark and Harrison, 2010) [1].
There are a number of linguistic ideas associated with representing and applying the meaning of words which are unaddressed in current knowledge representations. This work draws heavily from the linguistic theory of frame semantics (Fillmore, 1976). A word is not a strictly defined construct; instead, it evokes our knowledge and experiences, and this information is adapted to a given context by human intelligence. This can often be seen in dictionaries, as a word may have many senses, but some are only subtle variations of the same theme or core idea. A further unaddressed issue is that sentences may have multiple reasonable and valid interpretations (or readings).
This thesis postulates that there must be algorithms that work with symbolic representations which can model how words evoke knowledge and then contextualize that knowledge. I attempt to answer this previously unaddressed question, “How can a symbolic representation support multiple interpretations, evoked knowledge, soft word senses, and adaptation of meaning?” Furthermore, I implement and evaluate the proposed solution.
This thesis proposes the use of a knowledge representation called Multiple Interpretation Graphs (MIGs), and a lexical knowledge structure called auto-frames to support contextualization. MIG is used to store a single auto-frame, the representation of a sentence, or an entire text. MIGs and auto-frames are produced from dependency parse trees using an algorithm I call connection search. MIG supports representing multiple different interpretations of a text, while auto-frames combine multiple word senses and information related to the word into one representation. Connection search contextualizes MIGs and auto-frames, and reduces the number of interpretations that are considered valid.
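Connection search itself is defined in the thesis; the toy sketch below (using networkx) only illustrates the general idea of linking words from a sentence's context to nodes of a word's definition-derived graph and keeping the connections that can be established. The node names, the toy "bank" frame, and the hop limit are invented for illustration.

```python
import networkx as nx

def build_auto_frame(word, definition_edges):
    """A toy auto-frame: one graph holding the dependency edges of all
    of the word's definition sentences."""
    frame = nx.DiGraph(word=word)
    frame.add_edges_from(definition_edges)
    return frame

def connection_search(frame, context_words, max_hops=3):
    """Keep the context words that can be connected to the frame's head
    word within a few hops; these connections contextualize the frame."""
    undirected = frame.to_undirected()
    head = frame.graph["word"]
    connections = {}
    for w in context_words:
        if w in undirected and nx.has_path(undirected, head, w):
            path = nx.shortest_path(undirected, head, w)
            if len(path) - 1 <= max_hops:
                connections[w] = path
    return connections

# Toy frame for "bank" built from two definition senses.
bank = build_auto_frame("bank", [
    ("bank", "organization"), ("organization", "money"),  # financial sense
    ("bank", "land"), ("land", "river"),                   # riverside sense
])
# Context from the sentence "she deposited money at the bank".
print(connection_search(bank, ["money", "river", "deposited"]))
```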
In this thesis, as proof of concept and evaluation, I extracted auto-frames from Longman Dictionary of Contemporary English (LDOCE). I take the point of view that a word’s meaning depends on what it is connected to in its definition. I do not use a predetermined set of semantic roles; instead, auto-frames focus on the connections or mappings between a word’s context and its definitions.
Once I have extracted the auto-frames, I demonstrate how they may be contextualized. I then apply the lexical knowledge base to reading comprehension. The results show that this approach can produce good precision on this task, although more research and refinement is needed. The knowledge base and source code is made available to the community at http://martin.scaiano.com/Auto-frames.html or by contacting martin@scaiano.com.

[1] The term machine reading was coined by Etzioni et al. (2006).
6.
Working in a University Setting: Performing an Internship with Miami University’s Information Technology (IT) Services
Coimbatore, Shanti L., 09 November 2007
No description available.
7.
Drivers of Knowledge Base Adoption, Analysis of Czech Corporate Environment
Rakovská, Zuzana, January 2015
This thesis analyses the process of knowledge-base adoption in the enterprise environment. Using data from two knowledge-management systems operated by the company Semanta, s.r.o., we studied the day-to-day interactions of employees using the system and identified the important drivers of system adoption. We began by studying the effect of co-workers' collaborative activities on knowledge creation within the system. It was found that they had a positive and significant impact upon overall knowledge creation and thus on adoption. Secondly, we explored how the newly defined concept of gamification could help determine and encourage an increase in knowledge creation. The use of gamification tools, such as the "Hall of Fame" page, turned out to have a significant influence on the adoption process. Thirdly, we examined how users continually seek knowledge within the system and how asking for missing information and being supplied with answers has an impact on adoption rates. It was shown that the quicker the responses and the more experts dealing with requests, the greater the impact on knowledge-base adoption. Finally, we showed that the size and character of the company deploying the knowledge management system does not influence the adoption drivers. This thesis represents an effort to fill the...
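The thesis's actual econometric specification is not reproduced in the abstract; as a loose illustration of estimating adoption drivers from system logs, the sketch below fits an ordinary least squares model of knowledge-creation counts on collaboration activity, a gamification indicator, and answer response time. The variable names and the synthetic data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200                                      # observed user-weeks (synthetic)
collaboration = rng.poisson(5, n)            # co-workers' collaborative actions
gamified = rng.integers(0, 2, n)             # "Hall of Fame" page enabled?
response_hours = rng.exponential(12, n)      # time until a question is answered
# Synthetic outcome, loosely following the direction of the thesis's findings.
knowledge_created = (2 + 0.8 * collaboration + 1.5 * gamified
                     - 0.05 * response_hours + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([collaboration, gamified, response_hours]))
model = sm.OLS(knowledge_created, X).fit()
print(model.params)   # estimated effect of each driver on knowledge creation
```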
8.
Construction of EFL Teacher Educators’ Knowledge Base in a Teacher Education Program in Nicaragua
Dávila, Angel María, 01 December 2018
The purpose of this qualitative phenomenological study was to understand and describe the sources of Nicaraguan EFL teacher educators’ knowledge base, the types of knowledge and skills that constructed their knowledge base, and the relationship of this knowledge base and classroom practices in a teacher education program at a Nicaraguan University. This study presents a literature review on the sources of knowledge and knowledge base of EFL teacher educators in the field of language teacher education. I used a purposeful sampling technique to select both the research site and the six EFL teacher educators who participated as research participants in this study. Data were collected from three sources: a curriculum analysis, six one-shot semi-structured interviews, and a document analysis of lesson plans, syllabi, and assessment instruments used by the research participants. To analyze the data collected, I used the qualitative data analysis model proposed by Miles, Huberman, and Saldaña (2014). In the findings, I describe the sources of knowledge and a categorization of the knowledge base and skills that Nicaraguan EFL teacher educators possess, as well as the relationship they identified between their knowledge base and their teaching practices in EFL teacher education classrooms. Findings revealed that Nicaraguan EFL teacher educators possess sixteen types of knowledge and fourteen types of skills that resulted from eight sources of knowledge, among which English proficiency, own experiences as language learners, subject knowledge, pedagogical knowledge, teaching experience in EFL teacher education programs, assessment knowledge of language student teachers, and knowledge of students’ L1 seem to be the most important when it has to do with actual teaching in language teacher education classrooms. In addition, according to the findings, the process of becoming an EFL teacher educator may take many years. It begins with the professional coursework teacher educators take in their language teacher education programs, where they first become English teachers. It continues with teaching experiences either in high schools, English teaching centers, or universities. Their professional knowledge as teacher educators is completed through the interaction with EFL preservice student teachers in teacher education classrooms, in which their previous pedagogical, linguistic, and teaching experiences as EFL teachers are transformed. In other words, their professional identity as EFL teacher educators is developed as they begin teaching in EFL teacher education programs. Pursuing this further, this study presents some pedagogical implications based on the findings that can help improve the quality and preparation of EFL teacher educators in Nicaragua. Finally, it offers some avenues for more research regarding the knowledge base of EFL teacher educators in Nicaraguan teacher education programs.
9.
Efficient Extraction and Query Benchmarking of Wikipedia Data
Morsey, Mohamed, 06 January 2014
Knowledge bases are playing an increasingly important role for integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, they are created by relatively small groups of knowledge engineers, and it is very cost intensive to keep them up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches.
However, the DBpedia release process is heavy-weight and the releases are sometimes based on several months old data. Hence, a strategy to keep DBpedia always in synchronization with Wikipedia is highly required. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles, and processes it. DBpedia Live processes that stream on-the-fly to obtain RDF data and updates the DBpedia knowledge base with the newly extracted data. DBpedia Live also publishes the newly added/deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, ontology changes, and changesets publication.
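As an aside on how such a knowledge base is consumed, the sketch below shows one way a client might read facts from the public DBpedia SPARQL endpoint using the SPARQLWrapper library; the endpoint URL and the example resource are illustrative, and DBpedia Live itself is served from a separate endpoint with changeset files that are not shown here.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public DBpedia endpoint (DBpedia Live is served separately).
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?p ?o WHERE {
        <http://dbpedia.org/resource/Leipzig> ?p ?o .
    } LIMIT 10
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```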
Basically, knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their respective data. Furthermore, the triplestores constitute the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission critical for individual projects as well as for data integration on the Data Web in general.
Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and, thus, settled on measuring performance against a relational database which had been converted to RDF by using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data not resembling a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful to compare existing triplestores and provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is by far less homogeneous than suggested by previous benchmarks.
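The actual benchmark creation procedure is described in the thesis; the fragment below is only a schematic of the query-log mining idea: represent each logged SPARQL query by a vector of feature flags and cluster the vectors so that representative query templates can be drawn from each cluster. The feature list, clustering choice, and the tiny query log are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

FEATURES = ["OPTIONAL", "FILTER", "UNION", "DISTINCT", "REGEX", "LIMIT"]

def featurize(query: str) -> list[int]:
    """Binary SPARQL-feature vector for one logged query."""
    q = query.upper()
    return [int(f in q) for f in FEATURES]

# A tiny, invented query log standing in for real DBpedia logs.
query_log = [
    "SELECT ?x WHERE { ?x a dbo:City } LIMIT 100",
    "SELECT DISTINCT ?x WHERE { ?x a dbo:City . FILTER(?pop > 100000) }",
    "SELECT ?x WHERE { { ?x a dbo:Town } UNION { ?x a dbo:City } }",
    "SELECT ?x WHERE { ?x rdfs:label ?l . FILTER regex(?l, 'Leip') }",
]

X = np.array([featurize(q) for q in query_log])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for query, cluster in zip(query_log, clusters):
    print(cluster, query)   # one representative per cluster -> benchmark template
```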
Further, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their inherent data. This task includes several subtasks, and in this thesis we address two of those major subtasks, specifically fact validation and provenance, and data quality. The subtask of fact validation and provenance aims at providing sources for these facts in order to ensure correctness and traceability of the provided knowledge. This subtask is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. We present DeFacto (Deep Fact Validation), which is an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. On the other hand, the subtask of data quality maintenance aims at evaluating and continuously improving the quality of data of the knowledge bases. We present a methodology for assessing the quality of knowledge bases’ data, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
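DeFacto itself combines web search, trustworthiness features, and machine learning; the snippet below is only a bare-bones illustration of the screening step described above: given documents that have already been retrieved for a fact, count how often its subject and object co-occur and turn that into a crude confidence score. The scoring rule and the hard-coded documents are assumptions.

```python
def fact_confidence(subject: str, obj: str, documents: list[str]) -> float:
    """Crude stand-in for DeFacto's scoring: the share of retrieved
    documents in which subject and object are mentioned together."""
    if not documents:
        return 0.0
    hits = sum(1 for doc in documents
               if subject.lower() in doc.lower() and obj.lower() in doc.lower())
    return hits / len(documents)

# Documents would come from issuing keyword queries to a search engine;
# here they are hard-coded for illustration.
docs = [
    "Leipzig is a city in the German state of Saxony.",
    "The trade fair in Leipzig dates back to the Middle Ages.",
    "Saxony borders the Czech Republic.",
]
print(fact_confidence("Leipzig", "Saxony", docs))  # 1/3 of documents support the fact
```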
10.
Context-aware, workflow-based assessment procedures on the basis of semantic knowledge bases (original title: Kontextbezogene, workflowbasierte Assessmentverfahren auf der Grundlage semantischer Wissensbasen)
Molch, Silke, 26 October 2015
This contribution presents and demonstrates, in prototype form, application and deployment scenarios for complex, context-aware real-time assessment and evaluation procedures in the area of operational process management for interdisciplinary, holistic planning. To this end, the respective structural and procedural prerequisites and the methods employed are briefly explained, and their coordinated interplay across the overall workflow is demonstrated.