  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Aplikace principů znalostního managementu ve vybrané firmě / Application of Knowledge Management Principles in Selected Company

Červienka, Juraj January 2013 (has links)
The thesis deals with knowledge management and its principles. It opens with the theoretical foundations of knowledge management, which are followed by a practical part. The theoretical part provides the starting point for the design and deployment of a system for the chosen company. The main aim of the practical part was to build an application for project management and a knowledge repository for the chosen company, with the intent of increasing work efficiency and improving access to information. The resulting application will be deployed into the company's operations.
62

Expertní systém pro volbu vhodné metody využití odpadů / Expert system for choice of proper method for waste utilization

Fikar, Josef January 2011 (has links)
This work develops an expert system for choosing an appropriate method of waste processing. The software is created in VisiRule, a tool built on the Prolog language and included in the WinProlog 4.900 development environment. The work also addresses the construction of a knowledge base for applications of this type and assesses the suitability of possible approaches to building an expert system for the given purpose.
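The thesis itself is built in VisiRule on top of Prolog; the following is only a minimal Python sketch of the kind of rule-based method selection such an expert system performs. The waste attributes, categories, and rules below are illustrative assumptions, not taken from the thesis.

```python
# Minimal rule-based sketch: pick a waste-utilization method from a few
# hand-written rules. Attributes and rules are invented examples.

def suggest_method(waste):
    """Return a list of (method, justification) pairs for a waste description."""
    suggestions = []
    if waste.get("organic") and waste.get("moisture", 0) > 0.6:
        suggestions.append(("anaerobic digestion", "wet organic waste"))
    if waste.get("organic") and waste.get("moisture", 0) <= 0.6:
        suggestions.append(("composting", "dry-enough organic waste"))
    if waste.get("calorific_value_mj_kg", 0) > 15 and not waste.get("hazardous"):
        suggestions.append(("incineration with energy recovery", "high calorific value"))
    if waste.get("recyclable_material") in {"glass", "metal", "paper", "PET"}:
        suggestions.append(("material recycling", f"recyclable: {waste['recyclable_material']}"))
    if waste.get("hazardous"):
        # Hazardous waste overrides all other options.
        suggestions = [("licensed hazardous-waste treatment", "hazardous waste")]
    return suggestions or [("landfill", "no better rule matched")]

if __name__ == "__main__":
    sample = {"organic": True, "moisture": 0.75, "hazardous": False}
    for method, reason in suggest_method(sample):
        print(f"{method}  ({reason})")
```

In a Prolog/VisiRule implementation these rules would be expressed declaratively and chained by the inference engine rather than evaluated in a fixed order as above.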
63

Podpora knowledge managementu v systému ALVAO / Knowledge Management Support in the ALVAO System

Pramuka, Tomáš January 2015 (has links)
This thesis focuses on knowledge management as described in the ITIL library. It analyses existing knowledge management solutions: the knowledge base in the ServiceNow system and the current knowledge base in the ALVAO system. It then describes the design and implementation of an extension of the ALVAO knowledge base into full knowledge management, together with the design and implementation of an integration with Microsoft SharePoint.
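The abstract mentions an integration with Microsoft SharePoint but gives no implementation details. As a hedged illustration only, the sketch below reads items from a SharePoint list over the standard SharePoint REST API; the site URL, list title, field names, and the authentication object are placeholder assumptions, not details from the thesis.

```python
# Illustrative sketch: pull knowledge-article items from a SharePoint list via
# the standard SharePoint REST API. Site URL, list title, and the `auth` object
# are placeholders; real deployments need proper credentials (e.g. OAuth or
# NTLM), which are omitted here.
import requests

SITE_URL = "https://contoso.sharepoint.com/sites/knowledge"   # placeholder
LIST_TITLE = "KnowledgeArticles"                               # placeholder

def fetch_articles(auth):
    url = f"{SITE_URL}/_api/web/lists/getbytitle('{LIST_TITLE}')/items"
    resp = requests.get(
        url,
        headers={"Accept": "application/json;odata=verbose"},
        auth=auth,
    )
    resp.raise_for_status()
    items = resp.json()["d"]["results"]
    # Keep only the fields a knowledge-management front end typically needs.
    return [{"id": item["Id"], "title": item.get("Title")} for item in items]
```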
64

Kontextbezogene, workflowbasierte Assessmentverfahren auf der Grundlage semantischer Wissensbasen / Context-Related, Workflow-Based Assessment Procedures Built on Semantic Knowledge Bases

Molch, Silke January 2015 (has links)
This contribution presents and prototypically demonstrates application and deployment scenarios for complex, context-related real-time assessment and evaluation procedures in operational process management for interdisciplinary, holistic planning. To this end, the respective structural and procedural prerequisites and the methods employed are briefly explained, and their coordinated interplay across the entire course of action is demonstrated.
65

Knowledge Base: Back-end interface and possible uses

Liliequist, Erik, Jonsson, Martin January 2016 (has links)
This paper addresses two different aspects of knowledge bases, also known as knowledge graphs. A knowledge base is defined as a comprehensive, semantically organized, machine-readable collection of universally relevant or domain-specific entities, classes, and facts. The objective of this paper is to explore how a knowledge base can be used to obtain information about an entity. We first present one way to access information from a knowledge base through a back-end interface that takes simple parameters as input and uses them to query the knowledge base; the main goal here is to resolve the right entity so that questions are answered correctly. This is followed by a discussion of the need for knowledge bases and their possible uses, based partly on results from our implementation, but also on other similar implementations and on interviews with potential users in business and society. We conclude that the developed back-end interface performs well enough, with high precision, to be run in an unsupervised system. We also find that the interface can be improved in several ways by focusing on smaller domains of information. Several possible uses have been identified, and a market analysis based on them indicates good market potential. One of the key problems with deploying the interface concerns the credibility of the information in the knowledge base; this is one of the main issues that must be solved before knowledge bases can be fully adopted in business and society.
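The paper does not name the specific knowledge base or API behind its back-end interface. As a hedged sketch only, the example below shows the same idea — simple parameters in, candidate entities out — against Wikidata's public wbsearchentities API; the choice of Wikidata and the function name are assumptions made purely for illustration.

```python
# Minimal sketch of a "simple parameters in, entity out" back-end lookup.
# Wikidata's public API is used only as an example knowledge base.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def lookup_entity(name, language="en", limit=3):
    """Return candidate (id, label, description) tuples for an entity name."""
    params = {
        "action": "wbsearchentities",
        "search": name,
        "language": language,
        "limit": limit,
        "format": "json",
    }
    resp = requests.get(WIKIDATA_API, params=params, timeout=10)
    resp.raise_for_status()
    return [
        (hit["id"], hit.get("label", ""), hit.get("description", ""))
        for hit in resp.json().get("search", [])
    ]

if __name__ == "__main__":
    for qid, label, desc in lookup_entity("Stockholm"):
        print(qid, label, "-", desc)
```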
66

Efficient Extraction and Query Benchmarking of Wikipedia Data

Morsey, Mohamed 12 April 2013 (has links)
Knowledge bases are playing an increasingly important role for integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, are created by relatively small groups of knowledge engineers, and are very cost-intensive to keep up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large, collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia has had a great effect on the Web of Data and has become a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches. However, the DBpedia release process is heavy-weight and releases are sometimes based on data that is several months old. Hence, a strategy for keeping DBpedia in synchronization with Wikipedia is highly desirable. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles and processes it on-the-fly to obtain RDF data, updating the DBpedia knowledge base with the newly extracted data. DBpedia Live also publishes the newly added and deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, ontology changes, and changeset publication. Knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their data. Furthermore, triplestores constitute the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission-critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and thus settled on measuring performance against a relational database that had been converted to RDF, using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data that does not resemble a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering, and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful for comparing existing triplestores and provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is far less homogeneous than suggested by previous benchmarks. Further, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their inherent data.
This task includes several subtasks, and in this thesis we address two of the major ones: fact validation and provenance, and data quality. The fact validation and provenance subtask aims at providing sources for facts in order to ensure the correctness and traceability of the provided knowledge. It is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents, and screening those documents for relevant content. The drawbacks of this process are manifold; most importantly, it is very time-consuming, as the experts have to carry out several searches and must often read several documents. We present DeFacto (Deep Fact Validation), an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. The data quality maintenance subtask, on the other hand, aims at evaluating and continuously improving the quality of the data in knowledge bases. We present a methodology for assessing the quality of knowledge bases' data, which comprises a manual and a semi-automatic process. The first phase covers the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources according to the quality problem taxonomy, via crowdsourcing. This process is supported by a tool in which a user assesses an individual resource and evaluates each of its facts for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
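This is not the thesis's benchmark code; it is only a minimal sketch of the underlying measurement idea — timing a real SPARQL query against the public DBpedia endpoint — using the SPARQLWrapper library. The example query and repetition count are arbitrary assumptions.

```python
# Sketch: time a SPARQL query against the public DBpedia endpoint.
# The query and number of runs are arbitrary; a real benchmark derives its
# query mix from query-log mining and clustering, as the thesis describes.
import time
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?person ?birthDate WHERE {
        ?person a dbo:Scientist ;
                dbo:birthDate ?birthDate .
    } LIMIT 100
""")

timings = []
for _ in range(5):
    start = time.perf_counter()
    results = endpoint.query().convert()
    timings.append(time.perf_counter() - start)

rows = len(results["results"]["bindings"])
print(f"{rows} rows, mean {sum(timings) / len(timings):.3f}s over {len(timings)} runs")
```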
67

Symbolic Semantic Memory in Transformer Language Models

Morain, Robert Kenneth 16 March 2022 (has links)
This paper demonstrates how transformer language models can be improved by giving them access to relevant structured data extracted from a knowledge base. The knowledge base preparation process and the modifications to the transformer models are explained. We evaluate these methods on language modeling and question answering tasks. The results show that even simple knowledge augmentation leads to a 73% reduction in validation loss. These methods also significantly outperform common ways of improving language models, such as increasing the model size or adding more data.
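The thesis modifies the transformer architecture itself; the sketch below shows only the simplest related idea — prepending a retrieved knowledge-base fact to the context and scoring the continuation — using Hugging Face's GPT-2 as a stand-in model. The fact string, prompt, and helper function are invented for illustration and are not from the paper.

```python
# Sketch: compare language-model loss on a continuation with and without a
# prepended knowledge-base fact. GPT-2 is a stand-in; the thesis modifies the
# transformer itself rather than only augmenting its input.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_loss(context, continuation):
    """Cross-entropy loss on `continuation` tokens only, given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, cont_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # ignore context tokens in the loss
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

fact = "Marie Curie was born in Warsaw."          # retrieved KB fact (example)
prompt = "Q: Where was Marie Curie born? A:"
answer = " Warsaw"

print("loss without fact:", continuation_loss(prompt, answer))
print("loss with fact:   ", continuation_loss(fact + " " + prompt, answer))
```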
68

Intelligent Data Mining Techniques for Automatic Service Management

Wang, Qing 07 November 2018 (has links)
Today, as more and more industries enter the artificial intelligence era, business enterprises constantly explore innovative ways to expand their outreach and meet customers' high expectations, with the goal of gaining a competitive advantage in the marketplace. The success of a business, however, relies heavily on its IT services: value-creating activities cannot be accomplished without solid and continuous delivery of IT services, especially in an increasingly intricate and specialized world. Driven by both the growing complexity of IT environments and rapidly changing business needs, service providers are urgently seeking intelligent data mining and machine learning techniques to build a cognitive "brain" for IT service management, capable of automatically understanding, reasoning, and learning from operational data collected from human and virtual engineers during IT service maintenance. The ultimate goal of IT service management optimization is to maximize the automation of routine IT procedures such as problem detection, determination, and resolution. However, fully automating these procedures without any human intervention remains a challenging task. In real IT systems, both step-wise resolution descriptions and scripted resolutions are often logged with their corresponding problematic incidents, and these logs typically contain abundant, valuable human domain knowledge. Hence, modeling, gathering, and utilizing the domain knowledge in IT system maintenance logs plays an extremely crucial role in IT service management optimization. To optimize IT service management through intelligent data mining, three research directions are identified as especially helpful for automatic service management: (1) efficiently extracting and organizing the domain knowledge in IT system maintenance logs; (2) collecting and updating the existing domain knowledge online by interactively recommending possible resolutions; and (3) automatically discovering the latent relations among scripted resolutions and intelligently suggesting proper scripted resolutions for IT problems. My dissertation addresses these challenges by designing and implementing a set of intelligent data-driven solutions, including (1) constructing a domain knowledge base for problem resolution inference; (2) recommending resolutions online in light of the explicit hierarchical resolution categories provided by domain experts; and (3) interactively recommending resolutions using the latent resolution relations learned through a collaborative filtering model.
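As a hedged illustration of the ticket-resolution recommendation idea only (not the dissertation's actual models), the sketch below retrieves candidate resolutions for a new incident by TF-IDF similarity to historical incident descriptions; the example tickets and function name are invented.

```python
# Sketch: recommend resolutions for a new incident by similarity to historical
# incidents. A nearest-neighbour stand-in for the dissertation's recommenders;
# the tickets below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    ("disk usage above 95% on /var", "clean up old log files and extend volume"),
    ("service httpd not responding", "restart httpd and check error_log"),
    ("database connection pool exhausted", "increase pool size and recycle idle connections"),
]

descriptions = [d for d, _ in history]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(descriptions)

def recommend(new_incident, top_k=2):
    """Return the top_k (score, resolution) pairs most similar to the incident."""
    scores = cosine_similarity(vectorizer.transform([new_incident]), matrix)[0]
    ranked = sorted(zip(scores, history), key=lambda x: x[0], reverse=True)
    return [(round(float(s), 3), resolution) for s, (_, resolution) in ranked[:top_k]]

print(recommend("httpd service down on web01"))
```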
69

Fuzzy model rozhodování investora do fotovoltaických technologií v předprojekční fázi / Fuzzy Model of Investor's Decision into Photovoltaic Technologies During Pre-Design Phase

Pavlíček, Michal January 2016 (has links)
The dissertation deals with a fuzzy knowledge base supporting an investor's decision to invest in photovoltaic technologies during the pre-design phase, when the engineering solution is not yet known. Most investors probably have some picture of the costs and risks of investing in photovoltaic technologies, but that picture is limited to each investor's own areas of knowledge. When an investor is planning an investment in costly photovoltaic technologies with only complex, vague, and hard-to-quantify information about the conditions and risks of investment in a specific region, in an environment that is inconsistent and multidimensional, fuzzy logic can be used to handle this complex situation. The dissertation focuses on creating a fuzzy knowledge base from selected projects installed in Europe since 2008 and on its use with an expert system. It also defines and describes the variables involved in the investor's decision-making process. The complete designed architecture of the fuzzy knowledge base is tuned, and five projects of different sizes are tested. The fuzzy knowledge base comprises 24 variables and 187 rule statements in total.
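The actual knowledge base has 24 variables and 187 rule statements; the sketch below only illustrates the basic fuzzy-logic machinery (triangular membership functions, min/max rule evaluation, centroid-style defuzzification) on two invented variables, not the dissertation's model.

```python
# Minimal fuzzy-inference sketch: two invented inputs (annual irradiation,
# feed-in tariff) mapped to an "investment attractiveness" score. Membership
# functions and rules are illustrative, not from the dissertation.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def attractiveness(irradiation_kwh_m2, tariff_eur_mwh):
    irr_high = tri(irradiation_kwh_m2, 1100, 1500, 1900)
    irr_low = tri(irradiation_kwh_m2, 600, 900, 1200)
    tar_high = tri(tariff_eur_mwh, 80, 140, 200)
    tar_low = tri(tariff_eur_mwh, 0, 50, 100)

    # Rules: AND = min, aggregate per output level with max.
    high = min(irr_high, tar_high)                  # good site, good tariff
    low = max(min(irr_low, tar_low), irr_low)       # poor site dominates
    medium = max(min(irr_high, tar_low), min(irr_low, tar_high))

    # Crude centroid defuzzification over levels low=0.2, medium=0.5, high=0.8.
    weight = low + medium + high
    if weight == 0:
        return 0.5
    return (0.2 * low + 0.5 * medium + 0.8 * high) / weight

print(round(attractiveness(1450, 120), 2))
```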
70

A Comparison of the Effects of Instruction Using Traditional Methods to Instruction using Reading Apprenticeship

Lowery, David Carlton 07 August 2010 (has links)
The purpose of this quasi-experimental study was to compare the effects of literature instruction using traditional methods to literature instruction using Reading Apprenticeship (RA) to determine if outcomes of attitude and achievement of students enrolled in World Literature courses are changed. Participants included 104 students from 1 junior college in a southeastern state. Of these 104 students, 68 were taught using a traditional method of instruction, and 36 were taught using the RA method of instruction. Students were administered the Rhody Secondary Reading Attitude Survey to determine attitude scores at the beginning of the semester and attitude scores at the end of the semester. In addition, the Accuplacer-Reading Comprehension Test was administered to assess students' reading achievement at both the beginning of the semester and at the end of the semester. To analyze the data, a repeated-measures MANOVA was used to determine if statistically significant differences were present in students' attitudes and achievement scores based on instruction type. Also, the repeated-measures MANOVA was used to determine if there was an interaction between attitude and achievement scores. After analyzing the data that was collected, the results indicated a statistically significant difference between the attitude scores of students taught literature using traditional instruction and students taught literature using RA instruction. The attitudes of students who were taught World Literature through traditional instructional methods experienced little change, and the attitudes of students who were taught World Literature using the RA method significantly increased. The results of the achievement tests and the interaction were not statistically significant.
