181

Flexible RDF data extraction from Wiktionary - Leveraging the power of community-built linguistic wikis

Brekle, Jonas 26 February 2018 (has links)
We present a declarative approach, implemented in a comprehensive open-source framework (based on DBpedia), to extract lexical-semantic resources (an ontology about language use) from Wiktionary. The data currently includes language, part of speech, senses, definitions, synonyms, taxonomies (hyponyms, hyperonyms, synonyms, antonyms) and translations for each lexical word. The main focus is on flexibility with respect to the loose schema and configurability towards differing language editions of Wiktionary. This is achieved by a declarative mediator/wrapper approach. The goal is to allow the addition of languages just by configuration, without the need for programming, thus enabling the swift and resource-conserving adaptation of wrappers by domain experts. The extracted data is as fine-grained as the source data in Wiktionary and additionally follows the lemon model. It enables use cases like disambiguation or machine translation. By offering a linked data service, we hope to extend DBpedia’s central role in the LOD infrastructure to the world of Open Linguistics.
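The "addition of languages just by configuration" idea can be illustrated with a toy sketch: a per-language configuration maps wikitext patterns to lexical properties, so supporting a new Wiktionary edition means editing data, not code. This is illustrative only; the config keys, patterns, and function names are assumptions, not the framework's actual API.

```python
import re

# Hypothetical declarative wrapper config: each language edition gets its
# own mapping from wikitext line patterns to lexical-semantic properties.
CONFIG = {
    "en": {
        "senses": r"^# (.+)$",
        "synonyms": r"^\* \{\{syn\}\}: (.+)$",
    },
}

def extract(wikitext, lang="en"):
    """Extract (property, value) pairs from a wikitext entry
    using only the declarative config for the given language."""
    pairs = []
    for prop, pattern in CONFIG[lang].items():
        for m in re.finditer(pattern, wikitext, re.MULTILINE):
            pairs.append((prop, m.group(1)))
    return pairs

entry = "# a small domesticated carnivore\n* {{syn}}: house cat"
print(extract(entry))
# → [('senses', 'a small domesticated carnivore'), ('synonyms', 'house cat')]
```

Adding another language edition would then mean adding a `CONFIG` entry for its section conventions, without touching `extract` itself.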
182

Modellierung von touristischen Merkmalen in RDF und Evaluation am Anwendungsfall Vakantieland

Frommhold, Marvin 26 February 2018 (has links)
The tourism domain is a very information-intensive industry. In the field of eTourism, a powerful data schema is therefore required for suitable storage and querying of the data. Until a few years ago, relational databases and document-centric systems were sufficient for this. For today's tourist, however, quickly and easily satisfying their information needs plays an ever greater role. For this reason, semantic technologies are increasingly being used in the eTourism domain, as happened in the transformation of the tourism portal vakantieland.nl into a semantic web application. Such a migration, however, also brings new problems, for example the question of how tourism information can be suitably modelled using the Resource Description Framework (RDF). This thesis investigates this question with respect to modelling the properties of tourist destinations. To this end, an existing eTourism ontology is analysed and, based on it, a suitable schema is defined. The ontology is then subjected to an evolution in order to adapt it to the new schema. To further increase the usefulness of the tourism portal, the existing filter functions are also extended.
183

Quality Assurance of RDB2RDF Mappings

Westphal, Patrick 27 February 2018 (has links)
Today, the Web of Data has evolved into a semantic information network containing large amounts of data. Since such data may stem from different sources, ranging from automatic extraction processes to extensively curated knowledge bases, its quality also varies. Thus, research efforts are currently being made to find methodologies and approaches to measure data quality in the Web of Data. Besides considering the actual data in a quality assessment, taking the process of data generation into account is another possibility, especially for extracted data. An extraction approach that gained popularity in recent years is the mapping of relational databases to RDF (RDB2RDF). By providing definitions of how RDF should be generated from relational database content, huge amounts of data can be extracted automatically. Unfortunately, this also means that single errors in the mapping definitions can affect a considerable portion of the generated data. Thus, from a quality assurance point of view, the assessment of these RDB2RDF mapping definitions is important to guarantee high-quality RDF data. This is not covered in depth by recent quality research and is examined in this thesis. After a structured evaluation of existing approaches, a quality assessment methodology and quality dimensions of importance for RDB2RDF mappings are proposed. The formalization of this methodology is used to define 43 metrics that characterize the quality of an RDB2RDF mapping project. These metrics are also implemented in a software prototype of the proposed methodology, which is used in a practical evaluation of three different datasets generated using the RDB2RDF approach.
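The point that "single errors in the mapping definitions can affect a considerable portion of the generated data" motivates checking the mapping itself rather than the output. The sketch below shows one simple structural check of that kind; it is an illustrative assumption, not one of the thesis's 43 metrics, and the schema/mapping representation is invented for the example.

```python
# Toy relational schema: table name -> set of column names.
SCHEMA = {"person": {"id", "name", "birth_year"}}

# Toy RDB2RDF mapping definitions: each maps a source column to a predicate.
MAPPINGS = [
    {"table": "person", "column": "name", "predicate": "foaf:name"},
    {"table": "person", "column": "born", "predicate": "dbo:birthYear"},  # typo: column should be birth_year
]

def undefined_column_violations(mappings, schema):
    """Flag mapping definitions whose source column does not exist in
    the relational schema -- a single such error silently breaks every
    triple that mapping would have generated."""
    return [m for m in mappings
            if m["column"] not in schema.get(m["table"], set())]

bad = undefined_column_violations(MAPPINGS, SCHEMA)
print([m["predicate"] for m in bad])
# → ['dbo:birthYear']
```

A metric over this check could then be, e.g., the ratio of violating mappings to total mappings, reported per mapping project.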
184

Einsatz von RDF/XML in MONARCH

Schreiber, Alexander 10 May 2000 (has links)
This student research project examines the state and practicability of RDF/XML and develops an RDF/XML-based technology for metadata handling in MONARCH. In addition, new features for MONARCH, in particular aggregated documents, are to be developed on the basis of RDF/XML.
185

Digitale Archivsysteme - Erfahrungen und Perspektiven

Hübner, Uwe, Thümer, Ingrid, Ziegler, Christoph 09 June 2000 (has links)
Work on the Multimedia Online Archive Chemnitz (MONARCH) began in 1995. Current developments include the use of RDF for metadata and of digital signatures. Long-term preservation is technically ensured through migration concepts. XML will be added to the existing document formats; DTD alternatives are under discussion. Application aspects considered include the handling of aggregated documents and dealing with the problem of plagiarism.
186

Large-Scale Multilingual Knowledge Extraction, Publishing and Quality Assessment: The case of DBpedia

Kontokostas, Dimitrios 04 September 2018 (has links)
No description available.
187

EXPLORATORY SEARCH USING VECTOR MODEL AND LINKED DATA

Daeun Yim (9143660) 30 July 2020 (has links)
The way people acquire knowledge has largely shifted from print to web resources. Meanwhile, search has become the main medium for accessing information. Amongst various search behaviors, exploratory search represents a learning process that involves complex cognitive activities and knowledge acquisition. Research on exploratory search studies how to make search systems help people seek information and develop intellectual skills. This research focuses on information retrieval and aims to build an exploratory search system that shows higher clustering performance and diversified search results. In this study, a new language model that integrates a state-of-the-art vector language model (i.e., BERT) with human knowledge is built to better understand and organize search results. The clustering performance of the new model (i.e., RDF+BERT) was similar to the original model, but a slight improvement was observed on conversational texts compared to the pre-trained language model and an exploratory search baseline. With the addition of an enrichment phase that expands search results to related documents, the novel system can also display more diverse search results.
188

Flexible Authoring of Metadata for Learning: Assembling forms from a declarative data and view model

Enoksson, Fredrik January 2011 (has links)
With the vast amount of information in various formats that is produced today, it becomes necessary for consumers of this information to be able to judge whether it is relevant for them. One way to enable that is to provide information about each piece of information, i.e. to provide metadata. When metadata is to be edited by a human being, a metadata editor needs to be provided. This thesis describes the design and practical use of a configuration mechanism for metadata editors called annotation profiles, which is intended to enable a flexible metadata editing environment. An annotation profile is an instance of an Annotation Profile Model (APM), an information model that can gather information from many sources. This model has been developed by the author together with colleagues at the Royal Institute of Technology and Uppsala University in Sweden. It is designed so that an annotation profile can hold enough information for an application to generate a customized metadata editor from it. The APM works with metadata expressed in a format called RDF (Resource Description Framework), which forms the technical basis for the Semantic Web. It also works with metadata that is expressed using a model similar to RDF. The RDF model provides a simple way of combining metadata standards, which makes it possible for the resulting metadata editor to combine different metadata standards into one metadata description. Resources that are meant to be used in a learning situation can be of various media types (audio or video files, documents, etc.), which gives rise to a situation where different metadata standards have to be used in combination. Such a resource would typically contain educational metadata from one standard, but for each media type a different metadata standard might be used for the technical description. Combining all the metadata into a single metadata record is desirable and made possible when using RDF. 
The focus in this thesis is on metadata for resources that can be used in such learning contexts. One of the major advantages of using annotation profiles is that they enable changing the metadata editor without having to modify the code of an application. Instead, the annotation profile is updated to fit the required changes. In this way, the programmer of an application can avoid the responsibility of deciding which metadata can be edited as well as how it is structured. Such decisions can be left to the metadata specialist who creates the annotation profiles to be used. The Annotation Profile Model can be divided into two models: the Graph Pattern Model, which holds information on which parts of the metadata can be edited, and the Form Template Model, which provides information about how the different parts of the metadata editor should be structured. An instance of the Graph Pattern Model is called a graph pattern, and it defines which parts of the metadata the annotation profile will make editable. The author has developed an approach to how this information can be used when the RDF metadata to edit is stored on a remote system, e.g. a system that can only be accessed over a network. In such cases the graph pattern cannot be used directly, even though it defines the structures that can be affected in the editing process. The method developed describes how the specific parts of the metadata are extracted for editing and updated when the metadata author has finished editing. A situation where annotation profiles have proven valuable is presented in chapter 6. Here the author has taken part in developing a portfolio system for learning resources in the area of blood diseases, hematology. A set of annotation profiles was developed in order to adapt the portfolio system for this particular community. The annotation profiles made use of an existing curriculum for hematology that provides a competence profile of this field. 
The annotation profiles make use of this curriculum in two ways:
1. As part of the personal profile for each user, i.e. metadata about a person. Through the editor, created from an annotation profile, the user can express his/her skill/knowledge/competence in the field of hematology.
2. The metadata can associate a learning resource with certain parts of the competence description, thus expressing that the learning resource deals with a specific part of the competence profile. This provides a mechanism for matching learning needs with available learning resources.
As the field of hematology is evolving, the competence profile will need to be updated. Because of the use of annotation profiles, the metadata editors in question can be updated simply by changing the corresponding annotation profiles. This is an example of the benefits of annotation profiles within an installed application. Annotation profiles can also be used for applications that aim to support different metadata expressions, since the set of metadata editors can be easily changed. The portfolio system mentioned above provides this flexibility in metadata expression, and it has successfully been configured to work with resources from other domain areas, notably organic farming, by using another set of annotation profiles. Hence, using annotation profiles has proven useful in these settings due to the flexibility that the Annotation Profile Model enables. Plans for the future include developing an editor for annotation profiles in order to provide a simple way to create such profiles.
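The split into a Graph Pattern Model (what is editable) and a Form Template Model (how it is presented) can be sketched as a purely declarative profile from which a form is generated, so that changing the editor means changing data rather than application code. The profile structure, predicates, and function below are hypothetical illustrations, not the APM's actual data model.

```python
# Hypothetical annotation profile: a graph pattern naming the editable
# metadata parts, plus a form template describing their presentation.
PROFILE = {
    "graph_pattern": [
        {"predicate": "dc:title", "datatype": "string"},
        {"predicate": "dc:language", "datatype": "string"},
    ],
    "form_template": {
        "dc:title": {"label": "Title", "widget": "text"},
        "dc:language": {"label": "Language", "widget": "dropdown"},
    },
}

def generate_form(profile):
    """Assemble form-field descriptions by joining the graph pattern
    (editable parts) with the form template (presentation)."""
    fields = []
    for item in profile["graph_pattern"]:
        tmpl = profile["form_template"][item["predicate"]]
        fields.append({"predicate": item["predicate"],
                       "label": tmpl["label"],
                       "widget": tmpl["widget"]})
    return fields

for f in generate_form(PROFILE):
    print(f["label"], "->", f["widget"])
```

Updating the competence profile, in this picture, amounts to editing `PROFILE` and regenerating the editor, which mirrors the thesis's claim that no application code needs to change.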
189

Mozilla jako vývojová platforma / Mozilla as a Development Platform

Vídeňský, Martin January 2008 (has links)
This thesis introduces Mozilla as a development platform. It is divided into four parts. The first consists of a theoretical introduction describing the architecture, the most important technologies, and the motivation for using Mozilla as a development platform. The second part leads step by step through creating one's own project. The third part is dedicated to the description of the example application Tester, an e-learning project designed to ease the learning process, with a focus on vocabulary practice. The conclusion evaluates the Mozilla platform based on practical experience.
190

Ontologie a Semantický Web / Ontology and Semantic Web

Stuchlík, Radek Unknown Date (has links)
The purpose of the master's thesis "Ontology and Semantic Web" is to describe the general principles of ontologies, which are closely associated with the so-called new generation of the web: the semantic web. The thesis is conceived as a tutorial and is focused on both theoretical basics and practical examples of the use of recently developed technologies. The aim of this tutorial is to present the main ideas of the semantic web and the technologies and data formats that should enable its adoption into standard practice.
