141

From Interoperability to Harmonization in Metadata Standardization : Designing an Evolvable Framework for Metadata Harmonization

Nilsson, Mikael January 2010 (has links)
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen a growth in interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata.

This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries and the general web environment have been chosen as targets for the analysis, with a special focus on Dublin Core, IEEE LOM and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed.

The thesis presents an analytical framework of concepts and principles for understanding the issues arising when interfacing multiple standardization communities. The analytical framework focuses on a set of important patterns in metadata specifications and their respective contributions to harmonization issues:

• Metadata syntaxes as a tool for metadata exchange. Syntaxes are shown to be of secondary importance in harmonization.
• Metadata semantics as a cornerstone for interoperability. The thesis argues that incongruences in the interpretation of metadata descriptions play a significant role in harmonization.
• Abstract models for metadata as a tool for designing metadata standards. Such models are shown to be pivotal in the understanding of harmonization problems.
• Vocabularies as carriers of meaning in metadata. The thesis shows how portable vocabularies can carry semantics from one standard to another, enabling harmonization.
• Application profiles as a method for combining metadata standards. While application profiles have been put forward as a powerful tool for interoperability, the thesis concludes that they have only a marginal role to play in harmonization.

The analytical framework is used to analyze and compare seven metadata specifications, and a concrete set of harmonization issues is presented. These issues are used as a basis for a metadata harmonization framework where a multitude of metadata specifications with different characteristics can coexist. The thesis concludes that the Resource Description Framework (RDF) is the only existing specification with the right characteristics to serve as a practical basis for such a harmonization framework, and that it therefore must be taken into account when designing metadata specifications. Based on the harmonization framework, a best practice for metadata standardization development is developed, and a roadmap for harmonization improvements of the analyzed standards is presented. / QC 20101117
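The point about portable vocabularies can be illustrated with a small, hypothetical RDF sketch (not taken from the thesis): the rdflib snippet below describes one resource with Dublin Core terms alongside an assumed learning-technology namespace, showing how RDF lets terms from independently standardized vocabularies coexist in a single description.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

# Hypothetical namespace standing in for a learning-technology vocabulary
LOM = Namespace("http://example.org/lom#")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("lom", LOM)

course = URIRef("http://example.org/courses/metadata-101")

# Terms from two vocabularies describe the same resource in one RDF graph
g.add((course, DCTERMS.title, Literal("Introduction to Metadata")))
g.add((course, DCTERMS.creator, Literal("Jane Doe")))
g.add((course, LOM.typicalLearningTime, Literal("PT2H")))

print(g.serialize(format="turtle"))
```

Because both vocabularies are expressed against the same abstract model (RDF), no mapping layer is needed to combine the two sets of terms, which is the harmonization property the thesis attributes to RDF.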
142

SWI-Prolog as a Semantic Web Tool for semantic querying in Bioclipse: Integration and performance benchmarking

Lampa, Samuel January 2010 (has links)
The huge amounts of data produced by high-throughput techniques in the life sciences, and the need to integrate heterogeneous data from disparate sources in new fields such as Systems Biology and translational drug development, require better approaches to data integration. The semantic web is anticipated to provide solutions through new formats for knowledge representation and management. Software libraries for semantic web formats are becoming mature, but multiple tools exist that are based on foundationally different technologies. SWI-Prolog, a tool with semantic web support, was integrated into the Bioclipse bio- and cheminformatics workbench and evaluated, in terms of performance, against the non-Prolog-based semantic web tools in Bioclipse, Jena and Pellet, for querying a data set of mostly numerical NMR shift values in the semantic web format RDF. The integration gives access to the convenience of the Prolog language for working with semantic data and for defining data management workflows in Bioclipse. The performance comparison shows that SWI-Prolog outperforms Jena and Pellet for this specific dataset and suggests Prolog-based tools as interesting candidates for further evaluation.
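The abstract does not reproduce the queries themselves; as a rough, hypothetical illustration of the kind of semantic query being benchmarked, the sketch below uses rdflib and SPARQL (rather than the thesis's Prolog or Jena/Pellet code) to filter NMR shift values in an RDF dataset, with made-up file and predicate names.

```python
from rdflib import Graph

g = Graph()
g.parse("nmrshifts.rdf")  # hypothetical RDF file of NMR shift data

# Predicate names are assumptions; the dataset's actual vocabulary is not given in the abstract
query = """
PREFIX nmr: <http://example.org/nmr#>
SELECT ?spectrum ?shift
WHERE {
    ?spectrum nmr:hasPeak ?peak .
    ?peak nmr:shiftValue ?shift .
    FILTER (?shift > 100.0)
}
"""
for spectrum, shift in g.query(query):
    print(spectrum, shift)
```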
143

Semantic desktop focusing on harvesting domain-specific information in planning aid documents / A Model for Processing Documents in the IRIS Semantic Desktop System

Etezadi, Ali Reza January 2008 (has links)
Planning is a highly regulated procedure at the operational level, for example in military activities, where staff may benefit from documents such as guidelines that regulate the work process, responsibilities and results of such planning activities.

This thesis proposes a method for analyzing the office documents that make up an operational order according to a document ontology. With semantic desktops aiming to combine semantic annotations and intelligent reasoning on desktop computers, the product of this project adds a plug-in to such environments, such as the IRIS semantic desktop, enabling the application to interpret documents whether or not they change within the application.

The result of our work helps the end user to extract data using his or her favorite patterns, such as goals, targets or milestones that make up decisive points. This information eventually forms semantic objects, which reside in the knowledge base of the semantic desktop for further reasoning whenever the application refers to them, whether automatically or at the user's request.
144

RDF und XML - Möglichkeiten für digitale Publikation und Archivierung / RDF and XML - Possibilities for Digital Publication and Archiving

Schreiber, Alexander 08 May 2000 (has links) (PDF)
Joint workshop of the University Computing Centre and the Chair of Computer Networks and Distributed Systems (Faculty of Computer Science) at TU Chemnitz. Workshop topic: infrastructure of the "Digital University". The talk deals with the possibilities that XML and RDF offer for digital publishing and archiving.
145

Einsatz von RDF/XML in MONARCH / Use of RDF/XML in MONARCH

Schreiber, Alexander 10 May 2000 (has links) (PDF)
This student research project (Studienarbeit) examines the current state and practicability of RDF/XML and develops an RDF/XML-based technology for metadata handling in MONARCH. In addition, new features for MONARCH, in particular aggregated documents, are to be developed on the basis of RDF/XML.
146

Wie sehr können maschinelle Indexierung und modernes Information Retrieval Bibliotheksrecherchen verbessern? / How much can automatic indexing and modern information retrieval improve library searches?

Hauer, Manfred 30 November 2004 (has links) (PDF)
Machine-based methods can dramatically improve the quality of subject indexing. intelligentCAPTURE has been in productive use in libraries and documentation centres since 2002. Its methods include modules for document acquisition, in particular scanning and OCR, correct text extraction from PDF files and websites, and speech recognition for "text-less" objects. Additional information extraction procedures can optionally follow. Content identified as relevant is then automatically analysed for its subject matter by the CAI engine (Computer Aided Indexing), where computational-linguistic methods (language-dependent morphology, syntax analysis, statistics) interact with semantic structures (classifications, taxonomies, thesauri, topic maps, RDF, semantic networks). Processed content and finished, human-editable index records are finally handed over, via freely definable export formats, to the respective library systems and, as a rule, also to intelligentSEARCH. intelligentSEARCH is a central union database for exchange between all productive partners worldwide from the public and private sectors. For copyright reasons, the exchange is limited to exchangeable media, so far tables of contents. At the same time, this database is "open content" for the academic public, with particularly powerful retrieval functions, in particular semantic search options and the visualisation of semantic structures (http://www.agi-imc.de/intelligentSEARCH.nsf). Different semantic structures can be used both for indexing and for retrieval, depending on the research interest, world view or language.
147

Formalisms on semi-structured and unstructured data schema computations

Lee, Yau-tat, Thomas. January 2009 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2010. / Includes bibliographical references (p. 115-119). Also available in print.
148

Evaluation of relational database implementation of triple-stores

Funes, Diego Leonardo 25 July 2011 (has links)
The Resource Description Framework (RDF) is the logical data model of the Semantic Web. RDF encodes information as a directed graph using a set of labeled edges known formally as resource-property-value statements or, in common usage, as RDF triples or simply triples. Values recorded in RDF triple form are either Uniform Resource Identifiers (URIs) or literals. The use of URIs allows links between distributed data sources, which enables a logical model of data as a graph spanning the Internet. SPARQL is a standard SQL-like query language on RDF triples. This report describes the translation of SPARQL queries to equivalent SQL queries operating on a relational representation of RDF triples, and the physical optimization of that representation using the IBM DB2 relational database management system. Performance was evaluated using the Berlin SPARQL Benchmark. The results show that the implementation can perform well on certain queries, but more work is required to improve overall performance and scalability. / text
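A minimal sketch of the translation idea, assuming a single three-column triples table (the report's actual DB2 schema and physical optimizations are not shown here): each SPARQL triple pattern becomes one scan of the table, and shared variables become join conditions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Relational representation of the RDF graph: one row per triple
cur.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
cur.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("ex:product1", "rdfs:label", "Widget"),
    ("ex:product1", "ex:price", "42"),
    ("ex:product2", "rdfs:label", "Gadget"),
])

# SPARQL: SELECT ?label ?price WHERE { ?x rdfs:label ?label . ?x ex:price ?price }
# Translation: one self-join of the triples table, joined on the shared variable ?x
sql = """
SELECT t1.o AS label, t2.o AS price
FROM triples t1
JOIN triples t2 ON t1.s = t2.s
WHERE t1.p = 'rdfs:label' AND t2.p = 'ex:price'
"""
for label, price in cur.execute(sql):
    print(label, price)  # -> Widget 42
```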
149

Linked-OWL: A new approach for dynamic linked data service workflow composition

Ahmad, Hussien, Dowaji, Salah 01 June 2013 (has links)
The shift from the Web of Documents to the Web of Data, based on the Linked Data principles defined by Tim Berners-Lee, poses a major challenge for building and developing applications that work in the Web of Data environment. There have been several attempts to build service and application models for the Linked Data cloud. In this paper, we propose a new service model for linked data, "Linked-OWL", which is based on RESTful services and OWL-S and conforms to Linked Data principles. This new model shifts the service concept from functions to linked data things and opens the way for a Linked Oriented Architecture (LOA) and a Web of Services as part of, and on top of, the Web of Data. The model also provides a high level of dynamic service composition capability, for more accurate dynamic composition and execution of complex business processes in the Web of Data environment.
150

Analyse statique de requête pour le Web sémantique / Static analysis of queries for the Semantic Web

Chekol, Melisachew wudage 19 December 2012 (has links) (PDF)
Query containment is a problem that has been well studied over several decades of research. It is generally defined as the problem of determining whether the result of one query is included in the result of another query for every dataset. It has important applications in query optimization and in the verification of knowledge bases. The main objective of this thesis is to provide sound and complete procedures for deciding containment of SPARQL queries under schema axioms expressed in description logics, and to implement these procedures so as to support the theoretical results with experiments. To date, query containment testing has been carried out using various techniques: graph homomorphism, canonical databases, automata-theoretic techniques, and reduction to the validity problem of a logic. In this thesis we use the last of these techniques to test containment of SPARQL queries using an expressive logic called the μ-calculus. To do so, RDF graphs are encoded as transition systems, and queries and schema axioms are encoded as μ-calculus formulae; query containment can thus be reduced to testing the validity of a logical formula. The aim of this thesis is to identify the various fragments of SPARQL (and PSPARQL) and of description-logic schema languages for which containment is decidable, and to provide theoretically and experimentally proven procedures for checking containment of these decidable fragments. Last but not least, this thesis proposes a benchmark for containment solvers; this benchmark is used to test and compare the current state of containment solvers.
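As a schematic illustration of the reduction the abstract sketches (a simplified form, not the thesis's exact encoding), the containment problem and its translation to logical validity can be written as:

```latex
% Containment of two queries q and q' over all RDF graphs G:
\[
  q \sqsubseteq q' \iff \forall G :\ \llbracket q \rrbracket_G \subseteq \llbracket q' \rrbracket_G
\]
% Reduction sketch: the graph is encoded as a transition system and the queries
% and schema axioms S as mu-calculus formulae; containment under S then
% reduces to the validity of a single implication:
\[
  q \sqsubseteq_{\mathcal{S}} q' \iff \big( \Phi_{\mathcal{S}} \wedge \Phi_{q} \big) \Rightarrow \Phi_{q'}
  \ \text{ is valid}
\]
```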
