151

The Regensburger Verbundklassifikation (RVK) – "a wide field": the challenge of the Semantic Web, ontologies, and entities for the dynamics of a classification

Werr, Naoka, 28 January 2011
Buzzwords such as "information overload", "digital natives" and "digital immigrants" shape today's information and knowledge society. Numerous scholarly studies also show emphatically that technical development will advance even more rapidly in the coming years than anyone could have predicted. Internet communication services already carry extraordinary weight, with a rising tendency, and communication services such as Web 2.0 applications are highlighted as an increasingly important factor in Internet use, with the current trend towards personal networking via the Internet consistently emphasized. The importance of the Internet's core uses as a source of content and a form of communication will therefore continue to grow, and classification systems must face this trend as well. With the web portal launched in October 2009, the RVK took a first step towards such networking. The information about the RVK previously scattered across various websites, together with the RVK databases, is now united under one interface, interlinked, and enriched with elements of social software (an RVK wiki for greater transparency in approval processes). In the context of the Semantic Web, currently another popular buzzword, the RVK portal marks a paradigm shift in the long history of the RVK: all knowledge about the RVK is conceptually connected according to its meaning and already offered largely in machine-readable form (for example, via the search function of the RVK-Online database). Knowledge management and the quality of the extensive information about the RVK have been substantially improved at the semantic level; together with the RVK wiki, one might even speak of a first impulse towards Web 3.0 for the RVK. The hierarchical structure of the RVK also contributes substantially to the Semantic Web, since it is precisely the hierarchical structures of a classification that help to "order" the abundance of implicit knowledge. What matters, then, is defining the relations on the Web (and thus the corresponding ontologies and entities) in order to counter the sheer quantity of offerings on the World Wide Web with correspondingly high-quality services that add library value. For the data model of the Semantic Web, providing sustainable authority data, as is planned for the RVK and indeed almost implemented, is therefore necessary.
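Publishing the classification itself as sustainable, machine-readable authority data could look like the following minimal rdflib sketch, which models a single RVK node as a SKOS concept. The namespace, notation, and label are hypothetical placeholders, not the RVK's actual published data model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

RVK = Namespace("http://example.org/rvk/")  # hypothetical namespace

g = Graph()
g.bind("skos", SKOS)

node = RVK["AN-73000"]  # hypothetical notation and label
g.add((node, SKOS.notation, Literal("AN 73000")))
g.add((node, SKOS.prefLabel, Literal("Klassifikationssysteme", lang="de")))
g.add((node, SKOS.broader, RVK["AN-70000"]))  # hierarchy via broader/narrower

print(g.serialize(format="turtle"))
```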
152

Linked Open Projects: Reuse of Results in the Semantic Web

Pfeffer, Magnus, Eckert, Kai, 28 January 2011
The Semantic Web and Linked Data are on everyone's lips. After almost a decade of developing the technologies and exploring the possibilities of the Semantic Web, the data itself is now moving into focus, for without it the Semantic Web would be no more than a theoretical construct, almost like the World Wide Web without websites. With authority files (PND, SWD) and catalogue records, libraries hold a wealth of data that is well suited to populating the Semantic Web, and parts of it have already been prepared for the Semantic Web and released for use. The Mannheim University Library has worked with such data in two different projects, although at the time the data was not yet available as Linked Data. One project dealt with the automatic subject indexing of publications based on abstracts, the other with the automatic classification of publications based on title data. In this contribution we briefly present the results of these projects, but we focus on a side issue that only crystallized in their course: how can the results be presented durably and meaningfully for reuse by third parties? To say this much up front: neither method can or aims to replace a librarian. The generated data can be used in many ways, but concrete uses, such as loading it into a union catalogue, are controversial given the data's quality and the lack of control over it. Publishing the data as Linked Data in the Semantic Web is an obvious solution: anyone who wants to reuse the results can do so without the risk of compromising an existing data holding. This approach, however, raises new questions, not least how the source data can be identified via URIs when it is not (yet) available as Linked Data. Publishing result data also requires measures that go beyond current Linked Data practice: providing additional information that describes the source and genesis of the data (provenance information), as well as information that usually lies outside the underlying metadata schema, such as confidence values in the case of automatic data generation. To this end we present approaches based on RDF reification and named graphs and describe current developments in this area, as discussed, for example, in the W3C Provenance Incubator Group and in working groups of the Dublin Core Metadata Initiative.
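To make the reification idea concrete, here is a minimal rdflib sketch that attaches a confidence value and a source to one automatically generated classification triple. The names and the confidence vocabulary are illustrative assumptions, not the projects' actual schema.

```python
from rdflib import Graph, Literal, Namespace, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")  # hypothetical vocabulary

g = Graph()
# The automatically generated statement itself
g.add((EX.pub42, EX.classifiedAs, EX.RVK_ST230))

# RDF reification: a resource that talks *about* that statement,
# carrying provenance and a confidence score from the classifier
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.pub42))
g.add((stmt, RDF.predicate, EX.classifiedAs))
g.add((stmt, RDF.object, EX.RVK_ST230))
g.add((stmt, EX.confidence, Literal(0.83, datatype=XSD.decimal)))
g.add((stmt, EX.generatedBy, EX.titleClassifierV1))

print(g.serialize(format="turtle"))
```

A named-graph variant would instead place the generated triples in their own graph and attach the provenance and confidence statements to that graph's URI.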
153

Methodology for Conflict Detection and Resolution in Semantic Revision Control Systems

Hensel, Stephan, Graube, Markus, Urbas, Leon, January 2016
Revision control mechanisms are a crucial part of information systems for keeping track of changes. They are a key requirement for the industrial application of technologies such as Linked Data, which make it possible to integrate data from different systems and domains in a semantic information space. A corresponding semantic revision control system must offer the same functionality as established systems (e.g. Git or Subversion). Branching is also needed to enable parallel work on, or concurrent access to, the same data, which directly introduces the requirement of supporting merges. This paper presents an approach that makes it possible to merge branches and to detect inconsistencies before creating the merged revision. We use a structural analysis of triple differences as the smallest comparison unit between the branches. The detected differences can be aggregated into high-level changes, which is an essential step towards semantic merging. We implemented our approach as a prototypical extension of the revision control system R43ples as a proof of concept.
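As a rough illustration of triple-level difference analysis, the following sketch diffs two branches of an RDF graph against their common base revision and flags one simple structural conflict: both branches replacing the object of the same subject/predicate pair. The conflict rule is a simplified assumption, not R43ples' actual merge logic.

```python
from rdflib import Graph

def diff(base: Graph, branch: Graph):
    """Triple-level difference: (added, deleted) relative to the base revision."""
    b, r = set(base), set(branch)
    return r - b, b - r

def detect_conflicts(base: Graph, branch_a: Graph, branch_b: Graph):
    """Flag pairs of added triples where both branches set the same
    subject/predicate pair to different objects (simplified conflict rule)."""
    add_a, _ = diff(base, branch_a)
    add_b, _ = diff(base, branch_b)
    return [(ta, tb) for ta in add_a for tb in add_b
            if ta[:2] == tb[:2] and ta[2] != tb[2]]
```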
154

Managing and Consuming Completeness Information for RDF Data Sources

Darari, Fariz, 20 June 2017
The ever increasing amount of Semantic Web data gives rise to the question: how complete is the data? Though data on the Semantic Web is generally incomplete, many parts of it are in fact complete, such as the children of Barack Obama and the crew of Apollo 11. This thesis studies how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query completeness checking. We further enrich completeness information with timestamps, enabling query answers to be checked for the point in time up to which they are complete. We then introduce two demonstrators, CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
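In the spirit of these completeness statements, the sketch below shows the core idea in deliberately simplified form: a statement declares that all triples matching a pattern are present, so any query covered by that pattern is guaranteed complete. The namespace and the single-pattern check are illustrative assumptions; the thesis's reasoning handles far more general queries.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")  # hypothetical namespace

# A completeness statement asserts that the data holds ALL triples matching
# a pattern, e.g. every child of Barack Obama is recorded.
complete_patterns = {(EX.BarackObama, EX.hasChild)}

def is_complete(subject, predicate):
    """A (subject, predicate) query has guaranteed-complete answers iff the
    pattern is covered by a completeness statement."""
    return (subject, predicate) in complete_patterns

g = Graph()
g.add((EX.BarackObama, EX.hasChild, EX.Malia))
g.add((EX.BarackObama, EX.hasChild, EX.Sasha))

if is_complete(EX.BarackObama, EX.hasChild):
    # safe to treat this answer set as exhaustive
    print(list(g.objects(EX.BarackObama, EX.hasChild)))
```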
155

A framework for semantic web implementation based on context-oriented controlled automatic annotation

Hatem, Muna Salman, January 2009
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that makes it possible for computer software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with metadata that express the semantics of its contents. Many infrastructures, technologies and standards have been developed and have proven their theoretical use for the Semantic Web, yet very few applications have been created, and most of them were developed for research purposes. This project investigates the major factors restricting the widespread adoption of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of semantically annotated documents and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic annotation; for short, we call the framework the Semantic Web Implementation Framework (SWIF) and the system that implements it the Semantic Web Implementation System (SWIS). The proposed architecture provides for a Semantic Web implementation of stand-alone websites that automatically annotates Web pages before they are uploaded to the Intranet or Internet, and maintains persistent storage of Resource Description Framework (RDF) data for both the domain memory, denoted Control Knowledge, and the metadata of the Web site's pages. We believe that the presented implementation of the major parts of SWIS introduces a system competitive with current state-of-the-art annotation tools and knowledge management systems, because it handles input documents in the context in which they are created, in addition to automatically learning and verifying knowledge using only the available computerized corporate databases. In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of verifiability in the context of annotation by comparing the extracted text's meaning with the information in the CK, using the proposed database table Verifiability_Tab. We use the linguistic concept of thematic roles to investigate and identify the correct meaning of words in text documents, which supports correct relation extraction. The verb lexicon used contains the argument structure of each verb together with the thematic structure of the arguments. We also introduce a new method to chunk conjoined statements and identify the missing subject of the produced clauses. We use semantic classes of verbs that relate a list of verbs to a single property in the ontology, which helps disambiguate the verb in the input text and enables better information extraction and annotation. Consequently, we propose the following definition of the annotated document, sometimes called the "Intelligent Document": "The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation." This work introduces a promising improvement to the quality of the automatically generated annotated document and of the automatically extracted information in the knowledge base. Our approach to using Semantic Web technology opens new opportunities for diverse areas of application; e-learning applications, for example, can be greatly improved and become more effective.
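To illustrate the verb-lexicon idea, here is a minimal sketch in which a semantic verb class maps several surface verbs to one ontology property together with the thematic roles of its arguments. The class name, verbs, and property are hypothetical, not SWIS's actual lexicon.

```python
# Hypothetical verb lexicon: a semantic verb class relates several surface
# verbs to a single ontology property plus the thematic roles it expects.
VERB_CLASSES = {
    "employment": {
        "verbs": {"works", "serves", "labors"},
        "property": "ex:worksFor",           # hypothetical ontology property
        "roles": ("Agent", "Organization"),  # thematic roles of the arguments
    },
}

def property_for_verb(verb):
    """Disambiguate a verb by its semantic class and return the ontology
    property and thematic roles used for relation extraction."""
    for cls in VERB_CLASSES.values():
        if verb in cls["verbs"]:
            return cls["property"], cls["roles"]
    return None

print(property_for_verb("works"))  # ('ex:worksFor', ('Agent', 'Organization'))
```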
156

Enriching and Analyzing Semantic Trajectories with Linked Open Data

Livia Couto Ruback Rodrigues, 26 February 2018
The last years have witnessed a growing number of devices that track moving objects: personal GPS-equipped devices and GSM mobile phones, vehicles and other sensors from the Internet of Things, as well as location data derived from social network check-ins. These mobility data are represented as trajectories that record the sequence of locations of a moving object. However, such sequences only capture the raw location data; they need to be semantically enriched to be meaningful in analysis tasks and to support a deep understanding of movement behavior. Another unprecedented global data space is also growing at a fast pace: the Web of Data, thanks to the emergence of the Linked Data initiative. These freely available, semantically rich datasets provide a novel way to enhance trajectory data. This thesis contributes to the challenges that arise from this scenario. First, it investigates how trajectory data may benefit from the Linked Data initiative by guiding the whole trajectory-enrichment process with the use of external datasets. Then, it addresses the pivotal topic of similarity computation between Linked Data entities, with the final objective of computing the similarity between semantically enriched trajectories. The novelty of the approach is that relevant entity features are treated as ranked lists. Finally, the thesis targets the computation of similarity between enriched trajectories by comparing the similarity of the Linked Data entities that represent them.
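One simple way to compare ranked feature lists is average overlap: the mean agreement of the two lists' top-k prefixes over all depths k. The sketch below uses it purely as an illustration of ranked-list similarity, with made-up point-of-interest features; it is not the thesis's actual measure.

```python
def average_overlap(a, b):
    """Average overlap of two ranked lists: mean, over prefix depths k, of
    |top-k(a) & top-k(b)| / k. Returns 1.0 for identical rankings."""
    depth = max(len(a), len(b))
    score = sum(len(set(a[:k]) & set(b[:k])) / k for k in range(1, depth + 1))
    return score / depth

# Hypothetical ranked feature lists for two enriched trajectory stops
features_a = ["museum", "park", "cafe", "station"]
features_b = ["park", "museum", "station", "mall"]
print(average_overlap(features_a, features_b))  # ~0.60
```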
157

Towards a comprehensive functional layered architecture for the Semantic Web

Gerber, Aurona J., 30 November 2006
The Semantic Web, as the foreseen successor of the current Web, is envisioned to be a semantically enriched information space usable by machines or agents that perform sophisticated tasks on behalf of their users. The realisation of the Semantic Web prescribes the development of a comprehensive and functional layered architecture for the increasingly expressive languages it comprises. A functional architecture is a model specified at an appropriate level of abstraction that identifies system components based on required system functionality, whilst a comprehensive architecture is one founded on established design principles within software engineering. Within this study, an argument is formulated for the development of such an architecture through the development of a Semantic Web status model, the extraction of the functions of established Semantic Web technologies, and the development of an evaluation mechanism for layered architectures compiled from design principles as well as fundamental features of layered architectures. In addition, an initial version of a comprehensive and functional layered architecture for the Semantic Web is constructed from these building blocks, and the architecture is applied to several scenarios to establish its usefulness. In conclusion, based on the evidence collected in this study, it is possible to justify the development of such an architectural model for the languages of the Semantic Web. / Computing / PhD (Computer Science)
158

Semantic Web Technologies for T&E Metadata Verification and Validation

Darr, Timothy, Fernandes, Ronald, Hamilton, John, Jones, Charles, Weisenseel, Annette, October 2009
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The vision of the semantic web is to unleash the next generation of information sharing and interoperability by encoding meaning into the symbols that are used to describe various computational capabilities within the World Wide Web or other networks. This paper describes the application of semantic web technologies to Test and Evaluation (T&E) metadata verification and validation. Verification is a quality process used to evaluate whether a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase or existing in the organization. Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. While this often involves acceptance and suitability testing with external customers, automation can provide significant assistance to those customers.
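As one hedged example of what machine-checkable verification of T&E metadata could look like, the sketch below encodes a rule ("every measurement must declare a unit") as a SPARQL query over an RDF description. The vocabulary and the rule are invented for illustration, not taken from the paper.

```python
from rdflib import Graph

# Hypothetical verification rule over T&E metadata: every measurement
# must declare a unit. Vocabulary invented for illustration.
g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/tm#> .
    ex:altitude a ex:Measurement ; ex:unit "feet" .
    ex:airspeed a ex:Measurement .
""", format="turtle")

violations = g.query("""
    PREFIX ex: <http://example.org/tm#>
    SELECT ?m WHERE {
        ?m a ex:Measurement .
        FILTER NOT EXISTS { ?m ex:unit ?unit }
    }
""")
for row in violations:
    print("missing unit:", row.m)  # flags ex:airspeed
```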
159

Tractable reasoning with quality guarantee for expressive description logics

Ren, Yuan, January 2014
DL-based ontologies have been widely used as knowledge infrastructures in knowledge management systems and on the Semantic Web. The development of efficient, sound and complete reasoning technologies has been a central topic in DL research. Recently, the paradigm shift from professional to novice users, and from standalone and static to inter-linked and dynamic applications, raises new challenges: can users build and evolve ontologies, both static and dynamic, with features provided by expressive DLs, while still enjoying efficient reasoning as in tractable DLs, without worrying too much about the quality (soundness and completeness) of the results? To answer these challenges, this thesis investigates the problem of tractable, quality-guaranteed reasoning for ontologies in expressive DLs. The thesis develops syntactic approximation, a consequence-based reasoning procedure with worst-case PTime complexity and theoretically sound, empirically high-recall results, for ontologies constructed in DLs more expressive than any tractable DL. The thesis shows that a set of semantic completeness-guarantee conditions can be identified to efficiently check whether such a procedure is complete. Many ontologies tested in the thesis, including ones that are difficult for an off-the-shelf reasoner, satisfy these conditions. Furthermore, the thesis presents a stream reasoning mechanism to update reasoning results on dynamic ontologies without complete re-computation. This mechanism implements the delete-and-re-derive strategy with a truth maintenance system and helps to reduce unnecessary over-deletion and re-derivation in stream reasoning and to improve its efficiency. As a whole, the thesis develops a worst-case tractable, guaranteed sound, conditionally complete and empirically high-recall reasoning solution for both static and dynamic ontologies in expressive DLs. Some of the techniques presented can also be used to improve the performance and/or completeness of other existing reasoning solutions, and the results can be generalised and extended to support a wider range of knowledge representation formalisms, especially where a consequence-based algorithm is available.
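For a toy flavor of consequence-based reasoning, the sketch below saturates a set of atomic concept inclusions under the chain rule until a fixpoint is reached. Real consequence-based procedures, including the thesis's syntactic approximation, handle much richer constructors; this is only a minimal illustration of the rule-saturation style.

```python
def classify(axioms):
    """axioms: set of pairs (A, B) read as A is subsumed by B; returns the
    closure under reflexivity and the chain rule A<=B, B<=C implies A<=C."""
    closure = set(axioms) | {(x, x) for pair in axioms for x in pair}
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

print(classify({("Cat", "Mammal"), ("Mammal", "Animal")}))
# derives ('Cat', 'Animal') among the reflexive and asserted pairs
```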
160

Design and implementation of a crowdsourcing interactive educational application using the Semantic Web

Σκαπέτης, Ανδρέας, 14 October 2013
In recent years there has been an ever stronger desire, among teachers and students as well as among older people who want to develop their knowledge of a subject, for educational machines (software) that can take over much of the teacher's role. The added value of educational software could lie in easy access to a large volume of information, more systematic learning, and savings in time and in teaching resources (meaning teachers as persons). The goal is not simply to create educational software, but a "properly" structured educational system, meaning one from which the learner can draw information correctly and methodically, exactly as they would with a qualified teacher at their disposal. In the present work, through a combination of new technologies such as ontologies and the Semantic Web, together with theories related to education, the steps for creating an interactive crowdsourcing educational system are presented. In simple terms, the system presented will be able to serve students, teachers and any other interested party, to offer methodical learning, and to collect information from its users, which it processes and makes available to them again in improved and enriched form.
