151
Adaptable metadata creation for the Web of Data. Enoksson, Fredrik. January 2014.
One approach to managing collections is to create data about the things in them. This descriptive data is called metadata, a term used in this thesis as a collective noun, i.e. with no plural form. A library is a typical example of an organization that uses metadata to manage a collection of books. The metadata about a book describes certain attributes of it, for example who the author is. Metadata also makes it possible for a person to judge whether a book is interesting without having to handle the book itself. The metadata of the things in a collection is a representation of the collection that is easier to deal with than the collection itself. Nowadays metadata is often managed in computer-based systems that enable searching and the sorting of search results according to different principles. Metadata can be created both by computers and by humans. This thesis deals with certain aspects of the human activity of creating metadata and includes an exploratory study of this activity. The growing amount of public information being produced is also required to be easily accessible, and therefore the situation where metadata is part of the Semantic Web has been an important consideration in this thesis. This situation is also referred to as the Web of Data or Linked Data. With the Web of Data, metadata records that used to live in isolation from each other can now be linked together over the Web. This will probably change not only what kind of metadata is created but also how it is created. This thesis describes the construction and use of a framework called Annotation Profiles, a set of artifacts developed to enable a metadata creation environment that is adaptable with respect to what metadata can be created. The main artifact is the Annotation Profile Model (APM), a model that holds enough information for a software application to generate a customized metadata editor from it. An instance of this model is called an annotation profile, which can be seen as a configuration for metadata editors. What metadata can be edited in a metadata editor can thus be changed without modifying the code of the application. Two code libraries that implement the APM have been developed and evaluated, both internally within the research group where they were developed and externally through interviews with software developers who have used one of the libraries. Another artifact presented is a protocol for how RDF metadata can be remotely updated when metadata is edited through a metadata editor. It is also described how the APM opens up possibilities for end-user development, which is one of the avenues for future research related to the APM.
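The abstract describes an annotation profile as a configuration that decides what metadata an editor can create, without changes to the editor's code. The thesis's actual Annotation Profile Model is not reproduced here; the Python sketch below only illustrates that general idea, and the profile structure, field names, and property URIs are assumptions made for the example.

```python
# A minimal sketch of the annotation-profile idea: a declarative profile,
# kept separate from the editor code, decides which metadata fields can be
# edited. The structure, field names, and property URIs are illustrative
# assumptions and do not reproduce the Annotation Profile Model itself.

book_profile = {
    "label": "Simple book description",
    "fields": [
        {"property": "http://purl.org/dc/terms/title",
         "label": "Title", "datatype": "string", "required": True},
        {"property": "http://purl.org/dc/terms/creator",
         "label": "Author", "datatype": "string", "required": True},
        {"property": "http://purl.org/dc/terms/issued",
         "label": "Publication year", "datatype": "gYear", "required": False},
    ],
}


def render_editor(profile):
    """Render a plain-text stand-in for a metadata editor form.

    A real editor would build UI widgets, but the control flow is the same:
    the profile, not the application code, determines what can be edited.
    """
    print(profile["label"])
    for field in profile["fields"]:
        marker = "*" if field["required"] else " "
        print(f"  [{marker}] {field['label']} ({field['datatype']}) -> {field['property']}")


render_editor(book_profile)
```

Swapping in a different profile reconfigures the "editor" without touching `render_editor`, which mirrors the abstract's point that what metadata can be edited is changed without modifying the application code.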
152
Ubiquitous user modeling. Heckmann, Dominikus. January 2005.
Also published as a doctoral dissertation, Saarland University, Saarbrücken, 2005.
153
Incorporating relational data into the Semantic Web: from databases to RDF using relational OWL. Pérez de Laborda Schwankhart, Cristian. January 2006.
Also published as a doctoral dissertation, University of Düsseldorf, 2006. Printed on demand.
154
Entwurf und Realisierung von Ontologien für Multimedia-Anwendungen [Design and implementation of ontologies for multimedia applications]. Hüsemann, Bodo. January 2005.
Also published as a doctoral dissertation, University of Münster (Westfalen), 2005.
155
MetaBiM - ein semantisches Datenmodell für Baustoff-Informationen im World Wide Web: Anwendungen für Beton mit rezyklierter Gesteinskörnung [MetaBiM - a semantic data model for building-material information on the World Wide Web: applications for concrete with recycled aggregate]. Schreyer, Marcus. Date unknown.
Doctoral dissertation, University of Stuttgart, 2002.
156
Trust on the semantic web. Cloran, Russell Andrew. January 2006.
Thesis (M.Sc. (Computer Science)) - Rhodes University, 2007.
157
Ubiquitous user modeling. Heckmann, Dominikus. Date unknown.
Doctoral dissertation, Saarland University, Saarbrücken, 2005.
158
Everything you always wanted to know about blank nodes (but were afraid to ask). Hogan, Aidan; Arenas, Marcelo; Mallea, Alejandro; Polleres, Axel. 06 May 2014.
In this paper we thoroughly cover the issue of blank nodes, which have been defined in RDF as "existential variables". We first introduce the theoretical precedent for existential blank nodes from first-order logic and incomplete information in database theory. We then cover the different (and sometimes incompatible) treatment of blank nodes across the W3C stack of RDF-related standards. We present an empirical survey of the blank nodes present in a large sample of RDF data published on the Web (the BTC-2012 dataset), where we find that 25.7% of unique RDF terms are blank nodes, that 44.9% of documents and 66.2% of domains feature at least one blank node, and that, aside from one Linked Data domain whose RDF data contains many "blank node cycles", the vast majority of blank nodes form tree structures over which simple entailment can be computed efficiently. With respect to the RDF merge of the full data, we show that 6.1% of blank nodes are redundant under simple entailment. The vast majority of non-lean cases are isomorphisms resulting from multiple blank nodes with no discriminating information being given within an RDF document, or from documents being duplicated in multiple Web locations. Although simple entailment is NP-complete and leanness checking is coNP-complete, in computing this latter result we demonstrate that, in practice, real-world RDF graphs are sufficiently "rich" in ground information for problematic cases to be avoided by non-naive algorithms.
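As a small illustration of the isomorphism point above, the sketch below (assuming the Python rdflib library) parses two documents that describe the same unnamed resource with differently labelled blank nodes: the graphs are isomorphic, yet their RDF merge keeps both blank nodes and so contains triples that are redundant under simple entailment. The data is invented for the example.

```python
# Two Turtle documents describing "the same" unnamed address with different
# blank node labels. They are isomorphic, but their RDF merge keeps both
# blank nodes, giving triples that are redundant under simple entailment.
# Assumes the rdflib library; the data itself is invented for illustration.
from rdflib import Graph
from rdflib.compare import isomorphic

doc_a = """
@prefix ex: <http://example.org/> .
ex:alice ex:address _:a1 .
_:a1 ex:city "Santiago" .
"""

doc_b = """
@prefix ex: <http://example.org/> .
ex:alice ex:address _:xyz .
_:xyz ex:city "Santiago" .
"""

g1 = Graph().parse(data=doc_a, format="turtle")
g2 = Graph().parse(data=doc_b, format="turtle")

print(isomorphic(g1, g2))  # True: identical up to blank node renaming

merged = g1 + g2           # rdflib keeps the blank nodes of the two graphs apart,
print(len(merged))         # so the merge has 4 triples, half of them redundant
```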
159
Uma abordagem para a geração semiautomática de mapeamentos R2R baseado em um catálogo de padrões [An approach for the semi-automatic generation of R2R mappings based on a pattern catalog]. Vinuto, Tiago da Silva. January 2017.
VINUTO, Tiago da Silva. Uma abordagem para a geração semiautomática de mapeamentos R2R baseado em um catálogo de padrões. 2017. 79 f. Master's dissertation (Computer Science), Universidade Federal do Ceará, Fortaleza, 2017.
The Web of Linked Data has grown considerably in recent years and today covers a wide range of different domains (BIZER; JENTZSCH; CYGANIAK, 2011). Linked Data sources use different vocabularies to represent data about a specific type of object. For example, DBpedia and the Music Ontology use their own vocabularies to represent data about musical artists. Translating data from these Linked Data sources into the vocabulary expected by a Linked Data application requires a large number of mappings and may require many structural transformations as well as complex transformations of property values. Several tools have emerged for mapping ontologies, such as the SPARQL 1.1 language, the LDIF framework, and the Mosto tool. We chose to use the R2R language in our study, which was pointed out in (BIZER et al., 2012) as a good option for mapping ontologies, as it stands out in terms of expressiveness and performance. The R2R mapping language is a language based on SPARQL that allows data in a source vocabulary to be transformed into a user-defined target vocabulary. However, defining mappings with this language is complex and subject to several types of errors, such as writing errors or even semantic errors, and requires considerable expertise from the user. In this scenario, we propose an approach that uses mapping patterns to automatically generate R2R mappings from a set of Mapping Assertions (AMs). The approach is divided into two steps: (1) the manual specification of a set of AMs between the vocabulary of a source ontology and the vocabulary of a target ontology of the user's choice; and (2) the automatic generation of the R2R mappings based on the result of the first step. Finally, we present the R2R By Assertions tool, which supports the user in the process of generating R2R mappings.
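The R2R syntax itself is not reproduced here; as a rough illustration of the kind of source-to-target vocabulary translation that a single mapping expresses, the sketch below uses a plain SPARQL CONSTRUCT query executed with the Python rdflib library. The vocabularies, property names, and data are invented for the example and are not taken from the thesis.

```python
# A hedged sketch of the kind of transformation an R2R-style mapping expresses:
# data published under a source vocabulary is rewritten into the target
# vocabulary expected by an application. This uses a plain SPARQL CONSTRUCT
# query via rdflib rather than actual R2R syntax; all URIs are illustrative.
from rdflib import Graph

source_data = """
@prefix src: <http://example.org/source/> .
src:artist1 src:fullName "Nina Simone" ;
            src:bornIn   "1933" .
"""

mapping = """
PREFIX src: <http://example.org/source/>
PREFIX tgt: <http://example.org/target/>
CONSTRUCT {
  ?artist tgt:name      ?name ;
          tgt:birthYear ?year .
}
WHERE {
  ?artist src:fullName ?name ;
          src:bornIn   ?year .
}
"""

source = Graph().parse(data=source_data, format="turtle")
target = Graph()
for triple in source.query(mapping):
    target.add(triple)

print(target.serialize(format="turtle"))
```

A pattern catalog in the spirit of the approach would generate queries of this shape from the mapping assertions instead of requiring the user to write them by hand.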
160
Distributed SPARQL over Big RDF Data - A Comparative Analysis using Presto and MapReduce. January 2014.
The processing of large volumes of RDF data requires an efficient storage and query processing engine that can scale well with the volume of data. Initial attempts to address this issue focused on optimizing native RDF stores as well as conventional relational database management systems. But as the volume of RDF data grew to exponential proportions, the limitations of these systems became apparent, and researchers began to focus on using big data analysis tools, most notably Hadoop, to process RDF data. Various studies and benchmarks that evaluate these tools for RDF data processing have been published. In the past two and a half years, however, heavy users of big data systems, like Facebook, noted limitations in the query performance of these systems and began to develop new distributed query engines for big data that do not rely on map-reduce. Facebook's Presto is one such example.
This thesis evaluates the performance of Presto in processing big RDF data against Apache Hive. A comparative analysis was also conducted against 4store, a native RDF store. To evaluate the performance of Presto for big RDF data processing, a map-reduce program and a compiler, based on Flex and Bison, were implemented. The map-reduce program loads RDF data into HDFS, while the compiler translates SPARQL queries into a subset of SQL that Presto (and Hive) can understand. The evaluation was done on four- and eight-node Linux clusters installed on the Microsoft Windows Azure platform, with RDF datasets of 10, 20, and 30 million triples. The results of the experiments show that Presto has much higher performance than Hive and can be used to process big RDF data. The thesis also proposes an architecture based on Presto, Presto-RDF, that can be used to process big RDF data. / Master's Thesis, Computing Studies, 2014.
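The thesis's Flex/Bison compiler and its exact relational layout are not given in the abstract; the Python sketch below only illustrates the core idea behind such a translation, in which each triple pattern of a SPARQL basic graph pattern becomes one self-join over a single triples(s, p, o) table that an SQL engine like Presto or Hive can query. The table and column names are assumptions.

```python
# A minimal sketch (not the thesis's Flex/Bison compiler) of SPARQL-to-SQL
# translation over a single triples(s, p, o) table: each triple pattern gets
# its own table alias, constants become equality filters, and shared
# variables become join conditions. The schema and naming are assumptions.

def bgp_to_sql(patterns):
    """patterns: list of (subject, predicate, object) strings; terms starting
    with '?' are variables, everything else is treated as a constant."""
    selects, joins, wheres = [], [], []
    var_first_seen = {}
    for i, (s, p, o) in enumerate(patterns):
        alias = f"t{i}"
        joins.append(f"triples AS {alias}")
        for col, term in (("s", s), ("p", p), ("o", o)):
            if term.startswith("?"):
                if term in var_first_seen:
                    # variable seen before: add a join condition
                    wheres.append(f"{alias}.{col} = {var_first_seen[term]}")
                else:
                    var_first_seen[term] = f"{alias}.{col}"
                    selects.append(f"{alias}.{col} AS {term[1:]}")
            else:
                # constant URI or literal: add an equality filter
                wheres.append(f"{alias}.{col} = '{term}'")
    sql = "SELECT " + ", ".join(selects) + "\nFROM " + ", ".join(joins)
    if wheres:
        sql += "\nWHERE " + " AND ".join(wheres)
    return sql

# SELECT ?name WHERE { ?artist rdf:type ex:Artist . ?artist foaf:name ?name }
print(bgp_to_sql([
    ("?artist", "rdf:type", "ex:Artist"),
    ("?artist", "foaf:name", "?name"),
]))
```

For the example query, the generated SQL joins two copies of the triples table on the shared ?artist variable; such self-joins are where the execution differences between an in-memory engine like Presto and map-reduce-based Hive tend to show up.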