  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Extração e consulta de informações do Currículo Lattes baseada em ontologias / Ontology-based Queries and Information Extraction from the Lattes CV

Eduardo Ferreira Galego 06 November 2013 (has links)
The Lattes Platform is an excellent database of researchers for Brazilian society, adopted by most Brazilian funding agencies, universities and research institutes. However, it is limited when it comes to displaying summarized data about a group of people, such as a research department or the students supervised by one or more professors. Several projects have already been developed that propose solutions to this problem, some of them even developing ontologies for the research domain. This work aims to integrate all the functionality of these tools into a single solution, SOS Lattes. We present the results obtained in the development of this solution and show how the use of ontologies supports the identification of data inconsistencies, queries for building consolidated reports, and inference rules for correlating multiple databases. In addition, this work seeks to contribute to the expansion and dissemination of the Semantic Web by creating a tool capable of extracting data from Web pages and making their semantic structure available. The knowledge gained during this research may be useful for the development of new tools operating in different environments.
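To make the idea of ontology-backed consistency checking and consolidated reporting concrete, here is a minimal Python sketch over (subject, predicate, object) triples. The data, the predicate names, and the "functional property" rule are invented for illustration; they are not the actual SOS Lattes vocabulary or rules.

```python
from collections import defaultdict

# Hypothetical facts extracted from CV pages, as (subject, predicate, object).
triples = [
    ("ana",   "memberOf",  "dept_cs"),
    ("ana",   "birthYear", "1980"),
    ("ana",   "birthYear", "1979"),   # conflicting value from another source
    ("bruno", "advisedBy", "ana"),
    ("bruno", "memberOf",  "dept_cs"),
]

# Predicates declared functional in the ontology: a subject may have at most
# one value, so two distinct values signal a data inconsistency.
FUNCTIONAL = {"birthYear"}

def find_inconsistencies(triples):
    values = defaultdict(set)
    for s, p, o in triples:
        if p in FUNCTIONAL:
            values[(s, p)].add(o)
    return {key: vals for key, vals in values.items() if len(vals) > 1}

# A consolidated-report query: everyone advised by members of a department.
def advisees_of_department(triples, dept):
    members = {s for s, p, o in triples if p == "memberOf" and o == dept}
    return sorted(s for s, p, o in triples if p == "advisedBy" and o in members)

print(find_inconsistencies(triples))
print(advisees_of_department(triples, "dept_cs"))
```

In the actual system these checks are expressed against the ontology rather than hard-coded, but the mechanism is the same: declared constraints plus inference rules turn raw extracted triples into reports and inconsistency alerts.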
52

[pt] NOVAS MEDIDAS DE IMPORTÂNCIA DE VÉRTICES PARA APERFEIÇOAR A BUSCA POR PALAVRAS-CHAVE EM GRAFOS RDF / [en] NOVEL NODE IMPORTANCE MEASURES TO IMPROVE KEYWORD SEARCH OVER RDF GRAPHS

ELISA SOUZA MENENDEZ 15 April 2019 (has links)
[en] A key contributor to the success of keyword search systems is a ranking mechanism that considers the importance of the retrieved documents. The notion of importance in graphs is typically computed using centrality measures that depend heavily on node degree, such as PageRank. However, in RDF graphs the notion of importance is not necessarily related to node degree. This thesis therefore addresses two problems: (1) how to define importance measures for RDF graphs; and (2) how to use these measures to help compile and rank the results of keyword queries over RDF graphs. To solve these problems, the thesis proposes a novel family of measures, called InfoRank, and a keyword search system, called QUIRA, for RDF graphs. The thesis concludes with experiments showing that the proposed solution improves the quality of the results in two keyword search benchmarks.
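The contrast between degree-based centrality and an informativeness-based notion of importance can be illustrated with a toy example. The score below simply counts descriptive literals attached to a node; it is a loose, illustrative stand-in for the intuition behind InfoRank, not the measure defined in the thesis, and the data is invented.

```python
# A small RDF-like graph: object properties link resources, while
# datatype properties attach descriptive literals (quoted strings).
triples = [
    ("beatles", "member", "lennon"),
    ("beatles", "member", "mccartney"),
    ("beatles", "member", "harrison"),
    ("beatles", "member", "starr"),
    ("lennon", "name", '"John Lennon"'),
    ("lennon", "birthDate", '"1940-10-09"'),
    ("lennon", "description", '"English musician"'),
    ("starr", "name", '"Ringo Starr"'),
]

def degree(node):
    # Degree-based view: count all triples the node participates in.
    return sum(1 for s, p, o in triples if s == node or o == node)

def informativeness(node):
    # Informativeness view: count distinct descriptive literals of the node.
    return len({p for s, p, o in triples if s == node and o.startswith('"')})

# The hub node and a well-described resource tie on degree,
# but the informativeness score separates them.
print(degree("beatles"), degree("lennon"))
print(informativeness("beatles"), informativeness("lennon"))
```

In a keyword search setting, a score of this kind lets a richly described resource outrank a mere hub, which is exactly the behavior degree-based measures like PageRank cannot provide on RDF graphs.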
53

Αξιοποίηση τεχνολογιών ανοικτού κώδικα για την ανάπτυξη εφαρμογών σημασιολογικού ιστού / Exploiting open-source technologies for the development of Semantic Web applications

Κασσέ, Παρασκευή 14 February 2012 (has links)
Over the past few years there has been an exponential increase in the volume of information published on the Internet. Since this information is not connected to its semantics, it is difficult to manage and access. The Semantic Web is a set of methods and technologies that aim to enable machines to understand the "semantics" of information on the World Wide Web, of which it is an extension. In the Semantic Web, information is enriched with metadata that follow common standards, allowing knowledge to be extracted from existing knowledge and existing information to be combined in order to infer implicit conclusions. The long-term goals of the Semantic Web are improved search, the execution of complex tasks, and the personalization of information according to each user's needs. This thesis studies the use of Semantic Web technologies to improve access to cultural data. It first examines the fundamental concepts and technologies of the Semantic Web, presenting the basic markup languages in detail: XML, which allows the creation of structured documents with a user-defined vocabulary, and RDF, which offers a data model for describing information in a way that machines can read and understand. The various RDF serializations are covered, along with how RDF graphs are queried using the SPARQL protocol. RDFS, a language for describing RDF vocabularies, is then described. After introducing the concept of an ontology, the semantic markup language OWL, used for publishing and distributing ontologies on the Internet, is presented. A review follows of selected Greek, European and international projects from recent years that apply Semantic Web technologies in the domain of culture and cultural heritage. The final chapter presents an application for managing archaeological sites and monuments and studies in depth the technologies and tools used for its implementation.
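The RDF-and-SPARQL pipeline described above can be sketched in a few lines of Python: cultural-heritage facts as triples, and a single SPARQL-like triple-pattern match. The vocabulary and data are invented for the example.

```python
# RDF-style triples describing (hypothetical) cultural-heritage resources.
triples = {
    ("parthenon", "rdf:type",     "ex:Monument"),
    ("parthenon", "ex:locatedIn", "athens"),
    ("parthenon", "ex:built",     "-0438"),
    ("odeon",     "rdf:type",     "ex:Monument"),
    ("odeon",     "ex:locatedIn", "athens"),
}

def match(pattern):
    """Match one triple pattern; '?x'-style terms are variables.
    Returns variable bindings, like a single-pattern SPARQL query."""
    bindings = []
    for triple in triples:
        env = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if env.setdefault(term, value) != value:
                    break  # same variable bound to two different values
            elif term != value:
                break  # constant term does not match
        else:
            bindings.append(env)
    return bindings

# Analogue of: SELECT ?m WHERE { ?m rdf:type ex:Monument }
monuments = sorted(b["?m"] for b in match(("?m", "rdf:type", "ex:Monument")))
print(monuments)  # ['odeon', 'parthenon']
```

Real SPARQL engines join many such patterns and add filters and optional parts, but pattern matching with variable bindings is the core operation.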
54

Razvoj proširive softverske platforme za upravljanje kurikulumom u internacionalizovanom visokom obrazovanju / Extensible software platform for managing curriculum in internationalized higher education

Segedinac Milan 12 June 2014 (has links)
Purpose – The aim of the dissertation is the development of an extensible software platform for managing the curriculum in internationalized higher education. The platform contributes to the development of a shared language for curriculum study without limiting the local communities of curriculum research practice positioned in nation-specific educational contexts. The solution proposed in this dissertation is consistent with existing international standards, extended with additional requirements where needed, and is fully applicable to the higher education system of the Republic of Serbia.

Design/methodology/approach – The platform is based on the international standards MLO-AD and MLO ECTS IP/CC for representing learning opportunities. The standards are implemented as OWL ontologies. The system was developed using an iterative software development method, and the implementation uses the Java and Python programming languages, XML technologies, Web services and Semantic Web technologies. The verification of the metadata model and of the system as a whole was carried out on a real accredited degree program at the Faculty of Technical Sciences in Novi Sad.

Findings – The dissertation achieved the following results:
• A data model for managing the curriculum in internationalized higher education, based on the concept of the learning opportunity as the basic unit of curriculum management and relying on the standard metadata models for representing learning opportunities in the European Higher Education Area.
• A machine-readable representation of the model, with a syntax expressive enough for its implementation, and the implementation of the model itself as a set of OWL ontologies.
• A formal, machine-readable representation of educational objectives that allows curriculum consistency checking.
• An extensible platform architecture whose advantage is simple extension with new services, including integration with other systems.

Research limitations/implications – The main limitation of the theoretical framework underlying the platform is that existing curriculum management practice still deviates from the proclaimed guidelines, which limits the users' ability to make full use of the platform. Limitations of the proposed model concern the need to extend it to satisfy the specific needs of end users, implying additional effort to meet those needs. Limitations of the machine-readable representation are a direct consequence of the limitations of current Semantic Web technologies regarding semantic expressiveness and performance; the implications are a limited possibility of context-dependent observation of the curriculum and limited practical use of the model representation due to the performance problems of Semantic Web applications.

Practical implications – The main purpose of the model is to improve semantic interoperability in the European Higher Education Area by mapping information about courses and other learning opportunities onto an interoperable specification of learning opportunities. The platform has the following practical applications:
• advertising courses;
• mediated enrollment (brokerage services) in learning opportunities;
• comparison of learning opportunities (against reference benchmarks and against each other);
• evaluation and quality control of academic learning opportunities; and
• monitoring of student achievement (Transcript of Records, ToR).

Originality/value – The scientific contributions of this dissertation are the following:
• Articulating the theoretical framework that positions the platform within internationalized curriculum research and adopts the learning opportunity as the basic unit of curriculum management.
• A data model for managing the curriculum in internationalized higher education based on the concept of the learning opportunity.
• A machine-readable representation of the model, with a syntax expressive enough for its implementation, implemented as OWL ontologies.
• A formal, machine-readable representation of educational objectives that allows curriculum consistency checking.
• A software platform architecture based on the proposed model and Semantic Web technologies.
The proposed platform was verified through a prototype implementation applied to a case study of a real accredited degree program at the Faculty of Technical Sciences in Novi Sad.
55

Deep neural semantic parsing: translating from natural language into SPARQL / Análise semântica neural profunda: traduzindo de linguagem natural para SPARQL

Luz, Fabiano Ferreira 07 February 2019 (has links)
Semantic parsing is the process of mapping a natural-language sentence into a machine-readable, formal representation of its meaning. The LSTM encoder-decoder is a neural architecture with the ability to map a source sequence into a target one. We are interested in the problem of mapping natural language into SPARQL queries, and we seek strategies that do not rely on handcrafted rules, high-quality lexicons, manually built templates or other handmade complex structures. In this context, we present two contributions to the problem of semantic parsing, both departing from the LSTM encoder-decoder. While natural language has well-defined vector representation methods that exploit very large volumes of text, formal languages such as SPARQL suffer from a lack of suitable methods for vector representation. In the first contribution we improve the representation of SPARQL vectors: we start by obtaining an alignment matrix between the two vocabularies, natural-language and SPARQL terms, which allows us to refine the vector representation of SPARQL items; with this refinement we obtained better results in the subsequent training of the semantic parsing model. In the second contribution we propose a neural architecture, which we call Encoder CFG-Decoder, whose output conforms to a given context-free grammar. Unlike the traditional LSTM encoder-decoder, our model provides a grammatical guarantee for the mapping process, which is particularly important for practical cases where grammatical errors can cause critical failures. Results confirm that any output generated by our model obeys the given CFG, and we observe an improvement in translation accuracy compared with other results from the literature.
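The grammatical guarantee described above comes from restricting the decoder, at every step, to the productions the grammar allows. A minimal Python sketch of that idea follows; the toy grammar and the scoring function are invented stand-ins (in the real architecture the scores come from the neural decoder), not the thesis's actual grammar or model.

```python
# Toy CFG for a SPARQL-like query; terminals are plain tokens.
GRAMMAR = {
    "QUERY":  [["SELECT", "?x", "WHERE", "{", "TRIPLE", "}"]],
    "TRIPLE": [["?x", "PRED", "OBJ"]],
    "PRED":   [["dbo:capital"], ["dbo:country"]],
    "OBJ":    [["dbr:Brazil"], ["?y"]],
}

def decode(score, start="QUERY"):
    """Expand nonterminals left-to-right; `score` ranks the legal
    productions. Whatever the scores, the output is grammatical,
    because only productions licensed by the CFG can be chosen."""
    stack, output = [start], []
    while stack:
        symbol = stack.pop(0)
        if symbol in GRAMMAR:
            options = GRAMMAR[symbol]
            best = max(options, key=lambda prod: score(symbol, prod, output))
            stack = list(best) + stack
        else:
            output.append(symbol)  # terminal token
    return output

# A mock scorer that prefers dbo:capital and a concrete object.
mock = lambda nt, prod, out: 1.0 if prod[0] in ("dbo:capital", "dbr:Brazil") else 0.0
print(" ".join(decode(mock)))
# SELECT ?x WHERE { ?x dbo:capital dbr:Brazil }
```

An unconstrained decoder can emit any token at any step and thus produce unparseable queries; constraining the choice to grammar productions rules that failure mode out by construction.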
56

Querying a Web of Linked Data

Hartig, Olaf 28 July 2014 (has links)
During recent years, a set of best practices for publishing and connecting structured data on the World Wide Web (WWW) has emerged. These best practices are referred to as the Linked Data principles, and the resulting form of Web data is called Linked Data. The increasing adoption of these principles has led to the creation of a globally distributed space of Linked Data that covers various domains such as government, libraries, life sciences, and media. Approaches that conceive of this data space as a huge distributed database and enable the execution of declarative queries over it hold enormous potential: they allow users to benefit from a virtually unbounded set of up-to-date data. As a consequence, several research groups have started to study such approaches. However, the main focus of existing work is on the practical challenges that arise in this context; research on the foundations of such approaches is largely missing. This dissertation closes this gap.
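One family of approaches the dissertation studies executes queries by traversing links: starting from URIs mentioned in the query, documents are dereferenced and newly discovered URIs are followed. A minimal sketch, with an in-memory dict standing in for the Web and invented data:

```python
# Each "document" maps a URI to the triples published at that URI.
WEB = {
    "ex:alice": [("ex:alice", "knows", "ex:bob")],
    "ex:bob":   [("ex:bob", "knows", "ex:carol"), ("ex:bob", "name", '"Bob"')],
    "ex:carol": [("ex:carol", "name", '"Carol"')],
}

def traverse(seed_uris):
    """Dereference URIs reachable from the seeds, following links found in
    retrieved data, and collect all triples discovered along the way."""
    seen, frontier, triples = set(), list(seed_uris), []
    while frontier:
        uri = frontier.pop()
        if uri in seen or uri not in WEB:
            continue
        seen.add(uri)
        for triple in WEB[uri]:
            triples.append(triple)
            for term in (triple[0], triple[2]):
                if term.startswith("ex:") and term not in seen:
                    frontier.append(term)  # a newly discovered link to follow
    return triples

# Query: names of people reachable from ex:alice via links.
data = traverse(["ex:alice"])
names = sorted(o for s, p, o in data if p == "name")
print(names)  # ['"Bob"', '"Carol"']
```

The point of the sketch is that the queried "database" is discovered during execution rather than known in advance, which is precisely what makes the formal foundations (soundness, completeness, termination) of such query execution non-trivial.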
57

Linked Enterprise Data als semantischer, integrierter Informationsraum für die industrielle Datenhaltung / Linked Enterprise Data as semantic and integrated information space for industrial data

Graube, Markus 01 June 2018 (has links) (PDF)
Increasing collaboration in production networks and increased flexibility in planning and production processes are the industry's answers to growing demands regarding agility and the introduction of value-added services. This requires deeper digitalisation of all processes and connectivity to the information resources of partners. However, today's information systems are not able to meet the requirements of such an integrated, distributed information space. A promising candidate is Linked Data, which originates from the Semantic Web. Based on this approach, Linked Enterprise Data was developed, which extends the existing tools and processes so that an information space emerges that is usable and flexible for industry. The core idea is to raise information from specialized legacy tools to a semantic level, link it directly at the data level, even across organizational boundaries, and make it securely available for queries. Industrial requirements are met through the provision of the revision-control tool R43ples, integration with OPC UA via OPCUA2LD, connections to industrial systems (for example COMOS), a facility for model transformation with SPARQL, and fine-grained information protection for a SPARQL endpoint.
58

Un modèle de données pour bibliothèques numériques / A data model for digital libraries

Yang, Jitao 30 May 2012 (has links)
Digital libraries are complex information systems that store digital resources (e.g., text, images, sound, audio) as well as knowledge about digital or non-digital resources; this knowledge is referred to as metadata. We propose a data model for digital libraries that supports resource identification, the use of metadata and the re-use of stored resources, together with a query language supporting the discovery of resources. The model we propose is inspired by the architecture of the Web, which forms a solid, universally accepted basis for the notions and services expected from a digital library. We formalize our model as a first-order theory, in order to express the basic concepts of digital libraries without being constrained by technical considerations. The axioms of the theory give the formal semantics of the notions of the model and, at the same time, provide a definition of the knowledge that is implicit in a digital library. The theory is then translated into a Datalog program that, given a digital library, makes it possible to efficiently complete it with the knowledge implicit in it. The goal of our research is to contribute to the information management technology of digital libraries. In this way we demonstrate the theoretical feasibility of our digital library model, by showing that it can be efficiently implemented. Moreover, we demonstrate the model's practical feasibility by providing a full translation of the model into RDF and of the query language into SPARQL. We provide a sound and complete calculus for reasoning on the RDF graphs resulting from the translation and, based on this calculus, we prove the correctness of both translations, showing that the translation functions preserve the semantics of the digital library and of its query language.
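The "completion" step, deriving the knowledge implicit in a digital library with a Datalog program, can be sketched as naive forward chaining to a fixpoint. The facts and the two rules below are invented for illustration, not the axioms of the thesis's theory.

```python
# Hypothetical digital-library facts as (predicate, arg1, arg2).
facts = {
    ("partOf", "page3", "chapter1"),
    ("partOf", "chapter1", "book"),
    ("hasAuthor", "book", "eco"),
}

def rule_transitive(fs):
    # partOf(X, Z) :- partOf(X, Y), partOf(Y, Z).
    return {("partOf", x, z)
            for p1, x, y in fs if p1 == "partOf"
            for p2, y2, z in fs if p2 == "partOf" and y2 == y}

def rule_inherit_author(fs):
    # hasAuthor(X, A) :- partOf(X, Y), hasAuthor(Y, A).
    return {("hasAuthor", x, a)
            for p1, x, y in fs if p1 == "partOf"
            for p2, y2, a in fs if p2 == "hasAuthor" and y2 == y}

def complete(fs, rules):
    """Apply all rules until no new fact is derived (a fixpoint)."""
    while True:
        derived = set().union(*(r(fs) for r in rules))
        if derived <= fs:
            return fs
        fs = fs | derived

closure = complete(facts, [rule_transitive, rule_inherit_author])
print(sorted(closure - facts))  # the implicit knowledge made explicit
```

Real Datalog engines use semi-naive evaluation rather than recomputing every rule from scratch, but the result, the library completed with its implicit knowledge, is the same.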
59

Querying RDF(S) with Regular Expressions

Alkhateeb, Faisal 30 June 2008 (PDF)
RDF is a knowledge representation language dedicated to the annotation of resources within the Semantic Web. Although RDF can itself be used as a query language for querying an RDF knowledge base (using RDF entailment), the need for more expressive queries led to the definition of the SPARQL query language. SPARQL queries are defined on top of graph patterns, which are essentially RDF graphs with variables. SPARQL queries remain limited, as they cannot express queries involving an unbounded sequence of relations (e.g., "Is there a route from city A to city B using only trains or buses?"). We show that it is possible to extend the syntax and semantics of RDF, defining the PRDF language (for Path RDF), so that SPARQL can overcome this limitation by simply replacing basic graph patterns with PRDF graphs. We also extend PRDF to CPRDF (for Constrained Path RDF), allowing constraints to be expressed on the vertices of the traversed paths (e.g., "Moreover, one of the connections must provide a wireless connection."). We provide sound and complete algorithms for query answering (where the query is a PRDF or CPRDF graph and the knowledge base is an RDF graph) based on a particular homomorphism, together with a detailed complexity analysis. Finally, we use PRDF and CPRDF graphs to generalize SPARQL queries, defining the PSPARQL and CPSPARQL extensions, and provide experimental results using a complete implementation of these two languages.
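The motivating example above — reachability via an unbounded sequence of `train` or `bus` edges, i.e. a path expression like `(train|bus)+` — can be sketched as a breadth-first search over an edge-labeled graph. This is an illustrative toy evaluator, not the homomorphism-based algorithms of the thesis; the city names and edge labels are invented for the example.

```python
from collections import deque

def reachable(edges, start, goal, allowed):
    """Return True if goal is reachable from start via one or more edges
    whose labels are in `allowed` — the path expression (l1|l2|...)+."""
    adj = {}
    for (s, label, o) in edges:
        if label in allowed:
            adj.setdefault(s, []).append(o)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt == goal:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical transport graph: (from, label, to) triples.
edges = [
    ("Grenoble", "train", "Lyon"),
    ("Lyon", "bus", "Paris"),
    ("Lyon", "plane", "Nice"),
]

reachable(edges, "Grenoble", "Paris", {"train", "bus"})  # True
reachable(edges, "Grenoble", "Nice", {"train", "bus"})   # False (only by plane)
```

A PSPARQL engine evaluates such path expressions directly inside graph patterns, rather than requiring the client to enumerate paths of every possible length.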
60

Scalable Preservation, Reconstruction, and Querying of Databases in terms of Semantic Web Representations

Stefanova, Silvia January 2013 (has links)
This thesis addresses how Semantic Web representations, in particular RDF, can enable flexible and scalable preservation, recreation, and querying of databases. An approach has been developed for the selective, scalable, long-term archival of relational databases (RDBs) as RDF, implemented in the SAQ (Semantic Archive and Query) system. The archival of user-specified parts of an RDB is specified using A-SPARQL, an extension of SPARQL. SAQ automatically generates an RDF view of the RDB, the RD-view. The result of an archival query is a set of RDF triples stored in: i) a data archive file containing the preserved RDB content, and ii) a schema archive file containing sufficient metadata to reconstruct the archived database. To achieve scalable data preservation and recreation, SAQ uses special query-rewriting optimizations for the archival queries; experiments showed that these improve query execution and archival time compared with naïve processing. The performance of SAQ was compared with that of other systems supporting SPARQL queries over views of existing RDBs. When an archived RDB is to be recreated, the reloader module of SAQ first reads the schema archive file and executes a schema reconstruction algorithm to automatically reconstruct the RDB schema. The RDB thus created is populated by reading the data archive and converting the data into relational attribute values. For scalable recreation of RDF-archived data we developed the Triple Bulk Load (TBL) approach, in which the relational data is reconstructed using the bulk-load facility of the RDBMS. Our experiments show that the TBL approach is substantially faster than the naïve Insert Attribute Value (IAV) approach, despite the added sorting and post-processing. To view and query semi-structured Topic Maps data as RDF, the prototype system TM-Viewer was implemented.
A declarative RDF view of Topic Maps, the TM-view, is automatically generated by TM-Viewer using a conceptual schema developed for the Topic Maps data model. To achieve efficient processing of SPARQL queries to the TM-view, query rewrite transformations were developed and evaluated; they were shown to significantly improve query execution time. / eSSENCE
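The core of any RDB-to-RDF view like the RD-view is a mapping from relational rows to triples: each row becomes a subject IRI derived from its primary key, with one triple per non-key column. The sketch below shows that idea only; the IRI scheme, table name, and columns are invented for illustration and are not SAQ's actual mapping.

```python
def table_to_triples(table, rows, key, base="http://example.org/"):
    """Map each row of a relational table to RDF-style triples:
    the subject IRI is minted from the primary key, and each
    non-key column yields one (subject, predicate, value) triple."""
    triples = []
    for row in rows:
        subj = f"{base}{table}/{row[key]}"
        for col, val in row.items():
            if col == key:
                continue
            triples.append((subj, f"{base}{table}#{col}", val))
    return triples

# Hypothetical table contents.
rows = [{"id": 1, "title": "Thesis A", "year": 2013}]
triples = table_to_triples("publication", rows, key="id")
# → [("http://example.org/publication/1", "http://example.org/publication#title", "Thesis A"),
#    ("http://example.org/publication/1", "http://example.org/publication#year", 2013)]
```

Reconstruction runs the mapping in reverse: triples sharing a subject are grouped back into a row, which is why sorting the data archive by subject (as in the TBL approach) makes bulk reloading efficient.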
