  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Ontology Alignment Techniques for Linked Open Data Ontologies

Gu, Chen 13 December 2013 (has links)
No description available.
2

Conceptualization of the discourse of linked open data implementations in the cultural heritage sector : A qualitative analysis of linked open data documentation

Olsson, Nic January 2022 (has links)
Linked open data is used in the cultural heritage sector to structure and publish data and metadata on the Web. The technology improves interoperability between separate systems and enables new forms of exploration and discovery of cultural heritage. Linked open data can be implemented in various ways, and prior research shows that the number of decisions that must be made is seen as an obstacle to the diffusion of the technology; a lack of knowledge and expertise in the sector has also been identified. The purpose of this study is to ease the use of linked open data in the cultural heritage sector by examining how implementations are expressed in documents. This focus can be summarized by the following research question: Which aspects of cultural heritage linked open data implementations are expressed in documents? The study employed a qualitative document analysis in which the content of relevant documents was coded inductively. The analyzed material consisted of documents produced in relation to four Swedish cultural heritage projects that utilize linked open data. Seventeen aspects were identified and grouped into seven categories: Background, Cultural heritage, Data, Data processing methods, External sources, Outcomes, and Published data. The aspects and categories were compiled into a conceptual model that displays how implementations of linked open data are expressed in documents. An analysis of the relations between the occurrences of the aspects showed that data modelling, data processing methods, and type of cultural heritage are presented as the most central aspects of linked open data implementations. The fact that data processing methods could not be split into multiple aspects shows how flexible the technology is and illustrates the many decisions that implementing institutions have to make. These results match findings from previous studies.
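The relational analysis mentioned in the abstract — examining which coded aspects occur together across documents — can be sketched as a simple co-occurrence count. This is only an illustration of the general technique; the document codings below are invented, borrowing the category names from the study:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(codings):
    """Count how often pairs of coded aspects appear in the same document."""
    counts = Counter()
    for aspects in codings:
        for pair in combinations(sorted(set(aspects)), 2):
            counts[pair] += 1
    return counts

# Hypothetical codings of four project documents (invented for illustration).
docs = [
    ["Data", "Data processing methods", "Cultural heritage"],
    ["Data", "Data processing methods", "Published data"],
    ["Background", "Data", "Cultural heritage"],
    ["Data processing methods", "Cultural heritage", "Outcomes"],
]
pairs = cooccurrence(docs)
# The most frequent pairs hint at the most central aspects.
print(pairs[("Data", "Data processing methods")])  # 2
```

The pair counts can then be inspected to see which aspects cluster, mirroring how a conceptual model is derived from coding frequencies.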
3

Instrumente, Interoperabilität, Semantic Web: Ansätze für eine spartenübergreifende Verlinkung musikinstrumentenbezogener Daten

Riedel, Alan 30 October 2019 (has links)
Presentation slides from a talk given at the annual conference of IAML Deutschland, 19 September 2018, in Augsburg.
4

Personnalisation des MOOC par la réutilisation de Ressources Éducatives Libres / MOOC personalization by reusing Open Educational Resources

Hajri, Hiba 08 June 2018 (has links)
For many years, personalization in technology-enhanced learning (TEL) has been a major subject of research. With the spread of Massive Open Online Courses (MOOCs), the personalization issue has become more acute: a single MOOC can be followed by thousands of learners with heterogeneous profiles (knowledge, educational level, goals, etc.). It is therefore necessary to present pedagogical content adapted to these profiles so that learners get the most out of the MOOC. At the same time, the amount of Open Educational Resources (OER) available on the Web is growing steadily. Because producing quality OER is costly and time-consuming, these resources need to be reusable in contexts different from the ones for which they were created. Metadata schemas are used to describe OER and ease their discovery, but the use of multiple schemas has led to isolated repositories of heterogeneous, non-interoperable descriptions. A solution adopted in the literature is to apply Linked Open Data (LOD) principles to OER descriptions. In this thesis, we address MOOC personalization and OER reuse. We design a recommendation technique that computes a set of OERs adapted to the profile of a learner attending a given MOOC, while respecting the specificities of that MOOC. To select OERs, we focus on those whose descriptions follow LOD principles and are stored in repositories accessible on the Web through standardized means of access. Our recommender system is implemented in the MOOC platform Open edX and evaluated using a micro-task platform.
5

Qalamos: Connecting Manuscript Traditions

Becker, Michael, Krause, Anett, Schmid, Larissa 09 February 2024 (has links)
No description available.
6

Covering or complete? : Discovering conditional inclusion dependencies

Bauckmann, Jana, Abedjan, Ziawasch, Leser, Ulf, Müller, Heiko, Naumann, Felix January 2012 (has links)
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes; only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall, and we propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
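The precision/recall-style quality measures for conditions can be illustrated with a toy sketch (not the authors' algorithm): precision asks how many tuples selected by the condition actually satisfy the inclusion, and recall asks how many inclusion-satisfying tuples the condition captures.

```python
def condition_quality(tuples, condition, included):
    """Precision/recall of a condition w.r.t. the tuples satisfying an inclusion.

    tuples    : list of dicts (the relation instance)
    condition : dict mapping attribute -> required value
    included  : predicate, True for tuples whose value appears in the referenced relation
    """
    selected = [t for t in tuples if all(t.get(a) == v for a, v in condition.items())]
    satisfying = [t for t in tuples if included(t)]
    true_pos = [t for t in selected if included(t)]
    precision = len(true_pos) / len(selected) if selected else 0.0
    recall = len(true_pos) / len(satisfying) if satisfying else 0.0
    return precision, recall

# Toy instance: orders whose customer_id must appear in a customers table,
# but only for rows with status = "paid".
customers = {1, 2}
orders = [
    {"customer_id": 1, "status": "paid"},
    {"customer_id": 2, "status": "paid"},
    {"customer_id": 99, "status": "draft"},
]
p, r = condition_quality(orders, {"status": "paid"}, lambda t: t["customer_id"] in customers)
print(p, r)  # 1.0 1.0 -> "status = paid" is both covering and complete here
```

Precision 1.0 corresponds to a covering condition (everything it selects satisfies the inclusion) and recall 1.0 to a complete one (it selects everything that satisfies it).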
7

Le Linked Data à l'université : la plateforme LinkedWiki / Linked Data at university : the LinkedWiki platform

Rafes, Karima 25 January 2019 (has links)
The Center for Data Science of the University of Paris-Saclay deployed a platform compatible with Linked Data in 2016. Because researchers face many difficulties with these technologies, an approach and a platform called LinkedWiki were designed and tested on top of the university's cloud (IaaS) to enable the creation of modular virtual research environments (VREs) compatible with Linked Data. We were thus able to offer researchers a way to discover, produce, and reuse the research data available within the Linked Open Data, i.e., the global information system emerging at the scale of the Web. This experience showed that the operational use of Linked Data within a university is entirely feasible with this approach. However, some problems persist, such as (i) compliance with Linked Data protocols and (ii) the lack of suitable tools for querying the Linked Open Data with SPARQL. We propose solutions to both problems. To verify compliance with the SPARQL protocol within a university's Linked Data, we created the SPARQL Score indicator, which evaluates the conformance of SPARQL services before their deployment in the university's information system. In addition, to help researchers query the LOD, we implemented SPARQLets-Finder, a demonstrator showing that it is possible to facilitate the design of SPARQL queries with autocompletion tools, without prior knowledge of the RDF schemas within the LOD.
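The autocompletion idea behind a tool like SPARQLets-Finder can be sketched minimally: suggest RDF terms whose local name matches what the user has typed so far. This is an assumption-laden toy (the term list would in practice be harvested from an endpoint beforehand; the matching logic here is not the actual implementation):

```python
def autocomplete(prefix, terms, limit=5):
    """Suggest RDF terms whose local name starts with the typed prefix (case-insensitive)."""
    prefix = prefix.lower()
    matches = [
        t for t in terms
        if t.rsplit("/", 1)[-1].split("#")[-1].lower().startswith(prefix)
    ]
    return sorted(matches)[:limit]

# Example vocabulary, as it might be pre-harvested from a SPARQL endpoint.
vocab = [
    "http://xmlns.com/foaf/0.1/name",
    "http://xmlns.com/foaf/0.1/nick",
    "http://purl.org/dc/terms/title",
    "http://www.w3.org/2000/01/rdf-schema#label",
]
print(autocomplete("na", vocab))  # ['http://xmlns.com/foaf/0.1/name']
```

A real tool would rank by usage frequency in the endpoint and inject the chosen term into the query being edited, but the prefix-matching core is the same.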
8

O controle de autoridade no consórcio VIAF / The authority control in the consortium VIAF

Romanetto, Luiza de Menezes [UNESP] 24 January 2017 (has links)
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
Authority control ensures consistency in information systems. Adopted in cataloging, the topic gained greater visibility during the 1980s with the automation of catalogs. Since then, projects have emerged aimed at the exchange and sharing of authority data. The Virtual International Authority File (VIAF) is an international cooperative consortium of libraries and national agencies that publishes authority files as Linked Open Data. This study was developed to answer the research question: how does authority control in VIAF contribute to the realization of the Semantic Web, providing higher quality to search and information retrieval systems? To this end, three objectives were defined: 1) to describe the principles, techniques, and standards that support authority control in cataloging; 2) to analyze the authority control of personal, corporate, and geographic names; 3) to present VIAF, its origin, the technologies involved in its structure, and its prospective contribution to the Semantic Web. The methodology comprises a qualitative study of an applied nature with exploratory objectives. The results present the international scope of VIAF, characterize its technologies, and describe how authority records are established in the consortium. The authority records established in VIAF aggregate value vocabularies maintained by the main cataloging agencies in the world, which have been adopted as Linked Open Data. This highlights the relevance of the consortium to the international community.
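The basic idea behind aggregating authority files — clustering records from different agencies that describe the same entity — can be sketched by grouping on a normalized form of the heading. This is purely illustrative; VIAF's actual matching is far more sophisticated, and the records below are invented:

```python
import unicodedata
from collections import defaultdict

def normalize(heading):
    """Crude normalization of a name heading: strip accents, case, and punctuation."""
    decomposed = unicodedata.normalize("NFKD", heading)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return " ".join(stripped.lower().replace(",", " ").replace(".", " ").split())

def cluster_records(records):
    """Group (agency, heading) authority records that share a normalized heading."""
    clusters = defaultdict(list)
    for agency, heading in records:
        clusters[normalize(heading)].append(agency)
    return clusters

# Invented authority records from three hypothetical source files.
records = [
    ("LC", "Machado de Assis, Joaquim Maria"),
    ("BnF", "Machado de Assis, Joaquim Maria"),
    ("DNB", "Assis, Machado de"),
]
clusters = cluster_records(records)
print(clusters["machado de assis joaquim maria"])  # ['LC', 'BnF']
```

Note how the DNB record, with a differently structured heading, falls outside the cluster — exactly the kind of variation that real authority matching must resolve with richer evidence (dates, works, cross-references).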
9

Návrh postupu tvorby aplikace pro Linked Open Data / The proposal of application development process for Linked Open Data

Budka, Michal January 2014 (has links)
This thesis deals with the issue of Linked Open Data. Its goal is to introduce the reader to the field as a whole and to the possibility of using Linked Open Data to develop useful applications, by proposing a new development process focused on such applications. The theoretical part offers an insight into Open Data, Linked Open Data, and NoSQL database systems and their usability in this field; it focuses mainly on graph database systems and compares them with relational database systems using predefined criteria. Additionally, an application is developed using the proposed development process: a tool for data presentation and statistical visualisation of open data sets published by the Supreme Audit Office and the Czech Trade Inspection. The application serves mainly to verify the proposed development process and to demonstrate the connectivity of open data published by two different organizations. The thesis includes the process of selecting a development methodology, which is then used to optimise work on the implementation of the resulting application, and the process of selecting a graph database system, which is used to store and modify open data for the purposes of the application.
10

Využití propojených dat na webu ke tvorbě strategické znalostní hry / The use of linked open data for strategic knowledge game creation

Turečková, Šárka January 2015 (has links)
The general theme of this thesis is the use of linked open data for the creation of games. Specifically, it addresses the use of DBpedia for automatically generating questions suitable for use in games. It proposes suitable ways of selecting the desired objects from DBpedia and of obtaining and processing relevant information about them, including a method for estimating the renown of individual objects. Some of the methods are then applied in a program that generates questions from data obtained through DBpedia at application run time. The practical feasibility of using these generated questions for gaming purposes is subsequently demonstrated through the design, prototype, and tests of a strategic multiplayer knowledge game. The thesis also summarizes the major issues and possible complications of using data obtained through the DBpedia or DBpedia Live endpoints, and briefly discusses current challenges and opportunities for the mutual utilization of games and LOD.
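The question-generation idea can be sketched as follows: take a fact (subject, property, value) and build a multiple-choice question whose distractors are values of the same property for other subjects. This is a toy under stated assumptions — the data below is invented, standing in for triples that would be fetched from the DBpedia SPARQL endpoint:

```python
import random

def generate_question(triples, subject, prop, rng):
    """Build a multiple-choice question: correct value plus distractors from other subjects."""
    correct = triples[subject][prop]
    distractors = [
        t[prop] for s, t in triples.items()
        if s != subject and prop in t and t[prop] != correct
    ]
    options = [correct] + rng.sample(distractors, k=min(3, len(distractors)))
    rng.shuffle(options)
    return {"question": f"What is the {prop} of {subject}?",
            "options": options,
            "answer": correct}

# Invented facts; a real implementation would query DBpedia for them.
data = {
    "Prague": {"country": "Czech Republic"},
    "Vienna": {"country": "Austria"},
    "Berlin": {"country": "Germany"},
}
q = generate_question(data, "Prague", "country", random.Random(0))
print(q["answer"])  # Czech Republic
```

Drawing distractors from the same property keeps the wrong answers plausible, which is what makes such automatically generated questions usable in a game.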
