
Using DevOps principles to continuously monitor RDF data quality

One approach to continuously maintain a certain level of data quality is an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, which were introduced in software development and DevOps to perform continuous source code checks. By investigating possible tools and discussing the specific requirements of RDF data sets, we derive an integration pipeline that combines current approaches from software development and the Semantic Web and reuses existing tools. As these tools were not built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefit of using such a pipeline.
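The abstract does not reproduce the pipeline configuration here; as a rough illustration of the idea, the following Python sketch (using the rdflib library) shows how a single quality metric could be checked in a CI stage. The file name dataset.ttl and the rdfs:label metric are assumptions chosen for the example, not details taken from the paper.

    #!/usr/bin/env python3
    # Sketch of a CI quality gate for an RDF data set: parse a Turtle file
    # and fail the build (non-zero exit) when a simple metric is violated.
    # The file name and the metric below are illustrative assumptions.
    import sys
    from rdflib import Graph

    DATA_FILE = "dataset.ttl"  # hypothetical data file in the repository

    # Example metric: every typed resource should carry an rdfs:label.
    MISSING_LABELS = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT DISTINCT ?s WHERE {
        ?s a ?type .
        FILTER NOT EXISTS { ?s rdfs:label ?label }
    }
    """

    def main() -> int:
        graph = Graph()
        graph.parse(DATA_FILE, format="turtle")  # a syntax error aborts the check

        violations = list(graph.query(MISSING_LABELS))
        for row in violations:
            print(f"missing rdfs:label: {row.s}")
        if violations:
            return 1  # non-zero exit code marks the CI stage as failed
        print(f"{len(graph)} triples checked, metric satisfied")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Run as a step of the pipeline after each change to the data set; the non-zero exit code is what lets a generic CI server such as Jenkins or GitLab CI flag the offending data set revision as failing.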

Identifier: oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:15940
Date: 01 August 2017
Creators: Meissner, Roy; Junghanns, Kurt
Publisher: Universität Leipzig
Source Sets: Hochschulschriftenserver (HSSS) der SLUB Dresden
Language: English
Detected Language: English
Type: info:eu-repo/semantics/acceptedVersion, doc-type:article, info:eu-repo/semantics/article, doc-type:Text
Rights: info:eu-repo/semantics/openAccess
Relation: 10.1145/2993318.2993351, 978-1-4503-4752-5
