191

Qualité contextuelle des données : détection et nettoyage guidés par la sémantique des données / Contextual data quality: Detection and cleaning guided by data semantics

Ben Salem, Aïcha 31 March 2015 (has links)
Nowadays, complex applications such as knowledge extraction, data mining, e-learning or web applications use heterogeneous and distributed data. The quality of any decision depends on the quality of the data used, and the absence of rich, accurate and reliable data can potentially lead an organization to make bad decisions. This thesis aims at assisting the user in their data quality approach. The goal is to better extract, mix, interpret and reuse data. For this, the data must be enriched with their semantic meaning, data types, constraints and comments. The first part deals with the semantic recognition of the schema of a data source. It extracts the semantics of the data from all the available information, including the data and the metadata. It consists, on the one hand, of categorizing the data by assigning each column a category and possibly a sub-category and, on the other hand, of establishing inter-column relations and possibly discovering the semantics of the manipulated data source. Once detected, these inter-column links offer a better understanding of the source as well as alternatives for correcting the data; this approach allows a large number of syntactic and semantic anomalies to be detected automatically. The second part is the data cleansing, using the anomaly reports produced by the first part. It allows corrections within a column (data homogenization), between columns (semantic dependencies), and between rows (elimination of duplicate and similar records). Throughout this process, recommendations and analyses are provided to the user.
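As an illustration of the kind of column-level semantic recognition and anomaly reporting described above, the following minimal Python sketch assigns each column a category by majority pattern matching and flags non-conforming values; the pattern catalogue and the majority-vote rule are hypothetical, not the thesis's actual categories or algorithm.

```python
import re
from collections import Counter

# Hypothetical pattern catalogue; the thesis's real categories and rules are not specified here.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "date":  re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "phone": re.compile(r"^\+?\d[\d\s-]{6,}$"),
}

def categorize_column(values):
    """Assign the column a dominant semantic category by majority vote over matching patterns."""
    votes = Counter()
    for v in values:
        for category, pattern in PATTERNS.items():
            if pattern.match(v.strip()):
                votes[category] += 1
    return votes.most_common(1)[0][0] if votes else "unknown"

def report_anomalies(values, category):
    """Flag values that do not conform to the column's inferred category (syntactic anomalies)."""
    if category == "unknown":
        return []
    pattern = PATTERNS[category]
    return [(i, v) for i, v in enumerate(values) if not pattern.match(v.strip())]

emails = ["a@example.org", "b@example.org", "not-an-email", "c@example.org"]
cat = categorize_column(emails)
print(cat, report_anomalies(emails, cat))   # -> email [(2, 'not-an-email')]
```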
192

Kvalita kmenových dat a datová synchronizace v segmentu FMCG / Master Data Quality and Data Synchronization in FMCG

Tlučhoř, Tomáš January 2013 (has links)
This master thesis deals with the topic of master data quality at retailers and suppliers of fast-moving consumer goods. The objective is to map the flow of product master data in the FMCG supply chain and to identify the causes of poor data quality, with an emphasis on analyzing the process of listing new items at retailers. Global data synchronization is one of the tools for increasing the efficiency of the listing process and improving master data quality; another objective is therefore to clarify the causes of the low adoption of global data synchronization in the Czech market. The thesis also suggests measures leading to better master data quality in FMCG and to a wider use of global data synchronization in the Czech Republic. The thesis consists of a theoretical and a practical part. The theoretical part defines several terms and explores supply chain operation and communication; it also covers the theory of data quality and its governance. The practical part addresses the objectives of the thesis, which are accomplished on the basis of a survey among FMCG suppliers and retailers in the Czech Republic. The thesis enriches the academic literature, which currently pays little attention to master data quality in FMCG and to global data synchronization. Retailers and suppliers of FMCG can use the results as an inspiration for improving the quality of their master data; several methods for achieving better data quality are introduced. The thesis was assigned by the non-profit organization GS1 Czech Republic, which can use the results as supporting material for the development of its next global data synchronization strategy.
193

Komplexní řízení kvality dat a informací / Towards Complex Data and Information Quality Management

Pejčoch, David January 2010 (has links)
This work deals with the issue of Data and Information Quality. It critically assesses the current state of knowledge of the various methods used for Data Quality Assessment and for Data (Information) Quality improvement, and proposes new principles where this critical assessment revealed gaps. The main idea of this work is the concept of Data and Information Quality Management across the entire universe of data. This universe comprises all data sources that the respective subject comes into contact with and that are used in its existing or planned processes. For all these data sources, the approach considers setting a consistent set of rules, policies and principles with respect to the current and potential benefits of these resources, while also taking into account the potential risks of their use. An imaginary red thread running through the text is the importance of additional knowledge within the process of Data (Information) Quality Management. The introduction of a knowledge base oriented towards supporting Data (Information) Quality Management (QKB) is therefore one of the fundamental principles of the set of best practices proposed by the author.
194

Product Information Management

Antonov, Anton January 2012 (has links)
Product Information Management (PIM) is a field that deals with product master data management and combines the experience and principles of data integration and data quality into a single base. Product Information Management merges the specific attributes of products across all channels in the supply chain; by unifying, centralizing and standardizing product information on one platform, timely, high-quality information with added value can be achieved. The goal of the theoretical part of the thesis is to construct a picture of PIM, place it in a broader context, define and describe the various parts of a PIM solution, describe the main differences in characteristics between product data and customer data, and summarize the available information on the administration and management of PIM data quality knowledge bases relevant to solving practical problems. The practical part of the thesis focuses on designing the structure, the content and the method of populating the knowledge base of a Product Information Management solution in the environment of the DataFlux software tools from SAS Institute. It further includes the analysis of real product data, the design of the definitions and objects of the knowledge base, the creation of a reference database, and the testing of the knowledge base with the help of specially designed web services.
195

MDM of Product Data / MDM produktovych dat (MDM of Product Data)

Čvančarová, Lenka January 2012 (has links)
This thesis focuses on Master Data Management of product data. At present, most publications on MDM concentrate on customer data, and only a very limited number of sources focus solely on product data; even the publications that attempt to cover MDM in full depth are typically very customer-oriented. The lack of literature oriented towards Product MDM became one of the motivations for this thesis. Another motivation was to outline and analyze the specifics of Product MDM in the context of its implementation and the software requirements it places on a vendor of MDM application software. For this, I chose to create and describe a methodology for implementing MDM of product data. The methodology was derived from personal experience on projects focused on MDM of customer data and applied to the findings of the theoretical part of this thesis. By analyzing the characteristics of product data, their impact on MDM implementation and their requirements for application software, this thesis helps vendors of Customer MDM understand the challenges of Product MDM and thus enter the product data MDM domain. Moreover, this thesis can serve as an information resource for enterprises considering adopting MDM of product data into their infrastructure.
196

Um estudo sobre qualidade de dados em biodiversidade: aplicação a um sistema de digitalização de ocorrências de espécies / A study about data quality in biodiversity: application to a species occurrences digitization system

Allan Koch Veiga 09 February 2012 (has links)
To fight the current environmental sustainability crisis, several studies on biodiversity and the environment have been conducted in order to support efficient strategies for the conservation and sustainable use of natural resources. These studies are based on the assessment and monitoring of biodiversity, which take place through the collection, storage, analysis, simulation, modeling, visualization and sharing of a significant volume of biodiversity data on a broad temporal and spatial scale. Species occurrence data are a particularly important type of biodiversity data because they are widely used in many studies. Nevertheless, for the analyses and models derived from these data to be reliable, the data used must be of high quality. Thus, to improve the Data Quality (DQ) of species occurrence data, the aim of this work was to conduct a study of DQ applied to such data that would allow DQ to be assessed and improved through error-prevention techniques and resources. The study was applied to an Information System (IS) designed to digitize species occurrences, the Biodiversity Data Digitizer (BDD), which was developed within the scope of the Inter-American Biodiversity Information Network Pollinators Thematic Network (IABIN-PTN) and BioAbelha FAPESP projects. A literature review of species occurrence data and of their most relevant data domains was conducted. For the most important data domains identified (taxon, geospatial and location), a study on DQ assessment was performed, in which important DQ dimensions (aspects) and the problems affecting these dimensions were identified, defined and interrelated. Based on this study, computational resources that would allow the DQ to be improved by reducing errors were identified. Using an error-prevention DQ Management approach, 13 computing resources supporting the prevention of 8 DQ problems were identified, thereby improving the accuracy, precision, completeness, consistency, source credibility and believability of taxonomic, geospatial and location data of species occurrences. These resources were implemented in two tools integrated into the BDD IS. The first is the BDD Taxon Tool, which facilitates the entry of error-free taxonomic occurrence data by means of, among other resources, fuzzy matching techniques and suggestions of taxonomic names and hierarchies based on the Catalog of Life. The second tool, the BDD Geo Tool, helps to fill in error-free geospatial and location data about species occurrences by means of georeferencing from natural-language location descriptions, reverse georeferencing and interactive Google Earth maps, among other resources. This work showed that, with certain computing resources integrated into an IS, DQ problems can be reduced by preventing errors; as a consequence, the DQ in specific data domains is improved along certain DQ dimensions.
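The fuzzy-matching suggestion step described for the BDD Taxon Tool can be illustrated with a minimal sketch based on Python's standard difflib; the reference list and the similarity cutoff below are purely illustrative assumptions, not the tool's actual name list or matching algorithm.

```python
import difflib

# Illustrative reference list; the BDD Taxon Tool draws its names from the Catalog of Life.
REFERENCE_TAXA = ["Apis mellifera", "Bombus terrestris", "Melipona quadrifasciata"]

def suggest_taxon(name, cutoff=0.8):
    """Return the closest accepted taxon names for a possibly misspelled entry."""
    return difflib.get_close_matches(name, REFERENCE_TAXA, n=3, cutoff=cutoff)

print(suggest_taxon("Apis melifera"))   # -> ['Apis mellifera']
print(suggest_taxon("Vespa sp."))       # -> [] (no confident suggestion; flag for manual review)
```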
197

Sampling, qualification and analysis of data streams / Échantillonnage, qualification et analyse des flux de données

El Sibai, Rayane 04 July 2018 (has links)
An environmental monitoring system continuously collects and analyzes the data streams generated by environmental sensors. The goal of the monitoring process is to extract useful and reliable information and to infer new knowledge that helps the network operator make the right decisions quickly. This whole process, from data collection to data analysis, raises two key problems: data volume and data quality. On the one hand, the throughput of the generated data streams has kept increasing over recent years, producing a large volume of data continuously sent to the monitoring system. The data arrival rate is very high compared to the processing and storage capacities available to the monitoring system, so permanent and exhaustive storage of the data is very expensive, and sometimes impossible. On the other hand, in real-world settings such as sensor environments, the data are often dirty: they contain noisy, erroneous and missing values, which can lead to faulty and misleading results. In this thesis, we propose a solution called native filtering to deal with the problems of data quality and data volume. Upon receipt of the data streams, the quality of the data is evaluated and improved in real time, based on a data quality management model that we also propose in this thesis. Once qualified, the data are summarized using sampling algorithms. In particular, we focus on the analysis of the Chain-sample algorithm, which we compare against other reference algorithms such as probabilistic sampling, deterministic sampling and weighted sampling. We also propose two new versions of the Chain-sample algorithm that significantly improve its execution time. Data stream analysis is also addressed in this thesis, with a particular interest in anomaly detection. Two algorithms are studied: the Moran scatterplot for the detection of spatial anomalies and CUSUM for the detection of temporal anomalies. We have designed a method that improves the estimation of the start and end times of the anomalies detected by CUSUM. Our work was validated by simulations as well as by experiments on two different real-world data sets: the sensor data from the drinking water distribution network provided as part of the Waves project, and the data from the Velib bike-sharing system.
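As a rough illustration of the CUSUM-style temporal anomaly detection mentioned above, the sketch below accumulates deviations above a known baseline and reports an estimated start index together with the detection index; the baseline, slack and threshold values are assumptions for the example, not parameters from the thesis, and the start-time estimation here is far simpler than the method the thesis proposes.

```python
def cusum(stream, baseline, slack=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate deviations above the baseline and
    report the index at which the cumulative sum crosses the threshold."""
    s = 0.0
    start = None
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - baseline - slack))
        if s > 0 and start is None:
            start = i                      # candidate start of the drift
        elif s == 0:
            start = None                   # statistic reset; no ongoing drift
        if s > threshold:
            return start, i                # (estimated start index, detection index)
    return None

readings = [10, 10.2, 9.9, 10.1, 12.5, 12.8, 13.1, 13.4, 12.9]
print(cusum(readings, baseline=10.0))      # -> (4, 6) with these illustrative values
```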
198

Využití podnikových dat k zabezpečování kvality výrobku / Use of company data to ensure product quality

Gruber, Jakub January 2021 (has links)
The task of the thesis is a theoretical analysis and description of the use of company data, with emphasis on a systematic analysis of the problem. A specific production process and the data available from it are evaluated, leading to a technical and economic assessment.
199

Linked Data Quality Assessment and its Application to Societal Progress Measurement

Zaveri, Amrapali 17 April 2015 (has links)
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration, where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows information to be published and exchanged in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources, but also enables the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality: incomplete, inconsistent or inaccurate data gravely affects the end results, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; datasets containing quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and to make this quality information explicit. Assessing data quality is a particular challenge in LD because the underlying data stem from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes quality assessment crucial for measuring how accurately the real world is represented. On the document Web, data quality can only be indirectly or vaguely defined, but for LD there is a need for more concrete and measurable data quality metrics. Such metrics include the correctness of facts with respect to the real world, the adequacy of the semantic representation, the quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology employs LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia, and the second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand the quality issues and fix them. Finally, we consider a domain-specific use case that consumes LD and relies on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then use them to build the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
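To make the idea of a concrete, measurable LD quality metric tangible, the sketch below computes a simple "typedness" ratio over a small in-memory RDF graph using rdflib. It is only an illustration in the spirit of such metrics, not one of the 69 metrics formally defined in the thesis, and the example namespace and triples are invented for the demonstration.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/")   # example namespace, not from the thesis

g = Graph()
g.add((EX.Leipzig, RDF.type, EX.City))
g.add((EX.Leipzig, RDFS.label, Literal("Leipzig")))
g.add((EX.Saxony, RDFS.label, Literal("Saxony")))   # untyped resource

# "Typedness": share of distinct subjects that carry at least one rdf:type statement.
subjects = set(g.subjects())
typed = set(g.subjects(RDF.type, None))
ratio = len(typed) / len(subjects) if subjects else 0.0
print(f"{len(subjects)} subjects, typedness = {ratio:.0%}")   # -> 2 subjects, typedness = 50%
```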
200

Using DevOps principles to continuously monitor RDF data quality

Meissner, Roy, Junghanns, Kurt 01 August 2017 (has links)
One approach to continuously achieving a certain data quality level is to use an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, which were introduced in the area of software development and DevOps to perform continuous source code checks. By investigating possible tools and discussing the specific requirements of RDF data sets, an integration pipeline is derived that combines current approaches from software development and the Semantic Web and reuses existing tools. As these tools were not built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefit of using such a pipeline.
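A continuous check of this kind can be wired into a CI pipeline as a small gate step that fails the build when the data set does not parse or a quality metric drops below a threshold. The script below is a hypothetical sketch of such a step, not the authors' actual pipeline; the file path, the metric and the threshold are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Hypothetical CI quality gate for an RDF data set (illustrative sketch only)."""
import sys
from rdflib import Graph, RDF, URIRef

DATA_FILE = "data/dataset.ttl"      # assumed path for this sketch
MIN_TYPED_RATIO = 0.9               # illustrative threshold, not from the paper

def main() -> int:
    g = Graph()
    try:
        g.parse(DATA_FILE, format="turtle")   # syntax check
    except Exception as err:
        print(f"FAIL: {DATA_FILE} does not parse: {err}")
        return 1
    subjects = {s for s in g.subjects() if isinstance(s, URIRef)}
    typed = {s for s in g.subjects(RDF.type, None) if isinstance(s, URIRef)}
    ratio = len(typed) / len(subjects) if subjects else 1.0
    if ratio < MIN_TYPED_RATIO:
        print(f"FAIL: only {ratio:.1%} of subjects are typed (minimum {MIN_TYPED_RATIO:.0%})")
        return 1
    print(f"OK: {len(g)} triples, {ratio:.1%} of subjects typed")
    return 0

if __name__ == "__main__":
    sys.exit(main())   # non-zero exit code makes the CI step fail
```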
