11

Measurement properties of respondent-defined rating-scales : an investigation of individual characteristics and respondent choices

Chami-Castaldi, Elisa January 2010 (has links)
It is critical for researchers to be confident of the quality of survey data. Problems with data quality often relate to measurement method design, through choices made by researchers in their creation of standardised measurement instruments. This is known to affect the way respondents interpret and respond to these instruments, and can result in substantial measurement error. Current methods for removing measurement error are post-hoc and have been shown to be problematic. This research proposes that innovations can be made through the creation of measurement methods that take respondents' individual cognitions into consideration, to reduce measurement error in survey data. Specifically, the aim of the study was to develop and test a measurement instrument capable of having respondents individualise their own rating-scales. A mixed methodology was employed. The qualitative phase provided insights that led to the development of the Individualised Rating-Scale Procedure (IRSP). This electronic measurement method was then tested in a large multi-group experimental study, where its measurement properties were compared to those of Likert-Type Rating-Scales (LTRSs). The survey included pre-validated psychometric constructs, which provided a baseline for comparing the methods and allowed exploration of whether certain individual characteristics are linked to respondent choices. Structural equation modelling was used to analyse the survey data. Whilst no strong associations were found between individual characteristics and respondent choices, the results demonstrated that the IRSP is reliable and valid. This study has produced a dynamic measurement instrument that accommodates individual-level differences not addressed by typical fixed rating-scales.
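As a rough illustration of the kind of reliability comparison such a study involves (the thesis itself relies on structural equation modelling; the respondent scores and scale length below are hypothetical), Cronbach's alpha is one conventional statistic for the internal consistency of a multi-item rating-scale:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = item_scores.shape[1]                          # number of items in the scale
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents rating 4 items of one construct
scores = np.array([[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [5, 5, 4, 5],
                   [3, 3, 3, 4],
                   [1, 2, 1, 2]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

The same computation run on responses collected with two scale formats gives a simple first comparison of their reliability before any structural modelling.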
12

Kvalita kmenových dat a datová synchronizace v segmentu FMCG / Master Data Quality and Data Synchronization in FMCG

Tlučhoř, Tomáš January 2013 (has links)
This master thesis addresses the topic of master data quality at retailers and suppliers of fast moving consumer goods. The objective is to map the flow of product master data in the FMCG supply chain and identify the causes of poor data quality. Emphasis is placed on analyzing the listing process for new items at retailers. Global data synchronization is one of the tools for increasing the efficiency of the listing process and improving master data quality; a further objective is therefore to clarify the causes of the low adoption of global data synchronization in the Czech market. The thesis also suggests measures leading to better master data quality in FMCG and to wider adoption of global data synchronization in the Czech Republic. The thesis consists of a theoretical and a practical part. The theoretical part defines key terms and explores supply chain operation and communication; it also covers the theory of data quality and its governance. The practical part addresses the objectives of the thesis, building on the results of a survey among FMCG suppliers and retailers in the Czech Republic. The thesis enriches the academic literature, which currently pays little attention to master data quality in FMCG and global data synchronization. Retailers and suppliers of FMCG can use the results as inspiration for improving the quality of their master data; several methods for achieving better data quality are introduced. The thesis was commissioned by the non-profit organization GS1 Czech Republic, which can use the results as supporting material for the development of its next global data synchronization strategy.
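One concrete example of the kind of master data validation rule applied during item listing and GS1 data synchronization is the GTIN check-digit test. The sketch below is illustrative only (the GTIN values are arbitrary examples, not taken from the thesis) and follows the standard GS1 modulo-10 algorithm:

```python
def gtin_check_digit_valid(gtin: str) -> bool:
    """Validate the check digit of a GTIN-8/12/13/14 using the GS1 algorithm."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    body, check = digits[:-1], digits[-1]
    # Starting from the digit immediately left of the check digit,
    # weights alternate 3, 1, 3, 1, ...
    total = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

print(gtin_check_digit_valid("04012345123456"))  # True: check digit matches
print(gtin_check_digit_valid("04012345123457"))  # False: wrong check digit
```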
13

Product Information Management

Antonov, Anton January 2012 (has links)
Product Information Management (PIM) is a discipline that deals with product master data management and combines the experience and principles of data integration and data quality into a single foundation. PIM merges the specific attributes of products across all channels in the supply chain. By unifying, centralizing and standardizing product information on one platform, timely, high-quality information with added value can be achieved. The goal of the theoretical part of the thesis is to give an overall picture of PIM, place it in a broader context, define and describe the individual parts of a PIM solution, describe the main differences between product data and customer data, and summarize the available information on administering and managing the PIM data quality knowledge bases relevant to solving practical problems. The practical part focuses on designing the structure, the content and the method of populating the knowledge base of a Product Information Management solution in the environment of the DataFlux software tools from SAS Institute. It further includes the analysis of real product data, the design of the knowledge base definitions and objects, the creation of a reference database and the testing of the knowledge base with the help of specially designed web services.
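As a language-agnostic sketch of what a single standardization rule in such a knowledge base might do (the thesis implements its rules with SAS DataFlux; the unit dictionary and product descriptions below are hypothetical), consider normalizing unit-of-measure tokens in product descriptions:

```python
import re

# Hypothetical standardization dictionary: raw unit spellings -> canonical codes
UNIT_MAP = {"grams": "g", "gram": "g", "kilogram": "kg", "kgs": "kg",
            "litre": "l", "liter": "l", "ltr": "l", "millilitre": "ml"}

def standardize_units(description: str) -> str:
    """Replace known unit spellings in a product description with canonical codes."""
    def repl(match: re.Match) -> str:
        return UNIT_MAP.get(match.group(0).lower(), match.group(0))
    pattern = r"\b(" + "|".join(UNIT_MAP) + r")\b"
    return re.sub(pattern, repl, description, flags=re.IGNORECASE)

print(standardize_units("Chocolate Bar 100 Grams"))   # -> Chocolate Bar 100 g
print(standardize_units("Olive Oil 1 Litre Bottle"))  # -> Olive Oil 1 l Bottle
```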
14

MDM of Product Data / MDM produktových dat

Čvančarová, Lenka January 2012 (has links)
This thesis focuses on Master Data Management (MDM) of product data. At present, most publications on MDM concentrate on customer data, and very few sources focus solely on product data. Even the publications that attempt to cover MDM in full depth are typically customer-oriented. The lack of literature oriented towards Product MDM became one of the motivations for this thesis. Another motivation was to outline and analyze the specifics of Product MDM in the context of its implementation and the software requirements it places on a vendor of MDM application software. For this purpose, a methodology for implementing MDM of product data was created and described. The methodology was derived from personal experience on projects focused on MDM of customer data and applied to the findings of the theoretical part of this thesis. By analyzing the characteristics of product data, their impact on MDM implementation and their requirements on application software, the thesis helps vendors of Customer MDM understand the challenges of Product MDM and thus enter the product MDM domain. Moreover, the thesis can serve as an information resource for enterprises considering adopting MDM of product data in their infrastructure.
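One specific of product MDM compared to customer MDM is that product records often lack a single clean identifier, so consolidation into a golden record relies on approximate matching of descriptive attributes. The following sketch is purely illustrative (the product names and the similarity threshold are hypothetical, and the thesis does not prescribe this particular technique):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity between two product names (0.0 - 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pairs = [
    ("Coca-Cola 0.5l PET", "Coca Cola 500 ml PET bottle"),
    ("Coca-Cola 0.5l PET", "Pepsi 0.5l PET"),
]
THRESHOLD = 0.6  # hypothetical cut-off; real projects tune this on sample data
for left, right in pairs:
    score = similarity(left, right)
    verdict = "possible duplicate" if score >= THRESHOLD else "distinct products"
    print(f"{score:.2f}  {verdict}: {left!r} vs {right!r}")
```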
15

Linked Data Quality Assessment and its Application to Societal Progress Measurement

Zaveri, Amrapali 17 April 2015 (has links)
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet such as geographic, media, life sciences and government have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible due to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, thus making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case. There are cases in which datasets containing quality problems are still useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is a particular challenge in LD, as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes quality assessment crucial for measuring how accurately the data represents the real world. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology employs LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems by the automatic creation of an extended schema for DBpedia. The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers of online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (previous assessment by LD experts and assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand and fix the quality issues. Finally, we consider a domain-specific use case that consumes LD and depends on its data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
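As a minimal sketch of how a single completeness-style metric can be computed over an RDF dataset (this is not the TripleCheckMate or R2RLint implementation; the tiny Turtle snippet and namespace are made up for illustration), the share of subjects carrying an rdfs:label can be measured with rdflib:

```python
from rdflib import Graph
from rdflib.namespace import RDFS

# Tiny illustrative dataset; in practice this would be a downloaded DBpedia slice.
TURTLE = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Leipzig  rdfs:label "Leipzig" ; ex:population 600000 .
ex:Dresden  ex:population 550000 .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

subjects = set(g.subjects())
labelled = {s for s in subjects if (s, RDFS.label, None) in g}
completeness = len(labelled) / len(subjects)
print(f"label completeness: {completeness:.2f}")  # 0.50 for the toy data
```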
16

Using DevOps principles to continuously monitor RDF data quality

Meissner, Roy, Junghanns, Kurt 01 August 2017 (has links)
One approach to continuously achieving a certain data quality level is to use an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, which were introduced in software development and DevOps to perform continuous source code checks. By investigating possible tools and discussing the specific requirements of RDF data sets, an integration pipeline is derived that joins current approaches from software development and the Semantic Web and reuses existing tools. As these tools have not been built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefits of using such a pipeline.
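A minimal sketch of such a CI quality gate, assuming a Python step with rdflib rather than the exact tool chain evaluated by the authors: the script parses the data set, computes a simple typing-completeness metric, and returns a non-zero exit code (failing the build) when parsing fails or the metric drops below a hypothetical threshold:

```python
import sys
from rdflib import Graph
from rdflib.namespace import RDF

MIN_TYPED_RATIO = 0.9  # hypothetical quality gate for the CI job

def check(path: str) -> int:
    g = Graph()
    try:
        g.parse(path)  # rdflib guesses the serialization from the file suffix
    except Exception as exc:  # syntax errors fail the build immediately
        print(f"FAIL: {path} could not be parsed: {exc}")
        return 1
    subjects = set(g.subjects())
    typed = {s for s in subjects if (s, RDF.type, None) in g}
    ratio = len(typed) / len(subjects) if subjects else 1.0
    print(f"{path}: {len(g)} triples, typed-subject ratio {ratio:.2f}")
    return 0 if ratio >= MIN_TYPED_RATIO else 1

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```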
17

A novel approach for the improvement of error traceability and data-driven quality predictions in spindle units

Rangaraju, Adithya January 2021 (has links)
Research on the impact of component degradation on the surface quality produced by machine tool spindles is limited, and this gap is the primary motivation for this research. It is common in the manufacturing industry to replace components even if they still have some Remaining Useful Life (RUL), resulting in an ineffective maintenance strategy. The primary objective of this thesis is to design and construct an Exchangeable Spindle Unit (ESU) test stand that captures the failure transition of components during machining and its effects on surface quality. Current machine tools cannot be tested under extreme component degradation, especially of the spindle, since the degrading elements can lead to permanent damage and machine tools are expensive to repair. The ESU substitutes and decouples the machine tool spindle in order to investigate the influence of deteriorated components on the response, so that the machine tool spindle itself is not exposed to the degrading effects. Data-driven quality control is another essential capability that many industries try to implement in their production lines. In a traditional manufacturing scenario, quality inspections are performed at the end of a production line or between processes to check whether the measured parameters are within nominal standards. A significant flaw of this traditional approach is its inability to map component degradation to quality. Condition monitoring techniques can resolve this problem and help identify defects early in production. This research pursues two objectives. The first is to capture component degradation by artificially inducing imbalance in the ESU shaft and recording the excitation behavior during machining with an end mill tool. Imbalance effects are quantified by adding mass to the ESU spindle shaft, and the varying effects of the mass are captured and characterized using vibration signals. The second objective is to establish a correlation between the surface quality of the machined part and the characterized vibration signals using Bagged Ensemble Tree (BET) machine learning models. The results show a good correlation between surface roughness and the accelerometer signals. A comparative study of a balanced and an imbalanced spindle, together with the resulting surface quality, is also presented.
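As a rough sketch of the modelling step (not the thesis's actual pipeline: the feature names, synthetic data and model settings below are assumptions), a bagged-tree regressor from scikit-learn can relate per-pass vibration features to measured surface roughness:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for accelerometer features per machining pass:
# columns ~ [RMS, kurtosis, peak amplitude]; target ~ surface roughness Ra (um)
X = rng.normal(size=(200, 3))
ra = 1.5 + 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, ra, random_state=0)
model = BaggingRegressor(n_estimators=50, random_state=0)  # bagged decision trees
model.fit(X_train, y_train)
print(f"R^2 on held-out passes: {model.score(X_test, y_test):.2f}")
```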
18

Metodika projektů zajištění kvality a testování datových migrací Deloitte ČR / Methodology for data migration quality assurance and testing projects at Deloitte CZ

Pospíšil, Marek January 2011 (has links)
The main purpose of the thesis is to introduce a method for data migration quality assurance, i.e. completeness and accuracy testing of migrated data. The method will become part of the knowledge base of Deloitte Czech Republic for projects in the Enterprise Risk Services department. Data migration quality assurance projects carried out by Deloitte in the Czech Republic have their own specifics. Although the "Systems Development Playbook" methodology exists and includes a data migration methodology, the problem, especially for the Prague branch, is that the procedures and methods for data migration consulting projects, including the specifics of Czech and Slovak projects, are not described in the current methodological documentation. This creates a risk of inconsistent delivery of this type of consulting project if key employees leave. Improvements in procedures and the optimization of human resources engagement in data migration projects cannot be measurably compared across projects if there is no baseline methodology against which specific projects can be measured. The objectives of the work are achieved by consolidating experience from past data migration projects within Deloitte Czech Republic and by designing improvements to existing processes, integrating information from external sources and from internal sources of the global Deloitte Touche Tohmatsu.
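As an illustration of what completeness and accuracy testing of migrated data typically boils down to (the table, column names and values below are hypothetical and not taken from the Deloitte methodology), a source and target extract can be reconciled by row counts and value comparison:

```python
import pandas as pd

# Hypothetical extracts of the same table before and after migration
source = pd.DataFrame({"id": [1, 2, 3, 4], "balance": [100.0, 250.5, 0.0, 75.25]})
target = pd.DataFrame({"id": [1, 2, 3],    "balance": [100.0, 250.5, 0.0]})

# Completeness: every source row must arrive in the target
missing_ids = set(source["id"]) - set(target["id"])
print(f"rows source/target: {len(source)}/{len(target)}, missing ids: {missing_ids}")

# Accuracy: compare values of rows present on both sides
merged = source.merge(target, on="id", suffixes=("_src", "_tgt"))
mismatches = merged[merged["balance_src"] != merged["balance_tgt"]]
print(f"value mismatches: {len(mismatches)}")
```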
19

Datová kvalita v prostředí otevřených a propojitelných dat / Data quality in the context of open and linked data

Tomčová, Lucie January 2014 (has links)
The master thesis deals with data quality in the context of open and linked data. One of its goals is to define the specifics of data quality in this context. These specifics are examined mainly through data quality dimensions (i.e. the data characteristics studied in data quality) and the possibilities of measuring them. The thesis also describes the effect that transforming data to linked data has on data quality; the effect is defined in terms of the possible risks and benefits that can influence data quality. A list of metrics, verified on real data (open linked data published by a government institution), is compiled for the data quality dimensions considered relevant in the context of open and linked data. The thesis points to the need to recognize the differences specific to this context when assessing and managing data quality. At the same time, it offers possibilities for further study and presents directions for both the theoretical and the practical development of the topic.
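A minimal sketch of one measurable dimension often cited for open and linked data, availability, approximated by checking whether resource URIs dereference over HTTP (the URIs below are hypothetical and the metric definition is an assumption, not the thesis's exact formulation):

```python
import urllib.request
from urllib.error import URLError

def dereferenceable(uri: str, timeout: float = 5.0) -> bool:
    """Return True if the URI answers an HTTP HEAD request without an error status."""
    try:
        req = urllib.request.Request(uri, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (URLError, ValueError):
        return False

# Hypothetical sample of resource URIs drawn from a published dataset
uris = ["https://data.gov.cz/resource/1", "https://example.org/does-not-exist"]
available = sum(dereferenceable(u) for u in uris)
print(f"availability: {available}/{len(uris)} URIs dereferenceable")
```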
20

Řešení Business Intelligence / Business Intelligence Solutions

Dzimko, Miroslav January 2017 (has links)
The diploma thesis presents an evaluation of the current state of the company's system, identifying critical areas and areas suitable for improvement. Based on theoretical knowledge and the results of the analysis, a commercial Business Intelligence software solution is designed to enhance the quality and efficiency of the company's decision-support system and to support the introduction of an advanced Quality Culture system. The thesis reveals critical points in the corporate environment and opens up space for designing improvements to the system.
