Assessment of the quality of acute flaccid paralysis surveillance data in the World Health Organization African Region. Shaba, Keith, January 2012.
Magister Public Health - MPH / Poliomyelitis (polio) is an infectious disease of high public health importance. In 1988, the World Health Organization (WHO) set the goal of worldwide polio eradication through the Global Polio Eradication Initiative (GPEI). A three-year period of zero indigenous wild poliovirus in all countries, in the presence of high-quality acute flaccid paralysis (AFP) surveillance, is the basis of an independent commission's determination of when a WHO region or a country can be certified as polio free. AFP surveillance, one of the critical elements of the polio eradication campaign, aims to report and investigate all cases of acute flaccid paralysis occurring in children aged less than 15 years using clinical, epidemiological and laboratory methods. The information collected is cleaned, entered into a database, and maintained in EPI Info format at the WHO country office of each of the 46 countries, at the three sub-regional offices or Inter-country Support Team (IST) offices, and at the WHO African Regional Office. In addition, sixteen polio laboratories in various African countries maintain records of the laboratory findings and results of confirmed polio cases. The quality of the data generated through AFP surveillance and maintained in the African regional database has not been critically and systematically reviewed and documented. This study was therefore designed to gather information on and document the quality of the AFP database, a key component of the global polio eradication effort. A cross-sectional descriptive study involving the retrospective review of clinical and laboratory databases of AFP surveillance over a five-year period (2004 - 2008) was designed. In this study, databases of case investigation forms (CIFs) containing clinical and laboratory data from AFP cases reported from all 46 countries of the WHO African Region, comprising 57,619 clinical and 59,843 laboratory records, were critically reviewed.
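A review of this kind implies concrete record-level checks for completeness, consistency and timeliness. The following is a minimal sketch in Python with pandas of what such checks could look like; the file names and column names (epid_number, onset_date, stool1_date, age_years) are hypothetical, since the actual databases are held in EPI Info format and their schema is not given in the abstract. The 14-day stool-collection window reflects a commonly used AFP surveillance performance indicator.

```python
import pandas as pd

# Hypothetical file and column names; the real AFP databases are maintained in
# EPI Info format and their exact schema is not shown in the abstract.
clinical = pd.read_csv("afp_clinical.csv", parse_dates=["onset_date", "stool1_date"])
lab = pd.read_csv("afp_lab.csv")

checks = {
    # Core eligibility rule: AFP surveillance targets children under 15 years.
    "age_under_15": (clinical["age_years"] < 15).mean(),
    # Timeliness indicator: first stool specimen within 14 days of paralysis onset.
    "stool_within_14d": (
        (clinical["stool1_date"] - clinical["onset_date"]).dt.days <= 14
    ).mean(),
    # Completeness: no missing EPID identifier, which links clinical and lab records.
    "epid_present": clinical["epid_number"].notna().mean(),
    # Consistency: share of clinical cases with a matching laboratory record.
    "lab_record_matched": clinical["epid_number"].isin(lab["epid_number"]).mean(),
}

for name, proportion in checks.items():
    print(f"{name}: {proportion:.1%}")
```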
Data quality assurance for strategic decision making in Abu Dhabi's public organisations. Alketbi, Omar, January 2014.
Data quality is an important aspect of an organisation's strategies for supporting decision makers in reaching the best decisions possible and consequently attaining the organisation's objectives. In the case of public organisations, decisions ultimately concern the public, and hence further diligence is required to make sure that these decisions, for instance, preserve economic resources, maintain public health, and provide national security. The decision-making process requires a wealth of information in order to achieve efficient results. Public organisations typically acquire great amounts of data generated by public services. However, the vast amount of data stored in public organisations' databases may be one of the main reasons for inefficient decisions made by public organisations. Processing vast amounts of data and extracting accurate information are not easy tasks. Although technology helps in this respect, for example through decision support systems, it is not sufficient for improving decisions to a significant level of assurance. The research proposed using data mining to improve the results obtained by decision support systems; however, more than the merely technological aspects needs to be considered. The research argues that a complete data quality framework is needed in order to improve data quality and consequently the decision-making process in public organisations. A series of surveys conducted in seven public organisations in the Abu Dhabi Emirate of the United Arab Emirates contributed to the design of a data quality framework. The framework comprises seven elements, ranging from technical to human-based, found necessary to attain the quality of data reaching decision makers in public organisations, taking Abu Dhabi's public organisations as the case. The interaction and integration of these elements contribute to the quality of data reaching decision makers and hence to the efficiency of the decisions made by public organisations. The framework suggests that public organisations may need to adopt a methodological basis to support the decision-making process, including more training courses and supportive bodies within the organisational units, such as decision support centres, information security and strategic management. The framework also underscores the importance of acknowledging the human and cultural factors involved in the decision-making process, since such factors have implications for how training and awareness raising are implemented to lead to effective methods of system development.
A Model for Managing Data Integrity. Mallur, Vikram, January 2011.
Consistent, accurate and timely data are essential to the functioning of a modern organization. Managing the integrity of an organization's data assets in a systematic manner is a challenging task in the face of continuous update, transformation and processing to support business operations. Classic approaches to constraint-based integrity focus on logical consistency within a database and reject any transaction that violates consistency, but leave unresolved how to fix or manage violations. More ad hoc approaches focus on the accuracy of the data and attempt to clean data assets after the fact, using queries to flag records with potential violations and manual effort to repair them. Neither approach satisfactorily addresses the problem from an organizational point of view.
In this thesis, we provide a conceptual model of constraint-based integrity management (CBIM) that flexibly combines both approaches in a systematic manner to provide improved integrity management. We perform a gap analysis that examines the criteria that are desirable for efficient management of data integrity. Our approach involves creating a Data Integrity Zone and an On Deck Zone in the database for separating the clean data from data that violates integrity constraints. We provide tool support for specifying constraints in a tabular form and generating triggers that flag violations of dependencies. We validate this by performing case studies on two systems used to manage healthcare data: PAL-IS and iMED-Learn. Our case studies show that using views to implement the zones does not cause any significant increase in the running time of a process.
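A minimal sketch, using Python's built-in sqlite3 module, of the zoning idea described above: a trigger flags rows that violate an illustrative constraint instead of rejecting them, and two views expose the clean rows (Data Integrity Zone) and the flagged rows (On Deck Zone). The table, constraint, trigger and view definitions are invented for illustration and are not taken from PAL-IS, iMED-Learn, or the thesis's tool.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    id INTEGER PRIMARY KEY,
    name TEXT,
    birth_year INTEGER,
    violation TEXT DEFAULT NULL
);

-- Instead of rejecting a row that violates the illustrative constraint
-- "birth_year must not lie in the future" (year hard-coded here), a trigger
-- only flags it, so the data is kept and routed to the On Deck Zone for repair.
CREATE TRIGGER flag_birth_year AFTER INSERT ON patient
WHEN NEW.birth_year > 2024
BEGIN
    UPDATE patient SET violation = 'birth_year in the future' WHERE id = NEW.id;
END;

-- Data Integrity Zone: rows with no flagged violations.
CREATE VIEW integrity_zone AS SELECT * FROM patient WHERE violation IS NULL;
-- On Deck Zone: rows awaiting manual or automated repair.
CREATE VIEW on_deck_zone AS SELECT * FROM patient WHERE violation IS NOT NULL;
""")

conn.execute("INSERT INTO patient (name, birth_year) VALUES ('A', 1999), ('B', 2037)")
print(list(conn.execute("SELECT name FROM integrity_zone")))  # [('A',)]
print(list(conn.execute("SELECT name FROM on_deck_zone")))    # [('B',)]
```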
Postup zavádění Data Governance / Data Governance Implementing Method. Slouková, Anna, January 2008.
This thesis addresses the topic of Data Governance and the way of implementing such a program. It is logically divided into two parts -- a theoretical and a practical one. The theoretical part, represented by the first chapter, summarises current findings about the Data Governance program: it explains what lies behind the term Data Governance and the reasons for the emergence of Data Governance initiatives, and it itemizes the particular parts of which the program is composed and the basic, mostly software, tools that are necessary for a successful program run. The practical part consists of the second and third chapters. The second chapter enumerates the various types of outputs that arise either during the implementation of the program or during its operation. It categorizes and deals in detail with processes and activities, the organizational structure of the program, documents, the metrics and KPIs used, and IS/IT tools. The third chapter describes in detail the process of implementing the program in an enterprise. It is divided into four consecutive phases -- assessment of the current state, design, implementation, and operation of the program. For each phase, the inputs, outputs, a detailed decomposition into particular activities with references to the document templates used during these activities, risks, and resources are introduced. Two attachments to the thesis provide helpful documents -- a general document template and a role description template -- that support better implementation of a Data Governance program.
Datová kvalita, integrita a konsolidace dat v BI / Data Quality, Data Integrity and Data Consolidation in BI. Smolík, Ondřej, January 2008.
This thesis deals with data quality in business intelligence. We present basic principles for building a data warehouse so as to achieve the highest data quality. We also present data cleansing methods such as deviation detection and name-and-address cleansing. The work also deals with the origins of erroneous data and the prevention of their generation. In the second part of the thesis we demonstrate the presented methods and principles on a real example of a data warehouse and suggest how to obtain sales data from business partners and customers.
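As a small illustration of the deviation detection mentioned above, the following Python sketch flags sales amounts that lie far outside the typical range using a modified z-score based on the median and median absolute deviation; the threshold and the example data are assumptions, not taken from the thesis.

```python
import statistics

def flag_deviations(values, threshold=3.5):
    """Return indices of values whose modified z-score exceeds the threshold.

    Uses the median and median absolute deviation (MAD), which are less
    sensitive to the outliers being detected than the mean and stddev.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

sales = [120.0, 115.5, 130.2, 118.9, 9999.0, 125.4]
print(flag_deviations(sales))  # -> [4], the suspicious 9999.0 entry
```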
Koncept zavedení Data Governance / Data governance implementation concept. Ullrichová, Jana, January 2016.
This master's thesis discusses a concept for implementing data governance. The theoretical part covers data governance: it explains why data are important for a company and describes definitions of data governance, its history, its components, its principles and processes, and how it fits within a company. The theoretical part is supplemented with examples of data governance failures and banking-specific considerations. The main goal of the thesis is to create a concept for implementing data governance and to apply it in a real company, which is what the practical part consists of.
Metódy monitorovania kvality dát spracovávaných systémami pre podporu rozhodovania / Data Quality Monitoring Methods Applied to Data Processed by Decision Support Systems. Hološková, Kristína, January 2011.
Business data can be considered the raw material for the decision-making process, for the development of corporate strategies, and for the overall running of the business. Therefore, adequate attention should be paid to the quality of this data. The main goal of the diploma thesis is the elaboration of a specific framework for data quality assurance that combines three theoretical concepts: time series analysis, data screening and data profiling -- business-specific data profiles are monitored by data screening during the data warehouse ETL (extract, transform and load) process, and the results are then compared with the values predicted by time series analysis. Achieving this goal is based on an analysis of "data quality" in the literature, an exact problem definition, and the selection of appropriate means for its solution. Moreover, the thesis analyses alternative solutions available on the market and compares their functionality with that of the proposed framework.
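A minimal sketch, in Python, of how such screening can combine a monitored data profile with a prediction: the row count of the current ETL load is compared against the values observed at the same point of a weekly cycle, and the load is flagged when it falls outside a tolerance band. The chosen profile (row count), the naive seasonal forecast and the tolerance are illustrative assumptions rather than the thesis's specific design.

```python
from statistics import mean, stdev

def screen_load(history, observed, season=7, tolerance=3.0):
    """Flag a data-profile value that deviates from its seasonal expectation.

    history  -- past profile values (e.g. daily row counts), oldest first;
                assumed to contain at least `season` values
    observed -- the value measured during the current ETL load
    season   -- length of the expected cycle (7 = weekly pattern for daily loads)
    """
    # Values observed at the same phase of the cycle (e.g. the same weekday).
    same_phase = history[-season::-season]
    expected = mean(same_phase)
    # Guard against a zero spread when the history is perfectly regular.
    spread = max(stdev(same_phase) if len(same_phase) > 1 else 0.0, 0.05 * expected)
    return abs(observed - expected) > tolerance * spread

# Four weeks of daily row counts; the load volume dips at the weekend.
history = [1000, 1010, 990, 1005, 995, 400, 380] * 4
print(screen_load(history, observed=1020))  # False: consistent with the same weekday
print(screen_load(history, observed=250))   # True: suspiciously small load
```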
Measuring Core Outcomes from Metabolic Chart-Abstracted Data for Medium-Chain Acyl-CoA Dehydrogenase (MCAD) Deficiency. Iverson, Ryan, 01 December 2020.
Background: Generating evidence to inform care for pediatric medium-chain acyl-CoA dehydrogenase (MCAD) deficiency requires sustainable and integrated measurement of priority outcomes. Methods: From an existing Canadian cohort study, we evaluated the quality of metabolic clinic chart-abstracted data for measuring core outcomes for pediatric MCAD deficiency. We then modelled variation in emergency department (ED) use in association with disease severity, child age, and distance to care. Results: Children with MCAD deficiency visit the metabolic clinic at least annually on average, but we identified data quality challenges related to inconsistent definitions of core outcomes and missing information in patient charts. Rates of ED use were highest among children aged 6 to 12 months, with more severe disease, and living closest to care. Conclusion: While measuring core outcomes through the metabolic clinic for children with MCAD deficiency is feasible, harmonized data collection is needed to evaluate care and further understand ED use.
Multivariate Time-Series Data Requirements in Deep Learning Models. Challa, Harshitha, 01 October 2021.
No description available.
Quality Assurance of RDB2RDF Mappings. Westphal, Patrick, 27 February 2018.
Today, the Web of Data has evolved into a semantic information network containing large amounts of data. Since such data may stem from different sources, ranging from automatic extraction processes to extensively curated knowledge bases, its quality also varies. Thus, current research efforts aim to find methodologies and approaches to measure data quality in the Web of Data. Besides the option to consider the actual data in a quality assessment, taking the process of data generation into account is another possibility, especially for extracted data. An extraction approach that has gained popularity in recent years is the mapping of relational databases to RDF (RDB2RDF). By providing definitions of how RDF should be generated from relational database content, huge amounts of data can be extracted automatically. Unfortunately, this also means that single errors in the mapping definitions can affect a considerable portion of the generated data. Thus, from a quality assurance point of view, the assessment of these RDB2RDF mapping definitions is important to guarantee high-quality RDF data. This aspect is not covered in depth by recent quality research and is examined in this thesis. After a structured evaluation of existing approaches, a quality assessment methodology and quality dimensions of importance for RDB2RDF mappings are proposed. The formalization of this methodology is used to define 43 metrics to characterize the quality of an RDB2RDF mapping project. These metrics are also implemented in a software prototype of the proposed methodology, which is used in a practical evaluation of three different datasets generated with the RDB2RDF approach.
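A small sketch of one plausible mapping-level metric of the kind such a methodology can define: the share of predicates used in a (highly simplified) representation of an R2RML mapping that are actually defined in the target vocabularies. The data structures, names, and the metric itself are illustrative assumptions, not the thesis's formal definitions.

```python
# Simplified stand-in for an R2RML mapping: each triples map lists the
# predicates its predicate-object maps generate.
mapping = {
    "#PersonMap":  ["foaf:name", "foaf:age", "ex:shoeSize"],
    "#AddressMap": ["vcard:locality", "vcard:postal-code"],
}

# Predicates known to the target vocabularies (normally read from the ontologies;
# hard-coded here for illustration).
vocabulary = {"foaf:name", "foaf:age", "vcard:locality", "vcard:postal-code"}

def vocabulary_conformance(mapping, vocabulary):
    """Fraction of mapped predicates that are defined in the target vocabulary."""
    used = [p for predicates in mapping.values() for p in predicates]
    defined = sum(1 for p in used if p in vocabulary)
    return defined / len(used) if used else 1.0

print(f"vocabulary conformance: {vocabulary_conformance(mapping, vocabulary):.2f}")
# -> 0.80: one of five predicates (ex:shoeSize) is not defined in the vocabularies
```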