61

Electronic Health Record (EHR) Data Quality and Type 2 Diabetes Mellitus Care

Wiley, Kevin Keith, Jr. 06 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Due to frequent utilization, high costs, high prevalence, and negative health outcomes, the care of patients managing type 2 diabetes mellitus (T2DM) remains an important focus for providers, payers, and policymakers. The challenges of care delivery, including care fragmentation, reliance on patient self-management behaviors, adherence to care management plans, and frequent medical visits are well-documented in the literature. T2DM management produces numerous clinical data points in the electronic health record (EHR) including laboratory test values and self-reported behaviors. Recency or absence of these data may limit providers’ ability to make effective treatment decisions for care management. Increasingly, the context in which these data are being generated is changing. Specifically, telehealth usage is increasing. Adoption and use of telehealth for outpatient care is part of a broader trend to provide care at-a-distance, which was further accelerated by the COVID-19 pandemic. Despite unknown implications for patients managing T2DM, providers are increasingly using telehealth tools to complement traditional disease management programs and have adapted documentation practices for virtual care settings. Evidence suggests the quality of data documented during telehealth visits differs from that which is documented during traditional in-person visits. EHR data of differential quality could have cascading negative effects on patient healthcare outcomes. The purpose of this dissertation is to examine whether and to what extent levels of EHR data quality are associated with healthcare outcomes and if EHR data quality is improved by using health information technologies. This dissertation includes three studies: 1) a cross-sectional analysis that quantifies the extent to which EHR data are timely, complete, and uniform among patients managing T2DM with and without a history of telehealth use; 2) a panel analysis to examine associations between primary care laboratory test ages (timeliness) and subsequent inpatient hospitalizations and emergency department admissions; and 3) a panel analysis to examine associations between patient portal use and EHR data timeliness.
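As a rough illustration of how the timeliness and completeness measures described in this abstract might be operationalized (the field names, values, and reference date below are hypothetical, not drawn from the dissertation), one could compute the age of each patient's most recent HbA1c result and the share of populated clinical fields:

```python
# Hypothetical sketch: timeliness and completeness of T2DM lab data in an EHR extract.
# Column names (patient_id, hba1c_date, hba1c_value, ldl_value) are illustrative only.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "hba1c_date": ["2021-01-15", "2021-07-20", "2020-03-02", None],
    "hba1c_value": [7.1, 6.8, 8.4, None],
    "ldl_value": [110.0, None, 95.0, None],
})
ehr["hba1c_date"] = pd.to_datetime(ehr["hba1c_date"])

as_of = pd.Timestamp("2021-12-31")

# Timeliness: age (in days) of the most recent HbA1c result per patient.
latest = ehr.groupby("patient_id")["hba1c_date"].max()
lab_age_days = (as_of - latest).dt.days
print(lab_age_days)

# Completeness: share of non-missing values for each clinical field.
completeness = ehr[["hba1c_value", "ldl_value"]].notna().mean()
print(completeness)
```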
62

Response Latency in Survey Research: A Strategy for Detection and Avoidance of Satisficing Behaviors

Wanich, Wipada 23 September 2010 (has links)
No description available.
63

Effects of questionnaire and fieldwork characteristics on call outcome rates and data quality in a monthly CATI survey

Marton, Krisztina 20 July 2004 (has links)
No description available.
64

The impact of hospital command centre on patient flow and data quality: findings from the UK NHS

Mebrahtu, T.F., McInerney, C.D., Benn, J., McCrorie, C., Granger, J., Lawton, T., Sheikh, N., Habli, I., Randell, Rebecca, Johnson, O.A. 20 September 2023 (has links)
Yes / In the last six years, hospitals in developed countries have been trialling the use of command centres for improving organisational efficiency and patient care. However, the impact of these command centres has not been systematically studied. Methods: This is a retrospective population-based study. Participants were patients who visited the accident and emergency (A&E) department of Bradford Royal Infirmary Hospital between January 1, 2018 and August 31, 2021. Outcomes were patient flow (measured as A&E waiting time, length of stay and clinician-seen time) and data quality (measured by the proportion of missing treatment and assessment dates and valid transitions between A&E care stages). Interrupted time-series segmented regression and process mining were used for analysis. Results: A&E transition time from patient arrival to assessment by a clinician marginally improved during the intervention period; there was a decrease of 0.9 minutes (95% CI: 0.35 to 1.4), 3 minutes (95% CI: 2.4 to 3.5), 9.7 minutes (95% CI: 8.4 to 11.0) and 3.1 minutes (95% CI: 2.7 to 3.5) during the ‘patient flow program’, ‘command centre display roll-in’, ‘command centre activation’ and ‘hospital-wide training program’ periods, respectively. However, the transition time from patient treatment until conclusion of consultation increased by 11.5 minutes (95% CI: 9.2 to 13.9), 12.3 minutes (95% CI: 8.7 to 15.9), 53.4 minutes (95% CI: 48.1 to 58.7) and 50.2 minutes (95% CI: 47.5 to 52.9) over the respective four post-intervention periods. Further, length of stay was not significantly affected; the change was -8.8 hours (95% CI: -17.6 to 0.08), -8.9 hours (95% CI: -18.6 to 0.65), -1.67 hours (95% CI: -10.3 to 6.9) and -0.54 hours (95% CI: -13.9 to 12.8) during the four respective post-intervention periods. A similar pattern was observed for waiting and clinician-seen times. Data quality, measured by the proportion of missing dates in records, was generally poor (treatment date = 42.7%, clinician-seen date = 23.4%) and did not significantly improve during the intervention periods. Conclusion: The findings of the study suggest that a command centre package that includes process change and software technology does not appear to have a consistent positive impact on patient safety and data quality based on the indicators and data we used. Therefore, hospitals considering introducing a command centre should not assume there will be benefits in patient flow and data quality. / This project is funded by the National Institute for Health Research Health Service and Delivery Research Programme (NIHR129483).
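For readers unfamiliar with the interrupted time-series segmented regression named in this abstract, the sketch below shows the general form of such a model on simulated weekly data with a single intervention point; it is illustrative only and does not reproduce the study's actual model, which covered several intervention phases:

```python
# Minimal sketch of an interrupted time-series (segmented) regression:
# outcome ~ intercept + pre-trend + level change at intervention + slope change after it.
# The data below are simulated for illustration; they are not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_weeks, intervention = 104, 52
t = np.arange(n_weeks)
post = (t >= intervention).astype(float)            # level-change indicator
t_since = np.where(post == 1, t - intervention, 0)  # slope change after intervention

# Simulated outcome, e.g. mean weekly A&E waiting time in minutes.
y = 120 + 0.1 * t + 15 * post + 0.3 * t_since + rng.normal(0, 5, n_weeks)

X = np.column_stack([np.ones(n_weeks), t, post, t_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
labels = ["intercept", "pre-trend/week", "level change", "slope change/week"]
for name, b in zip(labels, coef):
    print(f"{name}: {b:.2f}")
```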
65

INFORMATION SYSTEM CONTEXTUAL DATA QUALITY: A CASE STUDY

Davenport, Daniel Lee 01 January 2006 (has links)
This dissertation describes a case study comparing the effectiveness of two information systems that assess the quality of surgical care, the National Surgical Quality Improvement Program (NSQIP) and the University HealthSystem Consortium Clinical Database (UHCCD). For the comparison, it develops a framework for assessing contextual data quality (CDQ) from the decision maker's perspective. The differences between the quality assessment systems studied are posited to be due to the differing contexts in which the data is encoded, transformed and managed, which affect data quality for the purpose of surgical quality assessment. Healthcare spending in the United States has risen faster than the rate of inflation for over a decade and currently stands at about fifteen percent of the Gross Domestic Product. This has brought enormous pressure on the healthcare industry to reduce costs while maintaining or improving quality. Numerous systems to measure healthcare quality have been, and are being, developed, including the two being studied. A more precise understanding of the differences between these two systems' effectiveness in assessing surgical healthcare quality informs national decisions regarding hospital accreditation and quality-based reimbursements to hospitals. The CDQ framework elaborated here is also applicable to executive information systems, data warehouses, web portals, and other information systems that draw information from disparate systems. Decision makers increasingly have data available from across functional and hierarchical areas within organizations, and data quality issues have been identified in these systems that are unrelated to the performance of the systems from which the data comes. The propositions explored and substantiated here are that workgroup context influences data selection and definition, the data entry and encoding process, managerial control and feedback, and data transformation in information systems. These processes in turn influence contextual data quality relative to a particular decision model. The study is a cross-sectional retrospective review of archival quality data gathered on 26,322 surgical patients at the University of Kentucky Hospital, along with interviews of process owners in each system. The quality data include patient risk/severity factors and outcome data recorded in the National Surgical Quality Improvement Program (NSQIP) database and the University HealthSystem Consortium Clinical Database (UHCCD).
66

An assessment of data quality in routine health information systems in Oyo State, Nigeria

Adejumo, Adedapo January 2017 (has links)
Magister Public Health - MPH / Ensuring that routine health information systems provide good-quality information for informed decision making and planning remains a major priority in several countries and health systems. The lack of use of health information, or the use of poor-quality data, in health care results in inadequate assessment and evaluation of care and in weak, poorly functioning health systems. The Nigerian health system, like those of many developing countries, has challenges across the building blocks of the health system, including a weak health information system. Although the quality of data in the Nigerian routine health information system has been deemed poor in some reports and studies, there is little research-based evidence on the current state of data quality in the country or on the factors that may influence data quality in routine health information systems. This study explored the quality of routine health information generated at health facilities in Oyo State, Nigeria, providing an account of the state of data quality of the routine health information. The study was a cross-sectional descriptive study taking a retrospective look at paper-based and electronic data records in the National Health Management Information System in Nigeria. A mixed-methods approach was used, with quantitative methods to assess the quality of data within the health information system and qualitative methods to identify factors influencing the quality of health information at the health facilities in the district. Assessment of the quality of information was done using a structured evaluation tool examining the completeness, accuracy and consistency of routine health statistics generated at these health facilities. A multistage sampling method was used in the quantitative component of the research. For the qualitative component, purposive sampling was used to select respondents from each health facility to describe the factors influencing data quality. The study found incomplete and inaccurate data in facility paper summaries as well as in the electronic databases storing aggregate information from the facility data.
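To make concrete what checks of completeness, accuracy and consistency can look like in practice (the indicator names and values below are invented, not drawn from the Oyo State study), a structured evaluation might compare a facility's paper summary against the aggregated electronic record roughly as follows:

```python
# Hypothetical sketch of simple data-quality checks on routine facility reports.
# Indicator names and values are illustrative, not from the study.
paper_summary = {"opd_visits": 420, "anc_visits": 85, "deliveries": None}
electronic_db = {"opd_visits": 398, "anc_visits": 85, "deliveries": 31}

# Completeness: proportion of expected indicators actually reported on paper.
expected = list(paper_summary)
completeness = sum(v is not None for v in paper_summary.values()) / len(expected)

# Accuracy/consistency proxy: ratio of the paper value to the electronic value
# for indicators present in both sources (a simple verification ratio).
for field in expected:
    p, e = paper_summary[field], electronic_db[field]
    if p is None or e is None or e == 0:
        print(f"{field}: cannot verify")
    else:
        print(f"{field}: paper/electronic ratio = {p / e:.2f}")

print(f"completeness = {completeness:.0%}")
```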
67

Benchmark nástrojů pro řízení datové kvality / Data Quality Tools Benchmark

Černý, Jan January 2013 (has links)
Companies all around the world are wasting funds due to poor data quality, and as the volume of processed data increases, the volume of erroneous data increases too. This diploma thesis explains what data quality is, what causes data quality errors, what the impact of poor data is, and how data quality can be measured. If you can measure it, you can improve it, and this is where data quality tools come in. Some vendors offer commercial data quality tools, while others offer open-source solutions. Comparing DataCleaner (an open-source tool) with DataFlux (a commercial tool) against defined criteria, this diploma thesis shows that the two tools can be considered equal in terms of data profiling, data enhancement and data monitoring. DataFlux is slightly better at standardization and data validation. Data deduplication is not included in the tested version of DataCleaner, although DataCleaner's vendor claimed it should be. One of the biggest obstacles to companies buying data quality tools could be their price. At this moment, DataCleaner can be considered an inexpensive option for companies looking for a data profiling tool. If Human Inference added data deduplication to DataCleaner, it could also be considered an inexpensive solution covering the whole data quality process.
68

  • Big Data Governance

Blahová, Leontýna January 2016 (has links)
This master thesis is about Big Data Governance and about the software used for this purpose. Because Big Data is both a huge opportunity and a risk, I wanted to map products that can easily be used for Data Quality and Big Data Governance on one platform. The thesis is not only a theoretical overview; it also evaluates five key products (from my point of view). I defined requirements for each domain and then set up weights and points. The main objective is to evaluate the capabilities of the software products and compare them.
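A weighted-criteria evaluation of this kind can be expressed very simply; the sketch below uses invented criteria, weights and scores purely to illustrate the mechanics, not the thesis's actual requirements or results:

```python
# Illustrative weighted scoring of data governance tools.
# Criteria, weights, tool names and scores are made up for demonstration.
weights = {"data_quality": 0.4, "metadata_mgmt": 0.3, "scalability": 0.2, "price": 0.1}

scores = {
    "Tool A": {"data_quality": 8, "metadata_mgmt": 6, "scalability": 9, "price": 5},
    "Tool B": {"data_quality": 7, "metadata_mgmt": 8, "scalability": 6, "price": 8},
}

def weighted_score(tool_scores: dict, weights: dict) -> float:
    """Weighted sum of criterion scores on a 0-10 scale."""
    return sum(weights[c] * tool_scores[c] for c in weights)

for tool, s in scores.items():
    print(f"{tool}: {weighted_score(s, weights):.2f}")
```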
69

Design and implementation of a workflow for quality improvement of the metadata of scientific publications

Wolff, Stefan 07 November 2023 (has links)
In this paper, a detailed workflow for analyzing and improving the quality of metadata of scientific publications is presented and tested. The workflow was developed based on approaches from the literature. Frequently occurring types of errors from the literature were compiled and mapped to the data-quality dimensions most relevant for publication data – completeness, correctness, and consistency – and made measurable. Based on the identified data errors, a process for improving data quality was developed. This process includes parsing hidden data, correcting incorrectly formatted attribute values, enriching with external data, carrying out deduplication, and filtering erroneous records. The effectiveness of the workflow was confirmed in an exemplary application to publication data from Open Researcher and Contributor ID (ORCID), with 56% of the identified data errors corrected. The workflow will be applied to publication data from other source systems in the future to further increase its performance.
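As a rough sketch of the kinds of steps such a workflow involves — correcting malformed attribute values and collapsing duplicate records — the example below normalizes DOIs and deduplicates by a normalized title key; the records and rules are hypothetical and far simpler than the workflow described in the paper:

```python
# Hypothetical sketch of two workflow steps: fixing malformed attribute values
# (here, DOIs) and deduplicating publication records by a normalized title key.
import re

records = [
    {"title": "Data Quality in Research  Metadata", "doi": "https://doi.org/10.1000/XYZ123"},
    {"title": "data quality in research metadata",  "doi": "10.1000/xyz123"},
    {"title": "An Unrelated Paper",                 "doi": None},
]

def normalize_doi(doi):
    """Strip resolver prefixes and lowercase the DOI."""
    if not doi:
        return None
    return re.sub(r"^https?://(dx\.)?doi\.org/", "", doi.strip()).lower()

def title_key(title):
    """Lowercase, collapse whitespace and drop punctuation for duplicate matching."""
    return re.sub(r"[^a-z0-9 ]", "", re.sub(r"\s+", " ", title.lower())).strip()

deduped = {}
for rec in records:
    rec["doi"] = normalize_doi(rec["doi"])
    deduped.setdefault(title_key(rec["title"]), rec)  # keep the first record per key

print(list(deduped.values()))
```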
70

Datová kvalita, integrita a konsolidace dat v BI / Data quality, data integrity and consolidation of data in BI

Dražil, Michal January 2008 (has links)
This thesis deals with the areas of enterprise data quality, data integrity and data consolidation from the perspective of Business Intelligence (BI), which is currently experiencing significant growth. The aim of this thesis is to provide a comprehensive view of data quality in terms of BI, to analyze problems in the area of data quality control and to propose options for addressing them. Moreover, the thesis aims to analyze and assess the features of specialized software tools for data quality. Last but not least, the thesis aims to identify the critical success factors in the field of data quality in CRM and BI projects. The thesis is divided into two parts. The first (theoretical) part deals with data quality, data integrity and consolidation of data in relation to BI, seeking to identify the key issues related to these areas. The second (practical) part first covers the features of software tools for data quality, offering a fundamental summary and a breakdown of the tools. This part also provides a basic comparison of a few selected software products specializing in corporate data quality assurance. The practical part then describes how data quality was addressed within a specific BI/CRM project conducted by Clever Decision Ltd. This thesis is intended primarily for BI and data quality experts, as well as others interested in these disciplines. The main contribution of this thesis is that it provides a comprehensive view not only of data quality itself but also of the issues directly related to corporate data quality assurance. The thesis can serve as guidance for one of the first implementation phases in BI projects, which deals with data integration, data consolidation and solving problems in the area of data quality.
