About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

The Use of Big Data in Process Management: A Literature Study and Survey Investigation

Ephraim, Ekow Esson, Sehic, Sanel January 2021 (has links)
In recent years there has been increasing interest in understanding how organizations can use big data in their process management to create value and improve their processes. This interest stems from new challenges for process management arising from increasing competition and, as a consequence of technological advancement, the growing complexity of large data sets. Scholars describe such data sets as big data: data so complex that traditional data analysis software cannot adequately manage or analyze them. Because handling such volumes of data is so complex, practical examples of organizations that have incorporated big data into their process management are scarce. To help fill this gap and contribute to advancements in the field, this thesis explores how big data can contribute to improved process management. The aim of the thesis was to investigate how, why and to what extent big data is used in process management, and to outline the purposes and challenges of using big data in process management. This was accomplished through a literature review and a survey, respectively, in order to understand how big data has previously been used to create value and improve processes in organizations. From the extensive literature review, an analysis matrix of how big data is used in process management is provided through the intersections between big data and process management dimensions. The analysis matrix showed that most instances in which big data was used in process management fell under process analysis & improvement and process control & agility. Simply put, organizations used big data in specific activities within process management, but not in a holistic manner. Furthermore, the limited findings from the survey indicate that the main challenge and the main purpose of big data use in Swedish organizations are, respectively, the complexity of handling data and making statistically better decisions.
262

Amélioration de la qualité des données : correction sémantique des anomalies inter-colonnes / Improved data quality : correction of semantic inter-column anomalies

Zaidi, Houda 01 February 2017 (has links)
Data quality is a major challenge within an organization and strongly influences the quality of its services and its profitability. The presence of erroneous data therefore raises significant concerns about this quality. This report addresses the problem of improving data quality in very large data sets. Our approach consists in helping the user to better understand the schemas of the data being handled and to define the actions to be carried out on them. We address several concepts such as data anomalies within a single column and anomalies between columns related to functional dependencies. In this context we propose several means of remedying these defects, with particular attention to the performance of the resulting processing. / Data quality represents a major challenge because the cost of anomalies can be very high, especially for large databases in enterprises that need to exchange information between systems and integrate large amounts of data. Decision making based on erroneous data has a bad influence on the activities of organizations. The quantity of data continues to increase, as do the risks of anomalies. The automatic correction of these anomalies is a topic that is becoming more important both in business and in the academic world. In this report, we propose an approach to better understand the semantics and the structure of the data. Our approach helps to automatically correct intra-column anomalies as well as inter-column ones. We aim to improve the quality of data by processing null values and the semantic dependencies between columns.
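The abstract does not reproduce the thesis's correction algorithms; as a rough illustration of what detecting an inter-column anomaly tied to a functional dependency can look like, the following Python sketch flags rows whose determinant value maps to more than one dependent value (the table, column names and data are hypothetical):

```python
import pandas as pd

# Hypothetical data: "zip_code" should functionally determine "city",
# i.e. each zip code must map to exactly one city.
df = pd.DataFrame({
    "zip_code": ["75001", "75001", "69001", "69001"],
    "city":     ["Paris", "Paris", "Lyon",  "Lion"],   # "Lion" is an anomaly
})

def fd_violations(df: pd.DataFrame, determinant: str, dependent: str) -> pd.DataFrame:
    """Return rows whose determinant value maps to more than one dependent value."""
    counts = df.groupby(determinant)[dependent].nunique()
    violating_keys = counts[counts > 1].index
    return df[df[determinant].isin(violating_keys)]

print(fd_violations(df, "zip_code", "city"))
# Both "69001" rows are reported, exposing the "Lion" typo.
```

Rows reported this way would then be candidates for the kind of semantic correction the thesis describes.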
263

Cancer reporting: timeliness analysis and process reengineering

Jabour, Abdulrahman M. 09 November 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Introduction: Cancer registries collect tumor-related data to monitor incidence rates and support population-based research. A common concern with using population-based registry data for research is reporting timeliness. Data timeliness has been recognized as an important data characteristic by both the Centers for Disease Control and Prevention (CDC) and the Institute of Medicine (IOM). Yet few recent studies in the United States (U.S.) have systematically measured timeliness. The goal of this research is to evaluate the quality of cancer data and examine methods by which the reporting process can be improved. The study aims are: (1) to evaluate the timeliness of cancer cases at the Indiana State Department of Health (ISDH) Cancer Registry, (2) to identify the perceived barriers and facilitators to timely reporting, and (3) to reengineer the current reporting process to improve turnaround time. Method: For Aim 1, using the ISDH dataset from 2000 to 2009, we evaluated reporting timeliness and the subtasks within the process cycle. For Aim 2, certified cancer registrars reporting to the ISDH were invited to a semi-structured interview. The interviews were recorded and qualitatively analyzed. For Aim 3, we designed a reengineered workflow to reduce reporting time and tested it using simulation. Result: The results show variation in the mean reporting time, which ranged from 426 days in 2003 to 252 days in 2009. The barriers identified were categorized into six themes, and the most common barrier was accessing medical records at external facilities. We also found that cases reside for a few months in the local hospital database while waiting for treatment data to become available. The recommended workflow focuses on leveraging a health information exchange for data access and adding a notification system to inform registrars when new treatment data are available.
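The abstract reports mean reporting times of 426 days (2003) and 252 days (2009); a minimal sketch of how such turnaround statistics could be computed from a registry extract is shown below (the field names and dates are hypothetical, not the ISDH schema):

```python
import pandas as pd

# Hypothetical registry extract: one row per reported case, with the
# date of diagnosis and the date the report reached the central registry.
cases = pd.DataFrame({
    "diagnosis_date": pd.to_datetime(["2003-02-10", "2003-05-01", "2009-03-15"]),
    "received_date":  pd.to_datetime(["2004-04-20", "2004-06-30", "2009-11-22"]),
})

cases["turnaround_days"] = (cases["received_date"] - cases["diagnosis_date"]).dt.days
mean_by_year = cases.groupby(cases["diagnosis_date"].dt.year)["turnaround_days"].mean()
print(mean_by_year)  # mean reporting delay per diagnosis year
```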
264

Examining Opioid-related Overdose Events in Dayton, OH using Police, Emergency Medical Services and Coroner’s Data

Pan, Yuhan 06 October 2020 (has links)
No description available.
265

THE PERCEIVED AND REAL VALUE OF HEALTH INFORMATION EXCHANGE IN PUBLIC HEALTH SURVEILLANCE

Dixon, Brian Edward 22 August 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Public health agencies protect the health and safety of populations. A key function of public health agencies is surveillance: the ongoing, systematic collection, analysis, interpretation, and dissemination of data about health-related events. Recent public health events, such as the H1N1 outbreak, have triggered increased funding for and attention towards the improvement and sustainability of public health agencies' capacity for surveillance activities. For example, provisions in the final U.S. Centers for Medicare and Medicaid Services (CMS) "meaningful use" criteria ask that physicians and hospitals report surveillance data to public health agencies using electronic laboratory reporting (ELR) and syndromic surveillance functionalities within electronic health record (EHR) systems. Health information exchange (HIE), the organized exchange of clinical and financial health data among a network of trusted entities, may be a path towards achieving meaningful use and enhancing the nation's public health surveillance infrastructure. Yet the evidence on the value of HIE, especially in the context of public health surveillance, is sparse. In this research, the value of HIE to the process of public health surveillance is explored. Specifically, the study describes the real and perceived completeness and usefulness of HIE in public health surveillance activities. To explore the real value of HIE, the study examined ELR data from two states, comparing raw, unedited data sent from hospitals and laboratories to data enhanced by an HIE. To explore the perceived value of HIE, the study examined public health, infection control, and HIE professionals' perceptions of public health surveillance data and information flows, comparing traditional flows to HIE-enabled ones. Together these methods, along with the existing literature, triangulate the value that HIE does and can provide to public health surveillance processes. The study further describes remaining gaps that future research and development projects should explore. The data collected in the study show that public health surveillance activities vary dramatically, encompassing a wide range of paper and electronic methods for receiving and analyzing population health trends. Few public health agencies currently utilize HIE-enabled processes for performing surveillance activities, relying instead on direct reporting of information from hospitals, physicians, and laboratories. Generally, HIE is perceived well among public health and infection control professionals, and many of these professionals feel that HIE can improve surveillance methods and population health. Human and financial resource constraints prevent additional public health agencies from participating in burgeoning HIE initiatives. For those agencies that do participate, real value is being added by HIEs. Specifically, HIEs are improving the completeness and semantic interoperability of ELR messages sent from clinical information systems. New investments, policies, and approaches will be necessary to increase public health utilization of HIEs while improving HIEs' capacity to deliver greater value to public health surveillance processes.
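The study's actual completeness measures are not given in this abstract; as an illustration of the kind of comparison it describes, raw versus HIE-enhanced ELR data, the following sketch computes per-field completeness for two hypothetical message extracts (field names and values are illustrative):

```python
import pandas as pd

# Hypothetical ELR extracts: the same messages before and after HIE enhancement.
raw = pd.DataFrame({
    "patient_race":   [None, "White", None],
    "provider_phone": [None, None, "317-555-0100"],
})
enhanced = pd.DataFrame({
    "patient_race":   ["Black", "White", None],
    "provider_phone": ["317-555-0199", None, "317-555-0100"],
})

def completeness(df: pd.DataFrame) -> pd.Series:
    """Share of non-missing values per field."""
    return df.notna().mean()

comparison = pd.DataFrame({"raw": completeness(raw), "hie_enhanced": completeness(enhanced)})
print(comparison)  # per-field completeness before vs. after HIE enhancement
```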
266

Evaluating Data Quality in a Data Warehouse Environment / Utvärdering av datakvalitet i ett datalager

Redgert, Rebecca January 2017 (has links)
The amount of data accumulated by organizations has grown significantly during the last couple of years, increasing the importance of data quality. Ensuring data quality for large amounts of data is a complicated task, but crucial to subsequent analysis. This study investigates how to maintain and improve data quality in a data warehouse. A case study of the errors in a data warehouse was conducted at the Swedish company Kaplan, and resulted in guiding principles on how to improve the data quality. The investigation was done by manually comparing data from the source systems to the data integrated in the data warehouse and applying a quality framework based on semiotic theory to identify errors. The three main guiding principles given are (1) to implement a standardized format for the source data, (2) to implement a check prior to integration where the source data are reviewed and corrected if necessary, and (3) to create and implement specific database integrity rules. Further work is encouraged on establishing a guide to the framework for how best to perform a manual comparison of data, and on quality assurance of source data. / The amount of data accumulated by organizations has increased significantly in recent years, which has increased the importance of data quality. Ensuring data quality for large amounts of data is a complicated task, but crucial for subsequent analysis. This study investigates how to maintain and improve data quality in a data warehouse. A case study of errors in a data warehouse at the Swedish company Kaplan was conducted and resulted in guidelines for how data quality can be improved. The investigation was carried out by manually comparing data from the source systems with the data integrated into the data warehouse and by applying a quality framework based on semiotic theory in order to identify errors. The three main guidelines given are to (1) implement a standardized format for the source data, (2) carry out a check before integration where the source data are reviewed and corrected if necessary, and (3) create and implement specific database integrity rules. Further research is encouraged to create a guide to the framework for how best to compare data through a manual examination, and on quality assurance of source data.
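As an illustration of guiding principle (2), a check prior to integration, the following sketch validates a hypothetical source extract against a standardized date format and a not-null rule before it would be loaded into the warehouse (the rules and column names are illustrative, not Kaplan's actual ones):

```python
import pandas as pd

# Hypothetical source extract to be checked before loading into the warehouse.
source = pd.DataFrame({
    "customer_id": ["C001", "C002", None],
    "order_date":  ["2017-03-01", "01/03/2017", "2017-03-05"],  # mixed formats
})

def pre_integration_check(df: pd.DataFrame) -> pd.DataFrame:
    """Return a per-row report of rule violations to review before loading."""
    report = pd.DataFrame(index=df.index)
    report["missing_customer_id"] = df["customer_id"].isna()
    report["bad_date_format"] = ~df["order_date"].fillna("").str.match(r"^\d{4}-\d{2}-\d{2}$")
    report["row_ok"] = ~report.any(axis=1)
    return report

print(pre_integration_check(source))
# Rows flagged here would be corrected (or rejected) before integration.
```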
267

Mining Vehicle Classifications from Archived Loop Detector Data

Huang, Bo January 2014 (has links)
No description available.
268

Calibration of the Liquid Argon Calorimeter and Search for Stopped Long-Lived Particles

Morgenstern, Stefanie 06 May 2021 (has links)
This thesis dives into the three main aspects of today's experimental high energy physics: detector operation and data preparation, reconstruction and identification of physics objects, and physics analysis. The symbiosis of these is the key to reaching a better understanding of the underlying principles of nature. Data from proton-proton collisions at a centre-of-mass energy of 13 TeV collected by the ATLAS detector during 2015-2018 are used. In the context of detector operation and data preparation, the data quality assessment for the Liquid Argon calorimeter of the ATLAS experiment is improved by adaptive monitoring of noisy channels and mini noise bursts, allowing an assessment of their impact on the measured data at an early stage. Besides data integrity, a precise energy calibration of electrons, positrons and photons is essential for many physics analyses and requires an excellent understanding of the detector. Corrections for detector non-uniformities originating from gaps between the Liquid Argon calorimeter modules and from non-nominal high-voltage settings are derived and successfully recover the homogeneity of the energy measurement. A further enhancement is reached by introducing the azimuthal position of the electromagnetic cluster into the calibration algorithm. Additionally, a novel approach is presented that exploits tracking information in the historically purely calorimeter-based energy calibration for electrons and positrons. Taking the track momentum into account yields about 30% better energy resolution for low-pT electrons and positrons. The described optimisation of the energy calibration is especially beneficial for precision measurements, which are one way to test and challenge our current knowledge of the Standard Model of particle physics. Another path is the hunt for new particles, represented here by a search for stopped long-lived particles suggested by many theoretical models. This analysis targets gluinos which are sufficiently long-lived to form quasi-stable states and come to rest in the detector. Their eventual decay results in large energy deposits in the calorimeters. The special nature of the expected signature requires the exploration of non-standard datasets and reconstruction methods. Furthermore, non-collision backgrounds, which are dominant for this search, are investigated in detail. In the context of simplified supersymmetric models, an expected signal sensitivity of more than 3σ is achieved for gluinos with a mass up to 1.2 TeV and a lifetime of 100 μs.
269

DATA INTEGRITY IN THE HEALTHCARE INDUSTRY: ANALYZING THE EFFECTIVENESS OF DATA SECURITY IN GOOD DATA AND RECORD MANAGEMENT PRACTICES (A CASE STUDY OF COMPUTERIZING THE COMPETENCE MATRIX FOR A QUALITY CONTROL DRUG LABORATORY)

Marcel C Okezue (12522565) 06 October 2022 (has links)
This project analyzes the concept of time efficiency in the data management process associated with personnel training and competence assessments in the quality control (QC) laboratory of Nigeria's food and drug authority (NAFDAC). The laboratory administrators are encumbered with a great deal of mental and paper-based record keeping because the personnel training data is managed manually. Consequently, the personnel training and competence assessments in the laboratory are not efficiently conducted. The Microsoft Excel spreadsheet provided by an earlier Purdue doctoral dissertation as a remedy for this is found to be deficient in handling operations on database tables. As a result, that dissertation did not appropriately address the inefficiencies.

The problem addressed by this study is the operational inefficiency that results from the manual or Excel-based personnel training data management process in the NAFDAC laboratory. The purpose, therefore, is to reduce the time it takes to generate, obtain, manipulate, exchange, and securely store the personnel competence training and assessment data. To do this, the study developed a software system integrated with a relational database management system (RDBMS) to improve the manual/Microsoft Excel-based data management procedure. This project examines the operational (time) efficiency of using the manual or Excel-based format in comparison with the new system that this project developed, as a method to ascertain its validity.

The data used in this qualitative research come from literature sources and from simulating the difference between the times spent administering personnel training and competence assessment using the new system developed by this study and the Excel-based system from the earlier project, respectively. The fundamental finding of this study is that the idea of improving the operational (time) efficiency of the personnel training and competence assessment process in the QC laboratory is valid. Doing so will reduce human errors, achieve more time-efficient operation, and improve the personnel training and competence assessment processes.

Recommendations are made as to the procedure the laboratory administrator must adopt to take advantage of the new system. The study also recommends steps for potential research to extend the capability of this project.
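The project's actual database design is not included in this abstract; a minimal sketch of what a relational replacement for an Excel-based competence matrix could look like, using SQLite for brevity, is shown below (all table and column names are hypothetical):

```python
import sqlite3

# A minimal, hypothetical relational schema replacing an Excel-based
# competence matrix: analysts, training events, and competence assessments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE analyst (
    analyst_id   INTEGER PRIMARY KEY,
    full_name    TEXT NOT NULL
);
CREATE TABLE training (
    training_id  INTEGER PRIMARY KEY,
    analyst_id   INTEGER NOT NULL REFERENCES analyst(analyst_id),
    topic        TEXT NOT NULL,
    completed_on DATE NOT NULL
);
CREATE TABLE assessment (
    assessment_id INTEGER PRIMARY KEY,
    training_id   INTEGER NOT NULL REFERENCES training(training_id),
    result        TEXT CHECK (result IN ('competent', 'needs retraining')),
    assessed_on   DATE NOT NULL
);
""")

conn.execute("INSERT INTO analyst VALUES (1, 'A. Analyst')")
conn.execute("INSERT INTO training VALUES (1, 1, 'HPLC assay', '2022-05-01')")
conn.execute("INSERT INTO assessment VALUES (1, 1, 'competent', '2022-05-15')")

# The competence matrix becomes a query instead of manual bookkeeping.
for row in conn.execute("""
    SELECT a.full_name, t.topic, s.result
    FROM analyst a JOIN training t USING (analyst_id)
    JOIN assessment s USING (training_id)
"""):
    print(row)
```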
270

Data governance in big data: How to improve data quality in a decentralized organization / Datastyrning och big data

Landelius, Cecilia January 2021 (has links)
The use of the internet has increased the amount of data available and gathered. Companies are investing in big data analytics to gain insights from this data. However, the value of the analysis, and of the decisions made based on it, depends on the quality of the underlying data. For this reason, data quality has become a prevalent issue for organizations. Additionally, failures in data quality management are often due to organizational aspects. Due to the growing popularity of decentralized organizational structures, there is a need to understand how a decentralized organization can improve data quality. This thesis conducts a qualitative single case study of an organization in the logistics industry that is currently shifting towards becoming data driven and is struggling with maintaining data quality. The purpose of the thesis is to answer the questions: • RQ1: What is data quality in the context of logistics data? • RQ2: What are the obstacles to improving data quality in a decentralized organization? • RQ3: How can these obstacles be overcome? Several data quality dimensions were identified and categorized as critical issues, issues and non-issues. From the gathered data, the dimensions completeness, accuracy and consistency were found to be critical issues of data quality. The three most prevalent obstacles to improving data quality were data ownership, data standardization and understanding the importance of data quality. To overcome these obstacles, the most important measures are creating data ownership structures, implementing data quality practices and changing the mindset of the employees to a data-driven mindset. The generalizability of a single case study is low. However, there are insights and trends which can be derived from the results of this thesis and used for further studies and by companies undergoing similar transformations. / The increased use of the internet has increased the amount of data that is available and collected. Companies are therefore launching initiatives to analyze these large amounts of data in order to gain a better understanding. However, the value of the analysis, and of the decisions based on it, depends on the quality of the underlying data. For this reason, data quality has become an important issue for companies. Failures in data quality management are often due to organizational aspects. As decentralized organizational forms become increasingly popular, there is a need to understand how a decentralized organization can work with issues such as data quality and its improvement. This thesis is a qualitative study of a company in the logistics industry that is currently undergoing a shift towards becoming data driven and has problems maintaining its data quality. The purpose of this thesis is to answer the questions: • RQ1: What is data quality in the context of logistics data? • RQ2: What are the obstacles to improving data quality in a decentralized organization? • RQ3: How can these obstacles be overcome? Several data quality dimensions were identified and categorized as critical issues, issues and non-issues. From the collected information, the dimensions completeness, accuracy and consistency were found to be critical data quality problems for the company. The three most common obstacles to improving data quality were data ownership, data standardization and understanding the importance of data quality. To overcome these obstacles, the most important measures are to create structures for data ownership, to implement practices for managing data quality, and to change the employees' attitude towards data quality to a data-driven one. The generalizability of a single case study is low. However, this study provides several important insights and trends that can be used for future studies and for companies undergoing similar transformations.
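As an illustration of the consistency dimension the thesis identifies as critical, the following sketch compares the same hypothetical shipment records held in two decentralized source systems and reports where they disagree (system and field names are illustrative):

```python
import pandas as pd

# Hypothetical logistics records for the same shipments, held in two
# decentralized source systems; consistency means the systems agree.
system_a = pd.DataFrame({
    "shipment_id": [1, 2, 3],
    "weight_kg":   [120.0, 80.5, 33.0],
}).set_index("shipment_id")
system_b = pd.DataFrame({
    "shipment_id": [1, 2, 3],
    "weight_kg":   [120.0, 81.0, 33.0],
}).set_index("shipment_id")

merged = system_a.join(system_b, lsuffix="_a", rsuffix="_b")
merged["consistent"] = merged["weight_kg_a"] == merged["weight_kg_b"]

print("consistency rate:", merged["consistent"].mean())
print(merged[~merged["consistent"]])  # shipments the two systems disagree on
```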
