271

Calibration of the Liquid Argon Calorimeter and Search for Stopped Long-Lived Particles

Morgenstern, Stefanie 06 May 2021 (has links)
This thesis dives into the three main aspects of today's experimental high-energy physics: detector operation and data preparation, reconstruction and identification of physics objects, and physics analysis. The symbiosis of these is the key to reaching a better understanding of the underlying principles of nature. Data from proton-proton collisions at a centre-of-mass energy of 13 TeV collected by the ATLAS detector during 2015-2018 are used. In the context of detector operation and data preparation, the data quality assessment for the Liquid Argon calorimeter of the ATLAS experiment is improved by adaptive monitoring of noisy channels and mini noise bursts, allowing an assessment of their impact on the measured data at an early stage. Besides data integrity, a precise energy calibration of electrons, positrons and photons is essential for many physics analyses and requires an excellent understanding of the detector. Corrections for detector non-uniformities originating from gaps between the Liquid Argon calorimeter modules and from non-nominal high-voltage settings are derived and successfully recover the homogeneity of the energy measurement. A further enhancement is achieved by introducing the azimuthal position of the electromagnetic cluster into the calibration algorithm. Additionally, a novel approach is presented that exploits tracking information in the historically purely calorimeter-based energy calibration for electrons and positrons. Taking the track momentum into account yields roughly 30% better energy resolution for low-pT electrons and positrons. The described optimisation of the energy calibration is especially beneficial for precision measurements, which are one way to test and challenge our current knowledge of the Standard Model of particle physics. Another path is the hunt for new particles, here represented by a search for stopped long-lived particles, which are suggested by many theoretical models. This analysis targets gluinos that are sufficiently long-lived to form quasi-stable states and come to rest in the detector. Their eventual decay results in large energy deposits in the calorimeters. The special nature of the expected signature requires the exploration of non-standard datasets and reconstruction methods. Furthermore, non-collision backgrounds, which dominate this search, are investigated in detail. In the context of simplified supersymmetric models, an expected signal sensitivity of more than 3σ is achieved for gluinos with a mass up to 1.2 TeV and a lifetime of 100 μs.
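To make the track-calorimeter combination concrete, here is a minimal sketch of an inverse-variance-weighted combination of a calorimeter energy with a track momentum measurement. The numbers and resolution values are invented placeholders, not ATLAS parameters, and the thesis's actual calibration algorithm is more sophisticated than this:

    import numpy as np

    def combine_energy(e_calo, sigma_calo, p_track, sigma_track):
        """Inverse-variance-weighted combination of two estimates of the
        same electron energy (for an electron, E ~ p to good approximation)."""
        w_calo = 1.0 / sigma_calo**2
        w_track = 1.0 / sigma_track**2
        e_comb = (w_calo * e_calo + w_track * p_track) / (w_calo + w_track)
        sigma_comb = np.sqrt(1.0 / (w_calo + w_track))
        return e_comb, sigma_comb

    # Placeholder values: at low energy the calorimeter resolution degrades,
    # so adding a comparable track measurement shrinks the combined uncertainty.
    print(combine_energy(e_calo=10.0, sigma_calo=1.0, p_track=9.5, sigma_track=1.2))

The combined uncertainty is always below the smaller of the two inputs, which is the mechanism behind the resolution gain the abstract reports.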
272

DATA INTEGRITY IN THE HEALTHCARE INDUSTRY: ANALYZING THE EFFECTIVENESS OF DATA SECURITY IN GOOD DATA AND RECORD MANAGEMENT PRACTICES (A CASE STUDY OF COMPUTERIZING THE COMPETENCE MATRIX FOR A QUALITY CONTROL DRUG LABORATORY)

Marcel C Okezue (12522565) 06 October 2022 (has links)
This project analyzes the concept of time efficiency in the data management process associated with personnel training and competence assessments in the quality control (QC) laboratory of Nigeria's food and drug authority (NAFDAC). The laboratory administrators are encumbered with a great deal of mental and paper-based record keeping because the personnel training data is managed manually. Consequently, the personnel training and competence assessments in the laboratory are not conducted efficiently. The Microsoft Excel spreadsheet provided by an earlier Purdue doctoral dissertation as a remedy is found to be deficient in handling operations on database tables; as a result, that dissertation did not adequately address the inefficiencies.

The problem addressed by this study is the operational inefficiency that results from the manual or Excel-based personnel training data management process in the NAFDAC laboratory. The purpose, therefore, is to reduce the time it takes to generate, obtain, manipulate, exchange, and securely store the personnel competence training and assessment data. To do this, the study developed a software system integrated with a relational database management system (RDBMS) to improve on the manual/Microsoft Excel-based data management procedure. To ascertain its validity, the project compares the operational (time) efficiency of the manual and Excel-based formats with that of the new system.

The data used in this qualitative research come from literary sources and from simulating the difference between the times spent administering personnel training and competence assessment using the new system developed by this study and the Excel system from the earlier project, respectively. The fundamental finding of this study is that the idea of improving the operational (time) efficiency of the personnel training and competence assessment process in the QC laboratory is valid: doing so reduces human errors, achieves more time-efficient operation, and improves the personnel training and competence assessment processes.

Recommendations are made as to the procedure the laboratory administrator should adopt to take advantage of the new system. The study also recommends steps by which future research could extend the capability of this project.
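As a hedged illustration of the kind of RDBMS-backed replacement the abstract describes, the sketch below defines a minimal training-records schema in SQLite; the table and column names are invented for illustration and are not taken from the project itself:

    import sqlite3

    # Minimal illustrative schema for personnel training/competence records.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE analyst (
        analyst_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );
    CREATE TABLE training (
        training_id INTEGER PRIMARY KEY,
        analyst_id  INTEGER NOT NULL REFERENCES analyst(analyst_id),
        topic       TEXT NOT NULL,
        completed   DATE,
        competence  TEXT CHECK (competence IN ('trainee', 'qualified', 'expert'))
    );
    """)
    conn.execute("INSERT INTO analyst VALUES (1, 'A. Analyst')")
    conn.execute("INSERT INTO training VALUES (1, 1, 'HPLC assay', '2022-01-15', 'qualified')")

    # A query like this replaces manually scanning paper records or Excel sheets.
    for row in conn.execute("""
        SELECT a.name, t.topic, t.competence
        FROM training t JOIN analyst a USING (analyst_id)
        WHERE t.competence = 'qualified'
    """):
        print(row)

Unlike a flat spreadsheet, the foreign-key relationship lets the competence matrix be queried, filtered and updated per analyst without duplicating records.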
273

Data governance in big data : How to improve data quality in a decentralized organization / Datastyrning och big data

Landelius, Cecilia January 2021 (has links)
The use of the internet has increased the amount of data available and gathered. Companies are investing in big data analytics to gain insights from this data. However, the value of the analysis, and of the decisions made based on it, depends on the quality of the underlying data. For this reason, data quality has become a prevalent issue for organizations. Additionally, failures in data quality management are often due to organizational aspects. Given the growing popularity of decentralized organizational structures, there is a need to understand how a decentralized organization can improve data quality. This thesis conducts a qualitative single case study of an organization in the logistics industry that is currently shifting towards becoming data driven and struggling with maintaining data quality. The purpose of the thesis is to answer the questions: • RQ1: What is data quality in the context of logistics data? • RQ2: What are the obstacles to improving data quality in a decentralized organization? • RQ3: How can these obstacles be overcome? Several data quality dimensions were identified and categorized as critical issues, issues and non-issues. From the gathered data, the dimensions completeness, accuracy and consistency were found to be critical data quality issues. The three most prevalent obstacles to improving data quality were data ownership, data standardization and understanding the importance of data quality. To overcome these obstacles, the most important measures are creating data ownership structures, implementing data quality practices and shifting employees towards a data-driven mindset. The generalizability of a single case study is low; however, the insights and trends derived from the results of this thesis can be used for further studies and by companies undergoing similar transformations.
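To make the three critical dimensions concrete, here is a minimal sketch of completeness, accuracy and consistency checks over tabular records; the field names, rules and data are invented examples, not the case company's actual data:

    records = [
        {"shipment_id": "S1", "weight_kg": 12.0, "origin": "SE", "destination": "DE"},
        {"shipment_id": "S2", "weight_kg": None, "origin": "SE", "destination": "SE"},
        {"shipment_id": "S3", "weight_kg": -4.0, "origin": "??", "destination": "NO"},
    ]

    def completeness(rows, field):
        # Share of records where the field is actually populated.
        return sum(r[field] is not None for r in rows) / len(rows)

    def accuracy(rows):
        # Share of records whose values satisfy a domain rule (weight must be positive).
        return sum(r["weight_kg"] is not None and r["weight_kg"] > 0 for r in rows) / len(rows)

    def consistency(rows, valid_codes=frozenset({"SE", "DE", "NO"})):
        # Share of records whose country codes come from one agreed code list.
        return sum(r["origin"] in valid_codes and r["destination"] in valid_codes
                   for r in rows) / len(rows)

    print(completeness(records, "weight_kg"), accuracy(records), consistency(records))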
274

Data Quality Knowledge in Sport Informatics: A Scoping Review

Kremser, Wolfgang 14 October 2022 (has links)
As sport informatics research produces more and more digital data, effective data quality management becomes a necessity. This systematic scoping review investigates how data quality is currently understood in the field. The results show the lack of a common data quality model. Combining data quality approaches from related fields such as Ambient Assisted Living and eHealth could be a first step toward a data quality model for sport informatics.
275

The District Health Information System (DHIS) as a support mechanism for data quality improvement in Waterberg District, Limpopo: an exploration of staff experiences

Sibuyi, Idon Nkhenso 11 May 2015 (has links)
The purpose of this study was to explore and describe staff experiences in managing data and/or information when utilising the District Health Information System (DHIS) as a support mechanism for data quality improvement, including the strengths and weaknesses of current data management processes. The study also aimed to identify key barriers and to make recommendations on how data management can be strengthened. Key informants included in this study were those based at the district office (health programme managers and information officers) and at the primary health care (PHC) facilities (facility managers, clinical nurse practitioners and data capturers). An exploratory, descriptive and generic qualitative study was conducted. Consent was requested from each participant. Data were collected through semi-structured interviews. The study findings highlighted strengths, weaknesses and key barriers as experienced by the staff. Strengths, such as having data capturers and DHIS software at most if not all facilities, were highlighted. The weaknesses and key barriers highlighted were shortages of both clinical and health management information system (HMIS) staff, shortages of resources such as computers and Internet access, poor feedback, training needs and data quality issues. Most of the weaknesses and key barriers called for further and proper implementation of the District Health Management Information System (DHMIS) policy, the standard operating procedures (SOP) and the eHealth strategy, as well as staff training, given the reported gaps between policy and the reality and/or practice at the facilities. / Health Studies / M.A. (Public Health with specialisation in Medical Informatics)
276

Tagungsband zum 20. Interuniversitären Doktorandenseminar Wirtschaftsinformatik

25 January 2017 (has links) (PDF)
The inter-university PhD seminar Business Information Systems ("Interuniversitäres Doktorandenseminar Wirtschaftsinformatik") is an annual one-day event organized by the Business Information Systems chairs of the universities of Chemnitz, Dresden, Freiberg, Halle, Ilmenau, Jena and Leipzig. It serves as a platform for PhD students to present their PhD topic and the current status of their thesis, and it is therefore a good opportunity to gain further knowledge and inspiration from the feedback and questions of the participating professors and students. Beyond that, the seminar enables academic discourse on current topics and emerging trends in business information systems research, across the boundaries of each chair's own focus areas. The 20th Interuniversitäres Doktorandenseminar Wirtschaftsinformatik took place in Chemnitz in October 2016. The resulting proceedings include five selected articles within the following topic areas: service engineering, cloud computing, business process management, requirements engineering, and analytics and data quality. They illustrate the relevance as well as the broad range of topics in current business information systems research. In case of questions or comments, please use the contact details at the end of the articles.
277

Social networks, community-based development and empirical methodologies

Caeyers, Bet Helena January 2014 (has links)
This thesis consists of two parts: Part I (Chapters 2 and 3) critically assesses a set of methodological tools that are widely used in the literature and that are applied to the empirical analysis in Part II (Chapters 4 and 5). Using a randomised experiment, the first chapter compares pen-and-paper interviewing (PAPI) with computer-assisted personal interviewing (CAPI). We observe a large error count in PAPI, which is likely to introduce sample bias. We examine the effect of PAPI consumption measurement error on poverty analysis and compare both applications in terms of interview length, costs and respondents' perceptions. Next, we formalise a previously unproven source of ordinary least squares (OLS) estimation bias in standard linear-in-means peer effects models. Deriving a formula for the magnitude of the bias, we discuss its underlying parameters. We show when the bias is aggravated in models that add cluster fixed effects and how it affects inference and interpretation of estimation results. We show that two-stage least squares (2SLS) estimation strategies eliminate the bias and provide illustrative simulations. The results may explain some counter-intuitive findings in the social interaction literature. We then use the linear-in-means model to estimate endogenous peer effects on vulnerable groups' awareness of a community-based development programme in rural Tanzania. We take the set of geographically nearest neighbours as the relevant peer group in this context and employ a popular 2SLS estimation strategy on a unique spatial household dataset, collected using CAPI, to identify significant average and heterogeneous endogenous peer effects. The final chapter investigates social network effects in decentralised food aid (free food and food-for-work) allocation processes in Ethiopia in the aftermath of a serious drought. We find that food aid is responsive to need, as well as being targeted at households with less access to informal support. However, we also find strong correlations with political connections, especially in the immediate aftermath of the drought.
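For readers who do not know the specification by name, a standard linear-in-means peer effects model (our notation; the thesis's exact specification may differ) can be written as

    \[
    y_{ig} = \alpha + \beta\,\bar{y}_{-i,g} + \gamma' x_{ig} + \delta' \bar{x}_{-i,g} + \varepsilon_{ig},
    \qquad
    \bar{y}_{-i,g} = \frac{1}{n_g - 1} \sum_{j \in g,\, j \neq i} y_{jg},
    \]

where y_{ig} is the outcome of individual i in peer group g and β captures the endogenous peer effect. Because the peer mean depends on every group member's error term, OLS estimates of β are mechanically biased; instrumenting the peer mean with peers' exogenous characteristics, as in the 2SLS strategies the thesis discusses, removes this source of bias.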
278

La démographie des centenaires québécois : validation des âges au décès, mesure de la mortalité et composante familiale de la longévité

Beaudry-Godin, Mélissa 06 1900 (has links)
The recent rise in the number of centenarians in low-mortality countries has led to multiple studies of longevity, and more specifically of its determinants and repercussions. Some researchers are trying to identify genes that could be responsible for extreme longevity; others are studying the social, economic and political impact of population aging and the rise in life expectancy, or asking whether there is a biological limit to the human life span. In this thesis, we first study the demographic situation of centenarians in Quebec since the beginning of the 20th century using aggregated data (census data, vital statistics, and population estimates). Then, we evaluate the quality of Quebec data at the oldest ages using a nominative list of the death records of centenarians belonging to the 1870-1894 birth cohorts. We are particularly interested in mortality trajectories beyond 100 years of age. Finally, we analyze the survival of the siblings and parents of a sample of semi-supercentenarians (aged 105 and over) born between 1890 and 1900 in order to assess the familial component of longevity. The thesis is divided into three articles. In the first article, we study the evolution of the centenarian population in Quebec from the 1920s onward. With demographic indicators such as the centenarian ratio, survival probabilities and the mean maximal age at death, we demonstrate the remarkable progress realised in old-age survival. We also decompose the factors responsible for the increase in the number of centenarians in Quebec; among the factors identified, the improvement in the probability of survival from age 80 to 100 is the main determinant. The second article deals with the validation of the ages at death of French-Canadian Catholic centenarians of the 1870-1894 birth cohorts, born and deceased in Quebec. The validation results confirm that Quebec data at the highest ages are of excellent quality; mortality trajectories of centenarians based on the raw data are therefore representative of the true trends. The evolution of the age-specific probabilities of death beyond 100 years of age confirms the deceleration of mortality at the highest ages: among both men and women, the probabilities of death reach a plateau at around 45%. Finally, in the third article, we study the familial component of longevity. We compare the survival of the siblings and parents of semi-supercentenarians deceased between 1995 and 2004 to that of their birth-cohort-matched counterparts. The survival differences between the siblings and parents of the semi-supercentenarians under observation and their respective "control" cohorts are statistically significant at the 0.01% level. Moreover, the siblings and parents of semi-supercentenarians have between 1.7 (sisters) and 3 (mothers) times greater probability of reaching age 90 than members of their corresponding birth cohorts. These analyses leave little doubt that longevity is concentrated within certain families.
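In symbols, the familial comparison above amounts to a relative risk of the form (our notation, not the thesis's)

    \[
    RR = \frac{P(X \geq 90 \mid \text{sibling or parent of a semi-supercentenarian})}
              {P(X \geq 90 \mid \text{member of the matched birth cohort})},
    \]

for which the thesis reports values between roughly 1.7 (sisters) and 3 (mothers).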
279

Approches bio-informatiques appliquées aux technologies émergentes en génomique

Lemieux Perreault, Louis-Philippe 02 1900 (has links)
Genetic studies, such as linkage and association studies, have contributed greatly to a better understanding of the etiology of several diseases affecting human populations. Nonetheless, despite the tens of thousands of genetic studies performed to date on hundreds of diseases and other traits, a large part of their heritability remains unexplained. The last decade saw unprecedented progress in genomics. For example, the use of microarrays for high-density comparative genomic hybridization has demonstrated the existence of large-scale copy number variations and polymorphisms. These are now detectable using DNA microarrays or high-throughput sequencing. In addition, high-throughput sequencing has shown that the majority of variations in the exome are rare or unique to the individual. This has led to the design of a new type of DNA microarray, enriched for rare variants, that allows thousands of such variants to be genotyped quickly and inexpensively for a large set of individuals at once. In this context, the general objective of this thesis is the development of new methodologies and high-performance bioinformatics tools for the detection, at the highest quality standards, of copy number polymorphisms and rare single nucleotide variations in genetic studies. In the long term, these advances should account for more of the missing heritability of complex traits, contributing to the advancement of knowledge of their etiology. We developed an algorithm for the partitioning of copy number polymorphisms, making it feasible to use these structural variations in genetic linkage studies with family data. We also conducted an extensive exploratory study, in collaboration with the Wellcome Trust Centre for Human Genetics of the University of Oxford, to characterize rare copy number definition metrics and their impact on the results of studies of unrelated individuals. We then thoroughly compared the performance of genotyping algorithms when used with a new DNA microarray composed of a majority of very rare genetic variants. Finally, we implemented a bioinformatics tool for the fast and efficient filtering of genetic data, which yields higher-quality data and better reproducibility of results while reducing the chance of spurious associations.
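As a hedged sketch of the kind of quality-control filtering such a tool performs, the example below drops variants with a low genotype call rate or a very low minor allele frequency; the thresholds, data layout and variant names are illustrative assumptions, not the actual tool's interface:

    # Each variant maps to genotype calls (0/1/2 allele counts) per sample;
    # None marks a failed call.
    variants = {
        "rs0001": [0, 1, 2, 1, None, 0],
        "rs0002": [None, None, 1, None, 0, None],
        "rs0003": [0, 0, 0, 0, 0, 1],
    }

    def call_rate(calls):
        return sum(c is not None for c in calls) / len(calls)

    def minor_allele_freq(calls):
        obs = [c for c in calls if c is not None]
        p = sum(obs) / (2 * len(obs))  # frequency of the counted allele
        return min(p, 1 - p)

    # Keep variants that pass both filters (illustrative thresholds).
    kept = {v: g for v, g in variants.items()
            if call_rate(g) >= 0.95 and minor_allele_freq(g) >= 0.01}
    print(sorted(kept))  # only variants passing both thresholds remain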