211

A model based approach for determining data quality metrics in combustion pressure measurement. A study into a quantitative based improvement in data quality

Rogers, David R. January 2014 (has links)
This thesis details a process for developing reliable metrics to assess the quality of combustion pressure measurement data - important data used in the development of internal combustion engines. The approach employed was a model-based technique used in conjunction with a simulation environment, producing data-based models from a number of strategically defined measurement points. The simulation environment was used to generate error data sets, from which models of the calculated-result responses were built. These data were then analysed to determine the results with the best response to error stimulation. The methodology developed allows a rapid prototyping phase in which newly developed result calculations can be simulated, tested and evaluated quickly and efficiently. Adopting these processes and procedures allowed an effective evaluation of several groups of result classifications with respect to the major sources of error encountered in typical combustion measurement procedures. In summary, this work showed that certain result groups have an unreliable response to error simulation and can therefore be discounted quickly. These results were clearly identifiable from the data, and for the given errors, alternative methods to identify the error sources are proposed within this thesis. Other results had a predictable response to certain error stimuli, so it was feasible to consider using these results in data quality assessment, or at least to establish boundaries around their application for this purpose. Interactions in responses were also clearly visible using the proposed model-based sensitivity analysis. The output of this work provides a solid foundation from which further investigation is feasible, towards the ultimate goal of a full set of metrics with which combustion data quality can be accurately and objectively assessed.
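To make the core idea concrete, here is a minimal, hypothetical Python sketch of stimulating calculated results with a known error and fitting a first-order response model. The synthetic pressure trace, the toy volume curve, and the `imep` helper are illustrative assumptions, not the thesis's actual toolchain.

```python
import numpy as np

# Synthetic cylinder-pressure trace over one cycle (crank angle in degrees).
theta = np.linspace(-180, 180, 721)
p_true = 20 + 45 * np.exp(-((theta - 8) / 25) ** 2)   # bar, toy combustion peak

def imep(p, theta):
    """Crude indicated-work proxy: integrate p dV over a toy volume curve."""
    v = 1 + 9 * (1 + np.cos(np.radians(theta))) / 2   # illustrative volume
    return np.trapz(p, v)

# Stimulate the data with a known error type (a pressure "pegging" offset)
# and record how each calculated result responds.
offsets = np.linspace(-2, 2, 21)   # injected offset error, bar
responses = {
    "peak_pressure": [np.max(p_true + d) for d in offsets],
    "imep":          [imep(p_true + d, theta) for d in offsets],
}

# Fit a first-order response model per result; a strong, consistent slope
# marks a result that responds predictably to this error source.
for name, r in responses.items():
    slope, _ = np.polyfit(offsets, r, 1)
    print(f"{name}: sensitivity {slope:.3f} per bar of offset")
```

On this toy data the peak pressure tracks the offset one-to-one, while the closed-cycle work integral is almost insensitive to it, illustrating how a sensitivity screen of this kind can separate results that are usable as error indicators from those that are not.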
212

Customer Usage and User-Experienced Quality of NVDB Bicycle Data / Kunders användning och användarupplevd kvalitet av cykeldata i NVDB

Eriksson, Linnea January 2023 (has links)
The national road database, NVDB, contains data on Swedish roads, streets, bicycle paths, and their attributes. Ensuring good quality of the bicycle data is important, since it can help develop bicycle infrastructure and strengthen the role of cycling in the transport system. The project aimed to investigate the usage and user-experienced quality of the bicycle data in NVDB. One objective was to identify how customers are using the data, to determine whether data products, documentation, and distribution are sufficient for the customers' usage. The project also aimed to identify problems that users of NVDB bicycle data experience regarding availability, interpretability, completeness, and thematic uncertainty. Nine semi-structured interviews with users of NVDB bicycle data were carried out. Five categories of usage were identified: bikeability mapping, development of bicycle networks in built-up areas, development of recreational routes, network analysis, and cartography. The user-experienced problems identified were mainly related to completeness and interpretability.
213

Power Grid Partitioning and Monitoring Methods for Improving Resilience

Biswas, Shuchismita 20 August 2021 (has links)
This dissertation aims to develop decision-making tools that aid power grid operators in mitigating extreme events. It focuses on two distinct areas: a) improving grid performance after a severe disturbance, and b) enhancing grid monitoring to facilitate timely preventive actions. The first part of the dissertation presents a proactive islanding strategy to split the bulk power transmission system into smaller self-adequate islands in order to arrest the propagation of cascading failures after an event. Heuristic methods are proposed to determine in what sequence the island boundary lines should be disconnected so that no operating constraints are violated. The idea of optimal partitioning is further extended to the distribution network: a planning problem is formulated for determining which parts of the existing distribution grid can be converted to microgrids. This partitioning formulation addresses safety limits, uncertainties in load and generation, availability of grid-forming units, and topology constraints such as maintaining network radiality. Microgrids help maintain energy supply to critical loads during grid outages, thereby improving resilience. The second part of the dissertation focuses on wide-area monitoring using Phasor Measurement Unit (PMU) data. Strategies for data imputation and prediction that exploit the spatio-temporal correlation in PMU measurements are outlined, and a deep-learning-based methodology for identifying the location of temporary power system faults is illustrated. As severe weather events become more frequent and the threat of coordinated cyber intrusions grows, strategies to reduce the impact of such events on the power grid become important, and the approaches outlined in this work can find application in this context. / Doctor of Philosophy / The modern power grid faces multiple threats, including extreme-weather events, solar storms, and potential cyber-physical attacks. Towards the larger goal of enhancing power system resilience, this dissertation develops strategies to mitigate the impact of such extreme events. The proposed schemes broadly aim to: a) improve grid performance in the immediate aftermath of a disruptive event, and b) enhance grid monitoring to identify precursors of impending failures. To improve grid performance after a disruption, we propose a proactive islanding strategy for the bulk power grid, aimed at arresting the propagation of cascading failures. For the distribution network, a mixed-integer linear program is formulated for identifying optimal sub-networks with load and distributed generators that may be retrofitted to operate as self-adequate microgrids if supply from the bulk power system is lost. To address the question of enhanced monitoring, we develop model-agnostic, computationally efficient recovery algorithms for archived and streamed data from Phasor Measurement Units (PMU) with data drops and additive noise. PMUs are highly precise sensors that provide high-resolution insight into grid dynamics. We also illustrate an application in which PMU data is used to identify the location of temporary line faults.
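The abstract does not specify the recovery algorithms themselves. As a point of reference, one generic, model-agnostic approach consistent with the description is low-rank (SVD-based) imputation, which exploits the spatio-temporal correlation of a PMU data matrix. The sketch below is a hypothetical illustration of that idea, not the dissertation's algorithm; the function name and parameters are invented.

```python
import numpy as np

# Fill missing PMU samples by exploiting the low-rank structure of the
# measurement matrix (rows: time samples, columns: PMU channels).
def lowrank_impute(X, rank=3, iters=50):
    mask = ~np.isnan(X)                        # True where a sample arrived
    filled = np.where(mask, X, np.nanmean(X))  # initialize gaps with the mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r reconstruction
        filled = np.where(mask, X, approx)     # keep observed values fixed
    return filled

# Usage on synthetic correlated "measurements" with 20% data drops:
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 8, 200))[:, None] @ rng.normal(size=(1, 10))
noisy = base + 0.01 * rng.normal(size=base.shape)
observed = np.where(rng.random(noisy.shape) < 0.2, np.nan, noisy)
print(np.abs(lowrank_impute(observed, rank=1) - base).mean())
```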
214

A STUDY ON THE IMPACT OF PREPROCESSING STEPS ON MACHINE LEARNING MODEL FAIRNESS

Sathvika Kotha (18370548) 17 April 2024 (has links)
The success of machine learning techniques in widespread applications has taught us that, with respect to accuracy, the more data, the better the model. For fairness, however, data quality is perhaps more important than quantity. Existing studies have considered the impact of data preprocessing on the accuracy of ML model tasks, but the impact of preprocessing on the fairness of the downstream model has neither been studied nor well understood. Throughout this thesis, we conduct a systematic study of how data quality issues and data preprocessing steps impact model fairness. Our study evaluates several preprocessing techniques for several machine learning models trained over datasets with different characteristics and evaluated using several fairness metrics. It examines different data preparation techniques, such as changing categories into numbers, filling in missing information, and smoothing out unusual data points. The study measures fairness using standards that check whether the model treats all groups equally, predicts outcomes fairly, and gives similar chances to everyone. By testing these methods on various types of data, the thesis identifies which combinations of techniques can make the models both accurate and fair. The empirical analysis demonstrated that preprocessing steps like one-hot encoding, imputation of missing values, and outlier treatment significantly influence fairness metrics. Specifically, models preprocessed with median imputation and robust scaling exhibited the most balanced performance across fairness and accuracy metrics, suggesting a potential best-practice guideline for equitable ML model preparation. This work thus sheds light on the importance of data preparation in ML and emphasizes the need for careful handling of data to support fair and ethical use of ML in society.
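As a hedged illustration of the kind of experiment described, the sketch below trains a model on synthetic data under two imputation choices and reports a demographic parity gap (one common fairness metric). The dataset, model, and metric here are assumptions for illustration, not the thesis's actual pipeline or benchmarks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: a binary protected attribute and one feature with gaps.
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                         # protected attribute
x = rng.normal(loc=group * 0.8, size=n)               # feature correlated with group
y = (x + rng.normal(scale=0.5, size=n) > 0.4).astype(int)
x_missing = np.where(rng.random(n) < 0.3, np.nan, x)  # 30% missing values

def parity_gap(x_imputed):
    """Demographic parity difference: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    X = np.column_stack([group, x_imputed])
    yhat = LogisticRegression().fit(X, y).predict(X)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

# Compare how two preprocessing choices shift the fairness metric.
for name, fill in [("mean", np.nanmean(x_missing)),
                   ("median", np.nanmedian(x_missing))]:
    imputed = np.where(np.isnan(x_missing), fill, x_missing)
    print(name, "imputation -> parity gap", round(parity_gap(imputed), 3))
```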
215

Management practices and digital strategies for enhanced ESG reporting quality

Ulvtorp, Hanne January 2024 (has links)
This study employs triangulation of quantitative and qualitative methods, including content analysis, a perception survey, and expert interviews, to identify key themes and patterns in management and digital strategies for ESG reporting. The design was sequential: themes emerging from the content analysis informed the creation of the survey and the interviews. The research questions address (1) organizational challenges with ESG reporting, (2) the influence of digital strategies on reporting reliability, and (3) management practices that shape stakeholders' perception of quality, credibility, and transparency in ESG reporting. The findings reveal that organizations need to prepare and restructure to meet intensifying ESG reporting requirements. Digital strategies and solutions emerged as fundamental variables influencing the success and quality of ESG reporting practices; data streamlining, normalization, assurance, and verification processes are crucial for enhancing data traceability and credibility across the value chain. The empirical findings also showed that management and communication practices significantly influence stakeholder perception, so organizations must improve the transparency and openness of their disclosure practices to ultimately shape stakeholder perception of organizational communication. The findings suggest that organizations adopt a holistic approach to integrating ESG practices into business models and operational activities, and they emphasize the urgent need for organizations to comply with ESG reporting requirements and to continuously improve ESG performance. In conclusion, this study advocates proactive management practices to maintain a competitive advantage through improved environmental and social business practices.
216

Qualitätssicherung von Datenpublikationen bei Data Journals und Forschungsdatenrepositorien / Quality assurance of data publications at data journals and research data repositories

Kindling, Maxi 22 February 2023 (has links)
Quality assurance of research data is an important issue in open science. To enable transparency in research and the reuse of data, shared data have to meet quality requirements. However, data quality and quality assurance in the context of data publications are complex concepts that are used in diverse ways. Quality assurance practices have been researched in depth for data publications in data journals, but not systematically for research data repositories. This dissertation elaborates how quality and quality assurance for research data can be defined and systematized. On this basis, a theoretical approach for systematizing quality assurance measures is developed and used as the framework for analysing quality assurance practices at data journals and research data repositories. For this purpose, the guidelines of 135 data journals and the certification documents of 99 repositories that received the CoreTrustSeal certificate (version 2017-2019) are investigated. The analyses show how data quality is defined in data journal guidelines and by repositories, and they provide insight into repository quality assurance practices. The results informed a survey on the prevalence of quality assurance measures, which also covered open quality assurance processes, responsibilities, and transparent documentation of data quality. 332 repositories indexed in the re3data registry participated in the 2021 online survey. The results of these analyses indicate the status quo of quality assurance measures and of definitions of data quality at data journals and research data repositories. They also show that repositories contribute to the quality assurance of data publications through a variety of measures. The results are incorporated into a framework for the quality assurance of data publications at research data repositories.
217

Factors influencing the quality of data for tuberculosis control programme in Oshakati District, Namibia

Kagasi, Linda Vugutsa 11 1900 (has links)
This study investigated factors influencing the quality of data for the tuberculosis (TB) control programme in Oshakati District, Namibia. A quantitative, cross-sectional descriptive survey was conducted with 50 nurses sampled from five departments of Oshakati State Hospital. Data were collected by means of a self-administered questionnaire. The results indicated that the majority (90%) of the respondents agreed that TB training improved correct recording and reporting. Sixty percent of the respondents agreed that TB training influenced the rate of incomplete records in the unit, while 26% disagreed with this statement. This indicates that TB training influences the quality of data reported in the TB programme, as it affects correct recording and the completeness of data at the operational level. Participants' knowledge of the TB control guidelines, in particular the use of the TB records that capture the core TB indicators, influenced the quality of data in the programme. The attitudes and practices of respondents affected the implementation of the TB guidelines, thereby influencing the quality of data in the programme. The findings demonstrated a significant positive relationship (p=0.0023) between participants' attitudes and the use of the collected data for decision-making. Knowledge, attitudes and practice are the main factors influencing the quality of data in the TB control programme in Oshakati District. / Health Studies / M.A. (Public Health)
218

Factors affecting antiretroviral therapy patients' data quality at Princess Marina Hospital pharmacy in Botswana

Tesema, Hana Tsegaye 04 June 2015 (has links)
AIM: This study aimed to explore the factors influencing antiretroviral therapy patients' data quality at Princess Marina Hospital Pharmacy in Botswana. METHODS: A phenomenological approach was adopted; specifically, an Interpretative Phenomenological Analysis qualitative design was used to explore the factors influencing antiretroviral therapy patients' data quality at Princess Marina Hospital Pharmacy. Data were collected through semi-structured interviews with 18 conveniently sampled pharmacy staff and analysed using Smith's (2005) Interpretative Phenomenological Analysis framework. RESULTS: Five thematic categories emerged from the data analysis: data capturing as an extra task; knowledge and experience of IPMS; training and education; mentoring and supervision; and the impact of data quality on patient care. The findings of this study have implications for practice, training and research. CONCLUSION: Pharmacy staff had limited knowledge of IPMS and its use in data capturing. Such limitations have implications for the quality of the data captured. / Health Studies / M.A. (Health Studies)
219

Development of artificial intelligence-based in-silico toxicity models: data quality analysis and model performance enhancement through data generation

Malazizi, Ladan January 2008 (has links)
Toxic compounds, such as pesticides, are routinely tested against a range of aquatic, avian and mammalian species as part of the registration process. The need to reduce dependence on animal testing has led to increasing interest in alternative methods such as in silico modelling. QSAR (Quantitative Structure Activity Relationship)-based models are already in use for predicting physicochemical properties, environmental fate, eco-toxicological effects, and specific biological endpoints for a wide range of chemicals. Data play an important role both in modelling QSARs and in analysing the results of toxicity testing processes. This research addresses a number of issues in predictive toxicology. The first is data quality. Although a large amount of toxicity data is available from online sources, it may contain unreliable samples and may be of low quality; its presentation may also be inconsistent across sources, which makes accessing, interpreting and comparing the information difficult. To address this issue, we began with a detailed investigation and experimental work on the DEMETRA data, datasets produced by the EC-funded project DEMETRA. Based on this investigation, the experiments and the results obtained, the author identified a number of data quality criteria in order to provide a solution for data evaluation in the toxicology domain, and an algorithm is proposed to assess data quality before modelling. A second issue is missing values in toxicology datasets. The Least Squares Method for a paired dataset and Serial Correlation for a single-version dataset provided solutions to this problem in two different situations, and a procedural algorithm using these two methods is proposed to overcome the problem of missing values. A third issue is the modelling of multi-class datasets with a severely imbalanced distribution of class samples, which degrades the performance of classifiers during classification. We show that, as long as we understand how class members are constructed in the dimensional space of each cluster, we can reform the distribution and provide more domain knowledge for the classifier.
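As an illustration of the least-squares idea for a paired dataset, the following hypothetical Python sketch fits a linear relationship on the complete pairs and uses it to fill gaps. The variable names and data are invented for illustration; the thesis's exact procedure may differ.

```python
import numpy as np

# Regression-based imputation for a "paired" dataset: when endpoint b is
# missing, predict it from the paired endpoint a via an ordinary
# least-squares fit on the complete pairs.
def ls_impute(a, b):
    complete = ~np.isnan(b)
    slope, intercept = np.polyfit(a[complete], b[complete], 1)
    return np.where(np.isnan(b), slope * a + intercept, b)

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # e.g., toxicity for species A
b = np.array([2.1, np.nan, 6.2, np.nan, 10.0])  # e.g., toxicity for species B
print(ls_impute(a, b))  # gaps filled from the fitted linear relationship
```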
220

Les limites de l'ACV. Etude de la soutenabilité d'un biodiesel issu de l'huile de palme brésilienne / The LCA limits. A study of the sustainability of a biodiesel produced from Brazilian palm oil

Bicalho, Tereza 22 October 2013 (has links)
Life cycle analysis (LCA), as it is currently applied, can lead to biased results. The use of LCA information is particularly sensitive in government regulatory frameworks. Instead of encouraging companies to reduce their impact on the environment, certifications obtained through LCA studies may produce the opposite effect: because they tend to reward industry averages rather than enterprise-specific results, they can destroy any incentive for companies to reduce their environmental impacts. In this thesis we propose an in-depth analysis of management aspects in LCA and discuss how they could contribute to producing good-quality LCA studies. For this, a case study was conducted on the sustainability evaluation of a biodiesel produced from Brazilian palm oil within the framework of the Renewable Energy Directive (RED). Three main findings emerge from this doctoral work. The first concerns the analysis of the sustainability evaluation required by RED, with particular emphasis on its application to the Brazilian context of palm oil production. The second concerns the concrete findings of the biodiesel evaluation, particularly with respect to greenhouse gas emissions. The third concerns the identification of latent needs in LCA data quality assessment.
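For orientation, the greenhouse-gas accounting that the RED sustainability evaluation rests on is commonly summarized as below. This reproduces the widely cited Annex V formulation of Directive 2009/28/EC as context for the abstract; it is not the thesis's own derivation.

```latex
% Total life-cycle emissions from biofuel production and use
% (RED 2009/28/EC, Annex V, part C):
E = e_{ec} + e_{l} + e_{p} + e_{td} + e_{u}
    - e_{sca} - e_{ccs} - e_{ccr} - e_{ee}
% terms: cultivation/extraction, land-use change, processing, transport
% and distribution, fuel in use; minus credits for soil carbon
% accumulation, carbon capture (storage or replacement), and excess
% electricity from cogeneration.

% GHG saving relative to the fossil fuel comparator E_F:
\text{SAVING} = \frac{E_F - E_B}{E_F}
```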
