161

Exploring the potential for secondary uses of Dementia Care Mapping (DCM) data for improving the quality of dementia care

Khalid, Shehla, Surr, Claire A., Neagu, Daniel, Small, Neil A. 30 March 2017 (has links)
Yes / The reuse of existing datasets to identify mechanisms for improving healthcare quality has been widely encouraged, but there has been limited application within dementia care. Dementia Care Mapping (DCM) is an observational tool in widespread use, predominantly to assess and improve quality of care in single organisations. DCM data have the potential to be used for secondary purposes to improve quality of care; however, their suitability for such use requires careful evaluation. This study conducted in-depth interviews with 29 DCM users to identify issues, concerns and challenges regarding the secondary use of DCM data. Data were analysed using modified Grounded Theory. Major themes identified included the need to collect complementary contextual data in addition to DCM data, the need to reassure users regarding ethical issues associated with the storage and reuse of care-related data, and the need to assess and specify data quality for any data made available for secondary analysis. / This study was funded by the Faculty of Health Studies, University of Bradford.
162

Hidden labour: The skilful work of clinical audit data collection and its implications for secondary use of data via integrated health IT

McVey, Lynn, Alvarado, Natasha, Greenhalgh, J., Elshehaly, Mai, Gale, C.P., Lake, J., Ruddle, R.A., Dowding, D., Mamas, M., Feltbower, R., Randell, Rebecca 26 July 2021 (has links)
Yes / Secondary use of data via integrated health information technology is fundamental to many healthcare policies and processes worldwide. However, repurposing data can be problematic, and little research has been undertaken into the everyday practicalities of inter-system data sharing that helps explain why this is so, especially within (as opposed to between) organisations. In response, this article reports one of the most detailed empirical examinations undertaken to date of the work involved in repurposing healthcare data for National Clinical Audits. Methods: Fifty-four semi-structured, qualitative interviews were carried out with staff in five English National Health Service hospitals about their audit work, including 20 staff involved substantively with audit data collection. In addition, ethnographic observations took place on wards, in ‘back offices’ and at meetings (102 hours). Findings were analysed thematically and synthesised in narratives. Results: Although data for some audit fields were available within hospital applications and could, in theory, have been auto-populated, in practice staff regularly negotiated multiple, unintegrated systems to generate audit records. This work was complex and skilful, and involved cross-checking and double data entry, often using paper forms, to assure data quality and inform quality improvements. Conclusions: If technology is to facilitate the secondary use of healthcare data, the skilled but largely hidden labour of those who collect and recontextualise those data must be recognised. Their detailed understandings of what it takes to produce high-quality data in specific contexts should inform the further development of integrated systems within organisations.
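The cross-checking and reconciliation work described above can be illustrated with a minimal sketch (Python); the field names and record structures are hypothetical, invented for the example rather than taken from the hospitals or audits studied.

```python
# Minimal illustration of cross-checking an audit field across two
# unintegrated systems. Field names and records are hypothetical.

AUDIT_FIELDS = ["admission_date", "primary_diagnosis", "discharge_status"]

def cross_check(patient_id, ward_record, audit_record):
    """Return the fields whose values disagree between the two systems."""
    discrepancies = []
    for field in AUDIT_FIELDS:
        ward_value = ward_record.get(field)
        audit_value = audit_record.get(field)
        if ward_value != audit_value:
            discrepancies.append((patient_id, field, ward_value, audit_value))
    return discrepancies

# One record as held in a ward application and in the audit data collection tool.
ward = {"admission_date": "2021-03-02", "primary_diagnosis": "I21.0",
        "discharge_status": "home"}
audit = {"admission_date": "2021-03-02", "primary_diagnosis": "I21.9",
         "discharge_status": "home"}

for pid, field, a, b in cross_check("P001", ward, audit):
    print(f"{pid}: '{field}' differs ({a!r} vs {b!r}) - flag for manual review")
```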
163

SILE: A Method for the Efficient Management of Smart Genomic Information

León Palacio, Ana 25 November 2019 (has links)
In the last two decades, the data generated by Next Generation Sequencing technologies have revolutionized our understanding of human biology. Furthermore, they have allowed us to develop and improve our knowledge of how changes (variants) in the DNA can be related to the risk of developing certain diseases. Currently, a large amount of genomic data is publicly available and frequently used by the research community to extract meaningful and reliable associations between risk genes and the mechanisms of disease. However, managing this exponentially growing volume of data has become a challenge, and researchers are forced to delve into a lake of complex data spread across more than a thousand heterogeneous repositories, represented in multiple formats and with different levels of quality. Moreover, when these data are used to solve a concrete problem, only a small part of them is really significant; this is what we call "smart" data. The main goal of this thesis is to provide a systematic approach to efficiently manage smart genomic data by using conceptual modeling techniques and the principles of data quality assessment. The aim of this approach is to populate an Information System with data that are accessible, informative and actionable enough to extract valuable knowledge. / This thesis was supported by the Research and Development Aid Program (PAID-01-16) under the FPI grant 2137. / León Palacio, A. (2019). SILE: A Method for the Efficient Management of Smart Genomic Information [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/131698 / Premios Extraordinarios de tesis doctorales
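The thesis's notion of reducing a large, heterogeneous variant collection to "smart" data can be sketched as a simple quality filter (Python); the attribute names and thresholds below are illustrative assumptions, not the assessment criteria actually defined by the SILE method.

```python
# Illustrative filter keeping only variant records that meet basic quality
# criteria. Attribute names and thresholds are assumptions for the example.

variants = [
    {"id": "rs0001", "gene": "BRCA2", "source": "ClinVar", "review_status": 3,
     "clinical_significance": "pathogenic", "publications": 12},
    {"id": "rs0002", "gene": "TP53", "source": "ClinVar", "review_status": 0,
     "clinical_significance": "uncertain", "publications": 0},
]

def is_smart(variant, min_review=2, min_publications=1):
    """Keep variants that are well reviewed, documented in the literature
    and carry an interpretable clinical significance."""
    return (variant["review_status"] >= min_review
            and variant["publications"] >= min_publications
            and variant["clinical_significance"] != "uncertain")

smart_data = [v for v in variants if is_smart(v)]
print([v["id"] for v in smart_data])   # -> ['rs0001']
```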
164

An analysis of semantic data quality deficiencies in a national data warehouse: a data mining approach

Barth, Kirstin 07 1900 (has links)
This research determines whether data quality mining can be used to describe, monitor and evaluate the scope and impact of semantic data quality problems in the learner enrolment data on the National Learners’ Records Database. Previous data quality mining work has focused on anomaly detection and has assumed that the data quality aspect being measured exists as a data value in the data set being mined. The method for this research is quantitative: the data mining techniques and model best suited to semantic data quality deficiencies are identified and then applied to the data. The research finds that unsupervised data mining techniques that allow for weighted analysis of the data are most suitable for mining semantic data deficiencies. Further, the academic Knowledge Discovery in Databases model needs to be amended when applied to mining semantic data quality deficiencies. / School of Computing / M. Tech. (Information Technology)
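As an illustration of the unsupervised, weighted style of analysis the study recommends, the sketch below scores enrolment records against weighted semantic plausibility rules; the rules and weights are invented for the example and are not those applied to the National Learners’ Records Database.

```python
# Illustrative weighted scoring of semantic data quality problems in
# learner enrolment records. Rules and weights are invented for the example.

RULES = [
    # (description, predicate that is True when the record looks suspect, weight)
    ("enrolment precedes birth",
     lambda r: r["enrolment_year"] < r["birth_year"], 1.0),
    ("implausibly young learner",
     lambda r: 0 <= r["enrolment_year"] - r["birth_year"] < 14, 0.6),
    ("completion precedes enrolment",
     lambda r: r["completion_year"] < r["enrolment_year"], 0.8),
]

def semantic_quality_score(record):
    """Higher scores indicate more (and more heavily weighted) semantic problems."""
    return sum(weight for _, check, weight in RULES if check(record))

records = [
    {"id": 1, "birth_year": 1990, "enrolment_year": 2010, "completion_year": 2012},
    {"id": 2, "birth_year": 2005, "enrolment_year": 2010, "completion_year": 2009},
]

for record in sorted(records, key=semantic_quality_score, reverse=True):
    print(record["id"], round(semantic_quality_score(record), 2))
```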
165

Strategies to Improve Data Quality for Forecasting Repairable Spare Parts

Eguasa, Uyi Harrison 01 January 2016 (has links)
Poor input data quality used in repairable spare parts forecasting by aerospace small and midsize enterprise (SME) suppliers results in poor inventory practices that manifest in higher costs and critical supply shortage risks. Guided by data quality management (DQM) theory as the conceptual framework, the purpose of this exploratory multiple case study was to identify the key strategies that aerospace SME repairable spares suppliers use to maximize the quality of the input data used in forecasting repairable spare parts. The multiple case study comprised a census sample of 6 forecasting business leaders from aerospace SME repairable spares suppliers located in the states of Florida and Kansas. Data were collected via semistructured interviews and supporting documentation from the consenting participants and organizational websites. Eight core themes emanated from the application of the content data analysis process coupled with methodological triangulation: establish data governance, identify quality forecast input data sources, develop sustainable relationships and collaboration with customers and vendors, utilize a strategic data quality system, conduct continuous input data quality analysis, identify input data quality measures, incorporate continuous improvement initiatives, and engage in data quality training and education. Of the 8 core themes, 6 aligned with the DQM theory's conceptual constructs while 2 surfaced as outliers. A key implication of the research for positive social change is increased situational awareness among SME forecasting business leaders, encouraging a focus on enhancing business practices for the input data quality used to forecast repairable spare parts and so attain sustainable profits.
166

Measurement properties of respondent-defined rating-scales : an investigation of individual characteristics and respondent choices

Chami-Castaldi, Elisa January 2010 (has links)
It is critical for researchers to be confident of the quality of survey data. Problems with data quality often relate to measurement method design, through choices made by researchers in their creation of standardised measurement instruments. This is known to affect the way respondents interpret and respond to these instruments, and can result in substantial measurement error. Current methods for removing measurement error are post-hoc and have been shown to be problematic. This research proposes that innovations can be made through the creation of measurement methods that take respondents' individual cognitions into consideration, to reduce measurement error in survey data. Specifically, the aim of the study was to develop and test a measurement instrument capable of having respondents individualise their own rating-scales. A mixed methodology was employed. The qualitative phase provided insights that led to the development of the Individualised Rating-Scale Procedure (IRSP). This electronic measurement method was then tested in a large multi-group experimental study, where its measurement properties were compared to those of Likert-Type Rating-Scales (LTRSs). The survey included pre-validated psychometric constructs which provided a baseline for comparing the methods, as well as to explore whether certain individual characteristics are linked to respondent choices. Structural equation modelling was used to analyse the survey data. Whilst no strong associations were found between individual characteristics and respondent choices, the results demonstrated that the IRSP is reliable and valid. This study has produced a dynamic measurement instrument that accommodates individual-level differences, not addressed by typical fixed rating-scales.
167

Factors affecting the performance of trainable models for software defect prediction

Bowes, David Hutchinson January 2013 (has links)
Context. Reports suggest that defects in code cost the US in excess of $50 billion per year to put right. Defect prediction is an important part of software engineering. It allows developers to prioritise the code that needs to be inspected when trying to reduce the number of defects in code. A small change in the number of defects found will have a significant impact on the cost of producing software. Aims. The aim of this dissertation is to investigate the factors which affect the performance of defect prediction models. Identifying the causes of variation in the way that variables are computed should help to improve the precision of defect prediction models and hence improve the cost effectiveness of defect prediction. Methods. This dissertation is by published work. The first three papers examine variation in the independent variables (code metrics) and the dependent variable (number/location of defects). The fourth and fifth papers investigate the effect that different learners and datasets have on the predictive performance of defect prediction models. The final paper investigates the reported use of different machine learning approaches in studies published between 2000 and 2010. Results. The first and second papers show that independent variables are sensitive to the measurement protocol used; this suggests that the way data are collected affects the performance of defect prediction. The third paper shows that dependent variable data may be untrustworthy, as there is no reliable method for labelling a unit of code as defective or not. The fourth and fifth papers show that the dataset and learner used when producing defect prediction models have an effect on the performance of the models. The final paper shows that the approaches used by researchers to build defect prediction models are variable, with good practices being ignored in many papers. Conclusions. The measurement protocols for independent and dependent variables used for defect prediction need to be clearly described so that results can be compared like with like. It is possible that the predictive results of one research group have a higher performance value than another research group's because of the way that they calculated the metrics rather than the method of building the model used to predict the defect-prone modules. The machine learning approaches used by researchers need to be clearly reported in order to improve the quality of defect prediction studies and allow a larger corpus of reliable results to be gathered.
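The reported sensitivity of performance to the learner and dataset can be illustrated with a generic sketch (Python with scikit-learn); the synthetic data stands in for real code-metric datasets, and the two learners are arbitrary examples rather than the specific approaches examined in the dissertation.

```python
# Illustrative comparison of two learners on the same synthetic "defect" data.
# The data and the choice of learners are arbitrary; the point is only that
# the reported performance depends on both.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# Synthetic "code metrics" with an imbalanced defective class (~15% defective).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for learner in (LogisticRegression(max_iter=1000),
                RandomForestClassifier(random_state=0)):
    learner.fit(X_train, y_train)
    mcc = matthews_corrcoef(y_test, learner.predict(X_test))
    print(type(learner).__name__, round(mcc, 3))
```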
168

Historical aerial photographs and digital photogrammetry for landslide assessment

Walstra, Jan January 2006 (has links)
This study demonstrates the value of historical aerial photographs as a source for monitoring long-term landslide evolution, which can be unlocked by using appropriate photogrammetric methods. The understanding of landslide mechanisms requires extensive data records; a literature review identified quantitative data on surface movements as a key element for their analysis. It is generally acknowledged that, owing to the flexibility and high degree of automation of modern digital photogrammetric techniques, it is possible to derive detailed quantitative data from aerial photographs. In spite of the relative ease of such techniques, there is only scarce research on the data quality that can be achieved using commonly available material, hence the motivation of this study. Two landslide case studies (the Mam Tor and East Pentwyn landslides) explored the different types of products that can be derived from historical aerial photographs. These products comprised geomorphological maps, automatically derived digital elevation models (DEMs) and displacement vectors. They proved to be useful and sufficiently accurate for monitoring landslide evolution, and comparison with independent survey data showed good consistency, hence validating the techniques used. A wide range of imagery was used in terms of quality, media and format. Analysis of the combined datasets resulted in improvements to the stochastic model and the establishment of a relationship between image ground resolution and data accuracy. Undetected systematic effects placed a limiting constraint on the accuracy of the derived data, but the datasets proved insufficient to quantify each factor individually. An important advancement in digital photogrammetry is image matching, which allows automation of various stages of the working chain; however, the radiometric quality of historical images may not always assure good results when extracting DEMs and displacement vectors with automatic methods. It can be concluded that the photographic archive can provide invaluable data for landslide studies when modern photogrammetric techniques are used. As ever, independent and appropriate checks should always be included in any photogrammetric design.
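A rough sense of the resolution-to-accuracy relationship mentioned above can be conveyed with a back-of-the-envelope calculation (Python); the rule of thumb that ground sample distance equals the scanned pixel size multiplied by the photo scale number is standard photogrammetric practice, but the figures are illustrative and not the study's results.

```python
# Back-of-the-envelope ground sample distance (GSD) for a scanned aerial photo.
# Illustrative numbers only; the archival imagery used in the study differs.

def ground_sample_distance_m(scan_pixel_um, photo_scale_number):
    """GSD in metres: pixel size on the film multiplied by the photo scale number."""
    return scan_pixel_um * 1e-6 * photo_scale_number

# A 1:10,000 photograph scanned at 21 micrometres (roughly 1200 dpi).
gsd = ground_sample_distance_m(21, 10_000)
print(f"GSD ~ {gsd:.2f} m")   # ~0.21 m: a floor on the detail and accuracy achievable
```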
169

Postoje adolescentů ve výzkumech veřejného mínění, kvalita a spolehlivost získaných dat / Adolescent's attitudes in public opinion research, data quality and reliability

Šlégrová, Petra January 2014 (has links)
The diploma thesis focuses on the youngest age category of respondents in public opinion polls. The main goal is to examine the character and quality of information about adolescents' attitudes and opinions obtained in the public opinion polls held by The Public Opinion Research Centre; to achieve this, the presence of nonattitudes is examined. The thesis is divided into a theoretical and a practical part. The theoretical part draws on the sociology of public opinion and on developmental psychology, introducing the issue of attitude measurement along with adolescent developmental theory and characteristics. The practical part builds on the theoretical material and tests it on data collected by The Public Opinion Research Centre in its continuous research within the Our Society project. The analysis focuses on nonresponse, 'don't know' answers and neutral attitudes, and results are compared across all age groups.
170

Modèle d'estimation de l'imprécision des mesures géométriques de données géographiques / A model to estimate the imprecision of geometric measurements computed from geographic data.

Girres, Jean-François 04 December 2012 (has links)
Many GIS applications rely on length and area measurements computed from the geometry of the objects in a geographic database (route planning or population density maps, for example). However, no information about the imprecision of these measurements is currently communicated to the end user. Indeed, most indicators of geometric quality focus on positioning errors, not on measurement errors, which are nonetheless very frequent. In this context, this thesis develops methods for estimating the imprecision of geometric measurements of length and area, in order to inform users and support decision making. To achieve this objective, we propose a model to estimate the impacts of representation rules (cartographic projection, disregard of the terrain, polygonal approximation of curves) and production processes (digitizing error, cartographic generalisation) on geometric measurements of length and area, according to the characteristics and the spatial context of the evaluated objects. Methods for acquiring knowledge about the evaluated data are also proposed, to make it easier for the user to parameterize the model. Combining the individual impacts into a global estimate of measurement imprecision remains a complex problem, and we propose initial approaches for bounding this cumulative error. The proposed model is implemented in the EstIM prototype (Estimation of the Imprecision of Measurements).
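One of the impacts the model accounts for, the effect of an ill-suited cartographic projection on a length measurement, can be illustrated with a short, self-contained calculation (Python); the coordinates and the crude equirectangular projection below are assumptions made for the example, not the estimators implemented in the EstIM prototype.

```python
# Illustrative comparison of a planar length, computed after a crude
# equirectangular projection, with the great-circle length of the same
# segment. Coordinates and the projection choice are assumptions for the
# example, not the estimators implemented in EstIM.

import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two lon/lat points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def equirectangular_m(lon1, lat1, lon2, lat2, ref_lat):
    """Planar length after an equirectangular projection centred on ref_lat."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians(ref_lat)) * EARTH_RADIUS_M
    y = math.radians(lat2 - lat1) * EARTH_RADIUS_M
    return math.hypot(x, y)

# A long east-west segment at latitude 45, projected about an unsuitable
# reference latitude (the equator): the planar length overestimates badly.
p1, p2 = (0.0, 45.0), (10.0, 45.0)           # (lon, lat) in degrees
true_len = haversine_m(*p1, *p2)
proj_len = equirectangular_m(*p1, *p2, ref_lat=0.0)
print(f"great-circle: {true_len / 1000:.1f} km, planar: {proj_len / 1000:.1f} km, "
      f"relative error: {abs(proj_len - true_len) / true_len:.1%}")
```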
