  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Quantitative Evaluation of Software Quality Metrics in Open-Source Projects

Barkmann, Henrike January 2009
The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires considerable effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with about 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. On this statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metrics values and possible thresholds.
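As a rough illustration of the correlation analysis this abstract describes, the following Python sketch computes pairwise Spearman correlations between a few well-known object-oriented metrics; the metric values are invented, and real data would come from the authors' analysis tools.

```python
# Hypothetical illustration: pairwise Spearman correlation between object-oriented
# metrics measured per class. The numbers are made up, not the study's data.
import pandas as pd

metrics = pd.DataFrame({
    "WMC": [12, 45, 7, 30, 22, 3],        # weighted methods per class
    "CBO": [4, 15, 2, 11, 9, 1],          # coupling between object classes
    "DIT": [1, 3, 1, 2, 2, 1],            # depth of inheritance tree
    "LOC": [150, 900, 60, 520, 340, 25],  # lines of code
})

# Spearman rank correlation is robust to the heavily skewed distributions
# typical of size and coupling metrics.
corr = metrics.corr(method="spearman")
print(corr.round(2))

# Pairs with |rho| above some threshold could be treated as redundant, so only
# one metric of each pair would need to be validated independently.
redundant = [(a, b, round(corr.loc[a, b], 2))
             for a in corr.columns for b in corr.columns
             if a < b and abs(corr.loc[a, b]) > 0.8]
print(redundant)
```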
2

The Impact of Objective Quality Ratings on Patient Selection of Community Pharmacies: A Discrete Choice Experiment and Latent Class Analysis

Patterson, Julie A 01 January 2017
Background: Pharmacy-related performance measures have gained significant attention in the transition to value-based healthcare. Pharmacy-level quality measures, including those developed by the Pharmacy Quality Alliance, are not yet publicly accessible. However, the publication of report cards for individual pharmacies has been discussed as a way to help direct patients towards high-quality pharmacies. This study aimed to measure the relative strength of patient preferences for community pharmacy attributes, including pharmacy quality. Additionally, this study aimed to identify and describe community pharmacy market segments based on patient preferences for pharmacy attributes.

Methods: This study elicited patient preferences for community pharmacy attributes using a discrete choice experiment (DCE) among a sample of 773 adults aged 18 years and older. Six attributes were selected based on published literature, expert opinion, and pilot testing feedback. The attributes included hours of operation, staff friendliness/courtesy, pharmacist communication, pharmacist willingness to establish a personal relationship, overall quality, and a drug-drug interaction specific quality metric. Participants responded to a block of ten random choice tasks assigned by Sawtooth v9.2 and two fixed tasks, including a dominant and a hold-out scenario. The data were analyzed using conditional logit and latent class regression models, and Hierarchical Bayes estimates of individual-level utilities were used to compare preferences across demographic subgroups.

Results: Among the 773 respondents who began the survey, 741 (95.9%) completed the DCE and demographic questionnaire. Overall, study participants expressed the strongest preferences for quality-related pharmacy attributes. The attribute importance values (AIVs) were highest for the specific drug-drug interaction (DDI) quality measure, presented as "The pharmacy ensured there were no patients who were dispensed two medications that can cause harm when taken together" (40.3%), and the overall pharmacy quality measure (31.3%). The utility values for 5-star DDI and overall quality ratings were higher among women (83.0 and 103.8, respectively) than men (76.2 and 94.5, respectively), and patients with inadequate health literacy ascribed higher utility to pharmacist efforts to get to know their patients (26.0) than their higher-literacy counterparts (16.3). The best model from the latent class analysis contained three classes, coined the Quality Class (67.6% of participants), the Relationship Class (28.3%), and the Convenience Class (4.2%).

Conclusions: The participants in this discrete choice experiment exhibited strong preferences for pharmacies with higher quality ratings. This finding may reflect patient expectations of community pharmacists, namely that pharmacists ensure that patients are not harmed by the medications filled at their pharmacies. Latent class analysis revealed underlying heterogeneity in patient preferences for community pharmacy attributes.
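For readers unfamiliar with how attribute importance values are derived in a DCE, the sketch below shows the usual calculation from part-worth utility ranges; the utilities are invented placeholders, not estimates from this study.

```python
# Illustrative only: attribute importance values (AIVs) from part-worth utilities.
# The utilities below are made-up placeholders, not the study's estimates.
part_worths = {
    "DDI quality rating":       {"1 star": -60.0, "3 stars": 5.0, "5 stars": 80.0},
    "Overall quality rating":   {"1 star": -45.0, "3 stars": 0.0, "5 stars": 60.0},
    "Pharmacist communication": {"poor": -20.0, "good": 20.0},
    "Hours of operation":       {"limited": -10.0, "extended": 10.0},
}

# An attribute's importance is its utility range divided by the sum of all ranges.
ranges = {attr: max(u.values()) - min(u.values()) for attr, u in part_worths.items()}
total = sum(ranges.values())
aivs = {attr: 100.0 * r / total for attr, r in ranges.items()}

for attr, aiv in sorted(aivs.items(), key=lambda kv: -kv[1]):
    print(f"{attr:25s} {aiv:5.1f}%")
```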
3

Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime 01 July 2011
Nowadays, digital images are used in many areas of everyday life, but they tend to be large. This increasing amount of information leads to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green, and blue channels employ 8 bits each. Consequently, this kind of color pixel can specify one of 2^24 ≈ 16.78 million colors, and an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided the losses of image information are not perceived by the eye, which is possible by assuming that a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image that was encoded. Lossy image compression needs to identify two features of the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image must be to the original is defined prior to the compression process and depends on the implementation to be performed. In lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that although the numerical quality of the compressed image is low, it shows a high visual quality, i.e., it does not show many visible artifacts. This is because the mathematical criteria used to remove information do not take into account whether the discarded information is perceived by the Human Visual System. Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, although their numerical quality may be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are imperceptible to the Human Visual System. First, we define an image quality metric that is highly correlated with psychophysical experiments performed by human observers. The proposed CwPSNR metric weights the well-known PSNR using a perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET), which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features of modern image compressors; that is, it is an embedded coder, which allows progressive transmission. Third, we propose a perceptual quantizer (½SQ), which is a modification of the uniform scalar quantizer. The ½SQ is applied to a pixel set in a certain wavelet sub-band, that is, as a global quantization. In contrast, the proposed modification allows a local, pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion that depends on the spatial information surrounding the pixel. Combining the ½SQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ©SET. Finally, a coding method for Region of Interest areas is presented, ½GBbBShift, which perceptually weights pixels inside these areas and maintains only the most important perceivable features in the rest of the image. The results presented in this report show that CwPSNR is the best-ranked image quality method when it is applied to the most common image compression distortions, such as JPEG and JPEG2000. CwPSNR shows the best correlation with the judgement of human observers, based on the results of psychophysical experiments obtained for relevant image quality databases such as TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both in compression ratio and in perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. Hence, when the proposed perceptual quantization is introduced into the Hi-SET coder, our compressor improves its numerical and perceptual efficiency. When the ½GBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method obtain the best results when the overall image quality is estimated. Both the proposed perceptual quantization and the ½GBbBShift method are general algorithms that can be applied to other wavelet-based image compression algorithms, such as JPEG2000, SPIHT or SPECK.
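The abstract above describes weighting PSNR with a perceptual model (CwPSNR). As a rough illustration of the idea, the sketch below computes plain PSNR and a weighted variant; the uniform weight map is only a placeholder, since the actual CwPSNR weighting comes from the Chromatic Induction Wavelet Model and is not reproduced here.

```python
# Minimal sketch of PSNR and a placeholder "perceptually weighted" variant.
# A real perceptual weighting (e.g. CIWaM-based) would supply a non-uniform map.
import numpy as np

def psnr(reference, distorted, peak=255.0):
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def weighted_psnr(reference, distorted, weights, peak=255.0):
    # Weighted MSE: errors in perceptually important regions count more.
    err = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return float("inf") if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
dist = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
w = np.ones_like(ref)  # with a uniform map the two scores coincide
print(psnr(ref, dist), weighted_psnr(ref, dist, w))
```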
4

Quantitative Evaluation of Software Quality Metrics in Open-Source Projects

Barkmann, Henrike January 2009
The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires considerable effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with about 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. On this statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metrics values and possible thresholds.
5

Quality metrics in continuous delivery: A mixed approach

Jain, Aman, Aduri, Raghu ram January 2016
Context. Continuous delivery deals with the concept of deploying user stories as soon as they are finished rather than waiting for the sprint to end. This concept increases the chances of early improvement to the software and provides the customer with a clear view of the final product that is expected from the software organization, but little research has been done on the quality of the product developed and the ways to measure it. This research is conducted in the context of presenting a checklist of quality metrics that can be used by practitioners to ensure good quality product delivery.

Objectives. In this study, the authors strive towards the accomplishment of the following objectives: the first objective is to identify the quality metrics being used in agile approaches and continuous delivery by organizations. The second objective is to evaluate the usefulness of the identified metrics, the limitations of the metrics, and to identify new metrics. The final objective is to present and evaluate a solution, i.e., a checklist of metrics that can be used by practitioners to ensure the quality of a product developed using continuous delivery.

Methods. To accomplish the objectives, the authors used a mixture of approaches. First, a literature review was performed to identify the quality metrics being used in continuous delivery. Based on the data obtained from the literature review, the authors performed an online survey using a questionnaire posted on an online questionnaire hosting website. The online questionnaire was intended to find the usefulness of the identified metrics, the limitations of using metrics, and also to identify new metrics based on the responses obtained. The authors then conducted interviews comprising a few close-ended questions and a few open-ended questions, which helped the authors validate the usage of the metrics checklist.

Results. Based on the literature review performed at the start of the study, the authors obtained data regarding the background of continuous delivery, research performed on continuous delivery by various practitioners, as well as a list of quality metrics used in continuous delivery. Later, the authors conducted an online survey using a questionnaire that resulted in ranking the usefulness of quality metrics and in the identification of new metrics used in continuous delivery. Based on the data obtained from the online questionnaire, a checklist of quality metrics involved in continuous delivery was generated.

Conclusions. Based on the interviews conducted to validate the checklist of metrics (generated as a result of the online questionnaire), the authors conclude that the checklist of metrics is fit for use in industry, but with some necessary changes made to the checklist based on project requirements. The checklist will act as a reminder to practitioners regarding the quality aspects that need to be measured during product development, and perhaps as a starting point when planning the metrics that need to be measured during the project.
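As a hedged illustration of how such a checklist might be put to use, the sketch below represents checklist items as simple records and filters them by a project-specific usefulness threshold; the metric names and rankings are generic examples, not the checklist produced by this study.

```python
# A sketch of how a quality-metrics checklist for continuous delivery could be
# represented and tailored per project. Metric names and rankings are generic
# examples, not the checklist generated by the study.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    metric: str
    phase: str          # where in the delivery pipeline it is measured
    usefulness: int     # e.g. survey ranking, 1 (low) to 5 (high)

CHECKLIST = [
    ChecklistItem("Deployment frequency", "release", 5),
    ChecklistItem("Lead time for changes", "pipeline", 5),
    ChecklistItem("Automated test coverage", "build", 4),
    ChecklistItem("Defect escape rate", "production", 4),
    ChecklistItem("Mean time to recovery", "production", 3),
]

def tailor(checklist, min_usefulness):
    """Keep only the items a project team considers worth measuring."""
    return [item for item in checklist if item.usefulness >= min_usefulness]

for item in tailor(CHECKLIST, min_usefulness=4):
    print(f"[{item.phase:10s}] {item.metric}")
```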
6

Metrics and Test Procedures for Data Quality Estimation in the Aeronautical Telemetry Channel

Hill, Terry 10 1900
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / There is great potential in using Best Source Selectors (BSS) to improve link availability in aeronautical telemetry applications. While the general notion that diverse data sources can be used to construct a consolidated stream of "better" data is well founded, there is no standardized means of determining the quality of the data streams being merged together. Absent this uniform quality data, the BSS has no analytically sound way of knowing which streams are better, or best. This problem is further exacerbated when one imagines that multiple vendors are developing data quality estimation schemes, with no standard definition of how to measure data quality. In this paper, we present measured performance for a specific Data Quality Metric (DQM) implementation, demonstrating that the signals present in the demodulator can be used to quickly and accurately measure the data quality, and we propose test methods for calibrating DQM over a wide variety of channel impairments. We also propose an efficient means of encapsulating this DQM information with the data, to simplify processing by the BSS. This work leads toward a potential standardization that would allow data quality estimators and best source selectors from multiple vendors to interoperate.
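To illustrate the encapsulation idea in the abstract, the following sketch prepends a per-block quality word to each telemetry payload so a best source selector can compare streams; the header layout and quality scale are assumptions for illustration, not the DQM format proposed in the paper.

```python
# Illustration of carrying a data-quality metric (DQM) alongside each telemetry
# block so a Best Source Selector (BSS) can compare streams. The header layout
# and quality scale are assumptions, not the proposed standard.
import struct

def encapsulate(block: bytes, quality: float) -> bytes:
    """Prepend a 16-bit quality word (0..65535) and a 16-bit length to the block."""
    q = max(0, min(65535, int(round(quality * 65535))))
    return struct.pack(">HH", q, len(block)) + block

def best_source(frames):
    """Pick the frame whose header reports the highest quality."""
    def quality(frame):
        q, _length = struct.unpack(">HH", frame[:4])
        return q
    return max(frames, key=quality)

a = encapsulate(b"payload-from-antenna-A", quality=0.92)
b = encapsulate(b"payload-from-antenna-B", quality=0.71)
print(best_source([a, b])[4:])  # -> b'payload-from-antenna-A'
```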
7

Information quality assessment in e-learning systems

Alkhattabi, Mona Awad January 2010
E-learning systems provide a promising solution as an information exchange channel. Improved technology could mean faster and easier access to information, but does not necessarily ensure the quality of this information. Therefore it is essential to develop valid and reliable methods of quality measurement and to carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and e-learning in particular, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is to propose a new information quality framework, with 14 information quality attributes grouped in three quality dimensions: intrinsic, contextual representation and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework using an empirical approach. We report our validation results on the basis of data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. However, it is difficult to measure information quality in an e-learning context because the concept of information quality is complex and the measurements are expected to be multidimensional in nature. Reliable measures need to be obtained in a systematic way, whilst considering the purpose of the measurement. Therefore, we start by adopting a Goal-Question-Metric (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. We then define an assessment model and measurement scheme, based on a multi-element analysis technique. The obtained results can be considered promising and positive, and reveal that the framework and assessment scheme could give good predictions of information quality within an e-learning context. This research makes a novel contribution as it proposes a solution to the problems arising from the absence of consensus regarding evaluation standards and methods for measuring information quality within an e-learning context. It also anticipates the feasibility of taking advantage of web mining techniques to automate the retrieval of the information required for quality measurement. The assessment model is useful to e-learning system designers, providers and users, as it gives a comprehensive indication of the quality of information in such systems, and also facilitates evaluation and allows comparison and analysis of information quality.
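As a small, hedged illustration of a GQM-style assessment over quality dimensions, the sketch below groups example attributes into the three dimensions named in the abstract and averages ratings per dimension; the attribute list and aggregation rule are assumptions, not the validated scheme from the thesis.

```python
# Sketch of a GQM-style assessment: example quality attributes grouped into the
# three dimensions named in the abstract, each rated 1-5. The attribute subset
# and the averaging rule are illustrative assumptions, not the thesis's scheme.
FRAMEWORK = {
    "intrinsic": ["accuracy", "objectivity", "believability"],
    "contextual representation": ["relevance", "completeness", "conciseness"],
    "accessibility": ["availability", "response time", "security"],
}

def dimension_scores(ratings):
    """Average the attribute ratings within each quality dimension."""
    return {
        dim: sum(ratings[a] for a in attrs) / len(attrs)
        for dim, attrs in FRAMEWORK.items()
    }

ratings = {a: 4.0 for attrs in FRAMEWORK.values() for a in attrs}
ratings["completeness"] = 2.5  # e.g. learners find the course material incomplete
print(dimension_scores(ratings))
```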
8

Visual Quality Metrics Resulting from Dynamic Corneal Tear Film Topography

Solem, Cameron Cole January 2017
The visual quality effects of the dynamic behavior of the tear film have been determined through measurements acquired with a high-resolution Twyman-Green interferometer. The base shape of the eye has been removed to isolate the aberrations induced by the tear film. The measured tear film was then combined with a typical human eye model to simulate visual performance. Fourier theory has been implemented to calculate the incoherent point spread function, the modulation transfer function, and the subjective quality factor for this system. Analysis software has been developed for ease of automation on large data sets, and output movies have been made that display these visual quality metrics alongside the tear film. Post-processing software was written to identify and eliminate bad frames. As a whole, this software creates the potential for increased intuition about the connection between blinks, tear film dynamics, and visual quality.
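The Fourier-theory step described here is standard and can be sketched directly: the incoherent PSF is the squared magnitude of the Fourier transform of the generalized pupil function, and the MTF is the normalized magnitude of the Fourier transform of the PSF. The pupil sampling and the synthetic aberration below stand in for the measured tear-film data.

```python
# Minimal Fourier-optics sketch: incoherent PSF and MTF from a wavefront error map.
# The pupil sampling and the synthetic aberration are placeholders for the
# tear-film surface data measured by the interferometer.
import numpy as np

N, pupil_radius = 256, 1.0
x = np.linspace(-2, 2, N)
X, Y = np.meshgrid(x, x)
in_pupil = X**2 + Y**2 <= pupil_radius**2

wavefront = 0.05 * (X**2 + Y**2) * in_pupil        # synthetic aberration, in waves
pupil = in_pupil * np.exp(2j * np.pi * wavefront)  # generalized pupil function

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()                                   # incoherent point spread function

mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()                                   # modulation transfer function

print(psf.max(), mtf[N // 2, N // 2])              # PSF peak; MTF at zero frequency (= 1.0)
```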
9

Gestion des connaissances et externalisation informatique. Apports managériaux et techniques pour l'amélioration du processus de transition : Cas de l’externalisation informatique dans un EPST / Knowledge Management and IT Outsourcing. Managerial and technical inputs to improve the transition process

Grim-Yefsah, Malika 23 November 2012
This thesis addresses the problem of knowledge transfer during the transition process of an outsourced IT project in a French public scientific and technological institution (EPST). In particular: how can the knowledge built up by the members of an outgoing team over the lifetime of an outsourced project (past successes and failures, and assimilated, accumulated routines) be transferred efficiently to a new incoming team? We focus on this transition process because of its importance for the success of IT outsourcing, its complexity, its theoretical richness, and the lack of studies in this area. We chose to approach the problem through knowledge management. In the first part of the thesis, we build on the Goal-Question-Metric paradigm, which provides an approach for defining quality, to move from our operational need to the definition of robustness-evaluation metrics that use information from the analysis of the informal networks underlying the activities carried out in the business process. These metrics make it possible to assess part of the quality of a business process while taking into account the tacit knowledge of the actors in the transition process. In the second part of this research, we developed a method, building on the knowledge-capitalization approach and on theoretical mechanisms of knowledge transfer, together with a software tool to implement this knowledge transfer process.
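As a hedged stand-in for the network-based robustness metrics described above, the sketch below computes two simple structural indicators of an informal knowledge-sharing network; the graph and the choice of indicators are illustrative assumptions, not the metrics defined in the thesis.

```python
# Illustrative stand-in for network-based robustness evaluation: simple structural
# indicators of the informal network behind a business process. The graph, the
# indicators and their interpretation are assumptions, not the thesis's metrics.
import networkx as nx

# Edges: "who actually exchanges project knowledge with whom" in the outgoing team.
G = nx.Graph([
    ("analyst", "lead_dev"), ("lead_dev", "dba"),
    ("lead_dev", "ops"), ("analyst", "ops"),
])

density = nx.density(G)                         # how well-connected the team is
articulation = list(nx.articulation_points(G))  # people whose departure splits the network

# A process that depends on articulation points is fragile during the transition:
# their tacit knowledge would need to be transferred first.
print(f"density={density:.2f}, single points of failure={articulation}")
```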
10

Systems Modeling and Modularity Assessment for Embedded Computer Control Applications

Chen, Dejiu January 2004
The development of embedded computer control systems (ECS) requires a synergetic integration of heterogeneous technologies and multiple engineering disciplines. With an increasing amount of functionality and expectations of high product quality, short time-to-market, and low cost, success in complexity control and built-in flexibility turns out to be one of the major competitive edges for many ECS products. For this reason, modeling and modularity assessment constitute two critical subjects of ECS engineering. In the development of ECS, model-based design is currently being exploited in most of the sub-system engineering activities. However, the lack of support for formalization and systematization associated with the overall systems modeling leads to problems in comprehension, cross-domain communication, and integration of technologies and engineering activities. In particular, design changes and exploitation of "components" are often risky due to the inability to characterize components' properties and their system-wide contexts. Furthermore, the lack of engineering theories for modularity assessment in the context of ECS makes it difficult to identify parameters of concern and to perform early system optimization. This thesis aims to provide a more complete basis for the engineering of ECS in the areas of systems modeling and modularization. It provides solution domain models for embedded computer control systems and their software subsystems. These meta-models describe the key system aspects, design levels, components, component properties and relationships with ECS-specific semantics. By constituting a common basis for abstracting and relating different concerns, these models also help to provide better support for obtaining holistic system views and for incorporating useful technologies from other engineering and research communities, such as to improve the process and to perform system optimization. Further, a modeling framework is derived, aiming to provide a perspective on the modeling aspect of ECS development and to codify important modeling concepts and patterns. In order to extend the scope of engineering analysis to cover flexibility-related attributes and multi-attribute tradeoffs, this thesis also provides a metrics system for quantifying component dependencies that are inherent in the functional solutions. Such dependencies are considered key factors affecting complexity control, concurrent engineering, and flexibility. The metrics system targets early system-level design and takes into account several domain-specific features such as replication and timing accuracy. Keywords: Domain-Specific Architectures, Model-based System Design, Software Modularization and Components, Quality Metrics.
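As a generic, hedged illustration of quantifying component dependencies, the sketch below scores components of a hypothetical embedded design by weighted dependencies, penalizing those on a timing-critical path; the components, weights and penalty factor are invented, and the thesis's actual metrics system is richer than this.

```python
# Generic illustration of quantifying component dependencies in an embedded
# control design. The components, dependency weights and the "timing-critical"
# factor are invented placeholders, not the thesis's metrics system.
from collections import defaultdict

# (from, to, weight): weight could reflect shared signals, data size, or call rate.
dependencies = [
    ("sensor_io", "controller", 3),
    ("controller", "actuator_io", 2),
    ("controller", "diagnostics", 1),
    ("diagnostics", "logger", 1),
]
timing_critical = {"sensor_io", "controller", "actuator_io"}

coupling = defaultdict(float)
for src, dst, weight in dependencies:
    # Dependencies on the timing-critical path are penalized more heavily,
    # since changes there ripple into end-to-end timing accuracy.
    factor = 2.0 if src in timing_critical and dst in timing_critical else 1.0
    coupling[src] += weight * factor
    coupling[dst] += weight * factor

for component, score in sorted(coupling.items(), key=lambda kv: -kv[1]):
    print(f"{component:12s} {score:4.1f}")
```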
