Une méthode pour l'évaluation de la qualité des images 3D stéréoscopiques. / The objective and subjective quality of 3D images

Vlad, Raluca Ioana 02 December 2013
In a context of ever-growing interest in stereoscopic systems, but with no reproducible, standardized method for estimating their quality, this work contributes to a better understanding of the human perception and judgment mechanisms underlying the multidimensional concept of stereoscopic image quality. To this end, several tools were used: an adapted framework was proposed to structure the process of stereoscopic image quality assessment, a stereoscopic experimental system was set up in our laboratory to run various tests, three stereoscopic image databases with precisely controlled configurations were created, and several experimental studies were carried out on these collections. The large amount of data obtained from these experiments was used to build a first mathematical model that explains the overall percept of stereoscopic quality as a function of the physical parameters of the images under study.
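As a rough illustration of the kind of model described in this abstract (not the author's actual formulation), the sketch below fits a toy linear model relating hypothetical physical parameters of stereo pairs to mean opinion scores; the parameter names and data are invented for the example.

```python
# Illustrative sketch only: a toy linear model relating hypothetical physical
# parameters of stereoscopic images to subjective quality scores (MOS).
# Feature names and data are invented, not taken from the thesis.
import numpy as np
from numpy.linalg import lstsq

# Each row: [disparity_magnitude, blur_level, compression_rate], one per stimulus.
X = np.array([
    [0.2, 0.0, 0.1],
    [0.5, 0.1, 0.3],
    [0.8, 0.4, 0.6],
    [1.0, 0.7, 0.9],
])
mos = np.array([4.5, 3.9, 2.8, 1.7])  # mean opinion scores from a subjective test

# Fit MOS ~ w0 + w1*disparity + w2*blur + w3*compression by least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coeffs, *_ = lstsq(A, mos, rcond=None)
print("intercept and weights:", coeffs)

# Predict the quality of a new (hypothetical) stimulus.
new = np.array([1.0, 0.4, 0.2, 0.5])
print("predicted MOS:", new @ coeffs)
```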

Aplicação de métricas de software na predição de características físicas de software embarcado / Application of software quality metrics to predict physical characteristics of embedded systems

Corrêa, Ulisses Brisolara January 2011
The complexity of embedded devices poses new challenges to embedded software development, in addition to the traditional physical constraints. The evaluation of embedded software quality and of its impact on these physical properties therefore becomes increasingly relevant. Concepts such as reuse, abstraction, cohesion, and coupling, among other software attributes, have long been used as quality metrics in the software engineering domain, but they have not been used in the embedded software domain. In embedded systems development, a different set of tools is used to estimate physical properties such as power consumption, memory footprint, and performance; these tools usually require costly synthesis-and-simulation design cycles. For today's complex embedded devices, designers must rely on tools that support design space exploration at the highest possible abstraction levels, identifying the solution that represents the best design strategy in terms of software quality while simultaneously meeting the physical requirements. This work presents an analysis of the correlation between software quality metrics, which can be extracted before the system is synthesized, and physical metrics of the embedded software. Using a neural network, we investigate the use of these correlations to predict the impact that a given modification to the software will have on its physical metrics. This estimate can be used to guide design decisions towards improving the physical properties of embedded systems while maintaining an adequate trade-off with respect to software quality.
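The following sketch illustrates, with invented metric names and data, the general idea of training a small neural network to map pre-synthesis code metrics onto a physical estimate; it is not the model or dataset used in the thesis.

```python
# Illustrative sketch only: training a small neural network to map source-code
# quality metrics onto a physical property (here, energy). Metric names and
# training data are invented for the example.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows: software versions; columns: [coupling, cohesion, depth_of_inheritance].
code_metrics = np.array([
    [12, 0.80, 2],
    [18, 0.65, 3],
    [25, 0.50, 4],
    [30, 0.40, 5],
    [35, 0.35, 6],
    [40, 0.30, 6],
])
energy_mj = np.array([10.2, 13.5, 18.1, 22.4, 26.0, 29.3])  # measured after synthesis

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(code_metrics, energy_mj)

# Estimate the physical impact of a candidate refactoring before synthesis.
candidate = np.array([[22, 0.55, 4]])
print("predicted energy (mJ):", model.predict(candidate)[0])
```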

Une approche pragmatique pour mesurer la qualité des applications à base de composants logiciels / A pragmatic approach to measure the quality of Component–Based Software Applications

Hamza, Salma 19 December 2014
In recent years, many companies have introduced component-oriented technology into their software development. The component paradigm, which promotes the assembly of autonomous, reusable software bricks, is indeed an attractive way to reduce development and maintenance costs while increasing application quality. In this paradigm, as in any other, architects and developers must be able to evaluate the quality of what they produce as early as possible, in particular throughout the design and coding process. Code metrics are indispensable tools for doing so: to a certain extent, they make it possible to predict the "external" quality of a component or architecture while it is being coded. Various metrics have been proposed in the literature specifically for the component world. Unfortunately, none of them has been seriously studied with respect to completeness, cohesion, and, above all, its ability to predict the external quality of the developed artifacts. Worse still, the lack of support for these metrics in commercial code-analysis tools makes their industrial use impossible. As things stand, a quantitative, a priori prediction of the quality of component-based developments is impossible, and there is a significant risk of cost overruns caused by the late discovery of defects. In this thesis, I propose a pragmatic answer to this problem. Starting from the observation that a large share of industrial frameworks rely on object-oriented technology, I studied the possibility of using some "classical" code metrics, not specific to the component world, to evaluate component-based applications. These metrics have the advantage of being well defined, well known, well supported by tools, and of having undergone numerous empirical validations of their predictive power for imperative and object-oriented code. Among the existing metrics, I identified a subset which, when interpreted and applied at appropriate levels of granularity, can give indications of how well developers and architects respect the main principles of software engineering, in particular coupling and cohesion; these two principles are at the very origin of the component paradigm. The subset also had to represent every facet of a component-oriented application: the internal view of a component, its interface, and the compositional view through the architecture. This hand-picked suite of metrics was then applied to 10 open-source OSGi applications in order to verify, by studying its distributions, that it actually conveys relevant information for the component world. I then built predictive models of external quality properties (reusability, fault-proneness, etc.) from these internal metrics; building such models and analysing their power is the only way to empirically validate the usefulness of the proposed metrics, and it also allows their "power" to be compared with models from the literature that target the imperative and/or object-oriented world. I chose to build models that predict the existence and the frequency of defects and bugs. To do so, I relied on external data from the change and bug history of a panel of 6 large, mature OSGi projects, each with several years of maintenance. Several statistical tools were used to build the models, notably principal component analysis and multivariate logistic regression. The study showed that these models can predict 80% to 92% of frequently buggy components, with recall ranging from 89% to 98% depending on the project evaluated. Models predicting the mere existence of a defect are less reliable than the first kind of model. This thesis thus confirms the practical value of using common, well-tooled metrics to measure application quality as early as possible in the component world.
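A minimal sketch of the modelling pipeline named in the abstract (principal component analysis followed by logistic regression) is shown below; the component metrics and bug labels are synthetic stand-ins, not the OSGi data used in the thesis.

```python
# Illustrative sketch only: a PCA + logistic-regression pipeline classifying
# components as frequently buggy or not from internal code metrics.
# The metric values and labels are randomly generated for the example.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 60 components x 5 metrics (e.g. coupling, cohesion, size, complexity, fan-out).
X = rng.normal(size=(60, 5))
# Toy ground truth: components with high "coupling" + "complexity" tend to be buggy.
y = ((X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=60)) > 0.5).astype(int)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),      # reduce correlated metrics to principal components
    LogisticRegression(),     # multivariate logistic regression on those components
)
model.fit(X, y)

pred = model.predict(X)
print("precision:", precision_score(y, pred))
print("recall:   ", recall_score(y, pred))
```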

Datová kvalita v prostředí databáze hospodářských informací / Data quality in the business information database environment

Cabalka, Martin January 2015
This master's thesis deals with the choice of data quality dimensions suitable for a particular business information database, and it proposes and implements metrics for assessing them. The aim is to define the term data quality in the context of a business information database and to identify possible ways of measuring it. Based on the dimensions worth observing, a list of metrics was created and then implemented in the SQL query language or, alternatively, in its procedural extension, Transact-SQL. The metrics were also tested on real data, and the results are accompanied by commentary. The main contribution of this work is its comprehensive treatment of the data quality topic, from the theoretical definition of the term to the concrete implementation of individual metrics. Finally, the study outlines a number of theoretical and practical directions in which this topic can be further researched.
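The thesis implements its metrics in SQL and Transact-SQL; the sketch below merely illustrates two typical data-quality metrics (completeness and format validity) in Python on an invented toy table.

```python
# Illustrative sketch only: completeness and format-validity metrics computed
# with pandas on a toy business-information table. Column names and the
# validation rules are invented for the example.
import pandas as pd

companies = pd.DataFrame({
    "name":  ["Acme s.r.o.", "Beta a.s.", None, "Delta s.r.o."],
    "ico":   ["12345678", "8765432", "11223344", None],   # company ID, expected 8 digits
    "email": ["info@acme.cz", "sales@beta", None, "hq@delta.cz"],
})

def completeness(series: pd.Series) -> float:
    """Share of non-missing values in a column."""
    return series.notna().mean()

def validity(series: pd.Series, pattern: str) -> float:
    """Share of non-missing values matching a regular expression."""
    non_missing = series.dropna()
    return non_missing.str.fullmatch(pattern).mean() if len(non_missing) else 1.0

for col in companies.columns:
    print(f"{col}: completeness = {completeness(companies[col]):.2f}")

print("ico validity  :", validity(companies["ico"], r"\d{8}"))
print("email validity:", validity(companies["email"], r"[^@\s]+@[^@\s]+\.[^@\s]+"))
```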

Quality Assessment for Halftone Images

Elmèr, Johnny January 2023
Halftones are reproductions of images created through the process of halftoning. The goal of a halftone is to create a replica that, viewed at a distance, looks nearly identical to the original. Several methods exist for producing halftones, three of which are error diffusion, DBS, and IMCDP. To check whether a halftone would be perceived as high quality there are two options: subjective image quality assessments (IQAs) and objective image quality (IQ) measurements. Since subjective IQAs often demand too much time and too many resources, objective IQ measurements are preferred; but because there is no standard for which metric should be used when working with halftones, the question remains which one to use. For this project, both online and on-location subjective tests were performed in which observers ranked halftoned images by perceived image quality; the images were chosen specifically to cover a wide range of characteristics such as brightness and level of detail. The results of these tests were compiled and compared with those of eight objective metrics: MSE, PSNR, S-CIELAB, SSIM, BlurMetric, BRISQUE, NIQE, and PIQE. The subjective and objective results were compared using Z-scores, which showed that SSIM and NIQE were the objective metrics that most closely matched the subjective results. The online and on-location subjective tests differed greatly for dark colour halftones and for colour halftones containing smooth transitions, with smaller variation for the other categories. What did not change was the clear preference for DBS by both the observers and the objective IQ metrics, making it the best of the three methods tested. (The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.)
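As a hedged illustration of the objective side of such a study, the sketch below computes two of the metrics listed above (PSNR and SSIM) with scikit-image and z-scores a set of scores; the thresholded "halftone" and the extra score values are placeholders, not results from the thesis.

```python
# Illustrative sketch only: PSNR and SSIM for a crude binary "halftone" versus
# its original, plus z-scoring to put scores on a comparable scale.
# The threshold image stands in for a real error-diffusion/DBS/IMCDP halftone.
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = img_as_float(data.camera())          # grayscale test image in [0, 1]
halftone = (original > 0.5).astype(float)       # crude binary stand-in for a halftone

psnr = peak_signal_noise_ratio(original, halftone, data_range=1.0)
ssim = structural_similarity(original, halftone, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")

# Z-scores make objective scores comparable with ranked subjective results.
scores = np.array([psnr, 25.1, 18.7, 30.2])     # e.g. one metric over four halftones
z = (scores - scores.mean()) / scores.std()
print("z-scores:", np.round(z, 2))
```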

Protection de vidéo comprimée par chiffrement sélectif réduit / Protection of compressed video with reduced selective encryption

Dubois, Loïc 15 November 2013
Nowadays, videos and images have become a very important means of communication. The acquisition, transmission, storage, and display of this visual data, whether for professional or private purposes, are growing exponentially, and the confidentiality of the content has become a major problem. Selective encryption is one answer: it ensures visual confidentiality by encrypting only part of the data, while preserving the initial bit-rate and remaining compliant with the video standard's syntax. This thesis proposes several selective encryption methods for the H.264/AVC video standard. Methods for reducing the amount of selective encryption, based on the architecture of H.264/AVC, are studied in order to find the minimum encryption ratio that is still sufficient to ensure visual confidentiality. Objective quality measures are used to assess the visual confidentiality of encrypted videos, and a new quality measure is proposed to analyse video flicker over time. Finally, a reduced selective encryption method regulated by quality measures is studied, so that the encryption can be adapted to a target level of visual confidentiality.
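The sketch below shows a naive frame-difference flicker indicator, only to illustrate the general idea of a temporal quality measure; it is not the measure proposed in the thesis.

```python
# Illustrative sketch only: a naive temporal-flicker indicator based on the mean
# absolute luminance difference between consecutive frames. Not the thesis's
# measure, just a toy example of a temporal quality score.
import numpy as np

def flicker_score(frames: np.ndarray) -> float:
    """frames: array of shape (num_frames, height, width), luminance in [0, 255].
    Returns the average per-pixel absolute change between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return float(diffs.mean())

# Synthetic 10-frame clip: a static ramp plus random noise simulating encryption flicker.
rng = np.random.default_rng(1)
base = np.tile(np.linspace(0, 255, 64), (64, 1))
clip = np.stack([base + rng.normal(scale=5.0, size=base.shape) for _ in range(10)])

print(f"flicker score: {flicker_score(clip):.2f}")  # higher means more temporal flicker
```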

A Goal-Driven Methodology for Developing Health Care Quality Metrics

Villar Corrales, Carlos 29 March 2011
Defining metrics capable of reporting on quality issues is a difficult task in the health care sector. This thesis proposes a goal-driven methodology for the development, collection, and analysis of health care quality metrics that expose, in a quantifiable way, the progress of measurement goals stated by interested stakeholders. In other words, the methodology produces reports containing metrics that turn health care data into understandable information. The resulting Health Care Goal Question Metric (HC-GQM) methodology is based on the Goal Question Metric (GQM) approach, a methodology originally created for the software development industry and adapted here to the context and specificities of the health care sector. HC-GQM benefits from a double-loop validation process in which the methodology is first implemented, then analysed, and finally improved. The validation takes place in the context of adverse event management and incident reporting initiatives at a Canadian teaching hospital, where HC-GQM provides the stakeholders involved with a set of meaningful metrics and reports on the occurrence of adverse events and incidents. The results of a survey suggest that the users of HC-GQM found it beneficial and would use it again.
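As an illustration of how a GQM-style decomposition can be represented, the sketch below models a goal broken down into questions and metrics; the example goal, question, and metric are invented and are not taken from HC-GQM.

```python
# Illustrative sketch only: a minimal Goal-Question-Metric structure, showing how
# a measurement goal decomposes into questions and metrics. The example texts
# are invented, not taken from the HC-GQM case study.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    description: str

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str        # e.g. "reduce"
    issue: str          # e.g. "frequency"
    object: str         # e.g. "medication-related adverse events"
    viewpoint: str      # e.g. "patient-safety officer"
    questions: List[Question] = field(default_factory=list)

goal = Goal(
    purpose="reduce",
    issue="frequency",
    object="medication-related adverse events",
    viewpoint="patient-safety officer",
    questions=[
        Question(
            text="How often are adverse events reported per month?",
            metrics=[Metric("reports_per_month",
                            "Count of incident reports per calendar month")],
        )
    ],
)

for q in goal.questions:
    print(q.text, "->", [m.name for m in q.metrics])
```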

Image Dynamic Range Enhancement

Ozyurek, Serkan 01 September 2011
In this thesis, image dynamic range enhancement methods are studied in order to solve the problem of representing high dynamic range scenes with low dynamic range images. For this purpose, two main approaches, high dynamic range imaging and exposure fusion, are studied. A more detailed analysis of exposure fusion algorithms is carried out, because in exposure fusion the whole enhancement process is performed in the low dynamic range domain and no prior information about the input images is needed. Both objective and subjective quality metrics are used to evaluate the performance of the exposure fusion algorithms, and the correlation between the objective quality metrics and the subjective ratings is studied in the experiments.
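The sketch below shows a heavily simplified exposure fusion that blends a bracketed stack using per-pixel well-exposedness weights; it is a toy single-scale illustration, not one of the algorithms evaluated in the thesis.

```python
# Illustrative sketch only: simplified exposure fusion that blends a bracketed
# exposure stack with per-pixel "well-exposedness" weights. Real algorithms
# (e.g. multi-scale fusion) are more elaborate; this just shows the idea.
import numpy as np

def well_exposedness(img: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """Weight pixels by how close their intensity is to mid-gray (0.5)."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(stack: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """stack: (num_exposures, H, W) grayscale images in [0, 1]."""
    weights = well_exposedness(stack)
    weights /= weights.sum(axis=0) + eps      # normalize weights per pixel
    return (weights * stack).sum(axis=0)      # weighted average of the exposures

# Synthetic bracketed stack: the same scene captured dark, normal, and bright.
scene = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
stack = np.clip(np.stack([scene * 0.3, scene, scene * 1.8]), 0.0, 1.0)

fused = fuse(stack)
print("fused image range:", float(fused.min()), "to", float(fused.max()))
```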
