About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Quality Assessment Methodologies of Post-Processed Images / Metodologie hodnocení kvality obrazu po post-processingu

Krasula, Lukas 20 January 2017
The vast majority of the work done in the field of quality assessment during the last two decades has been dedicated to quantifying the distortion caused by the processing of an image. The original image was therefore always considered to be of the best possible quality, and in this kind of scenario the notion of quality can be expressed as the fidelity of the processed version to the reference. However, some post-processing algorithms make it possible to adjust the aesthetic properties of an image in order to enhance the perceived quality. In such cases, a best-possible-quality image is not available and the classical fidelity approach is no longer applicable. The goal of this thesis is to revise quality assessment methodologies to cope with the challenges that post-processing brings to quality evaluation. The post-processing algorithms relevant to the topic of this thesis come from two groups: image enhancement, represented by image sharpening, and dynamic range compression (also known as tone mapping). Both subjective and objective quality assessment methodologies applicable in these areas are studied, and suitable solutions that outperform the state-of-the-art methods are proposed. Moreover, a novel methodology for evaluating the performance of objective quality metrics, overcoming the shortcomings of the currently available methods, is presented.
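
To make the fidelity notion above concrete, here is a minimal sketch (not taken from the thesis) of a classic full-reference metric, PSNR, in Python with NumPy; the function names and test data are illustrative only. The point is that such a metric is defined only with respect to a pristine reference, which is exactly what an enhanced image lacks.

```python
import numpy as np

def psnr(reference: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Full-reference fidelity metric: only defined when a pristine reference exists."""
    mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Fidelity scenario: the original is treated as the best possible quality.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64), dtype=np.uint8)
distorted = np.clip(original + rng.normal(0, 5, original.shape), 0, 255)
print(psnr(original, distorted))

# Enhancement scenario: a sharpened or tone-mapped image may look better than
# the original, yet PSNR can only report deviation from the reference; it has
# no notion of "better than the reference", which is the gap the thesis addresses.
```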
2

Tone Mapping by Interactive Evolution

Chisholm, Stephen B 08 October 2009
Tone mapping is a computational task of significance in the context of displaying high dynamic range images on low dynamic range devices. While a number of tone mapping algorithms have been proposed and are in common use, there is no single operator that yields optimal results under all conditions. Moreover, obtaining satisfactory mappings often requires manual tweaking of parameters. This thesis proposes interactive evolution as a computational tool for tone mapping. An evolution strategy is proposed that blends the results of several tone mapping operators while simultaneously adapting their parameters. In addition, the results are adapted so that approximately uniform perceptual distances between offspring candidate solutions and the parent are ensured. The introduction of this perceptually based step size adaptation technique improves control over the variability between newly generated offspring compared to parameter space step size adaptation.
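
A rough sketch of the blending idea described above, assuming nothing about the author's actual implementation: candidate solutions are weight vectors over a couple of placeholder global operators, offspring are produced by mutating the weights, and the user's selection (simulated here by a random pick) plays the role of the fitness function.

```python
import numpy as np

# Two simple global tone mapping operators, used purely as placeholders.
def tmo_gamma(hdr, gamma=2.2):
    return (hdr / hdr.max()) ** (1.0 / gamma)

def tmo_reinhard(hdr):
    l = hdr / hdr.max()
    return l / (1.0 + l)

OPERATORS = [tmo_gamma, tmo_reinhard]

def blend(hdr, weights):
    """A candidate solution is a convex combination of the operators' outputs."""
    w = np.clip(weights, 1e-3, None)
    w = w / w.sum()
    return sum(wi * op(hdr) for wi, op in zip(w, OPERATORS))

def mutate(weights, step, rng):
    """Gaussian mutation of the blending weights (parameter adaptation omitted)."""
    return np.clip(weights + rng.normal(0.0, step, size=len(weights)), 1e-3, None)

rng = np.random.default_rng(1)
hdr = rng.random((32, 32)) * 1000.0               # stand-in HDR luminance map
parent = np.ones(len(OPERATORS))
for generation in range(3):
    offspring = [mutate(parent, 0.2, rng) for _ in range(4)]
    images = [blend(hdr, w) for w in offspring]   # these would be shown to the user
    parent = offspring[rng.integers(len(offspring))]  # the user's pick, simulated
```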
3

HDR och Tone mapping i automatiserade tullsystem / HDR and Tone mapping in automated toll systems

Larsson, Kristian, Larsson, Michael January 2013
This report describes how HDR (High Dynamic Range) images can be created and used in combination with tone mapping. The work was carried out together with Kapsch TrafficCom AB in Jönköping. The objectives of the project were to evaluate and investigate the effects that HDR and tone mapping have on images, to evaluate whether the technology may lead to improvements in Kapsch's systems, and to construct a program able to handle some form of tone mapping or HDR algorithm. The report answers the following questions: What effects do HDR and tone mapping algorithms have on images? Can HDR technology provide better data in Kapsch's systems? The research method used is action research, meaning the authors investigated the technology by reading documentation and by testing different algorithms to see what results they give. The report describes some of the tests made to determine whether the technology is appropriate for Kapsch's system. Two smaller reports written by the authors document parts of the work: the first describes the work with different camera settings for creating images with HDR quality, and the second describes the differences between tone mapping algorithms and different file formats. Both are included as appendices. The program created by the authors uses larger libraries with standard functions for opening JPEG images, namely MFC and GDI+. It is developed for a Windows environment and handles functions such as sharpening with unsharp mask, colour space conversion and tone mapping.
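
The program described in the report was written against MFC and GDI+ for Windows; the following NumPy sketch only illustrates, under simplified assumptions, the three operations the abstract mentions: merging bracketed exposures into a radiance estimate, tone mapping back to display range, and sharpening with an unsharp mask (grayscale only, naive weighting).

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Naive HDR merge: weighted average of per-pixel radiance estimates
    (pixel value / exposure time), trusting mid-range pixels the most."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        x = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * x - 1.0)        # hat-shaped weighting function
        acc += w * x / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

def tone_map(radiance):
    """Simple global (Reinhard-style) operator back to an 8-bit display range."""
    l = radiance / radiance.max()
    return np.clip(510.0 * l / (1.0 + l), 0, 255).astype(np.uint8)

def unsharp_mask(img, amount=1.0):
    """Sharpening as mentioned in the report: original + amount * (original - blur)."""
    f = img.astype(np.float64)
    blur = np.zeros_like(f)
    pad = np.pad(f, 1, mode="edge")
    for dy in range(3):
        for dx in range(3):
            blur += pad[dy:dy + f.shape[0], dx:dx + f.shape[1]] / 9.0
    return np.clip(f + amount * (f - blur), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
scene = rng.random((48, 48))                   # stand-in scene, values in [0, 1)
times = [0.5, 1.0, 2.0]                        # exposure bracket
brackets = [np.clip(scene * 255.0 * t, 0, 255).astype(np.uint8) for t in times]
ldr = unsharp_mask(tone_map(merge_exposures(brackets, times)))
```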
4

Contrôle adaptatif local dans un capteur de vision CMOS / Local adaptive control in a CMOS vision sensor

Abbass, Hassan 04 July 2014
The technological progress of recent years has enabled imagers to reach very high resolutions, which makes images more detailed and rich in information. On the other hand, the limited number of bits available after analogue-to-digital conversion can drastically affect image quality. To maintain the quality of the image at the output of the acquisition system, the luminous information should be (1) encoded on a large number of bits and (2) maintained throughout the processing flow, so as to avoid noise interference and the generation of artifacts at the system output. However, digital processing of each pixel would be energy consuming and would occupy more silicon area. The goal of this thesis is to study, design and implement several image processing functions and their architectures using analog and mixed-signal electronics. Implementing these functions in the analog domain shifts the analog-to-digital conversion to a later stage, which preserves maximum precision in the processed information. The proposed functions and architectures improve the operating dynamic range of standard (integration-based) CMOS imagers using (1) variable integration time techniques and (2) local tone mapping that mimics the human visual system. The operating principles, MATLAB emulations, electrical design and simulations, as well as experimental results of the proposed techniques, are presented in detail in this manuscript.
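
The thesis implements local adaptation in analog circuitry inside the sensor; as a purely software illustration (in the spirit of the MATLAB emulations mentioned above, but not taken from them), the sketch below shows the local tone mapping idea: each pixel's response is normalized by its local mean, which compresses a wide luminance range while keeping local contrast.

```python
import numpy as np

def box_blur(img, radius):
    """Local mean via a box filter; a software stand-in for the neighbourhood
    averaging that the thesis performs with analog circuitry on-chip."""
    size = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def local_tone_map(radiance, radius=4):
    """Naka-Rushton-style local adaptation: response = I / (I + local mean),
    compressing a wide dynamic range while preserving local contrast."""
    local_mean = box_blur(radiance, radius)
    return radiance / (radiance + local_mean + 1e-9)

rng = np.random.default_rng(2)
scene = rng.random((64, 64)) * 1e4          # stand-in wide-range scene luminance
display = (255.0 * local_tone_map(scene)).astype(np.uint8)
```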
5

Survey and Evaluation of Tone Mapping Operators for HDR-video

Eilertsen, Gabriel, Unger, Jonas, Wanat, Robert, Mantiuk, Rafal January 2013
This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing variations of the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have made it possible to capture HDR video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing. Our survey is meant to summarize the state of the art in video tone mapping and, as exemplified in Figure 1 (right), analyze differences in the operators' response to temporal variations. In contrast to other studies, we evaluate TMO performance according to each operator's actual intent, such as producing the image that best resembles the real-world scene, that subjectively looks best to the viewer, or that fulfills a certain artistic requirement. The unique strength of this work is that we use real high-quality HDR video sequences, see Figure 1 (left), as opposed to synthetic images or footage generated from still HDR images.
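
As a hedged illustration of why video tone mapping differs from still-image tone mapping (this is not one of the operators surveyed), the sketch below drives a simple global operator with a temporally filtered adaptation value rather than per-frame statistics, which is the kind of temporal model the survey focuses on; the parameter names are invented for the example.

```python
import numpy as np

def log_average(lum):
    """Log-average luminance, the usual adaptation statistic for global operators."""
    return float(np.exp(np.mean(np.log(lum + 1e-6))))

def tone_map_frame(lum, key, adapted):
    """Photographic-style global operator driven by an externally supplied
    adaptation value instead of the current frame's own statistics."""
    scaled = key * lum / adapted
    return scaled / (1.0 + scaled)

def tone_map_video(frames, key=0.18, alpha=0.1):
    """alpha controls leaky temporal integration of the adaptation value;
    alpha = 1 falls back to per-frame (still image) behaviour and may flicker."""
    adapted = log_average(frames[0])
    out = []
    for lum in frames:
        adapted = (1.0 - alpha) * adapted + alpha * log_average(lum)
        out.append(tone_map_frame(lum, key, adapted))
    return out

rng = np.random.default_rng(3)
frames = [rng.random((32, 32)) * (100.0 + 50.0 * np.sin(t)) for t in range(10)]
ldr_frames = tone_map_video(frames)
```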
6

Time lapse HDR: time lapse photography with high dynamic range images

Clark, Brian Sean 29 August 2005
In this thesis, I present an approach to a pipeline for time lapse photography using conventional digital images converted to HDR (High Dynamic Range) images (rather than conventional digital or film exposures). Using this method, it is possible to capture a greater level of detail and a different look than one would get from a conventional time lapse image sequence. With HDR images properly tone-mapped for display on standard devices, information in shadows and hot spots is not lost, and certain details are enhanced.
7

Pokročilý prohlížeč HDR obrazů / Advanced HDR image viewer

Wirth, Michal January 2017
The primary purpose of this thesis is to determine the criteria for a high dynamic range (HDR) image viewer emphasized by computer graphics artists and other users who work with HDR images produced by physically-based renderers on a daily basis. An overview of already existing solutions is also presented. Based on both, a new HDR viewer is designed and implemented with an emphasis on memory and performance efficiency. For these purposes two alternative image data layouts, Array-of-Structures (AoS) and Structure-of-Arrays (SoA), are discussed, and their impact is measured on the speed of an algorithm for changing image saturation, selected as a representative part of the viewer's tone mapping process. It turned out that the latter layout allows the algorithm to run about 3 times faster or more under the conditions of the defined testing environment. The thesis has two main contributions. First, it gives the above users a tool that could help them when working with HDR images. Second, it indicates that there may be potential for significant speed-ups in implementations of tone mapping algorithms.
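
A small sketch of the two data layouts discussed above, written in Python/NumPy rather than the thesis's implementation language; the saturation adjustment mirrors the benchmark operation described, but the code is illustrative only, and the roughly 3x speed-up reported in the thesis applies to the compiled SoA implementation (where each channel stays contiguous for SIMD), not to this sketch.

```python
import numpy as np

n = 1_000_000  # number of pixels

# Array-of-Structures: interleaved RGB, one row per pixel, shape (n, 3).
aos = np.random.rand(n, 3).astype(np.float32)

# Structure-of-Arrays: one contiguous plane per colour channel.
soa = {c: np.ascontiguousarray(aos[:, i]) for i, c in enumerate("rgb")}

def adjust_saturation_aos(pixels, s):
    """Scale each pixel's chroma around its luma (Rec. 709 weights)."""
    luma = pixels @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    return luma[:, None] + s * (pixels - luma[:, None])

def adjust_saturation_soa(planes, s):
    luma = 0.2126 * planes["r"] + 0.7152 * planes["g"] + 0.0722 * planes["b"]
    return {c: luma + s * (planes[c] - luma) for c in "rgb"}

out_aos = adjust_saturation_aos(aos, 1.3)
out_soa = adjust_saturation_soa(soa, 1.3)
```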
8

A Psychophysical Evaluation of Inverse Tone Mapping Techniques.

Banterle, F., Ledda, P., Debattista, K., Bloj, Marina, Artussi, A., Chalmers, A. January 2009
In recent years inverse tone mapping techniques have been proposed for enhancing low dynamic range (LDR) content for a high dynamic range (HDR) experience on HDR displays, and for image-based lighting. In this paper, we present a psychophysical study to evaluate the performance of inverse (reverse) tone mapping algorithms. Some of these techniques are computationally expensive because they need to resolve quantization problems that can occur when expanding an LDR image. Even if they can be implemented efficiently in hardware, the computational cost can still be high. An alternative is to utilize less complex operators, although these may suffer in terms of accuracy. Our study investigates, firstly, whether a high level of complexity is needed for inverse tone mapping and, secondly, whether a correlation exists between image content and quality. Two main applications have been considered: visualization on an HDR monitor and image-based lighting.
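
For illustration only (this is not one of the operators evaluated in the paper), a minimal inverse tone mapping expansion might look as follows: linearize the 8-bit input, expand highlights with a power curve, and rescale to a target peak luminance; all parameter values here are arbitrary.

```python
import numpy as np

def simple_inverse_tone_map(ldr, peak_nits=1000.0, gamma=2.2, boost=2.0):
    """Expand an 8-bit LDR image to HDR luminance in three steps:
    1) undo the display gamma, 2) apply a power curve that stretches the
    highlights relative to the midtones, 3) rescale to the target peak."""
    linear = (ldr.astype(np.float64) / 255.0) ** gamma
    expanded = linear ** boost
    return peak_nits * expanded

rng = np.random.default_rng(4)
ldr = rng.integers(0, 256, (64, 64), dtype=np.uint8)
hdr = simple_inverse_tone_map(ldr)   # e.g. for an HDR monitor or image-based lighting
```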
9

Methods for improving the backward compatible High Dynamic Range compression / Méthodes pour améliorer la compression HDR (High Dynamic Range) rétro compatible

Gommelet, David 25 September 2018
In recent years, video content has evolved very quickly. Televisions (TVs) have moved to Ultra High Definition (UHD), High Frame Rate (HFR) or stereoscopy (3D), and the recent trend is towards High Dynamic Range (HDR). These new technologies allow the reproduction of much brighter images than current displays can show. Each of these improvements represents an increase in storage cost and therefore requires the creation of new, ever more efficient video compression standards. The majority of consumers are currently equipped with Standard Dynamic Range (SDR) displays that cannot handle HDR content. Consumers will only slowly renew their displays to HDR ones, so it is of great importance to deliver an HDR signal that can be decoded by both SDR and HDR displays. Such backward compatibility is provided by a tool called a Tone Mapping Operator (TMO), which transforms HDR content into an SDR version. In this thesis, we explore new methods to improve backward compatible HDR compression. First, we design a TMO that optimizes the performance of a scalable compression scheme in which a base layer and an enhancement layer are sent to reconstruct the SDR and HDR content. It is demonstrated that the optimal TMO depends only on the SDR base layer and that the minimization problem can be separated into two consecutive minimization steps. Based on these observations, we then propose another TMO designed to optimize the performance of compression schemes using only a base layer, but with an enhanced and more precise model. Both of these works optimize TMOs for still images. Thereafter, the thesis focuses on the optimization of video-specific TMOs. However, we demonstrate that using weighted prediction for the SDR compression is as good as, or even better than, using a temporally optimized TMO. Therefore, we propose a new weighted prediction algorithm and new weighted prediction modes to handle the large diversity of brightness variations in video sequences more efficiently.
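
A schematic sketch of the backward-compatible layered scheme described above, with a simple global log TMO standing in for the optimized TMOs derived in the thesis; in a real codec both layers would be encoded, and the inverse TMO parameter (here the HDR peak) would be signalled as metadata.

```python
import numpy as np

def tmo_log(hdr, peak=255.0):
    """Simple global log TMO producing the 8-bit SDR base layer."""
    return np.round(peak * np.log1p(hdr) / np.log1p(hdr.max())).astype(np.uint8)

def inverse_tmo_log(sdr, hdr_peak, peak=255.0):
    """Inverse of tmo_log; hdr_peak would be transmitted as metadata."""
    return np.expm1(sdr.astype(np.float64) / peak * np.log1p(hdr_peak))

rng = np.random.default_rng(5)
hdr = rng.random((64, 64)) * 4000.0            # stand-in HDR luminance

# Encoder side: SDR base layer plus an HDR enhancement (residual) layer.
base_layer = tmo_log(hdr)                      # what a legacy SDR display shows
prediction = inverse_tmo_log(base_layer, hdr.max())
enhancement_layer = hdr - prediction           # would itself be compressed

# Decoder side, HDR display: reconstruct HDR from both layers.
reconstructed = inverse_tmo_log(base_layer, hdr.max()) + enhancement_layer
assert np.allclose(reconstructed, hdr)
```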
