11

Example-guided image editing / Édition d'image guidée par exemple

Hristova, Hristina 20 October 2017 (has links)
The contributions of this thesis are divided into three main parts. In Part 1, we propose a local method that uses a GGM distribution to approximate image distributions by subdividing the images into groups of pixels, henceforth called clusters. The main idea is to determine which feature (color or luminance) is the more representative for a given image, and then to use this feature to subdivide the image into clusters. Four strategies for mapping the clusters of the input image to those of the target image are proposed. These strategies aim to produce photorealistic images whose style resembles that of the target image (in our case, the style of an image is defined in terms of color and lightness). We extend the color transfer principle to the simultaneous transfer of color and gradient. In order to describe the color and gradient distributions with a single distribution, we adopt the MGGD (multivariate generalized Gaussian distribution) model. We propose a new MGGD transformation for image processing applications such as multi-dimensional transfer of image features, color, etc. In addition, we adopt a more precise distribution model (the bounded Beta distribution) to represent color and lightness distributions. We propose a Beta distribution transformation that performs color transfer between images and proves more effective than transformations based on Gaussian distributions. In Part 2, we introduce a new method for creating HDR images from a pair of images, one taken with flash and the other without. Our method relies on a brightness function that simulates the camera response function and on a new chromatic adaptation transform (CAT), called the bi-local CAT, which reproduces the details of the flash image. This approach avoids the limitations inherent in classical HDR image creation methods. In Part 3, we exploit the potential of our bi-local CAT for various image editing applications such as de-noising, de-blurring, texture transfer, etc. We introduce a new guided filter in which we embed the bi-local CAT. / This thesis addresses three main topics from the domain of image processing, i.e. color transfer, high-dynamic-range (HDR) imaging and guidance-based image filtering. The first part of this thesis is dedicated to color transfer between input and target images. We adopt cluster-based techniques and apply Gaussian mixture models to carry out a more precise color transfer. In addition, we propose four new mapping policies to robustly portray the target style in terms of two key features: color and light. Furthermore, we exploit the properties of the multivariate generalized Gaussian distribution (MGGD) in order to transfer an ensemble of features between images simultaneously. The multi-feature transfer is carried out using our novel transformation of the MGGD. Despite the efficiency of the proposed MGGD transformation for multi-feature transfer, our experiments have shown that the bounded Beta distribution provides a much more precise model for the color and light distributions of images.
To exploit this property of the Beta distribution, we propose a new color transfer method, where we model the color and light distributions by the Beta distribution and introduce a novel transformation of the Beta distribution. The second part of this thesis focuses on HDR imaging. We introduce a method for automatic creation of HDR images from only two images - flash and non-flash images. We mimic the camera response function by a brightness function and we recover details from the flash image using our new chromatic adaptation transform (CAT), called bi-local CAT. That way, we efficiently recover the dynamic range of real-world scenes without compromising the quality of the HDR image (as our method is robust to misalignment). In the context of HDR image creation, the bi-local CAT recovers details from the flash image and removes flash shadows and reflections. In the last part of this thesis, we exploit the potential of the bi-local CAT for various image editing applications such as image de-noising, image de-blurring, texture transfer, etc. We propose a novel guidance-based filter in which we embed the bi-local CAT. The proposed filter performs as well as (and for certain applications even better than) state-of-the-art methods.
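To make the cluster-based transfer concrete, the sketch below implements a heavily simplified variant: clusters are matched by luminance rank (a stand-in for just one of the four proposed mapping policies) and per-cluster statistics are matched with a plain Gaussian moment transform rather than the thesis's MGGD/Beta transformations. All function names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def match_moments(src, tgt):
    # Shift/scale each channel so source mean/std match the target's;
    # a Gaussian stand-in for the MGGD/Beta transformations of the thesis.
    mu_s, sd_s = src.mean(axis=0), src.std(axis=0) + 1e-6
    mu_t, sd_t = tgt.mean(axis=0), tgt.std(axis=0)
    return (src - mu_s) / sd_s * sd_t + mu_t

def cluster_color_transfer(input_img, target_img, k=4):
    # Cluster both 8-bit RGB images on luminance, match clusters dark-to-bright,
    # then transfer per-cluster color statistics.
    h, w, _ = input_img.shape
    src = input_img.reshape(-1, 3).astype(np.float64)
    tgt = target_img.reshape(-1, 3).astype(np.float64)
    lum = lambda p: p @ np.array([0.299, 0.587, 0.114])
    ks = KMeans(n_clusters=k, n_init=4).fit(lum(src)[:, None])
    kt = KMeans(n_clusters=k, n_init=4).fit(lum(tgt)[:, None])
    order_s = np.argsort(ks.cluster_centers_.ravel())
    order_t = np.argsort(kt.cluster_centers_.ravel())
    out = np.empty_like(src)
    for i in range(k):
        m_s = ks.labels_ == order_s[i]   # i-th darkest input cluster
        m_t = kt.labels_ == order_t[i]   # i-th darkest target cluster
        out[m_s] = match_moments(src[m_s], tgt[m_t])
    return out.reshape(h, w, 3).clip(0, 255).astype(np.uint8)
```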
12

Génération d'images 3D HDR / Generation of 3D HDR images

Bonnard, Jennifer 11 December 2015 (has links)
HDR imaging and 3D imaging are two fields whose simultaneous but independent development has kept growing in recent years. On the one hand, HDR (High Dynamic Range) imaging extends the color dynamic range of conventional, so-called LDR (Low Dynamic Range), images. On the other hand, 3D imaging offers immersion in the projected film, with the impression of being part of the filmed scene. Recently, these two fields have been combined to produce 3D HDR images or videos, but few viable solutions exist and none is accessible to the general public. In this thesis, we propose a method for generating 3D HDR images for viewing on autostereoscopic displays by adapting a multi-viewpoint camera to multiple-exposure acquisition. To do so, neutral-density filters are fixed on the lenses of the camera. A matching of homologous pixels then allows the aggregation of the pixels representing the same point in the acquired scene. Finally, a radiance value is computed for each pixel of the considered image set as a weighted average of the LDR values of the homologous pixels. An additional step is necessary because some pixels have an erroneous radiance. We propose a method based on the color of neighboring pixels, then two methods based on correcting the disparity of the pixels whose radiance is erroneous: the first relies on the disparity of the pixels in the neighborhood, the second on the disparity computed independently on each color component. This pipeline generates one HDR image per viewpoint. A tone-mapping algorithm is then applied to each of these images so that they can be composed with the filters corresponding to the considered autostereoscopic screen, allowing the 3D HDR image to be displayed. / HDR imaging and 3D imaging are two areas in which simultaneous but separate development has been growing in recent years. On the one hand, HDR (High Dynamic Range) imaging extends the dynamic range of traditional images, called LDR (Low Dynamic Range). On the other hand, 3D imaging offers immersion in the displayed film, with the feeling of being part of the acquired scene. Recently, these two areas have been combined to provide 3D HDR images or videos, but few viable solutions exist and none of them is available to the public. In this thesis, we propose a method to generate 3D HDR images for autostereoscopic displays by adapting a multi-viewpoint camera to multiple-exposure acquisition. To do that, neutral-density filters are fixed on the objectives of the camera. Then, pixel matching is applied to aggregate pixels that represent the same point in the acquired scene. Finally, radiance is calculated for each pixel of the set of images by using a weighted average of LDR values. An additional step is necessary because some pixels have a wrong radiance. We propose a method based on the color of adjacent pixels and two methods based on the correction of the disparity of those pixels: the first is based on the disparity of pixels of the neighborhood, the second on the disparity independently calculated on each color channel. This pipeline allows the generation of an HDR image for each viewpoint. A tone-mapping algorithm is then applied to each of these images; their composition with the filters corresponding to the autostereoscopic screen used allows the visualization of the generated 3D HDR image.
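The per-pixel radiance estimation described above is, at its core, the classic weighted-average HDR merge. A minimal sketch, assuming a linear camera response and already-matched homologous pixels (the viewpoint matching and radiance-correction steps of the thesis are omitted):

```python
import numpy as np

def merge_hdr(ldr_stack, exposure_times):
    # Each linearized value divided by its effective exposure is an estimate
    # of scene radiance; combine estimates with a hat weight that trusts
    # mid-range pixels most and down-weights under/over-exposed ones.
    acc = np.zeros(ldr_stack[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(ldr_stack, exposure_times):
        z = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weighting function
        acc += w * z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)   # per-pixel radiance estimate
```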
13

Physical and computational models of the gloss exhibited by the human hair tress : a study of conventional and novel approaches to the gloss evaluation of human hair

Rizvi, Syed January 2013 (has links)
The evaluation of the gloss of human hair, following wet/dry chemical treatments such as bleaching, dyeing and perming, has received much scientific and commercial attention. Current gloss analysis techniques use constrained viewing conditions where the hair tresses are observed under directional lighting, within a calibrated presentation environment. The hair tresses are classified by applying computational models of the fibres' physical and optical attributes and evaluated by either a panel of human observers, or the computational modelling of gloss intensity distributions processed from captured digital images. The most popular technique used in industry for automatically assessing hair gloss is to digitally capture images of the hair tresses and produce a classification based upon the average gloss intensity distribution. Unfortunately, the results from current computational modelling techniques are often found to be inconsistent when compared to the panel discriminations of human observers. In order to develop a Gloss Evaluation System that produces the same judgements as those produced from both computational models and human psychophysical panel assessments, the human visual system has to be considered. An image-based Gloss Evaluation System with gonio-capture capability has been developed, characterised and tested. A new interpretation of the interaction between reflection bands has been identified on the hair tress images, and a novel method was developed to segment the diffuse, chroma and specular regions from the image of the hair tress. A new model has been developed, based on Hunter's contrast gloss approach, to quantify the gloss of the human hair tress. Furthermore, a large number of hair tresses were treated with a range of commercial hair products to simulate different levels of hair shine. To conduct a psychophysical experiment, a one-dimensional scaling paired-comparison test, a MATLAB GUI (graphical user interface) was developed to display images of the hair tresses on a calibrated screen. Participants were asked to select the image that demonstrated the greatest gloss. To understand what users were attending to, and how they used the different reflection bands in their quantification of the gloss of the human hair tress, the GUI was run on an eye-tracking system. The results of several gloss evaluation models were compared with the participants' choices from the psychophysical experiment. The novel gloss assessment models developed during this research correlated more closely with the participants' choices and were more sensitive to changes in gloss than the conventional models used in the study.
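Hunter's contrast gloss is, in essence, the ratio of specularly to diffusely reflected light. A toy sketch of such a measure, assuming the reflection-band segmentation has already produced masks (the thesis derives the diffuse, chroma and specular regions automatically; the masks are taken as given here):

```python
import numpy as np

def contrast_gloss(tress_img, specular_mask, diffuse_mask):
    # Hunter-style contrast gloss: mean luminance of the specular band
    # relative to the diffuse band of the same tress image.
    lum = tress_img.astype(np.float64).mean(axis=2)
    specular = lum[specular_mask].mean()
    diffuse = lum[diffuse_mask].mean()
    return specular / max(diffuse, 1e-6)
```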
14

High Dynamic Range Panoramic Imaging with Scene Motion

Silk, Simon 17 November 2011 (has links)
Real-world radiance values can range over eight orders of magnitude from starlight to direct sunlight but few digital cameras capture more than three orders in a single Low Dynamic Range (LDR) image. We approach this problem using established High Dynamic Range (HDR) techniques in which multiple images are captured with different exposure times so that all portions of the scene are correctly exposed at least once. These images are then combined to create an HDR image capturing the full range of the scene. HDR capture introduces new challenges: movement in the scene creates faded copies of moving objects, referred to as ghosts. Many techniques have been introduced to handle ghosting, but typically they either address specific types of ghosting, or are computationally very expensive. We address ghosting by first detecting moving objects, then reducing their contribution to the final composite on a frame-by-frame basis. The detection of motion is addressed by performing change detection on exposure-normalized images. Additional special cases are developed based on a priori knowledge of the changing exposures; for example, if exposure is increasing every shot, then any decrease in intensity in the LDR images is a strong indicator of motion. Recent superpixel over-segmentation techniques are used to refine the detection. We also propose a novel solution for areas that see motion throughout the capture, such as foliage blowing in the wind. Such areas are detected as always moving, and are replaced with information from a single input image, and the replacement of corrupted regions can be tailored to the scenario. We present our approach in the context of a panoramic tele-presence system. Tele-presence systems allow a user to experience a remote environment, aiming to create a realistic sense of "being there", and such a system should therefore provide a high-quality visual rendition of the environment. Furthermore, panoramas, by virtue of capturing a greater proportion of a real-world scene, are often exposed to a greater dynamic range than standard photographs. Both facets of this system therefore stand to benefit from HDR imaging techniques. We demonstrate the success of our approach on multiple challenging ghosting scenarios, and compare our results with previously proposed state-of-the-art methods. We also demonstrate computational savings over these methods.
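A minimal sketch of the exposure-normalized change-detection step, assuming a linear camera response and 8-bit inputs; the superpixel refinement and the exposure-ordering special cases described above are omitted:

```python
import numpy as np

def motion_mask(ldr_a, t_a, ldr_b, t_b, rel_thresh=0.2):
    # Normalize both images by exposure time so, absent motion, they should
    # agree (up to noise) wherever both frames are usably exposed.
    ra = ldr_a.astype(np.float64) / 255.0 / t_a
    rb = ldr_b.astype(np.float64) / 255.0 / t_b
    usable = (ldr_a > 10) & (ldr_a < 245) & (ldr_b > 10) & (ldr_b < 245)
    diff = np.abs(ra - rb) / np.maximum(np.maximum(ra, rb), 1e-9)
    return usable & (diff > rel_thresh)   # True where motion is suspected
```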
15

Variable-aperture Photography

Hasinoff, Samuel William 19 January 2009 (has links)
While modern digital cameras incorporate sophisticated engineering, in terms of their core functionality, cameras have changed remarkably little in more than a hundred years. In particular, from a given viewpoint, conventional photography essentially remains limited to manipulating a basic set of controls: exposure time, focus setting, and aperture setting. In this dissertation we present three new methods in this domain, each based on capturing multiple photos with different camera settings. In each case, we show how defocus can be exploited to achieve different goals, extending what is possible with conventional photography. These methods are closely connected, in that all rely on analyzing changes in aperture. First, we present a 3D reconstruction method especially suited for scenes with high geometric complexity, for which obtaining a detailed model is difficult using previous approaches. We show that by controlling both the focus and aperture setting, it is possible to compute depth for each pixel independently. To achieve this, we introduce the "confocal constancy" property, which states that as aperture setting varies, the pixel intensity of an in-focus scene point will vary in a scene-independent way that can be predicted by prior calibration. Second, we describe a method for synthesizing photos with adjusted camera settings in post-capture, to achieve changes in exposure, focus setting, etc. from very few input photos. To do this, we capture photos with varying aperture and other settings fixed, then recover the underlying scene representation best reproducing the input. The key to the approach is our layered formulation, which handles occlusion effects but is tractable to invert. This method works with the built-in "aperture bracketing" mode found on most digital cameras. Finally, we develop a "light-efficient" method for capturing an in-focus photograph in the shortest time, or with the highest quality for a given time budget. While the standard approach involves reducing the aperture until the desired region is in-focus, we show that by "spanning" the region with multiple large-aperture photos, we can reduce the total capture time and generate the in-focus photo synthetically. Beyond more efficient capture, our method provides 3D shape at no additional cost.
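A rough sketch of how the confocal-constancy idea can be turned into a per-pixel depth test, assuming a calibrated per-aperture brightening profile; the data layout and names are illustrative, not the dissertation's actual formulation:

```python
import numpy as np

def depth_from_apertures(stacks, predicted_ratios):
    # stacks[f][a]: image at focus setting f and aperture a (illustrative layout).
    # predicted_ratios[a]: calibrated brightening of an *in-focus* point at
    # aperture a, relative to the first aperture.
    errors = []
    for per_aperture in stacks:
        imgs = np.stack([im.astype(np.float64) for im in per_aperture])
        observed = imgs / (imgs[0] + 1e-6)       # per-pixel ratios vs. first aperture
        pred = np.asarray(predicted_ratios)[:, None, None]
        errors.append(((observed - pred) ** 2).sum(axis=0))
    # The focus setting where the observed ratios best match the calibrated,
    # scene-independent prediction is taken as the in-focus setting per pixel.
    return np.argmin(np.stack(errors), axis=0)
```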
16

Image Dynamic Range Enhancement

Ozyurek, Serkan 01 September 2011 (has links) (PDF)
In this thesis, image dynamic range enhancement methods are studied in order to solve the problem of representing high dynamic range scenes with low dynamic range images. For this purpose, two main image dynamic range enhancement methods, high dynamic range imaging and exposure fusion, are studied. A more detailed analysis of exposure fusion algorithms is carried out because the whole enhancement process in exposure fusion is performed in the low dynamic range domain, and these algorithms do not need any prior information about the input images. In order to evaluate the performance of exposure fusion algorithms, both objective and subjective quality metrics are used. Moreover, the correlation between the objective quality metrics and subjective ratings is studied in the experiments.
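For reference, a minimal exposure-fusion sketch in the spirit of Mertens et al. (one family of the algorithms studied): per-pixel weights from contrast, saturation and well-exposedness, blended entirely in the low dynamic range domain. A practical implementation would blend the weight maps multi-scale to avoid seams:

```python
import numpy as np
from scipy.ndimage import laplace

def fuse_exposures(ldr_stack, sigma=0.2):
    weights = []
    for img in ldr_stack:
        z = img.astype(np.float64) / 255.0
        contrast = np.abs(laplace(z.mean(axis=2)))            # local detail
        saturation = z.std(axis=2)                            # channel spread
        well_exposed = np.exp(-((z - 0.5) ** 2) / (2 * sigma**2)).prod(axis=2)
        weights.append(contrast * saturation * well_exposed + 1e-12)
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True)                         # normalize per pixel
    stack = np.stack([i.astype(np.float64) for i in ldr_stack])
    return (w[..., None] * stack).sum(axis=0)                 # naive per-pixel blend
```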
17

Real-time photographic local tone reproduction using summed-area tables / Reprodução fotográfica local de tons em tempo real usando tabelas de áreas acumuladas

Slomp, Marcos Paulo Berteli January 2008 (has links)
The synthesis of high-dynamic-range images is an increasingly common practice in computer graphics. The challenge consists in mapping the large set of intensities of the synthesized image to the much smaller subset supported by a display device while avoiding the loss of contrastive detail. Local tone-mapping operators can perform this compression by adapting the luminance level of each pixel with respect to its neighborhood. Although they produce significantly better results than global operators, their computational cost is considerably higher, which has so far prevented their use in real-time applications. This work presents a technique for approximating the photographic local tone-reproduction operator. Every stage of the technique is implemented on the GPU, making it suitable for real-time applications: it is significantly faster than existing implementations and produces similar results. The approach is based on the use of summed-area tables to accelerate the convolution of neighborhoods with box filters, providing an elegant solution for applications that use high-dynamic-range images and require performance without compromising the quality of the synthesized image. An investigation of prefix-sum algorithms, and a possible improvement to one of them, is also presented. / High dynamic range (HDR) rendering is becoming an increasingly popular technique in computer graphics. Its challenge consists on mapping the resulting images’ large range of intensities to the much narrower ones of the display devices in a way that preserves contrastive details. Local tone-mapping operators effectively perform the required compression by adapting the luminance level of each pixel with respect to its neighborhood. While they generate significantly better results when compared to global operators, their computational costs are considerably higher, thus preventing their use in real-time applications. This work presents a real-time technique for approximating the photographic local tone reproduction that runs entirely on the GPU and is significantly faster than existing implementations that produce similar results. Our approach is based on the use of summed-area tables for accelerating the convolution of the local neighborhoods with a box filter and provides an attractive solution for HDR rendering applications that require high performance without compromising image quality. A survey of prefix sum algorithms and possible improvements are also presented.
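A compact sketch of the core idea, assuming a grayscale HDR luminance input: build the summed-area table with two cumulative sums, evaluate box means in O(1) per pixel, and feed the result into a single-scale Reinhard-style local operator (the scale-selection step of the full photographic operator is omitted):

```python
import numpy as np

def summed_area_table(img):
    # Two cumulative sums; any axis-aligned box sum is then four lookups,
    # so a box filter of any radius costs O(1) per pixel.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(sat, r):
    # Mean over a (2r+1)x(2r+1) window via the SAT, clamped at the borders.
    h, w = sat.shape
    p = np.pad(sat, ((1, 0), (1, 0)))  # leading zero row/column simplifies differences
    y0, y1 = np.clip(np.arange(h) - r, 0, h - 1), np.clip(np.arange(h) + r, 0, h - 1)
    x0, x1 = np.clip(np.arange(w) - r, 0, w - 1), np.clip(np.arange(w) + r, 0, w - 1)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    box = p[Y1 + 1, X1 + 1] - p[Y0, X1 + 1] - p[Y1 + 1, X0] + p[Y0, X0]
    return box / ((Y1 - Y0 + 1) * (X1 - X0 + 1))

def photographic_local(lum_hdr, key=0.18, r=8):
    # Reinhard-style local operator with the Gaussian neighborhood replaced
    # by a SAT box mean (single adaptation scale).
    log_avg = np.exp(np.log(lum_hdr + 1e-9).mean())   # log-average luminance
    lm = key / log_avg * lum_hdr                      # key scaling
    return lm / (1.0 + box_mean(summed_area_table(lm), r))
```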
