41

Génération d'images 3D HDR / Generation of 3D HDR images

Bonnard, Jennifer 11 December 2015
HDR imaging and 3D imaging are two fields whose simultaneous but independent development has kept growing in recent years. On the one hand, HDR (High Dynamic Range) imaging extends the dynamic range of conventional images, known as LDR (Low Dynamic Range). On the other hand, 3D imaging offers immersion in the projected film, with the feeling of being part of the acquired scene. Recently, these two fields have been combined to produce 3D HDR images or videos, but few viable solutions exist and none of them is available to the general public. In this thesis, we propose a method to generate 3D HDR images for autostereoscopic displays by adapting a multi-viewpoint camera to multiple-exposure acquisition. To do so, neutral-density filters are fixed on the lenses of the camera. Pixel matching is then applied to aggregate the pixels that represent the same point in the acquired scene. Finally, a radiance value is computed for each pixel of the image set as a weighted average of the LDR values of the matched pixels. An additional step is necessary because some pixels have an erroneous radiance. We propose a method based on the color of neighboring pixels, followed by two methods based on correcting the disparity of the pixels whose radiance is erroneous: the first uses the disparity of pixels in the neighborhood, and the second uses the disparity computed independently on each color channel. This pipeline generates one HDR image per viewpoint. A tone-mapping algorithm is then applied to each of these images so that they can be composed with the filters corresponding to the targeted autostereoscopic display, allowing the 3D HDR image to be viewed.
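The merging step described above, a weighted average of LDR values converted to radiance estimates, can be sketched generically. The code below assumes a linear camera response and a hat-shaped weighting function; both are common defaults rather than details taken from the thesis.

```python
import numpy as np

def merge_hdr(ldr_images, exposure_times):
    """Estimate per-pixel radiance from multiple LDR exposures.

    ldr_images: list of float arrays in [0, 1], one per exposure
    exposure_times: matching list of exposure times in seconds
    Assumes a linear camera response; a hat weight down-weights
    pixels near under- or over-exposure.
    """
    num = np.zeros_like(ldr_images[0])
    den = np.zeros_like(ldr_images[0])
    for img, t in zip(ldr_images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weighting, peak at mid-gray
        num += w * img / t                   # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-8)       # weighted average of estimates
```

In the multi-view setup of the thesis, each viewpoint (with its neutral-density filter) plays the role of one exposure, with the exposure time replaced by the view's effective exposure after filtering.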
42

Metody pro vylepšení kvality digitálního obrazu / Methods for enhancing quality of digital images

Svoboda, Radovan January 2010
With the arrival of affordable digital technology, we increasingly come into contact with digital images. Cameras are no longer dedicated devices but are part of almost every mobile phone, PDA, and laptop. This thesis discusses methods for enhancing the quality of digital images, focusing on noise removal, creation of high-dynamic-range (HDR) images, and extension of depth of field (DOF). It introduces the technical means of acquiring digital images and explains the origins of image noise. Attention then turns to HDR: the term itself, its physical basis, the difference between HDR sensing and HDR display, and a survey of the historical development of methods for creating HDR images. The next part explains DOF in display, the physical basis of the phenomenon, and a review of methods used for DOF extension. The thesis addresses the problem of acquiring the images needed for these tasks and designs an acquisition method, which was used to create a database of test images for each task. Part of the thesis also covers the design of a program that implements the discussed methods. Using the proposed imgmap class, the quality of output images is improved by modifying maps computed from the input images. The thesis describes the methods, the improvements, the means of setting parameters and their effects on the algorithms, and the control of the program through the proposed GUI. Finally, the software is compared with freely available software for extending DOF. The proposed software provides at least comparable results, and correct parameter settings for specific cases achieve better properties in the resulting image. Processing times are worse because the designed software was not optimized.
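The abstract's map-based approach to improving output images suggests the standard focus-stacking route to DOF extension: compute a per-pixel sharpness map for each frame and pick the sharpest source everywhere. The sketch below illustrates that general idea under our own assumptions (Laplacian sharpness, grayscale input); it is not the thesis's imgmap implementation.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(gray_stack):
    """Extend depth of field by picking, per pixel, the input frame
    that is locally sharpest (largest smoothed Laplacian response).

    gray_stack: float array of shape (n_frames, H, W), the same scene
    shot at different focus settings.
    """
    # Per-frame sharpness map: absolute Laplacian, locally averaged
    sharpness = np.stack(
        [uniform_filter(np.abs(laplace(f)), size=9) for f in gray_stack]
    )
    best = np.argmax(sharpness, axis=0)       # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return gray_stack[best, rows, cols]       # composite all-in-focus image
```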
43

Physical and computational models of the gloss exhibited by the human hair tress : a study of conventional and novel approaches to the gloss evaluation of human hair

Rizvi, Syed January 2013
The evaluation of the gloss of human hair, following wet/dry chemical treatments such as bleaching, dyeing and perming, has received much scientific and commercial attention. Current gloss analysis techniques use constrained viewing conditions where the hair tresses are observed under directional lighting, within a calibrated presentation environment. The hair tresses are classified by applying computational models of the fibres' physical and optical attributes and evaluated either by a panel of human observers or by computational modelling of gloss intensity distributions processed from captured digital images. The most popular technique used in industry for automatically assessing hair gloss is to digitally capture images of the hair tresses and produce a classification based upon the average gloss intensity distribution. Unfortunately, the results from current computational modelling techniques are often inconsistent with the panel discriminations of human observers. In order to develop a gloss evaluation system that produces the same judgements as both computational models and human psychophysical panel assessments, the human visual system has to be considered. An image-based Gloss Evaluation System with gonio-capture capability has been developed, characterised and tested. A new interpretation of the interaction between reflection bands has been identified on the hair tress images, and a novel method was developed to segment the diffuse, chroma and specular regions from the image of the hair tress. A new model has been developed, based on Hunter's contrast gloss approach, to quantify the gloss of the human hair tress. Furthermore, a large number of hair tresses were treated with a range of commercial hair products to simulate different levels of hair shine. To conduct a psychophysical experiment, a one-dimensional scaling paired-comparison test, a MATLAB GUI (graphical user interface) was developed to display images of the hair tresses on a calibrated screen. Participants were asked to select the image that demonstrated the greatest gloss. To understand what users were attending to, and how they used the different reflection bands in their quantification of the gloss of the human hair tress, the GUI was run on an eye-tracking system. The results of several gloss evaluation models were compared with the participants' choices from the psychophysical experiment. The novel gloss assessment models developed during this research correlated more closely with the participants' choices and were more sensitive to changes in gloss than the conventional models used in the study.
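Hunter's contrast gloss relates perceived gloss to the contrast between specular and diffuse reflection. A minimal sketch of such a measure, assuming the reflection bands have already been segmented as described above, might look as follows; the exact ratio form is an assumption rather than the thesis's model.

```python
import numpy as np

def contrast_gloss(image, specular_mask, diffuse_mask):
    """Hunter-style contrast gloss: ratio of mean intensity in the
    specular reflection band to mean intensity in the diffuse band.

    image: 2-D float array (luminance of the tress image)
    specular_mask, diffuse_mask: boolean arrays marking the two
    reflection bands (assumed already segmented upstream).
    """
    specular = image[specular_mask].mean()
    diffuse = image[diffuse_mask].mean()
    return specular / max(diffuse, 1e-8)  # larger values mean stronger highlight contrast
```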
44

High Dynamic Range Panoramic Imaging with Scene Motion

Silk, Simon 17 November 2011
Real-world radiance values can range over eight orders of magnitude, from starlight to direct sunlight, but few digital cameras capture more than three orders in a single Low Dynamic Range (LDR) image. We approach this problem using established High Dynamic Range (HDR) techniques, in which multiple images are captured with different exposure times so that all portions of the scene are correctly exposed at least once. These images are then combined to create an HDR image capturing the full range of the scene. HDR capture introduces new challenges: movement in the scene creates faded copies of moving objects, referred to as ghosts. Many techniques have been introduced to handle ghosting, but typically they either address only specific types of ghosting or are computationally very expensive. We address ghosting by first detecting moving objects, then reducing their contribution to the final composite on a frame-by-frame basis. Motion is detected by performing change detection on exposure-normalized images. Additional special cases are developed based on a priori knowledge of the changing exposures; for example, if exposure increases with every shot, then any decrease in intensity in the LDR images is a strong indicator of motion. Recent superpixel over-segmentation techniques are used to refine the detection. We also propose a novel solution for areas that see motion throughout the capture, such as foliage blowing in the wind. Such areas are detected as always moving and are replaced with information from a single input image, and the replacement of corrupted regions can be tailored to the scenario. We present our approach in the context of a panoramic tele-presence system. Tele-presence systems allow a user to experience a remote environment, aiming to create a realistic sense of "being there," and such a system should therefore provide a high-quality visual rendition of the environment. Furthermore, panoramas, by virtue of capturing a greater proportion of a real-world scene, are often exposed to a greater dynamic range than standard photographs. Both facets of this system therefore stand to benefit from HDR imaging techniques. We demonstrate the success of our approach on multiple challenging ghosting scenarios and compare our results with state-of-the-art methods previously proposed. We also demonstrate computational savings over these methods.
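The motion-detection step lends itself to a short sketch: exposure-normalize each LDR frame, flag large frame-to-frame changes, and add the monotonicity cue for increasing exposures. This is a generic illustration under our own assumptions; the threshold values and the helper name motion_mask are ours, and the thesis's superpixel refinement is not reproduced.

```python
import numpy as np

def motion_mask(frames, exposures, thresh=0.1):
    """Flag pixels that move between consecutive LDR frames.

    frames: list of linear-response float images in [0, 1]
    exposures: matching exposure times, assumed increasing
    Returns a boolean mask that is True where motion is suspected.
    """
    norm = [f / t for f, t in zip(frames, exposures)]  # exposure-normalized radiance
    mask = np.zeros(frames[0].shape, dtype=bool)
    for prev, curr, p_img, c_img in zip(norm, norm[1:], frames, frames[1:]):
        mask |= np.abs(curr - prev) > thresh           # generic change detection
        # With increasing exposure, intensity should not drop between
        # shots, so any decrease in the raw LDR values signals motion.
        mask |= (c_img < p_img - 0.02)
    return mask
```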
45

Variable-aperture Photography

Hasinoff, Samuel William 19 January 2009
While modern digital cameras incorporate sophisticated engineering, in terms of their core functionality cameras have changed remarkably little in more than a hundred years. In particular, from a given viewpoint, conventional photography essentially remains limited to manipulating a basic set of controls: exposure time, focus setting, and aperture setting. In this dissertation we present three new methods in this domain, each based on capturing multiple photos with different camera settings. In each case, we show how defocus can be exploited to achieve different goals, extending what is possible with conventional photography. These methods are closely connected, in that all rely on analyzing changes in aperture. First, we present a 3D reconstruction method especially suited for scenes with high geometric complexity, for which obtaining a detailed model is difficult using previous approaches. We show that by controlling both the focus and aperture setting, it is possible to compute depth for each pixel independently. To achieve this, we introduce the "confocal constancy" property, which states that as the aperture setting varies, the pixel intensity of an in-focus scene point will vary in a scene-independent way that can be predicted by prior calibration. Second, we describe a method for synthesizing photos with adjusted camera settings post-capture, to achieve changes in exposure, focus setting, etc. from very few input photos. To do this, we capture photos with varying aperture and other settings fixed, then recover the underlying scene representation that best reproduces the input. The key to the approach is our layered formulation, which handles occlusion effects but is tractable to invert. This method works with the built-in "aperture bracketing" mode found on most digital cameras. Finally, we develop a "light-efficient" method for capturing an in-focus photograph in the shortest time, or with the highest quality for a given time budget. While the standard approach involves reducing the aperture until the desired region is in focus, we show that by "spanning" the region with multiple large-aperture photos, we can reduce the total capture time and generate the in-focus photo synthetically. Beyond more efficient capture, our method provides 3D shape at no additional cost.
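The "confocal constancy" test can be rendered as a toy least-squares check: for each focus hypothesis, fit a single scene radiance to the intensities observed across apertures using calibrated per-aperture gains, and score the residual. The sketch below is our illustrative reading under that assumption, not the dissertation's actual pipeline; all names are hypothetical.

```python
import numpy as np

def confocal_constancy_error(pixel_stack, calib_ratios):
    """Score how well one pixel's intensities across apertures match the
    calibrated, scene-independent ratios expected for an in-focus point.

    pixel_stack: shape (n_focus, n_apertures), intensities for one pixel
    calib_ratios: shape (n_apertures,), relative gains per aperture,
    measured once by photographing a known in-focus target.
    Returns, per focus setting, the residual after factoring out the
    calibrated ratios; the focus setting minimizing it estimates depth.
    """
    # Least-squares scene radiance per focus setting under the model
    scale = (pixel_stack * calib_ratios).sum(axis=1) / (calib_ratios ** 2).sum()
    model = scale[:, None] * calib_ratios[None, :]
    return ((pixel_stack - model) ** 2).sum(axis=1)

# depth_index = np.argmin(confocal_constancy_error(stack, ratios))
```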
48

Image Dynamic Range Enhancement

Ozyurek, Serkan 01 September 2011
In this thesis, image dynamic range enhancement methods are studied to address the problem of representing high dynamic range scenes with low dynamic range images. Two main approaches, high dynamic range imaging and exposure fusion, are examined. Exposure fusion algorithms are analyzed in more detail because the whole enhancement process takes place in the low dynamic range domain and requires no prior information about the input images. The performance of exposure fusion algorithms is evaluated with both objective and subjective quality metrics, and the correlation between the objective metrics and the subjective ratings is studied experimentally.
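Exposure fusion, as introduced by Mertens et al., blends the LDR inputs directly using per-pixel quality weights for contrast, saturation, and well-exposedness. The sketch below is a single-scale simplification (practical implementations blend with Laplacian pyramids to avoid seams), and the weight combination shown is one common choice rather than a prescription from the thesis.

```python
import numpy as np
from scipy.ndimage import laplace

def exposure_fusion(images, sigma=0.2):
    """Single-scale exposure fusion of aligned LDR images in [0, 1].

    images: list of (H, W, 3) float arrays with different exposures.
    Weighs each pixel by contrast, saturation and well-exposedness,
    then blends; no HDR intermediate or camera response is needed.
    """
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        contrast = np.abs(laplace(gray))                 # local detail
        saturation = img.std(axis=2)                     # color vividness
        well_exposed = np.exp(
            -((img - 0.5) ** 2) / (2 * sigma ** 2)
        ).prod(axis=2)                                   # closeness to mid-range
        weights.append(contrast * saturation * well_exposed + 1e-12)
    total = np.sum(weights, axis=0)
    return sum(w[..., None] * img for w, img in zip(weights, images)) / total[..., None]
```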
49

Asynchroner CMOS-Bildsensor mit erweitertem Dynamikbereich und Unterdrückung zeitlich redundanter Daten / Asynchronous CMOS image sensor with extended dynamic range and suppression of temporally redundant data

Matolin, Daniel 20 January 2011
This work concerns the design of an asynchronous, time-based CMOS image sensor with extended dynamic range and suppression of temporally redundant data. With ever smaller feature sizes in modern semiconductor fabrication processes, and at the same time the physically limited scalability of conventional image sensors, it is becoming increasingly possible and practical to implement signal-processing approaches at the pixel level. Against this background, this work presents the design of a novel CMOS image sensor with nearly complete suppression of temporally redundant data at the pixel level. Each photosensitive element in the array operates fully autonomously: it independently detects changes in illumination and outputs the absolute value, via asynchronous signaling, only when such a change occurs. In addition, the developed imager features a dynamic range that is substantially higher than that of conventional image sensors, together with low power consumption, which makes the principle particularly suitable for mobile systems and surveillance tasks. The feasibility of the concept was demonstrated by the successful implementation of a corresponding imager in a standard CMOS process. With a design size of 304 x 240 pixels, well beyond the scope of typical prototype realizations, the applicability to larger sensor arrays was demonstrated in particular. The circuit was successfully tested, with both the overall system and individual circuit blocks analyzed by measurement. The demonstrated image quality agrees well with the theoretical predictions.
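The pixel principle (report an absolute value only when the illumination has changed) can be illustrated with a small software simulation. The sketch below is a behavioral model only; the log-domain change test and the threshold value are assumptions, and the actual chip implements this asynchronously in analog circuitry.

```python
import numpy as np

def pixel_events(frames, times, thresh=0.15):
    """Simulate change-driven pixel read-out: a pixel reports its
    absolute value only when its log-intensity has changed by more
    than `thresh` since its last report, suppressing redundant data.

    frames: (n, H, W) float array of linear intensities > 0
    times:  (n,) timestamps for each frame
    Yields (t, row, col, value) tuples, the asynchronous event stream.
    """
    last = np.log(frames[0])                 # state at last report, per pixel
    for t, frame in zip(times[1:], frames[1:]):
        logf = np.log(frame)
        changed = np.abs(logf - last) > thresh
        for r, c in zip(*np.nonzero(changed)):
            yield (t, r, c, frame[r, c])     # event carries the absolute value
        last[changed] = logf[changed]        # only reporting pixels update state
```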
