31

Optimizing The High Dynamic Range Imaging Pipeline

Akyuz, Ahmet Oguz 01 January 2007 (has links)
High dynamic range (HDR) imaging is a rapidly growing field in computer graphics and image processing. It allows capture, storage, processing, and display of photographic information within a scene-referred framework. The HDR imaging pipeline consists of the major steps an HDR image is expected to go through from capture to display. It involves various techniques to create HDR images, pixel encodings and file formats for storage, tone mapping for display on conventional display devices, and direct display on HDR-capable screens. Each of these stages has important open problems, which need to be addressed for a smoother transition to an HDR imaging pipeline. We addressed some of these important problems, such as noise reduction in HDR imagery, preservation of color appearance, validation of tone mapping operators, and image display on HDR monitors. The aim of this thesis is thus to present our findings and describe the research we have conducted within the framework of optimizing the HDR imaging pipeline.
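The storage stage of the pipeline mentioned above typically relies on compact HDR pixel encodings such as Radiance RGBE, in which three mantissas share one exponent byte. As a rough illustration only (it is not drawn from this thesis), a minimal RGBE-style encode/decode sketch in Python could look like the following; the function names are hypothetical:

```python
import numpy as np

def rgbe_encode(rgb):
    """Pack a linear float RGB triple into 4 bytes with a shared exponent (RGBE style)."""
    v = max(rgb)
    if v < 1e-32:                        # treat (near-)black pixels as all zeros
        return (0, 0, 0, 0)
    mantissa, exponent = np.frexp(v)     # v = mantissa * 2**exponent, mantissa in [0.5, 1)
    scale = mantissa * 256.0 / v
    r, g, b = (int(c * scale) for c in rgb)
    return (r, g, b, int(exponent) + 128)    # exponent stored with a +128 bias

def rgbe_decode(rgbe):
    """Recover approximate linear float RGB from an RGBE tuple."""
    r, g, b, e = rgbe
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = np.ldexp(1.0, e - 128 - 8)       # 2**(e - 128) / 256
    return (r * f, g * f, b * f)

print(rgbe_decode(rgbe_encode((0.9, 2.5, 0.1))))   # roughly (0.9, 2.5, 0.1)
```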
32

Single Shot High Dynamic Range and Multispectral Imaging Based on Properties of Color Filter Arrays

Simon, Paul M. 16 May 2011 (has links)
No description available.
33

Development of High Speed High Dynamic Range Videography

Griffiths, David John 09 February 2017 (has links)
High speed video has been a significant tool for the quantitative and qualitative assessment of phenomena that are too fast to readily observe. It was first used in 1852 by William Henry Fox Talbot to settle a dispute over the positions of a horse's hooves while galloping. Since that time private industry, government, and enthusiasts have been measuring dynamic scenarios with high speed video. One challenge that faces the high speed video community is the dynamic range of the sensors. The dynamic range of the sensor is constrained by the bit depth of the analog-to-digital converter, the full-well capacity of the sensor site, and the baseline noise. A typical high speed camera can span a 60 dB dynamic range, 1000:1, natively. More recently the dynamic range has been extended to about 80 dB using different pixel acquisition methods. In this dissertation a method to extend the dynamic range will be presented and demonstrated, extending the dynamic range of a high speed camera system to over 170 dB, about 31,000,000:1. The proposed formation methodology is adaptable to any camera combination and almost any needed dynamic range. The dramatic increase in the dynamic range is made possible through an adaptation of current high dynamic range image formation methodologies. Due to the high cost of a high speed camera, a minimum number of cameras is desired to form a high dynamic range high speed video system. With a reduced number of cameras spanning a significant range, the errors in the formation process compound significantly relative to a normal high dynamic range image. The increase in uncertainty arises from the lack of relevant correlated information for final image formation, necessitating the development of a new formation methodology. In the text that follows, the problem statement and background information will be reviewed in depth. The development of a new weighting function, a stochastic image formation process, a tone mapping methodology, and an optimized multi-camera design will be presented. The proposed methodologies' effectiveness will be compared to current methods throughout the text and a final demonstration will be presented. / Ph. D.
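For orientation, the dynamic-range figures quoted above follow the amplitude convention dB = 20·log10(contrast ratio), so a 1000:1 scene spans 60 dB. The sketch below shows that conversion together with a generic hat-weighted multi-exposure merge; it illustrates only the conventional HDR formation that the dissertation adapts, not the new weighting function or stochastic formation process proposed in it:

```python
import numpy as np

def dynamic_range_db(ratio):
    """Contrast ratio -> decibels under the amplitude (20 * log10) convention."""
    return 20.0 * np.log10(ratio)

print(dynamic_range_db(1000))   # 60.0 dB, the typical native range quoted above

def merge_exposures(images, exposure_times):
    """Generic weighted HDR merge of linearized images taken at different exposures.

    images: list of float arrays scaled to [0, 1].
    Returns a relative scene-radiance estimate per pixel.
    """
    numerator = np.zeros_like(images[0])
    denominator = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)   # simple hat weight: trust mid-tones most
        numerator += weight * img / t            # per-exposure radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)
```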
34

Image Based Visualization Methods for Meteorological Data

Olsson, Björn January 2004 (has links)
Visualization is the process of constructing methods that are able to synthesize interesting and informative images from data sets, in order to simplify the process of interpreting the data. In this thesis a new approach to constructing meteorological visualization methods using neural network technology is described. The methods are trained with examples instead of explicitly designing the appearance of the visualization. This approach is exemplified using two applications. In the first, the problem of computing an image of the sky for dynamic weather, that is, one that takes the current weather state into account, is addressed. It is a complicated problem to tie the appearance of the sky to a weather state. The method is trained with weather data sets and images of the sky so that it can synthesize a sky image for arbitrary weather conditions. The method has been trained with various kinds of weather and image data. The results show that this is a feasible way to construct weather visualizations, but more work remains in characterizing the weather state, and further refinement is required before the full potential of the method can be explored. This approach would make it possible to synthesize sky images of dynamic weather using a fast and efficient empirical method. In the second application the problem of computing synthetic satellite images from numerical forecast data sets is addressed. In this case a model is trained with preclassified satellite images and forecast data sets so that it can synthesize a satellite image representing arbitrary conditions. The resulting method makes it possible to visualize data sets from numerical weather simulations using synthetic satellite images, but could also be the basis for algorithms based on a preliminary cloud classification. / Report code: LiU-Tek-Lic-2004:66.
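The example-driven idea, learning the mapping from a weather state to an image rather than hand-designing it, can be sketched with any multi-output regressor. The toy code below is only an illustration under assumed inputs (synthetic data and scikit-learn's MLPRegressor rather than the network used in the thesis): it trains a small MLP to map weather-state vectors to downsampled sky-image pixels.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy training set: each weather state is a feature vector (e.g. cloud cover,
# humidity, solar elevation); each target is a flattened 8x8 RGB sky thumbnail.
X = rng.uniform(size=(500, 3))
Y = rng.uniform(size=(500, 8 * 8 * 3))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, Y)

# Synthesize a sky thumbnail for an unseen weather state.
sky = model.predict([[0.7, 0.4, 0.2]]).reshape(8, 8, 3)
print(sky.shape)   # (8, 8, 3)
```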
35

Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

Lluis-Gomez, Alexis L. January 2015 (has links)
One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to accurately reproduce its capabilities. This gap has become a crucial shortcoming in digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate imaging reproduction and analytic capabilities. Over the decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity of the mechanisms by which the human visual system achieves effective colour constancy and dynamic range. The aim of the research presented in this thesis is to enhance the overall image quality within an image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities; the ultimate goal of any state-of-the-art technique or commercial imaging device.
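Colour constancy methods generally estimate the scene illuminant and normalise it away. The snippet below is a minimal gray-world sketch of that idea, included only as background; it is not the algorithm proposed in this thesis:

```python
import numpy as np

def gray_world(image):
    """Gray-world colour constancy: assume the average scene colour is achromatic.

    image: float array of shape (H, W, 3) in linear RGB.
    Returns the image with the estimated illuminant divided out.
    """
    illuminant = image.reshape(-1, 3).mean(axis=0)            # per-channel mean
    gain = illuminant.mean() / np.maximum(illuminant, 1e-6)   # rescale channels to a common gray
    return np.clip(image * gain, 0.0, None)
```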
36

HDR and the Colorist : How new technology affects professionals in the motion picture industry

Westling, Jonas January 2019 (has links)
By utilizing a Research through Design approach, this master thesis studies how technological changes might affect professionals working in the motion picture industry, specifically how the advent of HDR (High Dynamic Range) affects the colorist. The research questions formulated are the following: (1) How can color grading in HDR be approached? (2) What effect can HDR have on visual modality? (3) What specific affordances can HDR offer the colorist? (4) How can HDR affect the creative space of the colorist? Three of the research questions are derived from the theoretical framework applied in this master thesis: starting with the social semiotic implementation of the term modality (models of reality), the Gibsonian term affordance (possibilities for action and meaning making) and its use in communications research, and lastly the concept of creative space in motion picture production. Analytic autoethnography was used to generate primary data by documenting the process of color grading a 13-minute short film, and semi-structured interviews were performed with four colorists. Among other findings, this study found that HDR offers a wider range of modality expression than SDR (Standard Dynamic Range) with respect to several visual modality markers. Four HDR-specific affordances were formulated: (1) color expandability, (2) highlight differentiability, (3) tonal rangeability, and (4) brightness disturbability. Relating to the concept of creative space, the colorists expressed a concern that they will have to create multiple versions when delivering HDR without receiving a bigger budget for it, and will therefore have less time to spend on other aspects of color grading.
37

Objective Quality Assessment and Optimization for High Dynamic Range Image Tone Mapping

Ma, Kede 03 June 2014 (has links)
Tone mapping operators aim to compress high dynamic range (HDR) images to low dynamic range ones so as to visualize HDR images on standard displays. Most existing works were demonstrated on specific examples without being thoroughly tested on well-established and subject-validated image quality assessment models. A recent tone mapped image quality index (TMQI) made the first attempt at objective quality assessment of tone mapped images. TMQI consists of two fundamental building blocks: structural fidelity and statistical naturalness. In this thesis, we propose an enhanced tone mapped image quality index (eTMQI) by 1) constructing an improved nonlinear mapping function to better account for the local contrast visibility of HDR images and 2) developing an image-dependent statistical naturalness model to quantify the unnaturalness of tone mapped images based on a subjective study. Experiments show that the modified structural fidelity and statistical naturalness terms in eTMQI better correlate with subjective quality evaluations. Furthermore, we propose an iterative optimization algorithm for tone mapping. The advantages of this algorithm are twofold: 1) eTMQI and TMQI can be compared in a more straightforward way; 2) better quality tone mapped images can be automatically generated by using eTMQI as the optimization goal. Numerical and subjective experiments demonstrate that eTMQI is a superior objective quality assessment metric for tone mapped images and consistently outperforms TMQI.
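TMQI-style indices combine the two building blocks into a single score of the form Q = a·S^alpha + (1 - a)·N^beta, where S is structural fidelity and N is statistical naturalness. The sketch below shows only that combination step, with illustrative constants rather than the published TMQI parameters; the fidelity and naturalness models themselves (and the improvements eTMQI makes to them) are considerably more involved:

```python
def tmqi_style_score(structural_fidelity, statistical_naturalness,
                     a=0.8, alpha=0.3, beta=0.7):
    """Combine structural fidelity S and statistical naturalness N (both in [0, 1])
    into one quality score, TMQI-style.

    The constants a, alpha, beta are illustrative placeholders, not the
    published TMQI parameters.
    """
    S, N = structural_fidelity, statistical_naturalness
    return a * S**alpha + (1.0 - a) * N**beta

print(tmqi_style_score(0.85, 0.40))   # about 0.87 for these example inputs
```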
38

Processamento de imagens HDR utilizando filtros não lineares e decomposição multiescala / HDR image processing using non-linear filters and multiscale decomposition

Rodrigues, Lídia Maria January 2014 (has links)
Advisor: Prof. Dr. André Guilherme Ribeiro Balan / Master's dissertation - Universidade Federal do ABC, Graduate Program in Computer Science, 2014. / Photography is an activity in great growth and development, not only among professionals but also in society as a whole. An image taken of a certain scene is expected to be as real as possible, and the available equipment is expected to capture and display such images as faithfully as possible to the scene being recorded. The work developed and presented in this dissertation surveys and studies techniques for manipulating HDR (High Dynamic Range) images, that is, images carrying a large amount of information about the scene they represent, in order to make them viewable with all the details they contain, either as faithfully as possible or in an artistic rendition. The required manipulation of such images is carried out by mapping the HDR images to LDR (Low Dynamic Range) images. This mapping can be done with tone mapping operators and, as discussed in this dissertation, with multiscale decomposition. Multiscale decomposition offers high-quality results and is of great relevance to the image processing field, since it divides the input image into layers, manipulates them individually, and then reconstructs the image. In this work, the non-linear filtering methods and tone mapping operators that best fit the multiscale decomposition process are evaluated, together with compression of the layers obtained in the decomposition, in order to obtain realistic images with enhanced and highlighted details. In addition, a new local tone mapping operator is proposed, based on the Reinhard local operator, with the same characteristics and with parameter adjustment, which gives more robust results than the Reinhard local operator. Thus, new parameters or methods are proposed to increase the quality of the obtained images.
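The Reinhard operator mentioned above builds on the photographic global mapping, which the local variant refines with a spatially varying adaptation. A minimal sketch of the global step (an illustration only, not the new operator proposed in this dissertation) is:

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Global Reinhard photographic tone mapping of an HDR luminance map.

    luminance: float array of scene luminances (> 0).
    key: target middle gray of the mapped image.
    """
    log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))   # log-average luminance
    scaled = key * luminance / log_avg                    # map the log-average onto the key value
    return scaled / (1.0 + scaled)                        # compress highlights into [0, 1)
```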
39

Real-time image based lighting with streaming HDR-light probe sequences

Hajisharif, Saghi January 2012 (has links)
This work presents a framework for shading virtual objects using high dynamic range (HDR) light probe sequences in real time. The method uses an HDR environment map of the scene, captured online by an HDR video camera, as the light probe. For each frame of the HDR video, an optimized CUDA kernel projects the incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer, the radiance calculation reduces to a low-order dot product between the lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve consistent illumination in real time with the flexibility to respond to dynamic changes in the real environment. We use low-order spherical harmonics to represent both the lighting and the transfer functions in order to avoid aliasing.
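With precomputed radiance transfer, per-vertex shading in each video frame reduces to a dot product between the spherical-harmonic lighting coefficients projected from the current light probe and the precomputed transfer coefficients, and temporal coherence can be exploited by smoothing the lighting coefficients across frames. A minimal CPU-side sketch of those two steps (an illustration, not the CUDA kernel described above) might be:

```python
import numpy as np

def smooth_lighting(prev_coeffs, new_coeffs, alpha=0.2):
    """Exponential smoothing of SH lighting coefficients across video frames."""
    return (1.0 - alpha) * prev_coeffs + alpha * new_coeffs

def shade_vertices(light_coeffs, transfer_coeffs):
    """Per-vertex radiance as a dot product of lighting and transfer SH coefficients.

    light_coeffs:    (n_coeffs,) SH projection of the current light probe.
    transfer_coeffs: (n_vertices, n_coeffs) precomputed transfer vectors.
    """
    return transfer_coeffs @ light_coeffs

n_coeffs = 9                                    # SH bands 0-2, i.e. low order to limit aliasing
lighting = np.zeros(n_coeffs)
frame_lighting = np.random.rand(n_coeffs)       # stand-in for the SH projection of one HDR frame
lighting = smooth_lighting(lighting, frame_lighting)
radiance = shade_vertices(lighting, np.random.rand(1000, n_coeffs))
print(radiance.shape)                           # (1000,)
```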
40

Example-guided image editing / Édition d'image guidée par exemple

Hristova, Hristina 20 October 2017 (has links)
This thesis addresses three main topics from the domain of image processing: color transfer, high-dynamic-range (HDR) imaging, and guidance-based image filtering. The first part of this thesis is dedicated to color transfer between input and target images. We adopt cluster-based techniques and apply Gaussian mixture models to carry out a more precise color transfer. In addition, we propose four new mapping policies to robustly portray the target style in terms of two key features: color and light. Furthermore, we exploit the properties of the multivariate generalized Gaussian distributions (MGGD) in order to transfer an ensemble of features between images simultaneously. The multi-feature transfer is carried out using our novel transformation of the MGGD. Despite the efficiency of the proposed MGGD transformation for multi-feature transfer, our experiments have shown that the bounded Beta distribution provides a much more precise model for the color and light distributions of images. To exploit this property of the Beta distribution, we propose a new color transfer method, where we model the color and light distributions by the Beta distribution and introduce a novel transformation of the Beta distribution. The second part of this thesis focuses on HDR imaging. We introduce a method for the automatic creation of HDR images from only two images: a flash image and a non-flash image. We mimic the camera response function with a brightness function and recover details from the flash image using our new chromatic adaptation transform (CAT), called the bi-local CAT. That way, we efficiently recover the dynamic range of the real-world scene without compromising the quality of the HDR image (as our method is robust to misalignment). In the context of HDR image creation, the bi-local CAT recovers details from the flash image and removes flash shadows and reflections. In the last part of this thesis, we exploit the potential of the bi-local CAT for various image editing applications such as image de-noising, image de-blurring, texture transfer, etc. We propose a novel guidance-based filter in which we embed the bi-local CAT. The proposed filter performs as well as (and for certain applications even better than) state-of-the-art methods.
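Distribution-based colour transfer generalises the classic approach of matching low-order statistics of the input image to those of the target. The sketch below shows only that per-channel mean/variance baseline (an illustration, not the methods of this thesis, which model the joint distributions with MGGD and bounded Beta models and derive transformations between them):

```python
import numpy as np

def match_mean_std(source, target):
    """Per-channel colour transfer: shift/scale source statistics onto the target's.

    source, target: float arrays of shape (H, W, 3), ideally in a decorrelated
    colour space (e.g. Lab) rather than RGB.
    """
    src = source.reshape(-1, 3)
    tgt = target.reshape(-1, 3)
    src_mean, src_std = src.mean(axis=0), src.std(axis=0) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=0), tgt.std(axis=0)
    out = (src - src_mean) / src_std * tgt_std + tgt_mean
    return out.reshape(source.shape)
```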
