31

"Magic Lantern" videodekodér pro fotoaparát Canon 5D / Magic Lantern Video Decoder for Canon 5D Camera

Škvařilová, Radka January 2015
This thesis presents a design for a decoder for video recorded with the Magic Lantern software, which can be installed on the Canon 5D. This video is notable for its 14-bit raw format, which allows it to produce very high-quality output. The aim of the thesis is to split the video into individual frames in a suitable format that can also handle high dynamic range image formats.
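The central decoding step, splitting a 14-bit raw stream into frames, can be sketched as follows. This is a minimal illustration assuming the simplest possible layout (contiguous, MSB-first 14-bit samples); the actual Magic Lantern RAW/MLV container has headers and block structure that a real decoder would need to parse, so none of this is taken from the thesis itself.

```python
import numpy as np

def unpack_14bit(raw_bytes: bytes, width: int, height: int) -> np.ndarray:
    """Unpack contiguous 14-bit samples into a 16-bit frame (illustrative layout only)."""
    bits = np.unpackbits(np.frombuffer(raw_bytes, dtype=np.uint8))
    n = width * height
    samples = bits[: n * 14].reshape(n, 14)
    # Re-assemble each group of 14 bits into one integer sample (MSB first).
    weights = 1 << np.arange(13, -1, -1)
    frame = (samples * weights).sum(axis=1).astype(np.uint16)
    return frame.reshape(height, width)
```

Each unpacked frame could then be written out in a format that preserves the full bit depth (e.g. 16-bit TIFF or OpenEXR) so downstream HDR tools can use it.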
32

Computer vision at low light

Abhiram Gnanasambandam 14 June 2022
Imaging in low light is difficult because the number of photons arriving at the image sensor is low. This is a major technological challenge for applications such as surveillance and autonomous driving. Conventional CMOS image sensors (CIS) circumvent this issue by using techniques such as burst photography. However, this process is slow, and it does not solve the underlying problem that the CIS cannot efficiently capture the signals arriving at the sensor. This dissertation focuses on solving this problem using a combination of better image sensors (Quanta Image Sensors) and computational imaging techniques.
The first part of the thesis involves understanding how quanta image sensors work and how they can be used to solve the low light imaging problem. The second part is about the algorithms that can deal with images obtained in low light. The contributions in this part include: 1) understanding and proposing solutions for the Poisson noise model, 2) proposing a new machine learning scheme called student-teacher learning for helping neural networks deal with noise, and 3) developing solutions that work not only for low light but also for a wide range of signal and noise levels. Using these ideas, we can solve a variety of applications in low light, such as color imaging, dynamic scene reconstruction, deblurring, and object detection.
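As a rough illustration of the photon-limited regime the first part addresses, low-light capture is often modeled by drawing per-pixel photon counts from a Poisson distribution. The sketch below is a generic simulation under that model, not the dissertation's Quanta Image Sensor pipeline; the photon budget parameter is an assumption.

```python
import numpy as np

def simulate_low_light(image: np.ndarray, photons_per_pixel: float) -> np.ndarray:
    """Simulate photon-limited capture: scale a [0,1] image to an expected
    photon count, draw Poisson-distributed arrivals, and rescale."""
    rng = np.random.default_rng(0)
    expected = np.clip(image, 0.0, 1.0) * photons_per_pixel
    counts = rng.poisson(expected)
    return counts / photons_per_pixel  # normalized, noisy observation
```

At a few photons per pixel the output is dominated by shot noise, which is the regime the denoising and detection algorithms above are designed for.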
33

Uživatelské rozhraní systému pro práci s HDR obrazem / User Interface for HDR Tone Mapping System

Jedlička, Jan January 2021
The goal of this thesis is to improve the graphical user interface of the Tone Mapping Studio (TMS) program. This program is being developed at the Faculty of Information Technology (FIT), Brno University of Technology (BUT) by doc. Ing. Martin Čadík, PhD. The current program uses the Qt3 framework, which is old and not compatible with modern libraries, so it has to be rewritten to support the current version, Qt5. I will analyze other programs in the area of working with High Dynamic Range (HDR) images and video. Changes for improving the interface will be proposed and UX tests will be carried out. The second part consists of comparing the plug-ins for converting images to grayscale that already exist in TMS.
34

Real-Time Embedded Algorithms for Local Tone Mapping of High Dynamic Range Images

Hassan, Firas January 2007
No description available.
35

A Real-Time Implementation of Gradient Domain High Dynamic Range Compression Using a Local Poisson Solver

Vytla, Lavanya 20 May 2010
No description available.
36

Image-based Material Editing

Khan, Erum 01 January 2006
Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry. We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials. To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust to noise in individual exposures. This ensures that noise which might otherwise have adversely affected the shape recovery of the object is removed. We also present an algorithm to automatically generate alpha mattes. This algorithm requires two input images, one in which the object is in focus and one in which the background is in focus, and automatically produces an approximate matte indicating which pixels belong to the object. The result is then refined by a second algorithm to produce an accurate alpha matte, which can be given as input to our material editing techniques.
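The depth-from-intensities step described above can be sketched, under strong assumptions, as bilateral filtering of a luminance image followed by normals derived from the filtered gradients. The filter parameters and the shape-from-intensity shortcut below are illustrative only and are not taken from the thesis.

```python
import cv2
import numpy as np

def approximate_depth_and_normals(luminance: np.ndarray):
    """Smooth image intensities with a bilateral filter and treat the result
    as a rough depth map; recover surface normals from its gradients.
    Parameters are arbitrary assumptions for a [0, 1] luminance image."""
    depth = cv2.bilateralFilter(luminance.astype(np.float32),
                                d=9, sigmaColor=0.1, sigmaSpace=15)
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return depth, normals
```

The recovered normals could then index an environment map or BRDF lookup to re-shade the masked object, in the spirit of the material replacement described above.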
37

Optimizing The High Dynamic Range Imaging Pipeline

Akyuz, Ahmet Oguz 01 January 2007
High dynamic range (HDR) imaging is a rapidly growing field in computer graphics and image processing. It allows capture, storage, processing, and display of photographic information within a scene-referred framework. The HDR imaging pipeline consists of the major steps an HDR image is expected to go through from capture to display. It involves various techniques to create HDR images, pixel encodings and file formats for storage, tone mapping for display on conventional display devices, and direct display on HDR-capable screens. Each of these stages has important open problems, which need to be addressed for a smoother transition to an HDR imaging pipeline. We addressed some of these important problems, such as noise reduction in HDR imagery, preservation of color appearance, validation of tone mapping operators, and image display on HDR monitors. The aim of this thesis is thus to present our findings and describe the research we have conducted within the framework of optimizing the HDR imaging pipeline.
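As one concrete example of a pipeline stage, the capture step commonly merges a bracketed exposure stack into a radiance map using per-pixel weights. The sketch below assumes a linear camera response and known exposure times, which is a simplification of the HDR creation and noise reduction techniques studied in the thesis.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linear [0, 1] exposures into an HDR radiance map using a simple
    hat weighting that downweights under- and over-exposed pixels."""
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat function: peaks at mid-gray
        numerator += w * img / t
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)
```

The resulting radiance map would then be stored in an HDR encoding and tone mapped for conventional displays, i.e. the later stages of the pipeline described above.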
38

Single Shot High Dynamic Range and Multispectral Imaging Based on Properties of Color Filter Arrays

Simon, Paul M. 16 May 2011
No description available.
39

Development of High Speed High Dynamic Range Videography

Griffiths, David John 09 February 2017
High speed video has been a significant tool for the quantitative and qualitative assessment of phenomena that are too fast to observe readily. It was first used in 1852 by William Henry Fox Talbot to settle a dispute about the synchronous position of a horse's hooves while galloping. Since that time private industry, government, and enthusiasts have been measuring dynamic scenarios with high speed video. One challenge that faces the high speed video community is the dynamic range of the sensors. The dynamic range of the sensor is constrained by the bit depth of the analog-to-digital converter, the deep well capacity of the sensor site, and the baseline noise. A typical high speed camera can span a 60 dB dynamic range, 1000:1, natively. More recently the dynamic range has been extended to about 80 dB utilizing different pixel acquisition methods. In this dissertation a method to extend the dynamic range is presented and demonstrated to extend the dynamic range of a high speed camera system to over 170 dB, about 31,000,000:1. The proposed formation methodology is adaptable to any camera combination and almost any needed dynamic range. The dramatic increase in dynamic range is made possible through an adaptation of current high dynamic range image formation methodologies. Due to the high cost of a high speed camera, a minimum number of cameras is desired to form a high dynamic range high speed video system. With a reduced number of cameras spanning a significant range, errors in the formation process compound significantly relative to a normal high dynamic range image. The increase in uncertainty arises from the lack of relevant correlated information for final image formation, necessitating the development of a new formation methodology. In the text that follows, the problem statement and background information are reviewed in depth. The development of a new weighting function, a stochastic image formation process, a tone mapping methodology, and an optimized multi-camera design is presented. The proposed methodologies' effectiveness is compared to current methods throughout the text and a final demonstration is presented. / Ph. D.
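The dynamic range figures quoted above follow the usual 20*log10 convention. The short sketch below shows that arithmetic with made-up sensor numbers; it is not the dissertation's formation method, only an illustration of how adding cameras with offset exposures extends the usable ratio.

```python
import math

def dynamic_range_db(max_signal: float, noise_floor: float) -> float:
    """Dynamic range in decibels using the 20*log10 convention."""
    return 20.0 * math.log10(max_signal / noise_floor)

# Hypothetical single camera: 30,000 e- full well, 30 e- read noise -> 60 dB.
single = dynamic_range_db(30_000, 30)

# A second camera attenuated by 1000x extends the brightest representable
# signal by the same factor while the dimmest stays the same.
combined = dynamic_range_db(30_000 * 1000, 30)
print(single, combined)   # 60.0 dB and 120.0 dB
```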
40

Image Based Visualization Methods for Meteorological Data

Olsson, Björn January 2004
Visualization is the process of constructing methods that are able to synthesize interesting and informative images from data sets, to simplify the process of interpreting the data. In this thesis a new approach to constructing meteorological visualization methods using neural network technology is described. The methods are trained with examples instead of explicitly designing the appearance of the visualization. This approach is exemplified using two applications. In the first, the problem of computing an image of the sky for dynamic weather, that is, taking the current weather state into account, is addressed. It is a complicated problem to tie the appearance of the sky to a weather state. The method is trained with weather data sets and images of the sky to be able to synthesize a sky image for arbitrary weather conditions. The method has been trained with various kinds of weather and image data. The results show that this is a possible way to construct weather visualizations, but more work remains in characterizing the weather state and further refinement is required before the full potential of the method can be explored. This approach would make it possible to synthesize sky images of dynamic weather using a fast and efficient empirical method. In the second application the problem of computing synthetic satellite images from numerical forecast data sets is addressed. In this case a model is trained with preclassified satellite images and forecast data sets to be able to synthesize a satellite image representing arbitrary conditions. The resulting method makes it possible to visualize data sets from numerical weather simulations using synthetic satellite images, but it could also be the basis for algorithms based on a preliminary cloud classification. / Report code: LiU-Tek-Lic-2004:66.
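The train-by-example idea can be illustrated with a generic multi-output regressor that maps a weather-state vector to image pixels. This is only a schematic stand-in for the networks used in the thesis; the feature names, data, and network sizes below are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: each input row is a weather state (e.g. cloud cover,
# humidity, pressure, sun elevation); each target row is a flattened 16x16 image.
rng = np.random.default_rng(0)
weather_states = rng.random((500, 4))
sky_images = rng.random((500, 16 * 16))   # stand-in for real training images

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(weather_states, sky_images)

# Synthesize an image for an arbitrary (made-up) weather state.
new_state = np.array([[0.3, 0.7, 0.5, 0.2]])
synthetic_image = model.predict(new_state).reshape(16, 16)
```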
