31 |
"Magic Lantern" videodekodér pro fotoaparát Canon 5D / Magic Lantern Video Decoder for Canon 5D CameraŠkvařilová, Radka January 2015 (has links)
This thesis presents a design for a decoder for video recorded with the Magic Lantern software, which can be installed on a Canon 5D camera. This video is distinctive for its 14-bit raw format and can therefore produce very high-quality output. The thesis aims to split the video into individual frames in a suitable format, one that can also handle high dynamic range image formats.
|
32 |
Computer vision at low light. Abhiram Gnanasambandam (12863432). 14 June 2022 (has links)
Imaging in low light is difficult because the number of photons arriving at the image sensor is low. This is a major technological challenge for applications such as surveillance and autonomous driving. Conventional CMOS image sensors (CIS) circumvent this issue by using techniques such as burst photography. However, this process is slow and it does not solve the underlying problem that the CIS cannot efficiently capture the signals arriving at the sensors. This dissertation focuses on solving this problem using a combination of better image sensors (Quanta Image Sensors) and computational imaging techniques.
The first part of the thesis involves understanding how quanta image sensors work and how they can be used to solve the low-light imaging problem. The second part is about the algorithms that can deal with images obtained in low light. The contributions in this part include: (1) understanding and proposing solutions for the Poisson noise model, (2) proposing a new machine learning scheme called student-teacher learning for helping neural networks deal with noise, and (3) developing solutions that work not only in low light but across a wide range of signal and noise levels. Using these ideas, we can address a variety of low-light applications, such as color imaging, dynamic scene reconstruction, deblurring, and object detection.
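The photon-counting model behind quanta image sensors can be illustrated with a short sketch (the parameter values are invented for illustration; this is not code from the dissertation). Each binary "jot" reports only whether at least one Poisson-distributed photon arrived during a frame, and the underlying photon rate can be recovered from many binary frames by inverting P(no photon) = exp(-λ):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm: count uniform draws until their product falls below e^-lam.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def estimate_rate(binary_frames):
    # The fraction of frames with >= 1 photon estimates P(k >= 1) = 1 - exp(-lam);
    # inverting this gives the maximum-likelihood photon rate per frame.
    p_hit = sum(binary_frames) / len(binary_frames)
    return -math.log(1.0 - p_hit)

rng = random.Random(0)
true_rate = 0.5  # assumed mean photons per frame at one jot (low-light regime)
frames = [1 if poisson_sample(true_rate, rng) >= 1 else 0 for _ in range(50000)]
estimated = estimate_rate(frames)
```

With enough binary frames the estimate concentrates around the true rate, which is the basic reason oversampled one-bit sensing can recover a full-precision low-light image.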
|
33 |
Uživatelské rozhraní systému pro práci s HDR obrazem / User Interface for HDR Tone Mapping System. Jedlička, Jan. January 2021 (has links)
The goal of this thesis is to improve the graphical user interface of the Tone Mapping Studio (TMS) program. This program is being developed at the Faculty of Information Technology (FIT), Brno University of Technology (BUT), by doc. Ing. Martin Čadík, PhD. The current program uses the Qt3 framework, which is old and not compatible with modern libraries, and has to be rewritten to support the current version, Qt5. I will analyze other programs for working with High Dynamic Range (HDR) images and video, propose changes to improve the interface, and carry out UX tests. The second part consists of comparing the plug-ins for converting images to grayscale that already exist in TMS.
|
34 |
REAL-TIME EMBEDDED ALGORITHMS FOR LOCAL TONE MAPPING OF HIGH DYNAMIC RANGE IMAGES. Hassan, Firas. January 2007 (has links)
No description available.
|
35 |
A Real-Time Implementation of Gradient Domain High Dynamic Range Compression Using a Local Poisson Solver. Vytla, Lavanya. 20 May 2010 (has links)
No description available.
|
36 |
Image-based Material Editing. Khan, Erum. 01 January 2006 (has links)
Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry. We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials. To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust to noise in the individual exposures. This ensures that any noise, which could have adversely affected the shape recovery of the objects, is removed. We also present an algorithm to automatically generate alpha mattes. This algorithm requires as input two images--one where the object is in focus, and one where the background is in focus--and then automatically produces an approximate matte indicating which pixels belong to the object.
The result is then improved by a second algorithm to generate an accurate alpha matte, which can be given as input to our material editing techniques.
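The edge-preserving smoothing step at the heart of the depth-recovery stage can be sketched in one dimension (a generic bilateral filter with invented parameters, not the thesis's exact implementation): each sample is averaged with its neighbours, weighted by both spatial distance and intensity difference, so ripples are flattened while material boundaries survive.

```python
import math

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=4):
    # Edge-preserving smoothing: neighbours are weighted by a spatial Gaussian
    # AND a range Gaussian on intensity difference, so large steps are kept.
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((center - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge with small ripples: the filter flattens the ripples while
# preserving the step, which a plain Gaussian blur would wash out.
profile = [0.1, 0.12, 0.08, 0.1, 0.9, 0.88, 0.92, 0.9]
depth_proxy = bilateral_filter_1d(profile)
```

A smoothed intensity profile like this is what is then reinterpreted as an approximate depth map before computing surface normals.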
|
37 |
Optimizing The High Dynamic Range Imaging Pipeline. Akyuz, Ahmet Oguz. 01 January 2007 (has links)
High dynamic range (HDR) imaging is a rapidly growing field in computer graphics and image processing. It allows capture, storage, processing, and display of photographic information within a scene-referred framework. The HDR imaging pipeline consists of the major steps an HDR image is expected to go through from capture to display. It involves various techniques to create HDR images, pixel encodings and file formats for storage, tone mapping for display on conventional display devices, and direct display on HDR-capable screens. Each of these stages has important open problems, which need to be addressed for a smoother transition to an HDR imaging pipeline. We addressed some of these problems, such as noise reduction in HDR imagery, preservation of color appearance, validation of tone mapping operators, and image display on HDR monitors. The aim of this thesis is thus to present our findings and describe the research we have conducted within the framework of optimizing the HDR imaging pipeline.
|
38 |
Single Shot High Dynamic Range and Multispectral Imaging Based on Properties of Color Filter Arrays. Simon, Paul M. 16 May 2011 (has links)
No description available.
|
39 |
Development of High Speed High Dynamic Range Videography. Griffiths, David John. 09 February 2017 (has links)
High speed video has been a significant tool for the quantitative and qualitative assessment of phenomena that are too fast to observe readily. It was first used in 1852 by William Henry Fox Talbot to settle a dispute over the synchronous position of a horse's hooves while galloping. Since that time private industry, government, and enthusiasts have been measuring dynamic scenarios with high speed video. One challenge facing the high speed video community is the dynamic range of the sensors, which is constrained by the bit depth of the analog-to-digital converter, the deep well capacity of the sensor site, and the baseline noise. A typical high speed camera can span a 60 dB dynamic range, 1000:1, natively. More recently the dynamic range has been extended to about 80 dB using different pixel acquisition methods.
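The dB figures quoted here follow the standard conversion between a linear signal ratio and decibels for image sensors (a reference aside, not taken from the dissertation):

```python
import math

def ratio_to_db(ratio):
    # Dynamic range in decibels for a linear signal (amplitude) ratio.
    return 20 * math.log10(ratio)

def db_to_ratio(db):
    # Inverse conversion: decibels back to a linear ratio.
    return 10 ** (db / 20)

sixty = ratio_to_db(1000)      # a 1000:1 range is 60 dB
eighty = db_to_ratio(80)       # 80 dB corresponds to a 10000:1 range
```

So each additional 20 dB multiplies the spanned brightness ratio by ten, which is why extending a sensor from 60 dB to 80 dB is already a tenfold improvement.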
In this dissertation a method to extend the dynamic range of a high speed camera system to over 170 dB, about 31,000,000:1, will be presented and demonstrated. The proposed formation methodology is adaptable to any camera combination and almost any needed dynamic range. The dramatic increase in the dynamic range is made possible through an adaptation of current high dynamic range image formation methodologies. Due to the high cost of a high speed camera, a minimum number of cameras is desired to form a high dynamic range high speed video system. With a reduced number of cameras spanning a significant range, the errors in the formation process compound significantly relative to a normal high dynamic range image. The increase in uncertainty arises from the lack of relevant correlated information for final image formation, necessitating the development of a new formation methodology.
In the text that follows, the problem statement and background information will be reviewed in depth. The development of a new weighting function, stochastic image formation process, tone mapping methodology, and optimized multi-camera design will be presented. The proposed methodologies' effectiveness will be compared to current methods throughout the text and a final demonstration will be presented. / Ph. D. / High speed video is a tool that has been developed to capture events that occur faster than a human can observe. The use and prevalence of high speed video is rapidly expanding as cost drops and ease of use increases. It is currently used in private and government industries for quality control, manufacturing, and test evaluation, and in the entertainment industry in movie making and sporting events.
Due to the specific hardware requirements of capturing high speed video, the dynamic range, the ratio of the brightest measurement to the darkest measurement the camera can acquire, is limited. The dynamic range limitation can be seen in a video as a white or black region with no discernible detail where there should be some. This is referred to as a region of over saturation or under saturation.
Presented in this document is a new method to capture high speed video utilizing multiple commercially available high speed cameras. An optimized camera layout is presented and a mathematical algorithm is developed for the formation of a video that will never be over or under saturated using a minimum number of cameras. This was done to reduce the overall cost and complexity of the setup while retaining an accurate image. The concept is demonstrated with several examples of both controlled tests and explosive tests filmed up to 3,300 times faster than a standard video, with a dynamic range spanning over 310,000 times the capabilities of a standard high speed camera.
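The core of a multi-camera merge like this can be sketched as a saturation-aware weighted average of per-camera radiance estimates (a generic multi-exposure merge with invented exposure times and a simple hat weight; the dissertation's own weighting function and stochastic formation process are more sophisticated):

```python
def merge_hdr(pixels, exposure_times, saturation=0.95, floor=0.05):
    # Each camera's normalized pixel value v at exposure time t estimates
    # scene radiance v / t. Saturated or near-black measurements carry no
    # usable information, so they receive zero weight.
    num = den = 0.0
    for v, t in zip(pixels, exposure_times):
        if floor < v < saturation:
            w = 1.0 - abs(2.0 * v - 1.0)  # hat weight favouring mid-range values
            num += w * (v / t)
            den += w
    return num / den if den else 0.0

# Two hypothetical cameras viewing the same scene point: the long exposure
# saturates, so the estimate comes from the short exposure alone.
radiance = merge_hdr(pixels=[0.99, 0.40], exposure_times=[1.0, 0.4])
```

With few cameras, each scene radiance may be covered by only one valid measurement, which is exactly why the errors compound and a new formation methodology is needed.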
The technology developed in this document can be used in the previously mentioned industries whenever the content being filmed over saturates the imager. It has been developed so it can be scalable in order to capture extremely large dynamic range scenes, cost efficient to broaden applicability, and accurate to allow for a fragment free final image.
|
40 |
Image Based Visualization Methods for Meteorological Data. Olsson, Björn. January 2004 (has links)
Visualization is the process of constructing methods that are able to synthesize interesting and informative images from data sets, to simplify the process of interpreting the data. In this thesis a new approach to constructing meteorological visualization methods using neural network technology is described. The methods are trained with examples instead of explicitly designing the appearance of the visualization. This approach is exemplified using two applications. In the first, the problem of computing an image of the sky for dynamic weather, that is, taking account of the current weather state, is addressed. It is a complicated problem to tie the appearance of the sky to a weather state. The method is trained with weather data sets and images of the sky to be able to synthesize a sky image for arbitrary weather conditions. The method has been trained with various kinds of weather and image data. The results show that this is a possible way to construct weather visualizations, but more work remains in characterizing the weather state, and further refinement is required before the full potential of the method can be explored. This approach would make it possible to synthesize sky images of dynamic weather using a fast and efficient empirical method. In the second application the problem of computing synthetic satellite images from numerical forecast data sets is addressed. In this case a model is trained with preclassified satellite images and forecast data sets to be able to synthesize a satellite image representing arbitrary conditions. The resulting method makes it possible to visualize data sets from numerical weather simulations using synthetic satellite images, but could also be the basis for algorithms based on a preliminary cloud classification. / Report code: LiU-Tek-Lic-2004:66.
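The train-by-example idea can be illustrated with a toy stand-in: a single linear unit fit by stochastic gradient descent on invented (weather features → sky brightness) pairs. The thesis's actual networks and training data are of course far richer; everything below, including the feature names, is a hypothetical sketch.

```python
import random

def train_linear(examples, lr=0.3, epochs=2000, seed=0):
    # Toy stand-in for a trained visualization network: learn a linear map
    # from weather features (here: cloud cover, sun elevation) to brightness.
    rng = random.Random(seed)
    n = len(examples[0][0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical training pairs: (cloud_cover, sun_elevation) -> sky brightness,
# generated from brightness = -0.5*cloud + 0.8*sun + 0.2 so a fit exists.
data = [((0.0, 1.0), 1.0), ((1.0, 1.0), 0.5), ((0.0, 0.25), 0.4), ((1.0, 0.5), 0.1)]
w, b = train_linear(data)
clear_noon = w[0] * 0.0 + w[1] * 1.0 + b  # predicted brightness, clear sky at noon
```

Once trained, the model synthesizes an output for weather states it never saw, which is the empirical, example-driven alternative to hand-designing the visualization.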
|