1

"Magic Lantern" videodekodér pro fotoaparát Canon 5D / Magic Lantern Video Decoder for Canon 5D Camera

Škvařilová, Radka January 2015 (has links)
This thesis proposes a decoder for video recorded with the Magic Lantern software, which can be installed on the Canon 5D. The video is notable for its 14-bit raw format and can therefore produce very high-quality output. The aim of the thesis is to split the video into individual frames in a suitable format that can also handle high dynamic range image formats.
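The core decoding step described above is unpacking the 14-bit raw samples of each frame into a standard integer image. Below is a minimal sketch of that step, assuming samples are packed contiguously MSB-first; Magic Lantern's actual bit/byte ordering and frame geometry may differ, and the width/height values are illustrative only.

```python
import numpy as np

def unpack_14bit(raw_bytes: bytes, width: int, height: int) -> np.ndarray:
    """Unpack 14-bit packed samples into a (height, width) uint16 array.

    Assumes contiguous MSB-first packing; treat as a sketch of the
    unpacking step, not a drop-in Magic Lantern decoder.
    """
    bits = np.unpackbits(np.frombuffer(raw_bytes, dtype=np.uint8))
    n = width * height
    bits = bits[: n * 14].reshape(n, 14)
    # Reassemble each group of 14 bits into one 16-bit sample (max 16383).
    weights = 1 << np.arange(13, -1, -1)
    samples = (bits * weights).sum(axis=1).astype(np.uint16)
    return samples.reshape(height, width)

# Hypothetical usage: decode one frame, then hand it to an HDR-capable
# format (e.g. 16-bit TIFF or OpenEXR) for further processing.
# frame = unpack_14bit(frame_bytes, width=1920, height=1080)
```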
2

Development of High Speed High Dynamic Range Videography

Griffiths, David John 09 February 2017 (has links)
High speed video has been a significant tool for unraveling the quantitative and qualitative assessment of phenomena that are too fast to readily observe. It was first used in 1852 by William Henry Fox Talbot to settle a dispute with reference to the synchronous position of a horse's hooves while galloping. Since that time private industry, government, and enthusiasts have been measuring dynamic scenarios with high speed video. One challenge that faces the high speed video community is the dynamic range of the sensors. The dynamic range of the sensor is constrained by the bit depth of the analog-to-digital converter, the well capacity of the sensor site, and baseline noise. A typical high speed camera can span a 60 dB dynamic range, 1000:1, natively. More recently the dynamic range has been extended to about 80 dB utilizing different pixel acquisition methods. In this dissertation a method to extend the dynamic range is presented and demonstrated, extending the dynamic range of a high speed camera system to over 170 dB, about 31,000,000:1. The proposed formation methodology is adaptable to any camera combination and almost any needed dynamic range. The dramatic increase in dynamic range is made possible through an adaptation of current high dynamic range image formation methodologies. Due to the high cost of a high speed camera, a minimum number of cameras is desired to form a high dynamic range high speed video system. With a reduced number of cameras spanning a significant range, the errors in the formation process compound significantly relative to a normal high dynamic range image. The increase in uncertainty is created by the lack of relevant correlated information for final image formation, necessitating the development of a new formation methodology. In the text that follows, the problem statement and background information are reviewed in depth. The development of a new weighting function, stochastic image formation process, tone mapping methodology, and optimized multi-camera design is presented. The proposed methodologies' effectiveness is compared to current methods throughout the text and a final demonstration is presented. / Ph. D. / High speed video is a tool that has been developed to capture events that occur faster than a human can observe. The use and prevalence of high speed video is rapidly expanding as cost drops and ease of use increases. It is currently used in private and government industries for quality control, manufacturing, and test evaluation, and in the entertainment industry for movie making and sporting events. Due to the specific hardware requirements when capturing high speed video, the dynamic range, the ratio of the brightest measurement to the darkest measurement the camera can acquire, is limited. The dynamic range limitation can be seen in a video as a white or black region with no discernible detail where there should be some. This is referred to as a region of oversaturation or undersaturation. Presented in this document is a new method to capture high speed video utilizing multiple commercially available high speed cameras. An optimized camera layout is presented and a mathematical algorithm is developed for the formation of a video that is never over- or under-saturated using a minimum number of cameras. This was done to reduce the overall cost and complexity of the setup while retaining an accurate image.
The concept is demonstrated with several examples of both controlled tests and explosive tests filmed up to 3,300 times faster than a standard video, with a dynamic range spanning over 310,000 times the capabilities of a standard high speed camera. The technology developed in this document can be used in the previously mentioned industries whenever the content being filmed oversaturates the imager. It has been developed to be scalable in order to capture extremely large dynamic range scenes, cost efficient to broaden applicability, and accurate to allow for a fragment-free final image.
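The abstract's formation step builds on conventional multi-exposure HDR fusion, where each camera's frame contributes to a radiance estimate through a weighting function that favors well-exposed pixels. The sketch below shows that conventional baseline only; the dissertation's stochastic formation process and its specific weighting function are not reproduced here, and the frame/exposure inputs are placeholders.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Baseline weighted-average HDR formation from multiple exposures.

    frames: list of same-shape float arrays in [0, 1] with linear response.
    exposure_times: matching list of exposure times in seconds.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        # Hat weighting: trust mid-tones, distrust near-saturated and
        # near-black pixels, which carry little reliable information.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # per-frame estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-8)
```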
3

Real-time image based lighting with streaming HDR-light probe sequences

Hajisharif, Saghi January 2012 (has links)
This work presents a framework for shading virtual objects using high dynamic range (HDR) light probe sequences in real time. The method uses an HDR environment map of the scene, captured in an on-line process by an HDR video camera, as a light probe. In each frame of the HDR video, an optimized CUDA kernel projects the incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer, the radiance calculation reduces to a low-order dot product between the lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve consistent illumination in real time, with the flexibility to respond to dynamic changes in the real environment. We use low-order spherical harmonics for representing both lighting and transfer functions to avoid aliasing.
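The precomputed radiance transfer step described in the abstract reduces per-vertex shading to a dot product of spherical-harmonic coefficient vectors. Below is a minimal illustration of that reduction, assuming 3 SH bands (9 coefficients); the function and variable names are illustrative, not the thesis's actual API, and the coefficient values are placeholders.

```python
import numpy as np

def shade_vertex(light_coeffs, transfer_coeffs):
    """PRT shading: outgoing radiance ~ dot(lighting SH, transfer SH).

    Both arguments are length-(n*n) spherical-harmonic coefficient
    vectors; lighting comes from projecting the current HDR light probe,
    transfer coefficients are precomputed offline per vertex.
    """
    return float(np.dot(light_coeffs, transfer_coeffs))

# Example with 3 SH bands (9 coefficients), matching the low-order
# representation mentioned in the abstract; values are placeholders.
L = np.random.rand(9)   # lighting projected from the current HDR frame
T = np.random.rand(9)   # precomputed transfer coefficients for one vertex
radiance = shade_vertex(L, T)
```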
