
Development of High Speed High Dynamic Range Videography

High speed video has been a significant tool for the quantitative and qualitative assessment of phenomena that are too fast to readily observe. Early high speed photography dates to William Henry Fox Talbot's spark-illuminated exposures in the early 1850s, and it was famously used by Eadweard Muybridge in 1878 to settle a dispute over whether all four of a galloping horse's hooves are ever off the ground simultaneously. Since then, private industry, government, and enthusiasts have used high speed video to measure dynamic scenarios. One challenge facing the high speed video community is the dynamic range of the sensors. The dynamic range of a sensor is constrained by the bit depth of the analog-to-digital converter, the full-well capacity of each sensor site, and the baseline noise. A typical high speed camera natively spans a dynamic range of about 60 dB, or 1000:1. More recently, the dynamic range has been extended to about 80 dB by using different pixel acquisition methods.
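For reference, the decibel figures quoted here follow the usual 20 log10 convention for imaging dynamic range, an assumption inferred from the 60 dB and 1000:1 pairing above:

    \mathrm{DR_{dB}} = 20\log_{10}\!\left(\frac{S_{\max}}{S_{\min}}\right), \qquad
    20\log_{10}(1000) = 60\ \mathrm{dB}, \qquad
    20\log_{10}(10^{4}) = 80\ \mathrm{dB}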

In this dissertation a method is presented and demonstrated that extends the dynamic range of a high speed camera system to over 170 dB, about 310,000,000:1. The proposed formation methodology is adaptable to any camera combination and almost any required dynamic range. The dramatic increase in dynamic range is made possible through an adaptation of current high dynamic range image formation methodologies. Because high speed cameras are costly, a high dynamic range high speed video system should use as few cameras as possible. With a reduced number of cameras spanning a significant range, however, the errors in the formation process compound significantly relative to a normal high dynamic range image. The increase in uncertainty arises from the lack of relevant correlated information for final image formation, necessitating the development of a new formation methodology.
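For context, a minimal sketch of the conventional multi-exposure radiance merge that high dynamic range formation methods build on is shown below. This is not the dissertation's stochastic formation process or its new weighting function; the function names, the triangular weight, and the normalized pixel scale are illustrative assumptions only.

    import numpy as np

    def hat_weight(z, z_min=0.0, z_max=1.0):
        # Triangular ("hat") weight: trust mid-range pixels, distrust values
        # near the noise floor or near saturation.
        mid = 0.5 * (z_min + z_max)
        return np.where(z <= mid, z - z_min, z_max - z)

    def merge_hdr(frames, exposure_times):
        # Merge frames of the same scene (pixel values normalized to 0..1)
        # taken at different exposure times.  Each frame is divided by its
        # exposure time to estimate relative scene radiance, then the
        # estimates are combined with a per-pixel weighted average so that
        # over- and under-saturated measurements contribute little.
        num = np.zeros_like(frames[0], dtype=np.float64)
        den = np.zeros_like(frames[0], dtype=np.float64)
        for z, t in zip(frames, exposure_times):
            w = np.clip(hat_weight(z), 1e-6, None)  # floor keeps the denominator nonzero
            num += w * (z / t)
            den += w
        return num / den

    # Example: three synthetic "cameras" spanning a wide exposure range.
    rng = np.random.default_rng(0)
    radiance = rng.uniform(0.001, 100.0, size=(8, 8))
    exposures = [1.0, 0.01, 0.0001]
    frames = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]
    hdr_estimate = merge_hdr(frames, exposures)

In this conventional scheme every camera or exposure contributes everywhere, which is why, as noted above, reducing the number of cameras spanning a very large range leaves little correlated information for the merge and motivates a different formation methodology.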

In the following text the problem statement and background information will be reviewed in depth. The development of a new weighting function, stochastic image formation process, tone mapping methodology, and optimized multi-camera design will be presented. The effectiveness of the proposed methodologies will be compared to current methods throughout the text, and a final demonstration will be presented.

High speed video is a tool developed to capture events that occur faster than a human can observe. Its use and prevalence are expanding rapidly as cost drops and ease of use increases. It is currently used in private industry and government for quality control, manufacturing, and test evaluation, and in the entertainment industry for movie making and sporting events.

Due to the specific hardware requirements of capturing high speed video, the dynamic range (the ratio of the brightest measurement to the darkest measurement the camera can acquire) is limited. The dynamic range limitation can be seen in a video as a white or black region with no discernible detail where there should be some. These are referred to as regions of over-saturation or under-saturation.
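As a simple illustration of what over- and under-saturation mean at the pixel level, the sketch below flags regions with no recoverable detail; the threshold values are assumptions for illustration, not figures from this work.

    import numpy as np

    def saturation_masks(frame, lower=0.02, upper=0.98):
        # Boolean masks of pixels with no recoverable detail, for a frame
        # whose values are normalized to the range 0..1.
        under = frame <= lower   # crushed to black (below the noise floor)
        over = frame >= upper    # clipped to white (at or above the full well)
        return under, over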

Presented in this document is a new method for capturing high speed video using multiple commercially available high speed cameras. An optimized camera layout is presented, and a mathematical algorithm is developed for forming a video that is never over- or under-saturated while using a minimum number of cameras. This reduces the overall cost and complexity of the setup while retaining an accurate image. The concept is demonstrated with several examples of both controlled tests and explosive tests, filmed up to 3,300 times faster than a standard video and with a dynamic range spanning over 310,000 times the capability of a standard high speed camera.
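As a quick consistency check, combining the 1000:1 native range of a standard high speed camera stated earlier with the 310,000 times factor quoted here recovers roughly the 170 dB figure from the technical abstract (again assuming the 20 log10 convention):

    1000 \times 310{,}000 = 3.1 \times 10^{8}, \qquad
    20\log_{10}\!\left(3.1 \times 10^{8}\right) \approx 170\ \mathrm{dB}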

The technology developed in this document can be used in the previously mentioned industries whenever the content being filmed over-saturates the imager. It has been developed to be scalable, in order to capture extremely large dynamic range scenes; cost efficient, to broaden applicability; and accurate, to allow for a fragment-free final image.

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/74990
Date: 09 February 2017
Creators: Griffiths, David John
Contributors: Mechanical Engineering, Wicks, Alfred L., Vick, Brian L., Roan, Michael J., Abbott, A. Lynn, Xuan, Jianhua
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Detected Language: English
Type: Dissertation
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
