21
Survey and Evaluation of Tone Mapping Operators for HDR-video. Eilertsen, Gabriel; Unger, Jonas; Wanat, Robert; Mantiuk, Rafal. January 2013.
This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing variations of the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have now made it possible to capture HDR video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing. Our survey is meant to summarize the state of the art in video tone mapping and, as exemplified in Figure 1 (right), analyze differences in the operators' response to temporal variations. In contrast to other studies, we evaluate TMO performance according to each operator's actual intent, such as producing the image that best resembles the real-world scene, that subjectively looks best to the viewer, or that fulfills a certain artistic requirement. The unique strength of this work is that we use real, high-quality HDR video sequences, see Figure 1 (left), as opposed to synthetic images or footage generated from still HDR images.
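As a rough illustration of what such a temporal model typically adds on top of a still-image operator (this is not code from the survey or from any specific TMO it covers), the sketch below applies a Reinhard-style global curve whose adaptation luminance is low-pass filtered over time to suppress flicker; all parameter names and values are illustrative.

```python
import numpy as np

def tone_map_video(hdr_frames, key=0.18, alpha=0.1):
    """Tone map a sequence of HDR luminance frames with temporal smoothing.

    hdr_frames : iterable of 2D numpy arrays holding scene luminance.
    key        : target mid-grey of the Reinhard-style global operator.
    alpha      : smoothing factor for the adaptation luminance
                 (smaller = slower temporal adaptation).
    """
    adapt_lum = None
    for frame in hdr_frames:
        # Log-average luminance of the current frame (the usual "scene key").
        log_avg = np.exp(np.mean(np.log(frame + 1e-6)))
        # Temporal filter: blend with the previous adaptation level to
        # suppress flicker when scene brightness changes between frames.
        adapt_lum = (log_avg if adapt_lum is None
                     else (1 - alpha) * adapt_lum + alpha * log_avg)
        # Reinhard-style global compression using the filtered adaptation.
        scaled = key / adapt_lum * frame
        ldr = scaled / (1.0 + scaled)
        yield np.clip(ldr, 0.0, 1.0)
```

Without the temporal filter, per-frame normalization can make the output brightness jump between frames, which is exactly the kind of temporal artifact the surveyed operators try to control.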
22
Time lapse HDR: time lapse photography with high dynamic range images. Clark, Brian Sean. 29 August 2005.
In this thesis, I present an approach to a pipeline for time lapse photography using conventional digital images converted to HDR (high dynamic range) images, rather than conventional digital or film exposures. Using this method, it is possible to capture a greater level of detail and a different look than one would get from a conventional time lapse image sequence. With HDR images properly tone-mapped for display on standard devices, information in shadows and hot spots is not lost, and certain details are enhanced.
23
Pokročilý prohlížeč HDR obrazů / Advanced HDR image viewer. Wirth, Michal. January 2017.
The primary purpose of this thesis is to determine the criteria for a high dynamic range (HDR) image viewer as required by computer graphics artists and other users who work with HDR images produced by physically based renderers on a daily basis. An overview of already existing solutions is also presented. Based on both, a new HDR viewer is designed and implemented with an emphasis on memory and performance efficiency. For these purposes, two alternative image data layouts, Array-of-Structures (AoS) and Structure-of-Arrays (SoA), are discussed, and their impact is measured on the speed of an algorithm for changing image saturation, which was selected as a representative part of the viewer's tone mapping process. It turned out that the latter layout allows the algorithm to run about 3 times faster or more under the conditions of the defined testing environment. The thesis has two main contributions. First, it gives these users a tool which could help them when working with HDR images. Second, it indicates that there may be potential for a significant speed-up of tone mapping implementations.
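As a hedged, minimal sketch of the layout difference the thesis measures (the thesis's own test bed and saturation algorithm are not reproduced here, and a compiled implementation would show the effect more directly than NumPy), the following applies the same saturation adjustment to interleaved AoS pixels and to SoA channel planes:

```python
import numpy as np

# Same pixel data in two layouts:
# AoS: one (N, 3) array of interleaved R, G, B values per pixel.
# SoA: three separate length-N arrays, one per channel.
rng = np.random.default_rng(0)
aos = rng.random((1_000_000, 3), dtype=np.float32)
soa = (aos[:, 0].copy(), aos[:, 1].copy(), aos[:, 2].copy())

def saturate_aos(pixels, s):
    # Per-pixel luminance, broadcast back over the interleaved channels.
    lum = pixels @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    return lum[:, None] + s * (pixels - lum[:, None])

def saturate_soa(r, g, b, s):
    # Each channel is a contiguous plane, so every operation streams
    # through memory one array at a time.
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return lum + s * (r - lum), lum + s * (g - lum), lum + s * (b - lum)
```

Timing the two functions, e.g. with `timeit`, gives a feel for how memory layout affects a per-pixel operation; the roughly threefold speed-up reported in the abstract refers to the thesis's own testing environment.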
24
Omnidirectional High Dynamic Range Imaging with a Moving Camera. Zhou, Fanping. January 2014.
Common cameras with a dynamic range of two orders of magnitude cannot reproduce typical outdoor scenes, whose radiance range spans over five orders of magnitude. Most high dynamic range (HDR) imaging techniques reconstruct the whole dynamic range from exposure-bracketed low dynamic range (LDR) images, but the camera must be kept steady with little or no motion, which is not practical in many cases. Thus, we develop a more efficient framework for omnidirectional HDR imaging with a moving camera.
The proposed framework is composed of three major stages: geometric calibration and rotational alignment, multi-view stereo correspondence, and HDR composition. First, camera poses are determined and omnidirectional images are rotationally aligned. Second, the aligned images are fed into a spherical vision toolkit to find disparity maps. Third, enhanced disparity maps are used to warp differently exposed neighboring images to a target view, and an HDR radiance map is obtained by fusing the registered images in radiance. We develop disparity-based forward and backward image warping algorithms for spherical stereo vision and implement them on the GPU. We also explore techniques for disparity map enhancement, including a superpixel technique and a color model for outdoor scenes.
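As a hedged sketch of the final composition stage (assuming the warped images have been linearized and their exposure times are known; the weighting is a generic Debevec-style hat function, not necessarily the one used in the thesis):

```python
import numpy as np

def fuse_radiance(images, exposure_times):
    """Fuse registered, linearized LDR exposures into one HDR radiance map.

    images         : list of float arrays in [0, 1], already warped/registered.
    exposure_times : matching list of exposure times in seconds.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-range pixels, down-weight under/over-exposed ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)      # per-image radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```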
We examine different factors such as exposure increment step size, sequence ordering, and the baseline between views. We demonstrate the success of the framework on indoor and outdoor scenes and compare our results with two state-of-the-art HDR imaging methods. The proposed HDR framework allows us to capture HDR radiance maps, disparity maps and an omnidirectional field of view, which has many applications such as HDR view synthesis and virtual navigation.
25
The Development of a Genomic Toolbox for Studying the Evolutionary Genetics of Reptilian Lungs Using the Chicken Model. Edvalson, Logan Thomas. 22 November 2022.
There is a vast diversity in tetrapod lung branching morphology. Phylogenetically, much of the pulmonary diversity among vertebrates appears to arise from the way epithelial tubes branch or form saccular (cyst) structures. Fgf10 activity has been shown to play a critical role in regulating branch versus cyst morphology. We hypothesize that the species-specific differences in lung morphology may be primarily due to species-specific differences in Fgf10 expression. To test this hypothesis, we have performed bioinformatic analyses on the Fgf10 locus and have identified a conserved 11 kb noncoding region that potentially contains the Fgf10 lung enhancer. We are taking a large DNA sequence upstream of the Fgf10 gene of the American Alligator and swapping it into the orthologous locus in the genome of chicken primordial germ cells (cPGCs). We are accomplishing these swaps by using a combination of homology directed repair (HDR) and recombinase mediated cassette exchange (RMCE) in cPGCs. These edited cell lines can be used to generate germline chimeric chickens capable of producing offspring that putatively drive Fgf10 expression in the lung under control of regulatory sequences from various other reptiles. We have also generated a cPGC line where, through RMCE, we can easily target any enhancer from any organism to drive a GFP reporter as a means to test the temporal and spatial regulatory characteristics of these enhancers. This work is funded through a BYU Turkey Vaccine Grant and a Skaggs Mentoring Grant.
26
Dosimetric Comparison of Superficial X-Rays and a Custom HDR Surface Applicator for the Treatment of Superficial Cancers. Merz, Brandon A. 12 November 2008.
No description available.
27
Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques. Amoush, Ahmad A. 20 September 2011.
No description available.
28
EXTREME LOW-LIGHT IMAGING OF DYNAMIC HDR SCENES USING DEEP LEARNING METHODS. Yiheng Chi (19234225). 02 August 2024.
Imaging in low light is difficult because few photons can arrive at the sensor in a given time interval. Increasing the exposure time is not always an option, as images will be blurry if the scenes are dynamic. If scenes or objects are moving, one can capture multiple frames with short exposure times and fuse them using carefully designed algorithms; however, aligning the pixels in adjacent frames is challenging due to the high photon shot noise and sensor read noise at low light. If the dynamic range of the scene is high, one needs to further blend multiple exposures from the frames. This blending requires removing spatially varying noise under various lighting conditions, while today's high dynamic range (HDR) fusion algorithms usually assume well-illuminated scenes. Therefore, this low-light HDR imaging problem remains unsolved.

To address these dynamic low-light imaging problems, the research in this dissertation explores both conventional CMOS image sensors and a new type of image sensor, named the quanta image sensor (QIS), develops models of the imaging conditions of interest, and proposes new image reconstruction algorithms based on deep neural networks together with new training protocols to assist the learning. The dissertation targets the reconstruction of dynamic HDR scenes at a light level of 1 photon per pixel (ppp) or less than 1 lux illuminance.
29
Transit dosimetry in 192Ir high dose rate brachytherapy. Ade, Nicholas. 02 December 2010.
Background and purpose: Historically, HDR brachytherapy treatment planning systems ignore the transit dose in the computation of patient dose. However, the total radiation dose delivered during each treatment cycle is equal to the sum of the static dose and the transit dose, and every HDR application therefore results in two radiation doses. Consequently, the absorbed dose to the target volume is more than the prescribed dose as computed during treatment planning. The aim of this study was to determine the magnitude of the transit dose component of two 192Ir HDR brachytherapy units and assess its dosimetric significance.

Materials and methods: Ionization chamber dosimetry systems (well-type and Farmer-type ionization chambers) were used to measure the charge generated during the transit of the 192Ir source from a GammaMed and a Nucletron MicroSelectron HDR afterloader, using single catheters of length 120 cm. Different source configurations were used for the measurements of integrated charge. Two analysis techniques were used for transit time determination: the multiple exposure technique and the graphical solution of zero exposure. The transit time was measured for the total transit of the radioactive source into (entry) and out of (exit) the catheters.

Results: A maximum source transit time of 1.7 s was measured. The transit dose depends on the source activity, source configuration, number of treatment fractions, prescription dose and the type of remote afterloader used. It does not depend on the measurement technique, measurement distance or the analysis technique used for transit time determination.

Conclusion: A finite transit time increases the radiation dose beyond that due to the programmed source dwell time alone. The significance of the transit dose would increase with a decrease in source dwell time or a higher activity source.
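As an illustration of the "graphical solution of zero exposure" idea (the readings below are hypothetical, not data from the thesis): the chamber also collects charge while the source travels into and out of the catheter, so integrated charge versus programmed dwell time forms a straight line with a positive intercept, and extrapolating to zero dwell time isolates the transit component.

```python
import numpy as np

# Hypothetical readings: integrated charge (nC) for several programmed
# dwell times (s) with a fixed source-to-chamber geometry.
dwell_times = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
charge_nC   = np.array([10.4, 20.1, 39.6, 78.5, 117.3])

# Fit Q(t) = m*t + c; the positive intercept c is the charge collected
# during source entry and exit, independent of the programmed dwell time.
m, c = np.polyfit(dwell_times, charge_nC, 1)

# Effective extra dwell time caused by source transit, in seconds.
transit_time = c / m
print(f"slope = {m:.3f} nC/s, intercept = {c:.3f} nC, "
      f"transit time ~= {transit_time:.2f} s")
```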
30
Génération, visualisation et évaluation d’images HDR : application à la simulation de conduite nocturne / Rendering, visualization and evaluation of HDR images: application to driving simulation at night. Petit, Josselin. 03 December 2010.
This PhD thesis sits at the interface of two of the LEPSiS research topics, perception and virtual reality, applied to road transportation. Its objective was to improve the state of the art of computer-generated image rendering for driving simulators, with the perceptual realism of the images as the main focus, notably in high dynamic range conditions (night, glare). The proposed approach puts forward a high dynamic range (HDR) rendering mode that generates images in luminance. The technique reuses classic virtual environments, with a minimum of additional information about the light sources; the existing textures and materials are used for a rendering that is as physically close to reality as possible. The image is then processed by a tone mapping operator, which compresses the luminance dynamic to account for the limited range of the display device while preserving, as far as possible, the perceptual realism of the rendering. The operator was chosen to be suitable for driving simulation, in particular for extreme situations (night, glare, low grazing sun). A glare simulation was also implemented. The whole rendering pipeline runs in real time and has been integrated into the visual loop of the LEPSiS driving simulators. Finally, real-versus-virtual comparisons demonstrated the quality of the HDR rendering obtained. In addition, experiments with subjects, on photographs (with a real reference) and on videos (without reference), showed the better performance of a tone mapping operator equipped with a human visual model for driving simulation, notably through its ability to adapt temporally to luminance variations.
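As a hedged sketch of one common way to add a veiling-glare term to an HDR luminance image before tone mapping (the thesis's actual glare model is not described in the abstract; the threshold, spread and strength below are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_veiling_glare(luminance, threshold=1000.0, sigma=15.0, strength=0.05):
    """Add a simple veiling-glare (bloom) term to an HDR luminance image.

    Bright sources above `threshold` cd/m^2 are spread by a wide Gaussian
    point-spread function and a fraction is added back, approximating the
    intra-ocular scattering that causes glare around headlights at night.
    """
    bright = np.where(luminance > threshold, luminance, 0.0)
    veil = gaussian_filter(bright, sigma=sigma)
    return luminance + strength * veil
```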