  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysis of Galileo and GPS systems

Zhi, Chen, Qishan, Zhang 10 1900
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes key points of Galileo applications aboard spacecraft and ordinary vehicles. Based on the ephemeris of the Galileo constellation, a mathematical model and processing chain for the high-dynamic signal environment are given; a digital simulation is also completed, and the results are statistically analyzed and presented. On the topic of navigation satellite constellation orbits and visibility, the paper presents the Galileo reference frame, time system, navigation satellite orbit elements, constellation structure, and GDOP calculation. The users considered include low-dynamic as well as high-dynamic spacecraft. A corresponding analysis for GPS is also shown. Regarding the navigation signal structure, the main points are the Galileo system working frequencies, including the E5, E6 and L1 frequency spans, along with the modulation and navigation data, etc. At the same time, this paper compares Galileo with GPS. On the signal communication link, Doppler frequency shift and power level calculations are presented and compared with the GPS system.
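The GDOP calculation mentioned in this abstract follows from the geometry matrix of unit user-to-satellite line-of-sight vectors. As an illustrative sketch (the satellite coordinates below are hypothetical, not taken from the Galileo ephemeris used in the paper):

```python
import numpy as np

def gdop(sat_positions, user_position):
    """Geometric Dilution of Precision from satellite/user geometry.

    Builds the linearized observation matrix H of unit line-of-sight
    vectors plus a receiver-clock-bias column, and returns
    GDOP = sqrt(trace((H^T H)^-1)).
    """
    rows = []
    for sat in sat_positions:
        los = sat - user_position            # line-of-sight vector
        unit = los / np.linalg.norm(los)     # unit vector toward satellite
        rows.append(np.append(-unit, 1.0))   # 4th column: clock bias
    H = np.array(rows)
    Q = np.linalg.inv(H.T @ H)               # geometry covariance factor
    return float(np.sqrt(np.trace(Q)))

# Four satellites in a spread (hypothetical) geometry, user at the origin
sats = np.array([[20e6, 0, 10e6], [-20e6, 0, 10e6],
                 [0, 20e6, 10e6], [0, -20e6, 20e6]])
print(gdop(sats, np.zeros(3)))
```

At least four satellites are needed so that the 4×4 normal matrix is invertible; a lower GDOP indicates a more favorable constellation geometry.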
2

Reconfigurable Snapshot HDR Imaging Using Coded Masks

Alghamdi, Masheal M. 10 July 2021
High Dynamic Range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or on hardware modification to extend the dynamic range. In this thesis, we propose a joint design for snapshot HDR imaging that combines a spatially varying modulation mask in the hardware with a deep learning algorithm to reconstruct the HDR image. With this approach, we achieve a reconfigurable HDR camera design that does not require custom sensors and can instead be switched between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware-software solution offers a flexible yet robust way to modulate per-pixel exposures, and that the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparative analysis demonstrates that our method outperforms the state of the art in terms of visual perception quality. We leverage transfer learning to overcome the lack of sufficiently large HDR datasets, and show how transferring from a different large-scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction.
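A rough sketch of the capture model behind such a coded modulation mask (the mask attenuation values, image size, and 8-bit quantization here are illustrative assumptions; the thesis reconstructs the HDR image with a learned network, not this toy forward model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HDR scene radiance spanning about four orders of magnitude
scene = 10.0 ** rng.uniform(-2, 2, size=(8, 8))

# Spatially varying modulation mask: per-pixel attenuation factors
mask = rng.choice([1.0, 0.25, 0.0625], size=scene.shape)

# Single capture: modulated radiance, clipped and quantized by the sensor
capture = np.clip(scene * mask, 0.0, 1.0)
capture = np.round(capture * 255) / 255   # 8-bit quantization

# Strongly attenuated pixels retain highlight detail that a uniform
# exposure would have clipped; the reconstruction network exploits this.
print(np.count_nonzero(capture < 1.0))
```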
3

Algorithms for compression of high dynamic range images and video

Dolzhenko, Vladimir January 2015
The recent advances in sensor and display technologies have brought about the High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Further, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. The current solutions to this problem tone map the HDR content to fit SDR; however, this approach leads to image quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given the above observations, a research gap was identified: the need for efficient compression algorithms for still images and video that can store the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithm accommodates different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented.
It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. Further, the proposed novel approaches to compressing the tone mapping operator's metadata are shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design-space exploration flow and integrating the high-level systems design framework with domain-specific tools for the synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
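The dynamic range figures quoted in this abstract follow the usual convention DR(dB) = 20·log10(contrast ratio), so the display contrast ratios translate to decibels as in this small sketch:

```python
import math

def dynamic_range_db(contrast_ratio):
    """Dynamic range in decibels for a given luminance contrast ratio."""
    return 20 * math.log10(contrast_ratio)

print(dynamic_range_db(1e5))   # 10^5:1 contrast ratio -> 100.0 dB
print(dynamic_range_db(1e6))   # 10^6:1 contrast ratio -> 120.0 dB
```

This is why a 10^5:1 to 10^6:1 display corresponds to 100-120 dB, well beyond the roughly 70 dB effective range of 8-bit gamma-corrected SDR content.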
4

Spatially Varying Image Based Lighting by Light Probe Sequences: Capture, Processing and Rendering

Unger, Jonas, Gustavson, Stefan, Ynnerman, Anders January 2007
We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high-quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe image into a common frame of reference in world coordinates, and to map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real-world lighting, first by using traditional image based lighting methods with temporally varying light probe illumination, and second by an extension that handles spatially varying lighting conditions across large objects and object motion along an extended path.
5

Survey and Evaluation of Tone Mapping Operators for HDR-video

Eilertsen, Gabriel, Unger, Jonas, Wanat, Robert, Mantiuk, Rafal January 2013
This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing variations in the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have now made it possible to capture HDR video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing. Our survey is meant to summarize the state of the art in video tone mapping and, as exemplified in Figure 1 (right), to analyze differences in the operators' response to temporal variations. In contrast to other studies, we evaluate the TMOs' performance according to their actual intent, such as producing the image that best resembles the real-world scene, that subjectively looks best to the viewer, or that fulfills a certain artistic requirement. The unique strength of this work is that we use real high-quality HDR video sequences, see Figure 1 (left), as opposed to synthetic images or footage generated from still HDR images.
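For readers unfamiliar with TMOs, the classic global photographic operator of Reinhard et al. (2002) is a representative static baseline. The sketch below is illustrative only; it is not one of the temporally filtered operators evaluated in the survey, and the key value and test luminances are assumptions:

```python
import numpy as np

def reinhard_global(luminance, key=0.18, eps=1e-6):
    """Global photographic tone mapping (Reinhard et al. 2002).

    Scales the image by key / log-average luminance, then compresses
    with L / (1 + L), mapping [0, inf) into [0, 1).
    """
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # geometric mean
    scaled = (key / log_avg) * luminance
    return scaled / (1.0 + scaled)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])  # toy HDR luminances
print(reinhard_global(hdr))
```

A video TMO must additionally keep the log-average (and any local adaptation) temporally coherent, which is exactly where the new challenges identified by the survey arise.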
6

Time lapse HDR: time lapse photography with high dynamic range images

Clark, Brian Sean 29 August 2005
In this thesis, I present an approach to a pipeline for time lapse photography using conventional digital images converted to HDR (High Dynamic Range) images (rather than conventional digital or film exposures). Using this method, it is possible to capture a greater level of detail and a different look than one would get from a conventional time lapse image sequence. With HDR images properly tone-mapped for display on standard devices, information in shadows and hot spots is not lost, and certain details are enhanced.
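A minimal sketch of the kind of exposure-bracket merge that underlies such a pipeline, assuming a linear sensor response (real pipelines also recover the camera response curve; the hat weighting and exposure times here are illustrative assumptions):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Weighted HDR merge of linear LDR exposures.

    Each pixel's radiance estimate is a hat-weighted average of
    (pixel / exposure_time) over the bracket, down-weighting values
    near the clipping points 0 and 1.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peak at mid-gray
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)

# Toy bracket: the same scene radiance seen at three exposure times
radiance = np.array([0.05, 0.4, 3.0])
times = [1 / 30, 1 / 125, 1 / 500]
bracket = [np.clip(radiance * t, 0.0, 1.0) for t in times]
print(merge_exposures(bracket, times))
```

For time lapse work, one such merge is performed per time step, and the resulting radiance maps are tone mapped consistently across the sequence.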
7

A web-based approach to image-based lighting using high dynamic range images and QuickTime object virtual reality

Cuellar, Tamara Melissa 10 October 2008
This thesis presents a web-based approach to lighting three-dimensional geometry in a virtual scene. The use of High Dynamic Range (HDR) images for the lighting model makes it possible to convey a greater sense of photorealism than can be provided with a conventional computer generated three-point lighting setup. The use of QuickTime™ Object Virtual Reality to display the three-dimensional geometry offers a sophisticated user experience and a convenient method for viewing virtual objects over the web. With this work, I generate original High Dynamic Range images for the purpose of image-based lighting and use the QuickTime™ Object Virtual Reality framework to creatively alter the paradigm of object VR for use in object lighting. The result is two scenarios: one that allows for the virtual manipulation of an object within a lit scene, and another with the virtual manipulation of light around a static object. Future work might include the animation of High Dynamic Range image-based lighting, with emphasis on such features as depth of field and glare generation.
8

Omnidirectional High Dynamic Range Imaging with a Moving Camera

Zhou, Fanping January 2014
Common cameras, with a dynamic range of about two orders of magnitude, cannot reproduce typical outdoor scenes whose radiance range spans over five orders. Most high dynamic range (HDR) imaging techniques reconstruct the whole dynamic range from exposure-bracketed low dynamic range (LDR) images, but the camera must be kept steady with no or only small motion, which is impractical in many cases. Thus, we develop a more efficient framework for omnidirectional HDR imaging with a moving camera. The proposed framework is composed of three major stages: geometric calibration and rotational alignment, multi-view stereo correspondence, and HDR composition. First, camera poses are determined and omnidirectional images are rotationally aligned. Second, the aligned images are fed into a spherical vision toolkit to find disparity maps. Third, enhanced disparity maps are used to warp differently exposed neighboring images to a target view, and an HDR radiance map is obtained by fusing the registered images in radiance. We develop disparity-based forward and backward image warping algorithms for spherical stereo vision and implement them on the GPU. We also explore techniques for disparity map enhancement, including a superpixel technique and a color model for outdoor scenes. We examine different factors such as exposure increment step size, sequence ordering, and the baseline between views. We demonstrate the success with indoor and outdoor scenes and compare our results with two state-of-the-art HDR imaging methods. The proposed HDR framework allows us to capture HDR radiance maps, disparity maps and an omnidirectional field of view, which has many applications such as HDR view synthesis and virtual navigation.
9

High dynamic simulations for global positioning system receivers

Osmanbhoy, Azhar Haroon Rashid January 2000
No description available.
10

A Psychophysical Evaluation of Inverse Tone Mapping Techniques.

Banterle, F., Ledda, P., Debattista, K., Bloj, Marina, Artusi, A., Chalmers, A. January 2009
In recent years inverse tone mapping techniques have been proposed for enhancing low-dynamic range (LDR) content for a high-dynamic range (HDR) experience on HDR displays, and for image based lighting. In this paper, we present a psychophysical study to evaluate the performance of inverse (reverse) tone mapping algorithms. Some of these techniques are computationally expensive because they need to resolve quantization problems that can occur when expanding an LDR image. Even if they can be implemented efficiently on hardware, the computational cost can still be high. An alternative is to utilize less complex operators; although these may suffer in terms of accuracy. Our study investigates, firstly, if a high level of complexity is needed for inverse tone mapping and, secondly, if a correlation exists between image content and quality. Two main applications have been considered: visualization on an HDR monitor and image-based lighting.
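As a sketch of the "less complex operator" end of the spectrum discussed in this abstract, a deliberately simple expansion operator (the gamma value, expansion exponent, and peak luminance below are illustrative assumptions, not an operator from the study):

```python
import numpy as np

def expand_ldr(ldr, gamma=2.2, peak=1000.0):
    """A deliberately simple inverse tone mapping operator.

    Linearizes gamma-encoded LDR values, then expands them with a
    power curve so that highlights receive most of the extra range,
    mapping [0, 1] onto [0, peak] cd/m^2.
    """
    linear = np.clip(ldr, 0.0, 1.0) ** gamma   # undo display gamma
    return peak * linear ** 1.5                # boost highlights further

ldr = np.linspace(0.0, 1.0, 5)
print(expand_ldr(ldr))
```

Such operators are cheap but ignore the quantization artifacts that the more expensive techniques explicitly resolve, which is precisely the accuracy-versus-complexity trade-off the psychophysical study examines.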
