1

Object avoidance and wall following using the Kinect

Schwab, Carl William 24 February 2012
The range camera in Microsoft's Kinect, intended for the Xbox 360 gaming console, offers a powerful alternative to the many standard sensors used in robotics for gathering spatial information about a robot's surroundings. The recently released Kinect is the first commercially available product to provide depth data of its resolution and accuracy with a price tag within reach of many robotics projects. The work described in this paper explores the feasibility of using this sensor by developing a robot that relies solely on the Kinect for sensory data. This robot successfully performs standard navigational procedures, demonstrating the possibility of integrating spatial information from the Kinect into a real-time robotics application. This paper documents the techniques used to integrate the Kinect into the system, highlighting the key benefits and limitations of the sensor.
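As a rough illustration of the kind of depth-driven navigation described in this abstract, the sketch below chooses a steering direction from a single row of Kinect depth data. The function name, thresholds, and steering rule are illustrative assumptions, not the thesis's actual method; only the handling of zero-valued (invalid) Kinect pixels reflects a known property of the sensor.

```python
import numpy as np

def steer_from_depth(depth_row, stop_dist=0.6, fov_deg=57.0):
    """Pick a steering angle (degrees) from one row of Kinect depth
    readings given in metres. Hypothetical sketch, not the thesis's
    algorithm.

    The Kinect reports 0 for pixels where no depth could be measured;
    treat those as 'no obstacle seen' so they do not trigger false stops.
    """
    d = np.array(depth_row, dtype=float)     # copy so input is untouched
    d[d == 0] = np.inf                       # invalid pixels -> unknown/far
    angles = np.linspace(-fov_deg / 2, fov_deg / 2, d.size)
    if d.min() > stop_dist:                  # path clear: drive straight
        return 0.0
    # otherwise steer toward the direction with the most free space
    return float(angles[np.argmax(d)])
```

With a clear path the robot drives straight; with an obstacle inside the stop distance it turns toward the deepest visible free space.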
2

Spatially Varying Image Based Lighting by Light Probe Sequences, Capture, Processing and Rendering

Unger, Jonas, Gustavson, Stefan, Ynnerman, Anders January 2007
We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, first by using traditional image based lighting methods and temporally varying light probe illumination, and second by an extension to handle spatially varying lighting conditions across large objects and object motion along an extended path.
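The mapping from a world direction to a pixel in a light probe image can be illustrated with a standard angular-map projection. This is a generic convention commonly used for light probe images, not necessarily the exact mapping of the authors' catadioptric mirror system:

```python
import numpy as np

def direction_to_probe_uv(d):
    """Map a unit world direction to (u, v) in [-1, 1]^2 under an
    angular (light probe) mapping, with +z toward the camera at the
    centre of the image. One common convention; the real mapping
    depends on the probe's mirror geometry and calibration."""
    x, y, z = np.asarray(d, dtype=float)
    s = np.hypot(x, y)
    if s < 1e-12:                       # straight ahead (or directly behind)
        return (0.0, 0.0) if z > 0 else (1.0, 0.0)
    # radius in the image is proportional to the angle from the +z axis
    r = np.arccos(np.clip(z, -1.0, 1.0)) / (np.pi * s)
    return (x * r, y * r)
```

A direction 90° off-axis, such as (1, 0, 0), lands halfway to the image edge at (0.5, 0).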
3

Survey and Evaluation of Tone Mapping Operators for HDR-video

Eilertsen, Gabriel, Unger, Jonas, Wanat, Robert, Mantiuk, Rafal January 2013
This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing of variations in the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have now made it possible to capture HDR-video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing. Our survey is meant to summarize the state-of-the-art in video tone mapping and, as exemplified in Figure 1 (right), analyze differences in their response to temporal variations. In contrast to other studies, we evaluate TMOs' performance according to their actual intent, such as producing the image that best resembles the real world scene, that subjectively looks best to the viewer, or fulfills a certain artistic requirement. The unique strength of this work is that we use real high quality HDR video sequences, see Figure 1 (left), as opposed to synthetic images or footage generated from still HDR images.
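As a minimal illustration of the temporal-artifact problem this survey addresses, the sketch below applies a Reinhard-style global operator whose adaptation key is exponentially smoothed over time, the simplest way to avoid the frame-to-frame brightness flicker of naive per-frame tone mapping. The structure and smoothing constant are illustrative assumptions, not one of the surveyed operators:

```python
import numpy as np

def tonemap_video(frames, a=0.18, alpha=0.9):
    """Global sigmoid tone mapping of HDR luminance frames with a
    temporally filtered log-average key. Illustrative sketch only."""
    smoothed_key = None
    out = []
    for lum in frames:                        # each frame: HDR luminance array
        key = np.exp(np.mean(np.log(lum + 1e-6)))
        if smoothed_key is None:
            smoothed_key = key
        else:                                 # exponential smoothing over time
            smoothed_key = alpha * smoothed_key + (1 - alpha) * key
        scaled = a * lum / smoothed_key
        out.append(scaled / (1 + scaled))     # compress to [0, 1)
    return out
```

A per-frame key would jump with every scene change; the smoothed key reacts gradually, trading flicker for a brief adaptation lag.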
4

ACTIVE SENSING FOR INTELLIGENT ROBOT VISION WITH RANGE IMAGING SENSOR

Fukuda, Toshio, Kubota, Naoyuki, Sun, Baiqing, Chen, Fei, Fukukawa, Tomoya, Sasaki, Hironobu January 2010
No description available.
5

Interest Point Sampling for Range Data Registration in Visual Odometry

PANWAR, VIVEK 07 November 2011
Accurate registration of 3D data is one of the most challenging problems in a number of Computer Vision applications. Visual Odometry is one such application, which determines the motion, or change in position, of a moving rover by registering 3D data captured by an on-board range sensor in a pairwise manner. The performance of Visual Odometry depends upon two main factors, the first being the quality of 3D data, which itself depends upon the type of sensor being used. The second factor is the robustness of the registration algorithm. Where sensors like stereo cameras and LIDAR scanners have been used in the past to improve the performance of Visual Odometry, the introduction of the Velodyne LIDAR scanner is fairly new and has been less investigated, particularly for odometry applications. This thesis presents and examines a new method for registering 3D point clouds generated by a Velodyne scanner mounted on a moving rover. The method is based on one of the most widely used registration algorithms, Iterative Closest Point (ICP). The proposed method is divided into two steps. The first step, which is also the main contribution of this work, is the introduction of a new point sampling method that prudently selects points belonging to the regions of greatest geometric variance in the scan. Interest Point (Region) Sampling plays an important role in the performance of ICP by effectively discounting regions with non-uniform resolution and selecting regions with high geometric variance and uniform resolution. The second step is to use the sampled scan pairs as the input to a new plane-to-plane variant of ICP known as Generalized ICP (GICP). Several experiments have been executed to test the compatibility and robustness of Interest Point Sampling (IPS) for a variety of terrain landscapes.
Through these experiments, which include comparisons of variants of ICP and past sampling methods, this work demonstrates that the combination of IPS and GICP results in the least localization error compared to all other tested methods. / Thesis (Master, Electrical & Computer Engineering), Queen's University, 2011.
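A minimal sketch of variance-based interest point selection in the spirit of the sampling step described above: each point is scored by the "surface variation" of its local neighbourhood (smallest covariance eigenvalue over the eigenvalue sum), and only the highest-scoring fraction is kept. The scoring function, neighbourhood size, and brute-force neighbour search are illustrative choices, not the thesis's exact algorithm:

```python
import numpy as np

def sample_interest_points(points, k=10, keep=0.25):
    """Keep the fraction `keep` of points with the highest local
    surface variation, computed from the covariance of each point's
    k nearest neighbours. Brute-force O(n^2) neighbour search, fine
    for small illustrative clouds. Hypothetical sketch only."""
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    scores = np.empty(len(pts))
    for i in range(len(pts)):
        nbrs = pts[np.argsort(d2[i])[:k]]
        w = np.linalg.eigvalsh(np.cov(nbrs.T))   # ascending eigenvalues
        # smallest eigenvalue ~ 0 on planar patches, > 0 on curved ones
        scores[i] = max(w[0], 0.0) / max(w.sum(), 1e-12)
    n_keep = max(1, int(keep * len(pts)))
    return pts[np.argsort(scores)[-n_keep:]]
```

On a flat plane with one raised point, only points whose neighbourhoods contain the bump score above zero, so the selection concentrates around the geometric feature, which is the behaviour that helps ICP lock onto informative structure.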
6

A Psychophysical Evaluation of Inverse Tone Mapping Techniques.

Banterle, F., Ledda, P., Debattista, K., Bloj, Marina, Artussi, A., Chalmers, A. January 2009
In recent years inverse tone mapping techniques have been proposed for enhancing low-dynamic range (LDR) content for a high-dynamic range (HDR) experience on HDR displays, and for image based lighting. In this paper, we present a psychophysical study to evaluate the performance of inverse (reverse) tone mapping algorithms. Some of these techniques are computationally expensive because they need to resolve quantization problems that can occur when expanding an LDR image. Even if they can be implemented efficiently on hardware, the computational cost can still be high. An alternative is to utilize less complex operators, although these may suffer in terms of accuracy. Our study investigates, firstly, if a high level of complexity is needed for inverse tone mapping and, secondly, if a correlation exists between image content and quality. Two main applications have been considered: visualization on an HDR monitor and image-based lighting.
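One of the "less complex operators" this kind of study compares against can be as simple as a global expansion: invert the display gamma and rescale to a target peak luminance. The gamma and peak values below are illustrative assumptions, not parameters of any of the evaluated algorithms, and no quantization repair is attempted:

```python
import numpy as np

def expand_ldr(ldr, gamma=2.2, peak=1000.0):
    """Map display-referred LDR values in [0, 1] to HDR radiance
    (nominally cd/m^2) by inverting an assumed display gamma and
    scaling to a target peak luminance. Minimal illustrative sketch."""
    ldr = np.clip(np.asarray(ldr, dtype=float), 0.0, 1.0)
    return peak * ldr ** gamma
```

Such an operator is cheap and monotonic but, as the study's question implies, it cannot recover detail lost to clipping or smooth over quantization steps in the expanded highlights.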
7

Incident Light Fields

Unger, Jonas January 2009
Image based lighting (IBL) is a computer graphics technique for creating photorealistic renderings of synthetic objects such that they can be placed into real world scenes. IBL has been widely recognized and is today used in commercial production pipelines. However, the current techniques only use illumination captured at a single point in space. This means that traditional IBL cannot capture or recreate effects such as cast shadows, shafts of light or other important spatial variations in the illumination. Such lighting effects are, in many cases, artistically created or are there to emphasize certain features, and are therefore a very important part of the visual appearance of a scene. This thesis and the included papers present methods that extend IBL to allow for capture and rendering with spatially varying illumination. This is accomplished by measuring the light field incident onto a region in space, called an Incident Light Field (ILF), and using it as illumination in renderings. This requires the illumination to be captured at a large number of points in space instead of just one. The complexity of the capture methods and rendering algorithms is then significantly increased. The technique for measuring spatially varying illumination in real scenes is based on capture of High Dynamic Range (HDR) image sequences. For efficient measurement, the image capture is performed at video frame rates. The captured illumination information in the image sequences is processed such that it can be used in computer graphics rendering. By extracting high intensity regions from the captured data and representing them separately, this thesis also describes a technique for increasing rendering efficiency and methods for editing the captured illumination, for example artificially moving or turning on and off individual light sources.
8

High Dynamic Range Video for Photometric Measurement of Illumination

Unger, Jonas, Gustavson, Stefan, Ynnerman, Anders January 2007
We describe the design and implementation of a high dynamic range (HDR) imaging system capable of capturing RGB color images with a dynamic range of 10,000,000 : 1 at 25 frames per second. We use a highly programmable camera unit with high throughput A/D conversion, data processing and data output. HDR acquisition is performed by multiple exposures in a continuous rolling shutter progression over the sensor. All the different exposures for one particular row of pixels are acquired head to tail within the frame time, which means that the time disparity between exposures is minimal, the entire frame time can be used for light integration and the longest exposure is almost the entire frame time. The system is highly configurable, and trade-offs are possible between dynamic range, precision, number of exposures, image resolution and frame rate.
9

Growing neural gas for intelligent robot vision with range imaging camera

Sasaki, Hironobu, Fukuda, Toshio, Satomi, Masashi, Kubota, Naoyuki 09 August 2009
No description available.
10

Laser Triangulation Using Spacetime Analysis

Benderius, Björn January 2007
In this thesis spacetime analysis is applied to laser triangulation in an attempt to eliminate certain artifacts caused mainly by reflectance variations of the surface being measured. It is shown that spacetime analysis does eliminate these artifacts almost completely. It is also shown that, thanks to the spacetime analysis, the shape of the laser beam used is no longer critical, and that in some cases the laser could probably even be exchanged for a non-coherent light source. Furthermore, experiments running the derived algorithm on a GPU (Graphics Processing Unit) are conducted, with very promising results.

The thesis starts by deriving the theory needed for doing spacetime analysis in a laser triangulation setup, taking perspective distortions into account; several experiments evaluating the method are then conducted.
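The key idea behind spacetime analysis, estimating *when* the sweeping laser peak passes each pixel rather than *where* the peak sits in each frame, can be sketched with a generic sub-sample peak estimator. The quadratic interpolation below is a common three-point technique, not the thesis's full method, but it shows the reflectance insensitivity: scaling the intensity profile by a constant reflectance factor leaves the estimate unchanged.

```python
import numpy as np

def peak_time(intensity):
    """Sub-sample estimate of when the sweeping laser peak passed a
    pixel, via a three-point parabola fit around the brightest
    temporal sample. Generic sketch, not the thesis's algorithm."""
    y = np.asarray(intensity, dtype=float)
    i = int(np.argmax(y))
    if i == 0 or i == y.size - 1:        # peak at boundary: no interpolation
        return float(i)
    a, b, c = y[i - 1], y[i], y[i + 1]
    # vertex of the parabola through the three samples
    return i + 0.5 * (a - c) / (a - 2 * b + c)
```

Because the offset depends only on ratios of intensity samples, multiplying the whole profile by a reflectance factor yields exactly the same timing, and hence the same range value.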
