1 |
Intermediate View Interpolation of Stereoscopic Images for 3D-Display. Thulin, Oskar. January 2006 (has links)
This thesis investigates how disparity estimation can be used to visualize an object on a 3D screen. The first part examines different methods of disparity estimation, and the second part examines different ways to visualize an object from one or several stereo pairs and a disparity map. The input to the system is one or several stereo pairs, and the output is a sequence of images of the input scene rendered from additional viewing angles. This sequence of images can be shown on Setred AB's 3D screen. The system has strict real-time demands, and the goal is to perform both the disparity estimation and the visualization in real time.

In the first part of the thesis, three different ways to calculate disparity maps are implemented and compared: correlation-based, local structure-based, and phase-based techniques. The correlation-based methods cannot satisfy the real-time demands because of the large number of 2D convolutions required per pixel. The local structure-based methods produce too much noise and cannot satisfy the quality requirements. The phase-based method is therefore by far the best of the three. It has been implemented in Matlab and C, and comparisons between the different implementations are presented.

The quality of the disparity maps is satisfactory, but the real-time demands cannot yet be fulfilled. Future work is therefore to optimize the C code and move some functions to a GPU, both because a GPU can perform calculations in parallel with the CPU and because many of the calculations involve resizing and warping, operations that are well suited to GPU implementation.
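To make the phase-based idea concrete, here is a minimal one-dimensional sketch in Python/NumPy (not code from the thesis): disparity along a scanline is estimated from the phase difference of complex Gabor responses divided by the filter's center frequency. The filter wavelength and width are arbitrary assumptions, and a real implementation would work over multiple scales with a confidence measure.

```python
import numpy as np

def phase_based_disparity(left_row, right_row, wavelength=8.0, sigma=4.0):
    """Illustrative 1-D phase-based disparity estimate along one scanline.

    left_row, right_row: 1-D float arrays (one image row from each view).
    wavelength, sigma: assumed Gabor filter parameters (pixels).
    """
    omega = 2.0 * np.pi / wavelength                 # filter center frequency (rad/pixel)
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)

    resp_l = np.convolve(left_row, gabor, mode="same")
    resp_r = np.convolve(right_row, gabor, mode="same")

    # Phase difference wrapped to (-pi, pi], converted to a pixel shift.
    phase_diff = np.angle(resp_l * np.conj(resp_r))
    return phase_diff / omega
```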
2 |
User Directed View Synthesis on OMAP Processors. Yildiz, Mursel. 01 July 2009 (has links) (PDF)
In this thesis, real-time image rendering for handheld devices is studied, driven by the user's choice of viewpoint and using image frames with corresponding depth maps obtained from two cameras whose positions in the coordinate system are known. The user's viewpoint choice is restricted to the region between the right and left cameras. Occlusion handling methods for image rendering systems are explored and discussed together with frame enhancement techniques. Median filtering is studied for multicolor image frames, and post-processing methods for image enhancement at the end of the rendering algorithm are discussed. The OMAP3530 microprocessor is used as the main processor, running the proposed rendering algorithm with occlusion handling and frame enhancement. The proposed algorithms are implemented separately on the DSP and ARM cores of the OMAP3530, and their performances are evaluated through experiments. Embedded Linux (kernel 2.6.22) is run as the operating system for the applications, and device driver usage under embedded Linux is explored and studied. Three boards are used to realize the proposed system. An OMAP35x EVM board from Mistral Solutions provides the processor, the high-resolution LCD, system monitoring, the user interface, and communication. Two daughter cards are designed for determining the user's viewpoint: the first handles communication with the EVM board and calculates the viewpoint from the input of the second, which carries a single-axis gyro sensor (ADIS16060). A Spartan®-3A DSP family FPGA is used for viewpoint determination; the dedicated DSP slices inside the gate arrays of this FPGA family are utilized and their performance is studied. An asynchronous memory interface, an I2C bus interface, and an SPI interface are studied and implemented on the FPGA.
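The abstract does not reproduce the rendering algorithm itself; purely as an illustrative sketch of the general idea (depth-based forward warping toward a user-selected viewpoint between two rectified cameras, with a z-buffer as one simple form of occlusion handling), the following Python/NumPy fragment may help. All parameter names, and the rectified-camera assumption, are mine rather than the thesis's.

```python
import numpy as np

def warp_to_viewpoint(image, depth, alpha, baseline, focal):
    """Illustrative forward warp of one rectified view toward a virtual camera.

    image: H x W x 3 array, depth: H x W array (same units as baseline),
    alpha in [0, 1]: virtual camera position between left (0) and right (1),
    baseline, focal: assumed rectified-stereo parameters.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)                      # z-buffer for occlusions

    # Horizontal disparity of each pixel for the chosen virtual viewpoint.
    disparity = alpha * baseline * focal / np.maximum(depth, 1e-6)
    cols = np.arange(w)
    for y in range(h):
        tx = np.round(cols - disparity[y]).astype(int)  # target column per pixel
        for x in cols[(tx >= 0) & (tx < w)]:
            if depth[y, x] < zbuf[y, tx[x]]:            # keep the nearest surface
                zbuf[y, tx[x]] = depth[y, x]
                out[y, tx[x]] = image[y, x]
    return out  # disocclusions remain as holes and need filling
```

The holes left by disocclusions are where techniques such as the median filtering and post-processing mentioned above would come in.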
3 |
Automatic Eye Tracking and Intermediate View Reconstruction for 3D Imaging Systems. Bediz, Yusuf. 01 September 2006 (has links) (PDF)
In recent years, the use of 3D display systems has become popular in many application areas. One of the most important issues in using these systems is to render the correct view to the observer based on his or her position. In this thesis, we propose and implement a single-user view rendering system for autostereoscopic/stereoscopic displays. The system can easily be installed on a standard PC with an appropriate video card, together with an autostereoscopic display or stereoscopic glasses (shutter, polarized, Pulfrich, or anaglyph). The proposed system consists of three main blocks: viewpoint detection, viewpoint tracking, and intermediate view reconstruction. The Haar object detection method, which is based on a boosted cascade of simple feature classifiers, is used for viewpoint detection. After detection, feature points are found in the detected region and fed to the feature tracker. The observer's viewpoint is calculated from the tracked position of the observer in the image, and the correct stereoscopic view is then rendered on the display. A 3D warping-based method is used for intermediate view reconstruction. The system is implemented on a computer with a Pentium IV 3.0 GHz processor, using E-D 3D shutter glasses and a Creative NX webcam.
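The abstract does not name the software library used; as an illustration only, the following OpenCV-based Python sketch shows the detect-then-track pattern described above (Haar cascade detection of the observer, then feature-point tracking inside the detected region). The cascade file, camera index, and numeric parameters are assumptions, not details from the thesis.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # assumed cascade file
cap = cv2.VideoCapture(0)                                           # assumed camera index

prev_gray, points = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if points is None:
        # Detection step: boosted cascade of Haar-like features.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        if len(faces):
            x, y, w, h = faces[0]
            mask = np.zeros_like(gray)
            mask[y:y + h, x:x + w] = 255
            points = cv2.goodFeaturesToTrack(gray, maxCorners=30, qualityLevel=0.01,
                                             minDistance=5, mask=mask)
    else:
        # Tracking step: pyramidal Lucas-Kanade optical flow on the feature points.
        points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = points[status.flatten() == 1].reshape(-1, 1, 2)
        if len(points) == 0:
            points = None                      # track lost: re-detect on the next frame
        else:
            view_x = float(points[:, 0, 0].mean())   # horizontal viewpoint estimate
    prev_gray = gray
```

In the full system described above, the tracked horizontal position would then select which intermediate view to reconstruct and render.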