61

Design of a Depth-Image-Based Rendering (DIBR) 3D Stereo View Synthesis Engine

Chang, Wei-Chun 01 September 2011
Depth-Image-Based Rendering (DIBR) is a popular method for generating a 3D virtual image at a different view position from an image and a depth map. In general, DIBR consists of two major operations: image warping and hole filling. Image warping calculates the disparity of each pixel from the depth map, given information about the viewer and the display screen, and shifts the pixel accordingly. Hole filling determines the color of pixel locations that no pixel of the original image maps to after warping. Although many hole-filling methods exist for determining the colors of these blank pixels, some undesirable artifacts are still observed in the synthesized virtual image. In this thesis, we present an approach that examines the geometry information near regions of blank pixels in order to reduce artifacts near the edges of objects. Experimental results show that the proposed design generates more natural shapes around object edges, at the cost of more hardware and computation time.
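The two operations the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration rather than the thesis design: `baseline`, `focal`, `z_near`, and `z_far` are assumed viewing/display parameters, and the hole filling shown is the naive copy-from-the-left strategy whose edge artifacts the thesis aims to reduce.

```python
import numpy as np

def dibr_shift(image, depth, baseline=0.05, focal=500.0, z_near=1.0, z_far=10.0):
    """Synthesize a virtual view by shifting each pixel by its disparity.

    `baseline`, `focal`, `z_near`, and `z_far` are illustrative camera/display
    parameters, not values from the thesis.
    """
    h, w = depth.shape
    # Map 8-bit depth to metric Z, then to a per-pixel disparity in pixels.
    z = z_near + (depth.astype(np.float64) / 255.0) * (z_far - z_near)
    disparity = np.round(baseline * focal / z).astype(int)

    virtual = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):                      # image warping
        for x in range(w):
            xv = x - disparity[y, x]
            if 0 <= xv < w:
                virtual[y, xv] = image[y, x]
                filled[y, xv] = True
    for y in range(h):                      # naive hole filling
        for x in range(1, w):
            if not filled[y, x] and filled[y, x - 1]:
                virtual[y, x] = virtual[y, x - 1]
                filled[y, x] = True
    return virtual, filled
```

A real DIBR engine replaces the left-copy hole filling with an interpolation that respects object boundaries, which is exactly where the geometry-aware approach of the thesis comes in.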
62

Hardware Design for Disparity Estimation Using Dynamic Programming

Wang, Wen-Ling 11 September 2012
Recently, stereo vision has been widely used in many applications, and the depth map is important information in stereo vision. In general, a depth map can be generated from the disparity obtained by stereo matching on two input images taken from different viewing positions. Due to the large computational complexity, software implementations of stereo matching usually cannot achieve real-time speed. In this thesis, we propose hardware implementations of stereo matching to speed up the generation of the depth map. The proposed design uses a global optimization method, dynamic programming, to find the disparity from two input images, a left image and a right image. It consists of three main processing steps: matching cost computation (M.C.C.), minimum cost accumulation (M.C.A.), and disparity optimization (D.O.). The thesis examines the impact of different pixel operation orders in the M.C.C. and M.C.A. modules on hardware cost. For the D.O. module, we use two different approaches: a systolic-like structure with streaming processing, and a memory-based design with low hardware cost. The final architecture, with pipelining and the memory-based D.O., saves considerable hardware cost and achieves a high throughput rate when processing a sequence of image pairs.
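A software sketch of the three processing steps, reduced to a single scanline, might look as follows. The matching cost (absolute intensity difference) and the smoothness `penalty` are illustrative assumptions, not values from the thesis, and the hardware pipelining is of course absent here.

```python
import numpy as np

def scanline_dp(left_row, right_row, max_disp=3, penalty=2.0):
    """Disparity for one scanline via dynamic programming.

    Mirrors the three steps named in the abstract in simplified form:
    matching cost computation, minimum cost accumulation along the
    scanline with a smoothness penalty, and disparity optimization by
    backtracking. `penalty` is an illustrative smoothness weight.
    """
    w = len(left_row)
    cost = np.full((w, max_disp + 1), 1e9)
    for x in range(w):                           # matching cost computation
        for d in range(max_disp + 1):
            if x - d >= 0:
                cost[x, d] = abs(float(left_row[x]) - float(right_row[x - d]))
    acc = cost.copy()
    back = np.zeros((w, max_disp + 1), dtype=int)
    for x in range(1, w):                        # minimum cost accumulation
        for d in range(max_disp + 1):
            prev = acc[x - 1] + penalty * np.abs(np.arange(max_disp + 1) - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] += prev[back[x, d]]
    disp = np.zeros(w, dtype=int)                # disparity optimization
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(w - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp
```

The row-by-row data flow of this recurrence is what makes the algorithm amenable to the streaming and memory-based hardware structures the thesis compares.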
63

Design Of An Image Acquisition Setup For Mimic Tracking

Akoner, Ozguler Mine 01 September 2007
With the advances in computer technology and the changing needs of people's daily lives, robots have started to offer alternative solutions. As one of these solutions, the branch of humanoid robots emerged as advanced robots that can interact with people. Robot faces are one of the most effective means of interacting with people, since they can express emotions and reactions through facial mimics. However, the development of realistic robot faces requires knowledge of the trajectories and displacements of actual facial mimics. In this study, a setup (both hardware and software) is developed for tracking critical points on a human face while it exhibits mimics. The mimic trajectories will be extracted from the outputs of this setup. The setup is designed and manufactured to be robust to external effects, so that with a single camera calibration procedure the 3D reconstruction can be carried out several times. The setup consists of two webcams specially oriented for mimic tracking. The images taken from the cameras are corrected; their features are extracted using image processing algorithms; the centroids of the features are found; correspondence is carried out; and the reconstruction is made. This system can also be used for general point tracking or volumetric measurement purposes.
64

Movement and Force Measurement Systems as a Foundation for Biomimetic Research on Insects

Mills, Clayton Harry January 2008
During the research and development undertaken, two major systems were designed: a prototype force sensor and a movement measurement system. Both systems were designed for the intended field of insect research, but were developed using very different underlying principles. The force measurement system uses the piezo-electric effect induced in piezo-electric bimorph elements to produce a measure of the force exerted on the sensor. The movement measurement system, on the other hand, uses computer vision (CV) techniques to find and track the three-dimensional (3D) positions of markers on the insect, and thereby record the insect's pose. To further increase the usefulness of the two measurement systems, a prototype graphical user interface (GUI) was produced to encapsulate their functionality and provide the end user with a more complete and functional research tool. The GUI allows a user to easily define the parameters required for the CV operations and presents the results of these operations in an easily understood visual format. The GUI is also intended to display force measurements graphically so that they are easily interpreted. The GUI has been named Weta Evaluation Tracking and Analysis (WETA). Testing of the developed prototype force sensor shows that the piezo-electric bimorph elements provide an adequate measure of the force exerted on them when the voltage signal produced by an element is integrated. Furthermore, the testing showed that the developed force sensor layout produces an adequate measure of forces in the two horizontal linear degrees of freedom (DOF), but the prototype did not produce a good measure of forces in the vertical linear DOF. Development and testing of the movement measurement system showed that stereo vision techniques can produce accurate measurements of 3D position using two cameras. However, when testing these techniques with one of the cameras replaced by a mirror, the system produced less than satisfactory results. Further testing of the feature detection and tracking portions of the movement system showed that even though these subsystems were implemented in a relatively simple way, they were still adequate for their associated operations. It was also found that some simple changes to the colour spaces used during feature detection greatly improved the performance of the feature detection system under varying illumination. The tracking system, on the other hand, operated adequately using just its basic principles. During the development of both prototype measurement systems, a number of conclusions were formulated that indicated areas of future development. These areas include advanced force sensor configurations, force sensor miniaturisation, the design of a force plate, improvement of feature detection and tracking, and refinement of the stereo vision equipment.
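The force measurement principle mentioned above, recovering a force-proportional signal by integrating the bimorph's voltage output, can be illustrated with a cumulative trapezoidal integration. The proportionality between the integral and force is an assumption for illustration; a real sensor needs a calibrated gain and drift compensation.

```python
import numpy as np

def integrate_piezo(voltage, dt):
    """Cumulative trapezoidal integral of a sampled piezo voltage signal.

    The bimorph's output is related to the rate of change of the applied
    load, so integrating the voltage yields a signal proportional to the
    force. The sensor gain (from calibration) is omitted here.
    """
    v = np.asarray(voltage, dtype=float)
    out = np.zeros_like(v)
    out[1:] = np.cumsum((v[1:] + v[:-1]) * 0.5 * dt)
    return out
```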
65

Audio-Visual Capabilities of the NAO Humanoid Robot

Sanchez-Riera, Jordi 14 June 2013
In this thesis we investigate the complementarity of auditory and visual sensory data for building a high-level interpretation of a scene. The audiovisual (AV) input received by the robot is a function both of the external environment and of the robot's actual location, which is closely tied to its actions. Current research in AV scene analysis has tended to focus on fixed observers. However, psychophysical evidence suggests that humans use small head and body movements to optimize the position of their ears with respect to the source. Similarly, by walking or turning, the robot may be able to improve the incoming visual data. For example, in binocular perception it is desirable to reduce the viewing distance to an object of interest; this allows the 3D structure of the object to be analyzed at a higher depth resolution.
66

Motion Estimation Using Complex Discrete Wavelet Transform

Sari, Huseyin 01 January 2003
The estimation of optical flow has become a vital research field in image sequence analysis, especially in the past two decades, with applications in many areas such as stereo optics, video compression, robotics, and computer vision. In this thesis, the complex-wavelet-based algorithm for the estimation of optical flow developed by Magarey and Kingsbury is implemented and investigated. The algorithm is based on a complex version of the discrete wavelet transform (CDWT), which analyzes an image through blocks of filtering with a set of Gabor-like kernels at different scales and orientations. The output is a hierarchy of scaled and subsampled orientation-tuned subimages. The motion estimation algorithm is based on the relationship between translations in the image domain and phase shifts in the CDWT domain, which holds thanks to the shiftability and interpolability properties of the CDWT. Optical flow is estimated using this relationship at each scale, in a coarse-to-fine (hierarchical) manner, where estimates from coarser scales are used to guide and refine the estimation at finer scales. The performance of the motion estimation algorithm is investigated on various image sequences, and the effects of options in the algorithm, such as curvature correction and the interpolation kernel between levels, and of parameter values, such as the confidence threshold, the maximum number of CDWT levels, and the minimum finest level of detail, are also examined and discussed. The test results show that the method is superior to other well-known algorithms in estimation accuracy, especially under strong illumination variations and additive noise.
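The core relationship the algorithm exploits, that a translation in the image domain appears as a phase shift in the complex subband domain, can be demonstrated in one dimension with a single Gabor-like filter. The filter shape and `center_freq` below are illustrative assumptions; the actual CDWT uses a hierarchy of such filters across scales and orientations.

```python
import numpy as np

def phase_shift_estimate(sig_a, sig_b, center_freq=0.25):
    """Estimate the translation between two 1-D signals from the phase
    difference of a complex Gabor-like filter response. `center_freq` is
    in cycles per sample and is an illustrative filter tuning.
    """
    n = np.arange(-8, 9)
    gabor = np.exp(-n**2 / 16.0) * np.exp(2j * np.pi * center_freq * n)
    ra = np.convolve(sig_a, gabor, mode="valid")
    rb = np.convolve(sig_b, gabor, mode="valid")
    # For r = conv(s, g), shifting s by d multiplies the subband response
    # at a fixed position by exp(-i*2*pi*f*d), so d = -dphi / (2*pi*f).
    k = int(np.argmax(np.abs(ra)))        # evaluate at the response peak
    dphi = np.angle(rb[k] * np.conj(ra[k]))
    return -dphi / (2 * np.pi * center_freq)
```

Because phase is only unambiguous within one filter period, a single filter can resolve only small shifts; this is why the full algorithm works coarse-to-fine across scales.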
67

Active Stereo Vision: Depth Perception For Navigation, Environmental Map Formation And Object Recognition

Ulusoy, Ilkay 01 September 2003
Stereo-vision-based navigation and mapping are used in very few mobile robot applications, because dealing with stereo images is hard and time consuming. Despite these problems, stereo vision is still one of the most important resources through which a mobile robot can know its world, because imaging provides much more information than most other sensors. Real robotic applications are complicated because, besides the problem of deciding how the robot should behave to complete the task at hand, controlling the robot's internal parameters brings a high computational load. Thus, it is preferable to find the strategy to be followed in a simulated world and then apply it on a real robot. In this study, we describe an algorithm for object recognition and cognitive map formation using stereo image data in a 3D virtual world, in which 3D objects and a robot with an active stereo imaging system are simulated. The stereo imaging system is simulated so that the properties of the actual human visual system are parameterized. Only the stereo images obtained from this world are supplied to the virtual robot. By applying our disparity algorithm, a depth map for the current stereo view is extracted. Using the depth information for the current view, a cognitive map of the environment is updated gradually while the virtual agent explores the environment. The agent explores its environment intelligently, using the current view and the environmental map built up to date. In addition, if a new object is observed during exploration, the robot turns around it, obtains stereo images from different directions, and extracts a 3D model of the object. Using the available set of possible objects, it then recognizes the object.
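The step from a disparity map to the depth map used for the cognitive map follows the standard rectified-stereo relation Z = f*B/d. A minimal sketch; the focal length and baseline in the usage below are illustrative, since in practice they come from the (simulated) camera parameters:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (in pixels) to metric depth via Z = f*B/d.

    Zero disparities (no match found) are mapped to infinity. `focal_px`
    and `baseline_m` would come from camera calibration.
    """
    d = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)
```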
68

Stereo based Visual Odometry

January 2010
The exponential rise of unmanned aerial vehicles has created a need for accurate pose estimation under extreme conditions. Visual Odometry (VO) is the estimation of the position and orientation of a vehicle from a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems, and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO system, stereo-based visual odometry (SVO), in the presence of various deterrent factors such as shadows, extremely bright outdoor scenes, and wet conditions. To allow the implementation of VO on any generic vehicle, a discussion of porting the VO algorithm to Android handsets is also presented. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed; to achieve this, we use the sum of absolute differences technique for stereo matching on rectified and pre-filtered stereo frames, with epipolar geometry used to simplify the matching problem. The second step involves feature detection and temporal matching: features are detected with the Harris corner detector and matched between consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3D coordinates of the matched features are computed from the disparity map obtained in the first step and are related to each other by a translation and a rotation, which are computed by least-squares minimization with the aid of the singular value decomposition; random sample consensus (RANSAC) is used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the final ground-truth position obtained from GPS. The SVO showed an error of around 1% under normal conditions for a path length of 60 m, and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% for path lengths of 20 m and 100 m respectively. M.S. Thesis, Electrical Engineering, 2010.
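The third step, least-squares estimation of the rotation and translation between two matched 3D point sets via the singular value decomposition, corresponds to the classic Arun/Kabsch absolute-orientation procedure. A minimal sketch (the RANSAC loop that wraps this in the thesis is omitted):

```python
import numpy as np

def rigid_transform_svd(p, q):
    """Least-squares rotation R and translation t such that q ~ R @ p + t.

    p and q are (N, 3) arrays of matched 3-D points. Uses the SVD of the
    cross-covariance matrix, with a determinant check to avoid returning
    a reflection instead of a proper rotation.
    """
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)                  # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cq - r @ cp
    return r, t
```

In the full pipeline this solver is run on RANSAC-sampled subsets of the matched features, and the hypothesis with the largest inlier set is kept.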
69

Visual Tracking Using Stereo Images

Dehlin, Carl January 2019
Visual tracking concerns the problem of following an arbitrary object in a video sequence. In this thesis, we examine how stereo images can be used to extend existing visual tracking algorithms, which methods exist for obtaining information from stereo images, and how the results change as the parameters of each tracker vary. For this purpose, four abstract approaches are identified, with five distinct implementations. Each tracker implementation is an extension of a baseline algorithm, MOSSE. The free parameters of each model are optimized with respect to two different evaluation strategies, called nor- and wir-tests, and four different objective functions, and are then fixed when comparing the models against each other. The results are produced on single-target tracks extracted from the KITTI tracking dataset. The optimization results show that none of the objective functions are sensitive to the exposed parameters under the joint selection of model and dataset, and the evaluation results show that none of the extensions improve on the baseline tracker.
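The baseline algorithm, MOSSE, trains a correlation filter in the Fourier domain so that the filter's response to the target patch is a narrow Gaussian peak. A minimal single-frame sketch; the full MOSSE tracker also preprocesses patches (log transform, cosine window) and averages over perturbed training samples, which is omitted here:

```python
import numpy as np

def mosse_filter(patch, sigma=2.0, eps=1e-5):
    """Train a correlation filter from one target patch (MOSSE-style).

    Solves H* = (G . F*) / (F . F* + eps) elementwise in the Fourier
    domain, where G is the desired Gaussian response. `sigma` and `eps`
    are illustrative values.
    """
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * sigma**2))
    G = np.fft.fft2(g)
    F = np.fft.fft2(patch)
    return (G * np.conj(F)) / (F * np.conj(F) + eps)

def mosse_respond(h_conj, patch):
    """Correlation response map; its peak indicates the target position."""
    return np.real(np.fft.ifft2(h_conj * np.fft.fft2(patch)))
```

Tracking then amounts to locating the response peak in each new frame and updating the filter with a running average; the thesis extensions feed stereo-derived information into this loop.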
70

Correspondence-based pairwise depth estimation with parallel acceleration

Bartosch, Nadine January 2018
This report covers the implementation and evaluation of a stereo vision correspondence-based depth estimation algorithm on a GPU. The results and feedback are used for a multi-view camera system combined with Jetson TK1 devices for parallelized image processing; the aim of this system is to estimate the depth of the scenery in front of it, and the performance of the algorithm plays the key role. Alongside the implementation, the objective of this study is to investigate the advantages of parallel acceleration, in particular the differences from execution on a CPU, which are significant for all the functions; the overheads particular to a GPU application, such as memory transfer from the CPU to the GPU and vice versa; and the challenges of real-time and concurrent execution. The study has been conducted with the aid of CUDA on three NVIDIA GPUs with different characteristics, and with the aid of knowledge gained through an extensive literature study of depth estimation algorithms, stereo vision and correspondence, and CUDA in general. Using the full set of components of the algorithm while expecting (near) real-time execution is utopian in this setup and implementation; the slowing factors include, among others, the semi-global matching. Investigating alternatives shows that disparity maps of a certain accuracy can also be achieved by local methods, such as the Hamming distance alone combined with a filter that refines the results. Furthermore, it is demonstrated that the kernel launch configuration and the usage of GPU memory types such as shared memory are crucial for GPU implementations and have an impact on the performance of the algorithm. Concurrency, however, proves to be a more complicated task, especially in the desired manner of realization. For future work and refinement of the algorithm, it is therefore recommended to invest more time in further optimization possibilities with regard to shared memory, and in integrating the algorithm into the actual pipeline.
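The local alternative the report mentions, Hamming distance matching, is typically computed on census-transformed images. A minimal winner-takes-all CPU sketch, assuming a 5x5 census window (the report's actual window size and refinement filter are not specified here):

```python
import numpy as np

def census_5x5(img):
    """5x5 census transform: each pixel becomes a 24-bit code recording
    which neighbours are darker than the centre (wrap-around at borders
    via np.roll, for simplicity).
    """
    out = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

def hamming_disparity(cl, cr, max_disp):
    """Winner-takes-all disparity from left/right census codes, choosing
    the candidate with the minimum Hamming distance per pixel."""
    h, w = cl.shape
    best = np.zeros((h, w), dtype=int)
    best_cost = np.full((h, w), 255)
    for d in range(max_disp + 1):
        xored = cl[:, d:] ^ cr[:, : w - d]
        cost = np.array([[bin(v).count("1") for v in row] for row in xored])
        better = cost < best_cost[:, d:]
        best[:, d:][better] = d
        best_cost[:, d:][better] = cost[better]
    return best
```

Because each pixel's census code and each candidate disparity are independent, this formulation maps naturally onto one GPU thread per pixel, which is what makes it attractive compared with semi-global matching in this setting.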
