1

The Computational Study of Vision

Hildreth, Ellen C.; Ullman, Shimon. 01 April 1988.
The computational approach to the study of vision inquires directly into the sort of information processing needed to extract important information from the changing visual image: information such as the three-dimensional structure and movement of objects in the scene, or the color and texture of object surfaces. An important contribution that computational studies have made is to show how difficult vision is to perform, and how complex are the processes needed to perform visual tasks successfully. This article reviews some computational studies of vision, focusing on edge detection, binocular stereo, motion analysis, intermediate vision, and object recognition.
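The edge-detection work this review covers is closely associated with the Marr-Hildreth zero-crossing operator. As a rough illustration of that idea (not code from the article), here is a minimal sketch in Python, assuming a 2-D grayscale NumPy array and SciPy's Laplacian-of-Gaussian filter:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossing_edges(image, sigma=2.0):
    """Marr-Hildreth style edge detection: Laplacian of Gaussian + zero crossings.

    `image` is assumed to be a 2-D grayscale array; `sigma` sets the scale of
    the Gaussian smoothing (an illustrative default, not a tuned value).
    """
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros_like(log, dtype=bool)
    # A zero crossing occurs where the LoG response changes sign between
    # horizontally or vertically adjacent pixels.
    edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return edges
```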
2

Relative Orientation

Horn, Berthold K.P. 01 September 1987.
Before corresponding points in images taken with two cameras can be used to recover distances to objects in a scene, one has to determine the position and orientation of one camera relative to the other. This is the classic photogrammetric problem of relative orientation, central to the interpretation of binocular stereo information. Described here is a particularly simple iterative scheme for recovering relative orientation that, unlike existing methods, does not require a good initial guess for the baseline and the rotation.
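Horn's iterative scheme itself is not reproduced here, but the standard linear eight-point method gives a compact illustration of the relative-orientation problem the abstract describes: recovering the rotation and baseline direction from corresponding rays in two calibrated views. A sketch under that assumption (normalized image coordinates, NumPy only):

```python
import numpy as np

def relative_orientation(x1, x2):
    """Estimate rotation R and baseline direction t (up to scale) from
    N >= 8 point correspondences in two calibrated views.

    x1, x2: (N, 2) arrays of normalized image coordinates, i.e. pixel
    coordinates with the camera intrinsics already removed. This is the
    linear eight-point method, shown only as a simple baseline; it is not
    the iterative scheme of the paper.
    """
    N = x1.shape[0]
    p1 = np.hstack([x1, np.ones((N, 1))])   # homogeneous rays in camera 1
    p2 = np.hstack([x2, np.ones((N, 1))])   # homogeneous rays in camera 2

    # Each correspondence gives one linear constraint p2^T E p1 = 0.
    A = np.einsum('ni,nj->nij', p2, p1).reshape(N, 9)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)

    # Enforce the rank-2 structure of an essential matrix.
    U, _, Vt = np.linalg.svd(E)
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

    # One of the four standard (R, t) decompositions of E.
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    U, _, Vt = np.linalg.svd(E)
    R = U @ W @ Vt
    if np.linalg.det(R) < 0:       # keep a proper rotation
        R = -R
    t = U[:, 2]                    # baseline direction; scale is unrecoverable
    return R, t
```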
3

Real-Time Motion and Stereo Cues for Active Visual Observers

Björkman, Mårten. January 2002.
No description available.
4

Specialised global methods for binocular and trinocular stereo matching

Horna Carranza, Luis Alberto. January 2017.
The problem of estimating depth from two or more images, commonly referred to as stereo matching, is a fundamental problem in computer vision, with applications ranging from 3D reconstruction to autonomous robot navigation. Stereo matching is particularly attractive in practice because of its simplicity and low cost compared to laser range finders and scanners, for example in 3D reconstruction. However, stereo matching has its own distinctive problems, such as convergence issues in the optimisation methods and the difficulty of matching accurately under changing lighting conditions, occluded areas, and image noise. It is precisely because of these challenges that stereo matching continues to be a very active field of research. In this thesis we develop a binocular stereo matching algorithm that works with rectified images (i.e. scan lines in the two images are aligned) to find the real-valued displacement (i.e. disparity) that best matches two pixels. To accomplish this, our research develops techniques to efficiently explore a 3D space and compare potential matches, together with an inference algorithm that assigns the optimal disparity to each pixel in the image. The proposed approach is also extended to the trinocular case, in which a binocular pair of images captured at the same time is combined with a third image displaced in time. This approach is referred to as t+1 trinocular stereo matching, and it poses the challenge of recovering camera motion, which is addressed by a novel technique we call baseline recovery. We have extensively validated our binocular and trinocular algorithms on the well-known KITTI and Middlebury data sets; their performance is consistent across data sets and is among the top results on both benchmarks.
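The thesis's specialised global methods are not reproduced here; a plain local block-matching baseline on rectified images, however, shows the disparity-search structure the abstract refers to. A minimal sketch, assuming grayscale NumPy images and an integer disparity range (illustrative parameters, not the thesis's settings):

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=64, win=5):
    """Integer-disparity block matching on a rectified image pair.

    A local baseline, not the global method of the thesis: for each pixel in
    the left image, try `max_disp` horizontal shifts of the right image and
    keep the shift with the lowest sum of absolute differences over a
    (2*win+1)^2 window.
    """
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    k = 2 * win + 1

    for d in range(max_disp):
        diff = np.full((h, w), np.inf, dtype=np.float32)
        # Shifting the right image left by d pixels aligns candidate matches.
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # Aggregate the per-pixel cost over a square window (box filter).
        padded = np.pad(diff, win, mode='edge')
        agg = np.zeros_like(diff)
        for dy in range(k):
            for dx in range(k):
                agg += padded[dy:dy + h, dx:dx + w]
        cost[d] = agg

    # Winner-takes-all: the disparity with minimum aggregated cost per pixel.
    return np.argmin(cost, axis=0)
```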
5

Tracking and Measuring Objects in Obscure Image Scenarios Through the Lens of Shot Put in Track and Field

Smith, Ashley Nicole. 23 May 2022. Master of Science thesis.
Object tracking and object measurement are two well-established and prominent concepts within the field of computer vision. While the two techniques are fairly robust in images and videos where the objects of interest are clear, performance drops significantly when objects appear obscured due to factors such as motion blur, large distance from the camera, and blending with the background. Additionally, most established object detection models focus on detecting as many objects as possible rather than striving for high accuracy on a few predetermined objects. One application of computer vision tracking and measurement in such imprecise, single-object scenarios is programmatically measuring the distance of a shot put throw in the sport of track and field. Shot put throws in competition are currently measured by human officials, which is both time-consuming and often erroneous. In this work, a computer vision system is developed that automatically tracks the path of a shot put throw by combining a custom-trained YOLO model and a path predictor based on kinematic formulas, and then measures the distance traveled by triangulation using binocular stereo vision. The final distance measurements produce directionally accurate results with an average error of 82% after removing one outlier, an average detection time of 2.9 ms per frame, and a total average run time of 4.5 minutes from the time the shot put leaves the thrower's hand. Shortcomings of tracking and measurement in imperfect or single-object settings are addressed and potential improvements are suggested, offering an opportunity to increase the accuracy and efficiency of the sporting event.
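The distance measurement described above rests on standard binocular triangulation, and the path prediction on constant-acceleration kinematics. A minimal sketch of both steps, with illustrative (hypothetical) parameter names rather than the system's actual calibration or predictor:

```python
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d.

    disparity_px : horizontal pixel offset of the tracked object between
                   the left and right images (rectified cameras assumed).
    focal_px     : focal length in pixels (illustrative calibration value).
    baseline_m   : distance between the two camera centres in metres.
    """
    return focal_px * baseline_m / disparity_px

def predicted_throw_distance(times_s, positions_m):
    """Estimate the horizontal landing distance of a ballistic shot put from
    three or more tracked 3-D positions, using simple constant-acceleration
    kinematics (a simplified stand-in for the path predictor described in
    the abstract).

    positions_m : (N, 3) array of (x, height, z) in metres, with the origin
                  taken at the throwing circle (an assumption).
    """
    t = np.asarray(times_s, dtype=float)
    p = np.asarray(positions_m, dtype=float)
    # Linear motion in the two horizontal axes, parabolic motion in height.
    vx, x0 = np.polyfit(t, p[:, 0], 1)
    vz, z0 = np.polyfit(t, p[:, 2], 1)
    a, vy, y0 = np.polyfit(t, p[:, 1], 2)
    # Landing time: when the fitted height curve returns to ground level.
    t_land = max(np.roots([a, vy, y0]).real)
    # Horizontal distance from the origin at the landing time.
    return float(np.hypot(x0 + vx * t_land, z0 + vz * t_land))
```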
