111

Binokulární vidění a výroba anaglyfů / Binocular vision and anaglyph production

Švec, Martin January 2009 (has links)
The objective of this master's thesis is to describe the process of human vision and the production of anaglyphs. The text covers the individual ways of achieving a sense of spatial depth from two two-dimensional images. The thesis includes an application for anaglyph creation, developed in the Matlab 2008b environment, together with a description of its functions and properties. Created anaglyphs and a sample of the source code are attached. The electronic version of the thesis and the application are available on the attached DVD.
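As a rough illustration of the anaglyph principle described in this abstract, a minimal red-cyan composition in Python/NumPy might look like the sketch below. The thesis itself implements this in Matlab 2008b, so the file names, channel assignment, and library choices here are illustrative assumptions rather than the author's code.

```python
# Minimal red-cyan anaglyph sketch; illustrates the general principle only,
# not the Matlab application described in the thesis.
import numpy as np
from PIL import Image

def make_anaglyph(left_path: str, right_path: str, out_path: str) -> None:
    """Combine a stereo pair into a red-cyan anaglyph.

    The red channel is taken from the left-eye image and the green/blue
    channels from the right-eye image, so red-cyan glasses route each
    view to the correct eye.
    """
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    if left.shape != right.shape:
        raise ValueError("stereo pair must have identical dimensions")

    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red from the left view
    anaglyph[..., 1:] = right[..., 1:]   # green and blue from the right view
    Image.fromarray(anaglyph).save(out_path)

# Hypothetical usage (placeholder file names):
# make_anaglyph("left.png", "right.png", "anaglyph.png")
```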
112

Binokulární vidění / Binocular vision

Jemelka, Ondřej January 2011 (has links)
This thesis concerns the physiology of binocular vision and tools for recording, processing, and reproducing dynamic stereoscopic footage. It describes the design and realization of a recording rig that uses a pair of digital still cameras as stereoscopic video recorders. The thesis examines the image-processing capabilities of Matlab and presents a possible solution in the form of a program for creating stereoscopic video in various formats. Two passive projection techniques, anaglyph and polarization, are presented in detail. A projection setup that uses light polarization to produce stereoscopic vision was designed and constructed, and the qualities of both methods were then evaluated objectively and subjectively in a survey with a group of observers.
113

Binokulární vidění / Binocular vision

Portyš, Jakub January 2013 (has links)
This paper addresses the theme of binocular vision and its role in daily life. The individual chapters cover the anatomy and physiology of the visual organ. The physical principles of binocular vision are discussed in detail, as are the development, possible pathologies, and examination methods of human binocular vision. The theoretical introduction defines the concept of stereoscopy and gives an overview of stereoscopic imaging methods. The paper also covers approaches to recording three-dimensional images and the corresponding camera settings. The practical part of the work is dedicated to the selection of shooting methods and the design of dynamic scenes from everyday traffic. A method for projecting the captured scenes to a group of viewers is also introduced. The last chapter analyzes the method and evaluation of the experiment itself and the resulting conclusions.
114

The visual perception of 3D shape from stereo: Metric structure or regularization constraints?

Yu, Ying 07 December 2017 (has links)
No description available.
115

Relationship Between Ocular Sensory Dominance and Stereopsis

Ali, Raheela Saeed 21 September 2016 (has links)
Purpose: It is unknown whether individuals with two balanced eyes show quicker responses and lower thresholds in fine stereoscopic detection. Previous methods for measuring ocular dominance were primarily qualitative; they do not quantify the degree of dominance and are limited in identifying the dominant eye. In this study, we aimed to quantify the difference in ocular strength between the two eyes with an ocular dominance index (ODI) and to study the association between ocular balance and stereoscopic detection. Methods: Stereoscopic threshold was measured in thirty-three subjects. Stereopsis was measured with random-dot stimuli, quantifying the minimal detectable disparity (Dmin) and the minimal time needed to acquire the best stereoacuity (Tmin). Ocular dominance was measured with a continuous flashing technique, with the tested eye viewing a tilted Gabor patch increasing in contrast and the fellow, non-tested eye viewing Mondrian noise decreasing in contrast. The log ratio of the Mondrian contrast to the Gabor contrast was recorded at the moment a subject just detected the tilt direction of the Gabor in each trial. The t-value derived from a t-test of the 50 values obtained for each eye was used as the subject's ODI, quantifying the degree of ocular dominance. A subject with ODI ≥ 2 (p < 0.05) was defined as having clear dominance, the dominant eye being the one with the larger mean ratio. Results: The Dmin in subjects with two balanced eyes (55.40 arcsec) was not significantly different from the Dmin in subjects with clear ocular dominance (43.29 arcsec; p = 0.87). Subjects with two balanced eyes had significantly shorter reaction times on average (Tmin = 138.28 msec) than subjects with clear dominance (Tmin = 1229.02 msec; p = 0.01). Tmin values were highly correlated with ocular dominance (p = 0.0004). Conclusion: Subjects with two relatively balanced eyes take a shorter reaction time to achieve their optimal level of stereoacuity. Keywords: Ocular Dominance, Local Stereopsis, Binocular, Balanced Eyes, Anisometropia
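A minimal sketch of how an ODI of this kind could be computed from 50 log contrast ratios per eye is shown below, assuming an independent two-sample t-test as described in the abstract. The data, function names, and significance handling are illustrative assumptions, not the study's actual analysis.

```python
# Illustrative ocular dominance index (ODI) from per-trial log(Mondrian/Gabor)
# contrast ratios; the real study protocol and analysis may differ.
import numpy as np
from scipy import stats

def ocular_dominance_index(left_ratios, right_ratios, alpha=0.05):
    """Return (odi, dominant_eye) from 50 log contrast ratios per eye.

    The ODI is taken here as the absolute t-value of an independent two-sample
    t-test between the two eyes' ratios; ODI >= 2 with p < alpha is treated as
    clear dominance, the dominant eye being the one with the larger mean ratio.
    """
    t_stat, p_value = stats.ttest_ind(left_ratios, right_ratios)
    odi = abs(t_stat)
    if odi >= 2 and p_value < alpha:
        dominant = "left" if np.mean(left_ratios) > np.mean(right_ratios) else "right"
    else:
        dominant = "balanced"
    return odi, dominant

# Example with simulated data (not the study's measurements):
rng = np.random.default_rng(0)
odi, eye = ocular_dominance_index(rng.normal(0.3, 0.2, 50), rng.normal(0.1, 0.2, 50))
```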
116

Adaptive optics, aberration dynamics and accommodation control. An investigation of the properties of ocular aberrations, and their role in accommodation control.

Chin, Sem Sem January 2009 (has links)
This thesis consists of two parts: a report on the use of a binocular Shack-Hartmann (SH) sensor to study the dynamic correlation of ocular aberrations, and the application of an adaptive optics (AO) system to investigate the effect of manipulating aberrations on accommodation control. The binocular SH sensor consists of one laser source and one camera to reduce system cost and complexity. Six participants took part in this study. Coherence function analysis showed that coherence values depended on the subject, the aberration, and the frequency component. Inter-ocular correlations of the aberration dynamics were fairly weak for all participants. Binocular and monocular viewing conditions produced similar wavefront error dynamics. The AO system has a dual wavefront-sensing channel; the extra sensing channel permits direct measurement of the eye's aberrations independent of the deformable mirror. Dynamic correction of aberrations during steady-state fixation did not affect the accommodation microfluctuations, possibly due to the prior correction of the static aberration level and/or the limited correction bandwidth. The inversion of certain aberrations during dynamic accommodation affected the gain and latency of the accommodation response (AR), suggesting that the eye uses these aberrations to guide the initial path of its accommodative step response. Corrections of aberrations at various temporal locations of the AR cycle produced subject- and aberration-dependent results. The gain and phase lag of the AR to a sinusoidally moving target were unaffected by aberration correction. The predictable nature of the target has been suggested as the reason for the failure to produce any significant effect on AR gain and phase lag.
117

Vision Therapy for Binocular Dysfunction Post Brain Injury

Conrad, Joseph Samuel 25 July 2011 (has links)
No description available.
118

Monocular and Binocular Visual Tracking

Salama, Gouda Ismail Mohamed 06 January 2000 (has links)
Visual tracking is one of the most important applications of computer vision. Several tracking systems have been developed that either focus mainly on the tracking of targets moving on a plane or attempt to reduce the 3-dimensional tracking problem to the tracking of a set of characteristic points of the target. These approaches are seriously handicapped in complex visual situations, particularly those involving significant perspective, textures, repeating patterns, or occlusion. This dissertation describes a new approach to visual tracking for monocular and binocular image sequences, and for both passive and active cameras. The method combines Kalman-type prediction with steepest-descent search for correspondences, using 2-dimensional affine mappings between images. This approach differs significantly from many recent tracking systems, which emphasize the recovery of the 3-dimensional motion and/or structure of objects in the scene. We argue that 2-dimensional area-based matching is sufficient in many situations of interest, and we present experimental results with real image sequences to illustrate the efficacy of this approach. Image matching between two images is a simple one-to-one mapping if there is no occlusion; in the presence of occlusion, wrong matches are inevitable, and few approaches have been developed to address this issue. This dissertation considers the effect of occlusion on tracking a moving object for both monocular and binocular image sequences. The visual tracking system described here attempts to detect occlusion based on the residual error computed by the matching method. If the residual matching error exceeds a user-defined threshold, the tracked object may be occluded by another object. When occlusion is detected, tracking continues with the predicted locations based on Kalman filtering, which serves as a predictor of the target position until it reemerges from the occlusion. Although the method uses constant-image-velocity Kalman filtering, it has been shown to function reasonably well in non-constant-velocity situations. Experimental results show that tracking can be maintained during periods of substantial occlusion. The area-based approach to image matching often involves correlation-based comparisons between images, which requires the specification of a size for the correlation windows. Accordingly, a new approach based on moment invariants was developed to select window size adaptively. This approach is based on sudden increases or decreases in the first Maitra moment invariant; a robust regression model is applied to smooth the first Maitra moment invariant and make the method robust against noise. This dissertation also considers the effect of spatial quantization on several moment invariants. Of particular interest are the affine moment invariants, which have emerged in recent years as a useful tool for image reconstruction, image registration, and recognition of deformed objects. Traditional analysis assumes moments and moment invariants for images defined in the continuous domain. Quantization of the image plane is necessary because otherwise the image cannot be processed digitally; image acquisition by a digital system imposes spatial and intensity quantization that, in turn, introduce errors into moment and invariant computations. This dissertation derives expressions for quantization-induced error in several important cases. Although it considers spatial quantization only, this represents an important extension of work by other researchers. A mathematical theory for a visual tracking approach to a moving object is also presented. This approach can track a moving object in an image sequence both when the camera is passive and when it is actively controlled. The algorithm used here is computationally cheap and suitable for real-time implementation. We implemented the proposed method on an active vision system and carried out monocular and binocular tracking experiments for various kinds of objects in different environments. These experiments demonstrated very good performance using real images in fairly complicated situations. / Ph. D.
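A minimal sketch of the occlusion-handling idea described in this abstract (a constant-image-velocity Kalman filter that coasts on its prediction whenever the matching residual exceeds a user-defined threshold) is given below. The state model, noise covariances, and threshold value are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch of occlusion handling with a constant-image-velocity Kalman filter:
# when the matching residual exceeds a threshold, the measurement update is
# skipped and tracking coasts on the prediction. All numbers are placeholders.
import numpy as np

class ConstantVelocityTracker:
    def __init__(self, x0, y0, residual_threshold=0.5):
        self.state = np.array([x0, y0, 0.0, 0.0])  # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.array([[1.0, 0.0, 1.0, 0.0],   # constant-velocity transition
                           [0.0, 1.0, 0.0, 1.0],
                           [0.0, 0.0, 1.0, 0.0],
                           [0.0, 0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0, 0.0],   # only position is observed
                           [0.0, 1.0, 0.0, 0.0]])
        self.Q = np.eye(4) * 0.01                  # process noise
        self.R = np.eye(2) * 1.0                   # measurement noise
        self.residual_threshold = residual_threshold

    def step(self, measurement, matching_residual):
        # Predict the next state with the constant-velocity model.
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        # A large matching residual is treated as occlusion: skip the update
        # and coast on the prediction until the target reappears.
        if matching_residual > self.residual_threshold:
            return self.state[:2]
        # Standard Kalman measurement update.
        innovation = np.asarray(measurement, dtype=float) - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]

# Hypothetical usage: feed per-frame matched positions and matching residuals.
tracker = ConstantVelocityTracker(x0=100.0, y0=50.0, residual_threshold=0.5)
position = tracker.step(measurement=(102.0, 51.0), matching_residual=0.1)
```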
119

Tracking and Measuring Objects in Obscure Image Scenarios Through the Lens of Shot Put in Track and Field

Smith, Ashley Nicole 23 May 2022 (has links)
Object tracking and object measurement are two well-established and prominent concepts within the field of computer vision. While the two techniques are fairly robust in images and videos where the object of interest(s) is clear, there is a significant decrease in performance when objects appear obscured due to a number of factors including motion blur, far distance from the camera, and blending with the background. Additionally, most established object detection models focus on detecting as many objects as possible, rather than striving for high accuracy on a few, predetermined objects. One application of computer vision tracking and measurement in imprecise and single-object scenarios is programmatically measuring the distance of a shot put throw in the sport of track and field. Shot put throws in competition are currently measured by human officials, which is both time-consuming and often erroneous. In this work, a computer vision system is developed that automatically tracks the path of a shot put throw through combining a custom-trained YOLO model and path predictor with kinematic formulas and then measures its distance traveled by triangulation using binocular stereo vision. The final distance measurements produce directionally accurate results with an average error of 82% after removing one outlier, an average detection time of 2.9 ms per frame and a total average run time of 4.5 minutes from the time the shot put leaves the thrower's hand. Shortcomings of tracking and measurement in imperfect or singular object settings are addressed and potential improvements are suggested, while also providing the opportunity to increase the accuracy and efficiency of the sporting event. / Master of Science
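As a rough illustration of distance measurement by binocular triangulation, the sketch below uses the standard rectified-stereo relation Z = f * B / d (focal length times baseline over disparity). The focal length, baseline, and pixel coordinates are placeholder assumptions, not the calibration of the system described in this thesis.

```python
# Minimal depth-from-stereo sketch for a rectified camera pair: Z = f * B / d.
# Parameters below are placeholders, not the thesis's camera calibration.
def stereo_depth(x_left: float, x_right: float,
                 focal_length_px: float, baseline_m: float) -> float:
    """Depth (metres) of a point seen at horizontal pixel columns x_left and x_right."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_length_px * baseline_m / disparity

# Hypothetical example: 1200 px focal length, 0.5 m baseline, 24 px disparity
# gives a depth of 1200 * 0.5 / 24 = 25 m.
depth_m = stereo_depth(x_left=640.0, x_right=616.0, focal_length_px=1200.0, baseline_m=0.5)
```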
120

Binocular correlation of ocular aberration dynamics

Chin, Sem Sem, Hampson, Karen M., Mallen, Edward A.H. January 2008 (has links)
Fluctuations in accommodation have been shown to be correlated in the two eyes of the same subject. However, the dynamic correlation of higher-order aberrations in the frequency domain has not been studied previously. A binocular Shack-Hartmann wavefront sensor is used to measure the ocular wavefront aberrations concurrently in both eyes of six subjects at a sampling rate of 20.5 Hz. Coherence function analysis shows that the inter-ocular correlation between aberrations depends on subject, Zernike mode and frequency. For each subject, the coherence values are generally low across the resolvable frequency range (mean 0.11), indicating poor dynamic correlation between the aberrations of the two eyes. Further analysis showed that phase consistency dominates the coherence values. Monocular and binocular viewing conditions showed similar power spectral density functions.
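A minimal sketch of the kind of coherence function analysis described here, using Welch-averaged magnitude-squared coherence at the reported 20.5 Hz sampling rate, is shown below. The simulated time series and window settings are illustrative assumptions, not the study's measurements.

```python
# Illustrative magnitude-squared coherence between simulated left- and right-eye
# Zernike coefficient time series, sampled at 20.5 Hz as reported above.
import numpy as np
from scipy import signal

fs = 20.5                                    # sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / fs)                 # one minute of samples
shared = 0.05 * np.sin(2 * np.pi * 1.0 * t)  # weak common fluctuation near 1 Hz
left_eye = shared + 0.1 * rng.standard_normal(t.size)
right_eye = shared + 0.1 * rng.standard_normal(t.size)

# Welch-averaged coherence; values near 0 indicate poor dynamic correlation.
freqs, coh = signal.coherence(left_eye, right_eye, fs=fs, nperseg=128)
print(f"mean coherence up to Nyquist: {coh.mean():.2f}")
```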
