Conventional stereoscopic displays present a pair of stereoscopic images on a single, fixed image plane that is decoupled from the vergence and accommodation responses of the viewer. Consequently, these displays cannot correctly render focus cues (i.e., accommodation and retinal blur) and may induce a discrepancy between accommodation and convergence. A number of visual artifacts associated with incorrect focus cues in stereoscopic displays have been reported, limiting the applicability of these displays for demanding applications and daily use.

In this dissertation, methods and apparatus for generating addressable focus cues in conventional stereoscopic displays are proposed. Focus cues can be addressed throughout a volumetric space, either by dynamically varying the focal distance of the display with an active optical element or by multiplexing a stack of 2-D image planes. Optimal depth-weighted fusing functions are developed to fuse a number of discrete image planes into a seamless volumetric space with continuous and near-correct focus cues similar to those of real-world objects.

The optical design, driving methodology, and prototype implementation of the addressable-focus displays are presented and discussed. Experimental results demonstrate continuously addressable focus cues from infinity to as close as the near-eye distance. Experiments are conducted to further evaluate depth perception in the display prototype. Preliminary results suggest that the perceived distance and accommodative response of the viewer match the addressable accommodation cues rendered by the display, approximating real-world viewing conditions.
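The idea behind depth-weighted fusion can be illustrated with a minimal sketch: the luminance intended for a point at a given depth is split between the two image planes that bracket that depth, with weights summing to one so total luminance is preserved. The simple linear weighting in dioptric space shown below, and all function and variable names, are illustrative assumptions for clarity, not the optimal fusing functions developed in the dissertation.

```python
import numpy as np

def depth_weighted_fusion(luminance, depth_m, plane_dioptres):
    """Split per-pixel luminance between the two focal planes that bracket
    each pixel's depth, using linear depth weighting in dioptres.

    luminance      : 2-D array of target pixel intensities
    depth_m        : 2-D array of target depths in metres
    plane_dioptres : 1-D sequence of image-plane distances in dioptres,
                     sorted from far (small) to near (large)
    Returns one luminance image per focal plane.
    """
    planes = np.asarray(plane_dioptres, dtype=float)
    target_d = 1.0 / np.clip(depth_m, 1e-3, None)        # depth in dioptres
    # Clamp depths outside the rendered volume onto the nearest plane.
    target_d = np.clip(target_d, planes[0], planes[-1])
    images = [np.zeros_like(luminance, dtype=float) for _ in planes]

    for i in range(len(planes) - 1):
        d_far, d_near = planes[i], planes[i + 1]
        # Assign each pixel to exactly one inter-plane gap.
        if i == len(planes) - 2:
            in_gap = (target_d >= d_far) & (target_d <= d_near)
        else:
            in_gap = (target_d >= d_far) & (target_d < d_near)
        # Linear depth weight: 0 at the far plane, 1 at the near plane.
        w_near = np.where(in_gap, (target_d - d_far) / (d_near - d_far), 0.0)
        images[i]     += np.where(in_gap, (1.0 - w_near) * luminance, 0.0)
        images[i + 1] += np.where(in_gap, w_near * luminance, 0.0)

    return images
```

For example, with image planes at 1 D and 3 D, a pixel whose target depth is 2 D would have its luminance split equally between the two planes under this linear weighting.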
Identifier | oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/193859 |
Date | January 2010 |
Creators | LIU, SHENG |
Contributors | Hua, Hong, Dereniak, Eustace L., Schwiegerling, James T. |
Publisher | The University of Arizona. |
Source Sets | University of Arizona |
Language | English |
Detected Language | English |
Type | text, Electronic Dissertation |
Rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. |