1

A color blending model and a color correction algorithm for additive optical see-through displays

Kirshnamachari Sridharan, Srikanth 06 October 2013 (has links)
An optical see-through display (OSTD) is a transparent digital display that gives simultaneous access to digital content and the real-world objects behind it. An additive optical see-through display is a hardware subtype of OSTD that uses its own light source to create the digital content. In an additive OSTD, light coming from background objects mixes with light originating from the display, causing what is known as the color blending problem. The work in this thesis provides a solution to the color blending problem. To understand the problem, the thesis first presents a new color blending model for additive OSTDs based on two display-induced distortions: the render distortion and the material distortion. A new method, the Binned Profile (BP) method, which accounts for the render distortion, is developed to predict the blended color when applied to the color blending model. The BP method is validated against other known methods and is shown to be the most accurate at predicting color blends, with 9 just noticeable differences (JND) in the worst case. Based on the BP method, a new color correction algorithm called BP color correction is created to solve the color blending problem. BP color correction finds an alternative digital color to counterbalance the blending. The correction capacity of various digital colors was analysed using the BP color correction approach. BP color correction is also compared with the existing solution and shown to perform better. A quicker version of the correction, called quick correction, is also explored. The thesis concludes with an exploration of the material distortion, explains the limitations of BP correction, and provides design recommendations.
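
To make the additive color blending idea concrete, here is a minimal Python sketch assuming ideal linear RGB values in [0, 1]. It illustrates additive blending and a naive counterbalancing correction only; it is not the thesis's Binned Profile implementation, which also models the render and material distortions.

    # Hypothetical sketch, not the thesis's BP method: additive blending in an
    # optical see-through display and a naive counterbalancing correction.
    import numpy as np

    def blend(display_rgb, background_rgb):
        """Additive OSTD blend: light from the display adds to background light."""
        return np.clip(np.asarray(display_rgb) + np.asarray(background_rgb), 0.0, 1.0)

    def naive_correction(desired_rgb, background_rgb):
        """Pick an alternative display color so display + background approximates
        the desired color. Full correction is impossible in any channel where the
        background is already brighter than the desired color."""
        return np.clip(np.asarray(desired_rgb) - np.asarray(background_rgb), 0.0, 1.0)

    desired = [0.2, 0.6, 0.8]      # color the application wants the user to see (assumed values)
    background = [0.3, 0.2, 0.1]   # light arriving from the real scene (assumed values)
    print(blend(desired, background))                                 # uncorrected blend seen by the user
    print(blend(naive_correction(desired, background), background))   # blend after naive correction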
2

Perceived location of virtual content measurement method in optical see through augmented reality

Khan, Farzana Alam 09 August 2022 (has links) (PDF)
An important research question for optical see-through AR is, “how accurately and precisely can a virtual object’s perceived location be measured in three-dimensional space?” Previously, a method was developed for measuring the perceived 3D location of virtual objects using a Microsoft HoloLens 1 display. That study found an unexplained rightward perceptual bias in the horizontal plane; most participants were right-eye dominant, which is consistent with the hypothesis that perceived location is biased in the direction of the dominant eye. In this thesis, a replication study is reported, which includes binocular and monocular viewing conditions, recruits an equal number of left- and right-eye dominant participants, and uses a Microsoft HoloLens 2 display. This replication study examined whether the perceived location of virtual objects is biased in the direction of the dominant eye. Results suggest that perceived location is not biased in the direction of the dominant eye. Compared to the previous study’s findings, overall perceptual accuracy increased, and precision was similar.
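
As a rough illustration of how such a horizontal bias could be summarized, the following Python sketch computes the mean signed horizontal error for two hypothetical eye-dominance groups; the function name and sample values are made up and are not taken from the thesis.

    # Illustrative sketch only; data are hypothetical, not from the study.
    import numpy as np

    def horizontal_bias(perceived_x, actual_x):
        """Mean signed error along the horizontal axis; positive = rightward bias."""
        return float(np.mean(np.asarray(perceived_x) - np.asarray(actual_x)))

    # Hypothetical per-trial horizontal errors (cm) for two participant groups.
    right_dominant = horizontal_bias([0.4, 0.6, 0.5], [0.0, 0.0, 0.0])
    left_dominant = horizontal_bias([0.1, -0.2, 0.0], [0.0, 0.0, 0.0])
    print(right_dominant, left_dominant)  # similar values would argue against an eye-dominance bias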
3

ROBOMIRROR: A SIMULATED MIRROR DISPLAY WITH A ROBOTIC CAMERA

Zhang, Yuqi 01 January 2014 (has links)
Simulated mirror displays have promising applications due to their capability for virtual visualization. In most existing mirror displays, cameras are placed on top of the display and are unable to capture the person in front of it at the highest possible resolution. The lack of a direct frontal capture of the subject's face and the geometric error introduced by image-warping techniques make realistic mirror-image rendering a challenging problem. The objective of this thesis is to explore the use of a robotic camera that tracks the face of the subject in front of the display to obtain a high-quality image capture. Our system uses a Bislide system to control a camera for face capture, while a separate color-depth camera provides accurate face tracking. We construct an optical device in which a one-way mirror is used so that the robotic camera behind it can capture the subject while the rendered images are displayed by reflecting off the mirror from an overhead projector. A key challenge of the proposed system is the reduction of light due to the one-way mirror. An optimal 2D Wiener filter is selected to enhance the low-contrast images captured by the camera.
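
As an illustration of the final enhancement step, the sketch below applies SciPy's adaptive 2D Wiener filter to a synthetic low-contrast image and follows it with a simple contrast stretch; the window size and synthetic data are assumptions, and this is not the thesis's exact optimal-filter formulation.

    # Minimal sketch, assuming synthetic data; not the thesis's tuned pipeline.
    import numpy as np
    from scipy.signal import wiener

    rng = np.random.default_rng(0)
    low_contrast = 0.4 + 0.1 * rng.random((64, 64))          # dim, low-contrast capture (synthetic)
    noisy = low_contrast + 0.02 * rng.standard_normal((64, 64))

    denoised = wiener(noisy, mysize=(5, 5))                   # local adaptive 2D Wiener filter
    stretched = (denoised - denoised.min()) / (denoised.max() - denoised.min())  # simple contrast stretch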
4

Augmented reality fonts with enhanced out-of-focus text legibility

Arefin, Mohammed Safayet 09 December 2022 (has links) (PDF)
In augmented reality, information is often distributed between real and virtual contexts, and often appears at different distances from the viewer. This raises the issues of (1) context switching, when attention is switched between real and virtual contexts, (2) focal distance switching, when the eye accommodates to see information in sharp focus at a new distance, and (3) transient focal blur, when information is seen out of focus during the time interval of focal distance switching. This dissertation research has quantified the impact of context switching, focal distance switching, and transient focal blur on human performance and eye fatigue in both monocular and binocular viewing conditions. Further, this research has developed a novel font that, when seen out of focus, looks sharper than standard fonts. This SharpView font promises to mitigate the effect of transient focal blur. Developing this font required (1) mathematically modeling out-of-focus blur with Zernike polynomials, which model focal deficiencies of human vision, (2) developing a focus-correction algorithm based on total variation optimization, which corrects out-of-focus blur, and (3) developing a novel algorithm for measuring font sharpness. Finally, this research has validated these fonts through simulation and optical camera-based measurement. This validation has shown that, when seen out of focus, SharpView fonts are as much as 40 to 50% sharper than standard fonts. This promises to improve font legibility in many applications of augmented reality.
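
As a simplified illustration of step (1), the following Python sketch builds a defocus point spread function from the single Zernike defocus term Z(2,0) over a circular pupil; the coefficient value is made up, and the dissertation's full model fits multiple Zernike terms to measured focal deficiencies of the eye.

    # Simplified sketch, assuming a circular pupil and only Zernike defocus Z(2,0).
    import numpy as np

    N = 256
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    r = np.sqrt(x**2 + y**2)
    pupil = (r <= 1.0).astype(float)                  # unit circular pupil aperture

    defocus_coeff = 0.5                               # wavefront error in waves (assumed value)
    zernike_defocus = np.sqrt(3.0) * (2.0 * r**2 - 1.0)   # Noll-normalized Z(2,0)
    wavefront = defocus_coeff * zernike_defocus

    # Generalized pupil function; the PSF is the squared magnitude of its Fourier transform.
    gpf = pupil * np.exp(2j * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(gpf))))**2
    psf /= psf.sum()                                  # normalize; convolving a font image with psf simulates the blur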
