
Harnessing the Power of Self-Training for Gaze Point Estimation in Dual Camera Transportation Datasets

This thesis proposes a novel approach for efficiently estimating gaze points in dual-camera transportation datasets. Traditional methods for gaze point estimation depend on large amounts of labeled data, which are expensive and time-consuming to collect, and alignment and calibration of the two camera views present additional challenges. To overcome these limitations, this thesis investigates self-learning techniques such as semi-supervised learning and self-training, which reduce the need for labeled data while maintaining high accuracy. The proposed method is evaluated on the DGAZE dataset and achieves a 57.2% improvement in performance over previous methods. This approach can be a valuable tool for studying visual attention in transportation research, enabling more cost-effective and efficient work in this field.

Master of Science

This thesis presents a new method for efficiently estimating a driver's gaze point while driving, which is crucial for understanding driver behavior and improving transportation safety. Traditional methods require large amounts of labeled data, which are time-consuming and expensive to obtain. This thesis proposes a self-learning approach that learns from both labeled and unlabeled data, reducing the need for labeled data while maintaining high accuracy. By first training the model on labeled data and then using its own estimations on unlabeled data to improve its performance, the approach can adapt to new scenarios and improve its accuracy over time. The proposed method is evaluated on the DGAZE dataset and achieves a 57.2% improvement in performance over previous methods. Overall, this approach offers a more efficient and cost-effective solution that can help improve transportation safety by providing a better understanding of driver behavior.
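The abstracts above outline the self-training idea: train on labeled data, predict gaze points for unlabeled frames, and fold the model's most reliable predictions back into the training set as pseudo-labels. A minimal sketch of that loop follows; the regression model, dataset splits, confidence heuristic, and hyperparameters are illustrative assumptions and do not reproduce the thesis implementation.

# Minimal self-training (pseudo-labeling) sketch for gaze point regression.
# Everything here is illustrative: the regressor, the keep ratio, and the
# perturbation-based confidence filter are assumptions, not the thesis method.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, ConcatDataset

def fit(model, dataset, epochs=5, lr=1e-4):
    """Supervised training on (features, gaze_xy) pairs."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def self_train(model, labeled_ds, unlabeled_x, rounds=3, keep_ratio=0.5):
    """Alternate supervised training with pseudo-labeling of unlabeled frames."""
    fit(model, labeled_ds)                          # initial model from labeled data only
    for _ in range(rounds):
        model.eval()
        with torch.no_grad():
            preds = model(unlabeled_x)              # pseudo gaze points, shape (N, 2)
            # Confidence proxy for regression: predictions that barely move
            # under a small input perturbation are treated as more reliable.
            noisy = model(unlabeled_x + 0.01 * torch.randn_like(unlabeled_x))
        stability = (preds - noisy).norm(dim=1)
        keep = stability.argsort()[: int(keep_ratio * len(unlabeled_x))]
        pseudo_ds = TensorDataset(unlabeled_x[keep], preds[keep])
        # Retrain on the union of real labels and the retained pseudo-labels.
        fit(model, ConcatDataset([labeled_ds, pseudo_ds]))
    return model

In the dual-camera setting, unlabeled_x would correspond to features extracted from the driver-facing view and the predicted targets to coordinates in the scene-facing view; here it is reduced to a single feature tensor for brevity.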

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/115430
Date: 14 June 2023
Creators: Bhagat, Hirva Alpesh
Contributors: Computer Science and Applications, Karpatne, Anuj, Abbott, Amos L., Sarkar, Abhijit, Fox, Edward A.
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Language: English
Detected Language: English
Type: Thesis
Format: ETD, application/pdf, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
