
Accessible Retail Shopping For The Visually Impaired Using Deep Learning

abstract: Over the past decade, advances in neural networks have driven remarkable breakthroughs in computer vision. One application is assistive technology that improves the lives of visually impaired people by making the world around them more accessible. Research on convolutional neural networks has achieved human-level performance on a range of vision tasks, including image classification, object detection, instance segmentation, semantic segmentation, panoptic segmentation, and scene-text recognition. All of these tasks, individually or in combination, have been used to build assistive technologies that improve accessibility for the blind.

This dissertation presents several applications that improve accessibility and independence for visually impaired people while shopping by helping them identify products in retail stores. It makes the following contributions: (i) a dataset of breakfast-cereal product images and a classifier built on a deep residual network (ResNet); (ii) a dataset for training text-detection and scene-text-recognition models; (iii) a text-detection and scene-text-recognition model that identifies products in images captured by a user-controlled camera; (iv) a dataset of twenty thousand products, with product information and associated images, that can be used to train and test a product-identification system. / Dissertation/Thesis / Masters Thesis Computer Science 2020

Identifier: oai:union.ndltd.org:asu.edu/item:57075
Date: January 2020
Contributors: Patel, Akshar (Author), Panchanathan, Sethuraman (Advisor), Venkateswara, Hemanth (Advisor), McDaniel, Troy (Committee member), Arizona State University (Publisher)
Source Sets: Arizona State University
Language: English
Detected Language: English
Type: Masters Thesis
Format: 64 pages
Rights: http://rightsstatements.org/vocab/InC/1.0/
