The long-term purpose of this project is to reduce the attention demand on drivers when using infotainment systems in a car. As the car industry has developed, a conflict has arisen between safety and the demand for in-car entertainment. Speech-recognition-based controls reach their bottleneck in the presence of background audio (such as engine noise, other passengers' speech, and/or the infotainment system itself). In this thesis we propose a new method of controlling the infotainment system using computer vision. The project combines algorithms for object detection, optical flow (motion estimation), and feature analysis to build a communication channel between human and machine. By tracking the driver's head and measuring the optical flow over the lip region, the state of the driver's mouth can be inferred. The efficiency and accuracy of the system are analyzed. The contribution of this thesis is a method for communicating with the system through facial gestures, with a particular focus on lip movement. This method opens the possibility of a new mode of human-machine interaction.
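The abstract does not specify which optical-flow algorithm the thesis uses, so the following is only a minimal NumPy sketch of the underlying idea: estimating a single (u, v) motion vector for a small region (such as a lip ROI) from the Lucas-Kanade brightness-constancy constraint, solved by least squares. The function name and the synthetic test pattern are illustrative, not taken from the thesis.

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one (u, v) translation for a whole region from two frames,
    using the Lucas-Kanade constraint Ix*u + Iy*v + It = 0 per pixel,
    stacked into an over-determined least-squares system."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Ix = np.gradient(prev, axis=1)   # spatial gradient, x direction
    Iy = np.gradient(prev, axis=0)   # spatial gradient, y direction
    It = curr - prev                 # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

if __name__ == "__main__":
    # Synthetic "lip region": a smooth 2-D pattern whose second frame is
    # the same pattern shifted one pixel, i.e. true motion is (-1, 0).
    yy, xx = np.mgrid[0:64, 0:65]
    base = np.sin(2 * np.pi * xx / 16.0) + np.cos(2 * np.pi * yy / 12.0)
    u, v = lucas_kanade_flow(base[:, :-1], base[:, 1:])
    print(u, v)   # approximately -1.0 and 0.0
```

In a real pipeline, the ROI would come from a face/mouth detector on each frame rather than from a synthetic pattern, and the mean flow magnitude over the lip region could serve as the mouth-movement feature the abstract describes.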
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hh-25742 |
Date | January 2014 |
Creators | Tantai, Along, Chen, Da |
Publisher | Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE) |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |