A Machine Learning Based Visible Light Communication Model Leveraging Complementary Color Channel

Unobtrusive Visible Light Communication (VLC) over screen-camera channels has recently gained great popularity. It overcomes the inherent drawbacks of traditional approaches based on coded images such as bar codes. One popular unobtrusive method encodes bits into pixel translucency or color intensity changes through the alpha channel or color channels, using only off-the-shelf smart devices. In particular, Uber-in-light proved to be a successful model that encodes data into color intensity changes and requires only off-the-shelf devices. However, Uber-in-light exploits only Multi-Frequency Shift Keying (MFSK), which limits the overall throughput of the system because each data segment is only 3 digits long. Motivated by previous works such as Inframe++ and Uber-in-light, this thesis proposes a new VLC model that encodes data into color intensity changes on the red and blue channels of video frames. Multi-Phase Shift Keying (MPSK) is used alongside MFSK to map 4-digit and 5-digit data segments to specific transmission frequencies and phases. To ensure transmission accuracy, a modified correlation-based demodulation method and two learning-based methods using SVM and Random Forest are also developed.
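To make the MFSK/MPSK idea concrete, the Python sketch below is a minimal illustration, not the thesis's implementation: it maps a hypothetical 4-bit (4-digit) data segment to one of sixteen (frequency, phase) pairs, modulates it as complementary intensity offsets on the red and blue channels, and recovers it with a simple correlation-based demodulator. The frame rate, symbol length, frequencies, phases, and amplitude are placeholder assumptions that do not come from the thesis.

"""
Illustrative sketch (assumptions noted above): MFSK/MPSK mapping of a 4-bit
segment onto complementary red/blue intensity changes, with a basic
correlation-based demodulator.
"""
import numpy as np

FRAME_RATE = 60.0        # assumed screen refresh rate (frames per second)
SYMBOL_FRAMES = 30       # assumed frames per data segment (0.5 s symbol)
AMPLITUDE = 4            # assumed pixel-intensity swing (out of 255)

# Assumed symbol alphabet: 4 frequencies x 4 phases = 16 symbols = 4 bits.
FREQS = [6.0, 8.0, 10.0, 12.0]                 # Hz
PHASES = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]

def modulate(segment: int) -> tuple[np.ndarray, np.ndarray]:
    """Return per-frame intensity offsets for the red and blue channels."""
    freq = FREQS[segment >> 2]       # high 2 bits select the frequency
    phase = PHASES[segment & 0b11]   # low 2 bits select the phase
    t = np.arange(SYMBOL_FRAMES) / FRAME_RATE
    red = AMPLITUDE * np.cos(2 * np.pi * freq * t + phase)
    blue = -red                      # complementary change keeps the perceived color stable
    return red, blue

def demodulate(red: np.ndarray, blue: np.ndarray) -> int:
    """Correlate the red-blue difference against every candidate waveform."""
    observed = red - blue            # differential signal doubles the usable swing
    t = np.arange(len(observed)) / FRAME_RATE
    best_segment, best_score = 0, -np.inf
    for fi, freq in enumerate(FREQS):
        for pi, phase in enumerate(PHASES):
            template = np.cos(2 * np.pi * freq * t + phase)
            score = float(np.dot(observed, template))
            if score > best_score:
                best_segment, best_score = (fi << 2) | pi, score
    return best_segment

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for segment in range(16):
        red, blue = modulate(segment)
        # Add mild noise to mimic the screen-camera channel.
        noisy_red = red + rng.normal(0, 0.5, red.shape)
        noisy_blue = blue + rng.normal(0, 0.5, blue.shape)
        assert demodulate(noisy_red, noisy_blue) == segment

In the thesis, this correlation step is complemented by SVM and Random Forest classifiers trained on the received waveforms; the sketch keeps only the correlation-based demodulation for brevity.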

Identifiers 10.25394/pgs.12708527.v1, oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/12708527
Date 29 July 2020
Creators Ruizhe Jiang (9166208)
Source Sets Purdue University
Detected Language English
Type Text, Thesis
Rights CC BY 4.0
Relation https://figshare.com/articles/thesis/A_Machine_Learning_Based_Visible_Light_Communication_Model_Leveraging_Complementary_Color_Channel/12708527
