
IMPROVING QOE OF 5G APPLICATIONS (VR AND VIDEO ANALYTICS APPLICATION) ON EDGE DEVICES

Sibendu Paul (14270921), 17 May 2024
Recent advancements in deep learning (DL) and high-bandwidth access networks such as 5G enable applications that require intelligence and faster computational power at the edge with low power consumption. In this thesis, we study how to improve the Quality-of-Experience (QoE) of these emerging 5G applications, e.g., virtual reality (VR) and video analytics, on edge devices. These 5G applications either require high-quality visual effects with a stringent latency requirement (for VR) or high analytics accuracy (for video analytics) while maintaining frame-rate requirements under dynamic conditions.

In part 1, we study how to support high-quality, untethered, immersive multiplayer VR on commodity mobile devices. Simply replicating the prior art for single-user VR would result in a linear increase in the network bandwidth requirement, exceeding the bandwidth of WiFi (802.11ac). We propose a novel technique, Coterie, that splits the rendering of background environment (BE) frames between the mobile device and the edge server; this drastically enhances the similarity of the BE frames and reduces the network load via frame caching and reuse. Our proposed VR framework, Coterie, reduces the per-player network requirement by over 10x and easily supports 4 players on a Pixel 2 over 802.11ac while maintaining the QoE constraints of 4K VR.

In part 2, we study how to achieve high analytics accuracy in video analytics pipelines (VAPs). We observe that the frames captured by the surveillance cameras powering a variety of 24/7 analytics applications are not always pristine: they can be distorted by environmental condition changes, lighting issues, sensor noise, compression, etc. Such distortions not only deteriorate the accuracy of deep learning applications but also negatively impact the utilization of the edge-server resources used to run these computationally expensive DL models. First, we study how to dynamically filter out low-quality captured frames. We propose a lightweight DL-based quality estimator, AQuA, that can be used to filter out low-quality frames that would lead to high-confidence errors (false positives) if fed into the analytic units (AUs) in the VAP. The AQuA filter reduces false positives by 17% and compute and network usage by up to 27% when used in a face-recognition VAP. Second, we study how to reduce such poor-quality frame captures at the camera. We propose CamTuner, a system that automatically and dynamically adapts complex camera settings to changing environmental conditions, based on analytical quality estimation, to enhance the accuracy of video analytics. In a real customer deployment, CamTuner enhances VAP accuracy by detecting 15.9% additional persons and 2.6%–4.2% additional cars (without any false positives) compared to the default camera setting. While CamTuner focuses on improving the accuracy of a single AU running on a camera stream, we next present Elixir, a system that enhances the video-stream quality for multiple analytics on a video stream by jointly optimizing the different AUs' objectives. In a real-world deployment, Elixir correctly detects 7.1% (22,068) and 5.0% (15,731) more cars, 94% (551) and 72% (478) more faces, and 670.4% (4,975) and 158.6% (3,507) more persons than the default-camera-setting and time-sharing approaches, respectively.
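To make the frame-caching-and-reuse idea behind Coterie concrete, the following is a minimal sketch of pose-keyed BE-frame reuse. The Pose fields, the quantization grid in pose_key, and the fetch_be_frame helper are illustrative assumptions, not Coterie's actual interfaces or algorithm.

    # Illustrative sketch: reuse cached background-environment (BE) frames for
    # nearby poses instead of fetching a new frame from the edge server each time.
    # pose_key quantization and fetch_be_frame are assumptions for illustration.
    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        yaw_deg: float  # head orientation

    def pose_key(pose: Pose, grid: float = 0.25, yaw_step: float = 5.0) -> tuple:
        """Quantize a pose so that nearby poses map to the same cached BE frame."""
        return (round(pose.x / grid), round(pose.y / grid),
                round(pose.yaw_deg / yaw_step))

    class BEFrameCache:
        def __init__(self, fetch_be_frame):
            self._frames = {}             # quantized pose -> encoded BE frame
            self._fetch = fetch_be_frame  # edge-server fetch; the expensive network hit

        def get(self, pose: Pose) -> bytes:
            key = pose_key(pose)
            frame = self._frames.get(key)
            if frame is None:
                # Cache miss: fetch the BE frame once; all poses (and players)
                # that quantize to this key reuse it afterwards.
                frame = self._fetch(key)
                self._frames[key] = frame
            return frame

In a multiplayer session, players whose poses quantize to the same key would share a single fetched frame, which is where a per-player bandwidth reduction of this kind would come from.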
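The quality-gating step that AQuA performs in front of the analytic units can be sketched as a simple admission filter. The quality_estimator callable, QUALITY_THRESHOLD value, and run_analytics stand-in below are placeholders, not AQuA's actual API or model.

    # Illustrative sketch: score each frame with a lightweight quality estimator
    # and forward only frames above a threshold to the expensive analytic unit (AU).
    # QUALITY_THRESHOLD is an assumed operating point, tuned per deployment.
    from typing import Any, Callable, Iterable

    QUALITY_THRESHOLD = 0.6

    def filter_and_analyze(frames: Iterable[Any],
                           quality_estimator: Callable[[Any], float],
                           run_analytics: Callable[[Any], Any]) -> list:
        """Drop low-quality frames before they reach the costly DL model."""
        results = []
        for frame in frames:
            score = quality_estimator(frame)   # cheap, runs near the camera
            if score < QUALITY_THRESHOLD:
                continue                        # skip distorted/low-quality frames
            results.append(run_analytics(frame))  # expensive AU on the edge server
        return results

Dropping frames before the AU is what saves both compute and network usage; the accuracy benefit comes from never giving the AU a chance to produce a high-confidence false positive on a distorted frame.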
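CamTuner's adaptation of camera settings driven by analytical quality estimation could, in its simplest form, look like the greedy feedback loop below. The camera.set()/camera.capture() interface, the knob ranges, and the one-knob-at-a-time search are assumptions for illustration; they are not CamTuner's actual tuning algorithm.

    # Illustrative sketch: greedily pick, per camera knob, the value whose
    # captured frame scores highest under an analytics-oriented quality estimator.
    # SETTINGS and the camera interface are assumptions, not CamTuner's design.
    SETTINGS = {
        "brightness": [40, 50, 60, 70],
        "contrast":   [40, 50, 60],
        "sharpness":  [30, 50, 70],
    }

    def retune(camera, analytical_quality) -> dict:
        """Re-tune camera settings against the current scene conditions."""
        best = {}
        for knob, values in SETTINGS.items():
            scored = []
            for value in values:
                camera.set(knob, value)
                frame = camera.capture()
                scored.append((analytical_quality(frame), value))
            best[knob] = max(scored)[1]      # keep the highest-scoring value
            camera.set(knob, best[knob])     # lock it in before tuning the next knob
        return best

A loop like this would be re-run whenever conditions change (e.g., lighting shifts), so the camera settings track the environment rather than staying at a fixed default.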
