Perceptual, "context-aware" applications that observe their environment and interact with users via cameras and other sensors are becoming ubiquitous on personal computers, mobile phones, gaming platforms, household robots, and augmented-reality devices. This dissertation's main thesis is that perceptual applications present several new classes of security and privacy risks to both their users and the bystanders. Existing perceptual platforms are often completely inadequate for mitigating these risks. For example, we show that the augmented reality browsers, a class of popular perceptual platforms, contain numerous inherent security and privacy flaws. The key insight of this dissertation is that perceptual platforms can provide stronger security and privacy guarantees by controlling the interfaces they expose to the applications. We explore three different approaches that perceptual platforms can use to minimize the risks of perceptual computing: (i) redesigning the perceptual platform interfaces to provide a fine-grained permission system that allows least-privileged application development; (ii) leveraging existing perceptual interfaces to enforce access control on perceptual data, apply algorithmic privacy transforms to reduce the amount of sensitive content sent to the applications, and enable the users to audit/control the amount of perceptual data that reaches each application; and (iii) monitoring the applications' usage of perceptual interfaces to find anomalous high-risk cases. To demonstrate the efficacy of our approaches, first, we build a prototype perceptual platform that supports fine-grained privileges by redesigning the perceptual interfaces. We show that such a platform not only allows creation of least-privileged perceptual applications but also can improve performance by minimizing the overheads of executing multiple concurrent applications. Next, we build DARKLY, a security and privacy-aware perceptual platform that leverages existing perceptual interfaces to deploy several different security and privacy protection mechanisms: access control, algorithmic privacy transforms, and user audit. We find that DARKLY can run most existing perceptual applications with minimal changes while still providing strong security and privacy protection. Finally, We introduce peer group analysis, a new technique that detects anomalous high-risk perceptual interface usages by creating peer groups with software providing similar functionality and comparing each application's perceptual interface usages against those of its peers. We demonstrate that such peer groups can be created by leveraging information already available in software markets like textual descriptions and categories of applications, list of related applications, etc. Such automated detection of high-risk applications is essential for creating a safer perceptual ecosystem as it helps the users in identifying and installing safer applications with any desired functionality and encourages the application developers to follow the principle of least privilege. / text
Identifier | oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/25990
Date | 18 September 2014 |
Creators | Jana, Suman |
Source Sets | University of Texas |
Language | English |
Detected Language | English |
Type | Thesis |
Format | application/pdf |