
Private peer-to-peer similarity computation in personalized collaborative platforms

In this thesis, we consider a distributed collaborative platform in which each peer hosts his private information, such as the URLs he liked, the news articles that grabbed his interest, or the videos he watched, on his own machine. Without relying on a trusted third party, the peer then engages in a distributed protocol, combining his private data with other peers' private data to perform collaborative filtering. The main objective is to receive personalized recommendations or other services, such as a personalized distributed search engine. User-based collaborative filtering protocols, which depend on computing user-to-user similarity, have been applied to such distributed systems. However, since computing the similarity between users requires access to their private profiles, this raises serious privacy concerns.

In this thesis, we address the problem of privately computing similarities between peers in collaborative platforms. Our work provides a private primitive for similarity computation that can make collaborative protocols privacy-friendly, and it addresses the unique challenges of applying privacy-preserving similarity computation to dynamic, large-scale systems.

In particular, we introduce a two-party cryptographic protocol that ensures differential privacy, a strong notion of privacy. We then solve the privacy-budget issue, which would otherwise prevent peers from computing their similarities more than a fixed number of times, by introducing the notion of a bidirectional anonymous channel. We also develop a heterogeneous variant of differential privacy that can provide different levels of privacy to different users, and even to different items within a single user's profile, thus taking different privacy expectations into account. Moreover, we propose a very efficient non-interactive protocol for releasing a small, private representation of a peer's profile from which similarity can be estimated. Finally, we study the problem of choosing an appropriate privacy parameter, both theoretically and empirically, by designing several inference attacks that demonstrate for which values of the privacy parameter the resulting privacy level is acceptable.
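To make the non-interactive release concrete, the following Python sketch illustrates one plausible instantiation: a BLIP-style mechanism in which a peer's profile is summarized as a Bloom filter and perturbed by randomized response before being published once. This is a toy illustration under stated assumptions, not the thesis's actual construction; the names bloom_bits, blip, and estimate_inner_product, as well as the parameters m, k, and epsilon, are hypothetical choices for exposition.

    import hashlib
    import math
    import random

    def bloom_bits(items, m=256, k=4):
        # Hash every item into k positions of an m-bit Bloom filter,
        # giving a small binary representation of a peer's profile.
        bits = [0] * m
        for item in items:
            for j in range(k):
                digest = hashlib.sha256(f"{j}:{item}".encode()).hexdigest()
                bits[int(digest, 16) % m] = 1
        return bits

    def blip(bits, epsilon, k=4):
        # Randomized response: flip each bit independently with
        # probability 1 / (1 + e^(epsilon/k)). Since adding or removing
        # one item changes at most k bits, the released filter satisfies
        # epsilon-differential privacy for profiles differing in one item.
        p_flip = 1.0 / (1.0 + math.exp(epsilon / k))
        return [b ^ (random.random() < p_flip) for b in bits]

    def estimate_inner_product(noisy_a, noisy_b, epsilon, k=4):
        # Each perturbed bit has expectation p + b*(1 - 2p); inverting
        # this gives an unbiased per-bit estimate, and the sum of
        # products estimates the inner product of the original filters,
        # which serves as a similarity proxy between the two profiles.
        p = 1.0 / (1.0 + math.exp(epsilon / k))
        def debias(b):
            return (b - p) / (1.0 - 2.0 * p)
        return sum(debias(x) * debias(y) for x, y in zip(noisy_a, noisy_b))

    # Example: two peers each publish a perturbed filter once; anyone can
    # then estimate their similarity without any further interaction,
    # which sidesteps repeated spending of the privacy budget.
    alice = blip(bloom_bits({"url-a", "url-b", "url-c"}), epsilon=3.0)
    bob = blip(bloom_bits({"url-b", "url-c", "url-d"}), epsilon=3.0)
    print(estimate_inner_product(alice, bob, epsilon=3.0))

The estimate is noisy, with variance governed by the flip probability: smaller values of epsilon give stronger privacy but a less accurate similarity estimate, which is exactly the trade-off the inference attacks mentioned above are designed to probe.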

Identifier oai:union.ndltd.org:CCSD/oai:tel.archives-ouvertes.fr:tel-00989164
Date 16 December 2013
Creators Alaggan, Mohammad
Publisher Université Rennes 1
Source Sets CCSD theses-EN-ligne, France
Language English
Detected Language English
Type PhD thesis
