1 |
Implementing Transparency Logging for an Issue Tracking System
Grahn, Christian, January 2012
On the Internet today, users are accustomed to disclosing personal information when accessing a new service. When a user does so, there is rarely a system in place which allows the user to monitor how his or her information is actually shared or used by services. One proposed solution to this problem is to have services perform transparency logging on behalf of users, informing them how their data is processed as processing is taking place. We have recently participated in a collaboration to develop a privacy-preserving secure logging scheme that can be used for the purpose of transparency logging. As part of that collaboration we created a proof of concept implementation. In this thesis, we elaborate on that implementation and integrate it with a minimalistic open source issue-tracking system. We evaluate the amount of work required to integrate the logging system and attempt to identify potential integration problems. Using this issue-tracking system we then design and implement a scenario that demonstrates the value of the logging system to the average user.
|
2 |
Privacy Enhancing Techniques for Digital Identity Management
Hasini T Urala Liyanage Dona Gunasinghe (8479665), 23 July 2021
Remotely proving and verifying a user's identity has become a critical and challenging problem in the online world, as the number of sensitive services offered online has grown. The digital identity management ecosystem has been evolving over the years to address this problem. However, the limitations of existing identity management approaches in handling this problem in a privacy-preserving and secure manner have caused disruptions to users' digital lives and damage to the revenue and reputation of service providers.

In this dissertation, we analyze different areas of the identity management ecosystem in terms of privacy and security. In our analysis, we observe three critical aspects to take into account when identifying the privacy and security requirements to address in identity management scenarios: i) protecting the privacy and security of users' digital identities and online transactions; ii) providing other stakeholders with assurance about user identity information and accountability of transactions; iii) preserving utility (e.g., accuracy, efficiency, and deployability).

We show that existing authentication models and identity management protocols fail to address critical privacy and security requirements related to all three aspects, mainly because of inherent conflicts among these requirements. For example, existing authentication protocols that aim to protect service providers from impostors by involving strong authentication factors, such as biometrics, fail to protect the privacy and security of users' biometrics. Protecting an identity management system against counterfeit identity assets, while preserving the unlinkability of transactions carried out using those assets, is another example of conflicting yet critical privacy and security requirements.

We demonstrate that careful combinations of cryptographic techniques and other technologies make it feasible to design privacy-preserving identity management protocols that address critical and conflicting requirements related to the three aspects above. Certain techniques that we developed for these protocols are independent contributions with applications beyond the domain of digital identity management. We validate our contributions by providing prototype implementations, experimental evaluations, and security proofs.
|
4 |
Extended Abstracts of the Fourth Privacy Enhancing Technologies Convention (PET-CON 2009.1)
Köpsell, Stefan; Loesing, Karsten, 21 February 2012
PET-CON, the Privacy Enhancing Technologies Convention, is a forum for researchers, students, developers, and other interested people to discuss novel research, current developments, and techniques in the area of Privacy Enhancing Technologies. PET-CON was first conceived in June 2007 at the 7th International PET Symposium in Ottawa, Canada. The idea was to set up a twice-yearly convention in or near Germany, making it possible to meet more often than just once a year at a major conference.
|
5 |
Preserving Privacy in Transparency Logging
Pulls, Tobias, January 2015
The subject of this dissertation is the construction of privacy-enhancing technologies (PETs) for transparency logging, a technology at the intersection of privacy, transparency, and accountability. Transparency logging facilitates the transportation of data from service providers to users of services and is therefore a key enabler for ex-post transparency-enhancing tools (TETs). Ex-post transparency provides information to users about how their personal data have been processed by service providers, and is a prerequisite for accountability: you cannot hold a controller accountable for what is unknown. We present three generations of PETs for transparency logging to which we contributed. We start with early work that defined the setting as a foundation and build upon it to increase both the privacy protections and the utility of the data sent through transparency logging. Our contributions include the first provably secure privacy-preserving transparency logging scheme and a forward-secure append-only persistent authenticated data structure tailored to the transparency logging setting. Applications of our work range from notifications and deriving data disclosures for the Data Track tool (an ex-post TET) to secure evidence storage.
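The forward-security property central to this line of work can be illustrated with a minimal toy sketch (not the dissertation's actual scheme; the class and key names are invented): each log entry is authenticated with a key that is immediately evolved through a one-way hash and then discarded, so an attacker who compromises the current key cannot forge or alter entries logged earlier.

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    # One-way key evolution: earlier keys cannot be recovered from later ones.
    return hashlib.sha256(b"evolve" + key).digest()

class ForwardSecureLog:
    def __init__(self, initial_key: bytes):
        self.key = initial_key
        self.entries = []  # list of (message, tag) pairs

    def append(self, message: bytes) -> None:
        tag = hmac.new(self.key, message, hashlib.sha256).digest()
        self.entries.append((message, tag))
        self.key = evolve(self.key)  # old key is discarded after each entry

    @staticmethod
    def verify(initial_key: bytes, entries) -> bool:
        # An auditor holding the initial key replays the key evolution
        # and checks every entry's tag in order.
        key = initial_key
        for message, tag in entries:
            expected = hmac.new(key, message, hashlib.sha256).digest()
            if not hmac.compare_digest(expected, tag):
                return False
            key = evolve(key)
        return True
```

In this sketch, tampering with any past entry breaks verification, while compromising the logger's current key gives no ability to rewrite history, which is the essence of forward security in logging.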
|
6 |
A model for compound purposes and reasons as a privacy enhancing technology in a relational database
Van Staden, W.J.S. (Wynand Johannes Christiaan), 29 July 2011
The protection of individuals' privacy-related information is receiving increasing attention. Particular focus falls on protecting users' interactions with other users or with service providers, which centres on anonymising the user's actions, or protecting "what we do". An equally important aspect is protecting the information related to a user that is stored in some electronic form (protecting "who we are"). This may be profile information on a social networking site, or personal information in a bank's database. A typical approach to protecting the user (data owner) in this case is to tag their data with the "purpose" the collecting entity (data controller) has for the data. These purposes are in most cases singular in nature (the data has a single purpose, never a combination of purposes) and provide little flexibility when specifying a privacy policy. Moreover, in all cases the user accessing the data (data user) does little to state their intent with the data. This text proposes and examines new types of purposes called compound purposes, which are combinations of singular or other compound purposes. In addition to the notion of compound purposes, compound reasons are presented; a compound reason represents the intent of the entity using the data (the data user). Also considered are the benefits of having the data user state their intent explicitly, the verification of compound reasons (the data user's statement of intent) against compound purposes, the integration of compound statements into existing technologies such as SQL through a model for using compound purposes and reasons in a relational database management system, and the use of compounds (purposes and reasons) as a method for managing privacy agreements.

Thesis (PhD), University of Pretoria, 2011. Computer Science.
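One plausible reading of compound-purpose verification can be sketched as follows (an illustrative toy, not the thesis's model: the policy language, operator names, and example atoms are all invented here). A compound purpose attached to stored data is a boolean formula over atomic purposes; a data user's stated reason is the set of atomic purposes they intend, and access is granted only if the reason satisfies the formula.

```python
from typing import Union

# A compound purpose is either an atom (str) or a tuple
# ("AND"/"OR", sub1, sub2, ...), where each sub is itself a purpose.
Purpose = Union[str, tuple]

def satisfies(reason: set, purpose: Purpose) -> bool:
    """Check whether a data user's stated reason (a set of atomic
    purposes) satisfies the compound purpose attached to the data."""
    if isinstance(purpose, str):
        return purpose in reason
    op, *subs = purpose
    if op == "AND":
        return all(satisfies(reason, s) for s in subs)
    if op == "OR":
        return any(satisfies(reason, s) for s in subs)
    raise ValueError(f"unknown operator: {op}")

# Example policy: an email address may be used for billing, or for
# marketing only in combination with analytics consent.
policy = ("OR", "billing", ("AND", "marketing", "analytics"))
```

Under this toy semantics, a reason of `{"billing"}` or `{"marketing", "analytics"}` is accepted, while `{"marketing"}` alone is rejected; in a relational setting the check would run per row or column at query time.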
|
7 |
Technologies respectueuses de la vie privée pour le covoiturage / Privacy-enhancing technologies for ridesharing
Aïvodji, Ulrich Matchi, 24 January 2018
The emergence of mobile phones and connected objects has profoundly changed our daily lives. These devices, thanks to the multitude of sensors they embark, allow access to a broad spectrum of services. In particular, position sensors have contributed to the development of location-based services such as navigation, ridesharing, and real-time congestion tracking. Despite the comfort offered by these services, the collection and processing of location data seriously infringe the privacy of users. In fact, these data can inform service providers about users' points of interest (home, workplace, sexual orientation), habits, and social networks. In general, the protection of users' privacy can be ensured by legal or technical provisions. While legal measures may discourage service providers and malicious individuals from infringing users' privacy rights, the effects of such measures are only observable once the offense has already been committed and detected.

On the other hand, the use of privacy-enhancing technologies (PETs) from the design phase of systems can reduce the success rate of attacks on the privacy of users. The main objective of this thesis is to demonstrate the viability of PETs as a means of protecting location data in ridesharing services. This type of location-based service, by allowing drivers to share empty seats in their vehicles, helps reduce congestion, CO2 emissions, and dependence on fossil fuels. In this thesis, we study the itinerary-synchronization and matching problems that arise in ridesharing, with explicit consideration of constraints on protecting location data (origin, destination). The solutions proposed in this thesis combine multimodal routing algorithms with several privacy-enhancing technologies, such as homomorphic encryption, private set intersection, secret sharing, and secure integer comparison. They guarantee privacy properties including anonymity, unlinkability, and data minimization. In addition, they are compared with conventional solutions that do not protect privacy. Our experiments indicate that location-data protection constraints can be taken into account in ridesharing services without degrading their performance.
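One of the building blocks listed above, additive secret sharing, can be shown in a minimal two-party sketch (illustrative only, not the thesis's protocol; the riders, times, and modulus are invented). Each private value is split into two random-looking shares; parties compute on shares locally, and only the combined result, here the difference between two departure times, is revealed.

```python
import secrets

MOD = 2**32  # all arithmetic is done modulo a fixed public modulus

def share(value: int) -> tuple:
    # Split a private value into two additive shares; each share
    # on its own is uniformly random and reveals nothing.
    r = secrets.randbelow(MOD)
    return r, (value - r) % MOD

def reconstruct(s1: int, s2: int) -> int:
    return (s1 + s2) % MOD

# Two riders secret-share their departure times (minutes since midnight).
alice_a, alice_b = share(8 * 60 + 30)   # 08:30
bob_a, bob_b = share(8 * 60 + 45)       # 08:45

# Each share holder locally subtracts the shares it holds; combining
# the two local results yields only the time difference, not the inputs.
diff = reconstruct(bob_a - alice_a, bob_b - alice_b)
```

Here `diff` comes out to 15 minutes, which a matching service could compare against a synchronization threshold without ever learning either rider's actual departure time.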
|
8 |
PET-Exchange: A Privacy Enhanced Trading Framework: A Framework for Limit-Order Matching using Homomorphic Encryption in Trading / PET-Exchange: Ett Ramverk för Integritetsbevarande Limitordrar i Kontinuerliga Auktioner med Homomorfisk Kryptering
Wahlman, Jacob, January 2022
Over recent decades, an increasing number of new traders have entered the securities markets to trade securities such as stocks and bonds on electronic and physical exchanges. This increase in trader activity can largely be attributed to a simpler trading process, including the growth of electronic securities exchanges that allow for more dynamic and global trading platforms. Ever since their introduction, electronic exchanges have grown in terms of volume traded, while the underlying trading mechanisms have mostly stayed the same, with some additions and improvements. Over the recent decade, however, high-frequency traders (HFTs) using algorithmic trading have shifted the playing field with practices that many consider unethical, and insider trading continues to cause trust issues on certain trading platforms. Multiple solutions to these kinds of unethical trading behaviors have been proposed, among them homomorphic encryption as a potential preventative mechanism. This thesis analyses the properties and effects of a privacy-preserving framework for trading securities on an electronic stock exchange. To evaluate the effects on trading, a framework for handling trading and matching encrypted orders was implemented. The framework was then evaluated against its unencrypted counterpart to compare performance in terms of volume handled, number of orders matched, and the timing of certain instructions. Finally, the security properties were analyzed to understand the proposed solution's potential impact on transparency, fairness, and opportunities for financial crime in an electronic securities exchange. The implementation's privacy-preserving properties were evaluated through its ability to prevent information disclosure in trading processes.

Furthermore, the performance of the implementation was evaluated using a generated trading session simulating the market with sample trade data. From the proposed framework and these findings on privacy preservation and performance, a conclusion is presented regarding its applicability as an alternative to off-exchange trading and as a preventative measure against unfair practices and financial crime in trading. The evaluation showed that the privacy-preserving and cryptographic properties of the suggested encrypted exchange were reasonably strong and fulfilled the goal of preventing unfair advantages stemming from access to plaintext order information. However, the performance of the suggested implementation shows that more work is needed for it to be viable on public electronic stock exchanges, although the solution could be suitable for small-scale trading and privacy-preserving auctions.
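The additive homomorphism that an encrypted exchange of this kind relies on can be demonstrated with a toy Paillier cryptosystem (a standard textbook construction, not the thesis's implementation; the parameters below are tiny and deliberately insecure, chosen only so the arithmetic is easy to follow). The key property is that multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
from math import gcd

# Toy Paillier with tiny, insecure parameters.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m: int, r: int) -> int:
    # r must be coprime to n; fixed values are used here for reproducibility.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphically add two order quantities without decrypting either:
# Enc(5) * Enc(12) mod n^2 decrypts to 17.
c1 = encrypt(5, 7)
c2 = encrypt(12, 11)
total = decrypt((c1 * c2) % n2)
```

An exchange can thus aggregate encrypted quantities directly; note that order *matching* additionally requires comparing encrypted prices, which plain Paillier does not support on its own and which is where much of the framework's engineering effort lies.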
|
9 |
Usability Issues in the User Interfaces of Privacy-Enhancing Technologies
LaTouche, Lerone W., 1 January 2013
Privacy on the Internet has become one of the leading concerns for Internet users, and these concerns are well founded when personally identifiable information is not protected and under the users' control. To minimize the collection of Internet users' personal information and help solve the problem of online privacy, a number of privacy-enhancing technologies have been developed. These privacy-enhancing technologies still have usability issues in their user interfaces, because Internet users do not have the choices required to monitor and control their personal data once released into online repositories. Current research shows a need to improve the overall usability of privacy-enhancing technology user interfaces: a properly designed interface gives Internet users confidence that they can monitor and control all aspects of their personal data. Specific methods and criteria for assessing the usability of privacy-enhancing technology user interfaces either have not been developed or have not been widely published, leading to complex user interfaces that negatively affect the privacy and security of Internet users' personal data.

This study focused on the development of a conceptual framework to provide a sound foundation for assessing the user interfaces of Web-based privacy-enhancing technologies for user-controlled e-privacy features. The study investigated the extent to which user testing and heuristic evaluation help identify the lack of user-controlled e-privacy features and usability problems in selected privacy-enhancing technology user interfaces. The outcome of this research was a domain-specific heuristics checklist with criteria for the future evaluation of the user interfaces of privacy-enhancing technology applications. The results show the domain-specific heuristics checklist uncovered more usability problems, and a higher number of severe problems, than the general heuristics. This suggests domain-specific heuristics can serve as a discount usability technique, reinforcing the principle that heuristics should be easy to use and learn. The checklist should be of interest to privacy and security practitioners involved in developing user interfaces for privacy-enhancing technologies. This research supplements the literature on human-computer interaction, personal data protection, and privacy management.
|
10 |
Nymbler: Privacy-enhanced Protection from Abuses of AnonymityHenry, Ryan January 2010 (has links)
Anonymous communications networks help to solve the real and important problem of enabling users to communicate privately over the Internet. However, by doing so, they also introduce an entirely new problem: How can service providers on the Internet---such as websites, IRC networks and mail servers---allow anonymous access while protecting themselves against abuse by misbehaving anonymous users?
Recent research efforts have focused on using anonymous blacklisting systems (also known as anonymous revocation systems) to solve this problem. As opposed to revocable anonymity systems, which enable some trusted third party to deanonymize users, anonymous blacklisting systems provide a way for users to authenticate anonymously with a service provider, while enabling the service provider to revoke access from individual misbehaving anonymous users without revealing their identities. The literature contains several anonymous blacklisting systems, many of which are impractical for real-world deployment. In 2006, however, Tsang et al. proposed Nymble, which solves the anonymous blacklisting problem very efficiently using trusted third parties. Nymble has inspired a number of subsequent anonymous blacklisting systems. Some of these use fundamentally different approaches to accomplish what Nymble does without using third parties at all; so far, these proposals have all suffered from serious performance and scalability problems. Other systems build on the Nymble framework to reduce Nymble's trust assumptions while maintaining its highly efficient design.
The primary contribution of this thesis is a new anonymous blacklisting system built on the Nymble framework---a nimbler version of Nymble---called Nymbler. We propose several enhancements to the Nymble framework that facilitate the construction of a scheme that minimizes trust in third parties. We then propose a new set of security and privacy properties that anonymous blacklisting systems should possess to protect: 1) users' privacy against malicious service providers and third parties (including other malicious users), and 2) service providers against abuse by malicious users. We also propose a set of performance requirements that anonymous blacklisting systems should meet to maximize their potential for real-world adoption, and formally define some optional features in the anonymous blacklisting systems literature.
We then present Nymbler, which improves on existing Nymble-like systems by reducing the level of trust placed in third parties, while simultaneously providing stronger privacy guarantees and some new functionality. It avoids dependence on trusted hardware and unreasonable assumptions about non-collusion between trusted third parties. We have implemented all key components of Nymbler, and our measurements indicate that the system is highly practical. Our system solves several open problems in the anonymous blacklisting systems literature, and makes use of some new cryptographic constructions that are likely to be of independent theoretical interest.
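The core idea behind Nymble-style systems can be sketched with a one-way seed chain (an illustrative simplification; Nymbler's actual construction uses verifiable cryptographic primitives and trusted parties beyond this toy). A user derives one pseudonym ("nymble") per time period from a seed that evolves through a one-way hash. If the user misbehaves in some period, disclosing that period's seed lets the service provider recompute and block all *future* nymbles in the chain, while earlier periods remain unlinkable.

```python
import hashlib

def next_seed(seed: bytes) -> bytes:
    # One-way seed evolution: past seeds cannot be derived from later ones.
    return hashlib.sha256(b"seed" + seed).digest()

def nymble(seed: bytes) -> bytes:
    # The pseudonym presented to the service provider in one time period.
    return hashlib.sha256(b"nymble" + seed).digest()

def nymbles_from(seed: bytes, count: int) -> list:
    out = []
    for _ in range(count):
        out.append(nymble(seed))
        seed = next_seed(seed)
    return out

# A user's pseudonym chain for 5 time periods.
user_seed = b"user-secret-seed"
chain = nymbles_from(user_seed, 5)

# After misbehavior in period 2, the seed for that period is disclosed.
# The provider can now recompute nymbles for periods 2..4 (and block
# them) but cannot link the user's activity in periods 0 and 1.
seed_2 = next_seed(next_seed(user_seed))
blocked = nymbles_from(seed_2, 3)
```

This captures the backward-unlinkability trade-off the abstract describes: revocation of future access without deanonymizing past sessions.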
|