1 |
Towards Improving Transparency, Intervenability, and Consent in HCI. Karegar, Farzaneh (January 2018).
Transparency of personal data processing is required by most Western privacy laws, including the new General Data Protection Regulation (GDPR), which takes effect in May 2018. The GDPR specifies that personal data shall be processed lawfully, fairly, and in a transparent manner. It strengthens people's rights to both ex-ante and ex-post transparency and intervenability. Equally important are the strict legal requirements for informed consent established by the GDPR. These legal privacy principles have Human-Computer Interaction (HCI) implications: people should comprehend the principles, be aware of when they apply, and be able to exercise them. Pursuant to the GDPR, transparent information about personal data processing should be concise, intelligible, and provided in an easily accessible form. Addressing these HCI implications depends on narrowing the gap between legal and user-centric transparency, intervenability, and consent. Enhancing individuals' control in a usable way helps people stay aware of the flow of their personal information, control their data, make informed decisions, and ultimately preserve their privacy. The objective of this thesis is to propose usable tools and solutions that enhance people's control and enforce legal privacy principles, especially transparency, intervenability, and informed consent. To achieve this goal, different ways to improve ex-ante transparency and informed consent are investigated by designing and testing new solutions for effective consent forms. Moreover, ex-post transparency and intervenability are improved by designing a transparency-enhancing tool and investigating users' perceptions of data portability and transparency in that tool. The results contribute to the body of knowledge by mapping legal privacy principles to HCI solutions, unveiling HCI problems and answers that arise when aiming for legal compliance, and proposing effective designs for obtaining informed consent.

Note: the third article, Karegar, F., "User Evaluations of an App Interface for Cloud-based Identity Management" (manuscript/preprint), was still in manuscript form at the time of the licentiate defense.
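To make the ex-ante/ex-post distinction above concrete, here is a minimal sketch of a consent record that supports consent capture (ex-ante), later inspection and export (ex-post transparency, data portability), and withdrawal (intervenability). All class, field, and method names are illustrative assumptions, not the designs proposed in the thesis.

```python
# Illustrative sketch only: the structure and field names are assumptions,
# not the consent-form designs evaluated in the thesis.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class ConsentRecord:
    controller: str                    # who processes the data
    purpose: str                       # specific, explicit purpose of processing
    data_categories: list              # which personal data items are covered
    granted_at: str                    # timestamp of the ex-ante consent
    withdrawn_at: Optional[str] = None # set when the user exercises intervenability

    def withdraw(self):
        self.withdrawn_at = datetime.now(timezone.utc).isoformat()

    def export(self) -> str:
        # ex-post transparency / data portability: machine-readable copy
        return json.dumps(asdict(self), indent=2)

record = ConsentRecord(
    controller="example-service.com",
    purpose="account registration",
    data_categories=["name", "email"],
    granted_at=datetime.now(timezone.utc).isoformat(),
)
record.withdraw()
print(record.export())
```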
|
2 |
Social Cybersecurity: Reshaping Security Through an Empirical Understanding of Human Social Behavior. Das, Sauvik (01 May 2017).
Despite substantial effort by the usable security community to facilitate the use of recommended security systems and behaviors, much security advice is ignored and many security systems are underutilized. I argue that this disconnect can partially be explained by the fact that security behaviors have myriad unaccounted-for social consequences. For example, by using two-factor authentication, one might be perceived as "paranoid". By encrypting an e-mail correspondence, one might be perceived as having something to hide. Yet, to date, little theoretical work in usable security has applied theory from social psychology to understand how these social consequences affect people's security behaviors. Likewise, little systems work in usable security has taken social factors into consideration. To bridge these gaps in literature and practice, I begin to build a theory of social cybersecurity and apply those theoretical insights to create systems that encourage better cybersecurity behaviors.

First, through a series of interviews, surveys, and a large-scale analysis of how security tools diffuse through the social networks of 1.5 million Facebook users, I empirically model how social influences affect the adoption of security behaviors and systems. In so doing, I provide some of the first direct evidence that security behaviors are strongly driven by social influence, and that the design of a security system strongly influences its potential for social spread. Specifically, security systems that are more observable, inclusive, and stewarded are positively affected by social influence, while those that are not are negatively affected by social influence.

Based on these empirical results, I put forth two prescriptions: (i) creating socially grounded interface "nudges" that encourage better cybersecurity behaviors, and (ii) designing new, more socially intelligent end-user-facing security systems. As an example of a social "nudge", I designed a notification that informs Facebook users that their friends use optional security systems to protect their own accounts. In an experimental evaluation with 50,000 Facebook users, I found that this social notification was significantly more effective than a non-social control notification at attracting clicks to improve account security and at motivating the adoption of promoted, optional security tools. As an example of a socially intelligent cybersecurity system, I designed Thumprint: an inclusive authentication system that authenticates and identifies individual members of a small, local group through a single, shared secret knock. Through my evaluations, I found that Thumprint is resilient to casual but motivated adversaries and that it can reliably differentiate multiple group members who share the same secret knock. Taken together, these systems point towards a future of socially intelligent cybersecurity that encourages better security behaviors. I conclude with a set of descriptive and prescriptive takeaways, as well as a set of open problems for future work.
Concretely, this thesis provides the following contributions: (i) an initial theory of social cybersecurity, developed from both observational and experimental work, that explains how social influences affect security behaviors; (ii) a set of design recommendations for creating socially intelligent security systems that encourage better cybersecurity behaviors; (iii) the design, implementation and comprehensive evaluation of two such systems that leverage these design recommendations; and (iv) a reflection on how the insights uncovered in this work can be utilized alongside broader design considerations in HCI, security and design to create an infrastructure of useful, usable and socially intelligent cybersecurity systems.
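As an illustration of the Thumprint concept described above, the sketch below matches a knock against enrolled group members by its rhythm (the gaps between taps). The real system uses richer touch and acoustic features; the interval representation, distance measure, and threshold here are assumptions made for the example.

```python
# Sketch of rhythm-based knock matching. Thumprint itself uses richer
# touch/acoustic features; the features and threshold here are assumptions.
def intervals(tap_times):
    """Convert absolute tap timestamps (seconds) into gaps between taps."""
    return [b - a for a, b in zip(tap_times, tap_times[1:])]

def distance(candidate, template):
    """Mean absolute difference between two interval sequences."""
    if len(candidate) != len(template):
        return float("inf")  # wrong number of knocks: reject outright
    return sum(abs(c - t) for c, t in zip(candidate, template)) / len(template)

def verify(tap_times, enrolled_templates, threshold=0.08):
    """Accept and identify a member if the knock is close enough to an enrolled template."""
    cand = intervals(tap_times)
    best_dist, best_name = min((distance(cand, intervals(t)), name)
                               for name, t in enrolled_templates.items())
    return best_name if best_dist <= threshold else None

# Two group members enrolled with the same secret knock, slightly different timing.
enrolled = {
    "alice": [0.00, 0.30, 0.45, 0.90],
    "bob":   [0.00, 0.34, 0.52, 0.98],
}
print(verify([0.00, 0.31, 0.46, 0.92], enrolled))  # -> 'alice'
print(verify([0.00, 0.80, 1.60, 2.40], enrolled))  # -> None (rejected)
```

The identification step (returning the closest enrolled member rather than just accept/reject) mirrors the claim above that the system can differentiate group members who share the same secret knock.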
|
3 |
Privacy agents in the IoT: considerations on how to balance agent autonomy and user control in privacy decisions. Colnago, Jessica Helena (20 June 2016).
This thesis explored aspects that can help balance the level of user control and system autonomy for intelligent privacy agents in the context of the Internet of Things. The proposed balance could be reached by considering aspects related to wanting to be interrupted in order to have control, and being able to be interrupted in order to exert that control. Through a review of the interruption and privacy literature, variables related to these two perspectives were identified, leading to the variable set "Intelligent Privacy Interruptions". To verify and validate this set, two research actions were performed. The first was an online survey that served as a sanity check that these variables were acceptable in this work's context. The second was an experience-sampling user study with 21 participants that allowed us to better understand how user behavior is informed by these variables. Based on these two interventions, it was possible to note that the selected variables appear relevant and can be used to inform the development and design of privacy agents. The partial results are still limited, mainly by the study's duration and the size of the participant group; nevertheless, through a quantitative analysis of the data collected from the user study and a qualitative analysis of the exit interviews, it was possible to observe a common mental process among participants when deciding whether to withhold decision control or delegate it to the agent. Future studies should verify whether this can be expanded into a behavior and preference model that can be integrated into the decision-making system of intelligent privacy agents.
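As a rough illustration of the balancing idea investigated in this thesis, the sketch below lets an agent choose between interrupting the user and deciding autonomously by combining a "wanting to be interrupted" estimate with a "being able to be interrupted" estimate. The variables, weights, and threshold are assumptions for illustration and are not the validated "Intelligent Privacy Interruptions" variable set.

```python
# Illustrative only: variable names, weights, and the threshold are assumptions,
# not the thesis's behavior/preference model.
def should_interrupt(wants_control: float, decision_sensitivity: float,
                     user_busyness: float, threshold: float = 0.5) -> bool:
    """
    wants_control        - how much the user wants to decide this type of disclosure (0..1)
    decision_sensitivity - how sensitive the requested data/context is (0..1)
    user_busyness        - how occupied (uninterruptible) the user currently is (0..1)
    Returns True if the agent should ask the user, False if it may decide alone.
    """
    wanting = 0.6 * wants_control + 0.4 * decision_sensitivity  # "wanting" to be interrupted
    able = 1.0 - user_busyness                                   # "being able" to be interrupted
    return wanting * able >= threshold

# A sensitive location-sharing request while the user is idle -> ask the user.
print(should_interrupt(wants_control=0.9, decision_sensitivity=0.8, user_busyness=0.1))  # True
# A routine, low-sensitivity request while the user is in a meeting -> decide autonomously.
print(should_interrupt(wants_control=0.3, decision_sensitivity=0.2, user_busyness=0.9))  # False
```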
|
4 |
Designing for user awareness and usability: An evaluation of authorization dialogs on a mobile device. Lindegren, Daniel (January 2017).
Personal data is often disclosed with every registration, sharing action, or request made to an online service. As more things are connected to the Internet and more user information is collected and stored, the risks of unknowingly sharing personal data increase. Sharing personal information is a sensitive subject and can harm people's assets, dignity, personal integrity, and other social aspects of their lives. In general, users' concerns about protecting their personal information have grown, which has led to the development of multiple privacy-oriented systems. When logging onto a website or system through identity management (single sign-on) systems, users rarely notice, understand, or wish to read the conditions to which they are implicitly agreeing. For systems that deal with privacy, it is critical that researchers examine users' understanding of the underlying concepts through interface design. The purpose of this study is to investigate the usability of identity management systems on mobile devices, and users' awareness of the data transactions they perform, by constructing and evaluating different design concepts. Four mobile prototypes (called CREDENTIAL Wallet) were designed and explored to measure usability and users' awareness of their disclosures; 20 usability tests were conducted per prototype. Multiple conclusions can be drawn from this study. The drag-and-drop prototype scored high on user awareness: participants remembered their shared data and had a good idea that they had not shared more than they actually did. It also achieved the highest usability result. A prototype that utilized swiping was created to fit the mobile medium; it showed the highest user-awareness score in terms of participants stating what data they had shared. However, people using the swiping prototype thought they were sharing more data than they actually were, and the data show that users hold an incorrect mental model of how their fingerprint pattern is shared. Finally, the thesis discusses recommendations and challenges for identity management systems, e.g., the importance of tutorial screens. Future studies within the CREDENTIAL project are already underway concerning users' incorrect mental model of sharing fingerprints with the service provider.
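One way to picture the awareness findings above is to score a participant's recall of shared items against the items actually shared, so that both over-reporting (e.g., believing the fingerprint pattern was shared) and under-reporting show up. The metric below is an assumed illustration, not the scoring used in the study.

```python
# Illustrative awareness metric; the study's actual scoring may differ.
def awareness(recalled: set, actually_shared: set):
    correct = recalled & actually_shared
    precision = len(correct) / len(recalled) if recalled else 0.0        # over-reporting lowers this
    recall = len(correct) / len(actually_shared) if actually_shared else 0.0  # under-reporting lowers this
    return {"precision": round(precision, 2), "recall": round(recall, 2)}

# Participant believed they also shared their fingerprint pattern (they did not),
# and forgot that their date of birth was shared.
print(awareness(recalled={"name", "email", "fingerprint"},
                actually_shared={"name", "email", "date of birth"}))
# -> {'precision': 0.67, 'recall': 0.67}
```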
|
5 |
Designing for Usable Privacy and Transparency in Digital Transactions: Exploring and enhancing the usability and user experience aspects of selected privacy and transparency technologies. Angulo, Julio (January 2015).
People engage with multiple online services and carry out a range of different digital transactions with these services. Registering an account, sharing content in social networks, or requesting products or services online are a few examples of such digital transactions. With every transaction, people take decisions and make disclosures of personal data. Despite the possible benefits of collecting data about a person or a group of people, massive collection and aggregation of personal data carries a series of privacy and security implications which can ultimately result in a threat to people's dignity, their finances, and many other aspects of their lives. For this reason, privacy and transparency enhancing technologies are being developed to help people protect their privacy and personal data online. However, some of these technologies are usually hard to understand, difficult to use, and get in the way of people's momentary goals. The objective of this thesis is to explore, and iteratively improve, the usability and user experience provided by novel privacy and transparency technologies. To this end, it compiles a series of case studies that address identified issues of usable privacy and transparency at four stages of a digital transaction, namely the information, agreement, fulfilment and after-sales stages. These studies contribute a better understanding of the human factors and design requirements that are necessary for creating user-friendly tools that can help people to protect their privacy and to control their personal information on the Internet.
|
6 |
Usable privacy for digital transactions: Exploring the usability aspects of three privacy enhancing mechanisms. Angulo, Julio (January 2012).
The amount of personally identifiable information that people distribute over different online services has grown rapidly and considerably over the last decades. This has increased the probability of identity theft, profiling, and linkability attacks, which can in turn threaten not only people's personal dignity, finances, and many other aspects of their lives, but also societies in general. Methods and tools for securing people's online activities and protecting their privacy on the Internet, so-called Privacy Enhancing Technologies (PETs), are being designed and developed. However, these technologies are often seen by ordinary users as complicated and disruptive of their primary tasks. In this licentiate thesis, I investigate the usability aspects of three main privacy- and security-enhancing mechanisms. These mechanisms have the goal of helping and encouraging users to protect their privacy on the Internet as they engage in some of the steps necessary to complete a digital transaction. The three mechanisms, which have been investigated within the scope of different research projects, comprise (1) graphical visualizations of service providers' privacy policies and user-friendly management and matching of users' privacy preferences "on the fly", (2) methods for helping users create appropriate mental models of the data minimization property of anonymous credentials, and (3) touch-screen biometrics as a method to authenticate users on mobile devices and verify their identities during a digital transaction. Results from these investigations suggest that these mechanisms can make digital transactions privacy-friendly and secure while at the same time delivering convenience and usability to ordinary users.
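A small sketch of the core idea behind the first mechanism: matching a service provider's data request against the user's stored privacy preferences "on the fly" and flagging mismatches for the user's attention. The preference format, field names, and example values are assumptions made for illustration; the policy languages and visualizations evaluated in the thesis are considerably richer.

```python
# Illustrative preference-matching sketch; not the evaluated policy language or UI.
user_preferences = {
    "email":        {"allowed_purposes": {"login", "order confirmation"}},
    "phone number": {"allowed_purposes": set()},            # never share
    "address":      {"allowed_purposes": {"delivery"}},
}

provider_request = [
    {"item": "email",        "purpose": "order confirmation"},
    {"item": "address",      "purpose": "marketing"},
    {"item": "phone number", "purpose": "delivery"},
]

def match(request, preferences):
    """Return the requested disclosures that conflict with the user's preferences."""
    mismatches = []
    for r in request:
        pref = preferences.get(r["item"], {"allowed_purposes": set()})
        if r["purpose"] not in pref["allowed_purposes"]:
            mismatches.append(r)
    return mismatches

for m in match(provider_request, user_preferences):
    print(f"Warn user: '{m['item']}' requested for '{m['purpose']}' falls outside their preferences")
```

In a policy-visualization UI, the flagged mismatches are what would be highlighted to the user before they decide whether to disclose.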
|