11 |
Measures of Privacy Protection on Social Environments
Alemany Bordera, José, 13 October 2020
Thesis by compendium / [EN] Nowadays, online social networks (OSNs) have become a mainstream cultural phenomenon for millions of Internet users. Social networks are an ideal environment for
generating all kinds of social benefits for users. Users share experiences, keep in touch
with their family, friends and acquaintances, and earn economic benefits from the
power of their influence (which is translated into new job opportunities). However,
the use of social networks and the action of sharing information imply the loss of the
users’ privacy.
Recently, a great interest in protecting the privacy of users has emerged. This situation
has been due to documented cases of regret over users' actions, company scandals produced by the misuse of personal information, and the biases introduced by privacy mechanisms. Social network providers have included improvements in their systems to reduce
users’ privacy risks; for example, restricting privacy policies by default, adding new privacy settings, and designing quick and easy shortcuts to configure user privacy settings.
In the privacy research area, new advances have been proposed to improve privacy mechanisms, most of them focused on automation, fine-grained systems, and the usage of
features extracted from the user’s profile information and interactions to recommend
the best privacy policy for the user. Despite these advances, many studies have shown
that users’ concern for privacy does not match the decisions they ultimately make in
social networks. This misalignment in the users’ behavior might be due to the complexity of the privacy concept itself. This drawback causes users to disregard privacy risks,
or perceive them as temporarily distant. Another cause of users’ behavior misalignment might be due to the complexity of the privacy decision-making process. This is
because users should consider all possible scenarios and the factors involved (e.g., the
number of friends, the relationship type, the context of the information, etc.) to make
an appropriate privacy decision.
The main contributions of this thesis are the development of metrics to assess privacy
risks, and the proposal of explainable privacy mechanisms (using the developed metrics) to assist and raise awareness among users during the privacy decision process.
Based on the definition of the concept of privacy, the dimensions of information scope
and information sensitivity have been considered in this thesis to assess privacy risks.
For explainable privacy mechanisms, soft paternalism techniques and gamification elements that make use of the proposed metrics have been designed. These mechanisms
have been integrated into the social network PESEDIA and evaluated in experiments
with real users. PESEDIA is a social network developed in the framework of the Master’s
thesis of the Ph.D. student [15], this thesis, and the national projects “Privacy in Social Educational Environments during Childhood and Adolescence” (TIN2014-55206-
R) and “Intelligent Agents for Privacy Advice in Social Networks” (TIN2017-89156-R).
The findings confirm the validity of the proposed metrics for computing the users’ scope
and the sensitivity of social network publications. For the scope metric, the results also
showed the possibility of estimating it through local and social centrality metrics for
scenarios with limited information access. For the sensitivity metric, the results also
highlighted the users' misalignment for some information types and the consensus for a
majority of them. Using these metrics in messages to users about the potential consequences of privacy policy choices and information-sharing actions showed
positive effects on users' behavior regarding privacy. Furthermore, exploring the users' trade-off between costs and benefits when disclosing personal information revealed significant relationships with the usual social circles (family
members, friends, coworkers, and unknown users) and their properties. This made it possible to
design better privacy mechanisms that appropriately restrict access to information and reduce regrets. Finally, gamification elements applied to social networks and
users’ privacy showed a positive effect on the users’ behavior towards privacy and safe
practices in social networks. / Alemany Bordera, J. (2020). Measures of Privacy Protection on Social Environments [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/151456 / Compendium
|
12 |
The Privacy Paradox: Factors influencing information disclosure in the use of the Internet of Things (IoT) in South Africa
Davids, Natheer, 21 January 2021
The Internet of Things (IoT) has been acknowledged as one of the most innovative forms of technology since the computer, because of the influence it can have on multiple sectors of physical and virtual environments. The growth of IoT is expected to continue; by 2020, the number of connected devices was estimated to reach 50 billion. Recent developments in IoT provide an unprecedented opportunity for personalised services and other benefits. To exploit these potential benefits as best as possible, individuals are willing to provide their personal information despite potential privacy breaches. Therefore, this study examines factors that influence the willingness to disclose personal information in the use of IoT in South Africa (SA), with the privacy calculus as the theoretical underpinning of this research. The privacy calculus accentuates that a risk-benefit trade-off occurs when an individual decides to disclose their personal information; however, it is assumed that there are more factors than perceived risks and perceived benefits that influence information disclosure. After analysing previous literature, this study identified the following factors: information sensitivity, privacy concerns, social influence, perceived benefits, (perceived) privacy risks, and privacy knowledge as possible key tenets in relation to willingness to disclose personal information. This research took an objective ontological view, with the underlying epistemological stance being positivistic. The research incorporated a deductive approach, employing a conceptual model constructed from a combination of studies orientated around privacy, the privacy calculus, and the privacy paradox. Data for this research was collected using the quantitative research approach, through an anonymous online questionnaire, where the targeted population was narrowed down to the general public residing within SA who make use of IoT devices and/or services. Data was collected using Qualtrics and analysed using SmartPLS 3, which was used to test for correlations between the factors which influence information disclosure in the use of IoT by utilising the complete bootstrapping method. A key finding was that the privacy paradox is apparent within SA, where individuals pursue enjoyment and predominantly use IoT for leisure purposes, while being more likely to adopt self-withdrawal tendencies when faced with heightened privacy concerns or potential risks.
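As a rough illustration of the risk-benefit trade-off that the privacy calculus describes, the sketch below scores a disclosure decision as weighted benefits minus weighted risks using the factors named in the abstract. The linear form and the weights are assumptions for illustration only; they are not the study's estimated SmartPLS model.

```python
# Illustrative privacy-calculus scoring; weights and linear form are assumed, not estimated.
from dataclasses import dataclass

@dataclass
class PrivacyFactors:
    perceived_benefits: float      # e.g. personalisation, convenience (1-7 Likert)
    social_influence: float
    perceived_risks: float         # e.g. expected harm from disclosure
    information_sensitivity: float
    privacy_concerns: float
    privacy_knowledge: float

def disclosure_intention(f: PrivacyFactors, w=None) -> float:
    """Net 'calculus': positive drivers minus negative drivers; > 0 suggests disclosure."""
    w = w or {"benefit": 1.0, "social": 0.5, "risk": 1.0,
              "sensitivity": 0.6, "concern": 0.8, "knowledge": 0.3}
    gains = w["benefit"] * f.perceived_benefits + w["social"] * f.social_influence
    losses = (w["risk"] * f.perceived_risks
              + w["sensitivity"] * f.information_sensitivity
              + w["concern"] * f.privacy_concerns
              + w["knowledge"] * f.privacy_knowledge)
    return gains - losses

print(disclosure_intention(PrivacyFactors(6, 5, 3, 4, 3, 2)))
```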
|
13 |
The Privacy Club: An exploratory study of the privacy paradox in digital loyalty programs
Johansson, Lilly; Rystadius, Gustaf, January 2022
Background: Digital loyalty programs collect extensive personal data, but the literature has so far neglected privacy concerns within these programs. The privacy paradox denotes the contradiction between consumers' stated privacy risk beliefs and their actual behavior. Existing literature calls for a dual perspective on the privacy paradox and digital loyalty programs to find the underlying reasons for this contradictory behavior. Purpose: The purpose of this study was to explore (1) if and when privacy concerns existed in digital loyalty programs and (2) why consumers overruled their privacy concerns in digital loyalty programs. Method: A qualitative method with 18 semi-structured interviews was employed, using non-probability purposive sampling of consumers within digital loyalty programs. The findings were then analyzed through a thematic analysis to construct a model based upon the given research purpose. Conclusion: The findings suggest that consumers experience privacy concerns in digital loyalty programs through external exposure to privacy breaches and when they feel their mental construct of the terms and conditions has been violated. Four themes were found to influence why consumers overrule their privacy concerns and share personal data with digital loyalty programs, relating to cognitive biases, the value of rewards received, and digital trust in the program provider. The findings were synthesized into a model illustrating the consumer assessment of personal data sharing in digital loyalty programs and the interconnections between these influences.
|
14 |
Beyond Privacy Concerns: Examining Individual Interest in Privacy in the Machine Learning Era
Brown, Nicholas James, 12 June 2023
The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We use the enhanced APCO model as the theoretical lens to investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making. In a scenario-based experiment with 499 participants, we present various company privacy policies to participants to examine their trust and privacy considerations, then ask them to share reasons why they would or would not opt in to share their voice data to train a company's voice recognition software. We find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns, which thereby mediate their decisions to share their voice data. Furthermore, we manipulate four factors of a privacy policy to operationalize various cognitive biases actively present in the minds of consumers and find that default trust and salience biases significantly affect participants' privacy decision making. Our results provide a deeper contextualized understanding of privacy-related concerns that may arise in human-augmented ML system configurations and highlight the managerial importance of considering the role of human involvement in supervised machine learning settings. Importantly, we introduce perceived human involvement as a new construct to the information privacy discourse.
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. Researchers refer to this as the privacy paradox, and decades of information privacy research have identified a myriad of explanations why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. Often, privacy concerns are situational and can be elicited through the setup of boundary conditions and the framing of different privacy scenarios. Drawing on the cognitive model of empowerment and interest, we propose a multidimensional privacy interest construct that captures consumers' situational and dispositional attitudes toward privacy, which can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. This construct comprises four dimensions—impact, awareness, meaningfulness, and competence—and is conceptualized as a consumer's assessment of contextual factors affecting their privacy perceptions and their global predisposition to respond to those factors. Importantly, interest was originally included in the privacy calculus but is largely absent in privacy studies and theoretical conceptualizations. Following MacKenzie et al. (2011), we developed and empirically validated a privacy interest scale. This study contributes to privacy research and practice by reconceptualizing a construct in the original privacy calculus theory and offering a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors. / Doctor of Philosophy / The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making and find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns. This thereby influences their decisions to share their voice data. Our results highlight the importance of understanding consumers' willingness to contribute their data to generate complete and diverse data sets to help companies reduce algorithmic biases and systematic unfairness in the decisions and outputs rendered by ML systems.
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. This is referred to as the privacy paradox, and decades of information privacy research have identified a myriad of explanations why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. We propose privacy interest as an alternative to privacy concern and assert that it can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. We found that privacy interest was more effective than privacy concern in predicting consumers' mobilization behaviors, such as publicly complaining about privacy issues to companies and third-party organizations, requesting to remove their information from company databases, and reducing their self-disclosure behaviors. By contrast, privacy concern was more effective than privacy interest in predicting consumers' behaviors to misrepresent their identity. By developing and empirically validating the privacy interest scale, we offer interest in privacy as a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors.
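A hedged sketch of how a multidimensional privacy interest score of the kind described above might be computed from survey items grouped into the four named dimensions (impact, awareness, meaningfulness, competence). The item counts, the equal-weight composite, and the Cronbach's alpha check are illustrative assumptions; the validated scale's actual items and scoring procedure are not reproduced here.

```python
# Illustrative scoring of a four-dimension construct; items and weighting are assumed.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for one dimension."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def privacy_interest_score(responses: dict) -> float:
    """responses: dimension name -> respondents x items matrix (e.g. 7-point Likert)."""
    dim_means = []
    for dim, items in responses.items():
        print(f"{dim}: alpha={cronbach_alpha(items):.2f}")   # basic internal-consistency check
        dim_means.append(items.mean())
    return float(np.mean(dim_means))                          # equal-weight composite

rng = np.random.default_rng(0)
demo = {d: rng.integers(1, 8, size=(30, 3)).astype(float)     # 30 respondents, 3 items each
        for d in ["impact", "awareness", "meaningfulness", "competence"]}
print(privacy_interest_score(demo))
```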
|
15 |
Using privacy calculus theory to explore entrepreneurial directions in mobile location-based advertising: Identifying intrusiveness as the critical risk factor
Gutierrez, A., O'Leary, S., Rana, Nripendra P., Dwivedi, Y.K., Calle, T., 25 October 2019
Location-based advertising is an entrepreneurial and innovative means for advertisers to reach out through personalised messages sent directly to mobile phones using their geographic location. Mobile phone users' willingness to disclose their location and other personal information is essential for the successful implementation of mobile location-based advertising (MLBA). Despite the potential enhancement of the user experience through such personalisation and the improved interaction with the marketer, there is an increasing tension between that personalisation and mobile users' concerns about privacy. While the privacy calculus theory (PCT) suggests that consumers make privacy-based decisions by evaluating the benefits any information may bring against the risk of its disclosure, this study examines the specific risks and benefits that influence consumers' acceptance of MLBA. A conceptual model is proposed based on the existing literature, and a standardised survey was developed and targeted at individuals with known interests in the subject matter. From these requests, 252 valid responses were received and used to evaluate the key benefits and risks of MLBA from the users' perspectives. While the results confirmed internet privacy concerns (IPC) as an important determinant, they also indicated that monetary rewards and intrusiveness have a notably stronger impact on acceptance intentions towards MLBA. Intrusiveness is the most important risk factor in determining mobile users' intentions to accept MLBA, and therefore establishing effective means of minimising the perceived intrusiveness of MLBA can be expected to have the greatest impact on achieving effective communications with mobile phone users.
|
16 |
Machine Learning and Text Mining for Advanced Information Sharing Systems in Blockchain and Cybersecurity Applications
Hajian, Ava, 07 1900
This research explores the role of blockchain technology in advanced information sharing systems, with applications in energy systems and healthcare. Essay 1 proposes a blockchain application to improve resilience in smart grids by addressing cybersecurity and peer-to-peer trading potentials. The results show that blockchain-based smart contracts are positively related to smart grid resilience. The findings also show that blockchain-based smart contracts significantly contribute to zero trust cybersecurity, which results in a better capability to mitigate cyber-attacks. Essay 2 proposes a blockchain application to improve electronic health record (EHR) systems by increasing patient empowerment. Confirmatory factor analysis is used for the validity and reliability tests of the model. The results show that blockchain-based information systems can empower patients by providing the perception of control over their health records. The usage of blockchain technology motivates patients to share information with healthcare provider systems and has the advantage of reducing healthcare costs and improving diagnosis management. Essay 3 contributes to the existing literature by using a multimethod approach to propose three new mediators for blockchain-based healthcare information systems: digital health care, healthcare improvement, and peer-to-peer trade capability. Based on the findings from the text analysis, we propose a research model drawing upon stimulus-organism-response theory. Through three experimental studies, we test the research model to explain patients' willingness to share their health records with others, including researchers. A post hoc analysis is conducted to segment patients and predict their behavior using four machine learning algorithms. The finding was that merely having peer-to-peer trade capability while ignoring healthcare improvement does not necessarily incentivize patients to share their medical reports.
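A minimal sketch of the kind of post hoc analysis mentioned above: comparing four off-the-shelf classifiers on predicting whether a patient is willing to share health records. The synthetic features and the particular algorithms chosen here are assumptions for illustration; the essay's actual variables, data, and models are not reproduced.

```python
# Illustrative comparison of four classifiers on a synthetic willingness-to-share task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for survey features (e.g. perceived control, digital trust, trade capability).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=42),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```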
|