1 |
A vision for global privacy bridges: Technical and legal measures for international data markets. Spiekermann-Hoff, Sarah; Novotny, Alexander. January 2015.
From the early days of the information economy, personal data has been its most valuable asset. Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil". Most of this business is done without the knowledge and active informed consent of the people concerned. But as data breaches and abuses are made public through the media, consumers react. They become irritated about companies' data handling practices, lose trust, exercise political pressure and start to protect their privacy with the help of technical tools. As a result, companies' Internet business models that are based on personal data are unsettled. An open conflict is arising between business demands for data and a desire for privacy. As of 2015, no clear answer to this conflict is in sight. Technologists, economists and regulators are struggling to develop technical solutions and policies that meet businesses' demand for more data while still maintaining privacy. Yet, most of the proposed solutions fail to account for market complexity and provide no pathway to technological and legal implementation. They lack a bigger vision for data use and privacy. To break this vicious cycle, we propose and test such a vision of a personal information market with privacy. We accumulate technical and legal measures that have been proposed by technical and legal scholars over the past two decades, and out of this existing knowledge we compose something new: a four-space market model for personal data.
|
2 |
A Privacy Calculus Model for Personal Mobile Devices. Bott, Gregory J. 11 August 2017.
Personal mobile devices (PMDs) initiated a multi-dimensional paradigmatic shift in personal computing and personal information collection, fueled by the indispensability of the Internet and the increasing functionality of the devices. From 2005 to 2016, conducting transactions on the Internet moved from being perceived as optional to being perceived as indispensable, and the context of these transactions changed from traditional desktop and laptop computers to include smartphones and tablets (PMDs). However, the traditional privacy calculus published by Dinev and Hart (2006) was conceived before this technological and contextual change, and several core assumptions of that model must be re-examined and possibly adapted or changed to account for this shift. This paradigm shift impacts the decision process individuals use to disclose personal information using PMDs. By nature of their size, portability, and constant proximity to the user, PMDs collect, contain, and distribute unprecedented amounts of personal information. Even though the context within which people are sharing information has changed significantly, privacy calculus research applied to PMDs has not moved far from the seminal work by Dinev and Hart (2006). The traditional privacy calculus risk-benefit model is limited in the PMD context because users are unaware of how much personal information is being shared, how often it is shared, or with whom it is shared. Furthermore, the traditional model explains and predicts intent to disclose rather than actual disclosure, and disclosure intentions are a poor predictor of actual information disclosure. Because of the perceived indispensability of the information and the inability to assess potential risk, the deliberate comparison of risks to benefits prior to disclosure, a core assumption of the traditional privacy calculus, may not be the most effective basis of a model to predict and explain disclosure. The present research develops a Personal Mobile Device Privacy Calculus model designed to predict and explain disclosure behavior within the specific context of actual disclosure of personal information using PMDs.
|
3 |
Privacy Preservation for Cloud-Based Data Sharing and Data Analytics. Zheng, Yao. 21 December 2016.
Data privacy is a globally recognized human right for individuals to control access to their personal information and to bar the negative consequences of its use. As communication technologies progress, the means to protect data privacy must also evolve to address new challenges as they come into view. Our research goal in this dissertation is to develop privacy protection frameworks and techniques suitable for emerging cloud-based data services, in particular privacy-preserving algorithms and protocols for cloud-based data sharing and data analytics services.
Cloud computing has enabled users to store, process, and communicate their personal information through third-party services. It has also raised privacy issues regarding loss of control over data, mass harvesting of information, and unconsented disclosure of personal content. Above all, the main concern is the lack of understanding about data privacy in cloud environments. Currently, cloud service providers either advocate the third-party doctrine and deny users' rights to protect their data stored in the cloud, or rely on the notice-and-choice framework and present users with ambiguous, incomprehensible privacy statements without any meaningful privacy guarantee.
In this regard, our research has three main contributions. First, to capture users' privacy expectations in cloud environments, we conceptually divide personal data into two categories, i.e., visible data and invisible data. The visible data refer to information users intentionally create, upload to, and share through the cloud; the invisible data refer to users' information retained in the cloud that is aggregated, analyzed, and repurposed without their knowledge or understanding.
Second, to address users' privacy concerns raised by cloud computing, we propose two privacy protection frameworks, namely individual control and use limitation. The individual control framework emphasizes users' capability to govern the access to the visible data stored in the cloud. The use limitation framework emphasizes users' expectation to remain anonymous when the invisible data are aggregated and analyzed by cloud-based data services.
Finally, we investigate various techniques to accommodate the new privacy protection frameworks in the context of four cloud-based data services: personal health record sharing, location-based proximity test, link recommendation for social networks, and face tagging in photo management applications. For the first case, we develop a key-based protection technique to enforce fine-grained access control over users' digital health records. For the second case, we develop a key-less protection technique to achieve location-specific user selection. For the latter two cases, we develop distributed learning algorithms to prevent large-scale data harvesting. We further combine these algorithms with query regulation techniques to achieve user anonymity.
The picture that is emerging from the above works is a bleak one. Regarding personal data, the reality is that we can no longer control all of them. As communication technologies evolve, the scope of personal data has expanded beyond local, discrete silos and become integrated into the Internet. The traditional understanding of privacy must be updated to reflect these changes. In addition, because privacy is a particularly nuanced problem that is governed by context, there is no one-size-fits-all solution. While some cases can be salvaged either by cryptography or by other means, in others a rethinking of the trade-offs between utility and privacy appears to be necessary. / Ph. D.
Our research has three main contributions. First, to capture users’ privacy expectations in the cloud computing paradigm, we conceptually divide personal data into two categories, <i>i.e., visible</i> data and <i>invisible</i> data. The visible data refer to information users intentionally create, upload to, and share through the cloud; the invisible data refer to users’ information retained in the cloud that is aggregated, analyzed, and repurposed without their knowledge or understanding.
Second, to address users’ privacy concerns raised by cloud computing, we propose two privacy protection frameworks, namely <i>individual control</i> and <i>use limitation</i>. The individual control framework emphasizes users’ capability to govern the access to the visible data stored in the cloud. The use limitation framework emphasizes users’ expectation to remain anonymous when the invisible data are aggregated and analyzed by cloud-based data services.
Finally, we investigate various techniques to accommodate the new privacy protection frameworks, in the context of four cloud-based data services: personal health record sharing, location-based proximity test, link recommendation for social networks, and face tagging for photo management applications. For the first case, we develop a key-based protection technique to enforce fine-grained access control to users’ digital health records. For the second case, we develop a key-less protection technique to achieve location-specific user selection. For latter two cases, we develop distributed learning algorithms to prevent large scale data harvesting. We further combine these algorithms with query regulation techniques to achieve user anonymity.
|
4 |
Exploring Internet Users' Information Privacy Concerns - Use the Intention-Based Models. Chen, Yan-Bang. 26 July 2000.
No description available.
|
5 |
Internationalizing the right to know: conceptualizations of access to information in human rights law. Bishop, Cheryl Ann. January 2009.
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009. / Includes bibliographical references (leaves 247-268). Also available online
|
6 |
USER CONTROLLED PRIVACY BOUNDARIES FOR SMART HOMES. Ryan David Fraser. 17 April 2023.
The rise of Internet of Things (IoT) technologies into the substantial commercial market that it is today comes with several challenges. Not only do these systems face the security and reliability challenges of traditional information technology (IT) products, but they also face the challenge of loss of privacy. The concern for user data privacy is most prevalent when these technologies come into the home environment. In this dissertation, quasi-experimental research is conducted on the ability of users to protect private data in a heterogeneous smart home network. For this work, the experiments are conducted and verified on eight different smart home devices using network traffic analysis and discourse analysis to identify privacy concerns. The results of the research show that data privacy within the confines of the user's home often cannot be ensured while maintaining smart home device functionality. This dissertation discusses how those results can inform users and manufacturers alike in the use and development of future smart home technologies to better address privacy concerns.
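As a rough illustration of the network-traffic side of such an analysis, the sketch below (using the Scapy library) reads a packet capture from a single smart home device and tallies the external hosts it contacts. The capture file name and the simple private-address filter are hypothetical details for illustration, not the dissertation's actual procedure.

```python
from collections import Counter
from scapy.all import rdpcap, IP

# Hypothetical packet capture recorded from one smart home device on the LAN.
packets = rdpcap("smart_plug_capture.pcap")

# Tally the external IP addresses the device talks to (crude private-range filter).
destinations = Counter(
    pkt[IP].dst
    for pkt in packets
    if IP in pkt and not pkt[IP].dst.startswith(("192.168.", "10."))
)

# A long tail of unexpected cloud endpoints is one signal that data leaves the
# home even when purely local operation was expected.
for host, n_packets in destinations.most_common(10):
    print(f"{host}: {n_packets} packets")
```

Comparing such endpoint summaries across devices, and against what each device's documentation discloses, is one way to surface traffic that crosses the user's intended privacy boundary.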
|
7 |
MEMBERSHIP INFERENCE ATTACKS AND DEFENSES IN CLASSIFICATION MODELS. Jiacheng Li. 12 January 2024.
Neural network-based machine learning models are now prevalent in our daily lives, from voice assistants (López et al., 2018) to image generation (Ramesh et al., 2021) and chatbots such as ChatGPT-4 (OpenAI, 2023). These large neural networks are powerful but also raise serious security and privacy concerns, such as whether the personal data used to train them are leaked by the models. One way to understand and address this privacy concern is to study membership inference (MI) attacks and defenses (Shokri et al., 2017; Nasr et al., 2019). In an MI attack, an adversary seeks to infer whether a given instance was part of the training data. We study the MI attack against classifiers, where the attacker's goal is to determine whether a data instance was used for training the classifier. Through systematic cataloging of existing MI attacks and extensive experimental evaluations, we find that a model's vulnerability to MI attacks is tightly related to the generalization gap, i.e., the difference between training accuracy and test accuracy. We then propose a defense against MI attacks that aims to close this gap by intentionally reducing the training accuracy. More specifically, the training process attempts to match the training and validation accuracies by means of a new set regularizer that uses the Maximum Mean Discrepancy between the empirical distributions of the softmax outputs on the training and validation sets. Our experimental results show that combining this approach with another simple defense (mix-up training) significantly improves the state of the art in defending against MI attacks, with minimal impact on testing accuracy.
Furthermore, we consider the challenge of performing membership inference attacks in a federated learning setting for image classification, where an adversary can only observe the communication between the central node and a single client (a passive white-box attack). Passive attacks are among the hardest to detect, since they can be performed without modifying the behavior of the central server or its clients, and they assume no access to private data instances. The key insight of our method is the empirical observation that, near parameters that generalize well in test, the gradients of large overparameterized neural network models statistically behave like high-dimensional independent isotropic random vectors. Using this insight, we devise two attacks that are often little affected by existing and proposed defenses. We also validate the hypothesis that our attack depends on overparametrization by showing that increasing the level of overparametrization (without changing the neural network architecture) positively correlates with attack effectiveness.
Finally, we observe that training instances have different degrees of vulnerability to MI attacks. Most instances will have low loss even when not included in training; for these instances, the model can fit them well without concern for MI attacks. An effective defense only needs to (possibly implicitly) identify instances that are vulnerable to MI attacks and avoid overfitting them. A major challenge is how to achieve such an effect in an efficient training process. Leveraging two distinct recent advancements in representation learning, counterfactually-invariant representations and subspace learning methods, we introduce a novel Membership-Invariant Subspace Training (MIST) method to defend against MI attacks. MIST avoids overfitting the vulnerable instances without significant impact on other instances. We have conducted extensive experimental studies comparing MIST with various other state-of-the-art (SOTA) MI defenses against several SOTA MI attacks, and we find that MIST outperforms other defenses while resulting in minimal reduction in testing accuracy.
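To make the first contribution's defense concrete, below is a minimal PyTorch sketch of a training loss that adds a Maximum Mean Discrepancy penalty between the softmax outputs of a training batch and a validation batch. The Gaussian kernel, its bandwidth, the penalty weight, and the choice to hold the validation outputs fixed are assumptions of this sketch rather than the dissertation's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel between the rows of a and b.
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(p, q, sigma=1.0):
    # Empirical estimate of the squared Maximum Mean Discrepancy
    # between two sample sets p and q (one sample per row).
    return (gaussian_kernel(p, p, sigma).mean()
            + gaussian_kernel(q, q, sigma).mean()
            - 2 * gaussian_kernel(p, q, sigma).mean())

def regularized_loss(model, x_train, y_train, x_val, lam=1.0):
    # Cross-entropy on the training batch plus an MMD penalty that pulls the
    # softmax output distribution of the training batch toward that of a
    # held-out validation batch, shrinking the train/validation gap.
    logits_train = model(x_train)
    ce = F.cross_entropy(logits_train, y_train)
    probs_train = F.softmax(logits_train, dim=1)
    with torch.no_grad():  # validation outputs treated as a fixed target (assumption of this sketch)
        probs_val = F.softmax(model(x_val), dim=1)
    return ce + lam * mmd2(probs_train, probs_val)
```

Minimizing this combined loss discourages the model from producing noticeably more confident outputs on training data than on unseen data, which is the gap MI attacks exploit.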
|
8 |
An Empirical Investigation of the Relationship between Computer Self-Efficacy and Information Privacy Concerns. Awwal, Mohammad Abdul. 01 January 2011.
The Internet and the growth of Information Technology (IT) and their enhanced capabilities to collect personal information have given rise to many privacy issues. Unauthorized access of personal information may result in identity theft, stalking, harassment, and other invasions of privacy. Information privacy concerns are impediments to broad-scale adoption of the Internet for purchasing decisions. Computer self-efficacy has been shown to be an effective predictor of behavioral intention and a critical determinant of intention to use Information Technology. This study investigated the relationship between an individual's computer self-efficacy and information privacy concerns; and also examined the differences among different age groups and between genders regarding information privacy concerns and their relationships with computer self-efficacy.
A paper-based survey was designed to empirically assess computer self-efficacy and information privacy concerns. The survey was developed by combining existing validated scales for computer self-efficacy and information privacy concerns. The target population of this study was the residents of New Jersey, U.S.A. The assessment was done by using the mall-intercept approach in which individuals were asked to fill out the survey. The sample size for this study was 400 students, professionals, and mature adults.
The Shapiro-Wilk test was used to test data normality, and the Spearman rank-order test was used for correlation analyses. A MANOVA test was used to compare mean values of computer self-efficacy and information privacy concerns between genders and among age groups. The results showed that the correlation between computer self-efficacy and information privacy concerns was significant and positive, and that there were differences between genders and among age groups regarding information privacy concerns and their relationships with computer self-efficacy.
This study contributed to the body of knowledge about the relationships among antecedents and consequences of information privacy concerns and computer self-efficacy. The findings of this study can help corporations to improve e-commerce by targeting privacy policy-making efforts to address the explicit areas of consumer privacy concerns. The results of this study can also help IT practitioners to develop privacy protection tools and processes to address specific consumer privacy concerns.
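As an illustration of the analysis pipeline described above, here is a brief Python sketch using SciPy and statsmodels. The file name and column names (cse for computer self-efficacy, ipc for information privacy concerns) are hypothetical placeholders rather than the study's actual variables.

```python
import pandas as pd
from scipy.stats import shapiro, spearmanr
from statsmodels.multivariate.manova import MANOVA

# Hypothetical survey data: cse = computer self-efficacy score,
# ipc = information privacy concerns score, plus gender and age_group factors.
df = pd.read_csv("survey_responses.csv")

# Shapiro-Wilk normality checks on each scale.
for col in ["cse", "ipc"]:
    stat, p = shapiro(df[col])
    print(f"Shapiro-Wilk {col}: W={stat:.3f}, p={p:.4f}")

# Spearman rank-order correlation between self-efficacy and privacy concerns.
rho, p = spearmanr(df["cse"], df["ipc"])
print(f"Spearman rho={rho:.3f}, p={p:.4f}")

# MANOVA comparing both dependent variables across gender and age group.
print(MANOVA.from_formula("cse + ipc ~ gender + age_group", data=df).mv_test())
```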
|
9 |
Understanding privacy leakage concerns in Facebook: a longitudinal case study. Jamal, Arshad. January 2013.
This thesis examines users' perceptions of privacy leakage in Facebook, the world's largest and most popular social network site (SNS). The global popularity of this SNS offers a hugely tempting resource for organisations engaged in online business. The personal data willingly shared between online friends' networks intuitively appear to be a natural extension of current advertising strategies such as word-of-mouth and viral marketing. Therefore organisations are increasingly adopting innovative ways to exploit the detail-rich personal data of SNS users for business marketing. However, commercial use of such personal information has provoked outrage amongst Facebook users and has radically highlighted the issue of privacy leakage. To date, little is known about how SNS users perceive such leakage of privacy, so a greater understanding of the form and nature of SNS users' concerns about privacy leakage would contribute to the current literature as well as help to formulate best practice guidelines for organisations. Given the fluid, context-dependent and temporal nature of privacy, a longitudinal case study representing the launch of Facebook's Social Ads programme was conducted to investigate the phenomenon of privacy leakage within its real-life setting. Qualitative commentary from user blogs was collected between November 2007 and December 2010, during the two-stage launch of the Social Ads programme, and grounded theory data analysis procedures were used to analyse users' blog postings. The resulting taxonomy shows that business integrity, user control, transparency, data protection breaches, automatic information broadcast and information leak are the core privacy leakage concerns of Facebook users. Privacy leakage concerns suggest three limits, or levels: organisational, user and legal, which provide the basis for understanding the nature and scope of the exploitation of SNS users' data for commercial purposes. The case study reported herein is novel, as existing empirical research has not identified and analysed the privacy leakage concerns of Facebook users.
|
10 |
Engineering Privacy by Design: Are engineers ready to live up to the challenge? Bednar, Kathrin; Spiekermann, Sarah; Langheinrich, Marc. January 2019.
Organizations struggle to comply with legal requirements as well as customers' calls for better data protection. On the implementation level, incorporation of privacy protections in products and services depends on the commitment of the engineers who design them. We interviewed six senior engineers, who work for globally leading IT corporations and research institutions, to investigate their motivation and ability to comply with privacy regulations. Our findings point to a lack of perceived responsibility, control, and autonomy, as well as frustration with interactions with the legal world. While we increasingly call on engineers to go beyond functional requirements and be responsive to human values in our increasingly technological society, we may be facing the dilemma of asking engineers to live up to a challenge they are currently not ready to embrace.
|