171

Understanding the Impact of Data Privacy Regulations on Software and Its Stakeholders

Franke, Lucas James 06 July 2023 (has links)
The General Data Protection Regulation (GDPR) is a comprehensive data privacy law that limits how businesses can collect personal information about their consumers living in the European Union. For our research, we aimed to evaluate the impact that the GDPR has on the open-source community, an online community that encourages open collaboration between software developers. We conducted a quantitative analysis of GitHub pull requests in which we compared pull requests explicitly related to the GDPR to other non-GDPR pull requests from the same projects. We also conducted a qualitative pilot study in which we interviewed software developers with experience implementing GDPR requirements in industry or in open-source. From our research, we found that GDPR-related pull requests had significantly more activity than other pull requests, but that open-source developers did not perceive a significant impact on their software development processes when implementing GDPR compliance. Industry developers, on the other hand, had a more negative outlook on the GDPR, and found implementation to be difficult. Our results indicate a need to involve software developers in the lawmaking process in order to create direct and realistic expectations for developers when implementing privacy policies. / Master of Science / The General Data Protection Regulation (GDPR) is a comprehensive data privacy law that limits how businesses can collect personal information about their consumers living in the European Union. For our research, we aimed to evaluate the impact that the GDPR has on the open-source community, an online community that encourages open collaboration between software developers. We conducted a quantitative analysis of GitHub, a major online open-source platform. We compared pull requests (major contributions to a project) explicitly related to the GDPR to other non-GDPR pull requests from the same projects. We also conducted a qualitative pilot study in which we interviewed software developers with experience implementing GDPR requirements in industry or in open-source. From our research, we found that GDPR-related pull requests had significantly more activity than other pull requests, but that open-source developers did not perceive a significant impact on their software development processes when implementing GDPR compliance. Industry developers, on the other hand, had a more negative outlook on the GDPR, and found implementation to be difficult. Our results indicate a need to involve software developers in the lawmaking process in order to create direct and realistic expectations for developers when implementing privacy policies.
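A minimal sketch of the kind of pull-request comparison described above, assuming a hypothetical repository and using issue comment counts as a rough activity proxy (the thesis does not specify its exact metrics or tooling); the repository name and `GITHUB_TOKEN` are placeholders.

```python
import os
import statistics

import requests

REPO = "example-org/example-project"  # hypothetical repository
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def closed_issues(page: int = 1) -> list[dict]:
    """Fetch one page of closed issues; PRs appear here with a 'pull_request' key."""
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/issues",
        params={"state": "closed", "per_page": 100, "page": page},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

gdpr_activity, other_activity = [], []
for item in closed_issues():
    if "pull_request" not in item:
        continue  # keep pull requests only
    text = f"{item['title']} {item['body'] or ''}".lower()
    bucket = gdpr_activity if "gdpr" in text else other_activity
    bucket.append(item["comments"])  # comment count as a crude activity proxy

for label, counts in [("GDPR-related", gdpr_activity), ("other", other_activity)]:
    if counts:
        print(f"{label}: n={len(counts)}, mean comments={statistics.mean(counts):.2f}")
```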
172

Private in Public - Public in Private: A Library on H Street NE Washington, DC

Rutledge, Kathleen Anne 03 February 2011 (has links)
The thesis investigates private versus public space and the natural tendency of an individual to seek out their own place within a group. More specifically, the project studies whether private and public could not only occupy the same geographic space independently, but also activate one another. A library was chosen as the program for its opportunity to serve as a "third place" in the community. A "third place" is a neutral ground that is neither home nor workplace. The benefit of such a place is to stimulate conversation and interaction, to provide a way to either hide or be seen, and to encourage social cohesion as people meet who might not have crossed paths in normal daily life. The site is on the corner of 12th and H Streets NE in Washington, DC. Its location in a rebounding streetscape demands that the library give the surrounding context a large role in its design. Public space is a priority, and the building is porous to extend the exterior into the interior and vice versa. The library's ever-changing role in a city inspires flexibility in the design and a life beyond normal library hours. / Master of Architecture
173

Understanding Community Privacy through Focus Group Studies

Codio, Sherley 21 May 2012 (has links)
Just as an individual is rightly concerned about the privacy of their personally identifying information, so also is a group of people, a community, concerned about the privacy of sensitive information entrusted to their care. Our research seeks to develop a better understanding of the factors contributing to the sensitivity of community information, of the privacy threats that are recognized by the community, and of the means by which the community attempts to fulfill their privacy responsibilities. We are also interested in seeing how the elements of a community privacy model that we developed are related to the findings from the studies of communities. This thesis presents the results of a series of focus group sessions conducted in corporate settings. Three focus group interviews were conducted using participants from two information technology companies and one research group from the university. Three themes emerged from the analysis of these focus group interviews, described as privacy awareness, situated disclosures, and confinement of sensitive information. These three themes capture the character and complexity of community-oriented privacy and expose breakdowns in current approaches. / Master of Science
174

Privacy Preserving Network Security Data Analytics

DeYoung, Mark E. 24 April 2018 (has links)
The problem of revealing accurate statistics about a population while maintaining the privacy of individuals is extensively studied in several related disciplines. Statisticians, information security experts, and computational theory researchers, to name a few, have produced extensive bodies of work regarding privacy preservation. Still, the need to improve our ability to control the dissemination of potentially private information is driven home by an incessant rhythm of data breaches, data leaks, and privacy exposure. History has shown that both public and private sector organizations are not immune to loss of control over data due to lax handling, incidental leakage, or adversarial breaches. Prudent organizations should consider the sensitive nature of network security data and network operations performance data recorded as logged events. These logged events often contain data elements that are directly correlated with sensitive information about people and their activities -- often at the same level of detail as sensor data. Privacy-preserving data publication has the potential to support reproducibility and exploration of new analytic techniques for network security. Providing sanitized data sets de-couples privacy protection efforts from analytic research. De-coupling privacy protections from analytical capabilities enables specialists to tease out the information and knowledge hidden in high-dimensional data, while, at the same time, providing some degree of assurance that people's private information is not exposed unnecessarily. In this research we propose methods that support a risk-based approach to privacy-preserving data publication for network security data. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so it can be utilized to develop new analytic methods in an ethical manner. Our intent is to produce a database which holds network security data representative of a contextualized network and people's interaction with the network mid-points and end-points without the problems of identifiability. / Ph. D. / Network security data is produced when people interact with devices (e.g., computers, printers, mobile telephones) and networks (e.g., a campus wireless network). The network security data can contain identifiers, like user names, that strongly correlate with real world people. In this work we develop methods to protect network security data from privacy-invasive misuse by ’honest-but-curious’ authorized data users and unauthorized malicious attackers. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so it can be utilized to develop new analytic methods in an ethical manner. Our intent is to produce a data set which holds network security data representative of people’s interaction with a contextualized network without the problems of identifiability.
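As a concrete illustration of the kind of sanitization step such publication can involve (a simplified sketch under assumptions; the dissertation's actual methods are risk-based and more sophisticated), the snippet below pseudonymizes user names with a keyed hash and coarsens source IP addresses before release. The field names, key handling, and truncation choices are hypothetical.

```python
import hashlib
import hmac
import ipaddress

SECRET_KEY = b"per-release-secret"  # assumed: managed out of band, rotated per release

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed hash: preserves linkability within one release
    of the data set without revealing the original identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:12]

def coarsen_ip(address: str) -> str:
    """Generalize an IPv4 address to its /24 network to reduce identifiability."""
    return str(ipaddress.ip_network(f"{address}/24", strict=False))

event = {"user": "jdoe", "src_ip": "192.0.2.47", "action": "login", "success": True}
sanitized = {
    **event,
    "user": pseudonymize(event["user"]),
    "src_ip": coarsen_ip(event["src_ip"]),
}
print(sanitized)
```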
175

Privacy Preservation for Cloud-Based Data Sharing and Data Analytics

Zheng, Yao 21 December 2016 (has links)
Data privacy is a globally recognized human right for individuals to control access to their personal information, and to bar the negative consequences from the use of this information. As communication technologies progress, the means to protect data privacy must also evolve to address new challenges as they come into view. Our research goal in this dissertation is to develop privacy protection frameworks and techniques suitable for the emerging cloud-based data services, in particular privacy-preserving algorithms and protocols for cloud-based data sharing and data analytics services. Cloud computing has enabled users to store, process, and communicate their personal information through third-party services. It has also raised privacy issues regarding losing control over data, mass harvesting of information, and unconsented disclosure of personal content. Above all, the main concern is the lack of understanding about data privacy in cloud environments. Currently, cloud service providers either advocate the principle of the third-party doctrine and deny users' rights to protect their data stored in the cloud, or rely on the notice-and-choice framework and present users with ambiguous, incomprehensible privacy statements without any meaningful privacy guarantee. In this regard, our research has three main contributions. First, to capture users' privacy expectations in cloud environments, we conceptually divide personal data into two categories, i.e., visible data and invisible data. The visible data refer to information users intentionally create, upload to, and share through the cloud; the invisible data refer to users' information retained in the cloud that is aggregated, analyzed, and repurposed without their knowledge or understanding. Second, to address users' privacy concerns raised by cloud computing, we propose two privacy protection frameworks, namely individual control and use limitation. The individual control framework emphasizes users' capability to govern the access to the visible data stored in the cloud. The use limitation framework emphasizes users' expectation to remain anonymous when the invisible data are aggregated and analyzed by cloud-based data services. Finally, we investigate various techniques to accommodate the new privacy protection frameworks, in the context of four cloud-based data services: personal health record sharing, location-based proximity test, link recommendation for social networks, and face tagging in photo management applications. For the first case, we develop a key-based protection technique to enforce fine-grained access control to users' digital health records. For the second case, we develop a key-less protection technique to achieve location-specific user selection. For the latter two cases, we develop distributed learning algorithms to prevent large-scale data harvesting. We further combine these algorithms with query regulation techniques to achieve user anonymity. The picture that is emerging from the above work is a bleak one. Regarding personal data, the reality is that we can no longer control them all. As communication technologies evolve, the scope of personal data has expanded beyond local, discrete silos and become integrated into the Internet. The traditional understanding of privacy must be updated to reflect these changes. In addition, because privacy is a particularly nuanced problem that is governed by context, there is no one-size-fits-all solution. While some cases can be salvaged either by cryptography or by other means, in others a rethinking of the trade-offs between utility and privacy appears to be necessary. / Ph. D. / Data privacy is a globally recognized human right for individuals to control access to their personal information, and to bar the negative consequences from the use of this information. As communication technologies progress, the means to protect data privacy must also evolve to address new challenges as they come into view. Our research goal in this dissertation is to develop privacy protection frameworks and techniques for the emerging cloud-based data services, in particular privacy-preserving algorithms and protocols for cloud-based data sharing and data analytics services. Our research has three main contributions. First, to capture users’ privacy expectations in the cloud computing paradigm, we conceptually divide personal data into two categories, i.e., visible data and invisible data. The visible data refer to information users intentionally create, upload to, and share through the cloud; the invisible data refer to users’ information retained in the cloud that is aggregated, analyzed, and repurposed without their knowledge or understanding. Second, to address users’ privacy concerns raised by cloud computing, we propose two privacy protection frameworks, namely individual control and use limitation. The individual control framework emphasizes users’ capability to govern the access to the visible data stored in the cloud. The use limitation framework emphasizes users’ expectation to remain anonymous when the invisible data are aggregated and analyzed by cloud-based data services. Finally, we investigate various techniques to accommodate the new privacy protection frameworks, in the context of four cloud-based data services: personal health record sharing, location-based proximity test, link recommendation for social networks, and face tagging for photo management applications. For the first case, we develop a key-based protection technique to enforce fine-grained access control to users’ digital health records. For the second case, we develop a key-less protection technique to achieve location-specific user selection. For the latter two cases, we develop distributed learning algorithms to prevent large-scale data harvesting. We further combine these algorithms with query regulation techniques to achieve user anonymity.
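For the first case study, a heavily simplified sketch of the key-based idea (per-record data keys, with the cloud holding only ciphertext) might look like the following; the `cryptography` library and the class structure are illustrative assumptions, not the dissertation's actual scheme.

```python
from cryptography.fernet import Fernet

class CloudStore:
    """Stands in for cloud storage: it only ever sees ciphertext."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, record_id: str, ciphertext: bytes) -> None:
        self._blobs[record_id] = ciphertext

    def get(self, record_id: str) -> bytes:
        return self._blobs[record_id]

store = CloudStore()

# The record owner generates and keeps the data key; sharing it with a
# clinician grants access to this one record, not the whole store.
key = Fernet.generate_key()
store.put("phr-001", Fernet(key).encrypt(b"blood type: O+"))

print(Fernet(key).decrypt(store.get("phr-001")))  # b'blood type: O+'
```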
176

Privacy Notice and Choice in Practice

Leon-Najera, Pedro Giovanni 01 December 2014 (has links)
In the United States, notice and choice remain the most commonly used mechanisms to protect people’s privacy online. This approach relies on the assumption that users provided with notice will make informed choices that align with their privacy expectations. The goal of this research is to empirically inform industry and regulatory efforts that rely on notice and choice to protect people’s online privacy. To do so, we present a set of case studies covering different aspects of privacy notice and choice in four domains: online behavioral advertising (OBA), online social networks (OSN), financial privacy notices, and websites’ machine-readable privacy notices. We investigate users’ privacy preferences, information needs, and ability to exercise choices in the OBA domain. Based on our results, we provide recommendations to improve the design of notice and choice methods currently in use in this domain. In the context of OSNs, we explore the effect of nudging notices designed to encourage more thoughtful disclosures among Facebook users and recommend changes to the Facebook user interface aimed at mitigating problematic disclosures. We demonstrate how standardized notices enable large-scale evaluations and comparisons of companies’ privacy practices and argue that standardized privacy notices have an enormous potential to improve transparency and benefit users, privacy-respectful companies, and oversight entities. We argue that, in today’s complex Internet ecosystem, an approach that relies on users to make privacy decisions should also empower them with user-friendly interfaces, relevant information, and the tools they need to do so. Finally, we further argue that notice and choice are necessary, but not sufficient, to protect online privacy, and that government regulation is needed to establish additional protections including access, redress, accountability, and enforcement.
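To illustrate why standardized, machine-readable notices enable large-scale comparison of privacy practices, here is a sketch; the notice schema and company data below are entirely hypothetical, whereas the dissertation worked with real formats such as standardized financial privacy notices.

```python
import json

# Hypothetical standardized notices: every company answers the same fields,
# so practices become directly comparable at scale.
notices_json = """
[
  {"company": "AcmeBank", "shares_with_affiliates": true,  "allows_opt_out": true},
  {"company": "BetaPay",  "shares_with_affiliates": true,  "allows_opt_out": false},
  {"company": "CarolTel", "shares_with_affiliates": false, "allows_opt_out": true}
]
"""

notices = json.loads(notices_json)

for notice in notices:
    flags = [
        field
        for field in ("shares_with_affiliates", "allows_opt_out")
        if notice[field]
    ]
    print(f"{notice['company']}: {', '.join(flags) or 'no flagged practices'}")

# Aggregate statistics across companies fall out for free.
share_rate = sum(n["shares_with_affiliates"] for n in notices) / len(notices)
print(f"Share-with-affiliates rate: {share_rate:.0%}")
```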
177

Protecting Online Privacy

Winkler, Stephanie D. 01 January 2016 (has links)
Online privacy has become one of the greatest concerns in the United States today. There are currently multiple stakeholders with interests in online privacy, including the public, industry, and the United States government. This study examines the issues surrounding the protection of online privacy. Privacy laws in the United States are currently outdated and do little to protect online privacy. These laws are unlikely to be changed, as both the government and industry have interests in keeping these privacy laws lax. To bridge the gap between the desired level of online privacy and what is provided legally, users may turn to technological solutions.
178

Cloud Privacy Audit Framework: A Value-Based Design

Coss, David 01 January 2013 (has links)
The rapid expansion of cloud technology provides enormous capacity, which allows for the collection, dissemination and re-identification of personal information. It is the cloud’s resource capabilities such as these that fuel the concern for privacy. The impetus of these concerns is not too far removed from those expressed by Mason in 1986, when he identified privacy as one of the biggest ethical issues facing the information age. There seems to be a continuous ebb-and-flow relationship between privacy concerns and the development of new information and communication technologies such as cloud computing. Privacy issues are a concern to all types of stakeholders in the cloud. Individuals using the cloud are exposed to privacy threats when they are persuaded to provide personal information unwillingly. An organization using a cloud service is at risk of non-compliance with internal privacy policies or legislative privacy regulations. The cloud service provider has a privacy risk of legal liability and credibility concerns if sensitive information is exposed. The data subject is at risk of having personal information exposed. In essence, everyone who is involved in cloud computing has some level of privacy risk that needs to be evaluated before, during and after they or an organization they interact with adopts a cloud technology solution. This underscores a need for organizations to develop privacy practices that are socially responsible towards the protection of their stakeholders’ information privacy. This research is about understanding the relationship between individual values and privacy objectives. There is a lack of clarity in organizations as to what individuals consider privacy to be. Therefore, it is essential to understand an individual’s privacy values. Individuals seem to have divergent perspectives on the nature and scope of how their personal information is to be kept private in different modes of technology. This study is concerned with identifying individual privacy objectives for cloud computing. We argue that privacy is an elusive concept due to the evolving relationship between technology and privacy. Understanding and identifying individuals’ privacy objectives is an influential step in the process of protecting privacy in cloud computing environments. The aim of this study is to identify individual privacy values and develop cloud privacy objectives, which can be used to design a privacy audit for cloud computing environments. We used Keeney’s (1992) value-focused thinking approach to identify individual privacy values with respect to emerging cloud technologies, and to develop an understanding of how cloud privacy objectives are shaped by the individual’s privacy values. We discuss each objective and how they relate to privacy concerns in cloud computing. We also use the cloud privacy objectives in a design science study to design a cloud privacy audit framework. We then discuss how this research helps privacy managers develop a cloud privacy strategy, evaluate cloud privacy practices, and develop a cloud privacy audit to ensure privacy. Lastly, future research directions are proposed.
179

Internet Privacy: A look into the construct of Privacy Knowledge

Nordström, Michael, Sevcenko, Sergej January 2012 (has links)
Background: With the increasing use of personalized marketing and the increasing ability to collect information on consumers, consumers’ concern about privacy is growing. Therefore it is important to understand what affects privacy concern, and how marketers can minimize this concern. Previous research suggests that factors such as computer knowledge, internet knowledge, and regulation awareness all affect privacy concern; however, we believe that these are all related to each other in a construct we call Privacy Knowledge. Purpose: To investigate the construct of Privacy Knowledge and to what degree it influences a consumer’s attitude towards informational privacy. Method: In order to validate the Privacy Knowledge construct and measure its relationship to Privacy Concern, we employed a deductive methodology based on questionnaires. The questionnaires were composed of summative Likert scales, three of which had been validated by previous research. We utilized a quota sampling technique in order to gather enough data from each age group. The results were then analyzed with tools such as factor analysis, ANOVA tests, and multiple regression analysis. Conclusion: Through the factor analysis we found that the factors Internet Knowledge, Computer Knowledge, and Regulation Awareness were better organized as Basic IT Knowledge, Advanced IT Knowledge, and Regulation Awareness. Privacy Knowledge was found to be positively related to Privacy Concern. However, of the three factors that make up Privacy Knowledge, we could only conclude that Basic IT Knowledge had an effect on Privacy Concern. We believe this is due to the exclusion of other factors affecting Privacy Concern, such as situational factors, and suggest conducting further research that includes these variables.
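A hedged sketch of the kind of multiple regression the authors report, on synthetic data; the variable names mirror the abstract, and the simulated effect pattern simply mimics the stated finding that only Basic IT Knowledge predicts Privacy Concern.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Synthetic predictors standing in for the three knowledge factors.
basic_it = rng.normal(size=n)
advanced_it = rng.normal(size=n)
reg_aware = rng.normal(size=n)

# Simulate the reported pattern: only Basic IT Knowledge drives concern.
privacy_concern = 0.5 * basic_it + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([basic_it, advanced_it, reg_aware]))
model = sm.OLS(privacy_concern, X).fit()
print(model.summary(xname=["const", "basic_it", "advanced_it", "reg_aware"]))
```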
180

Privacy paradox or bargained-for-exchange: capturing the relationships among privacy concerns, privacy management, self-disclosure, and social capital

Hsu, Shih-Hsien 16 January 2015 (has links)
The dissertation seeks to bridge the gap between privacy and social capital on SNS use by bringing the essential elements of social networking, privacy concerns, privacy management, self-disclosure, and social capital together to examine their complex relationships and the daily challenges every SNS user faces. The major purposes of this dissertation were to revisit the privacy paradox phenomenon, update the current relationships among privacy concerns, self-disclosure, and social capital on Facebook, integrate these relationships into a quantitative model, and explore the role of privacy management in these relationships. The goal was realized by using Amazon.com’s Mechanical Turk to test a theoretical model that used survey data from 522 respondents. The findings from the dissertation show the impact of the structural factor—Facebook social network intensity and diversity—and the impact of individuals’ self-disclosure on Facebook on their perceived bridging and bonding social capital. This dissertation employed various measurements of key variables to update the current status of the privacy paradox phenomenon—the disconnection between privacy concerns and self-disclosure on social media—and found that the traditional privacy paradox breaks down and that a social privacy paradox exists. Findings also show that private information (personal information, thoughts, and ideas) shared on Facebook becomes an asset in using Facebook and accumulating social capital. Meanwhile, higher privacy concerns reduce the level of self-disclosure on Facebook. Therefore, privacy concerns become a barrier in Facebook use and in accumulating social capital within these networks. This dissertation further examined the mediating role of privacy management to solve the dilemma. Findings confirmed that privacy management is important in redirecting the relationships among privacy concerns, self-disclosure, and social capital. People who have higher privacy concerns tend to disclose fewer personal thoughts and ideas on Facebook and miss the opportunity to accumulate social capital. However, when they employ more privacy management strategies, they are more willing to self-disclose and thus accumulate more social capital on Facebook networks. Lastly, the proposed integrated model examined through SEM analysis confirms the delicate relationships among the social networking characteristics, privacy concerns, privacy management, self-disclosure, and social capital. / text
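As a simplified illustration of the mediation logic described above (two OLS regressions standing in for the dissertation's full SEM; all data are synthetic and the coefficients are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 522  # matches the reported sample size; the values here are simulated

concerns = rng.normal(size=n)
management = 0.6 * concerns + rng.normal(size=n)                      # a-path
disclosure = -0.4 * concerns + 0.5 * management + rng.normal(size=n)  # b-path + direct

# a-path: privacy concerns -> privacy management
a_fit = sm.OLS(management, sm.add_constant(concerns)).fit()
# b-path and direct effect c': concerns + management -> self-disclosure
bc_fit = sm.OLS(disclosure, sm.add_constant(np.column_stack([concerns, management]))).fit()

a = a_fit.params[1]
c_prime, b = bc_fit.params[1], bc_fit.params[2]
print(f"a  (concerns -> management):         {a:.3f}")
print(f"c' (direct concerns -> disclosure):  {c_prime:.3f}")
print(f"b  (management -> disclosure):       {b:.3f}")
print(f"indirect effect a*b:                 {a * b:.3f}")
```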
