  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Perspectives on privacy : a sociological analysis

Day, Katherine Julie January 1985 (has links)
No description available.
162

Aspects of statistical disclosure control

Smith, Duncan Geoffrey January 2012 (has links)
This work concerns the evaluation of statistical disclosure control risk by adopting the position of the data intruder. The underlying assertion is that risk metrics should be based on the actual inferences that an intruder can make. Ideally metrics would also take into account how sensitive the inferences would be, but that is subjective. A parallel theme is that of the knowledgeable data intruder; an intruder who has the technical skills to maximally exploit the information contained in released data. This also raises the issue of computational costs and the benefits of using good algorithms. A metric for attribution risk in tabular data is presented. It addresses the issue that most measures for tabular data are based on the risk of identification. The metric can also take into account assumed levels of intruder knowledge regarding the population, and it can be applied to both exact and perturbed collections of tables. An improved implementation of the Key Variable Mapping System (Elliot et al., 2010) is presented. The problem is more precisely defined in terms of categorical variables rather than responses to survey questions. This allows much more efficient algorithms to be developed, leading to significant performance increases. The advantages and disadvantages of alternative matching strategies are investigated. Some are shown to dominate others. The costs of searching for a match are also considered, providing insight into how a knowledgeable intruder might tailor a strategy to balance the probability of a correct match against the time and effort required to find a match. A novel approach to model determination in decomposable graphical models is described. It offers purely computational advantages over existing schemes, but allows data sets to be more thoroughly checked for disclosure risk.
It is shown that a Bayesian strategy for matching between a sample and a population offers much higher probabilities of a correct match than traditional strategies would suggest. The Special Uniques Detection Algorithm (Elliot et al., 2002; Manning et al., 2008), for identifying risky sample counts of 1, is compared against Bayesian alternatives (using Markov chain Monte Carlo and simulated annealing). It is shown that the alternatives are better at identifying risky sample uniques, and can do so at reduced computational cost.
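The "risky sample counts of 1" discussed above can be illustrated with a toy sketch (hypothetical data and function name; the actual SUDA and Bayesian machinery is far more involved than this simple frequency count):

```python
from collections import Counter

def sample_uniques(records, key_vars):
    """Return records whose key-variable combination occurs exactly once
    in the sample -- the 'sample counts of 1' that a disclosure risk
    assessment would flag for closer (e.g. Bayesian) scrutiny."""
    keys = [tuple(r[v] for v in key_vars) for r in records]
    counts = Counter(keys)
    return [r for r, k in zip(records, keys) if counts[k] == 1]

# Illustrative microdata (made up for this sketch)
sample = [
    {"age": "30-39", "sex": "F", "region": "N"},
    {"age": "30-39", "sex": "F", "region": "N"},
    {"age": "70-79", "sex": "M", "region": "S"},   # unique combination
]
risky = sample_uniques(sample, ["age", "sex", "region"])
```

Here only the third record is a sample unique; whether it is also risky depends on how rare that combination is in the population, which is where the Bayesian alternatives come in.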
163

Privacy, surveillance and HIV/AIDS in the workplace : a South African case study.

Muskat-Gorska, Zuzanna 19 March 2009 (has links)
The study focuses on the socio-legal dimension of medical data surveillance in the workplace, using the example of the South African workplace response to HIV/AIDS. The starting point is the problem of growing data gathering and monitoring as an institutional feature of the information/surveillance society. Studying the problem in the workplace context aims at indicating possibilities for social partners to respond to new developments in workplace surveillance, and in HIV/AIDS management in particular. The empirical data were drawn from document analysis and from interviews with trade union and business representatives from South Africa involved in developing the workplace response to HIV/AIDS. In particular, the study is interested in identifying ways in which trade unions can make the treatment of personal data a trade union issue.
164

Factors influencing the use of privacy settings in location-based social networks

Oladimeji, Henry January 2017 (has links)
The growth of location-based social networks (LBSNs) such as Facebook and Twitter has been rapid in recent years. In LBSNs, users provide location information on public profiles that can potentially be used in harmful ways. LBSNs have privacy settings that allow users to control the privacy level of their profiles, thus limiting access to location information by other users; but for various reasons users seldom make use of them. Using protection motivation theory (PMT) as a theoretical lens, this dissertation examines whether users can be encouraged to use LBSN privacy settings through fear appeals. Fear appeals have been used in various studies to arouse fear in users, in order to motivate them to comply with an adaptive behaviour through the threat of impending danger. However, within the context of social networking, it is not yet clear how fear-inducing arguments will ultimately influence the use of privacy settings by users. The purpose of this study is to investigate the influence of fear appeals on user compliance with recommendations to use privacy settings toward the alleviation of privacy threats. Using a survey methodology, 248 social-network users completed an instrument measuring the variables conceptualized by PMT. Partial Least Squares Structural Equation Modelling (PLS-SEM) was used to test validity and reliability, and to analyze the data. Analysis of the responses shows that PMT provides an explanation for the intention to use privacy settings by social-network users. Risk susceptibility, response efficacy, self-efficacy and response cost were found to have a positive impact on the intention to use privacy settings, while sharing benefits and maladaptive behaviours were found to have a negative impact. However, risk severity and fear were not found to be significant predictors of the intention to use privacy settings.
This study contributes to existing research on PMT by suggesting that fear appeals should focus more on coping appraisal than on threat appraisal, a finding consistent with the results of most studies on protection motivation.
165

Perceptions of privacy and career impression management : the case of Facebook.

Pilcer, Danielle 21 June 2012 (has links)
Facebook (FB) is a ubiquitous category of Web 2.0 technology that has embedded itself in the present-day reality of people worldwide. It represents the constantly evolving online environment and brings to light the associated implications of synthesising people's online private and work lives. FB can act as a platform for employees to create and manage the impressions formed of them in their work context. Against the backdrop of social capital theory, this research explored the relationships between FB experience, perceptions of FB privacy and FB career impression management (FB CIM), and specifically whether perceptions of FB privacy moderated the impact of FB experience on FB CIM. Phase 1 was concerned with creating reliable scales through the implementation of a pilot study. Phase 2 initiated the main study with a convenience sample of 217 respondents, made up of FB users and non-users, recruited online on social networking sites and within a South African based IT organisation. They completed an online survey consisting of biographical information and FB experience, perceptions of FB privacy and FB CIM items (self-developed scales). From the analyses conducted it was found that the constructed scales were reliable, with coefficient alphas above 0.6, and structurally valid as seen in the factor analyses. It was found that younger respondents reported higher FB experience than older respondents (r=-0.39). FB experience was related to perceptions of FB privacy, with an increase in FB experience being related to increased levels of trust (r=0.16) (part of the perceptions of FB privacy subscale). FB experience was associated with increased FB CIM activities (self-monitoring r=0.26; work relations r=0.23), with FB experience being the strongest predictor of FB CIM. As such, FB experience and one's perceived importance of FB privacy may influence the degree to which one actively engages in FB CIM.
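The reliability criterion mentioned above (coefficient alphas above 0.6) refers to Cronbach's alpha. As a minimal illustration (not the study's actual code), it can be computed from a scale's item-score columns like so:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for a list of item-score columns.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    where k is the number of items and 'totals' are per-respondent sums.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    item_var = sum(pvariance(col) for col in items)    # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

A scale passes the study's threshold when `cronbach_alpha(...)` exceeds 0.6; perfectly correlated items yield an alpha of 1.0.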
166

Privatizing the Volume and Timing of Blockchain Transactions

Miller, Trevor John 20 March 2023 (has links)
With current state-of-the-art privacy-preserving blockchain solutions, users can submit transactions to a blockchain while maintaining full anonymity and not leaking the contents of the transaction through cryptographic techniques like zero-knowledge proofs and homomorphic encryption. However, the architecture of a blockchain consists of a decentralized network where every network participant maintains their own local copy of the blockchain and updates it upon every added transaction. As a result, the volume of blockchain transactions and the timestamp of each blockchain transaction for an application is publicly available. This is problematic for applications with time-sensitive or volume-sensitive outcomes because users may want this information to be privatized, such as not leaking the lateness of student examinations. However, this is not possible with existing blockchain research. In this thesis, we propose a blockchain system for multi-party applications that does not leak any useful information from the volume and timing metadata of the application's transactions, including maintaining the privacy of a time-sensitive or volume-sensitive outcome. We achieve this by adding sufficient noise using indistinguishable decoy transactions such that an adversary cannot deduce which transactions actually impacted the outcome of the application. This is facilitated in a manner where anyone can publicly verify the application's execution to be correct, fair, and honest. We demonstrate and evaluate our approach by implementing a Dutch auction that supports decoy bid transactions on a private Ethereum blockchain network. / Master of Science / Blockchains are distributed, append-only, digital ledgers whose current state is continuously agreed upon through the consensus of network participants and not by any centralized party. 
These characteristics make them unique for many applications because they enable the application to be facilitated and executed in a public, verifiable, decentralized, and tamper-proof manner. For example, Bitcoin, the most popular cryptocurrency, uses blockchains to continuously maintain a permanent, verifiable ledger of payment transactions. However, one downside of this public architecture is that the volume of transactions and the timestamp of each transaction can always be publicly observed (e.g. the timestamp of every Bitcoin payment is public). This is problematic for applications with time-sensitive or volume-sensitive outcomes because users may want this volume and timing information to be privatized, such as not leaking the lateness of student examinations which could have severe consequences like violating student privacy laws. But currently with state-of-the-art blockchain research, privatizing this information is not possible. In this thesis, we demonstrate our approach that enables these time-sensitive and volume-sensitive applications to be implemented on blockchains in a manner that can maintain the privacy of these time-sensitive or volume-sensitive outcomes without sacrificing the application's integrity or verifiability. We then demonstrate and evaluate our approach through implementing a Dutch auction that supports decoy bid transactions on a private blockchain network.
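The decoy-transaction idea described above can be sketched as a toy model (our simplification: the real system hides bid contents behind cryptographic commitments on Ethereum, whereas the plain `is_decoy` flag here stands in for a secret that is only revealed and verified after the auction closes):

```python
import random

def submit_with_decoys(real_bids, decoy_rate=2, rng=random):
    """Interleave each real bid with indistinguishable decoy bids.

    An observer of the resulting transaction stream cannot tell how many
    bids were real or when they were submitted, since decoys look
    identical and the submission order is shuffled.
    """
    txs = [{"amount": b, "is_decoy": False} for b in real_bids]
    txs += [{"amount": rng.randint(1, 100), "is_decoy": True}
            for _ in range(decoy_rate * len(real_bids))]
    rng.shuffle(txs)                  # submission order hides timing
    return txs

def settle(txs):
    """After the reveal phase, only real bids decide the outcome."""
    return max(t["amount"] for t in txs if not t["is_decoy"])
```

The point of the sketch is that the outcome (`settle`) depends only on the real bids, while the publicly visible volume and timing metadata are dominated by noise.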
167

We hit turbulence: governing screenshot collection and sharing of digital messages

Shore, Alexis 08 March 2024 (has links)
Individuals rely on digital messaging to form and maintain intimate relationships, trusting that the mediums through which they communicate are protective of their privacy. The screenshot feature undermines attempts to establish such trust. This tool allows individuals to capture and store pieces of a private conversation as a separate file on their device, rendering them usable and shareable with third parties. While the screenshot feature serves utilitarian purposes, this dissertation focuses on its ability to breach privacy expectations, termed within communication privacy management theory (CPM) as privacy turbulence. This dissertation extends the scope of CPM beyond its interpersonal bounds, recognizing the power of platforms to create rules that users follow when making decisions about others' information. Experimental results suggest that blurring received messages upon use of the screenshot feature (i.e., obscurity) and creating an explicit confidentiality expectation (i.e., an explicit privacy rule) significantly reduce screenshot collection and sharing, respectively. Additionally, reflections from participants reveal that many individuals are willing to compromise others' privacy on digital messaging platforms while simultaneously expecting protection of their own. Qualitative analysis of relevant case law and complaints/opinions from the Federal Trade Commission (FTC) reveals inconsistencies both within law and policy and as compared to empirical evidence. Judges have provided overly broad definitions of "authorization" and lofty thresholds to sustain individual harm, making statutory regulation of screenshot collection and sharing unlikely. However, guidance from the FTC demonstrates a more nuanced regulatory approach to privacy that recognizes the influence of platform design.
Results from this study suggest that design-based strategies—both ex-ante and ex-post—would be a promising first step toward adjusting the norms around screenshot collection and sharing of digital messages. Together, the results of this dissertation will inform policymakers and platform designers of the privacy harm enabled by the screenshot feature, providing tangible recommendations to create messaging platforms that are truly private.
168

Privacy Preservation for Cloud-Based Data Sharing and Data Analytics

Zheng, Yao 21 December 2016 (has links)
Data privacy is a globally recognized human right for individuals to control access to their personal information and bar the negative consequences of the use of this information. As communication technologies progress, the means to protect data privacy must also evolve to address new challenges as they come into view. Our research goal in this dissertation is to develop privacy protection frameworks and techniques suitable for emerging cloud-based data services, in particular privacy-preserving algorithms and protocols for cloud-based data sharing and data analytics services. Cloud computing has enabled users to store, process, and communicate their personal information through third-party services. It has also raised privacy issues regarding losing control over data, mass harvesting of information, and unconsented disclosure of personal content. Above all, the main concern is the lack of understanding about data privacy in cloud environments. Currently, cloud service providers either advocate the principle of the third-party doctrine and deny users' rights to protect their data stored in the cloud, or rely on the notice-and-choice framework and present users with ambiguous, incomprehensible privacy statements without any meaningful privacy guarantee. In this regard, our research has three main contributions. First, to capture users' privacy expectations in cloud environments, we conceptually divide personal data into two categories: visible data and invisible data. Visible data refer to information users intentionally create, upload to, and share through the cloud; invisible data refer to users' information retained in the cloud that is aggregated, analyzed, and repurposed without their knowledge or understanding. Second, to address users' privacy concerns raised by cloud computing, we propose two privacy protection frameworks, namely individual control and use limitation.
The individual control framework emphasizes users' capability to govern access to the visible data stored in the cloud. The use limitation framework emphasizes users' expectation to remain anonymous when the invisible data are aggregated and analyzed by cloud-based data services. Finally, we investigate various techniques to accommodate the new privacy protection frameworks in the context of four cloud-based data services: personal health record sharing, location-based proximity testing, link recommendation for social networks, and face tagging in photo management applications. For the first case, we develop a key-based protection technique to enforce fine-grained access control to users' digital health records. For the second case, we develop a key-less protection technique to achieve location-specific user selection. For the latter two cases, we develop distributed learning algorithms to prevent large-scale data harvesting. We further combine these algorithms with query regulation techniques to achieve user anonymity. The picture that emerges from the above works is a bleak one. With regard to personal data, the reality is that we can no longer control them all. As communication technologies evolve, the scope of personal data has expanded beyond local, discrete silos and become integrated into the Internet. The traditional understanding of privacy must be updated to reflect these changes. In addition, because privacy is a particularly nuanced problem governed by context, there is no one-size-fits-all solution. While some cases can be salvaged either by cryptography or by other means, in others a rethinking of the trade-offs between utility and privacy appears to be necessary. / Ph. D.
169

Privacy and Security in IPv6 Addressing

Groat, Stephen Lawrence 12 May 2011 (has links)
Due to an exponentially larger address space than Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6) uses new methods to assign network addresses to Internet nodes. Stateless Address Autoconfiguration (SLAAC) creates an address using a static value derived from the Media Access Control (MAC) address of a network interface as the host portion, or interface identifier (IID). The Dynamic Host Configuration Protocol version 6 (DHCPv6) uses a client-server model to manage network addresses, providing stateful address configuration. While DHCPv6 can be configured to assign randomly distributed addresses, the DHCP Unique Identifier (DUID) was designed to remain static for clients as they move between different DHCPv6 subnets and networks. Both the IID and the DUID are static values which are publicly exposed, creating a privacy and security threat for users and nodes. These static identifiers allow attackers to violate unsuspecting IPv6 users' privacy and security with ease: they make geographic tracking and network traffic correlation over multiple sessions simple, and they make different classes of computer and network attacks, such as system-specific attacks and Denial-of-Service (DoS) attacks, easier to employ successfully. This research identifies and tests the validity of the privacy and security threat of static IIDs and DUIDs, and identifies solutions which mitigate or eliminate the threat posed by static identifiers in IPv6. / Master of Science
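The static IID that SLAAC derives from a MAC address follows the modified EUI-64 procedure (RFC 4291): the 48-bit MAC is split in half, 0xFFFE is inserted in the middle, and the universal/local bit of the first octet is flipped. A minimal sketch (the function name is ours):

```python
def eui64_iid(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a MAC address.

    Because the MAC is static, so is this IID -- the root of the
    tracking problem the thesis describes.
    """
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                          # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    # group into four 16-bit hextets, as printed in an IPv6 address
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))
```

For example, the MAC `00:1b:44:11:3a:b7` yields the IID `21b:44ff:fe11:3ab7`, which trails the node across every network it visits.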
170

Privacy Preserving Network Security Data Analytics

DeYoung, Mark E. 24 April 2018 (has links)
The problem of revealing accurate statistics about a population while maintaining the privacy of individuals is extensively studied in several related disciplines. Statisticians, information security experts, and computational theory researchers, to name a few, have produced extensive bodies of work regarding privacy preservation. Still, the need to improve our ability to control the dissemination of potentially private information is driven home by an incessant rhythm of data breaches, data leaks, and privacy exposures. History has shown that both public and private sector organizations are not immune to loss of control over data due to lax handling, incidental leakage, or adversarial breaches. Prudent organizations should consider the sensitive nature of network security data and network operations performance data recorded as logged events. These logged events often contain data elements that are directly correlated with sensitive information about people and their activities -- often at the same level of detail as sensor data. Privacy-preserving data publication has the potential to support reproducibility and the exploration of new analytic techniques for network security. Providing sanitized data sets de-couples privacy protection efforts from analytic research. De-coupling privacy protections from analytical capabilities enables specialists to tease out the information and knowledge hidden in high-dimensional data while, at the same time, providing some degree of assurance that people's private information is not exposed unnecessarily. In this research we propose methods that support a risk-based approach to privacy-preserving data publication for network security data. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so that it can be utilized to develop new analytic methods in an ethical manner.
Our intent is to produce a database which holds network security data representative of a contextualized network and people's interaction with the network mid-points and end-points without the problems of identifiability. / Ph. D.
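A common building block in risk-based privacy-preserving publication of the kind described above is checking whether a release satisfies k-anonymity over its quasi-identifiers. This is our illustration of that one check, not the dissertation's method, and the log fields are invented:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination in the release
    appears at least k times, so no record stands out on those fields."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(counts.values()) >= k

# Hypothetical sanitized log events after generalizing source addresses
release = [
    {"src_net": "10.0.1.0/24", "port": "443"},
    {"src_net": "10.0.1.0/24", "port": "443"},
    {"src_net": "10.0.2.0/24", "port": "22"},
    {"src_net": "10.0.2.0/24", "port": "22"},
]
```

If the check fails for the desired k, the publisher would generalize or suppress further before release.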
