391

Social privacy : perceptions of veillance, relationships, and space with online social networking services

Dumbleton, Steven Philip Holt January 2016 (has links)
This research examines the experience of social privacy around online social networking services. In particular, it examines how individuals experience social privacy through the perception of veillance, relationships, and space. It highlights that individuals need varying types of veillance and relationships in order to experience the social privacy they desire, and that individuals use the perception of space to infer the acceptable conventions within that space, seeking spaces, both real and metaphorical, that they perceive to afford them the experience of social privacy. Through phenomenological methods drawn from ethnography, this study explores how the experience of social privacy is perceived. It examines the perception of veillance, relationships, and space separately, while noting that the individual perceives all three simultaneously. It argues that the varying conditions of these perceptions afford individuals the experience of social privacy. Social privacy is, therefore, perceived as a socially afforded emotional experience.
392

A systematic methodology for privacy impact assessments: a design science approach

Spiekermann-Hoff, Sarah, Oetzel, Marie Caroline January 2014 (has links) (PDF)
For companies that develop and operate IT applications that process the personal data of customers and employees, a major problem is protecting these data and preventing privacy breaches. Failure to adequately address this problem can result in considerable damage to the company's reputation and finances, as well as negative effects for customers or employees (data subjects). To address this problem, we propose a methodology that systematically considers privacy issues by using a step-by-step privacy impact assessment (PIA). Existing PIA approaches cannot be applied easily because they are poorly structured, imprecise, or lengthy. We argue that companies that employ our PIA can achieve "privacy-by-design", which is widely heralded by data protection authorities. In fact, the German Federal Office for Information Security (BSI) ratified the approach we present in this article for the technical field of RFID and published it as a guideline in November 2011. The contribution of the artefacts we created is twofold: first, we provide a formal problem-representation structure for the analysis of privacy requirements; second, we reduce the complexity of the privacy-regulation landscape for practitioners who need to make privacy-management decisions for their IT applications.
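Below is a minimal sketch of how such a step-by-step assessment might be represented programmatically; the step names are a loose paraphrase of the structure described in this abstract, not the exact artefact ratified by the BSI.

```python
from dataclasses import dataclass, field

@dataclass
class PIAStep:
    """One step of a step-by-step privacy impact assessment."""
    name: str
    findings: list = field(default_factory=list)

# Hypothetical outline of a systematic PIA; the published BSI/RFID
# guideline's steps may differ in number and wording.
pia = [
    PIAStep("Characterize the IT application and its data flows"),
    PIAStep("Derive privacy targets from legal requirements"),
    PIAStep("Evaluate the protection demand for each target"),
    PIAStep("Identify threats against each privacy target"),
    PIAStep("Select controls that mitigate the identified threats"),
    PIAStep("Assess and document residual risks"),
]

for i, step in enumerate(pia, 1):
    print(f"Step {i}: {step.name}")
```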
393

A Privacy-Policy Language and a Matching Engine for U-PrIM

Oggolder, Michael January 2013 (has links)
A privacy-policy matching engine may support users in determining if their privacy preferences match a service provider's privacy policy. Furthermore, third parties, such as Data Protection Agencies (DPAs), may support users in determining whether a service provider's privacy policy is reasonable for a given service by issuing recommendations for reasonable data-handling practices for different services. These recommendations need to be matched with service providers' privacy policies, to determine whether a privacy policy is reasonable, and with users' privacy preferences, to determine whether a set of preferences is reasonable. In this thesis we propose the design of a new privacy-policy language, called the U-PrIM Policy Language (UPL). UPL is modelled on the PrimeLife Policy Language (PPL) and tries to improve on some of PPL's shortcomings. UPL also tries to include information deemed mandatory for service providers according to the European Data Protection Directive 95/46/EC (DPD). In order to demonstrate the features of UPL, we developed a proof-of-concept matching engine and a set of example instances of UPL. The matching engine is able to match preferences, policies and recommendations in any combination. The example instances are modelled on four stages of data disclosure found in the literature.
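As a toy illustration of the matching task described above, the sketch below models a policy as three data-handling attributes and treats a match as "the policy is at most as permissive as the preference (or recommendation)". The attribute set is an assumption for illustration; actual UPL, like PPL, is a far richer language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """Simplified data-handling statement (illustrative, not UPL syntax)."""
    purposes: frozenset          # purposes the data may be used for
    retention_days: int          # maximum retention period
    recipients: frozenset        # categories of downstream recipients

def matches(preference: Policy, policy: Policy) -> bool:
    """True when the policy is no more permissive than the preference
    in every dimension; recommendations can be matched the same way."""
    return (policy.purposes <= preference.purposes
            and policy.retention_days <= preference.retention_days
            and policy.recipients <= preference.recipients)

user_pref = Policy(frozenset({"contact", "delivery"}), 90, frozenset({"controller"}))
provider = Policy(frozenset({"delivery"}), 30, frozenset({"controller"}))
print(matches(user_pref, provider))  # True: the policy fits the preference
```

The same predicate covers the pairings the engine supports: preference vs. policy, recommendation vs. policy, and recommendation vs. preference.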
394

Lessons from Québec: towards a national policy for information privacy in our information society

Boyer, Nicole-Anne 05 1900 (has links)
While on the broadest level this paper argues for a rethinking of governance in our "information society," its central thesis argues for a national policy for data protection in the private sector. It does so through three sets of lessons from the Quebec data protection experience: lessons for (1) the policy model, (2) the policy process, and (3) the policy area as it relates to the policy problem, as well as general questions about governance in an information polity. The methodology for this paper is based on a four-part sequential analysis. The first part is a theoretical and empirical exploration of the problem, which is broadly defined as the "tension over personal information." The second part looks comparatively at how other jurisdictions have responded to the problem. The third part assesses which model is the better policy alternative for Canada and concludes that Quebec's regulatory route is better than the national status quo. The fourth part uses a comparative public policy framework, as well as interviews, to understand the policy processes in Quebec and Ottawa, in order to highlight the opportunities and constraints for a national data protection policy in the private sector. / Faculty of Arts / Department of Political Science / Graduate
395

Policy Merger System for P3P in a Cloud Aggregation Platform

Olurin, Olumuyiwa January 2013 (has links)
The need for aggregating privacy policies is present in a variety of application areas today. In traditional client/server models, websites host services along with their policies in different private domains. However, in a cloud-computing platform where aggregators can merge multiple services, users often face complex decisions in choosing the right services from service providers. In this computing paradigm, the ability to aggregate policies as well as services will be useful and more effective for users who are privacy conscious regarding their sensitive or personal information. This thesis studies the problems associated with the Platform for Privacy Preferences (P3P) language and the current issues with communicating and understanding the P3P language. Furthermore, it discusses some efficient strategies and algorithms for the matching and merging processes, and then elaborates on some privacy-policy conflicts that may occur after merging policies. Lastly, the thesis presents a tool for matching and merging P3P policies. If successful, the merge produces an aggregate policy that is consistent with the policies of all participating service providers.
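The sketch below shows one plausible merge rule for P3P-like statements: take the union of purposes and the most permissive retention class, so the aggregate stays consistent with every participating provider. The toy model and the conflict-free rule are assumptions; the thesis's matching and merging algorithms, and its handling of policy conflicts, are more involved than this.

```python
from dataclasses import dataclass

@dataclass
class P3PStatement:
    """Toy model of a P3P statement; real P3P is an XML vocabulary."""
    data: str            # data element, e.g. "#user.home-info.postal"
    purposes: set        # e.g. {"current", "admin"}
    retention: str       # one of the P3P retention classes below

# P3P retention classes, ordered from least to most permissive.
RETENTION_ORDER = ["no-retention", "stated-purpose", "legal-requirement",
                   "business-practices", "indefinitely"]

def merge(a: P3PStatement, b: P3PStatement) -> P3PStatement:
    """Merge two statements about the same data element: union the
    purposes and keep the longer retention, so the result is no
    stricter than either input."""
    assert a.data == b.data, "merging is defined per data element"
    retention = max(a.retention, b.retention, key=RETENTION_ORDER.index)
    return P3PStatement(a.data, a.purposes | b.purposes, retention)

a = P3PStatement("#user.home-info.postal", {"current"}, "stated-purpose")
b = P3PStatement("#user.home-info.postal", {"contact"}, "indefinitely")
print(merge(a, b))  # purposes {"current", "contact"}, retention "indefinitely"
```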
396

Privacy Concerns and Personality Traits Influencing Online Behavior: A Structural Model

Grams, Brian C. 05 1900 (has links)
The concept of privacy has proven difficult to analyze because of its subjective nature and susceptibility to psychological and contextual influences. This study challenges the concept of privacy as a valid construct for addressing individuals' concerns regarding online disclosure of personal information, based on the premise that underlying behavioral traits offer a more reliable and temporally stable measure of privacy-oriented behavior than do snapshots of environmentally induced emotional states typically measured by opinion polls. This study investigated the relationship of personality characteristics associated with individuals' general privacy-related behavior to their online privacy behaviors and concerns. Two latent constructs, Functional Privacy Orientation and Online Privacy Orientation, were formulated. Functional Privacy Orientation is defined as a general measure of individuals' perception of control over their privacy. It was measured using the factors General Disclosiveness, Locus of Control, Generalized Trust, Risk Orientation, and Risk Propensity as indicator variables. Online Privacy Orientation is defined as a measure of individuals' perception of control over their privacy in an online environment. It was measured using the factors Willingness to Disclose Online, Level of Privacy Concern, Information Management Privacy Concerns, and Reported Online Disclosure as indicator variables. A survey questionnaire that included two new instruments to measure online disclosure and a willingness to disclose online was used to collect data from a sample of 274 adults. Indicator variables for each of the latent constructs, Functional Privacy Orientation and Online Privacy Orientation, were evaluated using corrected item-total correlations, factor analysis, and coefficient alpha. The measurement models and relationship between Functional Privacy Orientation and Online Privacy Orientation were assessed using exploratory factor analysis and structural equation modeling respectively. The structural model supported the hypothesis that Functional Privacy Orientation significantly influences Online Privacy Orientation. Theoretical, methodological, and practical implications and suggestions for analysis of privacy concerns and behavior are presented.
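For readers unfamiliar with the reliability statistics named above, here is a small self-contained sketch of coefficient (Cronbach's) alpha, one of the tools used to evaluate the indicator variables; the data are synthetic, not the study's sample of 274 adults.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the
    summed scale)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 200 simulated respondents answering a 4-item scale whose
# items share a common factor, as a disclosure scale might.
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
scores = factor + 0.7 * rng.normal(size=(200, 4))
print(round(cronbach_alpha(scores), 3))  # high alpha: items hang together
```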
397

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

January 2020 (has links)
Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge. This work introduces a tunable leakage measure called maximal α-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely α-loss. The choice of α determines specific adversarial actions, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these two values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. Maximal α-leakage is proved to have a composition property and to be robust to side information. There is a fundamental disconnect between theoretical measures of information leakage and their application in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. This framework is formulated as a constrained minimax optimization of the expected α-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with α = 1 is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks. Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, (ε, δ)-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to stochastic gradient descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
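A sketch of the α-loss family as it is commonly parameterized in this line of work (an assumption; the dissertation's exact normalization may differ): for the probability p assigned to the true outcome, ℓ_α(p) = (α/(α−1))(1 − p^((α−1)/α)), which recovers log-loss as α → 1 and the probability of error 1 − p as α → ∞, matching the two endpoints named above.

```python
import numpy as np

def alpha_loss(p_true: float, alpha: float) -> float:
    """alpha-loss of the probability assigned to the true outcome.
    alpha = 1 recovers log-loss; alpha = inf recovers 1 - p."""
    if alpha == 1.0:
        return -np.log(p_true)
    if np.isinf(alpha):
        return 1.0 - p_true
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))

p = 0.8
for a in (1.0, 2.0, 10.0, np.inf):
    print(a, round(alpha_loss(p, a), 4))
# Values interpolate between -log(0.8) ≈ 0.2231 and 1 - 0.8 = 0.2.
```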
398

Understanding Susceptibility to Social Engineering Attacks Through Online Privacy Behaviors

Glaris Lancia Raja Arul (11794286) 19 December 2021 (has links)
Human-based social engineering attacks continue to grow in popularity, with increasing numbers of cases reported yearly. This can be attributed to the ease with which common social engineering attacks can be launched and the abundance of information available online that attackers can use against their targets. Current mitigation strategies and awareness trainings against social engineering attacks incorporate an understanding of the major factors that influence individual susceptibility to such attacks. These strategies emphasize engagement in secure behaviors and practices, especially with respect to identifying the key indicators in any form of communication or situation that mark it as a social engineering attack. There is also an emphasis on restricting the amount of information that individuals share about themselves in workplace settings. However, these approaches do not comprehensively consider the different intrinsic motivations that individuals develop to engage in the protective behaviors necessary to assure their safety against social engineering attacks, regardless of environment. Individual attitudes and behaviors about online privacy could hold the key to defending oneself by restricting unwarranted access to associated information online. Psychological traits and attitudes developed in response to the perception of social engineering as a threat could act as motivators for engaging in privacy protective behaviors, which in turn could affect the extent to which an individual can protect themselves from social engineering attacks. This thesis investigates the role of privacy protective behaviors in an individual's susceptibility to social engineering attacks, and the impact of specific privacy factors as motivating antecedents to engagement in privacy protective behaviors.
399

Complying with the GDPR in the context of continuous integration

Li, Ze Shi 08 April 2020 (has links)
The full enforcement of the General Data Protection Regulation (GDPR), which began on May 25, 2018, forced any organization that collects and/or processes personal data from European Union citizens to comply with a series of stringent and comprehensive privacy regulations. Many software organizations struggled to comply with the entirety of the GDPR's regulations both leading up to and even after the GDPR deadline. Previous studies on the GDPR have primarily focused on finding implications for users and organizations using surveys or interviews. However, there is a dearth of in-depth studies that investigate compliance practices and compliance challenges in software organizations. In particular, small and medium enterprises are often neglected in these previous studies, despite representing the majority of organizations in the EU. Furthermore, organizations that practice continuous integration have largely been ignored in studies on GDPR compliance. Using design science methodology, we conducted an in-depth study over the span of 20 months regarding GDPR compliance practices and challenges in collaboration with a small startup organization. Our first step helped identify our collaborator's business problems. Subsequently, we iteratively developed two artifacts to address those business problems: a set of privacy requirements operationalized from GDPR principles, and an automated GDPR tool that tests these GDPR-derived privacy requirements. This design science approach resulted in five implications for research and practice about ongoing challenges to compliance. For instance, our research reveals that GDPR regulations can be partially operationalized and tested through automated means, which is advantageous for achieving long-term compliance. In contrast, more research is needed to create more efficient and effective means to disseminate and manage GDPR knowledge among software developers. / Graduate
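As a hedged illustration of what an automated, CI-run test of one operationalized privacy requirement might look like, the sketch below checks a storage-limitation rule (GDPR Art. 5(1)(e)): personal data must be erased once its declared retention period expires. The record format, retention table, and inventory stub are illustrative assumptions, not the tool built in this thesis.

```python
import datetime

# Assumed retention periods declared per category of personal data.
RETENTION = {"email": datetime.timedelta(days=365),
             "ip_address": datetime.timedelta(days=30)}

def load_personal_data_inventory():
    # Stand-in for querying the application's data stores in CI.
    return [{"kind": "email", "collected_at": datetime.datetime(2019, 6, 1)},
            {"kind": "ip_address", "collected_at": datetime.datetime(2020, 4, 1)}]

def expired(record, now):
    return now - record["collected_at"] > RETENTION[record["kind"]]

def test_no_expired_personal_data():
    """Fails the CI build if any record has outlived its retention period."""
    now = datetime.datetime(2020, 4, 8)
    overdue = [r for r in load_personal_data_inventory() if expired(r, now)]
    assert not overdue, f"records past their retention period: {overdue}"
```

Run under pytest in the CI pipeline, a test like this turns a GDPR principle into a regression check that every build must pass.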
400

Privacy Preserving Systems With Crowd Blending

Mohsen Minaei (9525917) 16 December 2020 (has links)
Over the years, the Internet has become a platform where individuals share their thoughts and personal information. In some cases, this content contains damaging or sensitive information, which a malicious data collector can leverage to exploit the individual. Nevertheless, what people consider to be sensitive is a relative matter: it not only varies from one person to another but also changes through time. Therefore, it is hard to identify what content is considered sensitive or damaging from the viewpoint of a malicious entity that does not target specific individuals, but rather scavenges data-sharing platforms to identify sensitive information as a whole. However, the actions that users take to change their privacy preferences or hide their information assist these malicious entities in discovering the sensitive content.

This thesis offers crowd-blending techniques to create privacy-preserving systems while maintaining platform utility. In particular, we focus on two privacy tasks for two different data-sharing platforms: (i) concealing content deletion on social media platforms and (ii) concealing censored information in cryptocurrency blockchains. For the content-deletion concealment problem, we first survey the users of social platforms to understand their deletion privacy expectations. Second, based on the users' needs, we propose two new privacy-preserving deletion mechanisms for the next generation of social platforms. Finally, we compare the effectiveness and usefulness of the proposed mechanisms with currently deployed ones through a user study. For the second problem of concealing censored information in cryptocurrencies, we present a provably secure steganography scheme using cryptocurrencies, showing the possibility of hiding censored information among the transactions of cryptocurrencies.
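A toy sketch of the crowd-blending intuition for the deletion task: each genuine deletion is hidden among k randomly chosen decoys, so an observer diffing platform snapshots cannot single out the intended deletions. The function and its parameters are illustrative assumptions, not the mechanisms proposed in this thesis.

```python
import random

def blend_deletions(user_deletions: set, live_posts: set, k: int,
                    rng: random.Random) -> set:
    """Return the set of posts to withhold: the user's genuine deletions
    plus k random still-live decoys per deletion, so the genuine ones
    blend into a crowd."""
    decoy_pool = sorted(live_posts - user_deletions)
    decoys = set(rng.sample(decoy_pool, k * len(user_deletions)))
    return user_deletions | decoys

rng = random.Random(7)
hidden = blend_deletions({"p3"}, {f"p{i}" for i in range(10)}, k=3, rng=rng)
print(sorted(hidden))  # "p3" is indistinguishable from the three decoys
```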
