About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

At a Loss for Words: Using Performance to Explain How Friends Communicate About Infertility

Binion, Kelsey Elizabeth 06 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In the United States, approximately one in five women is unable to get pregnant after one year of trying. Due to the pervasiveness of pronatalism in Western society, having a child is widely assumed to be a natural and expected part of womanhood. Society’s master narratives reinforce these ideals and stigmatize the experiences of women who have infertility. This multi-phase research study examined how women discuss their infertility journey with their friends. The study’s aims were to understand friendships within the context of infertility, how the relationship affects a woman’s identity, and the communicative behaviors used in conversations. Fifteen interviews were conducted with women who experienced or are experiencing infertility and had discussed their past or current challenges with a friend. Results of a phronetic iterative analysis suggested that women who have personal experience with infertility (a) disclose to close/best friends, (b) communicate their identity as “broken,” (c) desire emotional support, and (d) strategically navigate conversations as they encounter positive and negative messages. These results were transformed into a performance, which included six monologues and a talkback. The purpose of the arts-based methodology was to disseminate results and assess the performance’s impact. Seventy-three individuals attended one of the two performances in April 2023, and 50 attendees completed the post-performance evaluation. The quantitative results suggest that attendees felt informed about the complexities of infertility, gained a new perspective, received advice about how to have future conversations, and did not feel offended by the content. Through a thematic analysis, four themes emerged from the two talkback sessions and evaluation comments: being informed about infertility as a health condition, appreciating the theatrical format to learn, connecting to the performance to understand the illness experience, and feeling comfortable navigating conversations about infertility. Despite the variance in infertility experiences, friends are essential social support figures as women navigate infertility, and there are best practices when having a conversation, as demonstrated in the performance. This study’s implications include providing communication strategies to support women with infertility and recognizing that an arts-based methodology can highlight counterstories, inform about a stigmatized health issue, and engage the community.
342

Social media and its effect on privacy

Adams, Brittney 01 August 2012 (has links)
While research has been conducted on social media, few comparisons have been made with regard to the privacy issues that exist within the most common social media networks, such as Facebook, Google Plus, and Twitter. Most research has concentrated on technical issues with the networks and on the effects of social media in fields such as medicine, law, and science. Although the effects on these fields are beneficial to the people involved in them, few studies have shown how everyday users are affected by the use of social media. Social media networks affect the privacy of users because the networks control what happens to user contact information, posts, and other delicate disclosures that users make on those networks. Social media networks also have the ability to sync with phone and tablet applications. Because the use of these applications requires additional contact information from users, social media networks are entrusted with keeping user information secure. This paper analyzes newspaper articles, magazine articles, and research papers pertaining to social media to determine what effects social media has on users' privacy and how much trust should be placed in social media networks such as Facebook. It provides a comprehensive view of the most used social media networks in 2012 and offers methods and suggestions for users to help protect themselves against privacy invasion.
343

Privacy in row houses of Montreal

Rahbar, Mehrdad January 1996 (has links)
No description available.
344

Efficient Private Data Outsourcing

Steele, Aaron M. 17 August 2011 (has links)
No description available.
345

Mining Privacy Settings to Find Optimal Privacy-Utility Tradeoffs for Social Network Services

Guo, Shumin 23 May 2014 (has links)
No description available.
346

Languages for specifying protection requirements in data base systems - a semantic model /

Hartson, H. Rex January 1975 (has links)
No description available.
347

Design and operations of a secure computer system /

Muftic, Sead January 1976 (has links)
No description available.
348

Design of event-driven protection mechanisms /

Cohen, David January 1977 (has links)
No description available.
349

Membership Inference Attacks and Defenses in Classification Models

Jiacheng Li (17775408) 12 January 2024 (has links)
<p dir="ltr">Neural network-based machine learning models are now prevalent in our daily lives, from voice assistants~\cite{lopez2018alexa}, to image generation~\cite{ramesh2021zero} and chatbots (e.g., ChatGPT-4~\cite{openai2023gpt4}). These large neural networks are powerful but also raise serious security and privacy concerns, such as whether personal data used to train these models are leaked by these models. One way to understand and address this privacy concern is to study membership inference (MI) attacks and defenses~\cite{shokri2017membership,nasr2019comprehensive}. In MI attacks, an adversary seeks to infer if a given instance was part of the training data. We study the membership inference (MI) attack against classifiers, where the attacker's goal is to determine whether a data instance was used for training the classifier. Through systematic cataloging of existing MI attacks and extensive experimental evaluations of them, we find that a model's vulnerability to MI attacks is tightly related to the generalization gap---the difference between training accuracy and test accuracy. We then propose a defense against MI attacks that aims to close the gap by intentionally reduces the training accuracy. More specifically, the training process attempts to match the training and validation accuracies, by means of a new {\em set regularizer} using the Maximum Mean Discrepancy between the softmax output empirical distributions of the training and validation sets. Our experimental results show that combining this approach with another simple defense (mix-up training) significantly improves state-of-the-art defense against MI attacks, with minimal impact on testing accuracy. </p><p dir="ltr"><br></p><p dir="ltr">Furthermore, we considers the challenge of performing membership inference attacks in a federated learning setting ---for image classification--- where an adversary can only observe the communication between the central node and a single client (a passive white-box attack). Passive attacks are one of the hardest-to-detect attacks, since they can be performed without modifying how the behavior of the central server or its clients, and assumes {\em no access to private data instances}. The key insight of our method is empirically observing that, near parameters that generalize well in test, the gradient of large overparameterized neural network models statistically behave like high-dimensional independent isotropic random vectors. Using this insight, we devise two attacks that are often little impacted by existing and proposed defenses. Finally, we validated the hypothesis that our attack depends on the overparametrization by showing that increasing the level of overparametrization (without changing the neural network architecture) positively correlates with our attack effectiveness.</p><p dir="ltr">Finally, we observe that training instances have different degrees of vulnerability to MI attacks. Most instances will have low loss even when not included in training. For these instances, the model can fit them well without concerns of MI attacks. An effective defense only needs to (possibly implicitly) identify instances that are vulnerable to MI attacks and avoids overfitting them. A major challenge is how to achieve such an effect in an efficient training process. 
Leveraging two distinct recent advancements in representation learning: counterfactually-invariant representations and subspace learning methods, we introduce a novel Membership-Invariant Subspace Training (MIST) method to defend against MI attacks. MIST avoids overfitting the vulnerable instances without significant impact on other instances. We have conducted extensive experimental studies, comparing MIST with various other state-of-the-art (SOTA) MI defenses against several SOTA MI attacks. We find that MIST outperforms other defenses while resulting in minimal reduction in testing accuracy. </p><p dir="ltr"><br></p>
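For illustration only, and not the author's implementation: the general idea of an MMD-based set regularizer as described in this abstract might be sketched as below. The RBF kernel, its bandwidth, and the weighting factor lam are assumptions made for this example.

```python
# Illustrative sketch (not the thesis's code): a "set regularizer" that penalizes
# the Maximum Mean Discrepancy between the softmax output distributions of a
# training batch and a validation batch, nudging the two accuracies together.
import torch
import torch.nn.functional as F

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between two sets of vectors (assumed kernel choice).
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_regularizer(train_logits, val_logits, bandwidth=1.0):
    # Biased empirical estimate of MMD^2 between the two sets of softmax outputs.
    p = F.softmax(train_logits, dim=1)
    q = F.softmax(val_logits, dim=1)
    return (rbf_kernel(p, p, bandwidth).mean()
            + rbf_kernel(q, q, bandwidth).mean()
            - 2 * rbf_kernel(p, q, bandwidth).mean())

def training_loss(model, x_train, y_train, x_val, lam=1.0):
    # Standard cross-entropy on the training batch plus the MMD penalty.
    train_logits = model(x_train)
    val_logits = model(x_val)
    return F.cross_entropy(train_logits, y_train) + lam * mmd_regularizer(train_logits, val_logits)
```

In this sketch the penalty only discourages the training and validation output distributions from drifting apart; the abstract additionally combines such a defense with mix-up training, which is not shown here.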
350

Algorithms in Privacy & Security for Data Analytics and Machine Learning

Liang, Yuting January 2020 (has links)
Applications employing very large datasets are increasingly common in this age of Big Data. While these applications provide great benefits in various domains, their usage can be hampered by real-world privacy and security risks. In this work we propose algorithms which aim to provide privacy and security protection in different aspects of these applications. First, we address the problem of data privacy. When the datasets used contain personal information, they must be properly anonymized in order to protect the privacy of the subjects to whom the records pertain. A popular privacy preservation technique is the k-anonymity model, which guarantees that any record in the dataset must be indistinguishable from at least k-1 other records in terms of quasi-identifiers (i.e., the subset of attributes that can be used to deduce the identity of an individual). Achieving k-anonymity while considering the competing goal of data utility can be a challenge, especially for datasets containing large numbers of records. We formulate k-anonymization as an optimization problem with the objective of maximizing data utility, and propose two practical algorithms for solving this problem. Second, we address the problem of application security; specifically, for predictive models using Deep Learning, where adversaries can use minimally perturbed inputs (a.k.a. adversarial examples) to cause a neural network to produce incorrect outputs. We propose an approach which protects against adversarial examples in image classification-type networks. The approach relies on two mechanisms: (1) a mechanism that increases robustness at the expense of accuracy; and (2) a mechanism that improves accuracy. We show that an approach combining the two mechanisms can provide protection against adversarial examples while retaining accuracy. We provide experimental results to demonstrate the effectiveness of our algorithms for both problems. / Thesis / Master of Science (MSc) / Applications employing very large datasets are increasingly common in this age of Big Data. While these applications provide great benefits in various domains, their usage can be hampered by real-world privacy and security risks. In this work we propose algorithms which aim to provide privacy and security protection in different aspects of these applications. We address the problem of data privacy; when the datasets used contain personal information, they must be properly anonymized in order to protect the privacy of the subjects to whom the records pertain. We propose two practical algorithms for anonymization which are also utility-centric. We address the problem of application security, specifically for Deep Learning applications where adversaries can use minimally perturbed inputs to cause a neural network to produce incorrect outputs. We propose an approach which protects against these attacks. We provide experimental results to demonstrate the effectiveness of our algorithms for both problems.
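As a minimal illustration of the k-anonymity property described above (not the anonymization algorithms proposed in the thesis), the check below verifies that every combination of quasi-identifier values is shared by at least k records; the column names and data are hypothetical.

```python
# Illustrative sketch: test whether a table satisfies k-anonymity with respect
# to a chosen set of quasi-identifiers.
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    # Every equivalence class (group of records sharing the same
    # quasi-identifier values) must contain at least k records.
    class_sizes = df.groupby(quasi_identifiers).size()
    return bool((class_sizes >= k).all())

# Hypothetical example: generalized ZIP code, age band, and gender as quasi-identifiers.
records = pd.DataFrame({
    "zip": ["476**", "476**", "476**", "479**", "479**"],
    "age_band": ["20-29", "20-29", "20-29", "30-39", "30-39"],
    "gender": ["F", "F", "F", "M", "M"],
    "diagnosis": ["flu", "cold", "flu", "asthma", "flu"],
})
print(is_k_anonymous(records, ["zip", "age_band", "gender"], k=2))  # True: smallest class has 2 records
```

The thesis treats k-anonymization itself as an optimization problem that trades this guarantee off against data utility; the snippet above only checks the property and does not produce an anonymization.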
