691

Genetic information and insurance : a contextual analysis of legal and regulatory means of promoting just distributions

Lemmens, Trudo January 2003 (has links)
No description available.
692

On the Sample Complexity of Privately Learning Gaussians and their Mixtures / Privately Learning Gaussians and their Mixtures

Aden-Ali, Ishaq January 2021 (has links)
Multivariate Gaussians: We provide sample complexity upper bounds for semi-agnostically learning multivariate Gaussians under the constraint of approximate differential privacy. These are the first finite sample upper bounds for general Gaussians which do not impose restrictions on the parameters of the distribution. Our bounds are near-optimal in the case when the covariance is known to be the identity, and conjectured to be near-optimal in the general case. From a technical standpoint, we provide analytic tools for arguing the existence of global "locally small" covers from local covers of the space. These are exploited using modifications of recent techniques for differentially private hypothesis selection.

Mixtures of Gaussians: We consider the problem of learning mixtures of Gaussians under the constraint of approximate differential privacy. We provide the first sample complexity upper bounds for privately learning mixtures of unbounded axis-aligned (or even unbounded univariate) Gaussians. To prove our results, we design a new technique for privately learning mixture distributions. A class of distributions F is said to be list-decodable if there is an algorithm that, given "heavily corrupted" samples from a distribution f in F, outputs a list of distributions, H, such that one of the distributions in H approximates f. We show that if F is privately list-decodable then we can privately learn mixtures of distributions in F. Finally, we show that axis-aligned Gaussian distributions are privately list-decodable, thereby proving that mixtures of such distributions are privately learnable.

/ Thesis / Master of Science (MSc) / Is it possible to estimate an unknown probability distribution given random samples from it? This is a fundamental problem, known as distribution learning (or density estimation), that has been studied by statisticians for decades and has in recent years become a topic of interest for computer scientists. While distribution learning is a mature and well-understood problem, in many cases the samples (or data) we observe may consist of sensitive information belonging to individuals, and well-known solutions may inadvertently result in the leakage of private information. In this thesis we study distribution learning under the assumption that the data is generated from high-dimensional Gaussians (or their mixtures), with the aim of understanding how many samples an algorithm needs before it can guarantee a good estimate. Furthermore, to protect against leakage of private information, we consider approaches that satisfy differential privacy, the gold standard for modern private data analysis.
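As a concrete illustration of the privacy model this thesis works in (not the thesis's own algorithm, which handles unbounded, high-dimensional Gaussians), here is a minimal sketch of privately estimating the mean of a univariate Gaussian with the classical Gaussian mechanism. The clipping bound and all parameter values are illustrative assumptions:

```python
import numpy as np

def dp_gaussian_mean(samples, clip, epsilon, delta, rng=None):
    """Estimate the mean of (roughly) Gaussian data under (epsilon, delta)-DP.

    Each sample is clipped to [-clip, clip], so one individual's record can
    shift the empirical mean by at most 2 * clip / n (the L2 sensitivity).
    Gaussian noise calibrated to that sensitivity is then added.
    """
    rng = rng or np.random.default_rng()
    n = len(samples)
    clipped = np.clip(samples, -clip, clip)
    sensitivity = 2 * clip / n
    # Classical calibration of the Gaussian mechanism (valid for epsilon <= 1).
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped.mean() + rng.normal(0.0, sigma)

# Example: 10,000 draws from N(3, 1), privatized with epsilon = 1, delta = 1e-5.
data = np.random.default_rng(0).normal(3.0, 1.0, size=10_000)
print(dp_gaussian_mean(data, clip=10.0, epsilon=1.0, delta=1e-5))
```

Note the gap the sketch leaves open: the clipping bound presumes known scale, whereas the thesis's contribution is precisely to handle Gaussians with unrestricted parameters.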
693

Data Centric Defenses for Privacy Attacks

Abhyankar, Nikhil Suhas 14 August 2023 (has links)
Recent research shows that machine learning algorithms are highly susceptible to attacks that try to extract sensitive information about the data used in model training. These attacks, called privacy attacks, exploit the model training process. Contemporary defense techniques make alterations to the training algorithm. Such defenses are computationally expensive, cause a noticeable privacy-utility tradeoff, and require control over the training process. This thesis presents a data-centric approach that uses data augmentations to mitigate privacy attacks. We present privacy-focused data augmentations to change the sensitive data submitted to the model trainer. Compared to traditional defenses, our method gives the individual data owner more control over protecting their private data. The defense is model-agnostic and does not require the data owner to have any control over the model training. Privacy-preserving augmentations are implemented for two attacks, namely membership inference and model inversion, using two distinct techniques. While the proposed augmentations offer a better privacy-utility tradeoff on CIFAR-10 for membership inference, against model inversion attacks they reduce the reconstruction rate to ≤ 1% while reducing the classification accuracy by only 2%. This is the first attempt to defend against model inversion and membership inference attacks using decentralized privacy protection.

/ Master of Science / Privacy attacks are threats posed to extract sensitive information about the data used to train machine learning models. As machine learning is used extensively for many applications, models have access to private information such as financial records and medical history, depending on the application. It has been observed that machine learning models can leak the information they contain. As models tend to 'memorize' training data to some extent, even removing the data from the training set cannot prevent privacy leakage. As a result, the research community has focused its attention on developing defense techniques to prevent this information leakage. However, existing defenses rely heavily on altering the way a machine learning model is trained. This approach is termed model-centric: the model owner is responsible for changing the model algorithm to preserve data privacy, which degrades model performance while upholding data privacy. Our work introduces the first data-centric defense, which gives the data owner the tools to protect the data. We demonstrate that the proposed defense provides protection while maintaining model performance to a great extent.
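The abstract does not spell out the specific augmentations, so the following is only a hypothetical sketch of the data-centric idea: the data owner perturbs each record before handing it to an untrusted trainer. The particular transforms (a small shift plus pixel noise) and all parameters are stand-in assumptions, not the thesis's technique:

```python
import numpy as np

def privacy_augment(image, rng, noise_scale=0.05, max_shift=2):
    """Illustrative owner-side augmentation applied before submission.

    A small random translation plus pixel noise makes the submitted record
    a less exact fingerprint of the original, which is the intuition behind
    augmentation-based defenses to membership inference and model inversion.
    """
    # Random translation by up to max_shift pixels on each axis.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(image, (int(dy), int(dx)), axis=(0, 1))
    # Additive Gaussian pixel noise, clipped back to the valid range.
    noisy = shifted + rng.normal(0.0, noise_scale, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))           # stand-in for a CIFAR-10 image
print(privacy_augment(img, rng).shape)  # (32, 32, 3)
```

The key design point the sketch preserves is that the perturbation happens on the owner's side: no cooperation from, or control over, the model trainer is required.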
694

Data mining in healthcare : A security and privacy perspective

Vimark, Sara January 2023 (has links)
Data mining has become an essential tool in various domains, including healthcare, for finding patterns and relationships in large datasets to solve business issues. However, given the sensitivity of healthcare data, safeguarding confidentiality and privacy to protect patient information is a high priority. This literature review focuses on security and privacy methods used in data mining within the healthcare field. The study examines various techniques employed to secure and preserve the privacy of healthcare data and explores their applications. The review addresses research questions about security and privacy techniques in healthcare data mining and their specific use cases. By summarizing the current state of security and privacy methods, this review aims to contribute to the knowledge base of data mining in healthcare and provide insights for future research. The results show that anonymization, cryptography, blockchain, differential privacy, and randomization techniques are the most prevalent methods. However, more research is needed to provide sufficiently secure methods that still preserve the data's utility.
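As a concrete illustration of one of the prevalent methods named above, here is a minimal sketch of randomized response, a classic randomization technique; the 75% truth probability and the example rates are illustrative assumptions:

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Answer truthfully with probability p_truth, otherwise flip a fair
    coin. No single response reveals a patient's true value, yet
    population-level rates remain estimable."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_rate(responses, p_truth: float = 0.75) -> float:
    """Invert the randomization for an unbiased population estimate:
    E[yes] = p_truth * rate + (1 - p_truth) * 0.5."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Example: 10,000 patients, 30% with the sensitive condition.
truths = [random.random() < 0.3 for _ in range(10_000)]
answers = [randomized_response(t) for t in truths]
print(round(estimate_rate(answers), 3))  # close to 0.3
```

This captures the utility-privacy tension the review points to: each answer is deniable, but the aggregate statistic survives.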
695

Searching for the silver lining of the US cloud

Di Gleria, Sonja January 2022 (has links)
We live in a society where more and more services are available online, and to an increasing extent, people expect that there should be a digital solution. The demand for digitalization of the public sector is increasing. At the same time, public organizations are required to handle tax funds responsibly and not buy more expensive solutions than necessary. Cloud providers are often used to solve the equation of being both efficient and economical - and not least secure. The problem is that after a judgment of the Court of Justice of the European Union (Schrems II), cloud-based solutions supplied by US-based providers appear to be legally prohibited, as their use violates the GDPR. The GDPR complicates digitization work by creating uncertainty about what a public organization is allowed to do. The research question to help shed light on this issue is "How can the public sector in Sweden use US cloud providers in the light of Schrems II?"

This research uses design science as a research method to find the critical factors that support the use of US cloud service providers, and it uses those factors as requirements. As the problem is practical, action research is used as a research strategy. The primary data collection methods are interviews with subject matter experts, for their knowledge and direct insight into the problem; document research of mostly official documents, as a knowledge base for the research, with their validity and reliability; and a variant of brainstorming, for new perspectives. Thematic analysis, along with explanation and root cause analysis, is used to analyze the results and help define the requirements for using US cloud providers in the public sector.

The GDPR is clear about third-country transfers, but additional laws and demands cause uncertainty about how to apply it and to which kinds of data. The critical factors found are contributing laws, data classification, risk management, internal procurement routines, employee knowledge level, and the need for documentation. These results led to the conclusion that open, public data is the only kind of data for which it is possible to use US cloud providers. After carefully examining the critical factors, some public organizations have nevertheless chosen to use US cloud services for other data types, as they decided it was the safer choice. The EU and the US have just agreed on the principles of a new trans-Atlantic data transfer treaty. This treaty must solve several problems to guarantee an adequate level of protection, and uncertainty about whether it will do so persists in the affected organizations. One thing is clear: an organization that meets the critical requirements stands firm in the face of whatever future may come.
696

User Privacy Perception and Concerns Regarding the Use of Cloud-Based Assistants

Awojobi, Abiodun 12 1900 (has links)
Cloud-based assistants like the Google Home and the Amazon Alexa have become ubiquitous in our homes. Users can simply communicate with the devices using a smartphone application. There are privacy concerns associated with cloud-based assistants. For example, users do not know what type of information is being sent to the device manufacturer, whether the device is stealthily listening to conversations, how long data is retained, or who else has access to the data. Privacy is about perception. The goal of this study is to determine user privacy concerns regarding cloud-based assistants by adopting a quantitative research method. The study used a privacy decision framework that lists three core components: technology controls, understanding user privacy preference, and government regulations. The research used Dervin's sensemaking model to describe users' privacy perception using the privacy decision framework, and it improved on a privacy perception survey instrument from previous dissertations. An understanding of user privacy concerns with cloud-based assistants is required to provide comprehensive privacy guidance to stakeholders. The significance of this study lies in identifying the privacy perception of users of cloud-based assistants and the extent to which the components of the theoretical framework can impact user privacy perception. The results of this study serve as a guide for device manufacturers and other stakeholders in prioritizing privacy design decisions.
697

Fundamental Constraints And Provably Secure Constructions Of Anonymous Communication Protocols

Debajyoti Das (11190285) 27 July 2021 (has links)
Anonymous communication networks (ACNs) are critical to communication privacy over the internet, as they enable individuals to maintain their privacy from untrusted intermediaries and endpoints. Typically, ACNs involve messages traveling through some intermediaries before arriving at their destinations, and therefore they introduce network latency and bandwidth overheads.

The goal of this work is to investigate the fundamental constraints of anonymous communication (AC) protocols. We analyze the relationship between bandwidth overhead, latency overhead, and sender anonymity or recipient anonymity against a global passive (network-level) adversary. We confirm the widely believed trilemma that an AC protocol can only achieve two out of the following three properties: strong anonymity (i.e., anonymity up to a negligible chance), low bandwidth overhead, and low latency overhead.

We further study anonymity against a stronger global passive adversary that can additionally passively compromise some of the AC protocol nodes. For a given number of compromised nodes, we derive as a necessary constraint a relationship between bandwidth and latency overhead whose violation makes it impossible for an AC protocol to achieve strong anonymity. We analyze prominent AC protocols from the literature and show to what extent they satisfy our necessary constraints. Our fundamental necessary constraints offer a guideline not only for improving existing AC systems but also for designing novel AC protocols with non-traditional bandwidth and latency overhead choices.

Using the guidelines indicated by our fundamental necessary constraints, we provide two efficient protocol constructions. First, we design a mixnet-based AC protocol, Streams, that provides provable mixing guarantees at the expense of latency overhead. Streams realizes a trusted third-party stop-and-go mix as long as each message stays in the system for $\omega(\log \eta)$ rounds. Second, we offer a DC-net based design, OrgAn, that can provide strong sender anonymity with constant latency at the expense of bandwidth overhead. OrgAn solves the problem of the regular key and slot agreement required in typical DC-net based protocols by utilizing a client/relay/server architecture.
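As a rough illustration of the mixing that underlies such latency overheads (a toy threshold mix, not the Streams or OrgAn construction; batch size and message format are assumptions), the following sketch buffers messages and flushes them in a random order:

```python
import secrets

class ThresholdMix:
    """Toy threshold mix: buffer messages, then flush them in random order.

    Batching trades latency for anonymity, the tension at the heart of the
    trilemma: with batch size b, an observer linking inputs to outputs faces
    b equally plausible senders per message, but each message may wait for
    up to b - 1 later arrivals.
    """

    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.pool: list[tuple[str, str]] = []  # (recipient, payload)

    def submit(self, recipient: str, payload: str) -> list[tuple[str, str]]:
        self.pool.append((recipient, payload))
        if len(self.pool) < self.batch_size:
            return []  # hold until the batch fills
        batch, self.pool = self.pool, []
        # Fisher-Yates shuffle with a cryptographic RNG breaks arrival order.
        for i in range(len(batch) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            batch[i], batch[j] = batch[j], batch[i]
        return batch

mix = ThresholdMix(batch_size=3)
mix.submit("bob", "m1")
mix.submit("carol", "m2")
print(mix.submit("dave", "m3"))  # all three messages, in random order
```

A real mixnet additionally layers encryption per hop and chains several such nodes; the sketch isolates only the batching/shuffling step that creates the latency-anonymity tradeoff.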
698

Bucketization Techniques for Encrypted Databases: Quantifying the Impact of Query Distributions

Raybourn, Tracey 06 May 2013 (has links)
No description available.
699

Trust Negotiation for Open Database Access Control

Porter, Paul A. 09 May 2006 (has links) (PDF)
Hippocratic databases are designed to protect the privacy of the individuals whose personal information they contain. This thesis presents a model for providing and enforcing access control in an open Hippocratic database system. Previously unknown individuals can gain access to information in the database by authenticating to roles through trust negotiation. Allowing qualified strangers to access the database increases the usefulness of the system without compromising privacy. This thesis presents the design and implementation of two methods for filtering information from database queries. First, we extend a query modification method for use in an open database system. Second, we introduce a novel filtering method that overcomes some limitations of the query modification method. We also provide results showing that the two methods have comparable performance that is suitable for interactive response time with our sample data set.
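As a rough illustration of the query modification idea described above (a hedged sketch only; the role names, policy table, and schema are hypothetical, not the thesis's implementation), a query can be rewritten so that only columns and rows the negotiated role is authorized for flow back to the client:

```python
# role -> (visible columns, mandatory row filter); entries are hypothetical.
ROLE_POLICY = {
    "researcher": (["age", "diagnosis"], "consent_research = 1"),
    "billing":    (["name", "balance"],  "consent_billing = 1"),
}

def modify_query(role: str, requested_cols: list[str], table: str) -> str:
    """Rewrite a query under the role's policy: silently drop columns the
    role may not see and append the role's row-level consent predicate,
    rather than rejecting the whole query."""
    allowed, row_filter = ROLE_POLICY[role]
    cols = [c for c in requested_cols if c in allowed] or ["NULL"]
    return f"SELECT {', '.join(cols)} FROM {table} WHERE {row_filter}"

print(modify_query("researcher", ["name", "age", "diagnosis"], "patients"))
# SELECT age, diagnosis FROM patients WHERE consent_research = 1
```

The silent-filtering behavior shown here is also a known limitation of query modification (results change shape without explanation), which motivates the thesis's second, alternative filtering method.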
700

Challenging Policies That Do Not Play Fair: A Credential Relevancy Framework Using Trust Negotiation Ontologies

Leithead, Travis S. 29 August 2005 (has links) (PDF)
This thesis challenges the assumption that policies will "play fair" within trust negotiation. Policies that do not "play fair" contain requirements for authentication that are misleading, irrelevant, and/or incorrect, based on the current transaction context. To detect these unfair policies, trust negotiation ontologies provide the context to determine the relevancy of a given credential set for a particular negotiation. We propose a credential relevancy framework for use in trust negotiation that utilizes ontologies to process the set of all available credentials C and produce a subset of credentials C' relevant to the context of a given negotiation. This credential relevancy framework reveals the credentials inconsistent with the current negotiation and detects potentially malicious policies that request these credentials. It provides a general solution for detecting policies that do not "play fair," such as those used in credential phishing attacks, malformed policies, and malicious strategies. This thesis motivates the need for a credential relevancy framework, outlines considerations for designing and implementing it (including topics that require further research), and analyzes a prototype implementation. The prototype has the following two properties: first, it incurs less than 10% extra execution time compared to a baseline trust negotiation prototype (e.g., TrustBuilder); second, credential relevance determination does not compromise the desired goals of trust negotiation: transparent and automated authentication in open systems. Trust negotiation systems that incorporate a credential relevancy framework will thus be better able to defend against users that do not always "play fair."
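As a rough illustration of the C-to-C' filtering idea (a hedged sketch; the ontology entries, concept names, and negotiation contexts are hypothetical, not the thesis's framework):

```python
# credential type -> broader concept it proves; entries are hypothetical.
ONTOLOGY = {
    "student_id": "affiliation",
    "employee_badge": "affiliation",
    "credit_card": "payment",
    "ssn": "government_identity",
}

def relevant_credentials(credentials: set[str], context_concepts: set[str]) -> set[str]:
    """Return C', the subset of C whose ontology concepts fit the current
    negotiation context; a policy requesting anything outside C' is
    inconsistent with the context and can be flagged as suspicious."""
    return {c for c in credentials if ONTOLOGY.get(c) in context_concepts}

held = {"student_id", "credit_card", "ssn"}
# A library-access negotiation should only ever need proof of affiliation.
print(relevant_credentials(held, {"affiliation"}))  # {'student_id'}
# A policy demanding 'ssn' here falls outside C': a possible phishing attempt.
```

The thesis's actual framework derives relevance from trust negotiation ontologies rather than a flat lookup table; the sketch only conveys the shape of the C-to-C' computation.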
