1

Privacy-aware publication and utilization of healthcare data

Park, Yubin 28 October 2014 (has links)
Open access to health data can bring enormous social and economic benefits. However, such access can also lead to privacy breaches, which may result in discrimination in insurance and employment markets. Privacy is a subjective and contextual concept, so it should be interpreted from both systemic and information perspectives to clearly understand potential breaches and their consequences. This dissertation investigates three popular use cases of healthcare data: 1) synthetic data publication, 2) aggregate data utilization, and 3) privacy-aware API implementation. For each case, we develop statistical models that improve the privacy-utility Pareto frontier by leveraging a variety of machine learning techniques, such as information-theoretic privacy measures, Bayesian graphical models, non-parametric modeling, and low-rank factorization. This work shows that much utility can be extracted from health records while maintaining strong privacy guarantees and protecting sensitive health information. / text
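The dissertation's synthetic-data models build on Bayesian graphical models; as a much simpler illustration of what synthetic data publication means, the sketch below fits only per-field marginal distributions and samples from them independently. This is a deliberately naive baseline, not the dissertation's method, and the records and field names are hypothetical.

```python
import random
from collections import Counter

def synthesize(records, n, seed=0):
    """Sample n synthetic rows from the per-field marginal distributions.

    Naive baseline: each field is sampled independently, so cross-field
    correlations in the original data are not preserved.
    """
    rng = random.Random(seed)
    fields = list(records[0].keys())
    marginals = {f: Counter(r[f] for r in records) for f in fields}

    def draw(counter):
        values, weights = zip(*counter.items())
        return rng.choices(values, weights=weights, k=1)[0]

    return [{f: draw(marginals[f]) for f in fields} for _ in range(n)]

# Hypothetical, already de-identified toy records
records = [
    {"age_band": "30-39", "diagnosis": "flu"},
    {"age_band": "30-39", "diagnosis": "cold"},
    {"age_band": "40-49", "diagnosis": "flu"},
]
synthetic = synthesize(records, n=5)
```

A model that captures dependencies between fields (as graphical models do) would preserve more analytical utility at the same privacy level.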
2

Protect Data Privacy in E-Healthcare in Sweden

An, Nan January 2007 (has links)
Swedish healthcare has adopted ICT (information and communication technology) extensively and is a highly information-intensive domain. This thesis gives a brief description of the background of healthcare in Sweden and of ICT adoption in healthcare, introduces an information system security model, describes the technology and law concerning data privacy, and carries out a case study through questionnaire and interview.
3

Personal data protection maturity model for the micro financial sector in Peru

Garcia, Arturo, Calle, Luis, Raymundo, Carlos, Dominguez, Francisco, Moguerza, Javier M. 27 June 2018 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / The microfinancial sector is a strategic element in the economy of developing countries, since it facilitates the integration and development of all social classes and enables economic growth. The volume of data in sectors such as microfinance grows daily, resulting from the transactions and operations these companies carry out. Appropriate management of personal data privacy policies is therefore necessary; otherwise, organizations will fail to comply with personal data protection laws and regulations and will lack quality information for decision-making and process improvement. The present study proposes a personal data protection maturity model based on international privacy and information security standards, which also reveals personal data protection capabilities in organizations. Finally, the study proposes a diagnostic and tracing assessment tool, which was applied to five companies in the microfinancial sector; the results were analyzed to validate the model and to support the success of data protection initiatives. / Peer reviewed
4

Fine-Grained Anomaly Detection For In Depth Data Protection

Shagufta Mehnaz (9012230) 23 June 2020 (has links)
Data represent a key resource for all organizations we may think of. Thus, it is not surprising that data are the main target of a large variety of attacks. Security vulnerabilities and phishing attacks make it possible for malicious software to steal business- or privacy-sensitive data and to undermine data availability, as in recent ransomware attacks. Apart from external malicious parties, insider attacks also pose serious threats to organizations with sensitive information, e.g., hospitals holding patients' sensitive information. Access control mechanisms are not always able to prevent insiders from misusing or stealing data, as insiders often have data access permissions. Therefore, comprehensive solutions for data protection require combining access control mechanisms and other security techniques, such as encryption, with techniques for detecting anomalies in data accesses. In this thesis, we develop fine-grained anomaly detection techniques for ensuring in-depth protection of data from malicious software, specifically ransomware, and from malicious insiders. While anomaly detection techniques are very useful, in many cases the data used for anomaly detection are themselves very sensitive, e.g., health data shared with untrusted service providers for anomaly detection. The owners of such data would not share their sensitive data in plain text with an untrusted service provider, and this predicament undoubtedly hinders the desire of these individuals/organizations to become more data-driven. In this thesis, we have also built a privacy-preserving framework for real-time anomaly detection.
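The thesis develops fine-grained detectors for ransomware and insider misuse; as a minimal illustration of the anomaly-detection building block, the sketch below flags outliers in a series of, say, per-day record-access counts using a z-score rule. The data, threshold, and feature choice are hypothetical, and real insider-threat detectors are far more sophisticated.

```python
def zscore_anomalies(values, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical: nothing is anomalous
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical daily record-access counts for one user
accesses = [10, 11, 9, 10, 12, 100]
print(zscore_anomalies(accesses))  # → [5], the day with the access spike
```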
5

Relational Data Curation by Deduplication, Anonymization, and Diversification

Huang, Yu January 2020 (has links)
Enterprises acquire large amounts of data from a variety of sources with the goal of extracting valuable insights and enabling informed analysis. Unfortunately, organizations continue to be hindered by poor data quality as they wrangle with their data to extract value, since most real datasets are rarely error-free. Poor data quality is a pervasive problem spanning all industries, causing unreliable data analysis and costing billions of dollars. The large number of datasets, the pace of data acquisition, and the heterogeneity of data sources pose challenges towards achieving high-quality data. These challenges are further exacerbated by data privacy and data diversity requirements. In this thesis, we study and propose solutions to address data duplication, managing the trade-off between data cleaning and data privacy, and computing diverse data instances. In the first part of this thesis, we address the data duplication problem. We propose a duplication detection framework, which combines word embeddings with constraints among attributes to improve the accuracy of deduplication. We propose a set of constraint-based statistical features to capture the semantic relationship among attributes. We show that our techniques achieve comparable accuracy on real datasets. In the second part of this thesis, we study the problem of data privacy and data cleaning, and we present a Privacy-Aware data Cleaning-As-a-Service (PACAS) framework to protect privacy during the cleaning process. Our evaluation shows that PACAS safeguards semantically related sensitive values, and provides lower repair errors compared to existing privacy-aware cleaning techniques. In the third part of this thesis, we study the problem of finding a diverse anonymized data instance where diversity is measured via a set of diversity constraints, and propose an algorithm that seeks a k-anonymous relation with value suppression while satisfying given diversity constraints.
We conduct extensive experiments using real and synthetic data showing the effectiveness of our techniques, and improvement over existing baselines. / Thesis / Doctor of Philosophy (PhD)
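The third part of the abstract above relies on k-anonymity with value suppression. A minimal sketch of the k-anonymity property itself, assuming a toy relation with hypothetical column names and already-suppressed quasi-identifier values (this only checks the property; the thesis's contribution is an algorithm for finding such a relation under diversity constraints):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True iff every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Toy relation; '*' marks suppressed characters in quasi-identifier values
table = [
    {"age": "3*", "zip": "441**", "condition": "flu"},
    {"age": "3*", "zip": "441**", "condition": "cold"},
    {"age": "3*", "zip": "441**", "condition": "flu"},
    {"age": "4*", "zip": "442**", "condition": "asthma"},
]
print(is_k_anonymous(table, ["age", "zip"], k=3))  # → False: the last row is alone
```

Note that the first group above is 3-anonymous but not diverse: two of its three sensitive values coincide, which is exactly the kind of weakness diversity constraints address.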
6

Data-level privacy through data perturbation in distributed multi-application environments

de Souza, Tulio January 2016 (has links)
Wireless sensor networks once served mainly as monitoring tools for environmental purposes and animal tracking. This spectrum of applications, however, has grown dramatically in the past few years. Such evolution means that what used to be application-specific networks are now multi-application environments, often with federation capabilities. This shift results in a challenging environment for data privacy, mainly caused by the broadening spectrum of data access points and involved entities. This thesis first evaluates existing privacy-preserving data aggregation techniques to determine how suitable they are for providing data privacy in this more elaborate environment. That evaluation led to the design of the set difference attack, which exploits the fact that these techniques rely purely on data aggregation to achieve privacy; simulation shows that aggregation alone is not suitable for the task. It also indicates that some form of uncertainty is required to mitigate the attack. Another relevant finding is that the attack can also be effective against standalone networks, by exploiting the node availability factor. Uncertainty is achieved via differential privacy, which offers a strong and formal privacy guarantee through data perturbation. To make it suitable for a wireless sensor network environment, which mainly deals with time-series data, two new approaches are proposed. These have contrasting effects on utility and privacy levels, offering a flexible balance between privacy and data utility for sensed entities and data analysts/consumers. Lastly, this thesis proposes a framework to assist in the design of privacy-preserving data aggregation protocols that suit application needs while complying with desired privacy requirements.
The framework's evaluation compares and contrasts several scenarios to demonstrate the level of flexibility and effectiveness that the designed protocols can provide. Overall, this thesis demonstrates that data perturbation can be made significantly practical through the proposed framework. Although some problems remain, with further improvements to data correlation methods and better use of some intrinsic characteristics of such networks, the use of data perturbation may become a practical and efficient privacy preserving mechanism for wireless sensor networks.
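The perturbation mechanism the abstract above builds on is differential privacy's standard Laplace mechanism. A minimal sketch, assuming a numeric sensed series and a hypothetical sensitivity of 1.0 (note this naive version perturbs each reading independently, so the epsilon budget applies per reading; the thesis's time-series-aware approaches are more refined):

```python
import math
import random

def laplace_sample(scale, rng):
    """One Laplace(0, scale) draw via the inverse-CDF method."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_series(series, epsilon, sensitivity, seed=0):
    """Return a perturbed copy of a sensed series: Laplace noise per reading."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon  # noise grows as epsilon shrinks
    return [x + laplace_sample(scale, rng) for x in series]

readings = [21.3, 21.5, 22.0, 21.8]  # hypothetical temperature samples
noisy = perturb_series(readings, epsilon=0.5, sensitivity=1.0)
```

Lowering epsilon strengthens the privacy guarantee at the cost of noisier aggregates, which is exactly the privacy/utility balance the thesis makes tunable.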
7

Security Issues in Heterogeneous Data Federations

Leighton, Gregory Unknown Date
No description available.
8

Fundamental Limits in Data Privacy: From Privacy Measures to Economic Foundations

January 2016 (has links)
abstract: Data privacy is emerging as one of the most serious concerns of big data analytics, particularly with the growing use of personal data and the ever-improving capability of data analysis. This dissertation first investigates the relation between different privacy notions, and then puts the main focus on developing economic foundations for a market model of trading private data. The first part characterizes differential privacy, identifiability and mutual-information privacy by their privacy-distortion functions, i.e., the optimal achievable privacy level as a function of the maximum allowable distortion. The results show that these notions are fundamentally related and exhibit certain consistency: (1) The gap between the privacy-distortion functions of identifiability and differential privacy is upper bounded by a constant determined by the prior. (2) Identifiability and mutual-information privacy share the same optimal mechanism. (3) The mutual-information optimal mechanism satisfies differential privacy with a level at most a constant away from the optimal level. The second part studies a market model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The value of epsilon units of privacy is measured by the minimum payment such that an individual's equilibrium strategy is to report data in an epsilon-differentially private manner. For the setting with binary private data that represents individuals' knowledge about a common underlying state, asymptotically tight lower and upper bounds on the value of privacy are established as the number of individuals becomes large, and the payment-accuracy tradeoff for learning the state is obtained. The lower bound establishes the impossibility of buying epsilon units of privacy with a lower payment, and the upper bound is given by a designed reward mechanism.
When the individuals' valuations of privacy are unknown to the data collector, mechanisms with possible negative payments (aiming to penalize individuals with "unacceptably" high privacy valuations) are designed to fulfill the accuracy goal and drive the total payment to zero. For the setting with binary private data following a general joint probability distribution with some symmetry, asymptotically optimal mechanisms are designed in the high data quality regime. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
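For readers unfamiliar with the notions compared above, the standard definitions the abstract relies on can be written as follows (notation ours, not the dissertation's):

```latex
% epsilon-differential privacy: for all neighboring datasets x, x'
% and all measurable output sets S of mechanism M,
\Pr[M(x) \in S] \;\le\; e^{\epsilon} \,\Pr[M(x') \in S]

% Privacy-distortion function: the best privacy level achievable by any
% mechanism whose expected distortion stays within a budget D
\epsilon^{*}(D) \;=\; \inf_{M \,:\; \mathbb{E}[d(X, \hat{X})] \le D} \epsilon(M)
```

Each privacy notion (differential privacy, identifiability, mutual-information privacy) yields its own such function, and the dissertation's first-part results bound the gaps between them.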
10

Data Centric Defenses for Privacy Attacks

Abhyankar, Nikhil Suhas 14 August 2023 (has links)
Recent research shows that machine learning algorithms are highly susceptible to attacks that try to extract sensitive information about the data used in model training. These attacks, called privacy attacks, exploit the model training process. Contemporary defense techniques make alterations to the training algorithm. Such defenses are computationally expensive, cause a noticeable privacy-utility tradeoff, and require control over the training process. This thesis presents a data-centric approach that uses data augmentations to mitigate privacy attacks. We present privacy-focused data augmentations that change the sensitive data submitted to the model trainer. Compared to traditional defenses, our method gives the individual data owner more control to protect their private data. The defense is model-agnostic and does not require the data owner to have any control over the model training. Privacy-preserving augmentations are implemented for two attacks, namely membership inference and model inversion, using two distinct techniques. While the proposed augmentations offer a better privacy-utility tradeoff on CIFAR-10 for membership inference, they reduce the reconstruction rate to ≤ 1% while reducing the classification accuracy by only 2% against model inversion attacks. This is the first attempt to defend against model inversion and membership inference attacks using decentralized privacy protection. / Master of Science / Privacy attacks are threats that aim to extract sensitive information about the data used to train machine learning models. As machine learning is used extensively for many applications, models have access to private information such as financial records or medical history, depending on the application. It has been observed that machine learning models can leak the information they contain. As models tend to 'memorize' training data to some extent, even removing the data from the training set cannot prevent privacy leakage. 
As a result, the research community has focused its attention on developing defense techniques to prevent this information leakage. However, existing defenses rely heavily on making alterations to the way a machine learning model is trained. This approach is termed a model-centric approach, wherein the model owner is responsible for making changes to the model algorithm to preserve data privacy. By doing this, model performance is degraded while data privacy is upheld. Our work introduces the first data-centric defense, which provides the data owner with the tools to protect the data. We demonstrate the effectiveness of the proposed defense in providing protection while ensuring that model performance is maintained to a great extent.
