1

The Effect of 5-anonymity on a classifier based on neural network that is applied to the adult dataset

Paulson, Jörgen January 2019 (has links)
Privacy issues arising when data is made public have become more pressing with the introduction of the GDPR. To limit the harm caused when data becomes public, whether intentionally or through an event such as a security breach, datasets can be anonymized. In this report, the impact of applying 5-anonymity to the adult dataset was investigated for a neural-network classifier predicting whether a person's income exceeds $50,000, measured using precision, recall and accuracy. The classifier was trained on the non-anonymized data, the anonymized data, and the non-anonymized data with the attributes that were suppressed during anonymization removed. On average, accuracy dropped from 0.82 to 0.76 and precision from 0.58 to 0.50, while recall increased from 0.82 to 0.87. The averages and distributions support the estimate that the majority of the performance impact of anonymization in this case comes from the suppression of attributes.
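The abstract does not list which adult-dataset attributes were treated as quasi-identifiers, so the selection below is an assumption; the check itself is just the standard definition of k-anonymity. A minimal pandas sketch verifying that every combination of quasi-identifier values occurs at least k = 5 times:

    import pandas as pd

    # Hypothetical quasi-identifier choice; the thesis does not say which
    # attributes were generalized or suppressed.
    QUASI_IDENTIFIERS = ["age", "education", "marital-status", "native-country"]

    def is_k_anonymous(df: pd.DataFrame, quasi_identifiers, k: int = 5) -> bool:
        # k-anonymity holds when every equivalence class (rows sharing the
        # same quasi-identifier values) contains at least k records.
        group_sizes = df.groupby(quasi_identifiers).size()
        return bool((group_sizes >= k).all())

Running such a check on each of the three training variants makes explicit which version of the data the classifier is actually seeing.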
2

Adaptable Privacy-preserving Model

Brown, Emily Elizabeth 01 January 2019 (has links)
Current data privacy-preservation models lack the ability to aid data decision makers in processing datasets for publication. The proposed algorithm allows data processors to simply provide a dataset and state their criteria, and it recommends an xk-anonymity approach. Additionally, the algorithm can be tailored to a preference and reports the precision range and maximum data loss associated with the recommended approach. This dissertation report outlines the research’s goal, the barriers that were overcome, and the limitations of the work’s scope. It highlights the results of each experiment conducted and how they influenced the design of the final adaptable algorithm. The xk-anonymity model builds upon two foundational privacy models, k-anonymity and l-diversity. Overall, the study offers many takeaways about data and the power it holds within a dataset.
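The abstract does not define the xk-anonymity model itself, so the following is only a hypothetical sketch of the general shape of such a recommender: score candidate privacy parameters against a caller-stated data-loss tolerance and return the strongest setting that satisfies it. All names and the loss function are assumptions, not the dissertation's algorithm.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        k: int
        data_loss: float  # fraction of cells generalized or suppressed, in [0, 1]

    def recommend_k(candidates, estimate_data_loss, max_loss):
        # Keep the largest (strongest) k whose estimated data loss stays
        # within the caller's tolerance; estimate_data_loss is a
        # caller-supplied function mapping k to a loss in [0, 1].
        best = None
        for k in sorted(candidates):
            loss = estimate_data_loss(k)
            if loss <= max_loss:
                best = Recommendation(k, loss)
        return best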
3

Application of anonymization techniques (k-anonymity, generalization, and suppression) on an employee database: Use case – Swedish municipality

Oyedele, Babatunde January 2023 (has links)
This thesis explores data anonymization techniques within the context of a Swedish municipality, focusing on safeguarding data privacy, enhancing decision-making, and assessing re-identification risks. The investigation, grounded in a literature review and an experimental study, employed the ARX anonymization tool on a sample municipal employee database. Three distinct human resource management (HRM) datasets, analogous to the employee database, were created and anonymized using the ARX tool to ascertain the efficacy and re-identification risks of the employed techniques. A key finding is an inverse relationship between dataset size and re-identification risk: data utility after anonymization improves with larger datasets. This suggests that larger datasets are more conducive to anonymization, motivating organizations to engage in anonymization efforts for internal analytics and open data publishing. The study contributes to the Information Security discourse, emphasizing the criticality of data anonymization in preserving privacy and ensuring data utility in the era of big data. The research was constrained by privacy considerations, which necessitated the use of similar, rather than actual, datasets; this may affect the results and limits how fully the findings represent real municipal data. The thesis primarily addresses HRM applications, leaving other areas of municipal or organizational governance as scope for future research. In conclusion, it underscores the necessity of data anonymization in the face of tightening regulations and increasingly sophisticated privacy breaches. This positions the organization strategically for compliance, minimizes data-breach risks, and upholds anonymization as a fundamental principle of Information Security.
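A common way to quantify the kind of re-identification risk that tools such as ARX report is the prosecutor model, in which a record's risk is the reciprocal of the size of its equivalence class over the quasi-identifiers. The sketch below, with an assumed pandas layout, makes the size/risk relationship concrete: as a dataset grows, equivalence classes tend to grow, so both average and worst-case risk tend to fall.

    import pandas as pd

    def prosecutor_risk(df, quasi_identifiers):
        # Per-record prosecutor re-identification risk: 1 / size of the
        # record's equivalence class over the quasi-identifiers.
        sizes = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
        return 1.0 / sizes

    def risk_summary(df, quasi_identifiers):
        risks = prosecutor_risk(df, quasi_identifiers)
        return {"average_risk": float(risks.mean()),
                "max_risk": float(risks.max())}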
4

Anonymization of directory-structured sensitive data / Anonymisering av katalogstrukturerad känslig data

Folkesson, Carl January 2019 (has links)
Data anonymization is a relevant and important field within data privacy, which tries to find a good balance between utility and privacy in data. The field is especially relevant since the GDPR came into force, because the GDPR does not regulate anonymous data. This thesis focuses on anonymization of directory-structured data, that is, data organized into a tree of directories. In the thesis, four of the most common models for anonymization of tabular data, k-anonymity, ℓ-diversity, t-closeness and differential privacy, are adapted for anonymization of directory-structured data. This adaptation is done by creating three different approaches for anonymizing directory-structured data: SingleTable, DirectoryWise and RecursiveDirectoryWise. These models and approaches are compared and evaluated using five metrics and three attack scenarios. The results show that there is always a trade-off between utility and privacy when anonymizing data. In particular, it was concluded that the differential privacy model with the RecursiveDirectoryWise approach gives the highest privacy, but also the highest information loss. Conversely, the k-anonymity model with the SingleTable approach, or the t-closeness model with the DirectoryWise approach, gives the lowest information loss, but also the lowest privacy. The differential privacy model and the RecursiveDirectoryWise approach were also shown to give the best protection against the chosen attacks. Finally, it was concluded that the differential privacy model with the RecursiveDirectoryWise approach is the most suitable combination for complying with the GDPR when anonymizing directory-structured data.
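The thesis's adaptations of these models to directory trees are not reproduced in the abstract. As a reference point, here is a minimal sketch of the standard building block behind the differential privacy model, the Laplace mechanism for a counting query; the epsilon parameter makes the privacy/utility trade-off explicit.

    import numpy as np

    def laplace_count(true_count, epsilon, rng=None):
        # A counting query has sensitivity 1 (one person changes the count
        # by at most 1), so adding Laplace(1/epsilon) noise yields
        # epsilon-differential privacy for that query.
        rng = rng or np.random.default_rng()
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    # Smaller epsilon = stronger privacy, noisier answer.
    print(laplace_count(120, epsilon=1.0))
    print(laplace_count(120, epsilon=0.1))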
5

Privacy and utility assessment within statistical data bases / Mesure de la vie privée et de l’utilité des données dans les bases de données statistiques

Sondeck, Louis-Philippe 15 December 2017 (has links)
Personal data promise relevant improvements in almost every economic sector thanks to all the knowledge that can be extracted from them. As proof, some of the biggest companies in the world, Google, Amazon, Facebook and Apple (GAFA), rely primarily on this resource to provide their services. However, although personal data can be very useful for improving and developing services, they can also, intentionally or not, harm the privacy of the people concerned. Indeed, several studies describe attacks carried out using company data even though that data had been anonymized. It therefore becomes necessary to provide reliable techniques for protecting individuals' privacy while guaranteeing the utility of the data for services. To this end, Europe has adopted a new regulation (the General Data Protection Regulation) (EU, 2016) that aims to protect the personal data of European citizens. However, the regulation addresses only one side of the problem, as it focuses on privacy, whereas the objective is to find the best trade-off between privacy and data utility. Indeed, privacy and data utility are very often inversely proportional: the more privacy the data guarantees, the less useful information remains in it. The most widely used technique for addressing this trade-off is data anonymization. In the literature, anonymization refers either to anonymization mechanisms or to anonymization metrics. While anonymization mechanisms are useful for anonymizing data, anonymization metrics are necessary to validate whether the trade-off between privacy and utility has been reached. However, existing metrics have several flaws, including a lack of measurement accuracy and difficulty of implementation. Moreover, existing metrics measure either privacy or data utility, but not both simultaneously, which complicates evaluating the trade-off between them. In this thesis, we propose a novel approach for measuring both privacy and data utility, called the Discrimination Rate (DR). The DR is an information-theoretic metric that is practical and allows fine-grained measurements. The DR measures the capability of attributes to refine a set of individuals, with values between 0 and 1; the best refinement leads to a DR of 1. For example, an identifier has a DR equal to 1, since it completely refines a set of individuals. Using the DR, we precisely assess and compare anonymization mechanisms in terms of utility and privacy (both different instantiations of the same mechanism and different mechanisms). Moreover, thanks to the DR, we provide formal definitions of identifiers, also called personally identifying information; this is recognized as one of the crucial problems in legal texts dealing with privacy protection. The DR thus offers an answer to both companies and regulators regarding the challenges raised by personal data protection.
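The abstract does not give the DR's exact formula. One plausible information-theoretic reading, assumed for the sketch below, is the normalized reduction in entropy of a uniform respondent set given an attribute, DR(A) = (H(X) - H(X|A)) / H(X): an identifier yields H(X|A) = 0 and hence DR = 1, while a constant attribute yields DR = 0. This is a hypothetical reconstruction, not the thesis's definition.

    import numpy as np
    import pandas as pd

    def discrimination_rate(df, attribute):
        # Assumes one row per respondent, all respondents equally likely.
        n = len(df)
        h_x = np.log2(n)  # entropy of the uniform respondent set
        # H(X|A): each attribute-value class of size m leaves log2(m)
        # bits of uncertainty about which respondent is meant.
        sizes = df[attribute].value_counts().to_numpy()
        h_x_given_a = float(np.sum((sizes / n) * np.log2(sizes)))
        return (h_x - h_x_given_a) / h_x if h_x > 0 else 0.0

    # An identifier refines completely (DR = 1); a constant does not (DR = 0).
    df = pd.DataFrame({"id": [1, 2, 3, 4], "country": ["SE"] * 4})
    print(discrimination_rate(df, "id"), discrimination_rate(df, "country"))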
6

An Improved Utility Driven Approach Towards K-Anonymity Using Data Constraint Rules

Morton, Stuart Michael 14 August 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / As medical data continues to transition to electronic formats, opportunities arise for researchers to use this microdata to discover patterns and increase knowledge that can improve patient care. Now more than ever, it is critical to protect the identities of the patients contained in these databases. Even after removing obvious “identifier” attributes, such as social security numbers or first and last names, that clearly identify a specific person, it is possible to join “quasi-identifier” attributes from two or more publicly available databases to identify individuals. K-anonymity is an approach that has been used to ensure that no one individual can be distinguished within a group of at least k individuals. However, the majority of proposed approaches implementing k-anonymity have focused on improving the efficiency of the algorithms; less emphasis has been placed on ensuring the “utility” of the anonymized data from a researcher’s perspective. We propose a new data utility measurement, called the research value (RV), which extends existing utility measurements by employing data constraint rules designed to improve the effectiveness of queries against the anonymized data. To anonymize a given raw dataset, two algorithms are proposed that use predefined generalizations provided by the data content expert, together with their corresponding research values, to assess an attribute’s data utility as the data is generalized to ensure k-anonymity. In addition, an automated algorithm is presented that uses clustering and the RV to anonymize the dataset. All of the proposed algorithms scale efficiently when the number of attributes in a dataset is large.
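The RV measure itself is not specified in the abstract, so the utility scores below are invented for illustration. The sketch shows the familiar ingredient the abstract describes: walking a predefined generalization hierarchy for an attribute, from most to least specific, until the table satisfies k-anonymity, and reporting the utility of the level that was needed.

    import pandas as pd

    # Hypothetical hierarchy for an age attribute; the per-level utility
    # scores stand in for expert-assigned research values (invented here).
    AGE_HIERARCHY = [
        (lambda a: a, 1.00),                                # level 0: raw age
        (lambda a: f"{(a // 10) * 10}s", 0.60),             # level 1: decade
        (lambda a: "adult" if a >= 18 else "minor", 0.25),  # level 2: coarse
    ]

    def generalize_until_k(df, column, hierarchy, quasi_identifiers, k):
        # Try each level, most specific first; return the first table that
        # is k-anonymous over the quasi-identifiers (which should include
        # the generalized column), along with that level's utility score.
        for level, (generalize, utility) in enumerate(hierarchy):
            out = df.copy()
            out[column] = df[column].map(generalize)
            if out.groupby(quasi_identifiers).size().min() >= k:
                return out, level, utility
        return None, None, 0.0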
7

Privacy preserving software engineering for data driven development

Tongay, Karan Naresh 14 December 2020 (has links)
The exponential rise in the generation of data has introduced many new areas of research, including data science, data engineering, machine learning and artificial intelligence, to name a few. It has become important for any industry or organization to precisely understand and analyze its data in order to extract value from it. The value of data can only be realized when it is put into practice in the real world, and the most common approach to doing this in the technology industry is through software engineering. This brings into the picture the area of privacy-oriented software engineering, alongside the rise of data protection regulations such as the GDPR (General Data Protection Regulation) and the PDPA (Personal Data Protection Act). Many organizations, governments and companies that have accumulated huge amounts of data over time may conveniently use the data to increase business value, but at the same time the privacy aspects associated with the sensitivity of the data, especially the personal information of the people in it, can easily be circumvented when designing a software engineering model for these types of applications. Even before the software engineering phase of any data processing application, there can often be one or many data sharing agreements or privacy policies in place. Every organization may have its own way of maintaining data privacy practices for data-driven development. There is a need to generalize or categorize these approaches into tactics that can be referenced by other practitioners trying to integrate data privacy practices into their development. This qualitative study provides an understanding of various approaches and tactics that are being practised within the industry for privacy-preserving data science in software engineering, and discusses a tool for data usage monitoring to identify unethical data access. Finally, we studied strategies for secure data publishing and conducted experiments using sample data to demonstrate how these techniques can help secure private data before publishing. / Graduate
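The monitoring tool itself is not described in the abstract. As a hypothetical illustration of data usage monitoring, the sketch below wraps column access in a check against the columns permitted for a declared purpose (as a data-sharing agreement might state) and logs anything outside it; all names are invented.

    import logging
    from typing import Iterable

    logging.basicConfig(level=logging.WARNING)
    log = logging.getLogger("data-usage-monitor")

    class MonitoredDataset:
        # Column reads are checked against the columns permitted by a
        # declared purpose; violations are logged for later review.
        def __init__(self, data: dict, allowed: Iterable[str], purpose: str):
            self._data = data
            self._allowed = set(allowed)
            self._purpose = purpose

        def column(self, name: str):
            if name not in self._allowed:
                log.warning("access to %r outside declared purpose %r",
                            name, self._purpose)
            return self._data[name]

    # Usage: only 'age' is permitted for the 'cohort-stats' purpose.
    ds = MonitoredDataset({"age": [34, 51], "ssn": ["x", "y"]},
                          allowed=["age"], purpose="cohort-stats")
    ds.column("ssn")  # logged as a potential policy violation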
