  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Rychlý, škálovatelný, a DoS-rezistentní proof-of-stake konsensuální protokol založen na anonymizační vrstvě / Fast, Scalable and DoS-Resistant Proof-of-Stake Consensus Protocol Based on an Anonymization Layer

Tamaškovič, Marek January 2021 (has links)
In this thesis, we summarize current research on protocols from the Proof-of-Stake family, such as Algorand, Tendermint, and LaKSA. We analyzed their functionality as well as their shortcomings. As part of this research, we also implemented a new Proof-of-Stake protocol that addresses the identified issues of throughput, scalability, and security.
22

Usability heuristics for fast crime data anonymization in resource-constrained contexts

Sakpere, Aderonke Busayo January 2018 (has links)
This thesis considers the case of mobile crime-reporting systems, which have emerged as an effective and efficient data-collection method in low- and middle-income countries. Analyzing this data can help in addressing crime. Since law enforcement agencies in resource-constrained contexts typically do not have the expertise to handle these tasks, a cost-effective strategy is to outsource the data analytics to third-party service providers. However, because of the sensitivity of the data, it is essential to consider privacy. More specifically, this thesis considers the issue of finding computationally lightweight solutions that protect the data even from an "honest-but-curious" service provider, while at the same time generating datasets that can be queried efficiently and reliably. The thesis offers a three-pronged solution. The first prong is a mobile application that facilitates crime reporting in a usable, secure and privacy-preserving manner. The second is a streaming data anonymization algorithm that processes reported data based on its occurrence rate rather than at a preset time on a static repository. The third applies user privacy preferences when creating anonymized datasets; taking these preferences into account improves the efficiency of the anonymization process, which is beneficial for fast data anonymization. Results from the prototype implementation and usability tests indicate that a usable and covert crime-reporting application encourages users to report crime occurrences. Anonymizing streaming data contributes to faster crime-resolution times, and user privacy preferences help relax privacy constraints, which makes the data more usable from the querying perspective. This research presents considerable evidence that a three-pronged solution to anonymity in crime reporting in a resource-constrained environment is promising. It can further help law enforcement agencies partner with third parties to derive useful crime-pattern knowledge without infringing on users' privacy. In the future, this research can be extended to other low- and middle-income countries.
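As a rough illustration of the second prong, the sketch below shows a generic buffer-based streaming k-anonymization loop: reports accumulate until an equivalence class reaches size k (or the buffer fills), after which quasi-identifiers are generalized or suppressed before release. The field names, generalization rules, and thresholds are hypothetical and are not taken from the thesis.

```python
from collections import defaultdict

# Hypothetical fields, generalization rules, and thresholds; a generic
# buffer-based sketch, not the thesis's occurrence-rate algorithm.
K = 3              # minimum equivalence-class size before release
MAX_BUFFERED = 20  # flush threshold standing in for the occurrence-rate trigger

def generalize(report):
    """Coarsen quasi-identifiers: exact age -> 10-year band, suburb -> city."""
    return (report["age"] // 10 * 10, report["location"].split("/")[0])

buffered = defaultdict(list)   # equivalence class -> waiting reports
released = []                  # anonymized records handed to the analyst

def process(report):
    key = generalize(report)
    buffered[key].append(report)
    if len(buffered[key]) >= K:                       # class is k-anonymous
        for r in buffered.pop(key):
            released.append({"age_band": key[0], "city": key[1],
                             "crime": r["crime"]})
    elif sum(len(v) for v in buffered.values()) >= MAX_BUFFERED:
        for k_, reports in list(buffered.items()):    # suppress stragglers
            for r in reports:
                released.append({"age_band": "*", "city": "*",
                                 "crime": r["crime"]})
            del buffered[k_]

for rep in [{"age": 23, "location": "CapeTown/Claremont", "crime": "theft"},
            {"age": 27, "location": "CapeTown/Wynberg",   "crime": "assault"},
            {"age": 21, "location": "CapeTown/Claremont", "crime": "robbery"}]:
    process(rep)
print(released)   # all three reports released as one 3-anonymous class
```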
23

Named-entity recognition with BERT for anonymization of medical records

Bridal, Olle January 2021 (has links)
Sharing data is an important part of scientific progress in many fields. In the largely deep-learning-dominated field of natural language processing, textual resources are in high demand. In certain domains, such as medical records, data sharing is limited by ethical and legal restrictions and therefore requires anonymization. Manual anonymization is tedious and expensive, so automated anonymization is of great value. Since medical records consist of unstructured text, pieces of sensitive information have to be identified in order to be masked for anonymization. Named-entity recognition (NER) is the subtask of information extraction in which named entities, such as person names or locations, are identified and categorized. Recently, models that leverage unsupervised training on large quantities of unlabeled data have performed impressively on the NER task, which makes them promising for the anonymization problem. In this study, a small set of medical records was annotated with named-entity tags. Because no training data was available, a BERT model already fine-tuned for NER was evaluated on this set. The aim was to find out how well the model performs NER on medical records, and to explore the possibility of using it to anonymize medical records. The most positive result was that the model identified all person names in the dataset. The average accuracy across all entity types was, however, relatively low. It is discussed that the success in identifying person names shows promise for the model's application to anonymization. However, because the overall accuracy is significantly worse than that of models fine-tuned on domain-specific data, it is suggested that there might be better methods for anonymization in the absence of relevant training data.
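A minimal sketch of how such an off-the-shelf fine-tuned NER model can be turned into a masking step is shown below; it uses the Hugging Face pipeline API, and the model name and placeholder tag are assumptions rather than the setup used in the thesis.

```python
from transformers import pipeline

# Assumed model name; any token-classification model fine-tuned for NER works.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def anonymize(text, mask_labels=("PER",), threshold=0.5):
    """Replace detected person-name spans with a placeholder tag."""
    pieces, last = [], 0
    for ent in sorted(ner(text), key=lambda e: e["start"]):
        if ent["entity_group"] in mask_labels and ent["score"] >= threshold:
            pieces.append(text[last:ent["start"]])
            pieces.append("[PERSON]")
            last = ent["end"]
    pieces.append(text[last:])
    return "".join(pieces)

print(anonymize("Patient John Smith was seen by Dr. Anna Berg on May 3."))
# e.g. "Patient [PERSON] was seen by Dr. [PERSON] on May 3."
```

Extending `mask_labels` to locations and organizations would give stricter de-identification, at the cost of more over-masking.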
24

Apprentissage automatique de fonctions d'anonymisation pour les graphes et les graphes dynamiques / Automatic Learning of Anonymization for Graphs and Dynamic Graphs

Maag, Maria Coralia Laura 08 April 2015 (has links)
Data privacy is a major problem that has to be considered before releasing datasets to the public, or even to a partner company that will compute statistics on or deeply analyze these data. Privacy is ensured by performing data anonymization, as required by legislation. In this context, many different anonymization techniques have been proposed in the literature. These techniques are difficult to use in a general setting where attacks can be of different types and where the measures to preserve are not known to the anonymizer, so generic methods able to adapt to different situations are desirable. We address the privacy problem for graph data that needs, for different reasons, to be made publicly available; this corresponds to the anonymized graph data publishing problem. We take the perspective of an anonymizer who does not have access to the methods used to analyze the data. A generic methodology is proposed, based on machine learning, to obtain an anonymization function directly from a set of training data, so as to optimize the tradeoff between privacy risk and utility loss. The method thus yields a good anonymization procedure for a large class of attacks and for any set of characteristics to preserve. The methodology is instantiated for simple graphs and for complex timestamped graphs. A tool implementing the method has been developed and experimented with successfully on real datasets coming from Twitter, Enron, and Amazon. Results are compared with baseline methods, and it is shown that the proposed method is generic and can automatically adapt itself to different anonymization contexts.
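The privacy-risk/utility-loss objective at the heart of this approach can be illustrated with a deliberately simplified sketch: here the "anonymization function" collapses to a single edge-rewiring rate, the attack is re-identification by node degree, and utility is edge preservation. These are illustrative assumptions only and do not reflect the learning machinery of the thesis.

```python
import random

def rewire(edges, n, p, rng):
    """Replace each edge with a random one with probability p (the 'anonymizer')."""
    out = set()
    for u, v in edges:
        out.add(tuple(sorted(rng.sample(range(n), 2))) if rng.random() < p else (u, v))
    return out

def degree_risk(edges, n):
    """Fraction of nodes re-identifiable by a unique degree (the 'attack')."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1 for d in deg if deg.count(d) == 1) / n

def utility_loss(original, anonymized):
    """Fraction of original edges no longer present (the 'utility' measure)."""
    return 1 - len(original & anonymized) / len(original)

rng = random.Random(0)
n = 50
original = {tuple(sorted(rng.sample(range(n), 2))) for _ in range(120)}
lam = 0.5   # relative weight of utility loss against privacy risk
scores = []
for p in (0.0, 0.1, 0.2, 0.3, 0.5):
    anon = rewire(original, n, p, random.Random(1))
    scores.append((degree_risk(anon, n) + lam * utility_loss(original, anon), p))
print("best rewiring rate:", min(scores)[1])
```

In the thesis's setting, the grid search over a single rate is replaced by a learned anonymization function optimized over training data.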
25

Adaptable Privacy-preserving Model

Brown, Emily Elizabeth 01 January 2019 (has links)
Current data privacy-preservation models lack the ability to aid data decision makers in processing datasets for publication. The proposed algorithm allows data processors to simply provide a dataset and state their criteria in order to receive a recommended xk-anonymity approach. Additionally, the algorithm can be tailored to a preference and reports the precision range and maximum data loss associated with the recommended approach. This dissertation report outlined the research goal, the barriers that were overcome, and the limitations of the work's scope. It highlighted the results from each experiment conducted and how they influenced the creation of the final adaptable algorithm. The xk-anonymity model builds upon two foundational privacy models, k-anonymity and l-diversity. Overall, this study offers many takeaways about data and its power within a dataset.
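For readers unfamiliar with the two foundations named above, the short sketch below shows how k-anonymity and l-diversity are measured on a released table; the column names and data are illustrative only, and the actual xk-anonymity recommendation logic is not reproduced here.

```python
import pandas as pd

# Illustrative table: two quasi-identifiers plus one sensitive attribute.
df = pd.DataFrame({
    "age_band":   ["20-29", "20-29", "20-29", "30-39", "30-39"],
    "zip_prefix": ["021",   "021",   "021",   "040",   "040"],
    "diagnosis":  ["flu",   "cold",  "flu",   "flu",   "asthma"],
})
quasi_identifiers = ["age_band", "zip_prefix"]
sensitive = "diagnosis"

groups = df.groupby(quasi_identifiers)
k = groups.size().min()                 # smallest equivalence class -> k-anonymity
l = groups[sensitive].nunique().min()   # least diverse class -> l-diversity

print(f"release is {k}-anonymous and {l}-diverse")
```

A recommender in the spirit of the abstract would compare measures like these, together with estimated precision range and data loss, against the processor's stated criteria.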
26

Privacy Aware Smart Surveillance

Shirima, Emil 18 July 2019 (has links)
No description available.
27

Anonymization of Sensitive Data through Cryptography

Holm, Isac, Dahl, Johan January 2023 (has links)
In today's interconnected digital landscape, the protection of sensitive information is of great importance. As a result, the field of cryptography plays a vital role in ensuring individuals' anonymity and data integrity. In this context, this thesis presents a comprehensive analysis of symmetric encryption algorithms, specifically focusing on the Advanced Encryption Standard (AES) and Camellia. By investigating the performance aspects of these algorithms, including encryption time, decryption time, and ciphertext size, the goal is to provide valuable insights for selecting suitable cryptographic solutions. The findings indicate that while there is a difference in performance between the algorithms, the disparity is not substantial in practical terms. Both AES and Camellia, as well as their larger key-size alternatives, demonstrated comparable performance, with AES128 showing marginally faster encryption time. The study's implementation also involves encrypting a data set with sensitive information on students. It encrypts the school classes with separate keys and assigns roles to users, enabling access control based on user roles. The implemented solution successfully addressed the problem of role-based access control and encryption of unique identifiers, as verified through the verification and validation method. The implications of this study extend to industries and society, where cryptography plays a vital role in protecting individuals' anonymity and data integrity. The results presented in this paper can serve as a valuable reference for selecting suitable cryptographic algorithms for various systems and applications, particularly for anonymization of usernames or short, unique identifiers. However, it is important to note that the experiment primarily focused on small data sets, and further investigations may yield different results for larger data sets.
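The kind of timing comparison described above can be reproduced roughly with the Python `cryptography` package, as in the sketch below; the mode of operation, message size, and iteration count are assumptions and not the thesis's exact benchmark setup.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def mean_encrypt_time(algorithm, key, data, iters=50):
    """Average wall-clock time to encrypt `data` once in CTR mode."""
    nonce = os.urandom(16)
    start = time.perf_counter()
    for _ in range(iters):
        encryptor = Cipher(algorithm(key), modes.CTR(nonce)).encryptor()
        ciphertext = encryptor.update(data) + encryptor.finalize()
    return (time.perf_counter() - start) / iters

data = os.urandom(1 << 20)   # 1 MiB of dummy identifiers
key = os.urandom(16)         # 128-bit key for both ciphers
print("AES-128      :", mean_encrypt_time(algorithms.AES, key, data))
print("Camellia-128 :", mean_encrypt_time(algorithms.Camellia, key, data))
```

Role-based access in the spirit of the study would simply use a different key per school class and hand each role only the keys it is entitled to.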
28

Improving the accuracy of statistics used in de-identification and model validation (via the concordance statistic) pertaining to time-to-event data

Caetano, Samantha-Jo January 2020 (has links)
Time-to-event data is very common in medical research. Thus, clinicians and patients need analysis of this data to be accurate, as it is often used to interpret disease screening results, inform treatment decisions, and identify at-risk patient groups (e.g., groups defined by sex, race, or gene expression). This thesis tackles three statistical issues pertaining to time-to-event data. The first issue was incurred from an Institute for Clinical and Evaluative Sciences lung cancer registry data set, which was de-identified by censoring patients at an earlier date. This resulted in an underestimate of the observed times of censored patients. Five methods were proposed to account for the underestimation incurred by de-identification. A subsequent simulation study was conducted to compare the effectiveness of each method in reducing bias and mean squared error, as well as improving coverage probabilities, of four different KM estimates. The simulation results demonstrated that situations with relatively large numbers of censored patients required methodology with larger perturbation. In these scenarios, the fourth proposed method (which perturbed censored times such that they were censored in the final year of study) yielded estimates with the smallest bias, smallest mean squared error, and largest coverage probability. Alternatively, when there were smaller numbers of censored patients, any manipulation of the altered data set worsened the accuracy of the estimates. The second issue arises when investigating model validation via the concordance (c) statistic. Specifically, the c-statistic is intended for measuring the accuracy of statistical models which assess the risk associated with a binary outcome. The c-statistic estimates the proportion of patient pairs in which the patient with the higher predicted risk experienced the event. The definition of a c-statistic cannot be uniquely extended to time-to-event outcomes, thus many proposals have been made. The second project developed a parametric c-statistic which assumes the true survival times are exponentially distributed, invoking the memoryless property. A simulation study was conducted which included a comparative analysis of two other time-to-event c-statistics. Three different definitions of concordance in the time-to-event setting were compared, as were three different c-statistics. The c-statistic developed by the authors yielded the smallest bias when censoring is present in the data, even when the exponential parametric assumption does not hold. The c-statistic developed by the authors appears to be the most robust to censored data. Thus, it is recommended to use this c-statistic to validate prediction models applied to censored data. The third project in this thesis developed and assessed the appropriateness of an empirical time-to-event c-statistic derived by estimating the survival times of censored patients via the EM algorithm. A simulation study was conducted for various sample sizes, censoring levels and correlation rates. A non-parametric bootstrap was employed, and the mean and standard error of the bias of four different time-to-event c-statistics were compared, including the empirical EM c-statistic developed by the authors. The newly developed c-statistic yielded the smallest mean bias and standard error in all simulated scenarios. The c-statistic developed by the authors appears to be the most appropriate when estimating concordance of a time-to-event model.
Thus, it is recommended to use this c-statistic to validate prediction models applied to censored data. / Thesis / Doctor of Philosophy (PhD)
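For reference, the conventional (Harrell-type) time-to-event c-statistic against which such proposals are usually benchmarked can be computed as in the sketch below; the toy data are made up, and the thesis's parametric and EM-based variants are not implemented here.

```python
def harrell_c(times, events, risks):
    """times: observed times; events: 1 = event, 0 = censored; risks: predicted risk."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had the event before
            # patient j's observed (possibly censored) time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

times  = [2.0, 5.0, 3.0, 8.0, 4.0]
events = [1,   0,   1,   1,   0]     # 0 marks a censored patient
risks  = [0.9, 0.3, 0.7, 0.2, 0.5]   # higher risk should mean earlier event
print("c-statistic:", round(harrell_c(times, events, risks), 3))
```

Censored patients contribute only as the later member of a pair, which is why heavy censoring biases this estimator and motivates the parametric and EM-based corrections studied in the thesis.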
29

De-Anonymization Attack Anatomy and Analysis of Ohio Nursing Workforce Data Anonymization

Miracle, Jacob M. January 2016 (has links)
No description available.
30

An anonymizable entity finder in judicial decisions

Kazemi, Farzaneh January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
