71

Vers une plateforme holistique de protection de la vie privée dans les services géodépendants [Towards a holistic privacy-protection platform for location-based services]

Sahnoune, Zakaria 04 1900
No description available.
72

Construction of Secure and Efficient Private Set Intersection Protocol

Kumar, Vikas January 2013
Private set intersection (PSI) is a two-party protocol in which each party holds a private set; at the end of the protocol, one party (the client) learns the intersection while the other party (the server) learns nothing. Motivated by several interesting practical applications, a number of provably secure and efficient PSI protocols have appeared in the literature in the recent past. Some of the proposed solutions are secure in the honest-but-curious (HbC) model, while others are secure in the (stronger) malicious model. Security in the latter is traditionally achieved via the classical approach of attaching a zero-knowledge proof of knowledge (ZKPoK) and/or using the so-called cut-and-choose technique. These approaches prevent the parties from deviating from normal protocol execution, albeit with significant computational overhead and increased complexity in the security argument, which, in the case of ZKPoK, includes knowledge extraction through rewinding. We critically investigate a subset of the existing protocols. Our study reveals some interesting points about the so-called provable security guarantees of some of the proposed solutions. Surprisingly, we point out gaps in the security arguments of several protocols. We also discuss an attack on one protocol when it is executed multiple times between the same client and server; the attack, in fact, indicates a limitation in the existing security definition of PSI. On the positive side, we show how to correct the security arguments for the above-mentioned protocols and show that, in the HbC model, security can be based on standard computational assumptions such as the RSA and Gap Diffie-Hellman problems. For one protocol, we give an improved version and prove its security in the HbC model under a standard computational assumption. For the malicious model, we construct two PSI protocols from deterministic blind signatures, namely Boldyreva's and Chaum's blind signatures, which involve neither ZKPoK nor the cut-and-choose technique. Chaum's blind signature yields a new protocol in the RSA setting, and Boldyreva's blind signature yields a protocol in the gap Diffie-Hellman setting that is quite similar to an existing protocol but more efficient and free of ZKPoK.
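
A hedged sketch may help make the honest-but-curious setting concrete. The toy Python below illustrates the Diffie-Hellman-style PSI pattern that several of the surveyed protocols share: the client blinds hashed set elements with fresh exponents, the server applies its secret key, and the client unblinds and compares. The group (RFC 3526's 1536-bit MODP prime), the hash-to-group map, and all data are illustrative assumptions, not the thesis's exact construction.

```python
# A toy sketch of Diffie-Hellman-style PSI in the honest-but-curious
# model. The group (RFC 3526 1536-bit MODP prime, g = 2), the
# hash-to-group map, and the data are assumptions for illustration.
import hashlib
import secrets

P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF", 16)
Q = (P - 1) // 2        # prime order of the subgroup generated by g
G = 2

def hash_to_group(x: str) -> int:
    """Map an element into the subgroup as g^{H(x)} mod p (sketch only)."""
    e = int.from_bytes(hashlib.sha256(x.encode()).digest(), "big") % Q
    return pow(G, e, P)

# Server: a long-term secret key k, and tags H(s)^k for its own set.
k = secrets.randbelow(Q - 1) + 1
server_set = {"alice@a.com", "bob@b.com", "carol@c.com"}
server_tags = {pow(hash_to_group(s), k, P) for s in server_set}

# Client: blind each element with a fresh exponent r (hides the element).
client_set = ["bob@b.com", "dave@d.com"]
r = {c: secrets.randbelow(Q - 1) + 1 for c in client_set}
blinded = [pow(hash_to_group(c), r[c], P) for c in client_set]

# Server raises each blinded value to k; it sees only blinded elements.
answered = [pow(b, k, P) for b in blinded]

# Client unblinds with r^{-1} mod Q and compares against the server tags.
intersection = [
    c for c, y in zip(client_set, answered)
    if pow(y, pow(r[c], -1, Q), P) in server_tags
]
print(intersection)   # ['bob@b.com']
```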
73

Privacy preserving software engineering for data driven development

Tongay, Karan Naresh 14 December 2020
The exponential rise in data generation has introduced many new areas of research, including data science, data engineering, machine learning, and artificial intelligence, to name a few. It has become important for any industry or organization to precisely understand and analyze its data in order to extract value from it. That value can only be realized when the data is put into practice in the real world, and the most common way to do this in the technology industry is through software engineering. This brings into the picture the area of privacy-oriented software engineering, reflected in the rise of data protection regulations such as the GDPR (General Data Protection Regulation) and the PDPA (Personal Data Protection Act). Organizations, governments, and companies that have accumulated huge amounts of data over time may conveniently use that data to increase business value, but the privacy aspects associated with sensitive data, especially the personal information of individuals, can easily be circumvented when designing a software engineering model for these types of applications. Even before the software engineering phase of any data processing application, there are often one or more data sharing agreements or privacy policies in place. Every organization may have its own way of maintaining data privacy practices for data-driven development, and there is a need to generalize or categorize these approaches into tactics that other practitioners can consult when integrating data privacy practices into their own development. This qualitative study provides an understanding of the various approaches and tactics practised within the industry for privacy-preserving data science in software engineering, and discusses a tool for data usage monitoring to identify unethical data access. Finally, we studied strategies for secure data publishing and conducted experiments using sample data to demonstrate how these techniques can help secure private data before publication.
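
As one concrete illustration of what securing private data before publishing can look like, the sketch below applies k-anonymity-style generalization, a standard technique in this space; the dataset, quasi-identifier columns, and generalization rules are invented for illustration and are not taken from the thesis.

```python
# A minimal sketch of k-anonymity-style generalization before data
# publishing. The dataset, quasi-identifiers, and generalization rules
# are invented for illustration.
from collections import Counter

records = [
    {"age": 34, "zip": "V8W1A1", "diagnosis": "flu"},
    {"age": 36, "zip": "V8W2B2", "diagnosis": "asthma"},
    {"age": 51, "zip": "V9A3C3", "diagnosis": "flu"},
    {"age": 53, "zip": "V9A4D4", "diagnosis": "diabetes"},
]

def generalize(rec):
    """Coarsen quasi-identifiers: bucket age by decade, truncate the ZIP."""
    decade = rec["age"] // 10 * 10
    return {
        "age": f"{decade}-{decade + 9}",
        "zip": rec["zip"][:3] + "***",
        "diagnosis": rec["diagnosis"],   # sensitive attribute, kept as-is
    }

def is_k_anonymous(rows, quasi_ids, k):
    """Check every quasi-identifier combination appears at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in groups.values())

published = [generalize(r) for r in records]
print(is_k_anonymous(published, ["age", "zip"], k=2))   # True
```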
74

Towards causal federated learning: a federated approach to learning representations using causal invariance

Francis, Sreya 10 1900
Federated learning is an emerging privacy-preserving distributed machine learning approach that builds a shared model by performing training locally on participating devices (clients) and aggregating the local models into a global one. Because this approach avoids centralized data collection and aggregation, it greatly reduces the associated privacy risks. However, the data samples across participating clients are usually not independent and identically distributed (non-i.i.d.), so out-of-distribution (OOD) generalization of the learned models can be poor. Beyond this challenge, federated learning also remains vulnerable to various security attacks in which a few malicious participants work to insert backdoors, degrade the aggregated model, or infer the data owned by other participants. In this work, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup and analyse empirically how it improves OOD accuracy as well as the privacy of the final learned model. Although federated learning allows participants to contribute their local data without revealing it, it faces issues in data security and in accurately paying participants for quality data contributions. In this report, we also propose an EOS blockchain design and workflow to establish data security, introduce a novel validation-error-based metric with which we qualify gradient uploads for payment, and implement a small example of our blockchain causal federated learning model to analyze its performance with respect to robustness, privacy, and fairness of incentivization.
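
The local-training-plus-aggregation loop the abstract describes is essentially federated averaging (FedAvg). The following minimal sketch shows that loop on a toy convex problem with non-i.i.d. clients; the model, data, and hyperparameters are invented for illustration and do not include the thesis's causal or blockchain components.

```python
# A minimal sketch of federated averaging (FedAvg) on a toy convex
# problem. Clients train locally, and only model weights, never raw
# data, reach the aggregator. The model, data, and hyperparameters
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.02, epochs=5):
    """A few steps of local gradient descent on one client's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def fedavg(client_models, client_sizes):
    """Average client models, weighted by local sample counts."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_models, client_sizes))

# Three clients with non-i.i.d. local datasets (shifted input ranges).
true_w = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 2.0, 4.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(50):                              # communication rounds
    local_models = [local_train(global_w, X, y) for X, y in clients]
    global_w = fedavg(local_models, [len(y) for _, y in clients])

print(np.round(global_w, 2))   # close to [ 2. -1.], no raw data shared
```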
75

An Image-based ML Approach for Wi-Fi Intrusion Detection System and Education Modules for Security and Privacy in ML

Rayed Suhail Ahmad 02 May 2024
The research work presented in this thesis focuses on two highly relevant modern topics. The first is the development of various image-based Network Intrusion Detection Systems (NIDSs) and a comprehensive analysis of their performance. Wi-Fi networks have become ubiquitous in enterprise and home settings, which creates opportunities for attackers to target them: attackers exploit various vulnerabilities in Wi-Fi networks to gain unauthorized access or to extract data from end users' devices. Deploying an NIDS helps detect these attacks before they can significantly damage the network's functionality or security. Within the scope of our research, we provide a comparative analysis of deep learning (DL)-based NIDSs that use various imaging techniques to detect anomalous traffic in a Wi-Fi network. The second topic is the development of learning modules for security and privacy in machine learning (ML). The increasing integration of ML across domains raises concerns about its security and privacy, and to address these concerns effectively, students learning the basics of ML need to be made aware of the steps taken to develop robust and secure ML-based systems. To that end, we introduce a set of hands-on learning modules designed to educate students on the importance of security and privacy in ML. The modules combine theoretical material delivered through presentations with practical experience in Python notebooks, and are designed so that students can readily absorb the concepts of ML model privacy and security and apply them in real-life scenarios. The efficacy of this process is assessed through surveys conducted before and after the learning modules are provided; positive results would demonstrate that the modules effectively impart knowledge and support incorporating security and privacy concepts into introductory ML courses.
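
To make the "image-based" idea concrete, the sketch below shows the kind of traffic-to-image transformation such NIDSs typically rest on: fixed-length byte sequences from network flows reshaped into grayscale images for a CNN to classify. The byte sources, image size, and names are illustrative assumptions; the thesis's actual imaging techniques and models may differ.

```python
# A minimal sketch of the traffic-to-image step behind an image-based
# NIDS: fixed-length byte sequences from flows become grayscale images
# that a CNN can classify. The flow bytes here are random stand-ins; a
# real pipeline would read capture files, and the classifier would be
# a trained deep model.
import numpy as np

IMG_SIDE = 32                       # 32x32 = 1024 bytes per flow image

def flow_to_image(flow_bytes: bytes) -> np.ndarray:
    """Truncate or zero-pad a flow to 1024 bytes, reshape to 32x32."""
    buf = flow_bytes[: IMG_SIDE * IMG_SIDE].ljust(IMG_SIDE * IMG_SIDE, b"\x00")
    img = np.frombuffer(buf, dtype=np.uint8).reshape(IMG_SIDE, IMG_SIDE)
    return img.astype(np.float32) / 255.0      # normalize for the model

# Stand-in flows; in practice these would come from Wi-Fi capture files.
rng = np.random.default_rng(1)
benign = rng.integers(0, 256, 400, dtype=np.uint8).tobytes()
attack = bytes([0xFF] * 600)        # e.g., a flooding-style byte pattern

images = np.stack([flow_to_image(f) for f in (benign, attack)])
print(images.shape)                 # (2, 32, 32), ready as CNN input
```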
