1

Efficient Building Blocks for Secure Multiparty Computation and Their Applications

Donghang Lu (13157568) 27 July 2022 (has links)
Secure multi-party computation (MPC) enables mutually distrusting parties to compute securely over their private data. It is a natural approach for building distributed applications with strong privacy guarantees, and it has been used in a growing number of real-world privacy-preserving solutions such as privacy-preserving machine learning, secure financial analysis, and secure auctions.

The typical approach to MPC is to represent the function as an arithmetic or binary circuit, so that each gate can be computed privately. The practicality of MPC has been extensively analyzed and improved over the past decade; however, the traditional approaches are hitting their efficiency limits as circuits grow more complicated. We therefore follow the design principle of identifying and constructing fast, provably secure MPC protocols for useful high-level algebraic abstractions, thereby improving the efficiency of all applications that rely on them.

To begin with, we construct an MPC protocol to efficiently evaluate the powers of a secret value. We then use it as a building block for a secure mixing protocol, which can be directly used for anonymous broadcast communication. We propose two secure mixing protocols offering different tradeoffs between local computation and communication. We also study the necessity of robustness and fairness in many use cases, and show how to provide these properties in general MPC protocols. As a follow-up in this direction, we design more efficient MPC protocols for anonymous communication through the use of permutation matrices, with three variants targeting different MPC frameworks and input volumes. Because the core of our protocols is a secure random permutation, they are of independent interest for further applications such as secure sorting and secure two-way communication.

We also propose and analyze a protocol for another useful arithmetic operation: secure multi-variable high-degree polynomial evaluation over both scalars and matrices. Secure polynomial evaluation is a basic operation in many applications, including (but not limited to) privacy-preserving machine learning, secure Markov process evaluation, and non-linear function approximation. We illustrate how our protocol can efficiently evaluate decision tree models with both the client input and the tree model kept private. Our prototype benchmarks show that polynomial evaluation becomes significantly faster, leaving secure comparison as the only bottleneck. Therefore, as a follow-up, we design novel protocols that evaluate secure comparison efficiently with the help of pre-computed function tables. We implement and test this idea in Falcon, a state-of-the-art privacy-preserving machine learning framework, and the benchmarks show significant performance improvement from simply replacing its secure comparison protocol with ours.
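A plaintext sketch may help fix ideas for the permutation-matrix primitive at the core of the anonymous-communication protocols: applying a random 0/1 permutation matrix to a vector of messages shuffles them. In the actual protocols both the matrix and the messages are secret-shared so that no party learns the permutation; the NumPy code below is purely illustrative and not taken from the thesis.

```python
import numpy as np

def random_permutation_matrix(n, rng):
    """Build an n x n 0/1 permutation matrix from a random permutation."""
    perm = rng.permutation(n)
    P = np.zeros((n, n), dtype=int)
    P[np.arange(n), perm] = 1  # row i picks out position perm[i]
    return P

# One message per sender; in the MPC setting, P and the messages would
# both be secret-shared, hiding the permutation from every party.
rng = np.random.default_rng(7)
messages = np.array([11, 22, 33, 44])
P = random_permutation_matrix(4, rng)
mixed = P @ messages  # same messages, in a hidden random order
```

The same primitive underlies secure sorting: composing secret-shared permutation matrices reorders data without revealing the order to any single party.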
2

Towards causal federated learning : a federated approach to learning representations using causal invariance

Francis, Sreya 10 1900 (has links)
Federated learning is an emerging privacy-preserving distributed machine learning approach that builds a shared model by training locally on participating devices (clients) and aggregating the local models into a global one. Because this approach avoids central data collection and aggregation, it greatly reduces the associated privacy risks. However, the data samples across participating clients are usually not independent and identically distributed (non-i.i.d.), and out-of-distribution (OOD) generalization of the learned models can be poor. Beyond this challenge, federated learning remains vulnerable to various security attacks in which a few malicious participants insert backdoors, degrade the aggregated model, or infer the data owned by other participants. In this work, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup, and we analyze empirically how it improves both the OOD accuracy and the privacy of the final learned model. Although federated learning allows participants to contribute their local data without revealing it, it faces issues in data security and in accurately paying participants for quality data contributions. In this report, we therefore also propose an EOS blockchain design and workflow to establish data security, a novel validation-error-based metric by which we qualify gradient uploads for payment, and a small implementation of our blockchain causal federated learning model, which we analyze with respect to robustness, privacy, and fairness of incentivization.
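For readers unfamiliar with the aggregation step this abstract builds on, the following is a minimal plaintext sketch of federated averaging (FedAvg) on a toy linear model. The function names, learning rate, and round counts are illustrative and not taken from the thesis; a real deployment would also address the non-i.i.d. and security issues the abstract discusses.

```python
import numpy as np

def local_update(w, X, y, lr=0.05, steps=20):
    """One client's gradient steps on its private (X, y); data never leaves."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fed_avg(w_global, clients):
    """Aggregate local models, weighting each client by its data size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_update(w_global, X, y) for X, y in clients]
    return sum((n / sizes.sum()) * u for n, u in zip(sizes, updates))

# Two clients with private data; only model weights cross the network.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 70):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(10):  # communication rounds
    w = fed_avg(w, clients)
```

After a few rounds `w` approaches `w_true` even though neither client ever shared its raw data, which is the privacy property the abstract starts from.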
3

An Image-based ML Approach for Wi-Fi Intrusion Detection System and Education Modules for Security and Privacy in ML

Rayed Suhail Ahmad (18476697) 02 May 2024 (has links)
<p dir="ltr">The research work presented in this thesis focuses on two highly important topics in the modern age. The first topic of research is the development of various image-based Network Intrusion Detection Systems (NIDSs) and performing a comprehensive analysis of their performance. Wi-Fi networks have become ubiquitous in enterprise and home networks which creates opportunities for attackers to target the networks. These attackers exploit various vulnerabilities in Wi-Fi networks to gain unauthorized access to a network or extract data from end users' devices. The deployment of an NIDS helps detect these attacks before they can cause any significant damages to the network's functionalities or security. Within the scope of our research, we provide a comparative analysis of various deep learning (DL)-based NIDSs that utilize various imaging techniques to detect anomalous traffic in a Wi-Fi network. The second topic in this thesis is the development of learning modules for security and privacy in Machine Learning (ML). The increasing integration of ML in various domains raises concerns about its security and privacy. In order to effectively address such concerns, students learning about the basics of ML need to be made aware of the steps that are taken to develop robust and secure ML-based systems. As part of this, we introduce a set of hands-on learning modules designed to educate students on the importance of security and privacy in ML. The modules provide a theoretical learning experience through presentations and practical experience using Python Notebooks. The modules are developed in a manner that allows students to easily absorb the concepts regarding privacy and security of ML models and implement it in real-life scenarios. The efficacy of this process will be obtained from the results of the surveys conducted before and after providing the learning modules. 
Positive results from the survey will demonstrate the learning modules were effective in imparting knowledge to the students and the need to incorporate security and privacy concepts in introductory ML courses.</p>
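As an illustration of the image-based idea, one common technique (a hedged sketch, not necessarily this thesis's exact method) is to min-max normalize a flow's numeric features and reshape them into a small grayscale image that a convolutional network can then classify:

```python
import numpy as np

def flow_to_image(features, side=8):
    """Normalize a flow's feature vector to [0, 255] and reshape it into
    a side x side grayscale image suitable for a CNN-based NIDS."""
    v = np.asarray(features, dtype=float).ravel()
    v = np.pad(v, (0, side * side - len(v)))   # zero-pad to fill the grid
    lo, hi = v.min(), v.max()
    scaled = (v - lo) / (hi - lo + 1e-9) * 255.0
    return scaled.reshape(side, side).astype(np.uint8)

# Toy flow statistics (packet count, bytes, duration, ports, ...);
# real feature sets would come from a dataset such as a Wi-Fi capture.
img = flow_to_image([4, 1500, 60, 0.3, 12, 80, 443, 17])
```

Casting traffic into images lets off-the-shelf image classifiers and their training pipelines be reused for intrusion detection, which is what makes the comparative analysis across imaging techniques meaningful.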
