1

Secure Data Aggregation Protocol with Byzantine Robustness for Wireless Sensor Networks

Khalifa, Tarek, January 2007
Sensor networks are dense wireless networks consisting of small, low-cost sensors that collect and disseminate sensory data. They have gained great attention in recent years because they offer economical and effective solutions in a variety of fields and are well suited to the mission-critical problems common in health, transportation, and military applications. Sensor networks are widely seen as a transformative technology, and their deployment is expected to grow rapidly. An effective security strategy is essential for any sensor network in order to maintain trustworthy and reliable operation, protect sensory information, and ensure the authenticity of network components. Security models and protocols typically used in other types of networks, such as wired networks, are unsuitable for sensor networks because of the limited hardware resources of sensor nodes. This thesis reviews existing research on the security of wireless sensor networks and proposes a solution for detecting Byzantine behaviour, a challenging security threat that many sensor networks face. The proposed solution keeps its use of cryptography to a minimum in order to preserve bandwidth. Under this solution, a sensor network operates normally until an attack is suspected; once an attack is suspected, a cryptographic scheme is enabled to authenticate suspected nodes and to identify potential external attacks. If the attack appears to persist after the cryptographic scheme has been enabled, the same mechanism is used to identify and isolate potentially compromised nodes. The goal is to introduce a degree of intelligence into such networks and consequently improve the reliability of data collection and the accuracy of aggregated data, and prolong network lifetime.
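The abstract describes an adaptive, on-demand use of cryptography rather than a concrete API, so the following Python sketch only illustrates that idea under stated assumptions: aggregation runs with no cryptographic overhead until the readings look anomalous, at which point a hypothetical HMAC check over pre-shared keys is switched on and nodes that fail it are isolated. The key table, the anomaly heuristic, and all names (`mac`, `suspect_attack`, `aggregate`) are assumptions for illustration, not the protocol proposed in the thesis.

```python
import hmac
import hashlib
import statistics

# Assumed pre-shared keys between each node and the aggregator (illustrative only).
SHARED_KEYS = {"n1": b"key-n1", "n2": b"key-n2", "n3": b"key-n3"}


def mac(node_id: str, reading: float) -> bytes:
    """Compute an HMAC tag over a node's reading using its pre-shared key."""
    return hmac.new(SHARED_KEYS[node_id], repr(reading).encode(), hashlib.sha256).digest()


def suspect_attack(readings: dict, threshold: float = 3.0) -> bool:
    """Crude anomaly heuristic: suspect an attack if any reading is far from the median."""
    med = statistics.median(readings.values())
    return any(abs(v - med) > threshold for v in readings.values())


def aggregate(reports: dict) -> float:
    """Aggregate reports of the form {node_id: (reading, tag_or_None)}."""
    readings = {nid: r for nid, (r, _) in reports.items()}
    if not suspect_attack(readings):
        # Normal operation: no cryptographic cost at all.
        return statistics.fmean(readings.values())

    # Attack suspected: enable authentication and isolate nodes that fail it.
    trusted = {}
    for nid, (reading, tag) in reports.items():
        if tag is not None and hmac.compare_digest(tag, mac(nid, reading)):
            trusted[nid] = reading
        else:
            print(f"isolating unauthenticated or suspect node {nid}")
    return statistics.fmean(trusted.values())


# Example: n3 injects an outlier without a valid tag and is isolated.
reports = {
    "n1": (21.3, mac("n1", 21.3)),
    "n2": (21.7, mac("n2", 21.7)),
    "n3": (99.0, None),
}
print(aggregate(reports))  # 21.5
```

The design point the sketch tries to capture is the one stated in the abstract: authentication cost is paid only while an attack is suspected, which is what lets the protocol preserve bandwidth in the common case.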
2

Breaking Privacy in Model-Heterogeneous Federated Learning

Haldankar, Atharva Amit, 14 May 2024
Federated learning (FL) is a communication protocol that allows multiple mutually distrustful clients to collaboratively train a machine learning model. In FL, data never leaves client devices; instead, clients only share locally computed gradients or model parameters with a central server. As individual gradients may leak information about a given client's dataset, secure aggregation was proposed. With secure aggregation, the server only receives the aggregate gradient update from the set of all sampled clients without being able to access any individual gradient. One challenge in FL is the systems-level heterogeneity that is often present among client devices. Specifically, clients in the FL protocol may have varying levels of compute power, on-device memory, and communication bandwidth. These limitations are addressed by model-heterogeneous FL schemes, in which clients train on subsets of the global model. Despite the benefits of model-heterogeneous schemes in addressing systems-level challenges, the implications of these schemes for client privacy have not been thoroughly investigated. In this thesis, we investigate whether the nature of model distribution and the computational heterogeneity among client devices in model-heterogeneous FL schemes may allow the server to recover sensitive information from target clients. To this end, we propose two novel attacks in the model-heterogeneous setting, even with secure aggregation in place. We call these attacks the Convergence Rate Attack and the Rolling Model Attack. The Convergence Rate Attack targets schemes where clients train on the same subset of the global model, while the Rolling Model Attack targets schemes where model parameters are dynamically updated each round. We show that a malicious adversary is able to compromise the model and data confidentiality of a target group of clients. We evaluate our attacks on the MNIST dataset and show that, using our techniques, an adversary can reconstruct data samples with high fidelity. / Master of Science / Federated learning (FL) is a communication protocol that allows multiple mutually distrustful users to collaboratively train a machine learning model. In FL, data never leaves user devices; instead, users only share locally computed gradients or model parameters (e.g., weight and bias values) with an aggregation server. As individual gradients may leak information about a given user's dataset, secure aggregation was proposed. Secure aggregation is a protocol that users and the server run together, in which the server only receives the aggregate gradient update from the set of all sampled users instead of each individual user update. In FL, users often have varying levels of compute power, on-device memory, and communication bandwidth. These differences between users are collectively referred to as systems-level (or system) heterogeneity. While there are a number of techniques to address system heterogeneity, one popular approach is to have users train on different subsets of the global model. This approach is known as model-heterogeneous FL. Despite the benefits of model-heterogeneous FL schemes in addressing systems-level challenges, the implications of these schemes for user privacy have not been thoroughly investigated. In this thesis, we investigate whether the nature of model distribution and the differences in compute power between user devices in model-heterogeneous FL schemes may allow the server to recover sensitive information.
To this end, we propose two novel attacks in the model-heterogeneous setting with secure aggregation in place. We call these attacks the Convergence Rate Attack and the Rolling Model Attack. The Convergence Rate Attack targets schemes where users train on the same subset of the global model, while the Rolling Model Attack targets schemes where model parameters may change each round. We first show that a malicious server is able to obtain individual user updates, despite secure aggregation being in place. Then, we demonstrate how an adversary can use those updates to reverse engineer data samples from users. We evaluate our attacks on the MNIST dataset, a commonly used dataset of handwritten digits and their labels. We show that by running our attacks, an adversary can accurately identify which images a user trained on.
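Both abstracts of this record rest on the premise that secure aggregation reveals only the sum of client updates to the server. As a minimal, generic sketch of that property, assuming a pairwise-masking construction (not the specific schemes or attacks studied in the thesis, with all names and toy dimensions invented for illustration):

```python
import numpy as np

# Generic illustration of secure aggregation via pairwise masking: each pair of
# clients (i, j) shares a mask; client i adds it and client j subtracts it, so
# the masks cancel in the sum and the server learns only the aggregate update.

rng = np.random.default_rng(0)
DIM, CLIENTS = 4, 3  # toy sizes, assumptions for illustration

true_updates = [rng.normal(size=DIM) for _ in range(CLIENTS)]

# In a real protocol these masks would be derived from pairwise shared secrets.
pair_masks = {(i, j): rng.normal(size=DIM)
              for i in range(CLIENTS) for j in range(i + 1, CLIENTS)}


def masked_update(i: int) -> np.ndarray:
    """What client i actually sends: its true update plus/minus pairwise masks."""
    masked = true_updates[i].copy()
    for (a, b), m in pair_masks.items():
        if a == i:
            masked += m   # lower-indexed client of the pair adds the mask
        elif b == i:
            masked -= m   # higher-indexed client subtracts it
    return masked


server_view = [masked_update(i) for i in range(CLIENTS)]
aggregate = sum(server_view)

# Individual masked updates look like noise, but the aggregate is exact.
print(np.allclose(aggregate, sum(true_updates)))  # True
```

Per the abstract, the thesis's contribution is showing that model heterogeneity can undermine this guarantee: when clients train on different subsets or versions of the global model, a malicious server can still recover information about individual contributions despite secure aggregation.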
