About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Machine Learning Approach for Securing Autonomous and Connected Vehicles

Acharya, Abiral January 2021 (has links)
No description available.
2

Game Theoretic Analysis of Defence Algorithms Against Data Poisoning Attack

Ou, Yifan January 2020 (has links)
As Machine Learning (ML) algorithms are deployed to solve a wide variety of tasks in today's world, data poisoning attacks pose a significant threat to ML applications. Although numerous defence algorithms against data poisoning have been proposed and shown to be effective, most are analyzed under the assumption of fixed attack strategies, without accounting for the strategic interactions between the attacker and the defender. In this work, we perform a game theoretic analysis of defence algorithms against data poisoning attacks on Machine Learning. We study the defence as a competitive game between the defender and the adversary and analyze the game characteristics for several defence algorithms. We propose a game model for the poisoning attack scenario, and prove characteristics of the Nash Equilibrium (NE) defence strategy for all distance-based defence algorithms. Based on the NE characteristics, we develop an efficient algorithm to approximate the NE defence strategy. Using fixed attack strategies as the benchmark, we then experimentally evaluate the impact of strategic interactions in the game model. Our approach not only provides insight into the effectiveness of the analyzed algorithms under optimal poisoning attacks, but also serves as a method for modellers to determine capable defence algorithms and optimal strategies to employ on their ML models. / Thesis / Master of Science (MSc) / As Machine Learning (ML) algorithms are deployed to solve a wide variety of tasks in today's world, data poisoning attacks pose a significant threat to ML applications. In this work, we study defence against poisoning attacks as a competitive game between the defender and the adversary and analyze the game characteristics for several defence algorithms. Our goal is to identify the optimal defence strategy against poisoning attacks, even when the adversary responds optimally to that strategy.
We propose a game model for the poisoning attack scenario, and develop an efficient algorithm to approximate the Nash Equilibrium defence strategy. Our approach not only provides insight into the effectiveness of the analyzed algorithms under optimal poisoning attacks, but also serves as a method for modellers to determine capable defence algorithms and optimal strategies to employ on their ML models.
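Distance-based defences of the kind this thesis analyzes broadly work by discarding training points that lie too far from a data centroid: the defender's strategy is effectively the distance threshold, and the attacker's strategy is where to place the poisoned points. A minimal illustrative sketch of such a defence (the synthetic data, the threshold value, and the function name are our assumptions, not the thesis's actual construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training points clustered near the origin; a handful of
# poisoned points placed far away by a hypothetical attacker.
clean = rng.normal(0.0, 1.0, size=(100, 2))
poison = rng.normal(8.0, 0.5, size=(5, 2))
data = np.vstack([clean, poison])

def distance_defence(X, tau):
    """Keep only points within distance tau of the (poison-shifted) centroid."""
    centroid = X.mean(axis=0)
    dist = np.linalg.norm(X - centroid, axis=1)
    return X[dist <= tau]

filtered = distance_defence(data, tau=3.0)
```

In the game-theoretic framing, the attacker would choose poison locations knowing the defence in play, and the defender would choose the threshold anticipating that best response; the fixed-threshold filter above corresponds to the fixed-strategy benchmark that the thesis compares against the NE strategy.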
3

PREVENTING DATA POISONING ATTACKS IN FEDERATED MACHINE LEARNING BY AN ENCRYPTED VERIFICATION KEY

Mahdee, Jodayree 06 1900 (has links)
Federated learning has gained attention recently for its ability to protect data privacy and distribute computing loads [1]. It overcomes the limitations of traditional machine learning algorithms by allowing computers to train on remote data inputs and build models while keeping participant privacy intact. Traditional machine learning offered a solution by enabling computers to learn patterns and make decisions from data without explicit programming, opening up new possibilities for automating tasks, recognizing patterns, and making predictions. With the exponential growth of data and advances in computational power, machine learning has become a powerful tool in various domains, driving innovations in fields such as image recognition, natural language processing, autonomous vehicles, and personalized recommendations. In traditional machine learning, however, data is usually transferred to a central server, raising concerns about privacy and security: centralizing data exposes sensitive information, making it vulnerable to breaches or unauthorized access. Centralized machine learning also assumes that all data is available at a central location, which is not always practical or feasible; some data may be distributed across different locations, owned by different entities, or subject to legal or privacy restrictions. Moreover, training a global model in traditional machine learning involves frequent communication between the central server and participating devices, and this communication overhead can be substantial, particularly when dealing with large-scale datasets or resource-constrained devices. / Recent studies have uncovered security issues with most federated learning models. One common false assumption in the federated learning model is that participants are not attackers and would not use polluted data.
This vulnerability enables attackers to train their models on polluted data and then send the polluted updates to the training server for aggregation, potentially poisoning the overall model. In such a setting, it is challenging for an edge server to thoroughly inspect the data used for model training or to supervise any edge device. This study evaluates the vulnerabilities present in federated learning, explores the various types of attacks that can occur, and presents a robust prevention scheme to address these vulnerabilities. The proposed prevention scheme enables federated learning servers to actively monitor participants in real time and identify infected individuals by introducing an encrypted verification scheme. The paper outlines the protocol design of this prevention scheme and presents experimental results that demonstrate its effectiveness. / Thesis / Doctor of Philosophy (PhD) / Federated learning models face significant security challenges and can be vulnerable to attacks. For instance, federated learning models assume that participants are not attackers and will not manipulate the data. In reality, however, attackers can compromise the data of remote participants by inserting fake data or altering existing data, which can result in polluted training results being sent to the server. For instance, if a training sample is an animal image, attackers can modify it to contaminate the training data. This paper introduces a robust preventive approach that counters data pollution attacks in real time: it incorporates an encrypted verification scheme into the federated learning model, preventing poisoning attacks without the need for attack-specific detection programming. The main contribution is a detection and prevention mechanism that allows the training server to supervise training in real time and stop data modifications in each client's storage before and between training rounds.
With this scheme, the training server can identify modifications in real time and remove infected remote participants.
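The core idea of verifying that a client's data has not been tampered with between rounds can be caricatured with a keyed digest: the server records an HMAC of each participant's data at enrolment and flags any participant whose digest later changes. This is only a loose sketch of the abstract's mechanism; the key handling, names, and protocol below are our illustrative assumptions, not the thesis's actual design:

```python
import hashlib
import hmac
import os

SERVER_KEYS: dict[str, bytes] = {}       # per-client secret verification keys
REGISTERED_DIGESTS: dict[str, str] = {}  # digest of each client's data at enrolment

def _digest(key: bytes, data: bytes) -> str:
    # Keyed hash of the client's local training data.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def register(client_id: str, data: bytes) -> None:
    """Enrol a client: issue a secret key and record its data digest."""
    key = os.urandom(32)
    SERVER_KEYS[client_id] = key
    REGISTERED_DIGESTS[client_id] = _digest(key, data)

def verify(client_id: str, data: bytes) -> bool:
    """Before a training round, check that the client's data is unmodified."""
    expected = REGISTERED_DIGESTS[client_id]
    return hmac.compare_digest(expected, _digest(SERVER_KEYS[client_id], data))

register("client-1", b"local training shard")
```

A client whose `verify` check fails between rounds would be excluded from aggregation, mirroring the abstract's claim that infected remote participants can be identified and removed in real time.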
