1

Toward Improving Confidence in Autonomous Vehicle Software: A Study on Traffic Sign Recognition Systems

Aslansefat, K., Kabir, S., Abdullatif, A. R. A., Vasudevan, V., Papadopoulos, Y. 10 August 2021
The application of artificial intelligence (AI) and data-driven decision-making systems in autonomous vehicles is growing rapidly. As autonomous vehicles operate in dynamic environments, the risk that they will face an unknown observation is relatively high due to insufficient training data, distributional shift, or cyber-security attacks. AI-based algorithms should therefore make dependable decisions to improve their interpretation of the environment, lower the risk of autonomous driving, and avoid catastrophic accidents. This paper proposes an approach named SafeML II, which applies empirical cumulative distribution function (ECDF)-based statistical distance measures in a designed human-in-the-loop procedure to ensure the safety of machine learning-based classifiers in autonomous vehicle software. The approach is model-agnostic and can cover various machine learning and deep learning classifiers. The German Traffic Sign Recognition Benchmark (GTSRB) is used to illustrate the capabilities of the proposed approach.

This work was supported by the Secure and Safe Multi-Robot Systems (SESAME) H2020 Project under Grant Agreement 101017258.
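As a rough illustration of the kind of check the abstract describes, the sketch below compares training-time and runtime feature distributions using the Kolmogorov-Smirnov statistic, an ECDF-based distance, and flags batches that have drifted far enough to warrant human review. The function name, feature layout, and threshold are assumptions chosen for illustration; this is not the SafeML II implementation.

```python
# Minimal sketch of an ECDF-based distance check, in the spirit of SafeML II.
# The threshold and feature representation are illustrative assumptions.
import numpy as np
from scipy import stats

def ecdf_distance_check(train_features: np.ndarray,
                        runtime_features: np.ndarray,
                        threshold: float = 0.15) -> bool:
    """Compare runtime features against training features per dimension
    using the two-sample Kolmogorov-Smirnov statistic (an ECDF-based
    distance). Returns True when the average shift is large enough that
    the classifier's output should be referred to a human operator."""
    distances = []
    for i in range(train_features.shape[1]):
        ks_stat, _ = stats.ks_2samp(train_features[:, i], runtime_features[:, i])
        distances.append(ks_stat)
    return float(np.mean(distances)) > threshold

# Example: a runtime batch drawn from a shifted distribution is flagged.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 8))    # features seen during training
runtime = rng.normal(0.8, 1.0, size=(200, 8))   # shifted runtime batch
print(ecdf_distance_check(train, runtime))       # True -> refer to a human
```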
2

Trojan Attacks and Defenses on Deep Neural Networks

Yingqi Liu, 13 October 2022
With the fast spread of machine learning techniques, sharing and adopting public deep neural networks has become very popular. Because deep neural networks are not intuitive for humans to understand, malicious behaviors can be injected into them undetected; this is called a trojan attack or backdoor attack on neural networks. Trojaned models operate normally when regular inputs are provided, but misclassify inputs stamped with a special pattern, called the trojan trigger, to a specific output label. Deploying trojaned models can cause severe consequences, including endangering human lives in applications such as autonomous driving. Trojan attacks on deep neural networks introduce two challenges. From the attacker's perspective, since the training data or training process is usually not accessible, the attacker needs a way to carry out the trojan attack without access to training data. From the user's perspective, the user needs to quickly scan public deep neural networks and detect trojaned models.

This dissertation addresses these challenges. For trojan attacks without access to training data, we propose to invert the neural network to generate a general trojan trigger, and then retrain the model with reverse-engineered training data to inject malicious behaviors into the model. The malicious behaviors are activated only by inputs stamped with the trojan trigger. To scan for and detect trojaned models, we develop a novel technique that analyzes inner neuron behaviors by determining how output activations change when different levels of stimulation are introduced to a neuron. A trojan trigger is then reverse-engineered through an optimization procedure using the stimulation analysis results, to confirm that a neuron is truly compromised. Furthermore, for complex trojan attacks, we propose a novel complex-trigger detection method that leverages a symmetric feature differencing method to distinguish features of injected complex triggers from natural features. For trojan attacks on NLP models, we propose a novel backdoor scanning technique. It transforms a subject model into an equivalent but differentiable form, inverts a distribution of words denoting their likelihood of being in the trigger, and applies a novel word discriminativity analysis to determine whether the subject model is particularly discriminative for the presence of likely trigger words.
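For a concrete picture of the attack the abstract describes, the sketch below stamps a small trigger patch onto an input image; a trojaned model would classify such stamped inputs to the attacker's chosen label while behaving normally on clean inputs. The trigger shape, position, and target label here are hypothetical and are not taken from the dissertation's experiments.

```python
# Illustrative sketch of stamping a trojan trigger onto an input image.
# Trigger size, location, and target label are assumptions for illustration.
import numpy as np

def stamp_trigger(image: np.ndarray,
                  trigger: np.ndarray,
                  top: int = 0, left: int = 0) -> np.ndarray:
    """Overwrite a small patch of the image with the trigger pattern.
    A trojaned model maps such stamped inputs to the attacker's chosen
    output label while behaving normally on clean inputs."""
    stamped = image.copy()
    h, w = trigger.shape[:2]
    stamped[top:top + h, left:left + w] = trigger
    return stamped

# Example: a 4x4 white-square trigger in the corner of a 32x32 RGB image.
clean = np.zeros((32, 32, 3), dtype=np.float32)
trigger = np.ones((4, 4, 3), dtype=np.float32)
poisoned = stamp_trigger(clean, trigger, top=28, left=28)
# target_label = 7  # hypothetical attacker-chosen label for stamped inputs
```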
