1. Blockchain for AI: Smarter Contracts to Secure Artificial Intelligence Algorithms

Badruddoja, Syed
In this dissertation, I investigate the problems that limit the cognitive abilities of existing smart contracts. I use Taylor series expansion, polynomial equations, and fraction-based computations to overcome the limitations of calculation in smart contracts. To prove the hypothesis, I use these mathematical models to compute the complex operations of naive Bayes, linear regression, decision tree, and neural network algorithms on Ethereum public test networks. The smart contracts achieve 95% prediction accuracy compared to traditional programming-language models, demonstrating the soundness of the numerical derivations. Many non-real-time applications can use our solution for trusted and secure prediction services.
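Because the Ethereum Virtual Machine has no floating-point arithmetic, probability computations such as those in naive Bayes have to be expressed with integers and fractions. The following minimal Python sketch illustrates the idea only; it is not the dissertation's Solidity code, and the scaling factor and function names are assumptions. It shows how a truncated Taylor series can approximate exp(x) using integer arithmetic alone:

```python
# Illustrative sketch (not the dissertation's contract code): approximating exp(x)
# with a truncated Taylor series using only integer arithmetic, the kind of
# fixed-point trick needed on the EVM, which has no floating-point support.

SCALE = 10**6  # assumed fixed-point scaling factor; a real contract would pick its own


def exp_taylor(x_scaled: int, terms: int = 10) -> int:
    """Approximate exp(x) for x = x_scaled / SCALE, returning the result scaled by SCALE.

    Uses exp(x) = sum_{n>=0} x^n / n!, computed term by term so that every
    intermediate value stays an integer.
    """
    result = SCALE  # n = 0 term: 1.0 in fixed point
    term = SCALE    # running term x^n / n!, scaled
    for n in range(1, terms):
        term = term * x_scaled // (n * SCALE)
        result += term
    return result


if __name__ == "__main__":
    import math
    x = 1.5
    approx = exp_taylor(int(x * SCALE)) / SCALE
    # The truncated series tracks math.exp closely for small positive x.
    print(approx, math.exp(x))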

2. Towards Secure and Safe AI-enabled Systems Through Optimizations

Guanhong Tao, 15 May 2024
<p dir="ltr">Artificial intelligence (AI) is increasingly integrated into critical systems across various sectors, including public surveillance, autonomous driving, and malware detection. Despite their impressive performance and promise, the security and safety of AI-enabled systems remain significant concerns. Like conventional systems that have software bugs or vulnerabilities, applications leveraging AI are also susceptible to such issues. Malicious behaviors can be intentionally injected into AI models by adversaries, creating a backdoor. These models operate normally with benign inputs but consistently misclassify samples containing an attacker-inserted trigger, known as a <i>backdoor attack</i>.</p><p dir="ltr">However, backdoors can not only be injected by an attacker but may also naturally exist in normally trained models. One can find backdoor triggers in benign models that cause any inputs with the trigger to be misclassified, a phenomenon termed <i>natural backdoors</i>. Regardless of whether they are injected or natural, backdoors can take various forms, which increases the difficulty of identifying such vulnerabilities. This challenge is exacerbated when access to AI models is limited.</p><p dir="ltr">This dissertation introduces an optimization-based technique that reverse-engineers trigger patterns exploited by backdoors, whether injected or natural. It formulates how backdoor triggers modify inputs down to the pixel level to approximate their potential forms. The intended changes in output predictions guide the reverse-engineering process, which involves computing the input gradient or sampling possible perturbations when model access is limited. Although various types of backdoors exist, this dissertation demonstrates that they can be effectively clustered into two categories based on their methods of input manipulation. The development of practical reverse-engineering approaches is based on this fundamental classification, leading to the successful identification of backdoor vulnerabilities in AI models.</p><p dir="ltr">To alleviate such security threats, this dissertation introduces a novel hardening technique that enhances the robustness of models against adversary exploitation. It sheds light on the existence of backdoors, which can often be attributed to the small distance between two classes. Based on this analysis, a class distance hardening method is proposed to proactively enlarge the distance between every pair of classes in a model. This method is effective in eliminating both injected and natural backdoors in a variety of forms.</p><p dir="ltr">This dissertation aims to highlight both existing and newly identified security and safety challenges in AI systems. It introduces novel formulations of backdoor trigger patterns and provides a fundamental understanding of backdoor vulnerabilities, paving the way for the development of safer and more secure AI systems.</p>

3. Towards Secure and Robust 3D Perception in the Real World: An Adversarial Approach

Zhiyuan Cheng, 11 July 2024
<p dir="ltr">The advent of advanced machine learning and computer vision techniques has led to the feasibility of 3D perception in the real world, which includes but not limited to tasks of monocular depth estimation (MDE), 3D object detection, semantic scene completion, optical flow estimation (OFE), etc. Due to the 3D nature of our physical world, these techniques have enabled various real-world applications like Autonomous Driving (AD), unmanned aerial vehicle (UAV), virtual/augmented reality (VR/AR) and video composition, revolutionizing the field of transportation and entertainment. However, it is well-documented that Deep Neural Network (DNN) models can be susceptible to adversarial attacks. These attacks, characterized by minimal perturbations, can precipitate substantial malfunctions. Considering that 3D perception techniques are crucial for security-sensitive applications, such as autonomous driving systems (ADS), in the real world, adversarial attacks on these systems represent significant threats. As a result, my goal of research is to build secure and robust real-world 3D perception systems. Through the examination of vulnerabilities in 3D perception techniques under such attacks, my dissertation aims to expose and mitigate these weaknesses. Specifically, I propose stealthy physical-world attacks against MDE, a fundamental component in ADS and AR/VR that facilitates the projection from 2D to 3D. I have advanced the stealth of the patch attack by minimizing the patch size and disguising the adversarial pattern, striking an optimal balance between stealth and efficacy. Moreover, I develop single-modal attacks against camera-LiDAR fusion models for 3D object detection, utilizing adversarial patches. This method underscores that mere fusion of sensors does not assure robustness against adversarial attacks. Additionally, I study black-box attacks against MDE and OFE models, which are more practical and impactful as no model details are required and the models can be compromised through only queries. In parallel, I devise a self-supervised adversarial training method to harden MDE models without the necessity of ground-truth depth labels. This enhanced model is capable of withstanding a range of adversarial attacks, including those in the physical world. Through these innovative designs for both attack and defense, this research contributes to the development of more secure and robust 3D perception systems, particularly in the context of the real world applications.</p>

4. Analyzing Secure and Attested Communication in Mobile Devices

Muhammad Ibrahim, 01 October 2024
<p dir="ltr">To assess the security of mobile devices, I begin by identifying the key entities involved in their operation: the user, the mobile device, and the service or device being accessed. Users rely on mobile devices to interact with services and perform essential tasks. These devices act as gateways, enabling communication between the user and the back-end services. For example, a user may access their bank account via a banking app on their mobile device, which communicates with the bank’s back-end server. In such scenarios, the server must authenticate the user to ensure only authorized individuals can access sensitive information. However, beyond user authentication, it is crucial for connected services and devices to verify the integrity of the mobile device itself. A compromised mobile device can have severe consequences for both the user and the services involved.</p><p dir="ltr">My research focuses on examining the methods used by various entities to attest and verify the integrity of mobile devices. I conduct a comprehensive analysis of mobile device attestation from multiple perspectives. Specifically, I investigate how attestation is carried out by back-end servers of mobile apps, IoT devices controlled by mobile companion apps, and large language models (LLMs) accessed via mobile apps.</p><p dir="ltr">In the first case, back-end servers of mobile apps must attest to the integrity of the device to protect against tampered apps and devices, which could lead to financial loss, data breaches, or intellectual property theft. For instance, a music streaming service must implement strong security measures to verify the device’s integrity before transmitting sensitive content to prevent data leakage or unauthorized access.</p><p dir="ltr">In the second case, IoT devices must ensure they are communicating with legitimate companion apps running on attested mobile devices. Failure to enforce proper attestation for IoT companion apps can expose these devices to malicious attacks. An attacker could inject malicious code into an IoT device, potentially causing physical damage to the device or its surroundings, or even seizing control of the device, leading to critical safety risks, property damage, or harm to human lives.</p><p dir="ltr">Finally, in the third case, malicious apps can exploit prompt injection attacks against LLMs, leading to data leaks or unauthorized access to APIs and services offered by the LLM. These scenarios underscore the importance of secure and attested communication between mobile devices and the services they interact with.</p>
