
Evaluating Trust in AI-Assisted Bridge Inspection through VR

The integration of Artificial Intelligence (AI) into collaborative tasks has gained momentum, with particular implications for critical infrastructure maintenance. This study examines the assurance goals of AI (security, explainability, and trustworthiness) within Virtual Reality (VR) environments for bridge maintenance. Adopting a within-subjects design, this research leverages VR to simulate real-world bridge maintenance scenarios and gauge user interactions with AI tools. As the industry transitions from paper-based to digital bridge maintenance, this investigation underscores the essential roles of security and trust in adopting AI-assisted methodologies. Recent advancements in AI assurance within critical infrastructure highlight its central role in ensuring safe, explainable, and trustworthy AI-driven solutions.

Master of Science

In today's rapidly advancing world, the traditional methods of inspecting and maintaining our bridges are being revolutionized by digital technology and artificial intelligence (AI). This study delves into the emerging role of AI in bridge maintenance, a field historically reliant on manual inspection. With the implementation of AI, we aim to enhance the efficiency and accuracy of assessments, ensuring that our bridges remain safe and functional.

Our research employs virtual reality (VR) to create a realistic setting for examining how users interact with AI during bridge inspections. This immersive approach allows us to observe the decision-making process in a controlled environment that closely mimics real-life scenarios, helping us understand the potential benefits and challenges of incorporating AI into maintenance routines.

One of the critical challenges we face is calibrating trust in AI. Too little trust could undermine the effectiveness of AI assistance, while too much could lead to overreliance and potential biases. Furthermore, the use of digital systems introduces the risk of cyber threats, which could compromise the security and reliability of the inspection data.

Our research also investigates the impact of AI-generated explanations on users' decisions: in essence, whether providing the rationale behind AI's recommendations helps users make better judgments during inspections. The ultimate objective is to develop AI tools that are not only advanced but also understandable and reliable for those who use them, even if they do not have a deep background in technology. As we integrate AI into bridge inspections, it is vital to ensure that such systems are protected against cyber threats and that they function as reliable companions to human inspectors. This study seeks to pave the way for AI to become a trusted ally in maintaining the safety and integrity of our infrastructure.

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/117917
Date: 29 January 2024
Creators: Pathak, Jignasu Yagnesh
Contributors: Computer Science and Applications, Lourentzou, Ismini, Sarlo, Rodrigo, Luther, Kurt
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Language: English
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
