The growing presence of algorithm-generated recommendations in AI-powered services highlights the importance of responsible systems that explain their outputs in a human-understandable form, especially in an automotive context. Implementing explainability in AI-powered eco-driving recommendations is important to ensure that drivers understand the underlying reasoning behind them. Previous literature on explainable AI (XAI) has been primarily technology-centered, and only a few studies involve the end-user perspective. Little is known about drivers' needs and requirements for explainability in an AI-powered eco-driving context. This study addresses the attributes that make an explanation “satisfactory”, i.e., a satisfactory interface between humans and AI. It uses scenario-based interviews to understand the explainability attributes that influence truck drivers' intention to use eco-driving recommendations. Through thematic analysis, seven attributes were categorized as context-dependent (Format, Completeness, Accuracy, Timeliness, Communication) or generic (Reliability, Feedback loop). The study contributes context-dependent attributes along three design dimensions: Presentational, Content-related, and Temporal aspects of explainability. The findings provide an empirical foundation for understanding end-users' explainability needs and offer valuable insights for UX and system designers in eliciting end-user requirements.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hh-51060 |
Date | January 2023 |
Creators | Gjona, Ermela |
Publisher | Högskolan i Halmstad, Akademin för informationsteknologi |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |