
Navigating uncertainty: enhancing decision-making in healthcare and interpretable AI adoption

This dissertation explores two critical aspects of decision-making under uncertainty: continuous data monitoring in healthcare and the impact of AI interpretability on adoption, trust, and performance in repeated managerial decisions.

The first chapter presents a novel finding: maintaining a low mean arterial pressure (MAP) is associated with an increased risk of adverse events, in stark contrast to current clinical practice. We propose incorporating this low blood pressure threshold into blood pressure management guidelines for left ventricular assist device (LVAD) recipients. This change could improve survival rates and enhance the appeal of this life-saving therapy.

The second and third chapters focus on managerial decision-making, empirically investigating how different interpretability formats affect AI adoption and trust when managers make repeated decisions in an uncertain environment. Many business decisions are made under high uncertainty, which creates additional challenges for decision-makers: because outcomes are noisy, even an optimal decision can be perceived as wrong. The experimental results show that providing interpretability does not necessarily increase AI adoption and may even hinder it. AI adoption is significantly higher in the more uncertain business environment, although trust in AI is lower there across all interpretability types. Moreover, the results reveal that AI adoption increases over time under low uncertainty, whereas it decreases under high uncertainty. Notably, continuously presenting AI performance as a benchmark against decision-makers’ own performance promotes trust in AI and mitigates the negative adoption trend under high uncertainty.

Together, the insights from these studies contribute to healthcare management and AI-assisted decision-making under uncertainty. They highlight the importance of continuous monitoring, the influence of different interpretability formats, and the value of AI decision support in enhancing decision-making processes and outcomes under uncertainty.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/48763
Date: 14 May 2024
Creators: Altintas, Onur
Contributors: Seidmann, Abraham
Source Sets: Boston University
Language: en_US
Type: Thesis/Dissertation
