This dissertation reports five experiments exploring the use of AI-based smart agents to support physician-patient interactions. In each experiment, a sample of female participants evaluates videotapes of simulated physician-patient interactions in a setting involving an early-stage breast cancer diagnosis. Experiment 1 manipulates communication style (empathetic/impassive) for both a human physician (played by an actor) and an avatar that mimics the human. Empathetic styles elicit more liking and trust from patients and are also more persuasive. The avatar loses less than the human physician on desirable patient outcomes when the communication style changes from empathetic to impassive. A mediation analysis shows that the communication style and physician type effects flow serially through liking and trust to persuasion.
Experiment 2 reports an extended replication, adding a new avatar with less resemblance to the human physician. The findings match those of Experiment 1: both avatars have similar effects on liking, trust, and persuasion and are similarly anthropomorphized. Experiment 3 examines whether the patient's mindset (hope/fear about the cancer prognosis) influences likely patient outcomes. The mindset manipulation does not influence patient outcomes, but we find support for the core serial mediation model (from liking to trust to persuasion). Experiment 4 explores whether it matters how the avatar is deployed. Introducing the avatar as the physician's assistant lowers its evaluations, perhaps because patients feel deprioritized. In this condition, the human physician is evaluated significantly higher on all outcome dimensions.
Experiments 1-4 focus on the first phase of a standard three-phase physician-patient interaction protocol. Experiment 5 examines communication style (empathetic/impassive) and physician type (human/avatar) effects across the three sequential phases. Patient outcomes improve monotonically over the three interaction phases across all study conditions. Overall, our studies show that an empathetic communication style is more effective in eliciting higher levels of liking, trust, and persuasion. The human physician and the avatar elicit similar levels of these desirable patient interaction outcomes. The avatar loses less when the communication style changes from empathetic to impassive, suggesting that patients may have lower expectations of empathy from avatars. Thus, if carefully deployed, smart agents acting as physicians' avatars may effectively support physician-patient interactions.

Doctor of Philosophy

Healthcare professionals often have the difficult task of breaking bad news to patients. Research has shown that a physician's communication style influences patient outcomes (liking, trust, persuasion, and compliance). Some physicians may adopt an impassive communication style to avoid emotional involvement with patients, while others may be overly empathetic and risk being perceived as inauthentic. These deficiencies persist despite an emphasis on developing physicians' communication skills.
As in other service domains, a new generation of humanoid service robots (HSRs) offers potential for supporting physician-patient interactions. The effectiveness of such Artificial Intelligence (AI)/smart agent supported physician-patient interactions will rest, in part, on the communication style designed into the smart agents. A patient interacting with a smart agent emulating a human physician may assess different cognitive capabilities (knowledge and expertise), attribute different motivations, and make different socio-cultural appraisals than when they interact with the physician in-person.
This research examines whether communication style (empathetic versus impassive), implemented via facial expression and vocal delivery, elicits different patient responses when patients interact with a smart agent (a physician's avatar) versus the physician in person. Findings suggest that an empathetic (vs. impassive) communication style elicits more positive patient responses, that avatar physicians fare at par with or better than the human physician, and that avatars lose less on patient outcomes when the communication style changes from empathetic to impassive.
The avatars' appearance does not play a role in persuasion. The avatars were similarly anthropomorphized, and participants' mindset (hope/fear) did not influence the outcomes. However, if avatars are introduced as assistants (versus standalone physicians), patients may feel downgraded or deprioritized, leading to lower evaluations for the avatars than for the human physician. The contrast created when the human physician introduces the avatar may have unintended consequences that lower the avatar's evaluation. Without a direct contrast, patients may be more receptive to avatar interactions, particularly as avatars become more familiar in service environments.
Our findings suggest that, if carefully deployed, smart agents acting as physicians' avatars may effectively support physician-patient interactions. Indeed, patients may have lower expectations of empathy from an avatar than from a human physician. This can facilitate more effective physician-patient interactions and elicit positive downstream effects on patient liking, trust, and compliance.
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/107853 |
Date | 21 January 2022 |
Creators | Ravella, Haribabu |
Contributors | Business, Chakravarti, Dipankar, Bagchi, Rajesh, Jiang, Juncai, Herr, Paul Michael |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertation |
Language | English |
Detected Language | English |
Type | Dissertation |
Format | ETD, application/pdf |
Rights | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International, http://creativecommons.org/licenses/by-nc-nd/4.0/ |