INTRODUCTION: Conversational agents have attracted great interest in the field of mental health, frequently appearing in the news as a potential solution to the shortage of clinicians relative to patients. Until very recently, however, little research had been conducted with patients who have mental health conditions; most studies involved only healthy controls. Little is known about whether people with mental health conditions would want to use conversational agents, or how comfortable they would feel hearing results from a chatbot that they would normally hear from a clinician.
OBJECTIVES: We asked patients with mental health conditions to have a chatbot read a results document to them and then tell us how they found the experience. To our knowledge, this is among the earliest studies to examine actual patient perspectives on conversational agents for mental health, and it will inform whether this avenue of research is worth pursuing. Our specific aims were, first and foremost, to determine the usability of such conversational agent tools; second, to determine their likely adoption among individuals with mental health disorders; and third, to determine whether users would develop a sense of trust in the artificial agent.
METHODS: We designed and implemented a conversational agent for mental health tracking, along with a supporting scale to measure its efficacy in the selected domains of Adoption, Usability, and Trust. These domains were chosen based on the phases of interaction patients would have during a conversation with a conversational agent, and were adapted for simplicity of measurement. Patients were briefly introduced to the technology, to our particular conversational agent, and to a demo before using it themselves and then completing the survey based on the supporting scale.
RESULTS: In the Adoption domain (mean 3.27, SD 0.99), subjects typically felt less than content with adoption but believed that the conversational agent could become commonplace without complicated technical hurdles. In the Usability domain (mean 3.40, SD 0.93), subjects tended to feel more content with the usability of the conversational agent. In the Trust domain (mean 2.65, SD 0.95), subjects felt least content with trusting the conversational agent.
CONCLUSIONS: In summary, although conversational agents are now readily accessible and relatively easy to use, there remains a gap to bridge before patients in mental health settings are willing to trust a conversational agent over speaking directly with a clinician. With increased attention, clinic adoption, and patient experience, however, we believe conversational agents could be readily adopted for simple or routine tasks and for requesting information that would otherwise require time, cost, and effort to acquire. The field is still young, and with advances in digital technologies and artificial intelligence, capturing the essence of natural language conversation could transform this currently simple tool with limited use-cases into a powerful one for the digital clinician.
Identifier | oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/36733 |
Date | 18 June 2019 |
Creators | Vaidyam, Aditya Nrusimha |
Contributors | Flynn, David |
Source Sets | Boston University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |
Rights | Attribution-NonCommercial-ShareAlike 4.0 International, http://creativecommons.org/licenses/by-nc-sa/4.0/ |