
Reading with Robots: A Platform to Promote Cognitive Exercise through Identification and Discussion of Creative Metaphor in Books

Maintaining cognitive health is often a pressing concern for aging adults, and given the world's shifting age demographics, it is impractical to assume that older adults will be able to rely on individualized human support for doing so. Recently, interest has turned toward technology as an alternative. Companion robots offer an attractive vehicle for facilitating cognitive exercise, but the language technologies guiding their interactions are still nascent; in the elder-focused human-robot systems proposed to date, interactions have been limited to motion, button presses, and canned speech. The inability of these systems to participate autonomously in conversational discourse limits their ability to engage users at a cognitively meaningful level.

I addressed this limitation by developing a platform for human-robot book discussions, designed to promote cognitive exercise by encouraging users to consider the authors' underlying intentions in employing creative metaphors. The choice of book discussions as the backdrop for these conversations has an empirical basis in neuroscience and social science research finding that frequent reading, even in late adulthood, is correlated with a decreased likelihood of exhibiting symptoms of cognitive decline. The more targeted focus on novel metaphors within those conversations stems from prior work showing that processing novel metaphors is a cognitively challenging task for young adults, and even more so for older adults with and without dementia.

A central contribution of this work is the first computational method for modelling metaphor novelty in word pairs. I show that the method outperforms baseline strategies as well as a standard metaphor detection approach, and additionally find that incorporating a sentence-based classifier as a preliminary filtering step when applying the model to new books yields a better final set of scored word pairs. I trained and evaluated my methods using new, large corpora from two sources, and release those corpora to the research community. In developing the corpora, an additional contribution was the discovery that training a supervised regression model to automatically aggregate the crowdsourced annotations outperforms existing label aggregation strategies. Finally, I show that automatically generated questions adhering to the Questioning the Author strategy are comparable to human-generated questions in terms of naturalness, sensibility, and question depth; the automatically generated questions score slightly higher than human-generated questions in terms of clarity. I close by presenting findings from a usability evaluation in which users engaged in thirty-minute book discussions with a robot using the platform, showing that users find the platform likeable and engaging.
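The label aggregation idea above can be illustrated with a minimal sketch: instead of averaging each item's crowdsourced ratings directly, summarize them as features and fit a regression model against gold-standard scores. This is not the dissertation's implementation; the feature set (mean, standard deviation of ratings) and the toy data are illustrative assumptions, and the regression is plain ordinary least squares.

```python
# Minimal sketch (assumed, not the dissertation's method): learning to
# aggregate crowdsourced novelty ratings with a supervised regression
# model rather than a simple average.
from statistics import mean, stdev

def features(ratings):
    """Summarize one item's crowd ratings as regression features."""
    return [1.0, mean(ratings), stdev(ratings)]  # bias, mean, spread

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def predict(w, ratings):
    """Aggregated novelty score for one item's crowd ratings."""
    return sum(wi * xi for wi, xi in zip(w, features(ratings)))

# Toy training data: (crowd ratings for a word pair, gold novelty score).
train = [
    ([1, 1, 2], 1.2),
    ([3, 3, 2], 2.8),
    ([2, 2, 2], 2.0),
    ([1, 3, 3], 2.5),
    ([1, 1, 1], 1.0),
]
w = fit_ols([features(r) for r, _ in train], [g for _, g in train])
```

The learned weights let the aggregator discount high-disagreement items differently from unanimous ones, which a plain mean cannot do.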

Identifier: oai:union.ndltd.org:unt.edu/info:ark/67531/metadc1248384
Date: 08 1900
Creators: Parde, Natalie
Contributors: Nielsen, Rodney D., Blanco, Eduardo, Jin, Wei, Parsons, Thomas
Publisher: University of North Texas
Source Sets: University of North Texas
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: xii, 176 pages, Text
Rights: Public, Parde, Natalie. Copyright is held by the author, unless otherwise noted. All rights reserved.
