<p> When reviewing the performance of Intelligent Virtual Assistants (IVAs), it is desirable to prioritize conversations involving misunderstood human inputs. These conversations uncover errors in natural language understanding and help prioritize and expedite improvements to the IVA. As human reviewer time is valuable and manual analysis is time-consuming, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds improvement. A system for measuring the post-hoc <i>risk of missed intent</i> associated with a single human input is presented. Numerous indicators of risk are explored and implemented. These indicators are combined using various means and evaluated on real-world data. In addition, the system's ability to adapt to different domains of language is explored. Finally, the system's performance in identifying errors in IVA understanding is compared to that of human reviewers, and multiple aspects of system deployment for commercial use are discussed.</p><p>
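The abstract states that multiple risk indicators are combined to score each human input; one simple way such a combination could work is a weighted average of per-indicator scores, with conversations ranked by the result for reviewer triage. The sketch below is illustrative only: the indicator names, values, and equal weighting are assumptions, not details taken from the dissertation.

```python
def combined_risk(indicator_scores, weights=None):
    """Weighted average of per-indicator risk scores, each in [0, 1]."""
    if weights is None:
        weights = [1.0] * len(indicator_scores)
    total = sum(w * s for w, s in zip(weights, indicator_scores))
    return total / sum(weights)

# Each turn gets scores from hypothetical risk indicators, e.g.
# (low NLU confidence, user rephrasing detected, explicit correction).
turns = {
    "turn_1": [0.2, 0.1, 0.0],
    "turn_2": [0.9, 0.8, 1.0],
    "turn_3": [0.4, 0.3, 0.2],
}

# Rank turns by combined risk, highest first, to prioritize reviewer time.
ranked = sorted(turns, key=lambda t: combined_risk(turns[t]), reverse=True)
print(ranked)  # turn_2 first: it triggered the most risk indicators
```

A weighted average is only one of the "various means" the abstract mentions; alternatives such as taking the maximum indicator score or training a classifier over the indicators would fit the same ranking pipeline.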
Identifier | oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:10816823 |
Date | 26 September 2018 |
Creators | Beaver, Ian |
Publisher | The University of New Mexico |
Source Sets | ProQuest.com |
Language | English |
Detected Language | English |
Type | thesis |