About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Learning user modelling strategies for adaptive referring expression generation in spoken dialogue systems

Janarthanam, Srinivasan Chandrasekaran January 2011 (has links)
We address the problem of dynamic user modelling for referring expression generation in spoken dialogue systems, i.e. how a spoken dialogue system should choose referring expressions for domain entities when addressing users with different levels of domain expertise, whose domain knowledge is initially unknown to the system. We approach this problem using a statistical planning framework: Reinforcement Learning in Markov Decision Processes (MDPs). We present a new reinforcement learning framework to learn user modelling strategies for adaptive referring expression generation (REG) in resource-scarce domains (i.e. where no large corpus exists for learning). As part of the framework, we present novel user simulation models that are sensitive to the referring expressions used by the system and are able to simulate users with different levels of domain knowledge. These models are shown to simulate real user behaviour more closely than baseline user simulation models. In contrast to previous approaches to user-adaptive systems, we do not assume that the user's domain knowledge is available to the system before the conversation starts. We show that, using a small corpus of non-adaptive dialogues, it is possible to learn an adaptive user modelling policy in resource-scarce domains using our framework. We also show that the learned user modelling strategies performed better, in terms of adaptation, than hand-coded baseline policies on both simulated and real users. With real users, the learned policy produced around a 20% increase in adaptation over the best-performing hand-coded adaptive baseline. We also show that adaptation to the user's domain knowledge improves task success (99.47% for the learned policy vs. 84.7% for the hand-coded baseline) and reduces conversation time (an 11% relative difference). This is because users found it easier to identify domain objects when the system used adaptive referring expressions during the conversations.
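The core learning setup described in the abstract can be illustrated with a minimal, self-contained sketch. Everything below is an illustrative assumption rather than the thesis's actual formulation: the two-level expertise states, the two expression types, the hand-coded simulated user, and the one-step (bandit-style) value update.

```python
import random

# Hypothetical sketch: learning which referring-expression type to use
# for users of different (estimated) expertise, from simulated dialogues.
STATES = ["novice", "expert"]        # coarse user-expertise estimate
ACTIONS = ["jargon", "descriptive"]  # referring-expression types

def simulated_user(true_type, action):
    """Hand-coded simulated user: experts resolve jargon quickly,
    novices need descriptive expressions. Returns a reward."""
    if true_type == "expert":
        return 1.0 if action == "jargon" else 0.5
    return 1.0 if action == "descriptive" else -0.5

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        true_type = rng.choice(STATES)
        state = true_type  # assume expertise has been inferred correctly
        if rng.random() < epsilon:           # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        r = simulated_user(true_type, action)
        q[(state, action)] += alpha * (r - q[(state, action)])  # one-step update
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # novices should receive descriptive expressions, experts jargon
```

The real framework learns over full dialogues with uncertainty about the user's expertise; this sketch collapses that to a contextual-bandit case purely to show the shape of the reward-driven adaptation.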
2

Example-Based Query Generation for Spontaneous Speech

MURAO, Hiroya, KAWAGUCHI, Nobuo, MATSUBARA, Shigeki, INAGAKI, Yasuyoshi 02 1900 (has links)
No description available.
3

Using Dialogue Acts in dialogue strategy learning : optimising repair strategies

Frampton, Matthew January 2008 (has links)
A Spoken Dialogue System's (SDS's) dialogue strategy specifies which action it will take depending on its representation of the current dialogue context. Designing it by hand involves anticipating how users will interact with the system, and/or repeated testing and refining, and so can be a difficult, time-consuming task. Since SDSs inevitably make understanding errors, a particularly important issue is how to design "repair strategies", the parts of the dialogue strategy which attempt to get the dialogue "back on track" following these errors. To try to produce better dialogue strategies with less time and effort, previous researchers have modelled a dialogue strategy as a sequential decision problem called a Markov Decision Process (MDP), and then applied Reinforcement Learning (RL) algorithms to example training dialogues to generate dialogue strategies automatically. More recent research has used training dialogues conducted with simulated rather than real users, and has learned which action to take in all dialogue contexts (a "full" as opposed to a "partial" dialogue strategy): simulated users allow more training dialogues to be generated, and the exploration of new dialogue contexts not present in an original dataset. As yet, however, limited insight has been provided as to which dialogue contextual features are important to include in the MDP and why. Indeed, a full dialogue strategy has not been learned from training dialogues with a realistic probabilistic user simulation derived from real user data, and then shown to work well with real users. This thesis investigates the value of adding new linguistically motivated contextual features to the MDP when using RL to learn full dialogue strategies for SDSs. These new features are recent Dialogue Acts (DAs). DAs indicate the role or intention of an utterance in a dialogue, e.g. "provide-information", an utterance being a complete unit of a speaker's speech, often bounded by silence.
An accurate probabilistic user simulation learned from real user data is used for generating training dialogues, and the recent DAs are shown to improve performance in testing, both in simulation and with real users. With real users, performance is also better than that of other competing learned and hand-crafted strategies. Analysis of the strategies, and further simulation experiments, shows how the DAs improve performance through better repair strategies. The main findings are expected to apply to SDSs in general; indeed, our strategies are learned and tested on real users in different domains (flight booking versus tourist information). Comparisons are also made to recent research which focuses on handling understanding errors in SDSs but does not use RL or user simulations.
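The idea of adding recent dialogue acts to the MDP state can be shown in a small sketch. The feature names and DA labels below are hypothetical, chosen only to illustrate how DA features separate contexts that a slot-status-only state representation conflates:

```python
# Hypothetical sketch: extending an MDP state with recent dialogue acts (DAs).
# The context keys and DA labels are illustrative, not the thesis's encoding.

def baseline_state(context):
    """State without DA features: only slot fill/confirm counts."""
    return (context["slots_filled"], context["slots_confirmed"])

def da_state(context, n=2):
    """State extended with the n most recent dialogue acts."""
    recent = tuple(context["dialogue_acts"][-n:])
    return baseline_state(context) + (recent,)

# Two contexts a baseline strategy cannot tell apart: identical slot status,
# but one user just provided information while the other rejected a confirmation,
# a situation that calls for a repair action.
ctx_provide = {"slots_filled": 2, "slots_confirmed": 1,
               "dialogue_acts": ["request-info", "provide-information"]}
ctx_reject = {"slots_filled": 2, "slots_confirmed": 1,
              "dialogue_acts": ["explicit-confirm", "negate"]}

assert baseline_state(ctx_provide) == baseline_state(ctx_reject)
assert da_state(ctx_provide) != da_state(ctx_reject)
```

Because the two contexts map to distinct states only in the DA-augmented representation, an RL learner can assign them different actions, which is the mechanism behind the improved repair strategies the abstract reports.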
4

Spiral Construction of Syntactically Annotated Spoken Language Corpus

Inagaki, Yasuyoshi, Kawaguchi, Nobuo, Matsubara, Shigeki, Ohno, Tomohiro 26 October 2003 (has links)
No description available.
5

Spoken Dialogue In Face-to-Face And Remote Collaborative Learning Environments

January 2014 (has links)
abstract: Research in the learning sciences suggests that students learn better by collaborating with their peers than by learning individually. Students working together as a group tend to generate new ideas more frequently and exhibit a higher level of reasoning. In this internet age, with the advent of massive open online courses (MOOCs), students across the world are able to access and learn material remotely. This creates a need for tools that support distant or remote collaboration. In order to build such tools, we need to understand the basic elements of remote collaboration and how it differs from traditional face-to-face collaboration. The main goal of this thesis is to explore how spoken dialogue varies between face-to-face and remote collaborative learning settings. Speech data were collected from student participants solving mathematical problems collaboratively on a tablet. Spoken dialogue was analyzed based on conversational and acoustic features in both settings. To look for collaborative differences in transactivity and dialogue initiative, the two settings are compared in detail using machine-learning classification techniques based on acoustic and prosodic features of speech. Transactivity is defined as the joint construction of knowledge by peers. The main contributions of this thesis are: a speech corpus for analyzing spoken dialogue in face-to-face and remote settings, and an empirical analysis of conversation, collaboration, and speech prosody in both settings. The results from the experiments show that the amount of overlap is lower in remote dialogue than in the face-to-face setting. There is a significant difference in transactivity among strangers. My research benefits the computer-supported collaborative learning community by providing an analysis that can be used to build more efficient tools for supporting remote collaborative learning. / Dissertation/Thesis / Masters Thesis Computer Science 2014
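The overlap finding above implies a concrete measurement step. The sketch below computes the fraction of total speech time during which speakers talk simultaneously, from turn timestamps; the `(speaker, start, end)` turn format is an assumption for illustration, not the corpus's actual annotation scheme:

```python
# Hypothetical sketch: measuring speaker overlap from turn timestamps.

def overlap_ratio(turns):
    """Fraction of total speech time during which more than one
    speaker is talking. Turns are (speaker, start_s, end_s) tuples."""
    events = []
    for _, start, end in turns:
        events.append((start, 1))   # a speaker starts
        events.append((end, -1))    # a speaker stops
    events.sort()                   # sweep-line over turn boundaries
    active = 0
    prev_t = None
    overlap = total = 0.0
    for t, delta in events:
        if prev_t is not None and active > 0:
            span = t - prev_t
            total += span           # time with at least one speaker
            if active > 1:
                overlap += span     # time with simultaneous speech
        active += delta
        prev_t = t
    return overlap / total if total else 0.0

face_to_face = [("A", 0.0, 2.0), ("B", 1.5, 3.0)]  # overlapping turns
remote = [("A", 0.0, 2.0), ("B", 2.5, 4.0)]        # no overlap
print(overlap_ratio(face_to_face), overlap_ratio(remote))
```

On these toy turns the face-to-face pair overlaps for 0.5 s out of 3.0 s of speech, while the remote pair does not overlap at all, mirroring the direction of the thesis's finding.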
6

Task and User Adaptation based on Character Expression for Spoken Dialogue Systems / 音声対話システムのためのキャラクタ表現に基づくタスク・ユーザ適応

Yamamoto, Kenta 23 March 2023 (has links)
Kyoto University / New system, doctoral program / Doctor of Informatics / 甲第24728号 / 情博第816号 / 新制||情||137 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Tatsuya Kawahara, Professor Takatsune Kumada, Professor Sadao Kurohashi / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
7

Development of an English public transport information dialogue system

Vejman, Martin January 2015 (has links)
This thesis presents the development of an English spoken dialogue system based on the Alex dialogue system framework. The work describes the adaptation of the framework's components to a different domain and language. The system provides public transport information for New York. The work involves creating a statistical model and deploying a custom Kaldi speech recogniser, whose performance proved better than that of the Google Speech API in a comparison based on subjective user satisfaction gathered through crowdsourcing.
8

Evolutionary reinforcement learning of spoken dialogue strategies

Toney, Dave January 2007 (has links)
From a system developer's perspective, designing a spoken dialogue system can be a time-consuming and difficult process. A developer may spend a lot of time anticipating how a potential user might interact with the system and then deciding on the most appropriate system response. These decisions are encoded in a dialogue strategy, essentially a mapping between anticipated user inputs and appropriate system outputs. To reduce the time and effort associated with developing a dialogue strategy, recent work has concentrated on modelling the development of a dialogue strategy as a sequential decision problem. Using this model, reinforcement learning algorithms have been employed to generate dialogue strategies automatically. These algorithms learn strategies by interacting with simulated users. Some progress has been made with this method but a number of important challenges remain. For instance, relatively little success has been achieved with the large state representations that are typical of real-life systems. Another crucial issue is the time and effort associated with the creation of simulated users. In this thesis, I propose an alternative to existing reinforcement learning methods of dialogue strategy development. More specifically, I explore how XCS, an evolutionary reinforcement learning algorithm, can be used to find dialogue strategies that cover large state spaces. Furthermore, I suggest that hand-coded simulated users are sufficient for the learning of useful dialogue strategies. I argue that the use of evolutionary reinforcement learning and hand-coded simulated users is an effective approach to the rapid development of spoken dialogue strategies. Finally, I substantiate this claim by evaluating a learned strategy with real users. Both the learned strategy and a state-of-the-art hand-coded strategy were integrated into an end-to-end spoken dialogue system. 
The dialogue system allowed real users to make flight enquiries using a live database for an Edinburgh-based airline. The performance of the learned and hand-coded strategies was compared. The evaluation results show that the learned strategy performs as well as the hand-coded one (81% and 77% task completion respectively) but takes much less time to design (two days instead of two weeks). Moreover, the learned strategy compares favourably with previous user evaluations of learned strategies.
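The evolutionary approach can be illustrated with a much-simplified sketch: not XCS itself, but a (1+1)-style mutation search over a tabular dialogue strategy, scored against a hand-coded simulated user. All state, action, and reward definitions below are invented for illustration:

```python
import random

# Simplified sketch (not full XCS): evolutionary search over a tabular
# dialogue strategy, evaluated against a hand-coded simulated user.
STATES = ["no_info", "unconfirmed", "confirmed"]
ACTIONS = ["ask", "confirm", "book"]

def evaluate(strategy, rng, n=200):
    """Average reward over n simulated flight-booking dialogues."""
    total = 0.0
    for _ in range(n):
        state, reward = "no_info", 0.0
        for _ in range(10):  # maximum dialogue length in turns
            action = strategy[state]
            if state == "no_info" and action == "ask":
                state = "unconfirmed"
            elif state == "unconfirmed" and action == "confirm":
                # the simulated user confirms correctly 90% of the time
                state = "confirmed" if rng.random() < 0.9 else "unconfirmed"
            elif state == "confirmed" and action == "book":
                reward += 1.0   # task completed
                break
            else:
                reward -= 0.1   # penalise unhelpful actions
        total += reward
    return total / n

def evolve(generations=100, seed=0):
    """(1+1) evolution: mutate one rule, keep the child if it scores >= parent."""
    rng = random.Random(seed)
    best = {s: rng.choice(ACTIONS) for s in STATES}
    best_fit = evaluate(best, rng)
    for _ in range(generations):
        child = dict(best)
        child[rng.choice(STATES)] = rng.choice(ACTIONS)
        fit = evaluate(child, rng)
        if fit >= best_fit:
            best, best_fit = child, fit
    return best

print(evolve())  # search tends toward ask -> confirm -> book
```

XCS additionally evolves generalising condition-action rules with fitness based on prediction accuracy; this sketch keeps only the core idea of improving a strategy by mutation and selection against simulated dialogues.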
9

Linguistic Adaptations in Spoken Human-Computer Dialogues - Empirical Studies of User Behavior

Bell, Linda January 2003 (has links)
This thesis addresses the question of how speakers adapt their language when they interact with a spoken dialogue system. In human–human dialogue, people continuously adapt to their conversational partners at different levels. When interacting with computers, speakers also to some extent adapt their language to meet (what they believe to be) the constraints of the dialogue system. Furthermore, if a problem occurs in the human–computer dialogue, patterns of linguistic adaptation are often accentuated. In this thesis, we used an empirical approach in which a series of corpora of human–computer interaction were collected and analyzed. The systems used for data collection included both fully functional stand-alone systems in public settings, and simulated systems in controlled laboratory environments. All of the systems featured animated talking agents, and encouraged users to interact using unrestricted spontaneous language. Linguistic adaptation in the corpora was examined at the phonetic, prosodic, lexical, syntactic and pragmatic levels. Knowledge about users' linguistic adaptations can be useful in the development of spoken dialogue systems. If we are able to adequately describe their patterns of occurrence (at the different linguistic levels at which they occur), we will be able to build more precise user models, thus improving system performance. Our knowledge of linguistic adaptations can be useful in at least two ways: first, it has been shown that linguistic adaptations can be used to identify (and subsequently repair) errors in human–computer dialogue. Second, we can try to subtly influence users to behave in a certain way, for instance by implicitly encouraging a speaking style that improves speech recognition performance.
