1

Listening behaviors in Japanese: Aizuchi and head nod use by native speakers and second language learners

Hanzawa, Chiemi 01 December 2012 (has links)
The purpose of the present study is to investigate similarities and differences in the listening behaviors of native speakers and learners of Japanese, focusing on the production of aizuchi and head nods. The term aizuchi is often used interchangeably with the word backchannel; both refer to the listener's use of short utterances such as oh or uh huh in English or hai, un, or aa in Japanese. In this study, aizuchi is defined as a short verbal utterance produced in response to the primary speaker's speech in Japanese. A total of 14 NS-NS and 14 NS-NNS dyads were formed to elicit native speakers' and learners' aizuchi and head nods. With the exception of a few participants in their late twenties, most participants were college-age female native speakers and learners of Japanese. The learners were native speakers of American English who had been classified as intermediate/high-intermediate learners of Japanese. Each interaction included a semi-free conversation and a narrative storytelling task, both of which were recorded and transcribed for analysis. The findings indicate that the differences between native speakers' and learners' use of aizuchi and head nods lie not mainly in frequency but in type and function. When frequency was measured on a time-based scale (occurrences per 60 seconds), differences were found in the frequency of head nods and in the total frequency of aizuchi and head nods combined. However, no significant difference was found when frequency was measured against the amount of speech the speakers produced. Aizuchi were categorized into 16 groups to investigate differences in type. The results show that the learners used fewer aa-group, hee-group, and iya-group aizuchi but more soo-group aizuchi than the native speakers.
The number of different aizuchi each participant used was also measured to examine variety, and both the native speakers and the learners produced a similar number of different aizuchi. Head nods were analyzed by nodding count, which revealed that multiple head nods were more common in the learners' behavior. The functions of aizuchi and head nods were categorized into seven groups, and the distribution of these functions was analyzed. The results indicate that learners tended to use more aizuchi to express understanding and to react to their interlocutors' response solicitations, while aizuchi that do not display the listener's attitude were more frequent among native speakers. The distribution of head-nod functions was similar between the two groups. By further examining the types and functions of aizuchi and head nods, the study sheds light on which types of aizuchi learners may be lacking or overusing. Pedagogical implications are drawn from the results.
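The two frequency measures contrasted in the abstract above (a time-based rate versus a rate normalized by the amount of speech produced) can be sketched as follows. This is an illustrative sketch only: the function names, the per-100-words unit, and the sample values are assumptions for demonstration, not the study's actual data or method.

```python
def per_60_seconds(count, duration_seconds):
    """Time-based rate: occurrences per 60 seconds of interaction."""
    return count * 60 / duration_seconds

def per_100_words(count, words_heard):
    """Speech-based rate: occurrences per 100 words of the speaker's talk.
    The 100-word unit is a hypothetical choice of normalization."""
    return count * 100 / words_heard

# Hypothetical example: a listener who produced 45 aizuchi during a
# 300-second conversation in which the speaker uttered 600 words.
rate_time = per_60_seconds(45, 300)    # 9.0 aizuchi per minute
rate_speech = per_100_words(45, 600)   # 7.5 aizuchi per 100 words
```

As the abstract notes, the choice of denominator matters: group differences appeared under the time-based measure but not under the speech-based one.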
2

Recipient response behaviour during Japanese storytelling: a combined quantitative/multimodal approach

Walker, Neill Lindsey Unknown Date
No description available.
3

Recipient response behaviour during Japanese storytelling: a combined quantitative/multimodal approach

Walker, Neill Lindsey 11 1900 (has links)
This study explores the role of speaker and listener gaze in the production of recipient responses, often called backchannels or, in Japanese, aizuchi. Using elicited narrative audio/video data, speaker gaze and recipient response behaviours were first analyzed quantitatively. The results showed that the majority of recipient responses are made while the speaker is gazing at the recipient. Next, a qualitative multimodal analysis was performed on a specific type of recipient response that occurred both during and in the absence of speaker gaze. The results showed that recipients make good use of the state of the speaker's gaze to regulate the speaker's talk and to negotiate for a pause, a repair, or a turn at talk. These findings suggest that what are currently known as backchannels are only a small part of a much larger sequential multimodal system that is inseparable from the ongoing talk. / Japanese Language and Linguistics
4

Identifying and Understanding the Difference Between Japanese and English when Giving Walking Directions

Barney, Keiko Moriyama 01 March 2015 (has links) (PDF)
To better identify and understand the differences between Japanese and English, the task of giving walking directions was used. Japanese and American public facilities (10 each) were randomly chosen, and data were collected over the phone to examine these differences in terms of five communication styles: 1) politeness, 2) indirectness, 3) self-effacement, 4) back-channel feedback (aizuchi), and 5) other linguistic and cognitive differences related to space and giving directions. The study confirmed what the author reviewed in the literature: Japanese speakers are more polite; English speakers tend to give directions simply and precisely; Japanese speakers prefer pictorial information while most Americans prefer linguistic information; and Japanese is both a topic-oriented and an addressee-oriented language. The information revealed by this study will help learners of Japanese develop the skills needed for proficiency in the target language and will also highlight important differences between the two languages.
