About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1. 手勢在中文會話中的語用功能 / Gestural contextualization in Chinese conversation

曾惠鈴, Zeng, Hui-Ling
本篇論文是藉由研究在會話中觀察到的手勢,說明手勢在溝通上的重要性。根據我們的觀察,說話者在溝通上使用的手勢有兩種,一種是「說到做到」,一種是「言行不一」。分析在會話中出現的手勢後,我們得知「說到做到」手勢最常傳遞名詞訊息,其次是動詞訊息,而這類手勢的功用在於使說話者的言語表達更形清楚。以具像和指示手勢傳遞的「言行不一」訊息,包括發生在主詞和動詞位置的「角色轉換」,其功用是使說話者表達對談論中人物的說明和詮釋,以及最常和動詞一起出現的「補充訊息」,可補充說話者在事件中沒有以言語說出的訊息。說話者在溝通中使用手勢,就能提供更多訊息給聽者,同時也可使言語表達更加清楚生動,便於聽者了解。 / There are two kinds of gestures in communication. One is "congruous gestures", in which the gestural messages are identical to the meanings of the simultaneous speech. The other is "incongruous gestures", which fall into two groups: (1) "footing shift", where the speakers play the roles of the characters in their narrations, and (2) "message complement", where the gestural messages differ from the meanings of the corresponding spoken words. Since congruous gestures convey the same messages as the simultaneous speech, we want to find out why speakers use them; for incongruous gestures, we likewise ask why speakers perform them. We collected Chinese conversational data for our analyses. In our data, the messages conveyed by congruous gestures mostly correspond to the meanings of nouns, followed by verbs; the function of congruous gestures is to make the speakers' verbal messages clearer and more vivid. The gestural types of incongruous gestures are iconic and deictic. The "footing shift" occurs in the positions of the noun and the verb; its function is to let speakers give their own interpretations and evaluations of the characters. The "message complement" mostly appears with the verb; its function is to supply messages the speakers do not verbalize in the events. Gesturing in communication is very important: it enables speakers to provide more unspoken information and to convey verbalized messages more clearly and vividly for the hearers to understand.
2. 語言與手勢的觀點表現 / Representations of Viewpoints in Language and Gesture

謝培禹, Hsieh, Pei Yu
本文旨在探討在中文的日常生活對話當中,當說話者談論到他人過去事件時,語言與手勢的觀點表現。以McNeill曾經提到語言與手勢能夠共同表達觀點的說法作為根據,本研究也探討這些伴隨著語言的手勢是否和語言表達相同或者是不同的觀點。 本研究的架構根據Koven(2002)的說話者角色理論(speaker role inhabitance)和McNeill(1992)提出的事件當中人物的觀點(character viewpoint)和旁觀者的觀點(observer viewpoint),定義了三種觀點—當下說話者觀點(speaker viewpoint)、事件當中人物觀點(character viewpoint)和旁觀者觀點(observer viewpoint)。而在手勢的分析上,本研究提出五個手勢特徵—手勢使用空間範圍,手勢使用單手或是雙手、手勢表達語意的stroke階段執行時間的長短、手勢stroke階段同一手部動作是否有重複的現象,以及手勢是否伴隨身體上其他的動作作為五個手勢觀點分析的關鍵指標。 量化研究發現,說話者在生活對話當中描述他人過去事件使用搭配語言的手勢,在每一種觀點的分布和語言上的表現不同。事件當中人物觀點在語言上雖然鮮少被採用,在手勢上卻是最常被表達的觀點。相反的,儘管當下說話者觀點在語言上也常出現,手勢上卻很罕見。另外,旁觀者觀點則在語言上和手勢上的分布都很頻繁。針對同一事件語言與手勢共同表達觀點的量化研究則發現,百分之六十四點七的手勢表達了和語言不同的觀點。因此,本研究說明儘管語言和手勢可以合作表達觀點,手勢卻更常表達和伴隨語言不同的觀點。 語言與手勢合作表達觀點的探討不僅說明語言與手勢如何互相協調組織要表達的訊息和觀點,更進一步引領我們去探討在人與人溝通時,語言與手勢展現的認知過程。本研究藉由兩個手勢產製的假說—the Lexical Semantics Hypothesis和the Interface Hypothesis,提供了針對本研究結果理論上的解釋。而每一個假說也都由相關的研究結果作為證據支持。另外,the Interface Hypothesis還可以針對語言與手勢在表達觀點時的分工現象提出合理的解釋。 / This thesis explores linguistic and gestural representations of viewpoints in descriptions of third-person past events within Chinese conversational discourse. Following McNeill's idea that language and gesture are co-expressive in viewpoints, the present study also investigates whether speakers' speech-accompanying gestures work in collaboration with language in expressing the same or different viewpoints. The framework of this study draws on Koven's (2002) framework of speaker role inhabitance and McNeill's (1992) notion of character and observer viewpoint, and defines three viewpoints: speaker, character, and observer viewpoint. In analyzing gestural viewpoints, the present study recognizes five gestural features (gestural space, handedness, stroke duration, stroke frequency, and the involvement of other parts of the body) as distinctive criteria for identifying different viewpoints.
A quantitative study of linguistic and gestural viewpoints shows that, in the distributions of the three viewpoints, speech-accompanying gesture in descriptions of third-person past events within conversational contexts displays patterns different from those found in language. Character viewpoint, which is rarely adopted in language, is the viewpoint most often conveyed in gesture. On the other hand, although speaker viewpoint is commonly expressed in language, it rarely occurs in gesture. Observer viewpoint, in addition, is frequent in both the linguistic and gestural channels. With respect to the collaborative expression of viewpoints in language and gesture within descriptions of the same event, the quantitative study shows that 64.7% of all gestures produced in the current data represent a viewpoint different from that conveyed in language. Therefore, this study suggests that while language and gesture are co-expressive in terms of viewpoints, gesture more often represents a viewpoint different from that of the accompanying speech. The collaborative expression of viewpoints in language and gesture shows how speech and gesture coordinate with each other in organizing information and expressing viewpoints, and further leads us to the cognitive process that underlies both modalities in daily human communication. Two hypotheses, the Lexical Semantics Hypothesis and the Interface Hypothesis, are invoked to provide theoretical accounts for the findings of this study; each is supported by different pieces of evidence from the current data. The Interface Hypothesis can further explain the division of labor between language and gesture in expressing viewpoints, which the Lexical Semantics Hypothesis cannot.
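The quantitative comparison above is essentially a per-token cross-tabulation of linguistic against gestural viewpoint. As a minimal sketch of that computation (the annotation tuples below are invented toy data, not the thesis's corpus):

```python
from collections import Counter

# Toy annotations: (linguistic viewpoint, gestural viewpoint) per gesture token.
# These values are illustrative only, not the thesis's actual data.
tokens = [
    ("speaker", "character"), ("observer", "observer"),
    ("observer", "character"), ("speaker", "observer"),
    ("character", "character"), ("observer", "character"),
]

# Share of gestures whose viewpoint differs from the co-occurring speech
mismatch = sum(1 for ling, gest in tokens if ling != gest)
rate = 100 * mismatch / len(tokens)
print(f"mismatch rate: {rate:.1f}%")

# Per-channel distributions, as in a channel-by-channel comparison
print("language:", Counter(ling for ling, _ in tokens))
print("gesture: ", Counter(gest for _, gest in tokens))
```

On the thesis's real annotations the same tabulation would yield the reported 64.7% mismatch rate.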
3. 人機介面中的形狀辨識及其應用 / Shape recognition and its application in human-computer interaction

鄭聖耀, Cheng, Sheng Yao
電腦硬體的發展日新月異,電腦在運算的能力有長足的提升,遠遠超過一般人腦的計算能力,隨著電腦的普及率大幅提高,電腦由以往為專業領域的工具轉變為家庭不可或缺的商品,一般大眾也成為電腦的使用者,與電腦溝通的技術(即人機介面)逐漸重要。在多樣人機介面當中,自然的人機介面尤為重要,在手持式計算裝置及平板電腦上的手寫或手勢是人機介面中較自然的方式,因此本論文將對手寫軌跡以及手勢辨識進行研究。由於此類的人機介面自由度較高,我們利用傅立葉描述元(Fourier descriptor)以及shape context,皆為平移、旋轉、縮放等rigid transformation下維持不變的方法。在手繪圖形,我們收集114位使用者的手繪資料,繪圖的過程中,依使用者直觀的方式,繪圖於電腦的觸控板,而這些使用者幾乎為首次使用觸控筆。當我們利用傅立葉描述元時,可達到67%辨識率;而使用shape context時,有90%的準確率。另外,我們將此技術應用於手勢辨識,收集348張手勢的照片,同樣使用傅立葉描述元以及shape context,其辨識率各為62%以及70%。 由於我們可以利用以上二方法定義出距離,即可使用K-Nearest Neighbor(KNN)為分類的方法。分別透過傅立葉描述元以及shape context所定義的距離,在辨識3D幾何物件約可達75%與95%,而在手勢辨識約有78%以及82%的辨識率。 / The cost of computing devices has dropped significantly in recent years, enabling diversified applications that require natural human-computer interaction, such as pen-based computing and gesture-based communication. Whereas the automatic recognition of handwriting has been studied quite extensively, research on hand-drawn geometric shapes has received relatively little attention. In this thesis, we investigate an effective method to recognize hand-drawn geometric shapes and hand gestures. Because such natural human-computer interfaces allow a high degree of freedom, we apply two methods that are invariant under rigid transformations (translation, rotation, and scaling), namely the Fourier descriptor (FD) and shape context (SC), to aid shape recognition. For hand-drawn shapes, we collected 114 users' free-hand drawings on a Tablet PC; in this setting we achieve an accuracy of 67% with FD and 90% with SC. For the gesture-based interface, we gathered 348 pictures of hand gestures and obtained classification rates of 62% with FD and 70% with SC. Since FD and SC both induce distance measures, we can use a K-Nearest Neighbor (KNN) classifier to improve the recognition rate. Incorporating the KNN classifier raises the accuracy on 3D geometric objects to about 75% and 95%, with distances measured by FD and SC respectively; for hand gestures, the improved accuracies are about 78% with FD and 82% with SC.
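The abstract names the techniques but not their implementation. As a rough sketch of how an invariant Fourier descriptor with a nearest-neighbor vote might look (dropping the DC term for translation invariance, taking magnitudes for rotation invariance, and normalizing by the first harmonic for scale invariance; the shapes and parameters below are illustrative, not the thesis's):

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Translation-, scale- and rotation-invariant Fourier descriptor
    of a closed 2D contour given as an (N, 2) array of points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex representation of the contour
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                            # drop DC term -> translation invariance
    mags = np.abs(coeffs)                    # drop phase -> rotation invariance
    mags = mags / (mags[1] + 1e-12)          # normalize by first harmonic -> scale invariance
    half = n_coeffs // 2                     # keep low harmonics from both ends of the spectrum
    return np.concatenate([mags[1:1 + half], mags[-half:]])

def knn_classify(query, descriptors, labels, k=3):
    """Classify by majority vote among the k nearest training descriptors."""
    dists = np.linalg.norm(descriptors - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy shapes sampled as closed contours
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)

train = np.stack([fourier_descriptor(circle), fourier_descriptor(ellipse)])
labels = ["circle", "ellipse"]

# A rotated, scaled, shifted circle should still match "circle"
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
query = 3.0 * circle @ R.T + np.array([5.0, -2.0])
print(knn_classify(fourier_descriptor(query), train, labels, k=1))  # -> circle
```

Shape context and the thesis's actual feature choices would differ in detail; the point here is only that both descriptors yield a distance in which a KNN vote is straightforward.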
4. 言語及手勢中之隱喻表達 / The Expression of Metaphor in Speech and Gesture

張宇彤, Chang, Yu Tung
本文旨在研究中文日常會話中語言及手勢之隱喻表達,並根據Lakoff和Johnson的概念隱喻理論(Conceptual Metaphor Theory),探討語言與手勢之慣常隱喻表達以及兩者的互動關係。本文共分析247筆隱喻。其中110(44.5%)筆在語言及手勢中同時傳遞同一類型之隱喻概念;另外137(55.5%)筆只藉著手勢傳達隱喻概念。 日常會話語料中共發現九種隱喻類型,包括身體譬喻、因果譬喻、傳輸譬喻、容器譬喻、實體譬喻、虛擬移動譬喻、 空間方位譬喻、擬人譬喻以及複合譬喻。此外,根據意象圖式之概念,本研究也區分了九種來源域概念:活動、身體部位、容器、虛擬移動、力、物體、路徑、人與空間。隱喻可以用來表達繁多的目標域概念,以下八種目標域概念在語料中至少出現五次:群體、心理活動、具體活動、程度、順序、說話內容、狀態與時間。 研究發現,實體譬喻(77.8%)及空間方位譬喻(17.4%)在日常會話中最為普遍。根據Lakoff 和 Johnson (1980c),人們對於實體之經驗提供了多種方式來表達其他抽象概念,例如我們能集合、分類、量化物體以及確立物體之情勢。Lakoff 和 Johnson亦表示,空間方位是構成某些概念(例如:高地位)之不可或缺的部分;缺乏空間方位譬喻,很難利用其他方式表達。因此,日常會話中經常使用實體譬喻及空間方位作譬喻。空間方位譬喻之來源域概念可以是空間或路徑,而其他類型之譬喻僅來自單一來源域。最常見的來源域概念為物體,而常見之目標域概念則有狀態、時間及具體活動。有關單一來源域至多元目標域之隱喻對應,來源域概念包括物體、空間、路徑、虛擬移動、活動及容器,可用以表達多個目標域。有關多元來源域至單一目標域之隱喻對應,目標域概念包括時間、心理活動、說話內容、順序及程度,可藉由多個來源域表達。 本文亦從三方面探討語言及手勢如何共同表達隱喻概念:語言及手勢之時序、手勢之關聯詞彙、語言及手勢之語意配合,以瞭解關於語言與手勢產生之不同理論假說。Lexical Semantic Hypothesis認為手勢源自於詞項之語意內容,也主張手勢出現之時間通常先於相關詞彙,以利詞彙搜索。Interface Hypothesis則認為空間-運動訊息及語言訊息在產生手勢之過程中相互影響,因此手勢及語言會同時出現,而本研究確實發現手勢大多與相關詞彙同時出現。再者,17.4%的手勢對應片語,而不限於單詞,此結果與Lexical Semantic Hypothesis之見解相悖。最後,研究發現55.5%之隱喻僅藉由手勢表達。因此語言及手勢傳達不同的語意訊息,結果支持Interface Hypothesis之論說—手勢和語言可各自表意。上述三項結果支持Interface Hypothesis之論點。 / This thesis explores the linguistic and gestural expressions of metaphors in daily Chinese conversations. Following Lakoff and Johnson’s framework of Conceptual Metaphor Theory, the present study aims to investigate the habitual expressions of metaphors in language and gesture and the collaboration of the two modalities in conveying metaphors. The present study examined 247 metaphoric expressions. The data includes 110 (44.5%) metaphors being conveyed concurrently by speech and gesture—the two modalities expressing the same type of metaphors—and 137 (55.5%) metaphors being conveyed in gesture exclusively. Nine types of metaphors were found in the daily conversations: Body-part, Causation, Conduit, Container, Entity, Fictive-motion, Orientation, Personification, and complex metaphors. 
Furthermore, based on the notion of image schema, nine types of source-domain concepts were recognized: ACTIVITY, BODY-PART, CONTAINER, FICTIVE-MOTION, FORCE, OBJECT, PATH, PERSON, and SPACE. A great variety of target-domain concepts were realized via metaphors; the present study focused on eight types, each occurring at least five times in the data: GROUP, MENTAL ACTIVITY, (physical) ACTIVITY, DEGREE, SEQUENCE, SPEECH CONTENT, STATE, and TIME. The results showed that Entity metaphors (77.8%) and Orientation metaphors (17.4%) are the most common types in daily conversation. According to Lakoff and Johnson (1980c), people's bodily experiences of physical objects provide a basis for viewing other abstract concepts; for instance, we can group, categorize, quantify, and identify aspects of objects. They also suggested that spatial orientations are essential parts of certain concepts (e.g., high status); without orientation metaphors, it would be difficult to find alternative ways to express such ideas. Therefore, Entity and Orientation metaphors are frequently employed in metaphoric expressions. Orientation metaphors are based on two source domains, SPACE and PATH, whereas the other types of metaphors are each associated with a single source domain. The most common type of source domain is OBJECT, whereas the common types of target domains are STATE, TIME, and PHYSICAL ACTIVITY. With regard to one-source-to-many-targets correspondences, the source domains OBJECT, SPACE, PATH, FICTIVE-MOTION, ACTIVITY, and CONTAINER could each be used to express numerous target-domain concepts metaphorically. As for many-sources-to-one-target correspondences, the target-domain concepts TIME, MENTAL ACTIVITY, SPEECH CONTENT, SEQUENCE, and DEGREE could each be represented by multiple source-domain concepts.
The collaboration of language and gesture enables us to evaluate the various hypotheses of speech-gesture production, based on the temporal relation between language and gesture, the lexical affiliates of metaphoric gestures, and the semantic coordination across the two modalities. The Lexical Semantics Hypothesis suggests that gestures are generated from the semantics of a lexical item (a word) and that gestures tend to precede their lexical affiliates to aid lexical search. The Interface Hypothesis proposes that spatio-motoric and linguistic information interact with each other during gesture production, so gestures and the related speech occur at the same time. The present study found that gestures mostly synchronize with the associated speech. Moreover, 17.4% of metaphoric gestures are associated with grammatical phrases rather than single words, which is at odds with the claim of the Lexical Semantics Hypothesis. Last, the present study found that 55.5% of metaphoric expressions are conveyed exclusively in gesture; language and gesture thus carry different semantic information, supporting the argument of the Interface Hypothesis that the two modalities can each convey meaning on their own. Based on the above findings, the current study tends to support the Interface Hypothesis.
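The temporal criterion separating the two hypotheses (strokes preceding vs. synchronizing with their lexical affiliates) can be operationalized as a simple onset comparison. A sketch under an assumed 0.2 s synchrony tolerance (the threshold and the onset pairs are illustrative, not values from the thesis):

```python
def temporal_relation(stroke_onset, affiliate_onset, tol=0.2):
    """Classify a gesture stroke's timing relative to its lexical
    affiliate, given both onsets in seconds. The 0.2 s tolerance
    window for 'synchronous' is an assumed, illustrative value."""
    if stroke_onset < affiliate_onset - tol:
        return "precedes"      # the pattern the Lexical Semantics Hypothesis predicts
    if stroke_onset > affiliate_onset + tol:
        return "follows"
    return "synchronous"       # the pattern the Interface Hypothesis predicts

# Toy (stroke onset, affiliate onset) pairs in seconds; illustrative only
pairs = [(1.00, 1.05), (2.40, 2.80), (3.10, 3.12), (5.00, 4.70)]
print([temporal_relation(s, w) for s, w in pairs])
# -> ['synchronous', 'precedes', 'synchronous', 'follows']
```

Tallying these labels over a corpus gives the kind of distribution the study uses to weigh one hypothesis against the other.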
5. 語言與手勢之溝通動態性 / Communicative Dynamism in Language and Co-speech Gesture

楊婉君, Yang, Wan Chun
本研究探討溝通動態與手勢之關係。本研究相較於過去的研究提供了中文口語語料的量化結果,除了檢視語言上的溝通動態值與手勢出現、手勢類型的關係以外,並加入手勢特徵進而檢視。語言上的溝通動態值分別以「資訊狀態」與「主題連續性」來決定溝通動態值之高低,並且將手勢所伴隨的語詞依詞性分開分為名詞與動詞,檢視是否手勢上會依循語碼原則(the coding principle):當溝通動態值越高,所使用的代碼材料便越多;當溝通動態值越低,所使用的代碼材料越少。結果發現,以決定溝通動態值的標準來看,資訊狀態較主題連續性更能依語碼原則反應溝通動態值。原因是因為資訊狀態反應訊息的新舊差異性,而主題連續性反應的是舊訊息當中的不同的「舊」的程度差異,因此前者較能反應溝通動態與手勢之關係;而以手勢伴隨的語詞而言,動詞較名詞更能依語碼原則反應溝通動態值。因為動詞相較於名詞而言,在語言上無完整的語碼系統以反應溝通動態值之高低,因此倚賴手勢出現與手勢特徵來反應語言上的溝通動態值之高低。 / The study investigates the correlation between communicative dynamism (CD) and gesture. Unlike previous studies, the present study provides a quantitative analysis based on Chinese conversational data. It examines the correlation between CD in language and the occurrence of gestures, gestural types, and gestural features. The degrees of CD are determined by two separate criteria, namely "information status" and "topic continuity". Moreover, the study also distinguishes between the nominal affiliates of gestures and their verbal counterparts. The study found that gestures occur at the two extremities of CD: they tend to co-occur with linguistic elements bearing the highest or the lowest CD. In addition, under the criterion of "information status", stroke duration and handedness were found to reflect the various degrees of CD. On the other hand, under the criterion of "topic continuity", none of the gestural features (stroke duration, gestural space, handedness, and stroke frequency) correlate with CD.
6. 輔助視障者以聲音記錄日常生活之手機介面研究 / HearMe: assisting the visually impaired to record vibrant moments of everyday life

蔡宜璇, Tsai, Yi Hsuan
視覺障礙者主要透過聲音來記錄生活與體驗世界,如同明眼人以文字或相片記憶重要時刻一般。然而觀察現有錄音裝置與軟體,皆尚未能提供適合視障者使用的完善錄音流程;即使是有語音功能的輔助裝置,不僅其價格、軟體更新、硬體維修等因市場小眾較為不易,也因為只是單純的錄音工具而無法流暢的銜接後續的檔案整理與分享。直到近幾年智慧型手機的興起,藉著其為市場主流產品、有豐富的軟體支援、隨時可連上網路等條件,逐漸成為視障者更好的輔助裝置的選擇。 為使視障者也能順利的操作觸控式介面,過往研究者針對Eyes-free情境提出了相關設計原則,而現今手機作業系統也大多內建螢幕報讀機制,讓視障者能自在與自信的與手機互動。雖然手機的可及性與通用性越來越受重視,專門為視障者設計的軟體卻並不多,輔助功能的開發資源和準則也待進一步的發展。本研究於初期的使用者觀察與訪談中,先深入了解視障者利用聲音記事的習慣與遇到的困難,並再進行初步設計方案的功能需求訪談,以切合使用者實際錄音的情境。 綜上所述,本研究為以視覺障礙者為目標使用族群,於觸控式手機設計錄音輔助軟體「HearMe」,解決視障者日常生活中聲音記錄的問題,並嘗試以手勢和語音設計,探索視障者操作觸控式介面的可能性。本系統原型有幾項重要特點:(1)快捷手勢可開始或結束錄音、(2)標記與播放重點段落、(3)即時編輯與歸類檔案、(4)以分類清單快速搜尋、(5)行動通訊立即分享;其他特色功能包括語音檔名、快轉與倒轉手勢、自訂群組和地標等。本系統原型開發時運用迭代設計流程共實作三次週期,每個版本皆經過設計、測試、分析、和調整功能等過程,逐步驗證系統的可行性與實用性。 經過三次的設計循環與共計18位視障者參與測試,本研究於第三版系統原型完成能實際應用在生活中的錄音軟體。受測者認為HearMe操作簡單、容易學習,快速播放重點段落省時省力、分類清楚而方便尋找檔案;同時它能夠以完善的語音提示和整合的錄音流程彌補現有裝置不足的部分,讓手機成為生活中記錄聲音的最佳輔助工具。最後,本研究以Google Analytics分析HearMe實際使用數據,並搭配訪談回饋總結系統設計的成果與互動設計之建議,提供HearMe或其他開發者做為日後設計的參考。 / The auditory sense is the primary channel through which the visually impaired record their lives and experience the world, just as sighted people use words and photos to capture important moments. However, most current recording devices and applications do not offer a complete recording workflow suited to the visually impaired. Devices with voice feedback are expensive, receive few software updates, and lack maintenance support because of their small market; moreover, as simple recording tools, they cannot smoothly carry the user on to organizing and sharing files. In recent years the smartphone has risen in popularity: it is a mainstream product with a wide variety of software and an always-available network connection, showing the potential to become an alternative assistive device for the visually impaired. To allow the visually impaired to use touch screens, researchers have proposed several design principles for eyes-free situations.
Moreover, screen readers are embedded in smartphone operating systems such as iOS and Android, enabling the visually impaired to interact with smartphones freely and confidently. While the accessibility and universality of smartphones have drawn attention, few applications are tailored for visually impaired users, and accessibility development resources and guidelines still need to mature. In the first phase of user interviews, we investigated their recording behaviors and difficulties; after the design strategy was drafted, we conducted a second round of interviews to verify that the functions we defined matched their actual needs. This study focuses on the visually impaired and tries to resolve the recording and memorizing problems they face every day by developing an accessible recording application for smartphones. The prototype, HearMe, provides specialized gestures and voice feedback. Its highlight features are: (1) a shortcut gesture to start and finish recording, (2) marking and playing important parts, (3) editing and grouping files on the device, (4) rapid searching through classified lists, and (5) real-time sharing. Other features include audio file names, gestures to play forward or backward, and custom groups and landmarks. The prototype was built with an iterative design process repeated over three cycles; every generation went through design, testing, analysis, and modification, gradually improving the system's usability. After three design cycles involving a total of 18 visually impaired participants, we present a recording application that can be used in daily life. Participants commented that HearMe is easy to operate and learn, that playing marked parts saves time and effort, and that clear grouping makes files easy to find. Additionally, it provides well-designed audio feedback and an integrated recording flow, complementing the shortcomings of current devices.
These advantages make HearMe the best tool to assist visually impaired users in recording sounds in everyday life. Finally, this study concludes with design considerations and suggestions drawn from HearMe's actual usage data in Google Analytics and from interview feedback, providing a reference for HearMe and other assistive-technology developers.
7. 中文節拍手勢之研究 / Gestural Beats in Chinese Narrative Discourse

謝雅惠, Hsieh, Ya Huei
本文旨在從語用和語音方面,來探討中文「敘述性」言談(narrative discourse)對話中,說話者自然產生的節拍手勢。在語用方面,主要是觀察說話者使用節拍手勢表達前後景(Grounding)和新/舊訊息(Information state)的關係。研究結果發現,說話者較常使用節拍手勢來傳達前景(foreground)的新訊息,以提示聽話者注意。此外,說話者在敘述的過程中,常使用節拍手勢來預示前後景訊息的轉換。在語音方面,主要探討說話者使用節拍手勢時,其所對應的語句之音調高低及說話能量強度的關係。研究發現,語句的音高和強度無法預測這類手勢的產生;但是,當傳達前景的新訊息時,這類手勢的出現常伴隨著較強的語氣。此外,研究結果也顯示,說話者使用節拍手勢通常是有規律性的。 / This thesis investigates gestural beats in Chinese narrative discourse at both the pragmatic and the acoustic level. At the pragmatic level, the relationship among grounding, information state, and beat gestures is examined; at the acoustic level, the relationship among pitch, intensity, and gestural beats is analyzed. Moreover, whether beat gestures follow a rhythmic pattern is further studied. The subjects were four undergraduates who were asked to view an animated color cartoon episode and then immediately recount its events to a listener. The subjects were audio- and video-taped and were not informed of the particular interest of the study. 291 beats were transcribed and analyzed. The audio data was analyzed with Praat and a Kay Model 4100. The continuous beats were further analyzed for rhythmic patterns and annotated in Anvil, a video annotation tool. Gestural beats, narrative data, and the annotations were statistically analyzed, and five phenomena concerning beat gestures emerged. First, gestural beats can appear in both foregrounded and backgrounded clauses. Second, the occurrence of beat gestures signals shifts between levels of the narrative structure. Third, speakers tend to utter new information in foregrounded clauses with accompanying beats. Fourth, neither intensity nor pitch changes predict or correlate with the occurrence of gestural beats.
However, when grounding and the information state of the associated referents are considered, beats are produced for new information in foregrounded clauses with greater intensity. Finally, there is a rhythmic pattern in the production of continuous beats and the regularity of the patterns correlates with the clause boundary. A close relationship is revealed between speech and gestures in this thesis. The findings provide some linguistic details that may help explore the performance of gestural beats and speech in Chinese narrative discourse.
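The acoustic side of such an analysis (whether intensity predicts beat occurrence) amounts to correlating a binary variable with a continuous one, for which a point-biserial correlation is the standard tool. A minimal sketch on invented per-clause codings (the numbers below are illustrative, not the thesis's Praat measurements):

```python
import numpy as np

def point_biserial(binary, continuous):
    """Correlation between a 0/1 variable (beat present in the clause)
    and a continuous one (mean clause intensity in dB). Equivalent to
    Pearson's r with one dichotomous variable."""
    b = np.asarray(binary, dtype=float)
    c = np.asarray(continuous, dtype=float)
    return np.corrcoef(b, c)[0, 1]

# Hypothetical per-clause codings, illustrative only
beat      = [1, 0, 1, 1, 0, 0, 1, 0]                          # beat gesture present?
intensity = [72.0, 65.5, 70.1, 74.3, 66.0, 64.8, 71.2, 67.5]  # mean intensity (dB)

r = point_biserial(beat, intensity)
print(f"r = {r:.2f}")
```

In this toy data the correlation is strongly positive; the thesis's finding is subtler, with intensity predicting beats only for new information in foregrounded clauses, but the computation over any such subset is the same.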
