聽其所見,觀其所聞:以眼動探討口語與場景互動的跨感官理解歷程 / Look while listening: Using eye movements to investigate the interaction between spoken language and visual scene during cross-modal comprehension
游婉雲 (Yu, Wan-Yun)
In human communication and language use, spoken language and the visual scene are two essential components of the cross-modal comprehension process, yet how these two sources of information jointly shape comprehension remains to be examined. This dissertation addresses four research questions. First, the literature offers two views of how visual attention operates during comprehension: the hierarchical approach claims that speech takes priority and determines how visual representations are processed, whereas the interactive approach holds that speech and visual representations can influence visual attention independently. Second, speech promotes fixations on the objects it refers to, but whether this spoken reference effect depends on the task demand remains unclear. Third, using complex scenes as the visual context, the dissertation examines how the visual complexity and semantic consistency of the scene influence comprehension. Fourth, it tests how the preview time of the visual stimulus changes the influence of speech and scene factors on comprehension.
This dissertation addresses these questions through a series of visual world paradigm experiments. On each trial, participants listened to a spoken Chinese sentence while viewing a picture containing two critical objects: a target object mentioned in the speech (e.g., a tiger), embedded in a consistent (e.g., a field), an inconsistent (e.g., the sky), or a blank background; and a non-target object not mentioned in the speech (e.g., an eagle) that was always consistent with its background. The four experiments orthogonally manipulated the factors of task demand (a speech comprehension task vs. a scene comprehension task) and preview time (a 1-second preview vs. no preview).
The experiments yielded four findings. First, a reliable spoken reference effect emerged in every experiment regardless of the task demand. Second, the visual complexity and semantic consistency of the scene not only guided fixations on objects independently but also worked together with speech to determine visual attention during comprehension. Third, task demand modulated the spoken reference effect and the scene consistency effect in different ways. Fourth, preview time enhanced the spoken reference effect in the speech comprehension task, whereas the scene comprehension task was unaffected.
Overall, the experimental evidence supports the interactive approach. In other words, during cross-modal comprehension, human cognition coordinates the language, vision, and memory subsystems to rapidly integrate the physical and semantic representations provided by speech and scene, dynamically adapting our sensory experience of the external world to the current context. / In human communication and language use, both speech and the visual scene contribute to the cross-modal comprehension process. However, how these two elements combine to shape comprehension has not yet been fully resolved. Four research questions are examined. First, two approaches can account for the comprehension process: the hierarchical approach asserts that speech plays the main role and visual features serve only a supporting one, whereas the interactive approach holds that speech and visual features jointly determine comprehension. Second, although speech produces a spoken reference effect, drawing more fixations to its visual referent, the nature of this effect is still unclear. Third, most past studies used simple object arrays as the visual context, so little is known about how real-world scenes affect comprehension. Fourth, whether preview time alters the influence of speech and scene on comprehension is tested.
A series of visual world paradigm experiments were conducted. The factors of task demand (speech comprehension vs. scene comprehension) and preview time (1-second vs. none) were orthogonally manipulated across four experiments. On each trial, participants listened to a spoken sentence in Chinese while viewing a picture with two critical objects: one was the mentioned target object (e.g., tiger), which was embedded in either a consistent, an inconsistent, or a blank background; the other was an unmentioned non-target object (e.g., eagle) that was always consistent with its background.
Several findings emerged. First, a reliable spoken reference effect was observed regardless of the task demand. Second, visual complexity and scene consistency not only guided fixations on objects individually but also worked together with speech to determine visual attention during comprehension. Third, task demand modulated the spoken reference effect and the scene consistency effect differently. Fourth, preview time significantly enhanced the spoken reference effect in the speech comprehension task, whereas no impact was observed in the scene comprehension task. This evidence supports the interactive approach. In conclusion, the human cognitive systems of language, vision, and memory interact with one another to produce the moment-to-moment experience of how we understand the complex world around us.