1. 一個有延展性的動畫劇本描述語言 / A Scripting Language for Extensible Animation
廖茂詠 (Liao, Mao-Yung). Unknown Date (has links)
Character animations in most virtual environment systems are canned motions created off-line with motion capture techniques. The motions are encoded and transmitted in a fixed format and played back at the client side. Such rigid specification formats for computer animation, and for multimedia presentation in general, have greatly limited the expressiveness and extensibility of 3D content. In this thesis, we propose an XML-based scripting language, called eXtensible Animation Markup Language (XAML), to address this problem. The language is designed to let developers describe character animations at various command levels, to compose new animations from existing animation clips, and to derive new animations by modifying parts of previously defined ones. Furthermore, XAML provides extension mechanisms for customized scripting: one can use plug-ins, embedding, or translation to incorporate other scripting languages or new functions into XAML. We have implemented an animation engine in Java that interprets the scripting language and renders 3D animations based on the user's interactive XAML commands; the engine also provides an application programming interface for controlling 3D objects in the scene. In addition, we have designed a speech-enabled multi-user virtual environment system based on XAML to verify the feasibility and effectiveness of the language.
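The abstract does not reproduce XAML's actual tag vocabulary, but the idea of a script that composes named animations from clips and then issues playback commands can be sketched as follows. This is a minimal illustration only: every element name, attribute, and the flattened command format below are invented, not taken from the thesis.

```python
import xml.etree.ElementTree as ET

# A hypothetical XAML-like fragment; the tag names are illustrative guesses.
SCRIPT = """
<script>
  <animation name="greet">
    <clip ref="wave" start="0.0"/>
    <clip ref="bow" start="1.5"/>
  </animation>
  <play animation="greet" actor="avatar1"/>
</script>
"""

def interpret(xml_text):
    """Walk the script tree and flatten it into low-level playback commands."""
    root = ET.fromstring(xml_text)
    clips = {}      # animation name -> list of (clip ref, start time)
    commands = []   # flattened command stream for a renderer
    for node in root:
        if node.tag == "animation":
            clips[node.get("name")] = [
                (c.get("ref"), float(c.get("start"))) for c in node.findall("clip")
            ]
        elif node.tag == "play":
            actor = node.get("actor")
            for clip, start in clips[node.get("animation")]:
                commands.append((actor, clip, start))
    return commands

print(interpret(SCRIPT))
# [('avatar1', 'wave', 0.0), ('avatar1', 'bow', 1.5)]
```

Composing a new animation from existing clips then amounts to declaring another `<animation>` element that references the same clip library.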

2. 多人虛擬環境中互動式語音界面的實現 / Realizing the Interactive Speech Interface in a Multi-user Virtual Environment
廖峻鋒 (Liao, Chun-Feng). Unknown Date (has links)
The application of 3D virtual environments and voice user interfaces (VUI) on personal computers has received significant attention in recent years. Since speech is the most natural way for humans to communicate, incorporating a VUI into a virtual environment can make interaction among characters more fluent and immersive. Although many studies have addressed the integration of VUIs and 3D virtual environments, most proposed solutions do not provide an effective mechanism for dialog management in multi-user settings. The objective of this research is to solve the problems of VUI integration and dialog management and to realize a speech interaction mechanism in a multi-user virtual environment. We have designed a dialog scripting language called XAML-V (eXtensible Animation Markup Language – Voice Extension), based on the VoiceXML standard, to address the synchronization of speech and animation, dialog management, and speech processing in multi-user environments. We have also implemented the language on a multi-user virtual environment system to evaluate the feasibility and effectiveness of the design.
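VoiceXML models a dialog as a form whose fields are filled one utterance at a time; managing many users then means keeping one such form state per user. The sketch below illustrates that per-user bookkeeping only; the field names, prompts, and API are invented and do not come from XAML-V itself.

```python
# Minimal per-user dialog state management, assuming a VoiceXML-style
# form-filling model; field names and prompt wording are invented.
class DialogManager:
    """Tracks one form-filling dialog per user in a shared world."""

    def __init__(self, fields):
        self.fields = fields        # ordered field names to fill
        self.sessions = {}          # user id -> dict of filled slots

    def next_prompt(self, user):
        """Return the prompt for the first unfilled field, or None if done."""
        slots = self.sessions.setdefault(user, {})
        for field in self.fields:
            if field not in slots:
                return f"Please say your {field}."
        return None                 # form complete

    def hear(self, user, utterance):
        """Fill this user's first empty slot with a recognized utterance."""
        slots = self.sessions.setdefault(user, {})
        for field in self.fields:
            if field not in slots:
                slots[field] = utterance
                break
        return self.next_prompt(user)

dm = DialogManager(["destination", "time"])
print(dm.next_prompt("alice"))     # Please say your destination.
print(dm.hear("alice", "plaza"))   # Please say your time.
print(dm.hear("bob", "garden"))    # Please say your time.
```

Because each user owns an independent session, two users can be at different points in the same dialog without interfering with one another.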

3. 在語意虛擬環境中實現3D化身的可客製化行為 / Enabling Customized Behaviors of 3D Avatar in Semantic Virtual Environment
朱鈺琳 (Chu, Yu-Lin). Unknown Date (has links)
In the design of multi-user virtual environments, allowing users to design avatar behaviors themselves and install them into the virtual world at run time as animation components is key to sharing 3D content. This thesis addresses the problem of designing a semantic virtual environment in three parts. First, we provide a mechanism for the virtual environment system to load animation components dynamically, so that customized avatar behaviors can be added at run time. Second, we augment an existing virtual environment system with semantic descriptions and enhance the flexibility of interactive message exchange. Third, we realize customizable animation components and use two types of interaction scenarios, avatar-environment and avatar-avatar, to illustrate the feasibility of these mechanisms. For dynamic loading, a customized animation component can be installed and executed at run time: the component is declared with an XML fragment, and a service in the OSGi Framework handles the corresponding XML tags. With semantic descriptions of objects in place, these components can acquire world information and generate animations that satisfy the current environment constraints or application requirements. We use an ontology to describe the semantics of environments and avatars, and we have built two example components, a motion planner and an avatar-avatar interaction component, to illustrate how dynamic installation and semantic information together enable customized avatar behaviors.
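The core of the mechanism described above is a registry that maps XML tags to components which can be hot-installed and which consult semantic world information when they run. The sketch below mimics that shape in plain Python rather than OSGi; the registry API, the tag name, and the world representation (a plain dict standing in for the ontology) are all invented for illustration.

```python
# Sketch of run-time component installation, standing in for the
# OSGi-based mechanism; names and data structures are illustrative.
class ComponentRegistry:
    """Maps XML tag names to animation components installed at run time."""

    def __init__(self):
        self.handlers = {}

    def install(self, tag, handler):
        self.handlers[tag] = handler              # hot-install a component

    def dispatch(self, tag, avatar, world):
        if tag not in self.handlers:
            return f"no component for <{tag}>"
        return self.handlers[tag](avatar, world)

def motion_planner(avatar, world):
    # Consult semantic world information (here, a plain dict standing in
    # for the ontology) to pick a reachable target position.
    goal = world["free_positions"][0]
    return f"{avatar} walks to {goal}"

registry = ComponentRegistry()
print(registry.dispatch("walkTo", "avatar1", {}))   # no component for <walkTo>
registry.install("walkTo", motion_planner)
print(registry.dispatch("walkTo", "avatar1", {"free_positions": [(3, 4)]}))
# avatar1 walks to (3, 4)
```

The point of the indirection is that unknown tags fail gracefully until the matching component is installed, which is what makes run-time extension safe.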

4. 能表達音樂特徵的人體動畫自動產生機制 / Automatic Generation of Human Animation for Expressing Music Features
雷嘉駿 (Loi, Ka Chon). Unknown Date (has links)
In recent years, improvements in computing power have led to the wide application of 3D virtual environments. In this thesis, we combine character animation with music so that a virtual character can interpret music through body motion. We propose an intelligent motion generator that gives an avatar the ability to express music features, so that its motions differ according to the music it "hears." Because human auditory memory is short, the system automatically extracts music features, segments the music into several segments, and plans the motion for each segment independently before generating the animation. Much previous work on music-related animation composes new motions by modifying or recombining motions in a motion database. In this work, we analyze the relationship between music and motion and use procedural animation to automatically generate varied and appropriate interpretive motions. Our experiments show that the system accepts LOA1 humanoid models and MIDI music as inputs in general; moreover, by adjusting system parameters, we can generate animations of different styles to match user preferences and music genres.
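The segmentation step can be illustrated with a toy example: split a melody wherever the gap between consecutive note onsets jumps, then plan each resulting segment separately. This is only a stand-in for the thesis's actual feature extraction; the gap criterion, threshold, and note data are invented.

```python
# Toy feature-based segmentation: split a note-onset sequence wherever
# the inter-onset gap exceeds a threshold (values are illustrative).
def segment(onsets, threshold=1.0):
    """Split a sorted list of note onset times into segments at large gaps."""
    segments, current = [], [onsets[0]]
    for prev, nxt in zip(onsets, onsets[1:]):
        if nxt - prev > threshold:   # a long silence starts a new segment
            segments.append(current)
            current = []
        current.append(nxt)
    segments.append(current)
    return segments

onsets = [0.0, 0.5, 1.0, 3.0, 3.4, 3.8, 6.0]
print(segment(onsets))
# [[0.0, 0.5, 1.0], [3.0, 3.4, 3.8], [6.0]]
```

A real system would segment on richer features (tempo, dynamics, phrase boundaries) extracted from the MIDI stream, but the per-segment planning structure is the same.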

5. 打破第四道牆: 以敘事理論為基礎之個人化3D互動敘事創作系統 / Breaking Into the Fourth Wall: Generating Personalized Interactive Narratives for 3D Drama Environments
吳蕙盈 (Wu, Hui Yin). Unknown Date (has links)
Interactive storytelling opens a world of possibilities for narrative creation on multimedia platforms, allowing a more compelling and immersive storytelling experience than traditional narratives can provide. With the emergence of new storytelling technologies, authoring such narratives in complex virtual environments raises critical issues for multimedia storytelling platforms: How can we build authoring tools that lower the technical barrier to creation while enhancing creativity? How can we design a flexible framework that lets the creators of a story (including original authors, intermediary authors, and experiencers at various stages of the interactive story) control the story content, its structure, and characteristics such as length, complexity, plot line, and genre?

To address these issues, we propose a framework for narrative authoring and interactive script generation combined with a 3D drama platform. The system comprises an authoring environment, a story filtering and script generation mechanism grounded in narrative theory and in the constraints set by story creators, and a 3D virtual simulation environment. During authoring and generation, story creators can specify constraints (such as plot line, length, narrative structure, and time sequence) to produce narrative variations from the same set of story fragments. We devise an algorithm that efficiently recombines existing story fragments into a high-level interactive script satisfying all authorial and structural constraints.

Another feature of this mechanism is that the generated interactive script is platform independent, unconstrained by the technical requirements or authoring formats of any particular narrative platform, and can be highly expressive in various forms of discourse. To demonstrate this flexibility, our implementation can simultaneously render a generated narrative in text form and in the 3D animation environment of The Theater, with real-time autonomous character animation, automatic camera planning, and simple interaction. Finally, we carry out a qualitative pilot study to understand how users perceive and react to animated, interactive, and personalized narrative content.

The contributions of this research are a flexible framework for authoring interactive narratives on 3D virtual environments and story generation tools that make existing story fragments reusable and recombinable. Moreover, the filtering and selection process gives story creators high-level control over story content and structure, so that the generated scripts satisfy the creators' constraints, have a sound basis in narrative theory, and can be performed immediately by animated characters in a 3D virtual environment. From the experience of building this platform and the feedback obtained in the user study, we also suggest design principles for interactive-narrative authoring interfaces and tools, and identify issues for future extension of interactive storytelling systems.
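The filter-and-recombine step described above can be sketched as a greedy selection over tagged fragments: keep fragments on the requested plot line, in narrative order, until a length budget is reached. The fragment schema, the constraint set, and the greedy strategy are invented for illustration; the thesis's algorithm handles richer constraints.

```python
# Hedged sketch of constraint-based fragment selection; all fields,
# tags, and the selection strategy are illustrative inventions.
FRAGMENTS = [
    {"id": "f1", "plot": "romance", "order": 1, "minutes": 2},
    {"id": "f2", "plot": "mystery", "order": 1, "minutes": 3},
    {"id": "f3", "plot": "romance", "order": 2, "minutes": 4},
    {"id": "f4", "plot": "romance", "order": 3, "minutes": 5},
]

def generate(fragments, plot, max_minutes):
    """Pick fragments on the requested plot line, in narrative order,
    while the running total stays within the requested length."""
    chosen, total = [], 0
    for frag in sorted(fragments, key=lambda f: f["order"]):
        if frag["plot"] == plot and total + frag["minutes"] <= max_minutes:
            chosen.append(frag["id"])
            total += frag["minutes"]
    return chosen

print(generate(FRAGMENTS, "romance", 7))   # ['f1', 'f3']
```

The same fragment pool yields different scripts under different constraints, which is what makes the fragments reusable across personalized narratives.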