In recent years, advances in computing power have led to the wide adoption of 3D virtual environments. This thesis combines character animation with music so that an avatar can interpret music in a 3D virtual environment. We design an intelligent avatar motion generator that gives a virtual character the ability to express music features, so that its motions differ according to the music it "hears." Because human auditory perception is short-term, the system automatically extracts music features, segments the input music into several segments, and then plans motions and generates animation for each segment independently. Much of the prior work on music-related animation composes new animations by modifying or recombining motions from an existing motion database. In this work, we analyze the relationship between music and motion, and use procedural animation to automatically generate varied and appropriate interpretive motions. Our experiments show that the system generally accepts LOA1 humanoid models and MIDI files as input, and that, by adjusting system parameters, it can generate animations in different styles to match users' preferences and the characteristics of different music genres.
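The record is only an abstract and does not describe the thesis's actual algorithms. As a rough, hypothetical illustration of the pipeline it outlines (feature extraction, music segmentation, per-segment motion planning), the following Python sketch uses assumed Note and MotionPlan structures, fixed-length windows in place of the thesis's feature-based segmentation, and ad-hoc mappings from note density and velocity to motion parameters; none of these choices are taken from the thesis itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Note:
    onset: float      # seconds
    duration: float   # seconds
    pitch: int        # MIDI note number, 0-127
    velocity: int     # MIDI velocity, 0-127

@dataclass
class MotionPlan:
    start: float
    end: float
    amplitude: float  # how large the gesture is (0-1)
    speed: float      # how fast the gesture is performed (0-1)

def segment_by_window(notes: List[Note], window: float) -> List[List[Note]]:
    """Split notes into fixed-length windows (a stand-in for feature-based segmentation)."""
    if not notes:
        return []
    end_time = max(n.onset + n.duration for n in notes)
    segments, t = [], 0.0
    while t < end_time:
        segments.append([n for n in notes if t <= n.onset < t + window])
        t += window
    return segments

def plan_motion(segment: List[Note], start: float, window: float) -> MotionPlan:
    """Map simple per-segment features (note density, mean velocity) to motion parameters."""
    if not segment:
        return MotionPlan(start, start + window, amplitude=0.2, speed=0.2)
    density = len(segment) / window                       # notes per second
    mean_vel = sum(n.velocity for n in segment) / len(segment)
    amplitude = min(1.0, mean_vel / 127.0)                # louder music -> larger gestures (ad hoc)
    speed = min(1.0, density / 8.0)                       # denser music -> faster gestures (ad hoc)
    return MotionPlan(start, start + window, amplitude, speed)

if __name__ == "__main__":
    notes = [Note(0.0, 0.5, 60, 80), Note(0.5, 0.5, 64, 90),
             Note(1.0, 0.25, 67, 110), Note(1.25, 0.25, 72, 120)]
    window = 1.0
    for i, seg in enumerate(segment_by_window(notes, window)):
        print(plan_motion(seg, i * window, window))
```

In the thesis, each planned segment would then drive procedural animation on the LOA1 model; here the MotionPlan values are simply printed.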
Identifier | oai:union.ndltd.org:CHENGCHI/G0095753006
Creators | 雷嘉駿, Loi, Ka Chon |
Publisher | 國立政治大學 (National Chengchi University)
Source Sets | National Chengchi University Libraries |
Language | Chinese (中文)
Detected Language | English |
Type | text |
Rights | Copyright © nccu library on behalf of the copyright holders |