This research was inspired by the performance style of “The Future Circus”, a performance event at National Chengchi University, and attempts to improve the programming techniques behind such performances by programmatically integrating the physical and virtual sides of an interactive performance platform. We want directors to be able to describe performances in a colloquial, stage-oriented scripting style such as “when ... otherwise ...”, so that the physical actors' body movements can be freely combined with instructions for effects in the virtual environment. To this end we adopt a domain-specific language that bridges physical and virtual applications, Digital Interactive Performance Sketch (DIPS), to build customized performance libraries, and deploy them on the execution engine developed by our team, Wearable Item Service runtimE (WISE). On this engine, directors can write the aforementioned colloquial scripts in DIPS, letting the programs interact on their own so that performance effects change automatically.
Our system receives signals from the networked sensors worn by the performers and, following the rules in the director's script, automatically decides from the incoming device signals what effects the virtual environment should be instructed to produce. In this way the performance effects change automatically, providing programmatic support for interaction between the virtual and physical parts of the show.
To reduce the programming-logic training required before writing scripts, this research also develops a what-you-see-is-what-you-get (WYSIWYG) visual script editor, DIPS Creator, which lets script authors design scripts intuitively by assembling performance-vocabulary blocks in the editor.
This research demonstrates how performance rules can be described in a colloquial, stage-oriented way so that cyber-physical interaction becomes programmable, and it shows how to build a flexible, customizable performance library and a graphical rule editor. Future work could add actor-level abstractions to expose more of the system's programming power, as well as performance-stage design, bidirectional communication, and mutual-exclusion checks among rules to extend the system's functionality. / This research was inspired by “The Future Circus”, a cyber-physical interactive performance art developed at National Chengchi University. In this thesis, we propose mechanisms to support such performance art programmatically in a more effective manner. Specifically, we provide a high-level scripting tool for directors to describe performance rules abstractly in the form of “when ... otherwise ...”, so that directors can compose arbitrary actions and effects easily. Underlying such abstract rules are a domain-specific language, Digital Interactive Performance Sketch (DIPS), and a middleware, Wearable Item Service runtimE (WISE), developed by our research team.
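As an illustration of this rule form, the following is a minimal sketch in Python, not actual DIPS syntax, of how a “when ... otherwise ...” rule could be represented and evaluated; the names Rule and jump_detected, the accel_z field, and the effect strings are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    when: Callable[[dict], bool]       # predicate over one sensor reading
    then: Callable[[], None]           # effect to trigger when the predicate holds
    otherwise: Callable[[], None]      # fallback effect otherwise

def jump_detected(reading: dict) -> bool:
    # Treat a large vertical acceleration as the actor jumping (illustrative threshold).
    return reading.get("accel_z", 0.0) > 12.0

rule = Rule(
    when=jump_detected,
    then=lambda: print("cyber environment: trigger fireworks effect"),
    otherwise=lambda: print("cyber environment: keep ambient lighting"),
)

def evaluate(r: Rule, reading: dict) -> None:
    # Apply the director's rule to one incoming sensor reading.
    (r.then if r.when(reading) else r.otherwise)()

evaluate(rule, {"accel_z": 15.2})   # -> trigger fireworks effect
evaluate(rule, {"accel_z": 0.3})    # -> keep ambient lighting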
Given a script with those abstract rules, our system will receive signals sent from the sensors on the actors' wearable devices, and it will then command the cyber environment to perform the corresponding effects or actions according to the rules written by the director. Through our integration efforts, the performance effects in the cyber environment change automatically in a programmatic way. Besides, for users without prior scripting experience, we developed a WYSIWYG GUI editor, DIPS Creator, that allows users to write a script intuitively by dragging and dropping pre-built rule blocks.
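To make the editor-to-script path concrete, here is a hypothetical, self-contained Python sketch of how a block assembled in a WYSIWYG editor such as DIPS Creator might be lowered to an executable when/otherwise rule; the block fields ("sensor", "op", "threshold", "effect", "fallback") are assumptions for illustration, not the actual DIPS Creator format.

import operator

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

block = {                        # what the author composed by drag and drop
    "sensor": "accel_z",
    "op": ">",
    "threshold": 12.0,
    "effect": "fireworks",
    "fallback": "ambient_light",
}

def compile_block(b: dict):
    # Turn one editor block into a callable rule: sensor reading -> effect name.
    cmp = OPS[b["op"]]
    def rule(reading: dict) -> str:
        if cmp(reading.get(b["sensor"], 0.0), b["threshold"]):
            return b["effect"]
        return b["fallback"]
    return rule

rule = compile_block(block)
print(rule({"accel_z": 15.2}))   # fireworks
print(rule({"accel_z": 0.3}))    # ambient_light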
We conducted a few experiments with a real sensor device to demonstrate the programming support of our tool. The preliminary results are satisfactory in terms of prototype support. To further extend our tool toward practical performances, we describe in detail a few directions, such as support for multi-actor performances, performance-stage modeling, and integrity checks of related rules, that would make our system more powerful.
Identifier | oai:union.ndltd.org:CHENGCHI/G1027530142
Creators | 蕭奕凱, Hsiao, Yi Kai |
Publisher | 國立政治大學 |
Source Sets | National Chengchi University Libraries |
Language | Chinese
Type | text |
Rights | Copyright © nccu library on behalf of the copyright holders |