
Towards a more versatile dynamic-music for video games: approaches to compositional considerations and techniques for continuous music

This study contributes to practical discussions on the composition of dynamic music for video games from the composer’s perspective. Its proposals are justified by the goal of creating greater levels of immersion in players, and the thesis lays down foundational aesthetic elements in order to proceed with a logical methodology. The aim is to build upon, and hybridise, two techniques used by composers and video-game designers, so as to increase the reactive agility and memorability of the music for the player. Each chapter explores a different technique for joining two (possibly disparate) types of gameplay, or gamestates, with appropriate continuous music; in each, I discuss a particular musical engine capable of implementing such music.

Chapter One discusses a branching-music engine, which uses a precomposed musical mosaic (or ‘musical pixels’) to create a linear score with the potential to diverge at appropriate moments in response to the onscreen action. I use the case study of the Final Fantasy battle system to show how a branching-music engine could maintain a continuity of gameplay experience that the disjointed scores currently found in many games disrupt. To support this argument I have implemented a branching-music engine, using the graphical object-oriented programming environment MaxMSP, in the style of the battle music composed by Nobuo Uematsu, the composer of the early Final Fantasy series; the reader can find this in the accompanying demonstrations patch.

In Chapter Two I consider how a generative-music engine can likewise implement continuous music while addressing some of the limitations of the branching-music engine. I then describe a technique for effective generative video-game music that creates musical ‘personalities’, each able to mimic a particular style of music for a limited period of time. Crucially, this engine is able to transition between any two personalities to create musical coincidence with the game. GMGEn (Game Music Generation Engine) is a program I have created in MaxMSP as an example of this concept; it is available in the Demonstrations_Application.

Chapter Three discusses the potential limitations of the branching-music engine described in Chapter One and the generative-music engine described in Chapter Two, and highlights how these issues can be resolved by a third engine that hybridises both. As this engine has an indeterminate musical state, it is termed the intermittent-music engine. I go on to discuss its implementation in two different game scenarios and how emergent musical structures arise from it. The final outcome is a new compositional approach to dynamic music that accompanies the onscreen action with greater agility than is currently found in the field, increasing the memorability, and therefore the immersive effect, of video-game music.
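
To make the branching-music concept concrete, the following is a minimal sketch in Python rather than the thesis’s own MaxMSP patch, so it is not the author’s implementation: precomposed segments are tagged with the gamestate they accompany and with the segments that can follow them seamlessly, and at each segment boundary the engine either continues within the current material or diverges into material written for the new gamestate. All names here (Segment, BranchingMusicEngine, the ‘explore’/‘battle’ labels) are illustrative assumptions.

import random
from dataclasses import dataclass

@dataclass
class Segment:
    name: str          # identifier of a precomposed audio segment ("musical pixel")
    gamestate: str     # gamestate this segment was written to accompany
    next_names: list   # segments that may follow it without an audible seam

class BranchingMusicEngine:
    def __init__(self, segments, start):
        self.segments = segments   # dict: name -> Segment
        self.current = start       # name of the segment now playing

    def next_segment(self, gamestate):
        # At a segment boundary, prefer a seamless continuation within the
        # current gamestate; if the gamestate has changed, diverge into any
        # segment written for the new state so the music never stops.
        current = self.segments[self.current]
        candidates = [self.segments[n] for n in current.next_names
                      if self.segments[n].gamestate == gamestate]
        if not candidates:
            candidates = [s for s in self.segments.values()
                          if s.gamestate == gamestate]
        chosen = random.choice(candidates)
        self.current = chosen.name
        return chosen

# Usage: loop exploration material, then branch when a battle begins.
segments = {
    "explore_a":    Segment("explore_a", "explore", ["explore_b"]),
    "explore_b":    Segment("explore_b", "explore", ["explore_a", "battle_intro"]),
    "battle_intro": Segment("battle_intro", "battle", ["battle_loop"]),
    "battle_loop":  Segment("battle_loop", "battle", ["battle_loop"]),
}
engine = BranchingMusicEngine(segments, start="explore_a")
for state in ("explore", "explore", "battle", "battle"):
    print(engine.next_segment(state).name)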

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:686948
Date: January 2015
Creators: Davies, Huw
Contributors: Saxton, Robert
Publisher: University of Oxford
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://ora.ox.ac.uk/objects/uuid:3f1e4cfa-4a36-44d8-9f4b-4c623ce6b045