
Gestural musical interfaces using real time machine learning

Master of Science / Department of Computer Science / William H. Hsu / We present gestural music instruments and interfaces that help musicians and audio engineers express themselves efficiently. While we have mastered the construction of a wide variety of physical instruments, interest in virtual instruments and sound synthesis continues to grow. Virtual instruments are essentially software that lets musicians interact with a sound module inside the computer. Since the invention of MIDI (Musical Instrument Digital Interface), devices and interfaces for interacting with sound modules, such as keyboards, drum machines, joysticks, and mixing and mastering systems, have flooded the music industry.
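As a rough illustration of how such software interacts with a sound module over MIDI, the sketch below constructs and sends a few MIDI messages from Python. The `mido` library, the port handling, and the particular note and controller numbers are illustrative assumptions, not tools or values named in the report.

```python
import mido

# A note-on / note-off pair: middle C (note 60) at velocity 100 on channel 0.
note_on = mido.Message('note_on', note=60, velocity=100, channel=0)
note_off = mido.Message('note_off', note=60, velocity=0, channel=0)

# A control-change message, e.g. the modulation wheel (controller 1).
mod_wheel = mido.Message('control_change', control=1, value=64, channel=0)

# Send to the first available MIDI output port, if any sound module is connected.
names = mido.get_output_names()
if names:
    with mido.open_output(names[0]) as port:
        port.send(note_on)
        port.send(mod_wheel)
        port.send(note_off)
```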
Research in the past decade has gone a step further, using simple musical gestures to create, shape, and arrange music in real time. Machine learning is a powerful tool for teaching such gestures to the interface. The ability to teach new gestures and shape the way a sound module behaves unleashes the untapped creativity of an artist. Timed music and multimedia programs such as Max/MSP/Jitter, combined with machine learning techniques, open gateways to embodied musical experiences without physical touch. This master's report presents my research and observations, and discusses how this interdisciplinary field could be used to study broader neuroscience problems such as embodied music cognition and human-computer interaction.
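To make the gesture-teaching idea concrete, the following is a minimal sketch of the general pattern: a small classifier is trained on a handful of example gesture feature vectors, and its prediction is mapped to a sound-module control value. The use of scikit-learn, the k-nearest-neighbour model, and the specific features and gesture labels are assumptions for illustration only; the report itself works with Max/MSP/Jitter and its own choice of learning tools.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy feature vectors standing in for gesture data (e.g. hand x/y position and speed);
# a real system would extract these from a camera or sensor stream.
X_train = np.array([
    [0.10, 0.20, 0.05],   # "swipe_left" examples
    [0.15, 0.25, 0.04],
    [0.80, 0.70, 0.30],   # "swipe_right" examples
    [0.85, 0.75, 0.28],
])
y_train = np.array(["swipe_left", "swipe_left", "swipe_right", "swipe_right"])

# A k-nearest-neighbour model is cheap to retrain interactively,
# which suits a workflow where the artist teaches the interface new gestures on the fly.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# Map each recognised gesture to a sound-module action, here a MIDI controller value.
gesture_to_control = {"swipe_left": 20, "swipe_right": 110}

new_frame = np.array([[0.12, 0.22, 0.05]])   # incoming gesture features
gesture = model.predict(new_frame)[0]
print(gesture, "->", gesture_to_control[gesture])
```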

Identifier: oai:union.ndltd.org:KSU/oai:krex.k-state.edu:2097/39341
Date: January 1900
Creators: Dasari, Sai Sandeep
Source Sets: K-State Research Exchange
Language: en_US
Detected Language: English
Type: Report
