1. Application of intermediate multi-agent systems to integrated algorithmic composition and expressive performance of music. Kirke, Alexis. January 2011.
We investigate the properties of a new Multi-Agent System (MAS) for computer-aided composition called IPCS (pronounced “ipp-siss”), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures through a novel multi-agent process. IPCS consists of a small-to-medium-sized collection (2 to 16) of agents in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an “artificial emotional state”) which affects how it performs music to other agents; e.g. a “happy” agent will perform “happier” music. An agent's performance not only involves compositional changes to the music but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and over the interaction period longer tunes are built through agent interaction. Agents will only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society thus grows its own tune during the interaction process. Agents develop “opinions” of other agents that perform to them, depending on how much the performing agent helps their tunes grow, and these opinions affect who they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features; rather, the agents use music itself to communicate emotions. In spite of the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melodic pitch sequences as a result of emotional communication between agents. The melodies also have a hierarchical structure that emerges from the social interaction structure of the multi-agent system. The interactive humanizations produce micro-timing and loudness deviations in the melody which are shown to express its hierarchical generative structure without the need for the structural analysis software frequently used in computer music humanization.
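Read as an algorithm, the interaction cycle above is straightforward to sketch. The toy Python version below is a minimal illustration only: the one-dimensional affect value, the transposition-based "performance", the similarity threshold, and the opinion weighting are all my own assumptions standing in for IPCS's actual compositional and expressive transformations.

```python
# A minimal sketch of an IPCS-style interaction cycle; names, thresholds,
# and the 1-D "affective state" are illustrative assumptions.
import random

class Agent:
    def __init__(self, name, pitch):
        self.name = name
        self.affect = random.uniform(-1.0, 1.0)   # artificial emotional state
        self.tune = [pitch]                        # starts as a single note
        self.opinions = {}                         # name -> cumulative score

    def perform(self):
        # "Happier" agents transpose upward: a stand-in for the real
        # compositional and expressive transformations.
        shift = round(self.affect * 4)
        return [p + shift for p in self.tune], self.affect

    def listen(self, performer_name, notes, affect, threshold=0.3):
        # Learn only if the performance's affect matches our own state.
        if abs(self.affect - affect) < threshold:
            self.tune.extend(notes)                # concatenate learned tune
            self.opinions[performer_name] = self.opinions.get(performer_name, 0) + 1
        else:
            self.opinions[performer_name] = self.opinions.get(performer_name, 0) - 1

def pick_partner(agent, others):
    # Prefer agents we hold a higher "opinion" of (clamped to stay positive).
    weights = [max(1, agent.opinions.get(o.name, 0) + 1) for o in others]
    return random.choices(others, weights=weights, k=1)[0]

agents = [Agent(f"a{i}", 60) for i in range(8)]    # a small society of 8 agents
for _ in range(200):                               # interaction period
    listener = random.choice(agents)
    performer = pick_partner(listener, [a for a in agents if a is not listener])
    notes, affect = performer.perform()
    listener.listen(performer.name, notes, affect)

print(max(len(a.tune) for a in agents), "notes in the longest emergent tune")
```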
2. Natural elasticity : Influence of over-spun woollen yarns on material expression. Scheller, Miriam. January 2023.
Exploring the expressive abilities of over-spun woollen-based knits. This project focused on the influence of over-spun woollen yarns in knitted material and on the overall material quality. The use of over-twisted yarns centres on reactive design and explores the colour rituals and patterns of the Igbo people of Nigeria. This exploration of heritage opened up new patterns and expressive material development methods, offering a regenerative alternative to fossil-fuel-based elastics. The exploration focused on the transformative possibilities of knitted textile tubes whose elasticity is activated by the body. Furthermore, it is defined through technical knits and a sensitivity to poetic aesthetics. This artistic and poetic expression is researched using colour and pattern. The colours used are based on the original colours of Uli, a Nigerian practice of body painting. The texture is inspired by the concept of ritual pollution, which is closely linked to the earth. The patterns are developed through drawings on the body, as employed in the practice of Uli, and then translated into jacquard knits. Close attention is paid to conveying dynamics and movement by utilizing opaque and non-opaque areas, focusing on elastic and non-elastic properties. The results show a great range in conveying adaptive colour, texture, and interactive pattern for the body through woollen-based elastic yarns. This project showcases the great potential of such yarns and encourages a rethinking of elastic materials.
3. Expressive Collaborative Music Performance via Machine Learning. Xia, Guangyu. 01 August 2016.
Techniques from Artificial Intelligence and Human-Computer Interaction have empowered computer music systems to perform with humans across a wide spectrum of applications. However, musical interaction between humans and machines is still far less musical than interaction between humans, since most systems lack any representation or capability of musical expression. This thesis contributes various techniques, especially machine-learning algorithms, to create artificial musicians that perform expressively and collaboratively with humans. The current system focuses on three aspects of expression in human-computer collaborative performance: 1) expressive timing and dynamics, 2) basic improvisation techniques, and 3) facial and body gestures. Timing and dynamics are the two most fundamental aspects of musical expression and the main focus of this thesis. We model the expression of different musicians as co-evolving time series. Based on this representation, we develop a set of algorithms, including a sophisticated spectral learning method, to discover regularities of expressive musical interaction from rehearsals. Given a learned model, an artificial performer generates its own musical expression by interacting with a human performer on a predefined score. The results show that, with a small number of rehearsals, machine learning can generate more expressive and human-like collaborative performance than a baseline automatic accompaniment algorithm. This is the first application of spectral learning in the field of music. Beyond expressive timing and dynamics, we consider basic improvisation techniques in which musicians have the freedom to interpret pitches and rhythms. We develop a model that trains a separate set of parameters for each individual measure, focusing on predicting the number of chords and the number of notes per chord. Given the model's prediction, an improvised score is decoded using nearest-neighbor search, which selects the training example whose parameters are closest to the estimate. Our results show that this model generates more musical, interactive, and natural collaborative improvisation than a reasonable baseline based on mean estimation. Although not conventionally considered to be “music,” body and facial movements are also important aspects of musical expression. We study body and facial expression using a humanoid saxophonist robot, contributing the first algorithm that enables a robot to perform an accompaniment for a musician and react to the human performance with gestural and facial expression. The current system uses a rule-based performance-to-motion mapping and separates robot motions into three groups: finger motions, body movements, and eyebrow movements. We also conduct the first subjective evaluation of the joint effect of automatic accompaniment and robot expression. Our results show that robot embodiment and expression enable more musical, interactive, and engaging human-computer collaborative performance.
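The nearest-neighbor decoding step for improvisation can be sketched directly from the description above. This is an illustrative Python fragment, not the thesis's code; the two-dimensional parameter vector (number of chords, mean notes per chord) follows the abstract, while the Euclidean distance and the toy data are assumptions.

```python
# A hedged sketch of nearest-neighbor decoding: the rehearsal measure whose
# stored parameters lie closest to the model's estimate is selected as the
# improvised output for the current measure.
import numpy as np

def decode_measure(predicted_params, training_params, training_measures):
    """Return the training measure whose parameters best match the prediction.

    predicted_params : (d,) model estimate for the current measure
    training_params  : (n, d) parameters of n rehearsal measures
    training_measures: list of n note/chord sequences
    """
    distances = np.linalg.norm(training_params - predicted_params, axis=1)
    return training_measures[int(np.argmin(distances))]

# Toy example: parameters = (number of chords, mean notes per chord).
train_p = np.array([[2, 3.0], [4, 2.5], [1, 4.0]])
train_m = [["C", "F"], ["C", "Am", "F", "G"], ["Cmaj7"]]
print(decode_measure(np.array([3.5, 2.4]), train_p, train_m))
```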
4. The Player as a Conductor : Utilizing an Expressive Performance System to Create an Interactive Video Game Soundtrack / Spelare som dirigent : Interaktiv datorspelsmusik genom att tillämpa ett system för expressivt musikframträdande. Lundh Haaland, Magnus. January 2020.
Music is commonly applied in art and entertainment to enhance the emotional experience. In video games and other non-linear media this task must be achieved dynamically at run-time, as the timeline of events is unknown in advance. Different techniques have been developed to solve this issue, but most commercial applications still rely on pre-rendered audio. In this study, I investigate using a Computer System for Expressive Music Performance (CSEMP) to dynamically shape a computer performance of a pre-composed track for a small platforming game. A prototype environment utilising the KTH Rule System was built and evaluated through semi-structured interviews and observations with 7 participants. The results suggest that changes in the musical performance can successfully reflect smaller changes in the experience, such as character movement, and are less effective for sound effects or more dramatic changes, such as when the player is engaging in combat or when the player loses. All participants preferred the interactive soundtrack over a non-interactive version of the same soundtrack. / Musik används ofta som ett komplement i konst och underhållning för att förstärka den känslomässiga upplevelsen. I datorspel och andra icke-linjära medier måste musiken ta på sig denna roll dynamiskt, eftersom det inte går att veta i förväg hur händelserna kommer att utfalla. För att lösa detta problem har olika tekniker utvecklats, men de flesta är fortfarande baserade på digitala inspelningar. I denna studie utforskar jag användningen av ett "Computer System for Expressive Music Performance" (CSEMP) för att dynamiskt forma datorns framträdande av en linjär komposition till ett enkelt plattformsspel. En prototyp baserad på "KTH Rule System" utvecklades och utvärderades genom semistrukturerade intervjuer och observationer med 7 deltagare. Resultaten visar att förändringar i uppspelningen lyckades spegla mindre förändringar i spelet, såsom hur en spelkaraktär rör sig, men var mindre effektiva för ljudeffekter och större förändringar, såsom när spelaren är i fara eller när spelet är över. Alla deltagare föredrog det interaktiva ljudspåret framför en icke-interaktiv version av samma ljudspår.
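A minimal sketch of how such a prototype might map run-time game state onto performance-rule parameters appears below. The rule names and numeric ranges are illustrative assumptions; the KTH Rule System defines its rules and rule quantities in far more detail than this toy mapping suggests.

```python
# A minimal sketch, assuming a KTH-style rule set exposed as scalar rule
# quantities: game state is mapped to tempo and sound-level parameters each
# frame. The mapping and values are illustrative, not the prototype's code.
def performance_params(player_speed, in_combat, max_speed=10.0):
    """Map run-time game state to expressive performance parameters."""
    activity = min(player_speed / max_speed, 1.0)
    return {
        "tempo_scale": 1.0 + 0.15 * activity,        # faster movement -> faster tempo
        "sound_level_db": -6.0 + 8.0 * activity,     # louder as activity rises
        "phrase_arch_k": 1.5 if in_combat else 1.0,  # exaggerate phrasing in combat
    }

print(performance_params(player_speed=7.5, in_combat=False))
```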
5. Musikaliskt framförande i midibaserad musik : En undersökning om hur två olika metoder för att komponera mididata påverkar upplevelsen av digital orkestral musik / Expressive performance in midi-based music : A study on how two different methods of composing midi data affect the listeners’ perception of digital orchestral music. Hägglund, Anders. January 2018.
With digital sound libraries, the sound of an orchestra is within reach, but how does one recreate the feeling of a real orchestra? Several methods exist for digitally recreating a human performance; two of the most common are real-time recording and computer simulation. Both methods can be used to imitate or recreate human characteristics in musical performances. This work compared these methods from the listener's perspective, to determine which method best serves composers of digital orchestral music. The study used a quantitative method in the form of an internet-based survey in which respondents answered questions and ranked their experience of the two methods. The results showed, among other things, that the overall experience of the two methods did not differ on average, but that there were trends linking listening habits to which method was preferred. The amount of data collected was not sufficient to draw firm conclusions; the work nevertheless shows tendencies and can serve as a basis for further research.
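The "computer simulation" method compared in this study can be illustrated with a toy humanizer that perturbs quantized MIDI data. This sketch is an assumption-laden stand-in, not the study's stimulus-generation code; the deviation magnitudes are invented for illustration.

```python
# A sketch of simulation-based MIDI humanization: small random deviations
# in onset time and velocity are applied to quantized notes.
import random

def humanize(notes, timing_sd=0.015, velocity_sd=6):
    """notes: list of (onset_seconds, pitch, velocity) tuples."""
    out = []
    for onset, pitch, vel in notes:
        onset += random.gauss(0.0, timing_sd)               # micro-timing jitter
        vel = max(1, min(127, round(vel + random.gauss(0.0, velocity_sd))))
        out.append((max(0.0, onset), pitch, vel))
    return out

quantized = [(0.0, 60, 80), (0.5, 64, 80), (1.0, 67, 80)]
print(humanize(quantized))
```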
6. Human-informed robotic percussion renderings: acquisition, analysis, and rendering of percussion performances using stochastic models and robotics. Van Rooyen, Robert Martinez. 19 December 2018.
A percussion performance by a skilled musician will often extend beyond the written score in terms of expressiveness. This is clearly evident when comparing a human performance with one rendered by some form of automaton that strictly follows a transcription. Although music notation enforces a significant set of constraints, it is the responsibility of the performer to interpret the piece and “bring it to life” in the context of the composition and style, and perhaps with a historical perspective. In this sense, the sheet music serves as a general guideline upon which to build a credible performance that can carry a myriad of subtle nuances. Variations in attributes such as timing, dynamics, and timbre all contribute to the quality of the performance and make it unique within a population of musicians. The ultimate goal of this research is to gain a greater understanding of these subtle nuances while developing a set of stochastic motion models that can approximate similarly minute variations in multiple dimensions on a purpose-built robot. Live or recorded motion data and algorithmic models drive an articulated, robust, multi-axis mechatronic system that can render a unique and aurally pleasing performance comparable to its human counterpart on the same percussion instruments. By utilizing a non-invasive and flexible design, the robot can use any type of drum along with different types of striking implements to achieve an acoustic richness that would be hard, if not impossible, to capture through sampling or sound synthesis. The thesis follows the course of this research by introducing the high-level topics and providing an overview of related work. Next, a systematic method for gesture acquisition over a set of well-defined percussion scores is introduced, followed by an analysis used to derive a set of requirements for motion control and its associated electromechanical subsystems. A detailed multidisciplinary engineering effort is then described, culminating in a robotic platform design within which the stochastic motion models can be utilized. An analysis evaluates the characteristics of the robotic renderings compared to human reference performances. Finally, the thesis concludes by highlighting a set of contributions as well as topics that can be pursued in the future to advance percussion robotics.
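One way to read the "stochastic motion models" described above is as distributions fitted to captured human deviations and then sampled at rendering time. The Gaussian form, the restriction to onset deviations alone, and the toy data below are illustrative assumptions rather than the thesis's actual models.

```python
# A hedged sketch: per-stroke timing deviations are drawn from a normal
# distribution fitted to captured human performances of the same score.
import random
import statistics

def fit_deviation_model(human_onsets, score_onsets):
    """Fit a normal distribution to human-vs-score onset deviations."""
    devs = [h - s for h, s in zip(human_onsets, score_onsets)]
    return statistics.mean(devs), statistics.stdev(devs)

def render_stroke_times(score_onsets, mean_dev, sd_dev):
    """Sample robot strike times around the score with human-like spread."""
    return [t + random.gauss(mean_dev, sd_dev) for t in score_onsets]

score = [0.0, 0.25, 0.5, 0.75, 1.0]                  # notated onsets (seconds)
human = [0.004, 0.262, 0.497, 0.741, 1.008]          # captured human onsets
mu, sd = fit_deviation_model(human, score)
print(render_stroke_times(score, mu, sd))
```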
7. Modélisation de l'interprétation des pianistes & applications d'auto-encodeurs sur des modèles temporels [Modeling pianists' interpretation & applications of auto-encoders to temporal models]. Lauly, Stanislas. 04 1900.
Ce mémoire traite d'abord du problème de la modélisation de l'interprétation des pianistes à l'aide de l'apprentissage machine. Il s'occupe ensuite de présenter de nouveaux modèles temporels qui utilisent des auto-encodeurs pour améliorer l'apprentissage de séquences.
Dans un premier temps, nous présentons le travail préalablement fait dans le domaine de la modélisation de l'expressivité musicale, notamment les modèles statistiques du professeur Widmer. Nous parlons ensuite de notre ensemble de données, unique au monde, qu'il a été nécessaire de créer pour accomplir notre tâche. Cet ensemble est composé de 13 pianistes différents enregistrés sur le fameux piano Bösendorfer 290SE. Enfin, nous expliquons en détail les résultats de l'apprentissage de réseaux de neurones et de réseaux de neurones récurrents. Ceux-ci sont appliqués sur les données mentionnées pour apprendre les variations expressives propres à un style de musique.
Dans un deuxième temps, ce mémoire aborde la découverte de modèles statistiques expérimentaux qui impliquent l'utilisation d'auto-encodeurs sur des réseaux de neurones récurrents. Pour pouvoir tester la limite de leur capacité d'apprentissage, nous utilisons deux ensembles de données artificielles développées à l'Université de Toronto. / This thesis first addresses the problem of modeling pianists' interpretations using machine learning, and then presents new temporal models that use auto-encoders to improve sequence learning.
We present previous work in the field of modeling musical expression, including Professor Widmer's statistical models. We then discuss our unique dataset created specifically for our task. This dataset is composed of 13 different pianists recorded on the famous Bösendorfer 290SE piano. Finally, we present the learning results of neural networks and recurrent neural networks in detail. These algorithms are applied to the dataset to learn expressive variations specific to a style of music.
We also present novel statistical models involving the use of auto-encoders in recurrent neural networks. To test the limits of these algorithms' ability to learn, we use two artificial datasets developed at the University of Toronto.
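A minimal sketch of the kind of recurrent model the abstract describes, predicting expressive deviations from score features, might look as follows in PyTorch. The architecture sizes, the single velocity-deviation output, and the training loop are all illustrative assumptions, not the models developed in the thesis.

```python
# A hedged sketch of an RNN that reads score features and predicts an
# expressive deviation (e.g. a velocity offset) for each note in sequence.
import torch
import torch.nn as nn

class ExpressiveRNN(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # predicted velocity deviation

    def forward(self, x):                   # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)     # (batch, time)

model = ExpressiveRNN()
score = torch.randn(1, 16, 4)               # 16 notes, 4 score features each
target = torch.randn(1, 16)                 # observed expressive deviations
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                          # tiny illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(score), target)
    loss.backward()
    opt.step()
print(float(loss))
```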
8. Cognitive and Theoretical Analyses of Expressive Performance Choices. Trevor, Caitlyn M. January 2018.
No description available.