  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Música eletroacústica no estado de São Paulo = segunda geração (anos 1981-2009) / Electroacoustic music in the state of São Paulo : the second generation (years 1981-2009)

Mamedes, Clayton Rosa, 1983- 08 October 2010
Advisor: Denise Hortência Lopes Garcia / Master's dissertation - Universidade Estadual de Campinas, Instituto de Artes / Abstract: This research presents a panorama of electroacoustic music production in the State of São Paulo between the 1990s and the 2000s, focusing on the work of twelve composers of what we consider the first generation to employ the computer as a basic compositional support tool. We studied the production of the composers José Augusto Mannis, Flo Menezes, Edson Zampronha, Denise Garcia, Rodolfo Coelho de Souza, Lívio Tragtenberg, Lelo Nazário, Fernando Iazzetta, Sílvio Ferraz, Ignacio de Campos, Wilson Sukorski and Jônatas Manzolli. The dissertation focuses on characteristic works by these composers, through which we locate their specific areas of activity within the electroacoustic genre and classify them according to the most prominent medium in their respective outputs, thereby building a concise picture of the period. / Master's degree / Creative Processes / Master in Music
182

Live coding: um algoritmo gerador de uma sonoridade tonal em A Study in Keith (2009) de Andrew Sorensen / Live coding: an algorithm generating a tonal sonority in Andrew Sorensen's A Study in Keith (2009)

Lunhani, Guilherme Martins 31 March 2016
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais / This document presents a synthesized account of a versatile technique known as live coding, its historical development in music, and a simulation of tonal improvisation guided by improvisation with programming languages. In the Introduction (see p. xv) we present a definition of live coding; the definition emphasizes music-making but does not exclude other artistic possibilities. In Chapter 1 (see p. 1) we highlight a creative mechanism of this technique in two non-musical contexts. In Chapter 2 (see p. 13) we list periods of musical activity that prototyped and formalized the creative mechanism of the first chapter. In Chapter 3 (see p. 31) we analyze a musical proposition, a video by Sorensen and Swift entitled A Study in Keith (2009), according to the mental mechanism of the first chapter. The contribution of this work to Brazilian musicology is the historiographical organization of a technique still little documented in Portuguese.
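To make the idea of an algorithm generating a tonal sonority more concrete, here is a toy sketch in Python. It is emphatically not Sorensen's code (A Study in Keith is live-coded in a Scheme-dialect environment, Impromptu/Extempore); it only illustrates the kind of looping, chord-driven note generator a live coder might redefine while it runs, and every name in it is hypothetical.

```python
import random
import time

# Toy, hypothetical sketch of a live-coded tonal pattern generator.
# A ii-V-I progression in C major, as MIDI pitches (an assumption for
# illustration only).
PROGRESSION = [
    ("Dm7", [62, 65, 69, 72]),
    ("G7", [67, 71, 74, 77]),
    ("Cmaj7", [60, 64, 67, 71]),
]

def play(pitch, dur):
    # Stand-in for a real sound backend (e.g. sending MIDI or OSC messages).
    print(f"note {pitch} for {dur:.2f}s")
    time.sleep(dur)

def improvise(bars=2):
    """Arpeggiate each chord, occasionally adding a random chord tone."""
    for _ in range(bars):
        for name, chord in PROGRESSION:
            print(f"-- {name}")
            for pitch in chord:
                play(pitch, 0.2)
            # A small dose of indeterminacy, as in improvisation.
            play(random.choice(chord) + 12, 0.4)

if __name__ == "__main__":
    improvise()
```

In a live-coding setting the performer would edit PROGRESSION or improvise() while the loop keeps running; here the sketch simply runs once.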
183

Singing the body electric : Understanding the role of embodiment in performing and composing interactive music

Einarsson, Anna January 2017
Almost since the birth of electronic music, composers have been fascinated by the prospect of integrating the human voice, with its expressiveness and complexity, into electronic musical works. This thesis addresses how performing with responsive technologies in mixed works, i.e. works that combine an acoustic sound source with a digital one, is experienced by participating singers, adopting an approach of seamlessness, of a zero (or invisible) interface between singer and computer technology. It demonstrates how the practice of composing and the practice of singing are both embodied activities, where the many-layered situation in all its complexity is of great importance for a deepened understanding. The overall perspective put forward in this thesis is that of music as a sounding body to resonate with, where the resonance, a process of embodying, of feeling and emotion, guides the decision-making. The core of the investigation is the lived experience gained through the process of composing and performing three musical works. One result emerging from this process is the suggested method of calibration, according to which a bodily rooted attention forms a kind of joint attention towards the work in the making. Experiences from these three musical works culminate in the formulation of an overarching framework entailing a view of musical composition as a process of construction, and embodied mental simulation, of situations whose dynamics unfold to engage musicians and audience through shifting fields of affordances, based on a shared landscape of affordances.
184

Critical Discussion of Pleroma: A Digital Drama and Its Relevance to Tragic Form in Music

Lucas, Stephen, 1985-
Pleroma is a digital drama: a work composed of digital animation combined with electroacoustic music, presenting an original dramatic narrative. Pleroma's dramatic elements evoke both the classical form of tragedy and the concept of perceptual paradox. A structural overview of the drama and its characters and a plot synopsis are given to provide context for the critical discussion. Analytical descriptions of Beethoven's Coriolan Overture Op. 62 and Mahler's Symphony No. 9 are provided to give background on tragic form and Platonic allegory in music. An investigation into the elements discussed in the analysis of the instrumental works reveals several layers of possible interpretation in Pleroma. Dramatic elements allow for tragic narratives to be constructed, but they are complemented by character associations formed by pitch relationships, stylistic juxtapositions, and instrumentation. A copy of the dramatic text is included to supplement the multimedia production.
185

GranCloud: A real-time granular synthesis application and its implementation in the interactive composition Creo.

Lee, Terry Alan
GranCloud is a new application for real-time granular synthesis in the SuperCollider programming environment. Although the software was initially programmed for use in the interactive composition Creo, it was implemented as an independent program for use in any computer music project. GranCloud consists of a set of SuperCollider classes representing granular clouds and parameter objects defining control data for the synthesis. The software is very flexible, allowing users to create their own grain synthesis definitions and control parameters. Cloud objects encapsulate all of the control data and methods necessary to render virtually any type of granular synthesis. Parameter objects provide several simple methods for mapping grain parameters to complex changing data sets or to external data sources. GranCloud simplifies the complex task of generating granular synthesis, allowing composers to focus less on technological issues and more on musical considerations during the compositional process.
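To give a rough feel for the cloud/parameter-object idea described above, here is a minimal sketch in Python rather than SuperCollider; it does not use GranCloud's actual class API (all names are hypothetical) and simply sums short enveloped sine grains whose parameters are drawn from controllable ranges.

```python
import numpy as np

SR = 44100  # sample rate

def grain(freq, dur, sr=SR):
    """One grain: a sinusoid shaped by a Hann window."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def cloud(length=4.0, density=80, freq_range=(200.0, 2000.0),
          grain_dur=(0.02, 0.08), sr=SR, seed=0):
    """Scatter `density` grains per second over `length` seconds.

    The keyword arguments play the role of parameter objects: each one
    could be replaced by a function of time or an external data stream
    instead of a fixed range.
    """
    rng = np.random.default_rng(seed)
    out = np.zeros(int(sr * length))
    for _ in range(int(density * length)):
        g = grain(rng.uniform(*freq_range), rng.uniform(*grain_dur), sr)
        start = rng.integers(0, out.size - g.size)
        out[start:start + g.size] += 0.1 * g
    return out / max(1.0, np.abs(out).max())  # normalize

audio = cloud()  # a 4-second granular texture as a NumPy array
```

In a system like the one described, each grain parameter would be exposed as its own controllable object rather than a fixed keyword argument; the sketch only shows the underlying grain-scattering principle.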
186

Strategies for the Creation of Spatial Audio in Electroacoustic Music

Smith, Michael Sterling
This paper discusses technical and conceptual approaches to incorporate 3D spatial movement in electroacoustic music. The Ambisonic spatial audio format attempts to recreate a full sound field (with height information) and is currently a popular choice for 3D spatialization. While tools for Ambisonics are typically designed for the 2D computer screen and keyboard/mouse, virtual reality offers new opportunities to work with spatial audio in a 3D computer generated environment. An overview of my custom virtual reality software, VRSoMa, demonstrates new possibilities for the design of 3D audio. Created in the Unity video game engine for use with the HTC Vive virtual reality system, VRSoMa utilizes the Google Resonance SDK for spatialization. The software gives users the ability to control the spatial movement of sound objects by manual positioning, a waypoint system, animation triggering, or through gravity simulations. Performances can be rendered into an Ambisonic file for use in digital audio workstations. My work Discords (2018) for 3D audio facilitates discussion of the conceptual and technical aspects of spatial audio for use in musical composition. This includes consideration of human spatial hearing, technical tools, spatial allusion/illusion, and blending virtual/real spaces. The concept of spatial gestures has been used to categorize the various uses of spatial motion within a musical composition.
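As a point of reference for the Ambisonic format mentioned above, first-order B-format encoding of a mono source can be sketched as four direction-dependent gains; this is a minimal textbook-style illustration, not the Google Resonance or VRSoMa implementation.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """First-order Ambisonic (B-format) encoding of a mono signal.

    W is the omnidirectional component (scaled by 1/sqrt(2) in the
    classic FuMa convention); X, Y and Z carry the directional
    information. Azimuth 0 = front, positive to the left; elevation
    positive upward.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))
    x = mono * np.cos(az) * np.cos(el)
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    return np.stack([w, x, y, z])  # shape (4, n_samples)

# Example: place one second of noise 45 degrees to the left, slightly above.
sig = np.random.default_rng(0).standard_normal(44100) * 0.1
bformat = encode_foa(sig, azimuth_deg=45, elevation_deg=20)
```

Decoding the B-format signal to a given loudspeaker layout (or binaurally for headphones) is a separate step handled by tools such as those discussed in the paper.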
187

Summer Rain, Part I: Summer Rain - Dawn for Two-channel Tape; Part II: After the Summer Rain for Piano and Two-channel Tape

Kawamoto, Hideko
This dissertation contains five chapters: 1. Introduction, 2. Basic Digital Processing Used in Summer Rain, 3. Part I Summer Rain - Dawn, 4. Part II After the Summer Rain, and 5. Conclusion. The Introduction gives a brief historical background of musique concrète, Elektronische Musik, acousmatic music and music for instruments and tape. Chapter 2 provides basic descriptions of the digital techniques used in both parts of Summer Rain and describes the software involved, including "Kawamoto's VST," which is based on MAX/MSP and is used to create new sounds from recorded samples on a Macintosh computer. In Chapters 3 and 4, Kawamoto discusses the pre-compositional stage of each piece in detail, including inspirational sources, especially Rainer Maria Rilke's poems and Odilon Redon's paintings, as well as her visual and sound imagery. In addition, Chapter 3 discusses sound sources, pitch, form and soundscape, while Chapter 4 contains an analysis of pitch in the piano part, rhythm, form and general performance practice. Chapter 5 is a short conclusion on her aesthetics regarding Summer Rain, which are connected to literature, visual art and her Japanese cultural background.
188

Aesthetic and Technical Analysis on Soar!

Wang, Hsiao-Lan
Soar! is a musical composition written for wind ensemble and computer music. The total duration of the work is approximately 10 minutes. The flocking behavior of migratory birds serves as the most prominent influence on the imagery and local structure of the composition, and the cyclical nature of the birds' journey inspires palindromic designs in the temporal domain. Aesthetically, Soar! portrays the fluid shapes of the flocks with numerous grains in the sounds, an effect achieved by giving individual parts a high degree of independence, especially with regard to rhythm. Technically, Soar! explores various interactions among instrumental lines in a wind ensemble, constructs overarching symmetrical structures, and integrates a large ensemble with computer music. The conductor acts as the leader at several improvisational moments in Soar!; the use of conductor-initiated musical events in the piece can be traced back through the historic lineage of aleatoric composition since the middle of the twentieth century. [The score appears on pp. 54-92.]
189

Guiding Human-Computer Music Improvisation : introducing Authoring and Control with Temporal Scenarios / Guider ou composer l'improvisation musicale homme-machine à l'aide de scénarios temporels

Nika, Jérôme 16 May 2016
This thesis focuses on the introduction of authoring and control in human-computer music improvisation through the use of temporal scenarios to guide or compose interactive performances, and addresses the dialectic between planning and reactivity in interactive music systems dedicated to improvisation. An interactive system dedicated to music improvisation generates music on the fly, in relation to the musical context of a live performance. We focus here on pulsed and idiomatic music relying on a formalized and temporally structured object, for example a harmonic progression in jazz improvisation; in the same way, the models and architecture we developed rely on a formal temporal structure. The thesis thus presents: a music generation model guided by a "scenario" introducing anticipatory behaviors; an architecture combining this anticipation with reactivity using mixed static/dynamic scheduling techniques; an audio rendering module that performs live re-injection of captured material in synchrony with a non-metronomic beat; and a framework to compose improvised interactive performances at the "scenario" level. This work fully integrated frequent interactions with expert musicians into the iterative design of the models and architectures. The latter are implemented in the interactive music system ImproteK, which has been used on numerous occasions in live performances with improvisers. During these collaborations, work sessions were combined with listening sessions and interviews to gather the musicians' judgments in order to validate and refine the scientific and technological choices.
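A toy sketch can convey the flavor of scenario-guided generation; it is a drastic simplification with hypothetical data structures, not the model actually used in ImproteK. At each step of the scenario (here, a chord chart), the generator picks a memory event with a matching label, preferring the event that directly continues the previously used one so that longer coherent phrases are reused.

```python
import random

# Toy, hypothetical sketch of scenario-guided generation (not ImproteK).
# A "memory" is a list of (label, content) events captured from past playing;
# the "scenario" is the sequence of labels (e.g. a chord chart) to follow.
Memory = list[tuple[str, str]]

def generate(scenario: list[str], memory: Memory, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    output, last_index = [], None
    for label in scenario:
        candidates = [i for i, (lab, _) in enumerate(memory) if lab == label]
        if not candidates:
            output.append(f"<silence on {label}>")
            last_index = None
            continue
        # Prefer the event that directly continues the previous one,
        # which favors reusing longer coherent phrases.
        if last_index is not None and last_index + 1 in candidates:
            choice = last_index + 1
        else:
            choice = rng.choice(candidates)
        output.append(memory[choice][1])
        last_index = choice
    return output

memory = [("Dm7", "phrase-a"), ("G7", "phrase-b"), ("Cmaj7", "phrase-c"),
          ("G7", "phrase-d"), ("Cmaj7", "phrase-e")]
scenario = ["Dm7", "G7", "Cmaj7", "Cmaj7"]
print(generate(scenario, memory))
```

The real system additionally handles anticipation over the whole remaining scenario, reactive rescheduling, and beat-synchronous audio re-injection, none of which this sketch attempts.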
190

Sound synthesis with cellular automata

Serquera, Jaime January 2012
This thesis reports on new music technology research investigating the use of cellular automata (CA) for the digital synthesis of dynamic sounds. The research addresses the sound design limitations of synthesis techniques based on CA, limitations that fundamentally stem from the unpredictable and autonomous nature of these computational models. The aim of this thesis is therefore to develop a sound synthesis technique based on CA that is capable of supporting a sound design process. A critical analysis of previous research in this area is presented to show that this problem has not been solved before and to argue why it is worth solving. To achieve this aim, a novel approach is proposed that treats the output of CA as digital signals and uses DSP procedures to analyse them. This approach opens up a wide range of possibilities for better understanding the self-organization process of CA, with a view to identifying not only mappings that make sound synthesis possible, but also controls that enable a sound design process. As a result of this approach, the thesis presents a technique called Histogram Mapping Synthesis (HMS), which is based on the statistical analysis of CA evolutions by histogram measurements. HMS is studied with four different automata, and a considerable number of control mechanisms are presented, showing that HMS enables a reasonable sound design process. With these control mechanisms it is possible to design and produce a variety of timbres in a predictable and controllable manner. Some of these timbres imitate sounds produced by acoustic means and others are novel. All the sounds obtained present dynamic features, and many of them, including some of the novel ones, retain important characteristics of sounds produced by acoustic means.
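As a loose illustration of the histogram-mapping idea, and not the HMS technique as actually defined in the thesis, the following sketch evolves an elementary cellular automaton, reduces each row to a coarse density profile standing in for a histogram measurement, and maps those values to the amplitudes of an additive-synthesis partial bank.

```python
import numpy as np

SR = 44100

def evolve_ca(rule=110, width=128, steps=200, seed=1):
    """Elementary 1-D cellular automaton; returns a (steps, width) array."""
    rng = np.random.default_rng(seed)
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    rows = [rng.integers(0, 2, width, dtype=np.uint8)]
    for _ in range(steps - 1):
        r = rows[-1]
        # Neighborhood code: left*4 + center*2 + right, looked up in the rule.
        idx = (np.roll(r, 1) << 2) | (r << 1) | np.roll(r, -1)
        rows.append(rule_bits[idx])
    return np.array(rows)

def histogram_mapping_synthesis(ca, n_partials=16, step_dur=0.05, f0=110.0):
    """Map per-step density profiles of the CA to partial amplitudes.

    This is a simplified stand-in for HMS: each CA row is summarized by
    block counts of active cells, and those counts drive an additive
    oscillator bank for one short frame.
    """
    out = []
    t = np.arange(int(SR * step_dur)) / SR
    for row in ca:
        blocks = row.reshape(n_partials, -1).sum(axis=1)
        amps = blocks / max(1, blocks.max())
        frame = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                    for k, a in enumerate(amps))
        out.append(frame / n_partials)
    return np.concatenate(out)

audio = histogram_mapping_synthesis(evolve_ca())  # ~10 s of evolving timbre
```

The point of the sketch is only the mapping chain (CA evolution, statistical summary, synthesis parameters); the thesis's actual measurements, automata and control mechanisms are considerably richer.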
