1 |
Visualizing temporality in music: music perception – feature extraction
Hamidi Ghalehjegh, Nima, 01 August 2017 (has links)
Recently, there have been efforts to design more efficient ways to internalize music by applying the disciplines of cognition, psychology, temporality, aesthetics, and philosophy. Bringing together art and science, computational techniques can also be applied to musical analysis. Although a wide range of research projects have been conducted, the automation of music analysis remains an emergent area. Importantly, patterns are revealed by using automated tools to analyze core musical elements created from melodies, harmonies, and rhythms, high-level features that are perceivable by the human ear. For music to be captured and successfully analyzed by a computer, however, one needs to extract certain information found in the lower-level features of amplitude, frequency, and duration. Moreover, while the identity of harmonic progressions, melodic contour, musical patterns, and pitch quantification are crucial factors in traditional music analysis, these alone are not exhaustive. Visual representations are useful tools that reflect the form and structure of non-conventional musical repertoire.
Because I regard the fluidity of music and visual shape as strongly interactive, the ultimate goal of this thesis is to construct a practical tool that prepares the visual material used for musical composition. By utilizing concepts of time, computation, and composition, this tool effectively integrates computer science, signal processing, and music perception. This is achieved by presenting two concepts, one abstract and one mathematical, that provide materials leading to the original composition. To extract the desired visualization, I propose a fully automated tool for musical analysis that is grounded in both the mid-level elements of loudness, density, and range, and the low-level features of frequency and duration. As evidenced by my sinfonietta, Equilibrium, this tool, capable of rapidly analyzing a variety of musical examples such as instrumental repertoire, electro-acoustic music, improvisation, and folk music, is highly beneficial to my proposed compositional procedure.
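The low-level features named in this abstract (amplitude, frequency, duration) can be approximated with very simple signal-level computations. The following is an illustrative stand-in, not the thesis's actual tool: RMS amplitude as a loudness proxy, and a zero-crossing frequency estimate that is only reliable for near-periodic input.

```python
import math

def rms_loudness(samples):
    """Root-mean-square amplitude: a simple low-level loudness proxy."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_freq(samples, sample_rate):
    """Frequency estimate from zero-crossing count (near-periodic input only)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0 <= b) or (b < 0 <= a))
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# One second of a 440 Hz sine tone sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]

print(round(rms_loudness(tone), 3))        # RMS of a unit-amplitude sine is ~0.707
print(round(zero_crossing_freq(tone, sr))) # close to 440
```

Real analysis tools would use windowed spectral methods rather than zero crossings, but the sketch shows how little signal machinery the lowest-level features require.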
|
2 |
Italian polka
Yeh, Chin-Hua, 23 October 2014 (has links)
Italian Polka is an experiment that builds a bridge between Music and the field of Costume Design. It explores new relationships of integration and artistic possibility among Music, Costume Design, Dance, and Digital Art. It is also an attempt to participate in a new form of performing art: a combination of a live concert and a costume show.
|
3 |
Metáforas visuais alternativas para layouts gerados por projeções multidimensionais: um estudo de caso na visualização de músicas / Alternative visual metaphors for layouts generated by multidimensional projections: a case study in visualization of music
Vargas, Aurea Rossy Soriano, 09 May 2013 (has links)
The layouts generated by multidimensional projection techniques can be the basis for different visualization metaphors that are applicable to various data types. There is much interest in investigating alternatives to the point-cloud metaphor commonly used to present projection layouts. In this work, we investigated this problem, targeting the domain of music visualization. Many dimensions are involved in the perception and manipulation of music, and it is therefore difficult to find an intuitive computational model to represent it. Our goal was to investigate visual representations capable of conveying the musical structure of a song, as well as displaying a collection of songs so as to highlight their similarities. The proposed solution consists of an iconic representation for individual songs, associated with the spatial positioning of groups or collections of songs generated by a multidimensional projection technique that reflects their structural similarity. Both the projection and the icon require a feature-vector representation of the music. The features are extracted from MIDI files, as the nature of MIDI descriptions allows the identification of the relevant musical structures. These features provide the input both for the dissimilarity comparison and for constructing the music icon. The spatial layout is computed with the Least Square Projection (LSP) technique, and similarities are computed using the Dynamic Time Warping (DTW) distance. The icon provides a visual summary of the chord repetitions in a particular song.
We describe the process of generating these visual representations, describe a system that implements such functionalities, and illustrate how they can support exploratory tasks on music collections, identifying possible usage scenarios.
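For readers unfamiliar with the DTW distance mentioned above, the textbook dynamic-programming formulation over scalar feature sequences can be sketched in a few lines. This is a generic illustration; the thesis's actual feature vectors and implementation may differ.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two scalar feature sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = minimal accumulated cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# Warping absorbs local tempo differences: the repeated value costs nothing.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

This tolerance to local stretching is what makes DTW a natural dissimilarity measure for songs whose corresponding sections unfold at different rates.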
|
5 |
Audible Images: síntese de imagens controladas por áudio / Audible Images: a system for audio controlled image synthesis
Martins, Mariana Zaparolli, 15 February 2008 (has links)
This thesis describes the AIM library, a Pd object library that combines audio analysis and image synthesis tools for generating visual accompaniments to musical input data. The user establishes connections that determine how musical parameters affect the synthesis of graphical objects, and controls these connections in real time during performance. The library combines a straightforward communication protocol for exchanging musical and visual parameters with an easy-to-use interface, making it accessible to users with no computer programming experience. Its potential application areas include children's musical education and the entertainment industry.
|
6 |
Application of Text-Based Methods of Analysis to Symbolic Music
Wolkowicz, Jacek Michal, 20 March 2013 (has links)
This dissertation presents methods for analyzing symbolic music, focused on n-gram-based approaches, since this representation most closely resembles text in natural languages. An analysis of similarities between several text and music corpora is accompanied by implementations of text-based methods for composer classification and symbolic music similarity. Both problems include a thorough evaluation of system performance, with comparisons to other approaches on existing testbeds. It is also described how this symbolic representation can be used in conjunction with genetic algorithms to tackle problems such as melody generation; the proposed method is fully automated and utilizes n-gram statistics from a sample corpus. A method for visualizing complex symbolic music pieces is also presented. It consists of creating a self-similarity matrix of the piece in question, revealing dependencies between voices, themes, and sections, as well as the music's structure. A fully automatic technique for inferring musical structure from these similarity matrices is also presented. The proposed structure analysis system is compared against similar approaches that operate on audio data; the evaluation shows that it significantly outperformed all audio-based algorithms available for comparison in both precision and recall.
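As an illustration of the text-like n-gram representation (not the dissertation's exact feature set), melodic intervals can serve as transposition-invariant "words": the same motif stated at different pitch levels yields identical n-grams.

```python
from collections import Counter

def interval_ngrams(pitches, n=2):
    """Count n-grams of melodic intervals: transposition-invariant 'words'."""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    return Counter(tuple(intervals[i:i + n])
                   for i in range(len(intervals) - n + 1))

# The same motif stated twice, the second time a fifth higher (MIDI pitches):
melody = [60, 62, 64, 60, 67, 69, 71, 67]
profile = interval_ngrams(melody, n=2)
print(profile[(2, 2)])  # the rising whole-tone pair appears in both statements: 2
```

Such n-gram profiles can then feed standard text-retrieval machinery (tf-idf weighting, cosine similarity) for tasks like composer classification.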
|
8 |
Real Time Music Visualization: A Study in the Visual Extension of Music
Bain, Matthew N., 24 June 2008 (has links)
No description available.
|
9 |
Real-time visual feedback of emotional expression in singing
Fu, Xuehua, January 2023 (has links)
This thesis project concerns the development and evaluation of a real-time music visualization system aimed at creating a multi-modal perceptual experience of musical emotion. The purpose of the project is to provide singers with real-time visual feedback on their singing, to enhance their expression of the emotions in the music. Building on results from previous studies of emotional expression in music, crossmodal correspondences, and associations among sound, shape, color, and emotion, a singing-voice visualization system is proposed that generates real-time graphics to reflect the emotional expression in the input singing in an intuitive fashion. A mapping between musical and visual features was established, and the setting of its polarities was tested in a user study. The system was developed as software for personal computers, using Pure Data and Unity; this implementation allows instantaneous feedback to the user during singing. The mapping was evaluated in a user study in which participants engaged in expressive singing to test the system, in order to assess the meaningfulness of the visual feedback, the effectiveness of the mapping, and the impact of the polarity. The results show that color, as a strong visual cue of emotional expression, provided meaningful information on some participants' expression of typical happiness and sadness. Other cues of the visual feedback possibly enhanced some participants' emotional expression in singing in an indirect way. The polarity had a noticeable impact on the perception of the visual feedback. The current study is limited by the reliability of the techniques used for extracting acoustic features from real-time singing, particularly in the detection of attack speed. The evaluation was also limited by the broad definition of one of the research questions.
The findings of this study suggest potential applications of the singing-voice visualization system in music education, art, and entertainment. Additionally, the research highlights the need for further exploration and refinement in the design of the mapping and in the evaluation methodology.
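A minimal sketch of the kind of emotion-to-visual mapping described above, using the standard valence/arousal framing. The specific hues, polarities, and parameter ranges here are illustrative assumptions; they are not the mapping evaluated in the thesis.

```python
import colorsys

def emotion_to_color(valence, arousal):
    """Map a (valence, arousal) estimate in [-1, 1] to an RGB triple.

    Warm hue for positive valence, cool hue for negative; arousal drives
    brightness. These polarities are assumptions for illustration only.
    """
    hue = 0.08 if valence >= 0 else 0.6        # warm orange vs. cool blue
    value = 0.4 + 0.6 * (arousal + 1) / 2      # higher arousal -> brighter
    return colorsys.hsv_to_rgb(hue, 0.8, value)

happy = emotion_to_color(0.8, 0.7)    # bright and warm
sad = emotion_to_color(-0.7, -0.6)    # dim and cool
```

In a real-time system, the (valence, arousal) inputs would come from acoustic features of the live singing (e.g., loudness and tempo proxies), and the polarity of each axis is exactly the kind of design choice the thesis's user study probes.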
|
10 |
[en] REAL-TIME 3D ANIMATION WITH HARMONIC AND MODAL ANALYSES / [pt] ANIMAÇÃO 3D EM TEMPO REAL COM ANÁLISES HARMÔNICAS E MODAL
CLARISSA CODA DOS SANTOS CAVALCANTI MARQUES, 07 August 2013 (has links)
[en] Animation of three-dimensional characters is still a mostly manual process. Applications such as computer games and motion capture for special effects in movies require continuous intervention from the artist, who needs to guide the movement almost step by step. In such examples the available tools provide controls mainly over local details, either in space or in time. This thesis uses two analytical frameworks to deal with the process of animation, harmonic and modal analyses, allowing the description of movements with a reduced set of controls. A GPU implementation of the resulting animations allows them to be rendered in real time. In particular, it enables applications such as interactive tuning of control parameters through design galleries animated in real time, and three-dimensional music visualization in which the frequencies of the music guide the animation.
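One hedged sketch of how "music frequencies guide the animation" could work: naive DFT band energies, where each band's energy might weight one vibration mode of the character (low band driving the first mode, and so on). The band edges and the band-to-mode assignment are assumptions for illustration; the thesis's GPU pipeline is not reproduced here.

```python
import cmath
import math

def band_energies(samples, sample_rate,
                  bands=((0, 200), (200, 800), (800, 2000))):
    """Sum naive DFT magnitudes per frequency band (illustrative band edges)."""
    n = len(samples)
    # Magnitude spectrum up to Nyquist, O(n^2) on purpose for clarity.
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) / n
                for k in range(n // 2)]
    hz_per_bin = sample_rate / n
    return [sum(spectrum[k]
                for k in range(int(lo / hz_per_bin),
                               min(int(hi / hz_per_bin), n // 2)))
            for lo, hi in bands]

# A 100 Hz tone puts nearly all its energy in the lowest band,
# which would excite mainly the first vibration mode.
tone = [math.sin(2 * math.pi * 100 * t / 4000) for t in range(400)]
low, mid, high = band_energies(tone, 4000)
```

A real-time implementation would replace the quadratic DFT with an FFT and smooth the band energies over time before feeding them to the modal amplitudes.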
|