21 |
Computer Realization of Human Music Cognition. Albright, Larry E. (Larry Eugene), 08 1900.
This study models the human process of music cognition on the digital computer. The working definition of music cognition is derived from the research of Carol Krumhansl and Edward Kessler and of Mari Jones, as well as from the music theories of Heinrich Schenker.
The computer implementation functions in three stages. First, it translates a musical "performance" in the form of MIDI (Musical Instrument Digital Interface) messages into LISP structures.
Second, the various parameters of the performance are examined separately, à la Jones's joint accent structure, quantified according to psychological findings, and adjusted to a common scale. The findings of Krumhansl and Kessler are used to evaluate the consonance of each note with respect to the key of the piece and with respect to the immediately sounding harmony.
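To make the first two stages concrete, the sketch below shows one way a raw MIDI note-on message could be decoded into an event record and scored for consonance against a Krumhansl-Kessler key profile. It is written in Python rather than the LISP of the original implementation; the data structures and function names are invented for illustration, and the rescaling to a common [0, 1] range is an assumption rather than the thesis's actual procedure. The profile values are those commonly reported for the Krumhansl-Kessler major-key probe-tone ratings.

```python
from __future__ import annotations
from dataclasses import dataclass

# Krumhansl-Kessler probe-tone profile for a major key (values as commonly
# reported in the literature), indexed by pitch class relative to the tonic.
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

@dataclass
class NoteEvent:
    """A single musical event decoded from a MIDI note-on message."""
    onset_ticks: int
    pitch: int     # MIDI note number, 0-127
    velocity: int  # MIDI velocity, 0-127

def parse_note_on(status: int, data1: int, data2: int, onset_ticks: int) -> NoteEvent | None:
    """Translate a raw MIDI note-on message (status byte 0x9n) into an event record."""
    if (status & 0xF0) == 0x90 and data2 > 0:
        return NoteEvent(onset_ticks=onset_ticks, pitch=data1, velocity=data2)
    return None  # not a sounding note-on (note-off or some other message)

def key_consonance(event: NoteEvent, tonic_pc: int) -> float:
    """Rate the note's fit with the prevailing major key, rescaled to [0, 1]."""
    rating = KK_MAJOR[(event.pitch - tonic_pc) % 12]
    lo, hi = min(KK_MAJOR), max(KK_MAJOR)
    return (rating - lo) / (hi - lo)

# Example: in C major, E4 (MIDI 64) rates higher than C#4 (MIDI 61).
e4 = parse_note_on(0x90, 64, 100, onset_ticks=480)
cs4 = parse_note_on(0x90, 61, 100, onset_ticks=960)
print(key_consonance(e4, tonic_pc=0), key_consonance(cs4, tonic_pc=0))
```

In a fuller model, analogous rescaled values for the other performance parameters would join this consonance rating to form the multidimensional points described next.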
This quantification yields a multidimensional set of points, each of which is a cognitive evaluation of a single musical event within the context of the piece in which it occurs. The set of points forms a metric space embedded in multidimensional Euclidean space. In the third stage, the set of points is mapped into a topology-preserving data structure for a Schenkerian-like middleground structural analysis.
This process yields a hierarchical stratification of all the musical events (notes) in a piece of music. It has been applied to several pieces of music with surprising results. In each case, the analysis obtained very closely resembles a structural analysis which would be supplied by a human theorist.
The results invite us to reconsider the representation of knowledge and perception from a different perspective, that of a set of points in a topological space, and to ask whether such a representation might be useful in other domains. They also lead us to ask whether such a representation might usefully complement more traditional rule-based representations by helping to eliminate unwanted levels of detail in a cognitive-perceptual system.
|
22 |
Dialogic Form, Harmonic Schemata, and Expressive Meaning in the Songs of Broadway. Tovar, Dale, 06 September 2017.
This thesis addresses the matter of convention in Broadway songs of the song-and-dance era. Composers worked with implicit, regular procedures within the commercial aesthetic of the 1920s and 1930s New York theater industry. However, discussions of formal convention in this repertoire have not gone much beyond the identification of AABA and ABAC forms. I explore how hypermeter and conventional formal layouts act as schemata, and through this lens I advocate for an in-time, listener-based approach to form that attends to stylistically learned projections and anticipations. I then unpack many of the conventional patterns underlying the ABAC form, arguing that it provides a template for climactic musical narratives, one that places climaxes near the end of the form. Lastly, I turn to the AABA form, highlighting many of its salient conventions and drawing historical connections to AABA forms in rock and jazz.
|
23 |
Fatores emocionais durante uma escuta musical afetam a percepção temporal de músicos e não-músicos? / Do emotional factors during music listening tasks affect time perception of musicians and nonmusicians? Ramos, Danilo, 17 September 2008.
RAMOS, Danilo. Do emotional factors during music listening tasks affect time perception of musicians and nonmusicians? 2008, 268 pages. Thesis (PhD). Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, 2008. This study aimed to verify the role of emotions triggered by music in the time perception of musicians and nonmusicians. Four experiments were conducted. In Experiment I, musicians and nonmusicians performed emotional association tasks for 36-second musical excerpts drawn from the Western classical repertoire. The task was to listen to each excerpt and associate it with one of the emotional categories Joy, Serenity, Sadness, Fear, or Anger. The results showed that most excerpts triggered a single, specific emotion in listeners; moreover, the emotional associations of musicians were similar to those of nonmusicians for most of the excerpts presented. In Experiment II, musicians and nonmusicians performed temporal association tasks for the excerpts most representative of each emotion in Experiment I: each excerpt was presented and participants had to associate it with a duration of 16, 18, 20, 22, or 24 seconds. The results showed that, for the Musicians group, the three excerpts associated with Sadness were underestimated relative to their real durations; no other emotional category had more than one excerpt under- or overestimated relative to its real duration, in either group. Recent research in the psychology of music has identified two structural properties as modulators of the specific emotions perceived during music listening: mode (the organization of the notes within a musical scale) and tempo (the number of beats per minute). Accordingly, in Experiment III, musicians and nonmusicians carried out emotional association tasks with musical compositions constructed in seven modes (Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian) and three tempi (adagio, moderato, and presto), following the same procedure as Experiment I. The results showed that mode modulated the affective valence triggered by the excerpts: excerpts in major modes received positive valence ratings and excerpts in minor modes received negative valence ratings; moreover, tempo modulated the arousal triggered by the excerpts: the faster the tempo, the higher the arousal levels, and vice versa, for both groups. In Experiment IV, musicians and nonmusicians performed temporal association tasks for the modal excerpts used in Experiment III, following the same procedure as Experiment II. The results showed that manipulations, mainly of arousal, affected listeners' time perception: both groups underestimated the durations of low-arousal excerpts, and the Nonmusicians group additionally overestimated the durations of high-arousal excerpts.
These results show that, for musicians, time perception was affected by emotional atmospheres related to Sadness; for nonmusicians, it was affected by factors related to the arousal level of the musical events heard.
|
24 |
Embodied musical experiences in early childhood. Almeida, Ana Paula Ramos da Rocha, January 2015.
Embodied Music Cognition is a recently developed theoretical and empirical framework which, over the last eight years, has been redefining the role of the body in music perception. However, to date there have been very few attempts to research embodied musical experiences in early childhood. The research reported in this thesis investigated 4- and 5-year-olds’ self-regulatory sensorimotor processes in response to music. Two video-based observation studies were conducted. The first, exploratory in nature, aimed to identify levels of musical self-regulation in children’s actions while ‘playing’ in a motion-based interactive environment (Sound=Space). The interactive element of this system provided an experiential platform for the young ‘players’ to explore and develop the ability to recognise themselves as controlling musical events, and to continuously adapt their behaviour according to expected auditory outcomes. Results showed that low-level experiences of musical self-regulation were associated with more random trajectories in space, often performed at a faster pace (e.g. running), while a higher degree of control corresponded to more organised spatial pathways, usually involving slower actions and repetition. The second study focused on sensorimotor synchronisation. It aimed to identify children’s free and individual movement choices in response to rhythmic music with a salient, steady beat presented at different tempi, and to find the similarities and differences in participants’ movement repertoires and in their adjustments to tempo changes. The most prominent findings indicate that children’s movements exhibited a resilient periodicity that was not synchronised to the beat. Even though a great variety of body actions (mostly non-gestural) was found across the group, each child tended to use a more restricted repertoire and one specific dominant action that would be executed throughout the different tempi. Common features were also found in children’s performances, such as the spatial preference for up/down directions and for movements performed in place (e.g. vertical jumps). The results of both studies highlight the great deal of variability in the way preschoolers regulate their own sensorimotor behaviour when interacting with music. This variety of responses can be interpreted as underlining the importance of the physical nature of the cognitive agent in the perception of music. If this is indeed the case, then it will be crucial to create and develop embodied music learning activities in early years education that encourage each child to self-monitor their own sensorimotor processes and, thus, to shape their experiences of linking sound and movement in a meaningful and fulfilling way.
|
25 |
Advances in multiple viewpoint systems and applications in modelling higher order musical structure. Hedges, Thomas, January 2017.
Statistical approaches are capable of underpinning strong models of musical structure, perception, and cognition. Multiple viewpoint systems are probabilistic models of sequential prediction that aim to capture the multidimensional aspects of a symbolic domain, with predictions from multiple finite-context models combined in an information-theoretically informed way. Information theory provides an important grounding for such models. In computational terms, information content is an empirical measure of compressibility for model evaluation, and entropy a powerful weighting system for combining predictions from multiple models. In perceptual terms, clear parallels can be drawn between information content and surprise, and between entropy and certainty. In cognitive terms, information theory underpins explanatory models of both musical representation and expectation. The thesis makes two broad contributions to the field of statistical modelling of music cognition: firstly, advancing the general understanding of multiple viewpoint systems, and, secondly, developing bottom-up statistical learning methods capable of capturing higher order structure. In the first category, novel methods for predicting multiple basic attributes are empirically tested, significantly outperforming established methods and refuting the assumption found in the literature that basic attributes are statistically independent of one another. Additionally, novel techniques for improving the prediction of derived viewpoints (viewpoints that abstract information away from whatever musical surface is under consideration) are introduced and analysed, and their relation to cognitive representations explored. Finally, the performance and suitability of an established algorithm that automatically constructs locally optimal multiple viewpoint systems is tested. In the second category, the current research brings together a number of existing statistical methods for segmenting and modelling musical surfaces with the aim of representing higher-order structure. A comprehensive review and empirical evaluation of these information-theoretic segmentation methods is presented. Methods for labelling higher-order segments, akin to layers of abstraction in a representation, are empirically evaluated and their cognitive implications explored. The architecture and performance of the models are assessed from cognitive and musicological perspectives.
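As a concrete illustration of the information-theoretic machinery this abstract refers to, the sketch below computes the information content of an event under a predictive distribution and combines the predictions of two viewpoint models with an entropy-based weighting. It is a minimal sketch of one common weighted-geometric-mean combination scheme, not the thesis's own implementation; the toy distributions, the bias parameter b, and the function names are all illustrative assumptions.

```python
import math

def information_content(dist: dict, event: str) -> float:
    """Information content (surprise) of an event, in bits: -log2 p(event)."""
    return -math.log2(dist[event])

def entropy(dist: dict) -> float:
    """Shannon entropy of a distribution, in bits: a measure of the model's uncertainty."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def combine(distributions: list, b: float = 1.0) -> dict:
    """Combine viewpoint predictions with an entropy-weighted geometric mean.

    Lower-entropy (more certain) models receive higher weight; b controls how
    strongly certainty is rewarded. One common scheme, shown only as an illustration.
    """
    alphabet = set().union(*(d.keys() for d in distributions))
    max_h = math.log2(len(alphabet))  # entropy of a uniform distribution over the alphabet
    weights = [(max_h / max(entropy(d), 1e-12)) ** b for d in distributions]
    combined = {e: math.prod(d.get(e, 1e-12) ** w for d, w in zip(distributions, weights))
                for e in alphabet}
    total = sum(combined.values())
    return {e: p / total for e, p in combined.items()}  # renormalise

# Hypothetical next-note predictions from two viewpoint models over a tiny alphabet.
pitch_model = {"C4": 0.70, "D4": 0.20, "E4": 0.10}
interval_model = {"C4": 0.34, "D4": 0.33, "E4": 0.33}
merged = combine([pitch_model, interval_model], b=2.0)
print(merged)
print("surprise of E4:", information_content(merged, "E4"), "bits")
```

Because the interval model is less certain (higher entropy) than the pitch model, it receives a lower weight here; averaging information content over a corpus in this way gives the empirical measure of compressibility used for model evaluation.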
|
26 |
O processamento de informação rítmica em pessoas com ouvido absoluto [Rhythmic information processing in people with absolute pitch; English title not supplied by the author]. Rodrigues, Fabrizio Veloso, 28 June 2017.
Absolute pitch is described as the ability to name or produce musical notes without an external reference. Recent studies suggest faster processing of linguistic information in people with this ability. Rhythmic content is known to be an essential element of linguistic processing; however, it is not known whether people with absolute pitch process rhythmic information differently. The objective of this work was to verify possible differences between possessors and non-possessors of absolute pitch in the processing of rhythmic patterns in sound stimuli. Sixteen participants, 8 with absolute pitch and 8 without the ability, underwent a psychophysical task in which they were asked to reproduce, as accurately as possible, rhythmic sequences presented to them. As performance criteria, we considered the number of intervals produced and the evolution of temporal accuracy over the course of the task. No significant differences were observed between the two groups. The results suggest that the processing of rhythmic information does not significantly involve neural processes present only in people with absolute pitch.
|
27 |
Why We Sing Along: Measurable Traits of Successful Congregational Songs. Read, Daniel, 01 January 2017.
Songwriters have been creating music for the church for hundreds of years. The songs have gone through many stylistic changes from generation to generation, yet each song has generated congregational participation. What measurable, traceable qualities of congregational songs persist from one generation to the next?
This document explores the history and development of Congregational Christian Song (CCS) to discover and document the similarities between seemingly contrasting styles of music. The songs analyzed in this study were chosen because of their wide popularity and broad dissemination among non-denominational churches in the United States. While not an exhaustive study, this paper reviews over 200 songs spanning 300 years of CCS. The study finds that songs proven successful in eliciting participation all contain five common elements. These elements encourage congregations to participate in singing when an anticipation cue is triggered and then realized. The anticipation/reward theory used in this study is based on David Huron's ITPRA (Imagination-Tension-Prediction-Reaction-Appraisal) Theory of Expectation.
This thesis is designed to help songwriters and music theorists quickly identify whether a CCS can be measured as successful (i.e., predictable).
|
28 |
Between the Lines. Office of the Vice President Research, 06 1900.
Nancy Hermiston is examining the links between music cognition and improved learning and development through one of the most complicated art forms.
|
29 |
An exploration of the cerebral lateralisation of musical function. Wilson, Sarah-Jane, January 1996.
The aim of the thesis was to conduct a detailed examination of the evidence pertaining to the cerebral lateralisation of musical function. Theoretical models from neuropsychology and cognitive psychology were employed, with emphasis placed on how the models interrelate, in order to gain a more coherent account of music cognition and its relationship to cerebral lateralisation. (For the complete abstract, open the document.)
|