1

The role of music in lyric analysis: the effect of music on participants' emotional changes, perceived disclosure levels and group impression

Fujioka, Yuka. Standley, Jayne M. January 2005 (has links)
Thesis (M.M.)--Florida State University, 2005. / Advisor: Jayne M. Standley, Florida State University, College of Music. Title and description from dissertation home page (viewed 6-25-07). Document formatted into pages; contains 65 pages. Includes biographical sketch. Includes bibliographical references.
2

The relationship between music and emotion, as conveyed by prosody, in individuals with Williams Syndrome

Abbey-Warn, Bonnie. January 2006 (has links) (PDF)
Undergraduate honors paper--Mount Holyoke College, 2006. Dept. of Psychology and Education. / Text also included on accompanying CD-ROM. Includes bibliographical references (leaves 86-89).
3

Ritual perspectives: an investigation into the epistemology of performance

Woods, Belinda Jane. January 2010 (has links)
Thesis (MMusPerf)--University of Melbourne, The Faculty of the Victorian College of the Arts and Music, 2010. / Typescript. Includes bibliographical references (p. 37-40).
4

Infants' perception of emotions in music and social cognition

January 2012 (has links)
Previous research has shown that infants can interpret others' behaviour in terms of their intentions, desires, and beliefs. The main aim of this study was to examine whether infants can understand the emotions conveyed by music, whether they can use emotional information to predict another person's behaviour, and how these two abilities are related. In Task 1 we used the looking-time paradigm of Phillips et al. (2002) to test whether infants can infer an agent's action from her emotional expression (facial expression and speech). Infants watched two kinds of events: (1) the experimenter spoke to object A with a smile and then grasped that object (consistent event); (2) the experimenter spoke to object A with a smile but then grasped the other object (inconsistent event). Because the experimenter's facial expression did not match her action in the inconsistent event, infants who understand the relation between emotion and action should be more surprised by that event and look at it longer. In the other task we used an intermodal matching paradigm to examine whether infants understand the emotions expressed by music. After happy music was played, infants again watched two kinds of events: (1) the experimenter on the screen spoke with a happy facial expression (consistent event); (2) the experimenter spoke with a sad facial expression (inconsistent event). Because the emotion conveyed by the music did not match the experimenter's facial expression in the inconsistent event, infants who understand the emotion in the music should look longer at that event. In addition, given the close link between infants' language ability and their understanding of others' actions and thoughts, parents were asked to complete the Chinese Communicative Development Inventory to assess infants' language and communication skills. Thirty-five eighteen-month-old infants (mean age eighteen months and four days) took part. The results showed that infants recognized that (1) when the experimenter smiled at an object she would grasp that object, and (2) the experimenter smiled when she heard happy music and frowned when she heard sad music. Contrary to our hypothesis, performance on the two tasks was not positively related. Because we believe an order effect influenced this result, we suggest modifications to how infants' understanding of emotions in music, and of the relation between emotion and action, is measured. Overall, by extending from motor and visual experience to auditory experience, and from the attribution of intentions and beliefs to the attribution of emotions, this study can contribute substantially to our understanding of the relation between first-person experience and the interpretation of others' behaviour and mental states. / Prior studies demonstrated infants' precocious mentalistic reasoning of attributing others' behaviours to intentions, desires and beliefs. However, fewer studies looked at infants' interpretation of behaviours in terms of agents' emotional expressions. The present study examined the relationship between infants' perception of emotions in music and their understanding of behaviours as motivated by emotional states. In Task 1, we adapted Phillips et al.'s (2002) looking-time paradigm to assess infants' use of emotional information to predict an agent's action. Infants were shown an actress directing positive emotional-visual regard towards one object and subsequently grasping the same object (consistent event) or the other one (inconsistent event). If infants appreciated the connection between the actress's affect and her action, they should show a greater novelty response to inconsistent events, in which the actress's expressed emotion contradicted the expected action. In Task 2, an intermodal matching paradigm was used to test whether infants are sensitive to emotions conveyed in music. We exposed infants to happy or sad music and later showed them an actress portraying either happy or sad dynamic facial expressions on a monitor. If they could discern the emotions embedded in the musical excerpts, they should look longer when the actress's posed emotion was inconsistent with the emotion represented in the music. Parental report of language skills, as measured by the MacArthur-Bates Communicative Development Inventories, was also obtained to partial out the effect of language ability on psychological reasoning. Results from 35 18-month-olds (M = 18 months 4 days) revealed that as a group (a) they recognized that the actress tended to grasp the object she had previously regarded positively, and (b) they appreciated that the actress tended to show a happy face upon hearing positive music excerpts whereas a sad facial expression was displayed when listening to sad music. Contrary to our hypothesis, we failed to find a positive correlation between these two conceptual understandings. We speculate that the result was obscured by order effects, and suggestions are proposed to ameliorate the measurement of infants' looking preferences as reflecting their conceptual understanding. 
Despite the null result, the current study is potentially significant in corroborating the role of first-person experience in social cognition by extending from motor and visual experience to auditory experience on the one hand, as well as from intention and belief attribution to emotion attribution on the other. / Detailed summary in vernacular field only. / Siu, Tik Sze Carrey. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 62-84). / Abstracts also in Chinese. / Introduction --- p.11 / Infants’ Early Psychological Reasoning --- p.11 / Self-experience as a Mechanism underlying Infants’ Psychological Reasoning --- p.14 / Infants’ Attribution of Behaviours to Emotions --- p.17 / Music as a Language of Emotions --- p.23 / The Perception of Emotions in Music among Young Children and Infants --- p.25 / The Hypothesis of the Present Study --- p.29 / Method --- p.32 / Participants --- p.32 / Apparatus and Materials --- p.33 / Chapter Task 1 --- : Ability to predict agent’s action based on expressed emotion --- p.33 / Chapter Task 2 --- : Ability to decode emotions in music --- p.33 / Musical stimuli --- p.33 / Facial expressions --- p.34 / The MacArthur-Bates Communicative Development Inventories --- p.35 / Procedure --- p.36 / Chapter Task 1 --- : Ability to predict agent’s action based on expressed emotion --- p.36 / Chapter Task 2 --- : Ability to decode emotions in music --- p.38 / Reliability Coding --- p.39 / Results --- p.41 / Chapter Task 1 --- : Ability to Predict Agent’s Action Based upon Expressed Emotion --- p.41 / Familiarization --- p.41 / Test events --- p.41 / Chapter Task 2 --- : Ability to Decode Emotions in Music --- p.42 / 42 / Test events --- p.42 / The Link between Task 1 and Task 2 --- p.43 / Discussion --- p.46 / Limitations and Further Research --- p.53 / Significance and Implications --- p.59 / Conclusion --- p.60 / References --- p.62 / Appendices --- p.85 / Table --- p.85 / Figures --- p.86
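The looking-time logic described above (longer looking at inconsistent than at consistent events taken as evidence of understanding) is conventionally quantified as a per-infant difference score tested against zero, and the hypothesized link between the two tasks as a correlation between those scores. This record does not spell out the thesis's statistical procedure, so the following is only an assumed sketch of that standard analysis; the data, variable names, one-sample test, and Pearson correlation are all assumptions, not taken from the thesis.

```python
# Assumed sketch of a standard looking-time analysis (not the thesis's actual code):
# per-infant difference scores (inconsistent minus consistent looking time) tested
# against zero, plus a correlation between the two tasks as hypothesized in the abstract.
import numpy as np
from scipy import stats

# Hypothetical looking times in seconds, one row per infant, columns = [consistent, inconsistent].
task1 = np.array([[6.2, 9.1], [5.8, 7.4], [7.0, 7.9], [4.9, 8.2]])
task2 = np.array([[5.5, 8.3], [6.1, 6.0], [4.8, 7.7], [6.6, 9.0]])

def preference_scores(looking):
    """Inconsistent-minus-consistent difference score for each infant."""
    return looking[:, 1] - looking[:, 0]

d1, d2 = preference_scores(task1), preference_scores(task2)

# Do infants look reliably longer at inconsistent events in each task?
print("Task 1:", stats.ttest_1samp(d1, 0.0))
print("Task 2:", stats.ttest_1samp(d2, 0.0))

# Is understanding in the two tasks related (the correlation the study failed to find)?
print("Task 1 vs Task 2:", stats.pearsonr(d1, d2))
```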
5

Emotion-based music retrieval and recommendation

Deng, Jie 01 August 2014 (has links)
The digital music industry has expanded dramatically during the past decades, resulting in enormous amounts of music data. With the Internet, a growing volume of quantitative data about users (e.g., their behaviors and preferences) can now be easily collected. All these factors have the potential to produce big data in the music industry. By applying big data analysis to music-related data, music can be better understood semantically (e.g., genres and emotions), and users' high-level needs such as automatic recognition and annotation can be satisfied. For example, commercial music companies such as Pandora, Spotify, and Last.fm have already used big data and machine learning techniques to drastically alter music search and discovery. According to theories in musicology and psychology, music reflects our heart and soul, and emotion is the core component of music that expresses complex and conscious experience; however, research in this area remains insufficient. Because of the impact of emotion conveyed by music, retrieving and discovering useful music information at the emotion level from big music data is extremely important. Over the past decades researchers have made great strides in automated systems for music retrieval and recommendation. Music is a temporal art involving specific emotional expression, and while it is easy for human beings to recognize emotions expressed by music, it remains a challenge for automated systems. Although several influential emotion models (e.g., Hevner's adjective circle, the Arousal-Valence model, and the Pleasure-Arousal-Dominance model), built on discrete and dimensional emotion theories, have been widely adopted in emotion research, they still suffer from limitations in scalability and specificity in the music domain. As a result, the effectiveness and availability of music retrieval and recommendation at the emotion level are still unsatisfactory. This thesis makes contributions at the theoretical, technical, and empirical levels. First, a hybrid musical emotion model named "Resonance-Arousal-Valence (RAV)" is proposed and constructed; it explores computational and time-varying expressions of musical emotions. Building on the RAV model, a joint emotion space model (JESM) that combines musical audio features and emotion-tag features is constructed. Second, corresponding to static and time-varying representations of musical emotion, two methods of music retrieval at the emotion level are designed: (1) a unified framework for music retrieval in the joint emotion space, and (2) dynamic time warping (DTW) for music retrieval using time-varying music emotions. Automatic music emotion annotation and segmentation follow naturally. Third, following the theory of affective computing (e.g., emotion intensity decay and emotion state transition), an intelligent affective system for music recommendation is designed, in which conditional random fields (CRF) are applied to predict the listener's dynamic emotion state from his or her personal music listening history within a session. Finally, the experimental dataset is created and the proposed systems are implemented.
Empirical results on recognition, retrieval, and recommendation accuracy, compared with previous techniques, are also presented, demonstrating that the proposed methods improve the effectiveness of emotion-based music retrieval and recommendation. / Keywords: Music and emotion, Music information retrieval, Music emotion recognition, Annotation and retrieval, Music recommendation, Affective computing, Time series analysis, Acoustic features, Ranking, Multi-objective optimization
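The abstract names dynamic time warping (DTW) as the mechanism for retrieval over time-varying musical emotions. The sketch below only illustrates the general idea of ranking a catalogue by DTW distance between arousal-valence trajectories; it is not the thesis's implementation, and the trajectory data, catalogue, and function names are assumptions.

```python
# Illustrative sketch (not the thesis's implementation): rank songs by DTW distance
# between time-varying emotion trajectories, each a sequence of (arousal, valence) points.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping over 2-D emotion trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])      # local distance between frames
            cost[i, j] = d + min(cost[i - 1, j],          # insertion
                                 cost[i, j - 1],          # deletion
                                 cost[i - 1, j - 1])      # match
    return cost[n, m]

def retrieve(query, catalogue, k=3):
    """Return the k catalogue items whose emotion trajectories are closest to the query."""
    ranked = sorted(catalogue.items(), key=lambda kv: dtw_distance(query, kv[1]))
    return ranked[:k]

# Hypothetical per-second (arousal, valence) trajectories scaled to [-1, 1].
catalogue = {
    "song_a": np.array([[0.2, 0.5], [0.4, 0.6], [0.6, 0.7]]),
    "song_b": np.array([[-0.5, -0.3], [-0.4, -0.4], [-0.2, -0.5]]),
}
query = np.array([[0.1, 0.4], [0.5, 0.6]])
print([name for name, _ in retrieve(query, catalogue, k=2)])
```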
6

Can Frontal Alpha Asymmetry Predict the Perception of Emotions in Music?

Rischer, Katharina January 2016 (has links)
Resting frontal alpha asymmetry was measured with an electroencephalogram in 28 volunteers to predict the evaluation of emotions in music. Sixteen music excerpts expressing happiness, sadness, anger or fear were rated by the participants with regard to conveyed mood, pleasantness and arousal. In addition, variables describing the participants' musical background were collected. The experiment started with the assessment of current mood, followed by the evaluation of the music excerpts, and finished with the assessment of the participants' approach and withdrawal behaviour. The results showed that each music excerpt was specific to its intended mood, except for music in the anger category, which also obtained high ratings for fear. These excerpts were also the only ones for which a difference in ratings between relatively more left-active and right-active participants was observed. Partly against expectations, left-dominant volunteers perceived excerpts in the anger category as expressing more fear and anger than did right-active participants. Results are interpreted within the behavioural inhibition and approach model of anterior brain asymmetry.
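Frontal alpha asymmetry is conventionally computed as the difference in log-transformed alpha-band (roughly 8-13 Hz) power between homologous right and left frontal electrodes (commonly F4 and F3), with larger values read as relatively greater left-frontal activity because alpha power is inversely related to cortical activation. The thesis's exact montage and processing pipeline are not given in this record, so the electrode names, sampling rate, and Welch settings in the sketch below are assumptions.

```python
# Minimal sketch of a conventional frontal alpha asymmetry (FAA) score:
# FAA = ln(alpha power at F4) - ln(alpha power at F3). Electrode names, sampling
# rate, and Welch settings are assumptions, not taken from the thesis.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density of `signal` within the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(f3, f4, fs):
    """ln(right) - ln(left) alpha power; positive values = relatively more left-frontal activity."""
    return np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))

# Hypothetical one-minute resting recordings at 250 Hz (noise stands in for real EEG).
fs = 250
rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal(60 * fs), rng.standard_normal(60 * fs)
print(frontal_alpha_asymmetry(f3, f4, fs))
```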
7

Emotion Recognition Education in Western Art Music Appreciation

Matsumoto, Akiko January 2021 (has links)
Because Western art music is harmonically more complex than popular music and because it is written with musical notation, it may be challenging for people with no music training (non-musicians), those who did not grow up with Western art music, or those who do not choose to listen to this type of music for enjoyment to understand and appreciate it. Furthermore, there is a prevalent belief that Western art music is for the wealthy and elderly, which may be preventing symphony orchestras from cultivating new audiences. This study aims to determine whether a narrative music listening activity would generate emotional response and cognitive engagement in a study group of non-Western art music listeners and prompt them to create musical narratives. Theoretically, narrative-form music listening may evoke episodic memories, which can be built up into stories. To test the effect of narrative music listening activities, an online survey was distributed to non-Western art music listeners in the 20 through 40 age range, and a pretest–treatment–posttest activity was devised and administered to three groups: an absolute music listening group, a programmatic music listening group, and a polyphonic texture listening group. In the treatment section, the creative listening activity, participants were prompted to create musical narratives, which could take the form of colors, shapes, dialogues, or explicit stories. Participants were then asked to write about the music they heard before and after the narrative music listening activity. Participants' motivation to attend a Western art music concert was assessed via a Likert-scale motivation measure. The results suggest that this online activity's multimodality is a promising method for enhancing the appreciation of Western art music.
8

The Perception of Emotions in Multimedia: An Empirical Test of Three Models of Conformance and Contest

Somsaman, Kritsachai 03 March 2004 (has links)
No description available.
9

Attentional and affective responses to complex musical rhythms

Unknown Date (has links)
I investigated how two types of rhythmic complexity, syncopation and tempo fluctuation, affect the neural and behavioral responses of listeners. The aim of Experiment 1 was to explore the role of attention in pulse and meter perception using complex rhythms. A selective attention paradigm was used in which participants attended either to a complex auditory rhythm or a visually presented list of words. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. Selective attention to rhythms led to increased BOLD (Blood Oxygen Level-Dependent) responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations show that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. In Experiment 2, the effect of tempo fluctuation in expressive music on emotional responding in musically experienced and inexperienced listeners was investigated. Participants listened to a skilled music performance, including natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses, and a mechanical performance of the same piece, which served as a control. Participants reported emotional responses on a 2-dimensional rating scale (arousal and valence) before and after fMRI scanning. During fMRI scanning, participants listened without reporting emotional responses. Tempo fluctuations predicted emotional arousal ratings for all listeners. / Expressive performance was associated with BOLD increases in limbic areas for all listeners and in limbic and reward-related areas for those with musical experience. Activity in the dorsal anterior cingulate, which may reflect temporal expectancy, was also dependent on the musical experience of the listener. Changes in tempo correlated with activity in a mirror neuron network in all listeners, and mirror neuron activity was associated with emotional arousal in experienced listeners. These results suggest that emotional responding to music occurs through an empathic motor resonance. / by Heather L. Chapin. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
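The claim that tempo fluctuations predicted arousal ratings is a regression-style relationship. The dissertation's actual model is not described in this record, so the following is only a loose, assumed illustration of regressing arousal ratings on a summary measure of tempo fluctuation, with invented data and variable names.

```python
# Loose illustration (not the dissertation's analysis pipeline): regress listeners'
# arousal ratings on a summary of each excerpt's tempo fluctuation. All data are invented.
import numpy as np
from scipy import stats

# Hypothetical per-excerpt tempo fluctuation (SD of inter-beat intervals, in seconds)
# and mean arousal ratings on a 1-9 scale.
tempo_fluctuation = np.array([0.02, 0.05, 0.08, 0.11, 0.15, 0.19])
arousal_rating = np.array([3.1, 3.8, 4.6, 5.2, 6.0, 6.4])

result = stats.linregress(tempo_fluctuation, arousal_rating)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.3f}")
```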
10

1/f structure of temporal fluctuation in rhythm performance and rhythmic coordination

Unknown Date (has links)
This dissertation investigated the nature of pulse in the tempo fluctuation of music performance and how people entrain with these performed musical rhythms. In Experiment 1, one skilled pianist performed four compositions with natural tempo fluctuation. The changes in tempo showed long-range correlation and fractal (1/f) scaling for all four performances. To determine whether the finding of 1/f structure would generalize to other pianists, musical styles, and performance practices, fractal analyses were conducted on a large database of piano performances in Experiment 3. Analyses revealed significant long-range serial correlations in 96% of the performances. Analysis showed that the degree of fractal structure depended on piece, suggesting that there is something in a composition's musical structure that causes pianists' tempo fluctuations to have a similar degree of fractal structure. Thus, musical tempo fluctuations exhibit long-range correlations and fractal scaling. To examine how people entrain to these temporal fluctuations, a series of behavioral experiments was conducted in which subjects were asked to tap the pulse (beat) to temporally fluctuating stimuli. The stimuli for Experiment 2 were musical performances from Experiment 1, with mechanical versions serving as controls. Subjects entrained to all stimuli at two metrical levels, and predicted the tempo fluctuations observed in Experiment 1. Fractal analyses showed that the fractal structure of the stimuli was reflected in the inter-tap intervals, suggesting a possible relationship between fractal tempo scaling, pulse perception, and entrainment. Experiments 4-7 investigated the extent to which people use long-range correlation and fractal scaling to predict tempo fluctuations in fluctuating rhythmic sequences. / Both natural and synthetic long-range correlations enabled prediction, as did shuffled versions that contained no long-term fluctuations. The fractal structure of the stimuli was again reflected in the inter-tap intervals, with persistence for the fractal stimuli and antipersistence for the shuffled stimuli. 1/f temporal structure is sufficient though not necessary for prediction of fluctuations in a stimulus with large temporal fluctuations. / by Summer K. Rankin. / Vita. / Thesis (Ph.D.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
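A common way to quantify the long-range correlation and 1/f scaling reported here is detrended fluctuation analysis (DFA), whose scaling exponent is about 0.5 for uncorrelated noise and about 1 for 1/f-like series. Whether the dissertation used DFA or spectral methods is not stated in this record, so the sketch below is only an assumed illustration of the technique applied to an inter-beat-interval series, not the dissertation's code.

```python
# Minimal detrended fluctuation analysis (DFA) sketch for an inter-beat-interval series.
# An exponent alpha near 1 indicates 1/f (long-range correlated) temporal fluctuation.
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the DFA scaling exponent of a 1-D series."""
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    fluct = []
    for s in scales:
        n_windows = len(y) // s
        rms = []
        for w in range(n_windows):
            segment = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, segment, 1), t)   # local linear detrend
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluct.append(np.mean(rms))
    # Slope of log F(s) versus log s is the scaling exponent alpha.
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

# White noise should give alpha ~ 0.5; persistent 1/f-like series give alpha ~ 1.
rng = np.random.default_rng(0)
print(dfa_alpha(rng.standard_normal(4096)))
```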
