About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A grunge philosophy, or how I came to speak a sub-cultural vocabulary negating social binaries

Lechuga, Anthony. January 2009
Senior Honors thesis--Regis University, Denver, Colo., 2009. / Title from PDF title page (viewed on May 12, 2009). Includes bibliographical references.
2

Harmony in pastoral care: music meeting pastoral care needs

Lister, James Kenneth. January 1995
Thesis (D. Min.)--Erskine Theological Seminary, 1995. / Abstract. Includes bibliographical references (leaves 145-151).
3

Latent Walking Techniques for Conditioning GAN-Generated Music

Eisenbeiser, Logan Ryan. 21 September 2020
Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Generating music is very difficult: components like long- and short-term structure unfold over time in ways that are difficult for neural networks to capture, and musical features like harmonies and chords, as well as timbre and instrumentation, require complex representations for a network to generate them accurately. Various techniques for both music representation and network architecture have been used in the past decade to address these challenges. The focus of this thesis extends beyond generating music to the challenge of controlling and/or conditioning that generation. Conditional generation involves an additional piece or pieces of information that are input to the generator and constrain aspects of the results. Conditioning can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored. This paper focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact of this conditioning on the generated music. / Master of Science / Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Beyond simply generating music lies the challenge of controlling or conditioning that generation. Conditional generation can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre.
Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored, especially for generative adversarial networks (GANs). This paper focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact and effectiveness of this conditioning on the generated music.
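The latent walking studied in this thesis can be illustrated with a minimal sketch: interpolating between two latent vectors yields a sequence of inputs whose generated outputs morph smoothly from one conditioning point to another. The four-dimensional vectors below are hypothetical; a real generator such as MuseGAN uses a much larger latent space.

```python
def latent_walk(z_start, z_end, steps):
    """Linearly interpolate between two latent vectors.

    Each intermediate vector can be fed to a generator to produce
    music that gradually morphs between the two endpoints.
    """
    path = []
    for i in range(steps):
        t = i / (steps - 1)  # interpolation weight in [0, 1]
        path.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return path

# Hypothetical 4-dimensional latent endpoints for illustration.
walk = latent_walk([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], steps=5)
```

Each of the five vectors in `walk` would be decoded separately, giving five pieces that shift steadily in whatever attribute the latent direction encodes.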
4

AI Drummer - Using Learning to Enhance Artificial Drummer Creativity

Thörn, Oscar. January 2020
This project explores the usability of Transformers for learning a model that can play the drums and accompany a human pianist. Building upon previous work using fuzzy logic systems, three experiments are devised to test the usability of Transformers. The report also includes a brief survey of algorithmic music generation. The result of the project is that, in their current form, Transformers cannot easily learn collaborative music generation. The key insight is that a new way to encode sequences is needed for collaboration between human and robot in the music domain. This encoding should be able to handle the varied demands and lengths of different musical instruments.
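The encoding problem named above can be made concrete with a small sketch: one simple, hypothetical way to encode a two-instrument collaboration is to merge both parts' timed events into a single tagged token stream that a sequence model could attend over. The token names and format are illustrative assumptions, not the thesis's encoding.

```python
def encode_duet(piano_events, drum_events):
    """Merge two instruments' (time, token) events into one ordered
    token sequence, tagging each token with its instrument so a
    Transformer-style model can attend across both parts."""
    tagged = [(t, "PIANO", tok) for t, tok in piano_events]
    tagged += [(t, "DRUM", tok) for t, tok in drum_events]
    tagged.sort(key=lambda e: (e[0], e[1]))  # order by time, then instrument
    return [f"{inst}:{tok}" for _, inst, tok in tagged]

# Hypothetical events: (beat, note-or-hit) pairs for each instrument.
tokens = encode_duet([(0, "C4"), (1, "E4")], [(0, "KICK"), (1, "SNARE")])
```

One token vocabulary covering both instruments lets a single model see the interaction, though as the abstract notes, instruments with very different densities and phrase lengths strain such flat encodings.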
5

Reviving Mozart with Intelligence Duplication

Galajda, Jacob E. 01 January 2021
Deep learning has been applied to many problems that are too complex to solve through an algorithm. Most of these problems have not required the specific expertise of a certain individual or group; most applied networks learn information that is shared across humans intuitively. Deep learning has encountered very few problems that would require the expertise of a certain individual or group to solve, and there has yet to be a defined class of networks capable of achieving this. Such networks could duplicate the intelligence of a person relative to a specific task, such as their writing style or music composition style. For this thesis research, we propose to investigate Artificial Intelligence in a new direction: Intelligence Duplication (ID). ID encapsulates neural networks that are capable of solving problems that require the intelligence of a specific person or collective group. This concept can be illustrated by learning the way a composer positions their musical segments, as in the Deep Composer neural network. This will allow the network to generate songs similar to the aforementioned artist's. One notable issue is the limited amount of training data available in some cases. For instance, it would be nearly impossible to duplicate the intelligence of a lesser-known artist or an artist who did not live long enough to produce many works. Generating many artificial segments in the artist's style will overcome these limitations. In recent years, Generative Adversarial Networks (GANs) have shown great promise in many related tasks. Generating artificial segments will give the network greater leverage in assembling works similar to the artist's, as there will be an increased overlap in data points within the hashed embedding. Additional review indicates that current Deep Segment Hash Learning (DSHL) network variations have potential to optimize this process.
Because there are fewer nodes in the input and output layers, DSHL networks do not need to compute nearly as much information as traditional networks. We indicate that a synthesis of DSHL and GAN networks will provide the framework necessary for future ID research. The contributions of this work will inspire a new wave of AI research that can be applied to many other ID problems.
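The hash-based retrieval that segment hashing enables can be sketched as follows, with hand-written 4-bit codes standing in for the learned binary embeddings a DSHL network would produce.

```python
from collections import defaultdict

def hamming(a, b):
    """Number of positions at which two equal-length codes differ."""
    return sum(x != y for x, y in zip(a, b))

class SegmentHashIndex:
    """Toy stand-in for a hashed segment embedding: segments are stored
    under short binary codes, and retrieval returns the bucket whose
    code is closest in Hamming distance. A real DSHL network would
    learn these codes; here they are supplied directly."""
    def __init__(self):
        self.buckets = defaultdict(list)

    def add(self, code, segment):
        self.buckets[code].append(segment)

    def nearest(self, code):
        best = min(self.buckets, key=lambda c: hamming(c, code))
        return self.buckets[best]

index = SegmentHashIndex()
index.add("0101", "segment-A")  # hypothetical 4-bit codes and segment names
index.add("1110", "segment-B")
```

Comparing short codes instead of full segment representations is what keeps the input and output layers small, as the abstract notes.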
6

Automatic Generation of Music for Inducing Emotive and Physiological Responses

Monteith, Kristine Perry. 13 August 2012
Music and emotion are two realms traditionally considered to be unique to human intelligence. This dissertation focuses on furthering artificial intelligence research, specifically in the area of computational creativity, by investigating methods of composing music that elicits desired emotional and physiological responses. It includes the following: an algorithm for generating original musical selections that effectively elicit targeted emotional and physiological responses; a description of some of the musical features that contribute to the conveyance of a given emotion or the elicitation of a given physiological response; and an account of how this algorithm can be used effectively in two different situations, the generation of soundtracks for fairy tales and the generation of melodic accompaniments for lyrics. This dissertation also presents research on more general machine learning topics, including a method of combining output from base classifiers in an ensemble that improves accuracy over a number of baseline strategies, a description of some of the problems inherent in the Bayesian model averaging strategy, and a novel algorithm for improving it.
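The ensemble combination mentioned above can be sketched as a weighted vote, one simple baseline among the strategies such work compares. The classifier weights and emotion labels below are hypothetical.

```python
from collections import Counter

def weighted_vote(predictions, weights):
    """Combine base-classifier outputs by weighted voting: each
    classifier contributes its weight to its predicted label, and the
    label with the highest total wins."""
    totals = Counter()
    for label, w in zip(predictions, weights):
        totals[label] += w
    return totals.most_common(1)[0][0]

# Three hypothetical emotion classifiers voting on a musical excerpt,
# weighted by (made-up) validation accuracy.
label = weighted_vote(["joy", "calm", "joy"], [0.5, 0.7, 0.4])
```

Here the two weaker classifiers agreeing on "joy" (0.9 total) outvote the single stronger "calm" vote (0.7), which is exactly the behavior that distinguishes weighted voting from simply trusting the best classifier.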
7

A Novel Approach to Extending Music Using Latent Diffusion

Roohparvar, Keon; Kurfess, Franz J. 01 June 2023
Using deep learning to synthetically generate music is a research domain that has gained more attention from the public in the past few years. A subproblem of music generation is music extension, or the task of taking existing music and extending it. This work proposes the Continuer Pipeline, a novel technique that uses deep learning to take music and extend it in 5 second increments. It does this by treating the musical generation process as an image generation problem; we utilize latent diffusion models (LDMs) to generate spectrograms, which are image representations of music. The Continuer Pipeline is able to receive a waveform as an input, and its output will be what the pipeline predicts the next five seconds might sound like. We trained the Continuer Pipeline using the expansive diffusion model functionality provided by the HuggingFace platform, and our dataset consisted of 256x256 spectrogram images representing 5-second snippets of various hip-hop songs from Spotify. The musical waveforms generated by the Continuer Pipeline are currently at a much lower quality compared to human-generated music, but we affirm that the Continuer Pipeline still has many uses in its current state, and we describe many avenues for future improvement to this technology.
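The incremental extension the Continuer Pipeline performs can be sketched as a loop. The placeholder predictor below stands in for the latent diffusion model, which in the real pipeline would render a spectrogram of the next five seconds and invert it back to audio; the numeric "waveform" is purely illustrative.

```python
def extend_track(waveform, predict_next, increments, chunk=5):
    """Extend a track in fixed-length increments: feed the current
    ending to a predictor and append whatever it returns."""
    track = list(waveform)
    for _ in range(increments):
        tail = track[-chunk:]            # the most recent chunk of audio
        track.extend(predict_next(tail)) # model's guess at the next chunk
    return track

# Placeholder predictor that simply echoes the tail; a real model would
# generate genuinely new audio conditioned on it.
extended = extend_track([1, 2, 3, 4, 5], lambda tail: tail, increments=2)
```

Because each increment is conditioned only on the previous chunk, quality drift can compound over many iterations, which is one reason the generated audio lags human-made music.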
8

A formal language theory approach to music generation

Schulze, Walter. March 2010
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2010. / We investigate the suitability of applying some of the probabilistic and automata-theoretic ideas that have been extremely successful in the areas of speech and natural language processing to the area of musical style imitation. By using music written in a certain style as training data, parameters are calculated for (visible and hidden) Markov models (of mixed, higher, or first order) in order to capture the musical style of the training data in terms of mathematical models. These models are then used to imitate two-instrument music in the trained style.
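The simplest of the models described, a first-order visible Markov model, can be sketched directly: transition counts are collected from a training sequence, then sampled to imitate its style. The toy note sequence is illustrative only.

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Collect first-order transitions between successive notes."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)  # duplicates preserve transition frequency
    return transitions

def imitate(transitions, start, length, seed=0):
    """Sample a new sequence in the trained style (seeded for repeatability)."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return out

# A tiny, made-up training melody.
model = train_markov(["C", "E", "G", "E", "C", "E", "G", "C"])
melody = imitate(model, "C", length=8)
```

Every generated step follows a transition seen in training, so the output stays within the style of the data, which is exactly the imitation property the thesis exploits at larger scale and with richer model orders.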
9

Automatic symbolic melody generation from lyrics

Xie, Yifan. 08 1900
Music generation is a popular task in the domain of music artificial intelligence, aiming at generating music automatically. Music generation includes both symbolic and acoustic music generation. The former focuses on the score level, while the latter emphasizes the audio signal level. This thesis focuses on one task of symbolic music generation: generating symbolic melodies from lyrics and attempting to solve several pre-existing issues in this field. Firstly, we address the problem of melody generation from lyrics for non-popular music, which has not been widely studied in the literature, in addition to the generation of popular music. We study the following two music types: popular music with English lyrics and traditional Chinese music with classical Chinese poetry. The former has been extensively researched, while the latter has seldom been explored. Secondly, to mitigate the challenge of insufficient modeling of the relationship between lyrics and melody in non-popular music, we utilize deep neural networks to learn from a larger paired dataset for generating melodies from classical Chinese poetry. This approach enhances the model's ability to understand the relationship between classical Chinese poetry and its associated melodies. Another motivation behind this endeavor stems from historical context: many classical Chinese poems could be sung in ancient times, but many associated melodies have been lost, leaving only the poetry itself. Given the assumption that the lost melodies share similar elements, such as styles and genres, with the preserved melodies, this thesis employs deep neural networks to model the remaining melodies and their corresponding poems, which may assist in restoring these lost melodies. Thirdly, prior research integrates human music rules to enhance performance, which has limitations in generalization and adaptability. 
To tackle this issue, we employ methods allowing the model to autonomously encode music theory information for melody generation. Specifically, part-of-speech embeddings and tone embeddings are incorporated into the model, improving the capture of relationships between prosodic boundaries in lyrics (applicable to both English and Chinese lyrics) and melody, as well as between the tone of Chinese characters and the pitch of the melody, without manually designed rules. Fourthly, to address the problem of generated melodies lacking stylistic features, we incorporate style constraints into the inference phase. This adjustment enables the model to grasp the global style features of music to some extent. After implementing these adaptations, both objective and subjective evaluations are conducted. Objective ablation studies confirm that each adaptation contributes to improving the model's fit to the data. Subjective evaluations corroborate that our model can generate high-quality melodies akin to real music.
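An inference-time style constraint of the kind described can be sketched as reweighting the model's next-note distribution by a global style profile and renormalizing, so sampling is nudged toward the target style without retraining. The blending rule, note names, and probabilities are illustrative assumptions, not the thesis's exact formulation.

```python
def apply_style_constraint(probs, style_prior, strength=0.5):
    """Blend a next-note distribution with a global style profile,
    then renormalize so the result is again a valid distribution."""
    blended = {n: (1 - strength) * p + strength * style_prior.get(n, 0.0)
               for n, p in probs.items()}
    total = sum(blended.values())
    return {n: p / total for n, p in blended.items()}

model_probs = {"C4": 0.5, "D4": 0.3, "E4": 0.2}       # hypothetical model output
style_profile = {"C4": 0.6, "D4": 0.1, "E4": 0.3}     # hypothetical style profile
constrained = apply_style_constraint(model_probs, style_profile)
```

The `strength` parameter trades fidelity to the model against fidelity to the style: at 0 the model's distribution is unchanged, at 1 only the style profile matters.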
10

Dynamic Procedural Music Generation from NPC Attributes

Washburn, Megan E. 01 March 2020
Procedural content generation for video games (PCGG) has seen a steep increase in the past decade, aiming to foster emergent gameplay as well as to address the challenge of producing large amounts of engaging content quickly. Most work in PCGG has been focused on generating art and assets such as levels, textures, and models, or on narrative design to generate storylines and progression paths. Given the difficulty of generating harmonically pleasing and interesting music, procedural music generation for games (PMGG) has not seen as much attention during this time. Music in video games is essential for establishing developers' intended mood and environment. Given the deficit of PMGG content, this paper aims to address the demand for high-quality PMGG. This paper describes the system developed to solve this problem, which generates thematic music for non-player characters (NPCs) in real time, based on developer-defined attributes, and responds to the dynamic relationship between the player and the target NPC. The system was evaluated by means of a user study: participants confronted four NPC bosses, each with its own uniquely generated dynamic track based on its attributes in relation to the player's. The survey gathered information on the perceived quality, dynamism, and helpfulness to gameplay of the generated music. Results showed that the generated music was generally pleasing and harmonious, and that while players could not detect the details of how, they were able to detect a general relationship between themselves and the NPCs as reflected by the music.
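The attribute-to-music mapping such a system performs can be sketched as a small function. The attribute names, thresholds, and parameter ranges below are hypothetical stand-ins, not the actual rules of the system described.

```python
def music_params(npc, player):
    """Map developer-defined NPC attributes to musical parameters,
    relative to the player's current state."""
    # Rough threat estimate: aggression scaled by relative health.
    threat = npc["aggression"] * (npc["health"] / max(player["health"], 1))
    return {
        "tempo_bpm": int(90 + 60 * min(npc["aggression"], 1.0)),  # fiercer NPCs get faster music
        "mode": "minor" if threat > 0.5 else "major",             # threatening encounters sound darker
        "note_density": round(0.3 + 0.7 * min(threat, 1.0), 2),   # more notes as danger rises
    }

params = music_params({"aggression": 0.8, "health": 100}, {"health": 80})
```

Because the mapping reads the player's state on every call, re-evaluating it during combat yields the dynamic, relationship-reflecting track the study's participants reported perceiving.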
