At present, state-of-the-art deep learning music generation systems require substantial time and hardware resources to develop, which places them almost exclusively in the hands of large companies. Reducing these requirements calls for more efficient techniques and methods. This project investigates several such approaches by developing a music generation system based on generative adversarial networks and comparing how different techniques affect its performance. Our results illustrate the difficulties of generating music in a resource-constrained environment. We find that structuring the input space with conditional model constraints improves the system's ability to conform to musical conventions. The results also indicate the importance of a patch-based discriminator for evaluating the texture of the generated music. Finally, we propose a similarity loss as a way of reducing mode collapse in the generator, thereby stabilising the training process.
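The abstract does not spell out the exact form of the proposed similarity loss, so the following is only a minimal sketch of the general idea, assuming a formulation in the spirit of common diversity-promoting ("mode-seeking") penalties: two different latent codes should not map to nearly identical outputs. All names (`similarity_loss`, `toy_gen`) and the specific distance ratio used are illustrative assumptions, not the thesis' actual implementation.

```python
# Hypothetical sketch of a batch-wise similarity penalty added to the
# generator objective to discourage mode collapse. This is NOT the exact
# loss from the thesis, only one common way to realise the idea.
import torch


def similarity_loss(generator, z_dim, batch_size, device="cpu", eps=1e-8):
    """Penalise the generator when two different latent codes
    produce nearly identical outputs (a symptom of mode collapse)."""
    z1 = torch.randn(batch_size, z_dim, device=device)
    z2 = torch.randn(batch_size, z_dim, device=device)
    g1, g2 = generator(z1), generator(z2)

    # Mean absolute distance between the two generated batches
    # (averaged over all non-batch dimensions).
    out_dist = torch.mean(torch.abs(g1 - g2), dim=tuple(range(1, g1.dim())))
    # Corresponding distance between the latent codes.
    z_dist = torch.mean(torch.abs(z1 - z2), dim=1)

    # Minimising z_dist / out_dist pushes out_dist up: distinct latent
    # codes are encouraged to yield distinct outputs.
    return torch.mean(z_dist / (out_dist + eps))


# Illustrative usage with a toy generator; in practice this term would be
# added (with a weight) to the ordinary adversarial generator loss.
toy_gen = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.Tanh())
loss = similarity_loss(toy_gen, z_dim=16, batch_size=8)
print(loss.item())
```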
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hj-62106
Date | January 2023
Creators | Li, Yupeng, Linberg, Jonatan |
Publisher | Jönköping University, JTH, Avdelningen för datavetenskap |
Source Sets | DiVA Archive at Uppsala University
Language | Swedish |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |