Procedural generation algorithms can infer rules from a dataset of examples when each example is made up of labeled components. Unfortunately, musical sequences resist inclusion in such datasets because they lack explicit structural semantics. To algorithmically transform a musical sequence into a sequence of labeled components, a segmentation process is needed. We outline a solution to the challenge of musical phrase segmentation that uses grammatical induction algorithms, a class of algorithms that infer a context-free grammar from an input sequence. We study five grammatical induction algorithms on three datasets, one of which is introduced in this work. Additionally, we test how the performance of each algorithm varies when musical sequences are transformed using viewpoint combinations. Our experiments show that the algorithm longestFirst achieves the best F1 scores across all three datasets, and that viewpoint combinations which include the duration viewpoint result in the best performance.
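To make the idea of grammatical induction concrete, the sketch below shows a minimal digram-substitution scheme in the spirit of algorithms like Sequitur: the most frequent adjacent pair of symbols is repeatedly replaced by a new non-terminal, and the resulting rule boundaries can serve as candidate phrase boundaries. This is an illustrative toy, not the longestFirst algorithm or any of the specific algorithms evaluated in the thesis; the function and rule names are invented for this example.

```python
from collections import Counter

def induce_grammar(sequence):
    """Infer a simple context-free grammar from a symbol sequence by
    repeatedly replacing the most frequent adjacent pair with a fresh
    non-terminal (R0, R1, ...). Returns the compressed top-level
    sequence and the rule dictionary."""
    rules = {}
    seq = list(sequence)
    next_id = 0
    while True:
        # Count every adjacent pair in the current sequence.
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats; the grammar is done
        nonterminal = f"R{next_id}"
        next_id += 1
        rules[nonterminal] = pair
        # Rewrite the sequence left to right, replacing each
        # occurrence of the chosen pair with the new non-terminal.
        rewritten, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                rewritten.append(nonterminal)
                i += 2
            else:
                rewritten.append(seq[i])
                i += 1
        seq = rewritten
    return seq, rules

# A melody encoded as a symbol sequence compresses into nested rules
# whose spans suggest a hierarchical segmentation.
top, grammar = induce_grammar("ababab")
# top == ['R1', 'R0'], grammar == {'R0': ('a', 'b'), 'R1': ('R0', 'R0')}
```

In a musical setting the input symbols would not be raw notes but viewpoint tuples (for example, a pitch-interval paired with a duration), so that repetitions are detected at the level of the chosen viewpoint combination rather than absolute pitch.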
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10435 |
Date | 06 April 2022 |
Creators | Perkins, Reed James |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | https://lib.byu.edu/about/copyright/ |