1 |
Rhyme, Rhythm, and Rhubarb: Using Probabilistic Methods to Analyze Hip Hop, Poetry, and Misheard Lyrics. Hirjee, Hussein. January 2010.
While text Information Retrieval applications often focus on extracting semantic features to identify the topic of a document, and Music Information Research tends to deal with melodic, timbral, or meta-tagged data of songs, useful information can be gained from surface-level features of musical texts as well. This is especially true for texts such as song lyrics and poetry, in which the sound and structure of the words are important. These types of lyrical verse usually contain regular and repetitive patterns, like the rhymes in rap lyrics or the meter in metrical poetry. The existence of such patterns is not always categorical, as there may be a degree to which they appear or apply in any sample of text. For example, rhymes in hip hop are often imperfect and vary in the degree to which their constituent parts differ. Although a definitive decision as to the existence of any such feature cannot always be made, large corpora of known examples can be used to train probabilistic models estimating the likelihood of their appearance. In this thesis, we apply likelihood-based methods to identify and characterize patterns in lyrical verse. We use a probabilistic model of mishearing in music to resolve misheard lyric search queries. We then apply a probabilistic model of rhyme to detect imperfect and internal rhymes in rap lyrics and quantitatively characterize rappers' styles in their use. Finally, we compute likelihoods of prosodic stress in words to perform automated scansion of poetry and compare poets' usage of and adherence to meter. In these applications, we find that likelihood-based methods outperform simpler, rule-based models at finding and quantifying lyrical features in text.
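The likelihood-based idea described in this abstract can be sketched as a log-likelihood-ratio score over aligned phoneme pairs: pairings frequently observed in known rhymes score positively against a chance baseline, while unlikely pairings score negatively. The probability tables, ARPAbet-style symbols, and smoothing constants below are invented for illustration; this is not the thesis's actual trained model.

```python
from math import log

# Made-up probabilities of phoneme pairs co-occurring in known rhymes
# versus pairing up by chance. A trained model would estimate these
# from a large corpus of annotated rhymes.
P_RHYME = {("IY", "IY"): 0.10, ("IY", "IH"): 0.03, ("IY", "AA"): 0.001}
P_CHANCE = {("IY", "IY"): 0.01, ("IY", "IH"): 0.01, ("IY", "AA"): 0.01}

def rhyme_score(phones_a, phones_b):
    """Sum log-likelihood ratios over aligned phoneme pairs.

    Positive scores suggest a rhyme (even an imperfect one);
    negative scores suggest a chance pairing.
    """
    score = 0.0
    for pair in zip(phones_a, phones_b):
        p_rhyme = P_RHYME.get(pair, 1e-4)    # unseen pairs get small smoothed mass
        p_chance = P_CHANCE.get(pair, 1e-2)
        score += log(p_rhyme / p_chance)
    return score
```

Because the score is a sum of ratios rather than a yes/no rule, imperfect rhymes such as ("IY", "IH") still contribute positive evidence, which is what lets this family of models grade rhymes by degree instead of forcing a categorical decision.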
2 |
Spelling Normalization of English Student Writings. Hong, Yuchan. January 2018.
Spelling normalization is the task of normalizing non-standard words in a text into standard words. It reduces the number of out-of-vocabulary (OOV) words in texts, improving the performance of downstream natural language processing (NLP) tasks such as information retrieval, machine translation, and opinion mining. In this thesis, we explore several methods for spelling normalization of English student writings: traditional Levenshtein edit distance comparison, phonetic similarity comparison, character-based Statistical Machine Translation (SMT), and character-based Neural Machine Translation (NMT). Our main contribution is an approach that combines Levenshtein edit distance and phonetic similarity with added frequency-count and compound-splitting components; it is evaluated as the best approach, with a 0.329% accuracy improvement and a 63.63% error reduction over the original unnormalized test set.
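The Levenshtein and frequency-count components mentioned above can be sketched as follows: compute the edit distance from a misspelled word to each lexicon entry, and break ties by corpus frequency. The toy lexicon, its frequencies, and the distance threshold are invented for illustration; the thesis's actual pipeline also incorporates phonetic similarity and compound splitting, which are omitted here.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)  # substitution
                           ))
        prev = cur
    return prev[-1]

# Toy lexicon with made-up corpus frequencies; ties in edit distance
# are broken in favor of the more frequent word.
LEXICON = {"because": 900, "become": 400, "becalm": 5}

def normalize(word, max_dist=2):
    """Return the closest lexicon word within max_dist edits, else the word itself."""
    dist, _, best = min((levenshtein(word, w), -freq, w)
                        for w, freq in LEXICON.items())
    return best if dist <= max_dist else word
```

For example, `normalize("becuase")` returns `"because"` (two substitutions), while a string far from every lexicon entry is left unchanged, which keeps the normalizer from overcorrecting genuinely novel words.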