1

Intonation modelling for the Nguni languages

Govender, Natasha 19 October 2007 (has links)
Although the complexity of prosody is widely recognised, there is a lack of widely accepted descriptive standards for prosodic phenomena. This situation has become particularly noticeable with the development of increasingly capable text-to-speech (TTS) systems, which require detailed prosodic models to sound natural. For the languages of Southern Africa, the deficiencies in our modelling capabilities are acute. Little quantitative work has been published for the languages of the Nguni family (such as isiZulu and isiXhosa), and there are significant contradictions and imprecisions in the literature on this topic. We have therefore embarked on a programme aimed at understanding the relationship between linguistic and physical variables of a prosodic nature in this family of languages. We then use the knowledge gathered to build intonation models for isiZulu and isiXhosa as representatives of the Nguni languages. Firstly, we need to extract physical measurements from voice recordings of the Nguni languages. A number of pitch tracking algorithms have been developed; however, to our knowledge, these algorithms have not been evaluated formally on a Nguni language. In order to decide on an appropriate algorithm for further analysis, we evaluated two state-of-the-art algorithms, namely the Praat pitch tracker and YIN (developed by Alain de Cheveigné). Praat’s pitch tracker performs somewhat better than YIN in terms of gross and fine errors, and we use this algorithm for the rest of our analysis. For South African languages, the task of building an intonation model is complicated by the lack of available intonation resources. We describe the methodology used for developing a general-purpose intonation corpus and the various methods implemented to extract relevant features such as fundamental frequency, intensity and duration from the spoken utterances of these languages.
In order to understand how the ‘expected’ intonation relates to the measured characteristics, we developed two different statistical approaches to building intonation models for isiZulu and isiXhosa. The first is based on straightforward statistical techniques and the second uses a classifier. Both intonation models achieve fairly good accuracy on our isiZulu and isiXhosa data sets. The neural network classifier produces slightly better results for both data sets than the statistical method. The classification model is also more robust and can easily learn from the training data. We show that it is possible to build fairly good intonation models for these languages using different approaches, and that intensity and fundamental frequency are comparable in predictive value for the ascribed tone. / Dissertation (MSc (Computer Science))--University of Pretoria, 2006. / Computer Science / MSc / unrestricted
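The gross/fine error comparison mentioned in this abstract follows a standard convention in pitch-tracker evaluation: an estimate that deviates from the reference by more than some relative threshold counts as a gross error, and accuracy on the remaining frames is the fine error. A minimal sketch of these metrics, where the 20% threshold and the toy frame values are illustrative assumptions rather than figures from the thesis:

```python
# Gross/fine pitch-error metrics, as commonly used to compare pitch trackers
# such as Praat's and YIN. Frames with reference 0.0 are treated as unvoiced
# and skipped; the 20% gross-error threshold is an assumed convention.

def pitch_errors(reference_hz, estimated_hz, gross_threshold=0.2):
    """Return (gross error rate, mean fine error in Hz) over voiced frames."""
    voiced = [(r, e) for r, e in zip(reference_hz, estimated_hz) if r > 0]
    gross = [(r, e) for r, e in voiced if abs(e - r) / r > gross_threshold]
    fine = [abs(e - r) for r, e in voiced if abs(e - r) / r <= gross_threshold]
    gross_rate = len(gross) / len(voiced) if voiced else 0.0
    mean_fine = sum(fine) / len(fine) if fine else 0.0
    return gross_rate, mean_fine

ref = [120.0, 122.0, 125.0, 0.0, 130.0]   # 0.0 marks an unvoiced frame
est = [119.0, 240.0, 126.0, 0.0, 131.5]   # 240 Hz is an octave error (gross)
print(pitch_errors(ref, est))
```

Octave errors, the most common pitch-tracker failure mode, land squarely in the gross category, which is why the two error types are reported separately.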
2

A Design of Karaoke Music Retrieval System by Acoustic Input

Tsai, Shiu-Iau 11 August 2003 (has links)
The objective of this thesis is to design a system that retrieves songs from acoustic input. The system listens to the melody or partial song sung by a karaoke user, and then prompts the user with the complete song paragraphs. Note segmentation is performed using both the magnitude of the song and the k-Nearest Neighbor technique. In order to speed up the system, the pitch period estimation algorithm is reformulated using a theory from communications. In addition, a large database of popular music is built to make the system more practical.
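The thesis reformulates pitch period estimation using communications theory, whose details are not given in the abstract; for intuition only, here is a plain autocorrelation sketch of the baseline task, where the sample rate and search range are illustrative assumptions:

```python
# Baseline pitch-period estimation by autocorrelation: the lag at which a
# frame best correlates with itself is taken as the pitch period. This is a
# generic sketch, not the thesis's communications-theory reformulation.
import math

def pitch_period(frame, sr, f_min=80.0, f_max=400.0):
    """Estimate the pitch period (in samples) of one audio frame."""
    lag_min, lag_max = int(sr / f_max), int(sr / f_min)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(frame) - 1) + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

sr = 8000
frame = [math.sin(2 * math.pi * 200 * n / sr) for n in range(400)]  # 200 Hz tone
print(sr / pitch_period(frame, sr))  # estimated fundamental frequency in Hz
```

The brute-force search here is O(N) per lag; speeding up exactly this step is what motivates the reformulation mentioned in the abstract.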
3

Automatic accompaniment of vocal melodies in the context of popular music

Cao, Xiang 08 April 2009 (has links)
A piece of popular music is usually defined as a combination of vocal melody and instrumental accompaniment. People often start with the melody when trying to compose or reproduce a piece of popular music. However, creating an appropriate instrumental accompaniment for a melody line can be a difficult task for non-musicians. Automating accompaniment generation for vocal melodies can therefore be very useful for those who are interested in singing for fun. This thesis presents a computer software system capable of generating harmonic accompaniment for a given vocal melody input. The automatic accompaniment system uses a Hidden Markov Model to assign chords to a given part of the melody based on knowledge learnt from a bank of vocal tracks of popular music. Compared with other similar systems, our system features a high-resolution key estimation algorithm which helps adjust the generated accompaniment to the input vocal. Moreover, we designed a structure analysis subsystem to extract the repetitions and structural boundaries from the melody. These boundaries are passed to the chord assignment and style player subsystems in order to generate more dynamic and organized accompaniment. Finally, prototype applications are discussed and the entire system is evaluated.
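HMM-based chord assignment of the kind this abstract describes treats chords as hidden states and melody notes as observations, then decodes the most likely chord sequence with the Viterbi algorithm. A toy sketch follows; the two-chord vocabulary and every probability in it are illustrative assumptions, not the thesis's trained model:

```python
# Viterbi decoding for HMM chord assignment: hidden states are chords,
# observations are melody pitch classes. All numbers below are toy values.

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely chord sequence for the observed melody notes."""
    paths = {s: ([s], start_p[s] * emit_p[s][observations[0]]) for s in states}
    for obs in observations[1:]:
        new_paths = {}
        for s in states:
            # Best predecessor path extended by chord s.
            prev, p = max(((ps, pp * trans_p[ps[-1]][s])
                           for ps, pp in paths.values()), key=lambda t: t[1])
            new_paths[s] = (prev + [s], p * emit_p[s][obs])
        paths = new_paths
    return max(paths.values(), key=lambda t: t[1])[0]

states = ["C", "G"]                                  # hypothetical chord vocabulary
start_p = {"C": 0.6, "G": 0.4}
trans_p = {"C": {"C": 0.7, "G": 0.3}, "G": {"C": 0.4, "G": 0.6}}
emit_p = {"C": {"c": 0.5, "e": 0.3, "d": 0.2},       # p(melody pitch class | chord)
          "G": {"c": 0.1, "e": 0.2, "d": 0.7}}
print(viterbi(["c", "e", "d"], states, start_p, trans_p, emit_p))  # → ['C', 'C', 'G']
```

In a real system of this kind, the transition and emission tables would be estimated from the bank of vocal tracks rather than written by hand.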
4

Music And Speech Analysis Using The 'Bach' Scale Filter-Bank

Ananthakrishnan, G 04 1900 (has links)
The aim of this thesis is to define a perceptual scale for the time-frequency analysis of music signals. The equal-tempered ‘Bach’ scale is a suitable scale, since it covers most genres of music and the error is equally distributed for each semitone. However, it may be necessary to allow a tolerance of around 50 cents, or half the interval of the Bach scale, so that the interval can accommodate other common intonation schemes. The thesis covers the formulation of the Bach scale filter-bank as a time-varying model and makes a comparative study with other commonly used perceptual scales. Two applications of the Bach scale filter-bank are also proposed, namely automated segmentation of speech signals and transcription of the singing voice for query-by-humming applications. Even though this filter-bank is motivated by music, it can also be applied to speech. A method for automatically segmenting continuous speech into phonetic units is proposed. The results obtained with the proposed method show around 82% accuracy for the English database and 85% accuracy for the Hindi database. This is an improvement of around 2-3% over other popular methods in the literature. Interestingly, the Bach scale filters perform better than filters designed for other common perceptual scales, such as the Mel and Bark scales. ‘Musical transcription’ refers to the process of converting a musical rendering or performance into a set of symbols or notations. A query in a query-by-humming system can be made in several ways, such as singing with words, singing with arbitrary syllables, or whistling. Two algorithms are suggested to annotate a query, designed to be fairly robust across these various forms of query. The first algorithm is a frequency-selection-based method: it works by selecting the most likely frequency components at any given time instant.
The second algorithm works by finding time-connected contours of high energy in the time-frequency plane of the input signal. The time-domain algorithm performs better in terms of instantaneous pitch estimates, with an error of around 10-15%, while the frequency-domain method results in an error of around 12-20%. A song rendered by two different people will differ in several properties: absolute pitch, rate of rendering, timbre based on voice quality, and inaccuracies. The thesis discusses a method to quantify the distance between two different renderings of a musical piece. The distance function has been evaluated by searching for a particular song in a database of 315 entries, made up of songs sung by both male and female singers as well as whistled queries. Around 90% of the time, the correct song is found among the top five choices. Thus, the Bach scale has been proposed as a suitable scale for representing the perception of music. It has been explored in two applications, namely automated segmentation of speech and transcription of the singing voice. Using the transcription obtained, a measure of the distance between renderings of musical pieces has also been suggested.
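The defining property of an equal-tempered scale like the ‘Bach’ scale described above is that filter centres are spaced one semitone apart, i.e. successive centres differ by a factor of 2^(1/12), and the 50-cent tolerance corresponds to half that interval on either side of each centre. A minimal sketch, in which the reference frequency and filter count are illustrative assumptions rather than the thesis's filter-bank design:

```python
# Equal-tempered ("Bach" scale) filter centres: one filter per semitone,
# each a factor of 2**(1/12) above the last, with a +/-50 cent band
# (half a semitone, i.e. a factor of 2**(1/24)) around each centre.

def bach_scale_centres(f_ref=110.0, n_filters=48):
    """Centre frequencies of semitone-spaced filters starting at f_ref (Hz)."""
    return [f_ref * 2 ** (k / 12) for k in range(n_filters)]

def band_edges(centre_hz):
    """Lower/upper band edges at -/+50 cents around a centre frequency."""
    half_semitone = 2 ** (1 / 24)
    return centre_hz / half_semitone, centre_hz * half_semitone

centres = bach_scale_centres()
print(centres[12])          # 12 semitones up = one octave above the reference
print(band_edges(440.0))    # +/-50 cent band around A4
```

Because the spacing is a constant ratio rather than a constant bandwidth, the error in cents is the same at every semitone, which is the "equally distributed" property the abstract credits to this scale.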
5

Compositional approaches within new media paradigms

Oliveiro, Mark, 1983- 05 1900 (has links)
"Compositional Approaches to New Media Paradigms" is the discursive accompaniment to the original composition BoMoH, a new media chamber opera. A variety of new media concepts and practices are discussed in relation to their use as a contemporary compositional methodology for computer musicians and digital content producers. This paper aligns relevant discourse with a variety of concepts as they influence and affect the compositional process. It does not propose a new working method; rather, it draws attention to a contemporary interdisciplinary practice that opens new possibilities for engagement and aesthetics in digital art and music. Finally, by demonstrating a selection of design principles, drawn from a variety of new media theories of interest, in compositional structure and concept, I hope to provide composers and computer musicians with a tested resource that functions as a helpful set of working guidelines for producing new-media-enabled art, sonic or otherwise.
6

Towards expressive melodic accompaniment using parametric modeling of continuous musical elements in a multi-attribute prediction suffix trie framework

Mallikarjuna, Trishul 22 November 2010 (has links)
Elements of continuous variation such as tremolo, vibrato and portamento add expressive dimensions of their own to melodic music in styles such as Indian classical music. There is published work on parametrically modeling some of these elements individually and applying the modeled parameters to automatically generated musical notes in the context of machine musicianship, using simple rule-based mappings. There have also been many systems developed for generative musical accompaniment using probabilistic models of discrete musical elements such as MIDI notes and durations, many of them inspired by computational research in linguistics. There does not, however, appear to have been a combined approach that parametrically models expressive elements within a probabilistic framework. This document presents a real-time computational framework that uses a multi-attribute trie/n-gram structure to model parameters such as the frequency, depth and/or lag of expressive variations like vibrato and portamento, along with conventionally modeled elements such as musical notes, their durations and metric positions in melodic audio input. This work proposes storing the parameters of expressive elements as metadata in the individual nodes of the traditional trie structure, along with the distribution of their probabilities of occurrence. During automatic generation of music, the expressive parameters learned in the training phase are applied to the associated re-synthesized musical notes. The model is aimed at providing automatic melodic accompaniment in a performance scenario. Parametric modeling of continuous expressive elements in this form is hypothesized to capture deeper temporal relationships among musical elements and is thereby expected to produce a more expressive and more musical outcome in such a performance than has been possible with other machine-musicianship work using only static mappings or randomized choice.
A system implementing this framework was developed on the Max/MSP software platform; it takes a pitched audio input, such as a human singing voice, and produces a pitch track that may be applied to a synthesized sound of continuous timbre. The system was trained and tested with several vocal recordings of North Indian classical music, and a subjective evaluation of the resulting audio was made using an anonymous online survey. The survey results show the output tracks generated by the system to be as musical and expressive, if not more so, than the case where the pitch track extracted from the original audio was rendered directly as output; they also show the output with expressive elements to be perceivably more expressive than the version without expressive parameters. The results further suggest that more experimentation is required to establish the efficacy of the framework relative to using randomly selected parameter values for the expressive elements. This thesis presents the scope, context, implementation details and results of the work, and suggests future improvements.
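The core idea in this abstract — trie nodes that carry expressive parameters as metadata alongside next-symbol statistics — can be sketched compactly. The attribute name (`vibrato_depth`), the note symbols, and the simple averaging of stored parameters are illustrative assumptions, not the thesis's implementation:

```python
# A prediction suffix trie whose nodes store expressive parameters as
# metadata next to continuation counts. Training inserts every suffix of
# the melody; prediction walks the context and picks the most frequent
# next note, returning its averaged expressive metadata.

class TrieNode:
    def __init__(self):
        self.children = {}    # next note symbol -> TrieNode
        self.count = 0        # times this context continued with this note
        self.expressive = []  # expressive parameters observed at this node

def train(root, notes, params):
    """Insert every suffix of a (note, expressive-params) melody into the trie."""
    for start in range(len(notes)):
        node = root
        for note, p in zip(notes[start:], params[start:]):
            node = node.children.setdefault(note, TrieNode())
            node.count += 1
            node.expressive.append(p)

def predict(root, context):
    """Most likely next note after `context`, with averaged vibrato depth."""
    node = root
    for note in context:
        if note not in node.children:
            return None
        node = node.children[note]
    if not node.children:
        return None
    note, child = max(node.children.items(), key=lambda kv: kv[1].count)
    depth = sum(p["vibrato_depth"] for p in child.expressive) / len(child.expressive)
    return note, depth

root = TrieNode()
train(root, ["sa", "re", "ga", "re", "ga"],
      [{"vibrato_depth": d} for d in (0.0, 0.1, 0.3, 0.2, 0.4)])
print(predict(root, ["re"]))  # most likely continuation of "re", mean depth
```

Storing the raw observed parameters in each node, rather than only a summary, keeps the door open for sampling from their distribution at generation time instead of always applying the mean.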
