1

Recognition of nonstationary signals with particular reference to the piano

Carrasco, Ana Cristina Pereira Rosa Pascoalinho January 2002 (has links)
No description available.
2

The piano transcriptions of Liszt and their place in the musical style of the nineteenth century

Kirton, Stanley January 1962 (has links)
Thesis (D.M.A.)--Boston University.
3

Second elegy: a transcription for orchestra

Lucas, Thomas Donald January 1961 (has links)
Thesis (M.M.)--Boston University.
4

Analysis and resynthesis of polyphonic music

Nunn, Douglas John Edgar January 1997 (has links)
This thesis examines applications of Digital Signal Processing to the analysis, transformation, and resynthesis of musical audio. First I give an overview of the human perception of music. I then examine in detail the requirements for a system that can analyse, transcribe, process, and resynthesise monaural polyphonic music. I then describe and compare the possible hardware and software platforms. After this I describe a prototype hybrid system that attempts to carry out these tasks using a method based on additive synthesis. Next I present results from its application to a variety of musical examples, and critically assess its performance and limitations. I then address these issues in the design of a second system based on Gabor wavelets. I conclude by summarising the research and outlining suggestions for future developments.
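The resynthesis stage described above is based on additive synthesis: rebuilding a sound as a sum of sinusoidal partials estimated during analysis. The thesis does not give its implementation; the following is a generic, minimal sketch of the idea, with hypothetical partial data.

```python
import numpy as np

def additive_resynthesis(partials, sr=44100, duration=1.0):
    """Resynthesise a tone as a sum of sinusoidal partials.

    `partials` is a list of (frequency_hz, amplitude) pairs -- the kind
    of data an additive-analysis stage might estimate from audio.
    """
    t = np.arange(int(sr * duration)) / sr
    signal = np.zeros_like(t)
    for freq, amp in partials:
        signal += amp * np.sin(2 * np.pi * freq * t)
    return signal

# A hypothetical A4 tone with decaying harmonics:
tone = additive_resynthesis([(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)])
```

A real analysis/resynthesis system would also track time-varying frequency and amplitude envelopes per partial; this sketch shows only the static summation at the core of the method.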
5

Automatic music transcription using structure and sparsity

O'Hanlon, Ken January 2014 (has links)
Automatic Music Transcription seeks a machine understanding of a musical signal in terms of pitch-time activations. One popular approach to this problem is the use of spectrogram decompositions, whereby a signal matrix is decomposed over a dictionary of spectral templates, each representing a note. Typically the decomposition is performed using gradient descent based methods, performed using multiplicative updates based on Non-negative Matrix Factorisation (NMF). The final representation may be expected to be sparse, as the musical signal itself is considered to consist of few active notes. In this thesis some concepts that are familiar in the sparse representations literature are introduced to the AMT problem. Structured sparsity assumes that certain atoms tend to be active together. In the context of AMT this affords the use of subspace modelling of notes, and non-negative group sparse algorithms are proposed in order to exploit the greater modelling capability introduced. Stepwise methods are often used for decomposing sparse signals and their use for AMT has previously been limited. Some new approaches to AMT are proposed by incorporation of stepwise optimal approaches with promising results seen. Dictionary coherence is used to provide recovery conditions for sparse algorithms. While such guarantees are not possible in the context of AMT, it is found that coherence is a useful parameter to consider, affording improved performance in spectrogram decompositions.
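The spectrogram-decomposition approach described above can be illustrated with a minimal NMF-style sketch (not the thesis's own algorithms): a magnitude spectrogram is decomposed over a fixed dictionary of note templates using multiplicative updates, yielding pitch-time activations.

```python
import numpy as np

def nmf_transcribe(V, W, n_iter=500, eps=1e-9):
    """Decompose a magnitude spectrogram V (freq x time) over a fixed
    dictionary W of spectral note templates (freq x notes), returning
    the activation matrix H (notes x time).

    Uses Lee-Seung multiplicative updates for the Euclidean cost; W is
    held fixed, as is common when templates are learned per note.
    """
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        # Multiplicative update keeps H non-negative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Toy example: two "notes" whose templates overlap in one frequency bin.
W = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
H_true = np.array([[1., 0., 2.],
                   [0., 3., 1.]])
V = W @ H_true
H = nmf_transcribe(V, W)
```

A sparse or group-sparse variant, as the thesis investigates, would add penalty terms or stepwise atom selection on top of this basic decomposition.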
6

Vibraphone transcription from noisy audio using factorization methods

Zehtabi, Sonmaz 30 April 2012 (has links)
This thesis presents a comparison between two factorization techniques, Probabilistic Latent Component Analysis (PLCA) and Non-Negative Least Squares (NNLSQ), for the problem of detecting note events played by a vibraphone, using a microphone for sound acquisition in the context of live performance. Ambient noise is reduced by using specific dictionary codewords to model the noise. The results of the factorization are analyzed by two causal onset detection algorithms: a rule-based algorithm and a trained machine-learning-based classifier. These onset detection algorithms yield decisions on when note events happen. Comparative results are presented, considering a database of vibraphone recordings with different levels of noise, showing the conditions under which the event detection is reliable.
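The causal, rule-based onset detection stage described above can be sketched generically (this is an illustrative reconstruction, not the thesis's algorithm): an onset is declared whenever a note's factorization activation crosses a threshold from below, using only past frames, so it can run during live performance.

```python
import numpy as np

def rule_based_onsets(activations, threshold=0.5):
    """Hypothetical causal rule-based onset detector.

    `activations` is a (notes x frames) matrix, e.g. from PLCA or
    NNLSQ. An onset (frame, note) is reported wherever an activation
    rises to `threshold` or above from below it on the previous frame.
    """
    onsets = []
    prev = np.zeros(activations.shape[0])
    for t in range(activations.shape[1]):
        col = activations[:, t]
        for note in np.flatnonzero((col >= threshold) & (prev < threshold)):
            onsets.append((t, int(note)))
        prev = col
    return onsets

# Two notes over four frames: note 0 onsets at frame 1, note 1 at frame 2.
acts = np.array([[0.1, 0.9, 0.8, 0.2],
                 [0.0, 0.1, 0.7, 0.6]])
detected = rule_based_onsets(acts)
```

The trained classifier the thesis compares against would replace the fixed threshold rule with a learned decision function over the same activation features.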
7

Fantasia and fugue, 2nd movement, op. 110: a transcription for band

Murray, Robert January 1961 (has links)
Thesis (M.M.)--Boston University.
8

Four selected cantatas by Alessandro Scarlatti: transcription from manuscript number M360.10 of the Boston Public Library and commentary

Mandel, Sara Yehudah January 1974 (has links)
Thesis (M.A.)--Boston University / The cantatas of Alessandro Scarlatti (1660-1725) represent the culmination of more than a century of Italian secular cantata composition. Although overshadowed by the immense popularity of the Neapolitan opera, the cantata served as the ideal medium for the experimentation with and the perfection of new musical techniques. Thus, while Scarlatti himself was far better known for his operas, his numerous cantatas are of greater historical and musical significance. Although Scarlatti's cantatas are distinguished by their beauty and craftsmanship, only a very few have been edited in modern published editions. This is despite the fact that almost eight hundred of Scarlatti's cantatas are known to exist in manuscript form. These have been exhaustively indexed by Edwin Hanley in an unpublished Yale University dissertation, "Alessandro Scarlatti's Cantate da Camera: A Bibliographic Study" (1963). However, the forthcoming complete edition of them lies years in the future. [TRUNCATED]
9

Language of music: a computational model of music interpretation

McLeod, Andrew Philip January 2018 (has links)
Automatic music transcription (AMT) is commonly defined as the process of converting an acoustic musical signal into some form of musical notation, and can be split into two separate phases: (1) multi-pitch detection, the conversion of an audio signal into a time-frequency representation similar to a MIDI file; and (2) converting from this time-frequency representation into a musical score. A substantial amount of AMT research in recent years has concentrated on multi-pitch detection, and yet, in the case of the transcription of polyphonic music, there has been little progress. There are many potential reasons for this slow progress, but this thesis concentrates on the (lack of) use of music language models during the transcription process. In particular, a music language model would impart to a transcription system the background knowledge of music theory upon which a human transcriber relies. In the related field of automatic speech recognition, it has been shown that the use of a language model drawn from the field of natural language processing (NLP) is an essential component of a system for transcribing spoken word into text, and there is no reason to believe that music should be any different. This thesis will show that a music language model inspired by NLP techniques can be used successfully for transcription. In fact, this thesis will create the blueprint for such a music language model. We begin with a brief overview of existing multi-pitch detection systems, in particular noting four key properties which any music language model should have to be useful for integration into a joint system for AMT: it should (1) be probabilistic, (2) not use any data a priori, (3) be able to run on live performance data, and (4) be incremental. We then investigate voice separation, creating a model which achieves state-of-the-art performance on the task, and show that, used as a simple music language model, it improves multi-pitch detection performance significantly. 
This is followed by an investigation of metrical detection and alignment, where we introduce a grammar crafted for the task which, combined with a beat-tracking model, achieves state-of-the-art results on metrical alignment. This system's success adds more evidence to the long-existing hypothesis that music and language consist of extremely similar structures. We end by investigating the joint analysis of music, in particular showing that a combination of our two models running jointly outperforms each running independently. We also introduce a new joint, automatic, quantitative metric for the complete transcription of an audio recording into an annotated musical score, something which the field currently lacks.
10

Methods of Music Classification and Transcription

Baker, Jonathan Peter 06 July 2012 (has links)
We begin with an overview of some signal processing terms and topics relevant to music analysis including facts about human sound perception. We then discuss common objectives of music analysis and existing methods for accomplishing them. We conclude with an introduction to a new method of automatically transcribing a piece of music from a digital audio signal.
