1

Onset detection in polyphonic music / Ansatsdetektion i polyfon musik

Efraimsson, Nils January 2017 (has links)
In music analysis, detecting the beginning of events in a music signal (i.e. sound onset detection) is important for tasks such as sound segmentation, beat recognition and automatic music transcription. The aim of the present work was to develop an algorithm for sound onset detection with better performance than other state-of-the-art algorithms. The necessary theoretical background for spectral analysis of a sound signal is given, with special focus on the Short-Time Fourier Transform (STFT) and the effects of applying a window to a signal. Previous works based on different approaches to sound onset detection were studied, and a possible improvement was observed for one such approach, namely the one developed by Bello, Duxbury, Davies, & Sandler (2004). The algorithm uses an STFT approach, analyzing a sound signal frame by frame. The detection is sequential in nature: it takes a frame from the STFT and extrapolates to the next frame, assuming that the signal is constant. The difference between the extrapolated frame and the actual frame of the STFT constitutes the detection function. The proposed improvement combines ideas from other algorithms: the signal is analyzed in separate frequency bands with frequency-dependent settings, and the extrapolation step is modified. The proposed algorithm is compared to the original algorithm and an adaptation by Dixon (2006) by analyzing 20 songs using three different window functions. The results were evaluated against the standards set by MIREX (2005-2016). The results of the proposed algorithm are encouraging, showing good recall, but it fails to outperform either of the compared algorithms in precision and F-measure. The shortcomings of the proposed algorithm leave room for further improvement, and a number of possible future modifications are outlined.
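The frame-prediction scheme described in the abstract can be sketched in a simplified form. This is an illustrative complex-domain spectral-difference detector, not the exact algorithm of Bello, Duxbury, Davies, & Sandler (2004); the frame length, hop size and window are arbitrary choices:

```python
import numpy as np

def onset_detection_function(signal, frame_len=1024, hop=512):
    """Spectral-difference detection function: 'predict' each STFT
    frame by assuming the previous frame stays constant, and use the
    magnitude of the prediction error as the detection function."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        np.fft.rfft(window * signal[i * hop:i * hop + frame_len])
        for i in range(n_frames)
    ])
    # Difference between each frame and its "constant signal" prediction
    return np.abs(frames[1:] - frames[:-1]).sum(axis=1)

# An impulse in an otherwise silent signal should yield a clear peak
sig = np.zeros(8192)
sig[4096] = 1.0
odf = onset_detection_function(sig)
peak_frame = int(np.argmax(odf))  # frame index of the strongest change
```

Peaks of the detection function are then picked (e.g. by thresholding) to obtain onset times; the algorithm studied in the thesis also extrapolates phase, so that steady sinusoids produce a small prediction error while onsets produce a large one.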
2

A cross-cultural analysis of music structure

Tian, Mi January 2017 (has links)
Music signal analysis is a research field concerning the extraction of meaningful information from musical audio signals. This thesis analyses music signals from the note level to the song level in a bottom-up manner and situates the research in two Music Information Retrieval (MIR) problems: audio onset detection (AOD) and music structural segmentation (MSS). Most MIR tools are developed for and evaluated on Western music, with specific musical knowledge encoded. This thesis approaches the investigated tasks from a cross-cultural perspective by developing audio features and algorithms applicable to both Western and non-Western genres. Two Chinese Jingju databases are collected to support the AOD and MSS tasks, respectively. New features and algorithms for AOD are presented, relying on fusion techniques. We show that fusion can significantly improve the performance of the constituent baseline AOD algorithms. A large-scale parameter analysis is carried out to identify the relations between system configurations and the musical properties of different music types. Novel audio features are developed to summarise music timbre, harmony and rhythm for structural description. The new features serve as effective alternatives to commonly used ones, showing comparable performance on existing datasets and surpassing them on the Jingju dataset. A new segmentation algorithm is presented which effectively captures the structural characteristics of Jingju. By evaluating the presented audio features and different segmentation algorithms, incorporating different structural principles, on the investigated music types, this thesis also identifies the underlying relations between audio features, segmentation methods and music genres in the scenario of music structural analysis.
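As an illustration of the fusion idea, a minimal late-fusion scheme can combine the detection functions of several baseline detectors. The min-max normalization and uniform weights below are generic assumptions, not the specific fusion techniques developed in the thesis:

```python
import numpy as np

def fuse_detection_functions(odfs, weights=None):
    """Late fusion: normalize each onset detection function to [0, 1]
    and take a weighted average, so that peaks which several detectors
    agree on are reinforced."""
    odfs = [np.asarray(o, dtype=float) for o in odfs]
    normed = []
    for o in odfs:
        rng = o.max() - o.min()
        normed.append((o - o.min()) / rng if rng > 0 else np.zeros_like(o))
    weights = np.ones(len(normed)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    return sum(w * o for w, o in zip(weights, normed))

# Two detectors agree on a peak at index 1; only one fires at index 3
a = np.array([0.0, 1.0, 0.0, 0.4, 0.0])
b = np.array([0.0, 0.8, 0.0, 0.0, 0.0])
fused = fuse_detection_functions([a, b])
```

Peaks supported by several detectors survive the averaging while spurious single-detector peaks are attenuated, which is one intuition for why fusion can outperform the constituent baselines.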
3

Αυτόματο σύστημα εκμάθησης μουσικών οργάνων / Automatic musical instrument learning system

Κομπογιάννης, Ηλίας 30 December 2014 (has links)
The purpose of this project is the construction of a musical-instrument learning system. Specifically, in the context of this thesis, we studied the guitar. This was achieved with the help of Matlab software, in which we take the original music track and the track played by the student and compare the two. To do this, however, some preliminary steps are required. First, we identify the times at which notes are played, that is, we find the onset points. Then we determine which note is played at each of those time points, which is achieved with the Harmonic Product Spectrum method, where we find the fundamental frequency. Finally, we define the criteria for the comparison and the results to be provided.
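The Harmonic Product Spectrum step mentioned above can be sketched as follows; the sampling rate, harmonic count and test tone are illustrative choices:

```python
import numpy as np

def hps_fundamental(signal, sr, n_harmonics=3):
    """Harmonic Product Spectrum: multiply the magnitude spectrum with
    downsampled copies of itself, so only a candidate fundamental whose
    harmonics are all present keeps a large product."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        decimated = spectrum[::h]      # decimated[k] == spectrum[k * h]
        hps[:len(decimated)] *= decimated
    bin_idx = int(np.argmax(hps[1:])) + 1  # skip the DC bin
    return bin_idx * sr / len(signal)

# A 220 Hz tone with harmonics at 440 and 660 Hz
sr = 8000
t = np.arange(0, 1.0, 1 / sr)
tone = (np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)
        + 0.3 * np.sin(2 * np.pi * 660 * t))
f0 = hps_fundamental(tone, sr)
```

Because a subharmonic or an overtone lacks some of the required harmonic partners, its product collapses, which is why HPS is robust for pitched instruments such as the guitar.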
4

Computationally efficient methods for polyphonic music transcription

Pertusa, Antonio 09 July 2010 (has links)
This work proposes a set of efficient methods for converting a polyphonic musical audio signal (WAV, MP3) into a score (MIDI).
5

Expressive Automatic Music Transcription : Using hard onset detection to transcribe legato slurs for violin / Expressiv Automatisk Musiktranskription : Användning av hård ansatsdetektion för transkription av legatobågar för violin

Falk, Simon January 2022 (has links)
Automatic Music Transcription systems such as ScoreCloud aim to convert audio signals to sheet music. The information contained in sheet music can be divided into increasingly descriptive layers; most research on Automatic Music Transcription is restricted to note-level transcription and disregards expressive markings such as legato slurs. In the case of violin playing, legato can be determined from the articulated, "hard" onsets that occur on the first note of a legato slur. We detect hard onsets in violin recordings by three different methods: two based on signal processing and one on Convolutional Neural Networks. ScoreCloud notes are then labeled as articulated or slurred, depending on the distance to the closest hard onset. Finally, we construct legato slurs between articulated notes and count the number of notes where the detected slur label matches ground truth. Our best-performing method correctly labels 82.9% of notes, averaged over the test set recordings. The designed system serves as a proof of concept for including expressive notation within Automatic Music Transcription. Vibrato was seen to have a major negative impact on performance, while the method is less affected by varying sound quality and polyphony. Our system could be further improved by using phase input, data augmentation, or high-dimensional articulation representations.
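The labeling rule described above (articulated vs. slurred, by distance to the closest hard onset) can be sketched as follows; the 50 ms tolerance and the example times are hypothetical values, not taken from the thesis:

```python
def label_notes(note_onsets, hard_onsets, tolerance=0.05):
    """Label each note 'articulated' if a detected hard onset lies
    within `tolerance` seconds of the note's onset time, else
    'slurred'. Slur boundaries then fall at articulated notes."""
    labels = []
    for t in note_onsets:
        dist = min((abs(t - h) for h in hard_onsets), default=float("inf"))
        labels.append("articulated" if dist <= tolerance else "slurred")
    return labels

# Notes at 0.0 s and 1.52 s coincide with detected hard onsets,
# so the two notes in between belong to legato slurs
notes = [0.00, 0.50, 1.00, 1.52]
hard = [0.01, 1.50]
labels = label_notes(notes, hard)
```

A slur can then be drawn from each articulated note up to (but not including) the next articulated note, matching the observation that a hard onset marks the first note of a legato slur.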
6

A Hierarchical Approach To Music Analysis And Source Separation

Thoshkahna, Balaji 11 1900 (has links) (PDF)
Music analysis and source separation have become important, allied areas of research over the last decade. Analyzing a music signal for important events such as onsets, offsets and transients is a key problem in this direction; these tasks help in music source separation and transcription. Source separation approaches have also made great strides, but most of these techniques are aimed at Western music and perform poorly on Indian music. The fluid style of instrumentation in Indian music requires a slightly modified approach to analysis and source separation. We propose an onset detection algorithm motivated by the human auditory system. This algorithm has the advantage of a unified framework for detecting both onsets and offsets in music signals. It is further extended to detect percussive transients, which have sharp onsets followed closely by sharp offsets; this characteristic is exploited in the percussive transient detection algorithm. Detection alone does not lend itself well to the extraction of transients, hence we propose an iterative algorithm to extract all types of transients from a polyphonic music signal. The proposed iterative algorithm is both fast and accurate in extracting transients of various strengths. The problem of transient extraction can be extended to harmonic/percussive sound separation (HPSS), where a music signal is separated into two streams consisting of components mainly from percussive and harmonic instruments, respectively. Most algorithms proposed to date deal with HPSS for Western music, but Indian classical/film music exhibits a different style of instrumentation and singing, including a high degree of vibrato and glissando content. This requires new approaches to HPSS. We propose extensions to two existing HPSS techniques, adapting them for Indian music.
In both extensions we retain the original framework of the algorithm, showing that it is easy to incorporate the changes needed to handle Indian music. We also propose a new HPSS algorithm inspired by our transient extraction technique. This algorithm can be considered a generalized extension of our transient extraction algorithm and supports our view that HPSS can be seen as an extension of transient analysis. Even the best HPSS techniques leak harmonic components into the percussion stream, which can degrade tasks such as rhythm analysis. To reduce this leakage, we propose a post-processing technique on the percussion stream of the HPSS algorithm. The proposed method uses signal stitching, exploiting a commonly used model for percussive envelopes. We also developed a vocals extraction algorithm operating on the harmonic stream of the HPSS algorithm. The vocals extraction follows the popular paradigm of extracting the predominant pitch and then generating the vocals signal corresponding to that pitch. We show that HPSS as a pre-processing step reduces interference from percussive sources in the extraction stage. The performance of vocal extraction also improves with knowledge of the locations of the vocal segments, which we demonstrate with the help of an oracle that locates them; the oracle greatly reduces interference from other dominant sources in the extracted vocals signal.
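The thesis extends existing HPSS techniques; as background, a commonly used baseline (median filtering, in the style of FitzGerald) separates the two streams by smoothing the magnitude spectrogram along time for harmonic content and along frequency for percussive content. The kernel size and toy spectrogram below are illustrative assumptions, not the thesis's own algorithm:

```python
import numpy as np
from scipy.ndimage import median_filter

def hpss_masks(mag_spec, kernel=17):
    """Median-filtering HPSS baseline: harmonic instruments form
    horizontal ridges (stable frequency over time) and percussion
    forms vertical ridges (broadband at one instant), so smoothing
    in each direction enhances one component and suppresses the
    other. Binary masks are built by comparing the two estimates."""
    harm = median_filter(mag_spec, size=(1, kernel))  # smooth across time
    perc = median_filter(mag_spec, size=(kernel, 1))  # smooth across frequency
    harmonic_mask = harm >= perc
    return harmonic_mask, ~harmonic_mask

# Toy spectrogram (frequency x time): a sustained tone plus a click
S = np.zeros((64, 64))
S[20, :] = 1.0   # horizontal line: harmonic component
S[:, 40] += 1.0  # vertical line: percussive component
h_mask, p_mask = hpss_masks(S)
```

Applying the masks to the complex spectrogram and inverting yields the two streams; leakage of harmonic components into the percussion stream, which the thesis's post-processing targets, is a known weakness of such masking.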
