
Fundamental frequency as basis for speech segmentation modeling

Marklund, Ellen January 2011
The present study investigates the relevance of fundamental frequency in speech segmentation models intended to simulate infants. Speech from three different conditions (infant-directed speech to 3- and 12-month-olds, and adult-directed speech) was segmented based on fundamental frequency information, using a variant of the dpn-gram segmenting technique (highlighting similar segments as lexical candidates). The spectral distance between segments that were found based on fundamental frequency similarity was calculated, and compared to the spectral distance between segments that were found using transcription as basis for segmentation, as well as to the spectral distance between randomly paired segments from the same speech materials. The results show the greatest within-condition difference in speech directed to 3-month-olds, in which segmenting based on fundamental frequency similarity generated segment pairs with smaller spectral distance than did transcription-based segmentation or random segment pairs. Speech directed to 12-month-olds resulted in a somewhat smaller difference when using fundamental frequency data compared to when using transcriptions. For adult-directed speech, no difference was found in spectral distance between pairs generated by the different bases for segmentation. Neither segmenting speech by highlighting similar segments as lexical candidates, nor using fundamental frequency as basis for segmentation is optimal for a speech segmentation model intended to simulate 12-month-olds or adults. These groups are more likely to segment speech based on their already present or growing linguistic experience than on acoustic similarity only. However, for a model simulating a 3-month-old infant, the present segmentation procedure and its basis for segmentation are more plausible. When modeling speech segmentation in an infant-like manner it is important to take into account both that the cognitive abilities of infants develop rapidly during the first year of life, and that some aspects of their linguistic environment vary during this period.
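The core comparison in this abstract (spectral distance between segment pairs found by one criterion versus randomly paired segments) can be sketched as follows. This is an illustrative Python sketch, not the study's dpn-gram procedure: the segments are synthetic vectors, and Euclidean distance between mean spectral vectors stands in for the study's spectral distance measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "segments": 40 segments summarised by 24-dimensional mean spectral vectors,
# built in similar pairs so that candidate pairs are genuinely closer than chance.
base = rng.standard_normal((20, 24))
segments = np.repeat(base, 2, axis=0) + 0.1 * rng.standard_normal((40, 24))

def mean_pair_distance(pairs, segs):
    """Average Euclidean spectral distance over a list of (i, j) segment pairs."""
    return float(np.mean([np.linalg.norm(segs[i] - segs[j]) for i, j in pairs]))

# Pairs found by some similarity criterion (stand-ins for F0-similar segments)
# versus randomly drawn segment pairs, as in the comparison described above.
candidate_pairs = [(2 * i, 2 * i + 1) for i in range(20)]
random_pairs = [tuple(rng.choice(40, size=2, replace=False)) for _ in range(20)]

print("candidate pairs:", mean_pair_distance(candidate_pairs, segments))
print("random pairs:   ", mean_pair_distance(random_pairs, segments))
```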

Segmentace řeči / Speech segmentation

Kašpar, Ladislav January 2015
My diploma thesis is devoted to the problem of speech segmentation. It includes the basic theory on this topic, focusing on the calculation of the parameters for speech segmentation that are used in the practical part. An application for speech segmentation has been written in Matlab. It uses techniques such as segmentation of the signal, the energy of the signal and the zero-crossing function. These parameters are used as input for the k-means algorithm.
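A minimal Python sketch of the pipeline named in the abstract (the thesis itself was written in Matlab): short-time energy and zero-crossing rate are computed per frame and clustered with k-means. The frame length, hop size and the synthetic test signal are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def frame_features(signal, frame_len=400, hop=160):
    """Per-frame log energy and zero-crossing rate."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        log_energy = np.log(np.sum(frame ** 2) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # fraction of sign changes
        feats.append((log_energy, zcr))
    return np.array(feats)

# Synthetic test signal: low-level noise ("silence") followed by a louder burst ("speech").
rng = np.random.default_rng(0)
signal = np.concatenate([0.01 * rng.standard_normal(8000), rng.standard_normal(8000)])

features = frame_features(signal)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # frames grouped into two classes by k-means
```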

The Effects of Bi-Modal Input on Fostering L2 Japanese Speech Segmentation Skills

Natsumi Suzuki (6594341) 15 May 2019
The purpose of this study was to investigate to what extent bi-modal input improves the word segmentation ability of L2 learners of Japanese. Accurately identifying words in continuous speech is a fundamental process for comprehending the overall message, but studies show that second language (L2) learners often find this task difficult, even when all individual words are familiar to them (e.g., Field, 2003; Goh, 2000). This is where the combination of written and audio input (bi-modal input), such as captions in the target language, can be helpful, because it provides an orthographic image of the sounds learners hear, which in turn makes the input more intelligible (Charles & Trenkic, 2015). This study used a single-case design (SCD) in which 12 third-year Japanese learners at a public university in the Midwestern United States took part in a semester-long pre-post experiment. Participants watched a series of Japanese documentaries with sound and captions (bi-modal input) throughout the semester. Before and after viewing each video, as well as at the beginning and end of the semester, participants completed Elicited Imitation Tasks (EITs) as pre- and post-tests. The results showed that most participants improved their EIT scores over the semester, even on utterances from videos and speakers to which they had not been exposed. This study provided evidence that bi-modal input has the potential to help learners’ internal phonological representations of lexical items become more stable and sophisticated, which would in turn contribute to L2 Japanese learners’ speech processing efficiency.

Explicit Segmentation Of Speech For Indian Languages

Ranjani, H G 03 1900
Speech segmentation is the process of identifying the boundaries between words, syllables or phones in the recorded waveforms of spoken natural languages. The lowest level of speech segmentation is the breakup and classification of the sound signal into a string of phones. The difficulty of this problem is compounded by the phenomenon of co-articulation of speech sounds. The classical solution to this problem is to manually label and segment spectrograms. In the first step of this two-step process, a trained person listens to a speech signal, recognizes the word and phone sequence, and roughly determines the position of each phonetic boundary. The second step involves examining several features of the speech signal to place a boundary mark at the point where these features best satisfy a certain set of conditions specific for that kind of phonetic boundary. Manual segmentation of speech into phones is a highly time-consuming and painstaking process. Since it is required for a variety of applications, such as acoustic analysis or building speech synthesis databases for high-quality speech output systems, the time required to carry out this process for even relatively small speech databases can rapidly accumulate to prohibitive levels. This calls for automating the segmentation process. State-of-the-art segmentation techniques use Hidden Markov Models (HMM) for phone states. They give an average accuracy of over 95% within 20 ms of manually obtained boundaries. However, HMM-based methods require large amounts of training data for good performance. Another major disadvantage of such speech recognition based segmentation techniques is that they cannot handle very long utterances, which are necessary for prosody modeling in speech synthesis applications. Development of Text to Speech (TTS) systems in Indian languages has been difficult to date owing to the non-availability of sizeable segmented speech databases of good quality. Further, no prosody models exist for most of the Indian languages. Therefore, long utterances (at the paragraph level and monologues) have been recorded, as part of this work, for creating the databases. This thesis aims at automating segmentation of very long speech sentences recorded for the application of corpus-based TTS synthesis for multiple Indian languages. In this explicit segmentation problem, we need to force align boundaries in any utterance from its known phonetic transcription. The major disadvantage of forcing boundary alignments on the entire speech waveform of a long utterance is the accumulation of boundary errors. To overcome this, we force boundaries between 2 known phones (here, 2 successive stop consonants are chosen) at a time. The approach used is silence detection as a marker for stop consonants. This method gives around 89% accuracy (for the Hindi database) and is language independent and training free. These stop consonants act as anchor points for the next stage. Two methods for explicit segmentation have been proposed. Both methods rely on the accuracy of the above stop consonant detection stage. Another common stage is the recently proposed implicit method which uses a Bach scale filter bank to obtain the feature vectors. The Euclidean Distance of the Mean of the Logarithm (EDML) of these feature vectors shows peaks at the points where the spectrum changes.
The method performs with an accuracy of 87% within 20 ms of manually obtained boundaries and also achieves low deletion and insertion rates of 3.2% and 21.4% respectively, for 100 sentences of the Hindi database. The first method is a three-stage approach: the stop consonant detection stage is followed by a stage which uses Quatieri’s sinusoidal model to classify sounds as voiced/unvoiced between 2 successive stop consonants, and the final stage uses the EDML function of the Bach scale feature vectors to further obtain boundaries within the voiced and unvoiced regions. It gives a Frame Error Rate (FER) of 26.1% for the Hindi database. The second method proposed uses duration statistics of the phones of the language. It again uses the EDML function of the Bach scale filter bank to obtain the peaks at the phone transitions and uses the duration statistics to assign a probability to each peak being a boundary. In this method, the FER performance improves to 22.8% for the Hindi database. Both methods are promising in that they give low frame error rates. Results show that the second method outperforms the first, because it incorporates knowledge of phone durations. For the proposed approaches to be useful, manual intervention is required at the output of each stage. However, this intervention is less tedious and reduces the time taken to segment each sentence by around 60% compared to the time taken for manual segmentation. The approaches have been successfully tested on 3 different languages, 100 sentences each: Kannada, Tamil and English (the TIMIT database was used to validate the algorithms). In conclusion, a practical solution to the segmentation problem is proposed. Also, since the algorithm is training free, language independent (the ES-SABSF method) and speaker independent, it is useful for developing TTS systems in multiple languages while reducing the segmentation overhead. This method is currently being used in the lab for segmenting long Kannada utterances, spoken by reading a set of 1115 phonetically rich sentences.
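A hedged sketch of the EDML boundary function described in this abstract: for each frame, take the Euclidean distance between the mean log filter-bank vector over a window to its left and the mean over a window to its right; peaks in this curve are candidate boundaries. The window length, peak threshold and the synthetic log filter-bank matrix are assumptions; the actual Bach scale filter bank is not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

def edml_curve(log_fb, win=5):
    """EDML-style curve: distance between mean log filter-bank vectors
    computed over windows to the left and right of each frame."""
    n_frames = log_fb.shape[0]
    curve = np.zeros(n_frames)
    for t in range(win, n_frames - win):
        left_mean = log_fb[t - win:t].mean(axis=0)
        right_mean = log_fb[t:t + win].mean(axis=0)
        curve[t] = np.linalg.norm(right_mean - left_mean)
    return curve

# Toy input: two spectrally distinct steady regions should yield one clear peak.
rng = np.random.default_rng(1)
log_fb = np.vstack([rng.normal(0.0, 0.1, size=(50, 20)),
                    rng.normal(2.0, 0.1, size=(50, 20))])

curve = edml_curve(log_fb)
boundaries, _ = find_peaks(curve, height=0.5 * curve.max())
print(boundaries)  # expected near frame 50, where the spectrum changes
```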

Automatic speech segmentation with limited data / by D.R. van Niekerk

Van Niekerk, Daniel Rudolph January 2009
The rapid development of corpus-based speech systems such as concatenative synthesis systems for under-resourced languages requires an efficient, consistent and accurate solution with regard to phonetic speech segmentation. Manual development of phonetically annotated corpora is a time-consuming and expensive process which suffers from challenges regarding consistency and reproducibility, while automation of this process has only been satisfactorily demonstrated on large corpora of a select few languages by employing techniques requiring extensive and specialised resources. In this work we considered the problem of phonetic segmentation in the context of developing small prototypical speech synthesis corpora for new under-resourced languages. This was done through an empirical evaluation of existing segmentation techniques on typical speech corpora in three South African languages. In this process, the performance of these techniques was characterised under different data conditions and the efficient application of these techniques was investigated in order to improve the accuracy of the resulting phonetic alignments. We found that the application of baseline speaker-specific Hidden Markov Models results in relatively robust and accurate alignments even under extremely limited data conditions, and demonstrated how such models can be developed and applied efficiently in this context. The result is segmentation of sufficient quality for synthesis applications, with the quality of alignments comparable to manual segmentation efforts in this context. Finally, possibilities for further automated refinement of phonetic alignments were investigated and an efficient corpus development strategy was proposed, with suggestions for further work in this direction. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
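Both this entry and the previous one report alignment quality as the proportion of boundaries falling within a fixed tolerance of manually placed boundaries. The sketch below shows that evaluation in Python; the 20 ms tolerance and the boundary times are illustrative assumptions, not values from this thesis.

```python
import numpy as np

def boundary_accuracy(auto, ref, tol=0.020):
    """Fraction of reference boundaries (in seconds) that have an automatic
    boundary within +/- tol seconds."""
    auto = np.asarray(auto)
    hits = sum(np.any(np.abs(auto - r) <= tol) for r in ref)
    return hits / len(ref)

ref_boundaries = [0.10, 0.25, 0.43, 0.61, 0.80]    # manual boundaries (seconds)
auto_boundaries = [0.11, 0.24, 0.47, 0.60, 0.79]   # automatic boundaries (seconds)
print(f"{boundary_accuracy(auto_boundaries, ref_boundaries):.0%} of boundaries within 20 ms")
```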

Steps towards end-to-end neural speaker diarization / Étapes vers un système neuronal de bout en bout pour la tâche de segmentation et de regroupement en locuteurs

Yin, Ruiqing 26 September 2019
Speaker diarization is the task of determining "who speaks when" in an audio stream that usually contains an unknown amount of speech from an unknown number of speakers. Speaker diarization systems are usually built as the combination of four main stages. First, non-speech regions such as silence, music, and noise are removed by Voice Activity Detection (VAD). Next, speech regions are split into speaker-homogeneous segments by Speaker Change Detection (SCD), later grouped according to the identity of the speaker thanks to unsupervised clustering approaches. Finally, speech turn boundaries and labels are (optionally) refined with a re-segmentation stage.
In this thesis, we propose to address these four stages with neural network approaches. We first formulate both the initial segmentation (voice activity detection and speaker change detection) and the final re-segmentation as a set of sequence labeling problems and then address them with Bidirectional Long Short-Term Memory (Bi-LSTM) networks. In the speech turn clustering stage, we propose to use affinity propagation on top of neural speaker embeddings. Experiments on a broadcast TV dataset show that affinity propagation clustering is more suitable than hierarchical agglomerative clustering when applied to neural speaker embeddings. The LSTM-based segmentation and affinity propagation clustering are also combined and jointly optimized to form a speaker diarization pipeline. Compared to the pipeline with independently optimized modules, the new pipeline brings a significant improvement. In addition, we propose to improve the similarity matrix by bidirectional LSTM and then apply spectral clustering on top of the improved similarity matrix. The proposed system achieves state-of-the-art performance in the CALLHOME telephone conversation dataset. Finally, we formulate sequential clustering as a supervised sequence labeling task and address it with stacked RNNs. To better understand its behavior, the analysis is based on a proposed encoder-decoder architecture. Our proposed systems bring a significant improvement compared with traditional clustering methods on toy examples.
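A minimal sketch of the clustering stage described above: segment-level speaker embeddings (here random vectors standing in for neural embeddings) are grouped with affinity propagation, which does not require the number of speakers in advance. The cosine-similarity affinity and the synthetic embeddings are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Three synthetic "speakers", ten speech-turn embeddings each, around distinct centroids.
centroids = rng.standard_normal((3, 64))
embeddings = np.vstack([c + 0.1 * rng.standard_normal((10, 64)) for c in centroids])

# Affinity propagation over a precomputed similarity matrix; no speaker count needed.
similarity = cosine_similarity(embeddings)
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(similarity)
print(labels)  # turns from the same speaker should share a cluster label
```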

The Slaying of Lady Mondegreen, being a Study of French Tonal Association and Alignment and their Role in Speech Segmentation

Welby, Pauline Susan January 2003
No description available.

Le chunking perceptif de la parole : sur la nature du groupement temporel et son effet sur la mémoire immédiate [Perceptual chunking of speech: on the nature of temporal grouping and its effect on immediate memory]

Gilbert, Annie 03 1900
In numerous behaviors involving the learning and production of sequences, temporal groups emerge spontaneously, created by delays or a lengthening of elements. This chunking has been observed across behaviors of both humans and animals and is taken to reflect a general process of perceptual chunking that conforms to capacity limits of short-term memory. Yet, no research has determined how perceptual chunking applies to speech. We provide a literature review that bears out critical problems which have hampered research on this question. Consideration of these problems motivates a principled demonstration that aims to show how perceptual chunking applies to speech and the effect of this process on immediate memory (or "working memory"). These two themes are presented in separate papers in the format of journal articles.

Paper 1: The perceptual chunking of speech: a demonstration using ERPs. To observe perceptual chunking on-line, we use event-related potentials (ERPs) and refer to the neural component of Closure Positive Shift (CPS), which is known to capture listeners' responses to marks of prosodic groups. The speech stimuli were utterances and sequences of nonsense syllables, which contained intonation phrases marked by pitch, and both phrase-internal and phrase-final temporal groups marked by lengthening. Analyses of CPSs show that, across conditions, listeners specifically perceive speech in terms of chunks marked by lengthening. These lengthening marks, which appear universally in languages, create the same type of chunking as that which emerges in sequence learning by humans and animals. This finding supports the view that listeners chunk speech in temporal groups and that this perceptual chunking operates similarly for speech and non-verbal behaviors. Moreover, the results question reports that relate CPS to intonation phrasing without considering the effects of temporal marks.

Paper 2: Perceptual chunking and its effect on memory in speech processing: ERP and behavioral evidence. We examined how the perceptual chunking of utterances in terms of temporal groups of differing size influences immediate memory of heard speech. To weigh these effects, we used behavioural measures and ERPs, especially the N400 component, which served to evaluate the quality of the memory trace for target lexemes heard in the temporal groups. Variations in the amplitude of the N400 showed a better memory trace for lexemes presented in groups of 3 syllables compared to those in groups of 4 syllables. Response times along with P300 components revealed effects of the position of the chunk in the utterance. This is the first study to demonstrate the perceptual chunking of speech on-line and its effects on immediate memory of heard elements. Taken together, the results suggest that a general perceptual chunking enhances a buffering of sequential information and a processing of speech on a chunk-by-chunk basis.

Music And Speech Analysis Using The 'Bach' Scale Filter-Bank

Ananthakrishnan, G 04 1900
The aim of this thesis is to define a perceptual scale for the ‘Time-Frequency’ analysis of music signals. The equal tempered ‘Bach’ scale is a suitable scale, since it covers most genres of music and the error is equally distributed for each semi-tone. However, it may be necessary to allow a tolerance of around 50 cents, or half the interval of the Bach scale, so that the interval can accommodate other common intonation schemes. The thesis covers the formulation of the Bach scale filter-bank as a time-varying model and makes a comparative study with other commonly used perceptual scales. Two applications for the Bach scale filter-bank are also proposed, namely automated segmentation of speech signals and transcription of the singing voice for query-by-humming applications. Even though this filter-bank is suggested with a motivation from music, it can also be applied to speech. A method for automatically segmenting continuous speech into phonetic units is proposed. The results obtained from the proposed method show around 82% accuracy for the English database and 85% accuracy for the Hindi database. This is an improvement of around 2-3% when the performance is compared with other popular methods in the literature. Interestingly, the Bach scale filters perform better than the filters designed for other common perceptual scales, such as the Mel and Bark scales. ‘Musical transcription’ refers to the process of converting a musical rendering or performance into a set of symbols or notations. A query in a ‘query-by-humming system’ can be made in several ways, such as singing with words, singing with arbitrary syllables, or whistling. Two algorithms are suggested to annotate a query, designed to be fairly robust to these various forms of queries. The first algorithm is a frequency selection based method: it works by selecting the most likely frequency components at any given time instant. The second algorithm works by finding time-connected contours of high energy in the ‘Time-Frequency’ plane of the input signal. The time domain algorithm works better in terms of instantaneous pitch estimates: it results in an error of around 10-15%, while the frequency domain method results in an error of around 12-20%. A song rendered by two different people will have quite a few different properties: their absolute pitches, rates of rendering, timbres based on voice quality, and inaccuracies may all differ. The thesis discusses a method to quantify the distance between two different renderings of musical pieces. The distance function has been evaluated by attempting a search for a particular song in a database of size 315, made up of songs sung by both male and female singers as well as whistled queries. Around 90% of the time, the correct song is found among the top five choices picked. Thus, the Bach scale has been proposed as a suitable scale for representing the perception of music. It has been explored in two applications, namely automated segmentation of speech and transcription of singing voices. Using the transcription obtained, a measure of the distance between renderings of musical pieces has also been suggested.
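As a hedged illustration of the equal tempered spacing behind the Bach scale filter-bank, the sketch below generates semitone-spaced centre frequencies and a band of plus or minus 50 cents around each, the tolerance mentioned in the abstract. The reference frequency and the frequency range are assumptions, and the actual filter shapes are not reproduced.

```python
import numpy as np

def bach_scale_centres(f_min=55.0, f_max=8000.0):
    """Centre frequencies spaced one equal tempered semitone (factor 2**(1/12)) apart."""
    n_semitones = int(np.floor(12 * np.log2(f_max / f_min)))
    return f_min * 2.0 ** (np.arange(n_semitones + 1) / 12.0)

centres = bach_scale_centres()
half_band = 2.0 ** (50 / 1200)                  # 50 cents on either side of each centre
bands = np.column_stack([centres / half_band, centres * half_band])
print(len(centres), "filters; first band (Hz):", np.round(bands[0], 2))
```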
