  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Producing verbal play in English : a contrastive study of advanced German learners of English and English native speakers /

Baudy, Christian Marino. January 2008 (has links)
Univ., Diss.--Hamburg, 2007.
72

Producing verbal play in English a contrastive study of advanced German learners of English and English native speakers

Baudy, Christian Marino January 2007 (has links)
Zugl.: Hamburg, Univ., Diss., 2007
73

Englisch als Lingua Franca in der wissenschaftlichen Lehre Charakteristika und Herausforderungen englischsprachiger Masterstudiengänge in Deutschland /

Soltau, Anja. Unknown Date (has links) (PDF)
Hamburg, Universität, Diss., 2008.
74

Das Amt des Speaker of the House of Representatives im amerikanischen Regierungssystem /

Semmler, Jörg. January 2002 (has links) (PDF)
Univ., Diss.--Göttingen, 2001. / Text in German, examples in English.
75

Real world approaches for multilingual and non-native speech recognition

Raab, Martin January 2010 (has links)
Zugl.: Erlangen, Nürnberg, Univ., Diss., 2010
76

The processing of lexical semantic and syntactic information in spoken sentences : neuroimaging and behavioral studies of native and non-native speakers /

Rüschemeyer, Shirley-Ann. January 2005 (has links)
Zugl.: Leipzig, Univ., Diss., 2005.
77

Parole de locuteur : performance et confiance en identification biométrique vocale / Speaker in speech : performance and confidence in voice biometric identification

Kahn, Juliette 19 December 2011 (has links)
This thesis explores the biometric use of speech, which has many applications (security, smart environments, forensics, territorial surveillance, authentication of electronic transactions). Speech is shaped by many constraints tied to the speaker's geographical, social, and cultural origins, but also to his or her performative goals; the speaker can be regarded as one factor of variation in speech among others. In this work we present elements of an answer to two questions: (1) Are all speech samples from a given speaker equally useful for recognising that speaker? (2) How are the different sources of variation that directly or indirectly convey the speaker's identity structured?

We first build a protocol to assess the human ability to discriminate speakers from speech samples, using data from the NIST-HASR 2010 campaign. The task proves difficult for our listeners, whether naive or experienced. In this setting, we show that neither (near-)unanimity among listeners nor listeners' self-assessed confidence guarantees the correctness of the submitted answer.

We then quantify the influence of the choice of speech sample on the performance of automatic systems, using two databases (NIST and BREF) and two speaker-recognition systems, ALIZE/SpkDet (LIA, a UBM-GMM system) and Idento (SRI, an i-vector system). Both systems show large differences in performance, measured with a rate of relative variation around the mean EER, Vr (on NIST, Vr = 1.41 for Idento and Vr = 1.47 for ALIZE/SpkDet; on BREF, Vr = 3.11), depending on which training file is used for each speaker. These very large variations show the sensitivity of automatic systems to the choice of speech sample, a sensitivity that must be measured and reduced to make such systems more reliable.

To explain the importance of the sample choice, we look for the cues that best distinguish the speakers in our corpora by measuring the effect of the Speaker factor on the variance of various acoustic features (η²). F0 is strongly speaker-dependent, independently of the vowel, and some phoneme classes are more speaker-discriminative than others: nasal consonants, fricatives, nasal vowels, and mid-close to open oral vowels.

This work is a first step towards a more precise account of what the speaker is, for human perception as well as for automatic systems. Having shown that there is a cepstral difference that leads to more or less effective speaker models, it remains to understand how to link the speaker to speech production. Finally, we wish to explore in more detail the influence of language on speaker recognition: although our results indicate that the same phoneme categories carry the most speaker information in both American English and French, this point remains to be confirmed and evaluated for other languages.
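The η² effect size used in the study above to rank speaker-discriminative cues is a one-way variance decomposition: the share of a feature's total variance explained by the between-speaker differences. A minimal sketch in Python, with invented toy F0 values rather than data from the thesis:

```python
import numpy as np

def eta_squared(values, groups):
    """Proportion of the total variance of `values` (e.g. F0 in Hz)
    explained by the grouping factor `groups` (e.g. speaker identity):
    eta^2 = SS_between / SS_total."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(
        (groups == g).sum() * (values[groups == g].mean() - grand_mean) ** 2
        for g in np.unique(groups)
    )
    return ss_between / ss_total

# Toy data: F0 measurements (Hz) from two hypothetical speakers.
f0 = [110, 115, 112, 108, 210, 205, 208, 212]
spk = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(eta_squared(f0, spk), 3))  # close to 1: F0 is highly speaker-dependent here
```

A value near 1 means the factor (here, the speaker) accounts for almost all of the feature's variance; a value near 0 means the cue carries little speaker information.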
78

Speech segmentation and speaker diarisation for transcription and translation

Sinclair, Mark January 2016 (has links)
This dissertation outlines work related to Speech Segmentation – segmenting an audio recording into regions of speech and non-speech, and Speaker Diarization – further segmenting those regions into those pertaining to homogeneous speakers. Knowing not only what was said but also who said it and when, has many useful applications. As well as providing a richer level of transcription for speech, we will show how such knowledge can improve Automatic Speech Recognition (ASR) system performance and can also benefit downstream Natural Language Processing (NLP) tasks such as machine translation and punctuation restoration. While segmentation and diarization may appear to be relatively simple tasks to describe, in practice we find that they are very challenging and are, in general, ill-defined problems. Therefore, we first provide a formalisation of each of the problems as the sub-division of speech within acoustic space and time. Here, we see that the task can become very difficult when we want to partition this domain into our target classes of speakers, whilst avoiding other classes that reside in the same space, such as phonemes. We present a theoretical framework for describing and discussing the tasks as well as introducing existing state-of-the-art methods and research. Current Speaker Diarization systems are notoriously sensitive to hyper-parameters and lack robustness across datasets. Therefore, we present a method which uses a series of oracle experiments to expose the limitations of current systems and to which system components these limitations can be attributed. We also demonstrate how Diarization Error Rate (DER), the dominant error metric in the literature, is not a comprehensive or reliable indicator of overall performance or of error propagation to subsequent downstream tasks. These results inform our subsequent research. We find that, as a precursor to Speaker Diarization, the task of Speech Segmentation is a crucial first step in the system chain.
Current methods typically do not account for the inherent structure of spoken discourse. As such, we explored a novel method which exploits an utterance-duration prior in order to better model the segment distribution of speech. We show how this method improves not only segmentation, but also the performance of subsequent speech recognition, machine translation and speaker diarization systems. Typical ASR transcriptions do not include punctuation, and the task of enriching transcriptions with this information is known as 'punctuation restoration'. The benefit is not only improved readability but also better compatibility with NLP systems that expect sentence-like units, such as in conventional machine translation. We show how segmentation and diarization are related tasks that are able to contribute acoustic information that complements existing linguistically-based punctuation approaches. There is a growing demand for speech technology applications in the broadcast media domain. This domain presents many new challenges, including diverse noise and recording conditions. We show that the capacity of existing GMM-HMM based speech segmentation systems is limited for such scenarios and present a Deep Neural Network (DNN) based method which offers a more robust speech segmentation method, resulting in improved speech recognition performance for a television broadcast dataset. Ultimately, we are able to show that speech segmentation is an inherently ill-defined problem for which the solution is highly dependent on the downstream task it is intended for.
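As a point of contrast with the GMM-HMM and DNN segmenters discussed in the abstract above, the baseline notion of speech/non-speech segmentation can be sketched as a simple short-time-energy detector. This toy illustration is not the dissertation's method; the frame length and threshold ratio are invented defaults:

```python
import numpy as np

def energy_vad(signal, frame_len=400, threshold_ratio=0.1):
    """Label each fixed-length frame as speech (True) or non-speech (False)
    by comparing its mean short-time energy to a fraction of the maximum
    frame energy in the recording. A crude baseline, not robust to noise."""
    n = len(signal) // frame_len
    frames = np.reshape(signal[: n * frame_len], (n, frame_len))
    energy = (frames ** 2).mean(axis=1)
    threshold = threshold_ratio * energy.max()
    return energy > threshold

# Synthetic "recording": silence, a tone standing in for speech, silence.
sig = np.concatenate([np.zeros(800),
                      np.sin(2 * np.pi * 10 * np.arange(800) / 800),
                      np.zeros(800)])
print(energy_vad(sig).tolist())  # [False, False, True, True, False, False]
```

Real systems replace the energy statistic with a learned classifier precisely because this kind of fixed threshold fails under the diverse noise and recording conditions the abstract describes.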
79

Speaker Prototyping Design

Lathe, Andrew 01 December 2020 (has links)
Audio design is a prominent industry in today's world, with an extremely large market that includes leaders such as Bose, Harman International, and Sennheiser. This project is designed to explore the processes necessary to create a new product in this market. The end goal is a functioning, high-quality set of speakers that proves out various concepts of design and prototyping. The steps in this project cover the entire design process, from the initial choice of product to a finished prototype. These include the selection of outsourced components such as drivers and the necessary connectors. The design stage includes any design work needed to create the enclosure and any electronics. Production is controlled by shipping dates and any potential issues that lie within the chosen production methods. The final product is tested for its response. In industry, the prototyping process is usually carried out by several departments, each with deep expertise in its respective field.
80

A performance measurement of a Speaker Verification system based on a variance in data collection for Gaussian Mixture Model and Universal Background Model

Bekli, Zeid, Ouda, William January 2018 (has links)
Voice recognition has become a more focused and researched field in the last century, and new techniques to identify speech have been introduced. A part of voice recognition is speaker verification, which is divided into a front-end and a back-end. The first component is the front-end, or feature extraction, where techniques such as Mel-Frequency Cepstrum Coefficients (MFCC) are used to extract the speaker-specific features of a speech signal; MFCC is widely used because it is based on the known variations of the human ear's critical frequency bandwidth. The second component is the back-end, which handles speaker modelling. The back-end is based on the Gaussian Mixture Model (GMM) and Gaussian Mixture Model-Universal Background Model (GMM-UBM) methods for enrollment and verification of the specific speaker. In addition, normalization techniques such as Cepstral Mean Subtraction (CMS) and feature warping are used for robustness against noise and distortion. In this paper, we build a speaker verification system, experiment with varying amounts of training data for the true speaker model, and evaluate the system's performance. To further investigate the security of a speaker verification system, two methods (GMM and GMM-UBM) are compared to determine which is more secure depending on the amount of training data available. This research therefore contributes to the questions of how much data is really necessary for a secure system in which the False Positive rate is as close to zero as possible, how the amount of training data affects the False Negative (FN) rate, and how this differs between GMM and GMM-UBM. The results show that an increase in speaker-specific training data increases the performance of the system.
However, too much training data proved unnecessary, because the performance of the system eventually reaches its highest point (in this case around 48 minutes of data), and the results also show that the GMM-UBM models trained on 48 to 60 minutes of data outperformed the GMM models.
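Cepstral Mean Subtraction, mentioned above as one of the normalization steps, amounts to removing the per-utterance mean of each cepstral coefficient track: a stationary convolutional channel shows up as a constant offset in the cepstral domain, so subtracting the mean cancels it. A minimal sketch, with an invented stand-in matrix rather than the authors' actual MFCC pipeline:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Subtract the utterance-level mean from each cepstral coefficient.

    cepstra: array of shape (num_frames, num_coeffs), e.g. MFCC vectors.
    Returns an array of the same shape with zero-mean coefficient tracks,
    removing any constant (channel) offset.
    """
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# A zero-mean "clean" utterance plus a constant channel offset:
# CMS recovers the clean part regardless of the offset.
clean = np.array([[1.0, -2.0], [-1.0, 2.0]])
channel = np.array([0.5, 3.0])
normalized = cepstral_mean_subtraction(clean + channel)
print(np.allclose(normalized, clean))  # True: the channel offset is removed
```

Feature warping, the other normalization the abstract names, is more involved (it maps each coefficient's short-term distribution onto a target distribution) and is not sketched here.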
