
Learning pronunciation variation: A data-driven approach to rule-based lexicon adaptation for automatic speech recognition

Amdal, Ingunn January 2002 (has links)
To achieve a robust system, the variation seen across different speaking styles must be handled. An investigation of standard automatic speech recognition techniques for different speaking styles showed that lexical modelling using general-purpose variants gave small improvements, but the errors differed from those obtained using only one canonical pronunciation per word. Modelling the variation with the acoustic models (using context dependency and/or speaker-dependent adaptation) gave a significant improvement, but the resulting performance for non-native and spontaneous speech was still far from that for read speech. In this dissertation a complete data-driven approach to rule-based lexicon adaptation is presented, in which the effect of the acoustic models is incorporated in the rule pruning metric. Reference and alternative transcriptions were aligned by dynamic programming, using a data-driven method to derive the phone-to-phone substitution costs. The costs were based on the statistical co-occurrence of phones, termed association strength. Rules for pronunciation variation were derived from this alignment and pruned using a new metric based on acoustic log likelihood. Well-trained acoustic models are capable of modelling much of the variation seen, and using the acoustic log likelihood to assess the pronunciation rules prevents the lexical modelling from adding variation already accounted for, as shown for direct pronunciation variation modelling. For the non-native task, data-driven pronunciation modelling by learning pronunciation rules gave a significant performance gain, and acoustic log likelihood rule pruning performed better than rule probability pruning. For spontaneous dictation the pronunciation variation experiments did not improve performance. The answer to how to better model the variation in spontaneous speech seems to lie in neither the acoustic nor the lexical modelling. The main differences between read and spontaneous speech are the grammar used and disfluencies such as restarts and long pauses; the language model may thus be the best starting point for further research on this speaking style.
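To make the alignment step described above concrete, here is a minimal sketch of dynamic-programming alignment between a canonical and an observed phone string, with substitution costs derived from phone co-occurrence counts. The cost definition, the phone notation, and the toy data are hypothetical stand-ins for the association-strength measure and the corpora used in the thesis:

    import math
    from collections import Counter

    def association_costs(pair_counts):
        # Hypothetical stand-in for the association-strength measure:
        # frequently co-occurring phone pairs get a low substitution cost
        # (negative log relative frequency of the pair).
        total = sum(pair_counts.values())
        return {pair: -math.log(n / total) for pair, n in pair_counts.items()}

    def align(reference, alternative, sub_cost, indel=5.0):
        # Edit-distance dynamic programming with data-driven substitution
        # costs; returns aligned (reference, alternative) phone pairs,
        # with "-" marking insertions and deletions.
        n, m = len(reference), len(alternative)
        dist = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dist[i][0] = i * indel
        for j in range(1, m + 1):
            dist[0][j] = j * indel
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                pair = (reference[i - 1], alternative[j - 1])
                sub = 0.0 if pair[0] == pair[1] else sub_cost.get(pair, indel)
                dist[i][j] = min(dist[i - 1][j - 1] + sub,
                                 dist[i - 1][j] + indel,
                                 dist[i][j - 1] + indel)
        pairs, i, j = [], n, m
        while i > 0 and j > 0:  # backtrace
            pair = (reference[i - 1], alternative[j - 1])
            sub = 0.0 if pair[0] == pair[1] else sub_cost.get(pair, indel)
            if dist[i][j] == dist[i - 1][j - 1] + sub:
                pairs.append(pair); i -= 1; j -= 1
            elif dist[i][j] == dist[i - 1][j] + indel:
                pairs.append((reference[i - 1], "-")); i -= 1
            else:
                pairs.append(("-", alternative[j - 1])); j -= 1
        pairs.extend((reference[k - 1], "-") for k in range(i, 0, -1))
        pairs.extend(("-", alternative[k - 1]) for k in range(j, 0, -1))
        return list(reversed(pairs))

    # Toy co-occurrence counts (made up) and a toy alignment.
    costs = association_costs(Counter({("t", "d"): 40, ("ih", "ax"): 25}))
    print(align(["w", "ih", "n", "t", "er"], ["w", "ax", "n", "d", "er"], costs))

Candidate rewrite rules would then be read off the substitution pairs in the returned alignment; in the thesis such rules are subsequently pruned with the acoustic log-likelihood metric.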

Efficient Methods for Automatic Speech Recognition

Seward, Alexander January 2003 (has links)
This thesis presents work in the area of automatic speech recognition (ASR). It focuses on methods for increasing the efficiency of speech recognition systems and on techniques for efficient representation of different types of knowledge in the decoding process. In this work, several decoding algorithms and recognition systems have been developed, aimed at various recognition tasks. The thesis presents the KTH large-vocabulary speech recognition system, developed for online (live) recognition with large vocabularies and complex language models. The system utilizes weighted transducer theory for efficient representation of different knowledge sources, with the purpose of optimizing the recognition process. A search algorithm for efficient processing of hidden Markov models (HMMs) is presented as an alternative to the classical Viterbi algorithm for fast computation of shortest paths in HMMs. It is part of a larger decoding strategy aimed at reducing the overall computational complexity of ASR, in which all HMM computations are completely decoupled from the rest of the decoding process. This enables the use of larger vocabularies and more complex language models without an increase in HMM-related computation. Ace is another speech recognition system developed within this work: a platform aimed at facilitating the development of speech recognizers and new decoding methods. A real-time system for low-latency online speech transcription is also presented. It was developed within a project whose goal was to improve the possibilities for hard-of-hearing people to use conventional telephony by providing speech-synchronized multimodal feedback, and it addresses several additional requirements implied by this special recognition task.
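For reference, the classical Viterbi recursion to which the thesis's search algorithm is an alternative can be stated compactly. This is a generic textbook sketch in log space, not the decoupled algorithm developed in the thesis:

    def viterbi(obs_loglik, log_trans, log_init):
        # obs_loglik[t][s] = log P(o_t | state s); log_trans[r][s] and
        # log_init[s] are log transition / initial probabilities.
        T, S = len(obs_loglik), len(log_init)
        delta = [log_init[s] + obs_loglik[0][s] for s in range(S)]
        backptr = []
        for t in range(1, T):
            new_delta, psi = [], []
            for s in range(S):
                r = max(range(S), key=lambda r: delta[r] + log_trans[r][s])
                new_delta.append(delta[r] + log_trans[r][s] + obs_loglik[t][s])
                psi.append(r)
            delta = new_delta
            backptr.append(psi)
        path = [max(range(S), key=lambda s: delta[s])]  # best final state
        for psi in reversed(backptr):                   # trace back
            path.append(psi[path[-1]])
        return list(reversed(path))

The thesis's point is that per-HMM computations of this kind can be decoupled from the lexical and language-model search, so that growing the vocabulary or the language model does not grow the HMM-related workload.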

AURORA-2J: An Evaluation Framework for Japanese Noisy Speech Recognition

ENDO, Toshiki, FUJIMOTO, Masakiyo, MIYAJIMA, Chiyomi, MIZUMACHI, Mitsunori, SASOU, Akira, NISHIURA, Takanobu, KITAOKA, Norihide, KUROIWA, Shingo, YAMADA, Takeshi, YAMAMOTO, Kazumasa, TAKEDA, Kazuya, NAKAMURA, Satoshi 01 March 2005 (has links)
No description available.

CENSREC-3: An Evaluation Framework for Japanese Speech Recognition in Real Car-Driving Environments

NAKAMURA, Satoshi, TAKEDA, Kazuya, FUJIMOTO, Masakiyo 01 November 2006 (has links)
No description available.

Phonemic variability and confusability in pronunciation modeling for automatic speech recognition

Karanasou, Panagiota 11 June 2013 (has links)
This thesis addresses the problems of phonemic variability and confusability from the pronunciation modeling perspective for an automatic speech recognition (ASR) system. Several research directions are investigated. First, automatic grapheme-to-phoneme (g2p) and phoneme-to-phoneme (p2p) converters are developed that generate alternative pronunciations for in-vocabulary as well as out-of-vocabulary (OOV) terms. Since adding alternative pronunciations may introduce homophones (or close homophones), the confusability of the system increases. A novel measure of this confusability is proposed to analyze it and study its relation to ASR performance. This pronunciation confusability is higher if pronunciation probabilities are not provided and can severely degrade ASR performance; it should thus be taken into account during pronunciation generation. Discriminative training approaches are then investigated to train the weights of a phoneme confusion model that allows alternative ways of pronouncing a term, counterbalancing the phonemic confusability problem. The objective function to optimize is chosen to correspond to the performance measure of the particular task. Two tasks are investigated in this thesis: the ASR task and the Keyword Spotting (KWS) task. For ASR, an objective that minimizes the phoneme error rate is adopted; for KWS, the Figure of Merit (FOM), a KWS performance measure, is directly maximized.
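As an illustration of the kind of confusability the abstract refers to, the sketch below computes a toy indicator: the fraction of pronunciation variants in a lexicon that collide with a variant of a different word. Both the measure and the data are hypothetical; the thesis proposes its own, more refined confusability measure:

    from collections import defaultdict

    def homophone_ratio(lexicon):
        # lexicon maps a word to a list of pronunciations (phone tuples).
        # Count variants whose pronunciation is shared by another word.
        by_pron = defaultdict(set)
        for word, prons in lexicon.items():
            for pron in prons:
                by_pron[pron].add(word)
        variants = sum(len(prons) for prons in lexicon.values())
        confusable = sum(len(words) for words in by_pron.values()
                         if len(words) > 1)
        return confusable / variants

    lex = {"read": [("r", "iy", "d"), ("r", "eh", "d")],
           "red":  [("r", "eh", "d")]}
    print(homophone_ratio(lex))  # 2 of 3 variants collide -> ~0.67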

A Study of the Automatic Speech Recognition Process and Speaker Adaptation

Stokes-Rees, Ian James January 2000 (has links)
This thesis considers the entire automatic speech recognition process and presents a standardised approach to LVCSR experimentation with HMMs. It also discusses various approaches to speaker adaptation, such as MLLR and multiscale, and presents experimental results for cross-task speaker adaptation. An analysis of training parameters and data sufficiency for reasonable system performance estimates is also included. It is found that supervised Maximum Likelihood Linear Regression (MLLR) adaptation can yield a 6% absolute reduction in word error rate given only one minute of adaptation data, compared with an unadapted model set trained on a different task: the unadapted system performed at 24% WER and the adapted system at 18% WER. This is achieved with only 4 to 7 adaptation classes per speaker, generated from a regression tree.
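MLLR, as used above, adapts Gaussian means with a shared affine transform per regression class, mu' = A mu + b. Below is a minimal sketch of applying (not estimating) such transforms; the function name, shapes, and toy data are illustrative assumptions, and the maximum-likelihood estimation of A and b from adaptation data is omitted:

    import numpy as np

    def apply_mllr_means(means, transforms, reg_class):
        # means: (G, D) Gaussian mean vectors; transforms: {class_id: (A, b)}
        # with A of shape (D, D) and b of shape (D,); reg_class: length-G
        # list of class ids. Adapted mean of Gaussian g: A_c @ mu_g + b_c.
        adapted = np.empty_like(means)
        for g, mu in enumerate(means):
            A, b = transforms[reg_class[g]]
            adapted[g] = A @ mu + b
        return adapted

    # Toy usage: two Gaussians in one regression class, identity-plus-shift.
    means = np.array([[0.0, 1.0], [2.0, 3.0]])
    transforms = {0: (np.eye(2), np.array([0.5, -0.5]))}
    print(apply_mllr_means(means, transforms, [0, 0]))

With 4 to 7 regression classes per speaker, as in the experiments above, only a handful of (A, b) pairs must be estimated, which is why one minute of adaptation data can suffice.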

A Dynamic Vocabulary Speech Recognizer Using Real-Time, Associative-Based Learning

Purdy, Trevor January 2006 (has links)
Conventional speech recognizers employ a training phase during which many of their parameters are configured, including vocabulary selection, feature selection, and the tailoring of the decision mechanism to these selections. After this stage, during normal operation, such recognizers do not significantly alter any of these parameters. In contrast, this work draws heavily on high-level human thought patterns and speech perception to outline a set of precepts that eliminate the training phase, performing all of its tasks during normal operation instead. A feature space model is discussed to establish a set of necessary and sufficient conditions to guide real-time feature selection. Detailed implementation and preliminary results are also discussed. These results indicate that the approach increases speech recognizer adaptability, accommodating changes such as varying vocabularies, class migration, and new speakers, while retaining competitive recognition rates in controlled environments.

Auditory Front-Ends for Noise-Robust Automatic Speech Recognition

Yeh, Ja-Zang 25 August 2010 (has links)
The human auditory system is much more noise-robust than any state-of-the-art automatic speech recognition (ASR) system, so it is expected that the noise-robustness of speech features can be improved by employing a feature extraction procedure modelled on human audition. In this thesis, we investigate modifying the commonly used feature extraction process for automatic speech recognition systems. A novel frequency masking curve, based on modeling the basilar membrane as a cascade of damped simple harmonic oscillators, replaces the critical-band masking curve in computing the masking threshold. We mathematically analyze the coupled motion of the oscillator system (basilar membrane) when driven by short-time stationary (speech) signals. Based on this analysis, we derive the relation between the amplitudes of neighboring oscillators and accordingly insert a masking module into the front-end signal processing stage to modify the speech spectrum. We evaluate the proposed method on the Aurora 2.0 noisy-digit speech database. When combined with the commonly used cepstral mean subtraction (CMS) post-processing, the proposed auditory front-end achieves a significant improvement: the correlational masking-effect curve combined with CMS yields a relative improvement of 25.9% over the baseline, and applying the method iteratively raises the relative improvement from 25.9% to 30.3%.
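The CMS post-processing step mentioned above is simple enough to sketch directly; this minimal version assumes per-utterance statistics and a generic (frames, coefficients) feature matrix from any front-end:

    import numpy as np

    def cepstral_mean_subtraction(cepstra):
        # Subtract the per-utterance mean of each cepstral coefficient;
        # this cancels stationary convolutional (channel) effects.
        return cepstra - cepstra.mean(axis=0, keepdims=True)

    # Usage on a hypothetical (frames, coeffs) feature matrix:
    feats = np.random.randn(300, 13)
    print(cepstral_mean_subtraction(feats).mean(axis=0))  # ~0 per coefficient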
