211

The Acquisition of Vowel Normalization during Early Infancy: Theory and Computational Framework

Plummer, Andrew R. 02 June 2014 (has links)
No description available.
212

Examining Pupillometric Measures of Cognitive Effort Associated with Speaker Variability During Spoken Word Recognition

Douds, Lillian R. 01 May 2017 (has links)
No description available.
213

Korean-American Literature as Autobiographical Metafiction: Focusing on the Protagonist’s “Writer” Identity in East Goes West, Dictee, and Native Speaker

Choi, Ha Young 22 September 2008 (has links)
No description available.
214

Balancing the Legislative Agenda: Scheduling in the United States House of Representatives

Hasecke, Edward Brooke January 2002 (has links)
No description available.
215

Insider at border: interactions of technology, language, culture, and gender in computer-mediated communication by Korean female learners of English

Baek, Mi-Kyung 09 March 2005 (has links)
No description available.
216

Understanding the Effects of Smart-Speaker Based Surveys on Panelist Experience in Immersive Consumer Testing

Soldavini, Ashley M. 22 July 2022 (has links)
No description available.
217

Spanish Native-Speaker Perception of Accentedness in Learner Speech

Moranski, Kara January 2012 (has links)
Building upon current research in native-speaker (NS) perception of L2 learner phonology (Zielinski, 2008; Derwing & Munro, 2009), the present investigation analyzed multiple dimensions of NS speech perception in order to achieve a more complete understanding of the specific linguistic elements and attitudinal variables that contribute to perceptions of accent in learner speech. In this mixed-methods study, Spanish monolinguals (n = 18) provided information regarding their views of L1 American English (AE) speakers learning Spanish and also evaluated the extemporaneous production of L2 learners from this same population. The evaluators' preconceived attitudinal notions of L1 AE speakers learning Spanish negatively correlated with numerical accentedness ratings for the speech samples, indicating that evaluators with more positive perceptions of the learners rated their speech as less accented. Following initial numerical ratings, evaluators provided detailed commentary on the individual phonological elements from each utterance that they perceived as "nonnative." Results show that differences in the relative salience of the nonnative segmental productions correspond with certain phonetic and phonemic processes occurring within the sounds, such as aspiration, spirantization and lateralization. / Spanish
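The reported negative correlation between evaluators' prior attitudes and their accentedness ratings is a standard bivariate correlation analysis. A minimal sketch of that kind of computation on synthetic stand-in data (the variable names and values below are illustrative, not the study's data):

```python
# Illustrative only: correlate prior attitude scores with accentedness ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
attitude = rng.uniform(1, 5, size=18)                  # synthetic prior-attitude scores (18 evaluators)
accent = 5.5 - attitude + rng.normal(0, 0.5, size=18)  # simulated inverse relationship plus noise

r, p = pearsonr(attitude, accent)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # negative r mirrors the direction of the reported finding
```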
218

Personalized Voice Activated Grasping System for a Robotic Exoskeleton Glove

Guo, Yunfei 05 January 2021 (has links)
Controlling an exoskeleton glove through a highly efficient human-machine interface (HMI) while accurately applying force to each joint remains an active research problem. This paper proposes a fast, secure, accurate, and portable solution for controlling an exoskeleton glove. This state-of-the-art solution includes both hardware and software components. The exoskeleton glove uses a modified series elastic actuator (SEA) to achieve accurate force sensing. A portable electronic system is designed around the SEA to provide force measurement, force application, slip detection, cloud computing, and a power supply for over 2 hours of continuous usage. A voice-control-based HMI, referred to as the integrated trigger-word-configurable voice activation and speaker verification system (CVASV), is integrated into the robotic exoskeleton glove to perform high-level control. The CVASV HMI is designed for embedded systems with limited computing power and performs voice activation and speaker verification simultaneously. The system uses MobileNet as the feature extractor to reduce computational cost. The HMI is tuned for better performance in grasping everyday objects. This study focuses on applying the CVASV HMI to the SEA-based exoskeleton glove to perform a stable grasp with force control and slip detection. This research found that using MobileNet as the speaker verification neural network increases processing speed while maintaining comparable verification accuracy. / Master of Science / The robotic exoskeleton glove used in this research is designed to help patients with hand disabilities. This thesis proposes a voice-activated grasping system to control the exoskeleton glove. The user speaks a self-defined keyword to activate the exoskeleton and then controls it by voice. The voice command system can distinguish between different users' voices, thereby improving the safety of the glove control. A smartphone processes the voice commands and sends them to an onboard computer on the exoskeleton glove, which then accurately applies force to each fingertip using a force-feedback actuator. This study focused on designing a state-of-the-art human-machine interface to control an exoskeleton glove and perform an accurate and stable grasp.
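A rough idea of how a MobileNet backbone can act as the feature extractor in a speaker-verification check, as the abstract describes. This is an illustrative sketch, not the thesis's CVASV implementation: the spectrogram input shape, the pooling step, and the similarity threshold are all assumptions.

```python
# Sketch: MobileNet features + cosine-similarity speaker verification (assumed design).
import torch
import torch.nn.functional as F
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2().features.eval()  # untrained backbone, for illustration only

def embed(spectrogram: torch.Tensor) -> torch.Tensor:
    """Map a (3, H, W) spectrogram 'image' to a fixed-length speaker embedding."""
    with torch.no_grad():
        fmap = backbone(spectrogram.unsqueeze(0))          # (1, 1280, h, w) feature map
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1)   # (1, 1280) embedding

def verify(enrolled: torch.Tensor, attempt: torch.Tensor, threshold: float = 0.7) -> bool:
    """Accept the speaker if the two embeddings are similar enough (threshold assumed)."""
    return F.cosine_similarity(enrolled, attempt).item() >= threshold

# Dummy spectrograms stand in for real enrollment and test utterances.
enrolled = embed(torch.randn(3, 96, 96))
attempt = embed(torch.randn(3, 96, 96))
print("verified:", verify(enrolled, attempt))
```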
219

Speaker Identification and Verification Using Line Spectral Frequencies

Raman, Pujita 17 June 2015 (has links)
State-of-the-art speaker identification and verification (SIV) systems provide near-perfect performance under clean conditions. However, their performance deteriorates in the presence of background noise. Many feature compensation, model compensation, and signal enhancement techniques have been proposed to improve the noise robustness of SIV systems. Most of these techniques require extensive training, are computationally expensive, or make assumptions about the noise characteristics. There has not been much focus on analyzing the relative importance, or speaker-discriminative power, of different speech zones, particularly under noisy conditions. In this work, an automatic, text-independent speaker identification (SI) system and a speaker verification (SV) system are proposed using Line Spectral Frequency (LSF) features. The performance of the proposed SI and SV systems is evaluated under various types of background noise. A score-level fusion technique is implemented to extract complementary information from static and dynamic LSF features. The proposed score-level fusion based SI and SV systems are found to be more robust under noisy conditions. In addition, we investigate the speaker-discriminative power of different speech zones such as vowels, non-vowels, and transitions. Rapidly varying regions of speech such as consonant-vowel transitions are found to be most speaker-discriminative in high-SNR conditions. Steady, high-energy vowel regions are robust against noise and are hence most speaker-discriminative in low-SNR conditions. We show that selectively utilizing features from a combination of transition and steady vowel zones further improves the performance of the score-level fusion based SI and SV systems under noisy conditions. / Master of Science
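Score-level fusion itself is a small computation: each subsystem (static LSF and dynamic LSF) produces a score per enrolled speaker, and the fused scores drive the identification decision. A minimal sketch under an assumed fusion weight and made-up scores (none of these values come from the thesis):

```python
# Sketch of weighted-sum score-level fusion for speaker identification.
import numpy as np

def fuse_scores(static_scores: np.ndarray, dynamic_scores: np.ndarray,
                weight: float = 0.6) -> np.ndarray:
    """Weighted-sum fusion of two score vectors (one entry per enrolled speaker)."""
    # Normalize each subsystem's scores so neither dominates purely by scale.
    s = (static_scores - static_scores.mean()) / static_scores.std()
    d = (dynamic_scores - dynamic_scores.mean()) / dynamic_scores.std()
    return weight * s + (1.0 - weight) * d

# Dummy per-speaker scores for a 5-speaker enrollment set.
static_scores = np.array([1.2, -0.3, 0.8, 2.1, -1.0])
dynamic_scores = np.array([0.9, 0.1, 1.5, 1.8, -0.7])

fused = fuse_scores(static_scores, dynamic_scores)
print("identified speaker index:", int(np.argmax(fused)))
```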
220

Automatic Phoneme Recognition with Segmental Hidden Markov Models

Baghdasaryan, Areg Gagik 10 March 2010 (has links)
A speaker-independent continuous speech phoneme recognition and segmentation system is presented. We discuss the training and recognition phases of the phoneme recognition system and give a detailed description of the integrated elements. The Hidden Markov Model (HMM) based phoneme models are trained using the Baum-Welch re-estimation procedure. Recognition and segmentation of the phonemes in continuous speech is performed by a Segmental Viterbi Search on a Segmental Ergodic HMM over the phoneme states. We describe in detail the three phases of the joint phoneme recognition and segmentation system. First, the extraction of the Mel-Frequency Cepstral Coefficients (MFCC) and the corresponding Delta and Delta Log Power coefficients is described. Second, we describe the operation of the Baum-Welch re-estimation procedure for training the phoneme HMM models, including the K-Means and Expectation-Maximization (EM) clustering algorithms used to initialize the Baum-Welch algorithm. Additionally, we describe the structural framework of the ergodic Segmental HMM and its recognition procedure for phoneme segmentation and recognition. We include test and simulation results for each of the individual systems integrated into the phoneme recognition system, and finally for the phoneme recognition and segmentation system as a whole. / Master of Science
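For context on the search step, a minimal frame-level Viterbi decoder over a fully connected (ergodic) HMM is sketched below. The thesis uses a segmental variant, so this is only the standard algorithm that a segmental search extends; the toy model sizes and likelihoods are chosen purely for illustration.

```python
# Sketch: log-space Viterbi decoding over an ergodic HMM (toy sizes, synthetic scores).
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (S,) initial log-probs, log_A: (S, S) transition log-probs,
    log_B: (T, S) per-frame state log-likelihoods. Returns the best state sequence."""
    T, S = log_B.shape
    delta = np.full((T, S), -np.inf)   # best path score ending in each state
    back = np.zeros((T, S), dtype=int)  # backpointers for path recovery
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (S, S): previous state -> current state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 3 phoneme-like states, 6 frames of synthetic log-likelihoods.
rng = np.random.default_rng(1)
S, T = 3, 6
log_pi = np.log(np.full(S, 1.0 / S))
log_A = np.log(np.full((S, S), 1.0 / S))  # ergodic: every state can reach every state
log_B = rng.normal(size=(T, S))
print("state sequence:", viterbi(log_pi, log_A, log_B))
```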
