121

Paralinguistic event detection in children's speech

Rao, Hrishikesh 07 January 2016 (has links)
Paralinguistic events are useful indicators of a speaker's affective state. In children's speech, these cues help form social bonds with caregivers, and they have also proved useful for very early detection of developmental disorders such as autism spectrum disorder (ASD). Prior work on children's speech has relied on a limited number of subjects without sufficient diversity in the types of vocalizations produced, and the features needed to characterize the production of paralinguistic events are not fully understood. Because no off-the-shelf solution exists for detecting instances of laughter and crying in children's speech, this thesis investigates and develops signal processing algorithms to extract acoustic features and applies machine learning algorithms to various corpora. Results obtained with baseline spectral and prosodic features indicate that a combination of spectral, prosodic, and dysphonation-related features is needed to detect laughter and whining in toddlers' speech across different age groups and recording environments. Long-term features proved useful for capturing the periodic properties of laughter in adults' and children's speech and detected instances of laughter with high accuracy. Finally, the thesis examines the use of multi-modal information, combining acoustic features with computer vision-based smile-related features, to detect instances of laughter and to reduce false positives in adults' and children's speech. Fusing the two modalities improved accuracy and recall rates over using either modality on its own.
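Illustrative only, not the thesis's actual system: a minimal sketch of the kind of pipeline the abstract describes, with frame-level spectral and prosodic features pooled into a clip-level summary and fed to a conventional classifier. It assumes the librosa and scikit-learn libraries; the function names, the 16 kHz sampling rate, and the laughter/whine/other label set are placeholders.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def clip_features(path, sr=16000):
    """Pool frame-level spectral and prosodic descriptors into one clip-level vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral envelope
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral brightness
    zcr = librosa.feature.zero_crossing_rate(y)               # noisiness proxy
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=600, sr=sr)      # pitch contour (prosody)
    f0 = f0[np.isfinite(f0)]                                  # drop unvoiced frames
    frames = np.vstack([mfcc, centroid, zcr])
    # Mean/std pooling gives a fixed-length, long-term summary of the clip.
    return np.concatenate([
        frames.mean(axis=1), frames.std(axis=1),
        [f0.mean() if f0.size else 0.0, f0.std() if f0.size else 0.0],
    ])

def train_detector(wav_paths, labels):
    """Fit a simple classifier on pooled clip-level features (e.g. laughter / whine / other)."""
    X = np.stack([clip_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf
```

A multi-modal variant along the lines the abstract mentions might concatenate these acoustic statistics with per-clip smile-related statistics from a face tracker before classification.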
122

Linearisation of analogue to digital and digital to analogue converters

Dent, Alan Christopher January 1990 (has links)
No description available.
123

Digital signal processing for the analysis of fetal breathing movements

Ansourian, Megeurditch N. January 1989 (has links)
No description available.
124

Computational model of visual attention : integrative approach

Lee, KangWoo January 2003 (has links)
No description available.
125

A new neural network based approach to position and scale invariant pattern recognition

Mertzanis, Emmanouel Christopher January 1992 (has links)
No description available.
126

A quadrilateral-based method for object segmentation and tracking

Chung, Hing-yip, Ronald, 鍾興業. January 2003 (has links)
Published or final version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
127

Verification of off-line handwritten signatures

Fang, Bin, 房斌 January 2001 (has links)
Published or final version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
128

Subword units and parallel processing for automatic speech recognition

Chong, Michael Wai Hing January 1990 (has links)
No description available.
129

The design and implementation of a multiple views 3D object recognition system

Hodgetts, Mark Anthony January 1995 (has links)
No description available.
130

Automatic speech synthesis using auditory transforms and artificial neural networks

Tuerk, Christine M. January 1992 (has links)
No description available.
