1

The Effect of Laryngeal Activity on the Articulatory Kinematics of /i/ and /u/

Peacock, Mendocino Nicole 12 June 2020 (has links)
This study examined the effects of laryngeal activity on articulation by comparing the articulatory kinematics of the vowels /i/ and /u/ produced in four speaking conditions (loud, comfortable, soft, and whispered). Participants included 10 males and 10 females with no history of communication disorders, who read six stimulus sentences in each condition. An electromagnetic articulograph was used to track the articulatory movements. The sentence "We do agree the loud noise is annoying" was selected from the utterances, and the words "we do agree" were segmented from it; this phrase was chosen because of the tongue and lip movements associated with its retracted and rounded vowels. Results revealed that the soft condition generally produced smaller and slower articulatory movements than the comfortable condition, whereas the whispered condition showed an increase in movement size, and the loud condition showed the greatest increase in both size and speed relative to the comfortable condition. The larger movements in whispered speech may reflect unfamiliarity with the task as well as reduced auditory feedback, which requires the speaker to rely more on tactile feedback. These findings suggest that adjusting laryngeal activity by speaking more loudly or softly influences articulation, which may be useful in treating both voice and articulation impairments.
2

Whisper and Phonation: Aerodynamic Comparisons across Adduction and Loudness Levels

Konnai, Ramya Mohan 26 March 2012 (has links)
No description available.
3

Speech motor control variables in the production of voicing contrasts and emphatic accent

Mills, Timothy Ian Pandachuck January 2009 (has links)
This dissertation looks at motor control in speech production. Two specific questions emerging from the speech motor control literature are studied: the question of articulatory versus acoustic motor control targets, and the question of whether prosodic linguistic variables are controlled in the same way as segmental linguistic variables. In the first study, I test the utility of whispered speech as a tool for addressing the question of articulatory or acoustic motor control targets. Research has probed both sides of this question. The case for articulatory specifications is developed in depth in the Articulatory Phonology framework of the Haskins researchers (e.g. Browman & Goldstein 2000), based on the task-dynamic model of control presented by Saltzman & Kelso (1987). The case for acoustic specifications is developed in the work of Perkell and others (e.g. Perkell, Matthies, Svirsky & Jordan 1993; Guenther, Espy-Wilson, Boyce, Matthies, Zandipour & Perkell 1999; Perkell, Guenther, Lane, Matthies, Perrier, Vick, Wilhelms-Tricarico & Zandipour 2000). It has also been suggested that some productions are governed by articulatory targets while others are governed by acoustic targets (Ladefoged 2005). This study involves two experiments. In the first, I made endoscopic video recordings of the larynx during the production of phonological voicing contrasts in normal and whispered speech. I found that the glottal aperture difference between voiced obstruents (e.g. /d/) and voiceless obstruents (e.g. /t/) in normal speech was preserved in whispered speech. Of particular interest was the observation that phonologically voiced obstruents tended to exhibit a narrower glottal aperture in whisper than vowels, which are also phonologically voiced. This suggests that the motor control target for voicing is different for vowels than for voiced obstruents.
A perceptual experiment using the speech material from the endoscopic recordings tested whether listeners could discriminate phonological voicing in whisper, in the absence of non-laryngeal cues such as duration. I found that perceptual discrimination in whisper, while lower than for normal speech, was significantly above chance. Together, the perception and production data suggest that whispered speech removes neither the acoustic nor the articulatory distinction between phonologically voiced and voiceless segments. Whisper is therefore not a useful tool for probing the question of articulatory versus acoustic motor control targets. In the second study, I look at the multiple parameters contributing to relative prominence, to see whether they are controlled in a qualitatively similar way to the parameters that bite-block studies have shown to contribute to labial closure or vowel height. I vary prominence by eliciting nuclear accents with a contrastive and a non-contrastive reading. Prominence in this manipulation is found to be signalled by f0 peak, accented-syllable duration, and peak amplitude, but not by vowel decentralization or spectral tilt. I manipulate the contribution of f0 in two ways. The first is by eliciting the contrastive and non-contrastive readings in questions rather than statements, which reduces the f0 difference between the two readings. The second is by eliciting the two readings in whispered speech, thus removing the acoustic f0 information entirely. In the first manipulation, I find that the contributions of both duration and amplitude to signalling contrast are reduced in parallel with the f0 contribution. This is qualitatively different behaviour from all other motor control studies: generally, when one variable is manipulated, others either act to compensate or do not react at all.
It would seem, then, that this prosodic variable is controlled in a manner different from the other speech motor targets that have been examined. In the whisper manipulation, I find no response in duration or amplitude to the manipulation of f0. This result suggests that, as in the endoscopy study, whisper may not be an effective means of perturbing laryngeal articulations.
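The above-chance discrimination finding can be checked with a one-sided exact binomial test; a minimal sketch (the trial counts below are hypothetical, not the dissertation's data):

```python
from math import comb

def binomial_p_above_chance(correct, trials, chance=0.5):
    """One-sided exact binomial test: P(X >= correct) under the chance rate."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: 70 correct voicing judgements out of 100 whispered trials.
p = binomial_p_above_chance(70, 100)
```

A small p-value here would indicate that listeners' voicing judgements in whisper exceed what guessing alone would produce.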
4

The Effects of Laryngeal Activity on Articulatory Kinematics

Barber, Katherine Marie 01 October 2015 (has links)
The current study examined the effects of three speech conditions (voiced, whispered, and mouthed) on articulatory kinematics at the sentence and word levels. Participants included 20 adults (10 males, 10 females) with no history of speech, language, or hearing disorders. Participants read aloud six target utterances in the three speaking conditions while articulatory kinematics were measured using the NDI Wave electromagnetic articulograph. The following articulators were examined: mid tongue, front of tongue, jaw, lower lip, and upper lip. One target utterance ("It's time to shop for two new suits") was chosen for analysis at the sentence level and then further segmented for more detailed analysis of the word "time". Results revealed a number of significant changes between the voiced and mouthed conditions for all articulators at the sentence level: significant increases in sentence duration, articulatory stroke count, and stroke duration, as well as significant decreases in peak stroke speed, stroke distance, and hull volume, were found in the mouthed condition compared to the voiced condition. Peak velocity significantly decreased in the mouthed condition at the word level, but overall the sentence-level measures were more sensitive to change. These findings suggest that both laryngeal activation and auditory feedback may be necessary for the production of normally articulated speech, and that their absence may account for the significant changes between the voiced and mouthed conditions.
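Kinematic "stroke" measures of this kind are derived from articulator speed over time; a rough sketch of how stroke count and peak speed might be computed from sampled positions (the trajectory, sampling rate, and threshold are illustrative, not the study's actual method or data):

```python
import math

def speeds(positions, dt):
    """Frame-to-frame speed from (x, y, z) position samples taken every dt seconds."""
    return [math.dist(a, b) / dt for a, b in zip(positions, positions[1:])]

def count_strokes(speed, threshold):
    """Count movement strokes as runs where speed rises above a threshold."""
    strokes, moving = 0, False
    for s in speed:
        if s > threshold and not moving:
            strokes, moving = strokes + 1, True
        elif s <= threshold:
            moving = False
    return strokes

# Illustrative trajectory (mm): two distinct movements separated by a pause.
traj = [(0, 0, 0), (5, 0, 0), (10, 0, 0), (10, 0, 0), (10, 5, 0), (10, 10, 0)]
sp = speeds(traj, dt=0.01)                        # mm/s at a 100 Hz sampling rate
n_strokes = count_strokes(sp, threshold=100.0)    # -> 2 strokes
peak_speed = max(sp)
```

Real pipelines typically smooth the speed signal and segment strokes at speed minima rather than a fixed threshold, but the structure of the computation is the same.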
5

My soul looks back in wonder, how I got over: black women’s narratives on spirituality, sexuality, and informal learning

McClish, Keondria E. January 1900 (has links)
Doctor of Philosophy / Department of Adult Learning and Leadership / Kakali Bhattacharya / Royce Ann Collins / The purpose of this qualitative study was to explore how two Black women, born between 1946 and 1964, discuss their sexuality in relation to their understanding of spirituality and informal learning. Using a Black Feminine Narrative Inquiry framework informed by womanism, Black feminism, and the narrative structures used by Black women novelists, this qualitative study analyzed the vulnerable, empowered, and spirit-driven narratives (VES Narratives) collected from the participants to explore their experiences with spirituality, sexuality, and informal learning. The data collection methods included wisdom whisper talks to elicit spirituality and sexuality timelines and to glean information from the participants' treasure chests.
6

Role of Driver Hearing in Commercial Motor Vehicle Operation: An Evaluation of the FHWA Hearing Requirement

Lee, Suzanne E. 25 August 1998 (has links)
The Federal Highway Administration (FHWA) currently requires that all persons seeking a commercial driver's license for interstate commerce possess a certain minimal level of hearing. After an extensive literature review on topics related to hearing and driving, a human factors engineering approach was used to evaluate the appropriateness of this hearing requirement, the methods currently specified to test drivers' hearing, and the appropriate hearing levels required. Task analysis, audiometry, dosimetry, in-cab noise measurements, and analytical prediction of both speech intelligibility and masked thresholds were all used in performing the evaluation. One of the methods currently used to test truck driver hearing, the forced-whisper test, was also evaluated in a laboratory experiment in order to compare its effectiveness to that of standard pure-tone audiometry. Results indicated that there are truck driving tasks that require the use of hearing, that truck drivers may be suffering permanent hearing loss as a result of driving, that team drivers may be approaching a 100% OSHA noise dose over 24 hours, and that truck-cab noise severely compromises the intelligibility of live and CB speech, as well as the audibility of most internal and external warning signals. The forced-whisper experiment demonstrated that there is significant variability in the sound pressure level of whispers produced using this technique (in the words, word types, and trials main effects). The test was found to be repeatable for a group of listeners with good hearing, but was found to have only a weak relationship to the results of pure-tone audiometry for a group of 21 subjects with hearing levels ranging from good to very poor. Several truck cab and warning signal design changes, as well as regulatory changes, were recommended based on the overall results of this evaluation. / Ph. D.
7

En undersökning av AI-verktyget Whisper som potentiell ersättare till det manuella arbetssättet inom undertextframtagning / A Study of the AI-tool Whisper as a Potential Substitute to the Manual Process of Subtitling

Kaka, Mailad Waled Kider, Oummadi, Yassin January 2023 (has links)
Det manuella arbetssättet för undertextframtagning är en tidskrävande och kostsam process. Arbetet undersöker AI-verktyget Whisper och dess potential att ersätta processen som används idag. Processen innefattar både transkribering och översättning.  För att verktyget ska kunna göra denna transkribering och översättning behöver den i första hand kunna omvandla tal till text. Detta kallas för taligenkänning och är baserat på upptränade språkmodeller. Precisionen för transkriberingen kan mätas med ordfelfrekvens (Word Error Rate – WER) och för översättningen med COMET-22.  Resultaten visade sig klara av Microsofts krav för maximalt tillåten WER och anses därför vara tillräckligt bra för användning. Resultaten indikerade även att de maskinproducerade översättningarna uppnår tillfredställande kvalitet. Undertextframtagning, som är det andra steget i processen, visade sig Whisper ha svårare för när det gäller skapandet av undertexter. Detta gällde både för transkriberingen i originalspråk samt den engelsköversatta versionen. Kvaliteten på undertexternas formatering, som mäts med SubER-metoden, kan tolkas som för låga för att anses vara användbara. Resultaten låg i intervallet 59 till 96% vilket innebär hur stor del av den automatiskt tillverkade undertexten behöver korrigeras för att matcha referensen.  Den övergripande slutsatsen man kan dra är att Whisper eventuellt kan ersätta den faktiska transkriberings -och översättningsprocessen, då den både är snabbare och kostar mindre resurser än det manuella tillvägagångssättet. Den är dock inte i skrivande stund tillräcklig för att ersätta undertextframtagningen. / The manual process of subtitling creation is a time consuming and costly process. This study examines the AI-tool Whisper and its potential of substituting the process used today. The process consists of both speech recognition and speech translation.  For the tool to accomplish the transcription and translation, it first needs to be able to convert speech-to-text. 
This is called speech recognition and is based on trained language models. The precision of the transcription can be measured with the word error rate (WER), while the translation is evaluated with COMET-22. The results met Microsoft's requirement for the maximum allowed WER and were therefore considered usable. The results also indicated that the machine-produced translations reached satisfactory quality. Subtitle creation, the second step of the process, turned out to be more of a challenge for Whisper. This applied both to the transcription in the original language and to the English-translated version. The quality of the subtitle formatting, measured using the SubER method, can be interpreted as too low to be considered useful: the scores lay in the interval of 59 to 96%, which indicates how large a share of the automatically produced subtitles needs to be corrected to match the reference. The overall conclusion is that Whisper could eventually substitute for the actual transcription and translation process, since it is both faster and less resource-intensive than the manual approach. At the time of writing, however, it is not sufficient to replace subtitle creation.
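WER, the transcription metric used above, is the word-level edit distance (substitutions, insertions, and deletions) divided by the number of reference words; a minimal self-contained implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c", "a x c")` gives 1/3: one substitution against a three-word reference. Production evaluations usually also normalize casing and punctuation before scoring.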
8

Model větrné elektrárny pro výzkumné a laboratorní využití / Wind Turbine Model for Research and Laboratory Applications

Števček, Tomáš January 2015 (has links)
A major portion of this thesis is devoted to a model of the Whisper 200 wind turbine in the Matlab-Simulink environment. The turbine is installed at the Department of Electrical Power Engineering, FEEC BUT. Several types of simulations can be executed in the model. On that basis, the power curve and mathematical relationships between wind speed and other physical quantities, such as RPM, electric current, and voltage, were obtained. Comparisons of the simulation results with measurement data illustrate adequate agreement, but the limitations of the model remain significant, as is exhaustively documented and commented upon in the thesis. As a partial advancement towards eliminating the model's deficiencies, conditions for substantial performance improvements of the dynamic simulation have been elaborately derived.
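The power curve discussed above follows from the standard wind power equation P = ½ρACᵖv³; a sketch with placeholder parameters (the rotor diameter and power coefficient below are illustrative assumptions, not the Whisper 200's actual specification):

```python
import math

def turbine_power(v, rotor_diameter=2.7, cp=0.30, rho=1.225):
    """Mechanical power (W) extracted at wind speed v (m/s).

    rho: air density (kg/m^3); cp: power coefficient (cannot exceed the
    Betz limit of ~0.593). Diameter and cp here are placeholders.
    """
    area = math.pi * (rotor_diameter / 2) ** 2  # swept rotor area (m^2)
    return 0.5 * rho * area * cp * v ** 3

# The cubic term dominates the power curve: doubling wind speed
# raises extracted power eightfold.
ratio = turbine_power(10.0) / turbine_power(5.0)
```

Real turbines deviate from this ideal curve near cut-in and at rated power, which is one reason simulated and measured curves diverge.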
9

A Comparative Analysis of Whisper and VoxRex on Swedish Speech Data

Fredriksson, Max, Ramsay Veljanovska, Elise January 2024 (has links)
With the constant development of more advanced speech recognition models, the need to determine which models are better in specific areas and for specific purposes becomes increasingly crucial. This is even more true for low-resource languages such as Swedish, which depend on the progress of models for the large international languages. Lagerlöf (2022) conducted a comparative analysis between Google's speech-to-text model and NLoS's VoxRex B, concluding that VoxRex was the best for Swedish audio. Since then, OpenAI has released its automatic speech recognition model Whisper, prompting a reassessment of the preferred choice for transcribing Swedish. In this comparative analysis using data from Swedish radio news segments, Whisper performs better than VoxRex in tests on the raw output, an advantage driven largely by more proficient sentence construction. Regarding pure word prediction, it is not possible to conclude which model is better; however, the results favor VoxRex, which displays lower variability. Even though Whisper predicts full text better, the choice of model should therefore be determined by the user's needs.
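The "lower variability" criterion above can be made concrete by comparing the spread of per-segment error rates; a toy sketch with made-up numbers (not the study's data):

```python
import statistics

# Hypothetical per-segment WERs for two ASR models on the same test set:
# one model has the lower mean, the other the lower spread.
whisper_wers = [0.04, 0.02, 0.25, 0.03, 0.16]
voxrex_wers  = [0.08, 0.09, 0.11, 0.07, 0.10]

def summarize(wers):
    """Mean and sample standard deviation of per-segment error rates."""
    return statistics.mean(wers), statistics.stdev(wers)

w_mean, w_sd = summarize(whisper_wers)
v_mean, v_sd = summarize(voxrex_wers)
# A model with a worse mean but tighter spread gives more predictable output,
# which can matter more than average accuracy for some applications.
```

This mirrors the conclusion above: aggregate accuracy and consistency can point to different winners, so the right model depends on which property the user values.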
