11

EFFECTS OF AUGMENTED REAL-TIME AUDITORY FEEDBACK ON TOP-LEVEL PRECISION SHOOTING PERFORMANCE

Underwood, Stacy Marie 01 January 2009
This study examined the effects of training with real-time auditory feedback in precision shooting. Top-level shooters (N = 9) were randomly assigned to either a feedback or a no-feedback group. Each group completed a pre-test, a 4-week training intervention, and a post-test. During training sessions, the feedback group received augmented real-time auditory feedback based on postural and rifle barrel stability. Performance changes were measured through postural stability, rifle barrel stability, shot outcome, and shot group diameter. Real-time auditory feedback did not increase postural or rifle barrel stability in the feedback group, and no meaningful differences were found in shot outcome or shot group diameter during air rifle testing. The feedback group did, however, reduce its shot group diameter during smallbore testing. In summary, augmented real-time auditory feedback did not improve postural or rifle barrel stability. Future research should focus on the effects of auditory feedback on smallbore performance.
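A shot group's diameter is commonly quantified as its extreme spread, the largest center-to-center distance between any two shots. A minimal sketch of that calculation, assuming shots are recorded as (x, y) coordinates on the target; the abstract does not describe the study's own scoring pipeline, so the function and data below are purely illustrative:

```python
from itertools import combinations
from math import dist

def shot_group_diameter(shots):
    """Extreme spread: the largest center-to-center distance between
    any two shots in a group, in the coordinate units of the target
    (e.g., millimetres)."""
    return max(dist(a, b) for a, b in combinations(shots, 2))

# Example: five shots recorded as (x, y) offsets from target centre, in mm
group = [(0.0, 1.2), (-0.8, 0.4), (1.1, -0.3), (0.2, 0.9), (-0.5, -0.6)]
print(f"Shot group diameter: {shot_group_diameter(group):.2f} mm")
```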
12

Incorporating Auditory and Visual Feedback and Student Choice into an Interdependent Group Contingency to Improve On-Task Behavior

Giguere, Beth 21 March 2018
Group contingencies are efficient and effective behavioral interventions that allow teachers to apply a single reinforcement criterion to a large group of students. However, most research on group contingencies has not examined the type of teacher feedback used, or the effect of letting students choose that feedback type. The current study used a multiple baseline across participants design with an embedded alternating treatments design to compare the effectiveness of auditory and visual feedback within an interdependent group contingency aimed at improving the on-task behavior of three students in public elementary school classrooms. The study also explored whether incorporating student choice of feedback type would enhance behavioral outcomes. The results indicated that the interdependent group contingency intervention increased the on-task behavior of all three participants. While both auditory and visual feedback were effective, two of the students engaged in slightly higher levels of on-task behavior when auditory feedback was used. When students were given the option to choose which type of feedback would be used, two of the three favored auditory feedback over visual feedback, and on-task behavior was maintained for all three participants. These results have implications for the use of auditory feedback and choice in the classroom as part of a group contingency.
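In an interdependent group contingency, reinforcement is delivered only if the group as a whole meets the criterion. A minimal sketch of that decision rule, assuming on-task behavior is scored as a percentage of observation intervals; the 80% criterion and variable names are illustrative assumptions, not details from the study:

```python
def group_earns_reward(on_task_percentages, criterion=80.0):
    """Interdependent contingency: reinforcement depends on the
    performance of the group as a whole (here, its mean on-task
    percentage), rather than on any individual student."""
    group_mean = sum(on_task_percentages) / len(on_task_percentages)
    return group_mean >= criterion

# Example: interval-recording scores for one session, one value per student
scores = [85.0, 72.5, 90.0]  # percent of intervals scored as on-task
if group_earns_reward(scores):
    print("Criterion met: deliver the group reward and feedback signal.")
else:
    print("Criterion not met: no reward this session.")
```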
13

Braille Hero: Feedback modalities and their effectiveness on alphabetic braille learning

Hellkvist, Marcus January 2017
Braille literacy is a vital part of visually impaired and blind people's everyday lives. The purpose of this paper was to evaluate different feedback modalities used in a smartphone game and to analyze their impact on alphabetic braille learning. Three modalities were tested: tactile feedback, auditory feedback, and a combination of the two. A quantitative method and a post-test consisting of braille writing and reading exercises were used to measure the effectiveness of each feedback modality. Eighteen people, distributed equally among the three feedback modalities, participated in the study; each played the game blindfolded. The results showed no statistically significant difference between the feedback modalities as determined by a one-way ANOVA. There was, however, a practical difference during play: respondents who used the combined feedback method performed better in the game. On average, participants learned to identify seven of twelve braille characters and could read one of five words in braille print. The study concluded that the game could be played autonomously and that the feedback modalities could be used separately or in combination without affecting the post-test results.
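A one-way ANOVA across the three modality groups can be reproduced in a few lines with SciPy. A minimal sketch, assuming one post-test score per participant; the scores below are invented for illustration and are not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical post-test scores (characters identified, out of 12)
# for the three feedback-modality groups, n = 6 each.
tactile  = [5, 7, 6, 8, 6, 7]
auditory = [6, 7, 8, 5, 7, 6]
combined = [8, 7, 9, 6, 8, 7]

f_stat, p_value = f_oneway(tactile, auditory, combined)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# p >= .05 would indicate no statistically significant difference
# between modalities, matching the study's reported outcome.
```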
14

The Effect of Two Rate Change Approaches on Speech Movement Patterns

Lewis, Noelle Marie 12 May 2022
The current study examined the effect of different rate change approaches on speech movement patterns, including increasing and decreasing speaking rate volitionally as well as with delayed auditory feedback (DAF). There were 10 participants, five male and five female, with a mean age of 25 years; all were typical speakers. Participants spoke the sentence “Don’t fight or pout over a toy car” under slow, fast, and DAF speaking conditions. A total of five sensors were glued to each participant’s tongue, teeth, and lips, and NDI Wave electromagnetic articulography recorded the articulatory movements from these sensors as the participants spoke. Metrics for the individual movement strokes, or articulatory gestures, were calculated based on the movement speed of the articulators during the target utterance. Ten tokens of the target utterance were analyzed for stroke count, stroke speed, duration, and hull area. Vertical movements of the tongue, jaw, lips, and lip aperture were used to calculate the spatiotemporal index, which assesses variability in speech movements across the 10 sentence repetitions. Statistical analysis revealed that articulatory patterns changed significantly in slower speech: a speaker’s efforts to volitionally decrease speech rate affected articulation patterns more than either the fast or the DAF condition did. Findings from this study can serve as a foundation for future studies with dysarthric individuals, which may increase our understanding of mechanisms of change in the remediation of disordered speech.
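The spatiotemporal index (STI) is computed by amplitude- and time-normalizing each repetition's movement record, then summing the standard deviations across repetitions at fixed relative time points (50 points in the standard formulation of Smith and colleagues). A minimal sketch of that computation, assuming each repetition is a one-dimensional displacement signal; this illustrates the general method, not the study's analysis code:

```python
import numpy as np

def spatiotemporal_index(trajectories, n_points=50):
    """STI: z-score each trajectory in amplitude, linearly time-normalize
    it to a common length, then sum the standard deviations across
    repetitions at n_points relative time points."""
    normalized = []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        z = (traj - traj.mean()) / traj.std()        # amplitude normalization
        t_old = np.linspace(0.0, 1.0, len(z))
        t_new = np.linspace(0.0, 1.0, n_points)      # time normalization
        normalized.append(np.interp(t_new, t_old, z))
    stacked = np.vstack(normalized)                  # repetitions x time points
    return np.sum(np.std(stacked, axis=0))

# Example: 10 repetitions of a displacement record, each with a
# slightly different duration and amplitude offset.
rng = np.random.default_rng(0)
reps = [np.sin(np.linspace(0, 2 * np.pi, rng.integers(180, 220)))
        + rng.normal(0, 0.05, 1) for _ in range(10)]
print(f"STI = {spatiotemporal_index(reps):.2f}")
```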
15

Effects of Delayed Auditory Feedback on the Bereitschaftspotential

Johnson, Jennifer L. 19 November 2007
This study examined the brain electrical activity of normal speakers both with normal auditory feedback and under delayed auditory feedback (DAF) to determine the effect DAF has on the Bereitschaftspotential (BP). The BP reflects a person's preparatory state prior to the motor execution of an act and can be observed 1500 to 500 ms before voluntary movement. The participants were 10 adults with normal speech. Each read a series of 30 sentences, both without and with DAF, while the BP was measured. Results indicate that the BP is present across the scalp in both the control and DAF conditions; however, it is reduced under DAF. The scalp distribution maps indicate increased negativity in the left frontal lobe in the DAF condition. These findings suggest that while the brain is engaged in processing already-initiated information, the motor system may not be able to be primed for the next sequential motor event. Further research is needed to explore the motor control of speech and the ways altered feedback may disrupt it.
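A pre-movement potential such as the BP is typically obtained by averaging EEG epochs time-locked to movement onset. A minimal sketch of that averaging step, assuming a continuous single-electrode recording, a sampling rate, and onset markers; the window and baseline choices here are illustrative, not the study's pipeline:

```python
import numpy as np

def average_pre_movement_epochs(eeg, onsets, fs, pre_ms=1500):
    """Average EEG epochs ending at each movement onset.

    eeg    : 1-D array, continuous signal from one electrode (microvolts)
    onsets : sample indices of voluntary movement (speech) onset
    fs     : sampling rate in Hz
    """
    n = int(pre_ms / 1000 * fs)                    # samples per epoch
    epochs = np.array([eeg[t - n:t] for t in onsets if t >= n])
    epochs -= epochs[:, :int(0.2 * n)].mean(axis=1, keepdims=True)  # baseline
    return epochs.mean(axis=0)                     # the averaged BP waveform

# Example with synthetic data: 2 minutes of noise, one onset every 4 s
fs = 250
rng = np.random.default_rng(1)
eeg = rng.normal(0, 2, fs * 120)
onsets = np.arange(5, 115, 4) * fs
bp = average_pre_movement_epochs(eeg, onsets, fs)
print(bp.shape)                                    # (375,) = 1.5 s at 250 Hz
```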
16

Sorry, I can't hear you: A hearing impaired classical singer's exploration of vocal self-perception

Ekmark, Gustav January 2023
Vocal self-perception plays an important role in a classical singer's learning process, especially for a hearing-impaired classical singer like myself. To explore and challenge my vocal self-perception, I used two different enhanced feedback methods and observed how my singing technique responded: one based on auditory feedback and one based on visual feedback. I formulated two training sequences with a defined schedule and procedure. I sang excerpts from two contrasting arias and made a total of seventeen audio recordings and eleven video recordings. I then evaluated those recordings in listening sessions, focusing on the quality of tone, and chose six audio recordings to play for a small discussion group, whose perceptual data I collected. The results suggest that these methods did not positively impact my singing technique, but the experience led me to some important realizations about certain timbral qualities in my voice, and I learned a great deal about different aspects of vocal self-perception in my singing practice.
17

The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception

Liu, Xiaozhen 30 June 2023
As robots have become more pervasive in everyday life, their social aspects have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' perception of robots. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairy tale with each participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound and a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference than the other sounds. Interestingly, the musical sound mostly produced lower perception ratings than no sound at all. A further exploratory study was conducted with an additional 49 young adults to investigate the effect of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception, and to explore the impact of participant gender on emotional and social perception of Pepper. While listening to a fairy tale with each participant, the robot responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the voice-plus-gesture condition than in the gesture-only condition. Happiness and sadness again had the highest recognition accuracy. Interestingly, participants felt more discomfort, and perceived more anthropomorphism, with the male voices than with the female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper, whereas female participants were more likely to feel warmth. However, neither the gender of the robot voice nor the gender of the participant affected emotion recognition accuracy. Results are discussed in terms of social robot design guidelines for emotional cues and future research directions. / Master of Science / As robots increasingly appear in people's lives as functional assistants or for entertainment, there are more and more scenarios in which people interact with them, and more human-robot interaction research is being proposed to help develop natural ways of interacting. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception of the robot and its emotions. The results of our experiments show that emotion recognition accuracy for regular voices is significantly higher than for music and robot-like voices, and that regular voices elicit higher trust, naturalness, and preference. Neither the gender of the robot's voice nor the gender of the participant affected emotion recognition accuracy. People no longer lean toward traditional stereotypes of robotic voices (e.g., those of old movies), and expressing emotions with music and gestures mostly produced lower perception ratings. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants felt more discomfort, and perceived more human-likeness, in the male voices than in the female voices. Male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to feel warmth. Our study discusses design guidelines and future research directions for emotional cues in social robots.
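Per-emotion recognition accuracy in a study like this is read off the diagonal of a confusion matrix built from presented versus recognized labels. A minimal sketch with scikit-learn; the trial data below are invented for illustration and are not the study's results:

```python
from sklearn.metrics import confusion_matrix

emotions = ["anger", "fear", "happiness", "sadness", "surprise"]

# Hypothetical trials: the emotion the robot expressed vs. the
# emotion the participant reported recognizing.
presented  = ["happiness", "sadness", "anger", "fear", "happiness",
              "surprise", "sadness", "anger", "fear", "surprise"]
recognized = ["happiness", "sadness", "fear", "anger", "happiness",
              "fear", "sadness", "anger", "fear", "surprise"]

cm = confusion_matrix(presented, recognized, labels=emotions)
per_emotion_acc = cm.diagonal() / cm.sum(axis=1)   # row-normalized diagonal
for emotion, acc in zip(emotions, per_emotion_acc):
    print(f"{emotion:>9}: {acc:.0%}")
```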
18

Assessing Temporal Compensation of Speech due to Delayed Auditory Feedback

Davis, Samantha N. 01 May 2017
No description available.
19

THE SENSORIMOTOR CONTROL OF SEQUENTIAL FORCES: INVESTIGATIONS INTO VISUAL-SOMATOSENSORY FEEDBACK MODALITIES AND MODELS OF FORCE-TIMING INTERACTIONS

Therrien, Amanda S. 10 1900
Many daily motor tasks involve the precise control of both force level and motor timing. The neural mechanisms that concurrently manage these movement parameters remain unclear, as the dominant focus of previous literature has been to examine each in isolation. As a result, little is understood regarding the contribution of various sensory modalities to force output and interval production in sequential motor tasks. This thesis uses a sequential force production task to investigate the roles of visual and somatosensory feedback in the timed control of force. In Chapter 2 we find that removal of visual force feedback resulted in specific force output errors but left motor timing behavior relatively unaffected, consistent with predictions of the two-level timing model of Wing and Kristofferson (1973). In Chapter 3, we show that force output errors exhibited in the absence of a visual reference may be related to the processing of reafferent somatosensation from self-generated force pulses. The results of Chapter 4 reveal evidence that force errors following visual feedback removal are consistent with a shift in the perceived magnitude of force output, and that the direction of error may be determined by prior task constraints. In Chapter 5 we find evidence of effector specificity in the processing of, and compensation for, reafferent somatosensation. Lastly, in Chapter 6 we find that the interplay between audition and somatosensation in the control of sound level by the vocal effectors resembles that observed between vision and somatosensation in the control of force by the distal effectors. / Doctor of Philosophy (PhD)
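The Wing and Kristofferson (1973) model cited in Chapter 2 treats each inter-response interval as I_n = C_n + M_(n+1) - M_n, where C is a central timekeeper interval and M is a peripheral motor delay. It follows that var(I) = sigma_C^2 + 2*sigma_M^2 and that the lag-one autocovariance of the intervals equals -sigma_M^2, which lets both variance components be estimated from a response series. A minimal sketch of that estimation on synthetic data; this illustrates the published model, not the thesis's analysis code:

```python
import numpy as np

def wing_kristofferson(intervals):
    """Estimate clock and motor variance from inter-response intervals.

    Model: I_n = C_n + M_{n+1} - M_n, so
      var(I)            = clock_var + 2 * motor_var
      cov(I_n, I_{n+1}) = -motor_var
    """
    I = np.asarray(intervals, dtype=float)
    total_var = I.var(ddof=1)
    lag1_cov = np.cov(I[:-1], I[1:])[0, 1]
    motor_var = max(-lag1_cov, 0.0)        # model predicts negative lag-1 cov
    clock_var = total_var - 2.0 * motor_var
    return clock_var, motor_var

# Synthetic series: 500-ms target, clock SD 20 ms, motor SD 10 ms
rng = np.random.default_rng(2)
C = rng.normal(500, 20, 200)               # central timekeeper intervals
M = rng.normal(0, 10, 201)                 # motor implementation delays
I = C + M[1:] - M[:-1]                     # observed inter-response intervals
clock_var, motor_var = wing_kristofferson(I)
print(f"clock SD ~ {clock_var ** 0.5:.1f} ms, motor SD ~ {motor_var ** 0.5:.1f} ms")
```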
20

MusiKeys: Exploring Auditory-Physical Feedback Replacement for Mid-Air Text-Entry

Krasner, Alexander Laurence 07 August 2023
Extended reality (XR) technology is positioned to become more ubiquitous in life and the workplace in the coming decades, but the problem of how best to perform precision text-entry in XR remains unsolved. Physical QWERTY keyboards are the current standard for these kinds of tasks, but when they are recreated virtually, the feedback information from the sense of touch is lost. We designed and ran a study with 24 participants to explore the effects of using auditory feedback to communicate this missing information that typists normally get from touching a physical keyboard. The study encompassed four VR mid-air keyboards with increasing levels of auditory information, along with a fifth physical keyboard for reference. We evaluated the auditory augmentations in terms of performance, usability, and workload, while additionally assessing the ability of our technique to communicate the touch-feedback information. Results showed that providing clicking feedback on key-press and key-release improves typing compared to providing no auditory feedback, which is consistent with the literature on the topic. However, we also found that using audio to substitute for the information contained in physical-touch feedback, in place of actual physical-touch feedback, yielded no statistically significant difference in performance. The information can still be useful, but it would likely take considerable time to develop the muscle-memory reflexes that typists already have on physical keyboards. Nonetheless, we recommend that others consider incorporating auditory feedback of key-touch into their mid-air keyboards, since it received the highest user preference among the keyboards tested. / Master of Science / Extended reality (XR) refers to technology that allows users either to immerse themselves in virtual worlds or to incorporate virtual objects into the real world. XR is positioned to become more ubiquitous in life and the workplace in the coming decades, but the problem of how best to perform precision text-entry in XR remains unsolved. Physical QWERTY keyboards are the current standard for these kinds of tasks, but when they are recreated virtually, the information inherent to the sense of touch is lost. We designed and ran a study with 24 participants to explore the effects of using auditory feedback to communicate this missing information that typists normally get from touching a physical keyboard. The study encompassed four virtual reality (VR) mid-air keyboards with increasing levels of auditory information, along with a fifth physical keyboard for reference. We evaluated the auditory augmentations in terms of performance, usability, and workload, while additionally assessing the ability of our technique to communicate the touch-feedback information. Results showed that providing clicking feedback on key-press and key-release improves typing compared to providing no auditory feedback, which is consistent with the literature on the topic. However, we also found that using audio to substitute for the information contained in physical-touch feedback, in place of actual physical-touch feedback, yielded no statistically significant difference in performance. The information can still be useful, but it would likely take considerable time to develop the muscle-memory reflexes that typists already have on physical keyboards. Nonetheless, we recommend that others consider incorporating auditory feedback of key-touch into their mid-air keyboards, since it received the highest user preference among the keyboards tested.
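Auditory key-touch feedback of this kind reduces to mapping discrete keyboard events to short audio clips. A minimal event-handler sketch; the event model, clip names, and play_clip stub are illustrative assumptions rather than the study's implementation, and play_clip would be routed to a real audio engine in practice:

```python
from enum import Enum, auto

class KeyEvent(Enum):
    TOUCH = auto()     # fingertip enters the key's collision volume
    PRESS = auto()     # key pushed past its activation depth
    RELEASE = auto()   # key returns past its release depth

# Map each event to a short audio clip, substituting for the tactile
# information a physical keyboard would provide.
FEEDBACK_CLIPS = {
    KeyEvent.TOUCH: "soft_tick.wav",
    KeyEvent.PRESS: "click_down.wav",
    KeyEvent.RELEASE: "click_up.wav",
}

def play_clip(path: str) -> None:
    # Placeholder: route to the application's audio engine here.
    print(f"playing {path}")

def on_key_event(event: KeyEvent) -> None:
    """Fire the auditory feedback associated with a mid-air key event."""
    clip = FEEDBACK_CLIPS.get(event)
    if clip is not None:
        play_clip(clip)

# Example: a typist touches, presses, then releases a virtual key.
for e in (KeyEvent.TOUCH, KeyEvent.PRESS, KeyEvent.RELEASE):
    on_key_event(e)
```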
