  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Countercontrol as a Factor in Teaching Vocal Imitation to an Autistic Child and Its Relationship to Motivational Parameters

Hughes, Lois V. 08 1900
Operant conditioning techniques were used to establish imitation in the manner outlined by Baer. Countercontrol was assessed in motor and vocal imitation across four motivational levels. Three levels of food deprivation (three-hour, fourteen-hour, and twenty-one-hour), plus a final response-contingent shock level, composed the parameters.
2

Vocal imitation for query by vocalisation

Mehrabi, Adib January 2018
The human voice presents a rich and powerful medium for expressing sonic ideas such as musical sounds. This capability extends beyond the sounds used in speech, evidenced, for example, in the art form of beatboxing and in recent studies highlighting the utility of vocal imitation for communicating sonic concepts. Meanwhile, the advance of digital audio has resulted in huge libraries of sounds at the disposal of music producers and sound designers. This presents a compelling search problem: with larger search spaces, the task of navigating sound libraries has become increasingly difficult. The versatility and expressive nature of the voice provides a seemingly ideal medium for querying sound libraries, raising the question of how well humans are able to vocally imitate musical sounds, and how we might use the voice as a tool for search. In this thesis we address these questions by investigating the ability of musicians to vocalise synthesised and percussive sounds, and evaluate the suitability of different audio features for predicting the perceptual similarity between vocal imitations and imitated sounds. In the first experiment, musicians were tasked with imitating synthesised sounds with one or two time-varying feature envelopes applied. The results show that participants were able to imitate pitch, loudness, and spectral centroid features accurately, and that imitation accuracy was generally preserved when the imitated stimuli combined two not necessarily congruent features. This demonstrates the viability of using the voice as a natural means of expressing time series of two features simultaneously. The second experiment consisted of two parts. In a vocal production task, musicians were asked to imitate drum sounds. Listeners were then asked to rate the similarity between the imitations and sounds from the same category (e.g., kick, snare).
The results show that drum sounds received the highest similarity ratings when rated against their own imitations (as opposed to imitations of another sound), and overall more than half the imitated sounds were correctly identified from the imitations with above-chance accuracy, although this varied considerably between drum categories. The findings from the vocal imitation experiments highlight the capacity of musicians to vocally imitate musical sounds, and some limitations of non-verbal vocal expression. Finally, we investigated the performance of different audio features as predictors of perceptual similarity between the imitations and imitated sounds from the second experiment. We show that features learned using convolutional auto-encoders outperform a number of popular heuristic features for this task, and that preservation of temporal information is more important than spectral resolution for differentiating between the vocal imitations and same-category drum sounds.
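The feature-comparison idea in this abstract can be illustrated with a minimal sketch (all function names and parameter values here are illustrative assumptions, not the thesis's actual pipeline): comparing two sounds by the frame-wise trajectory of their spectral centroid keeps temporal structure, in the spirit of the finding that preserving temporal information matters more than spectral resolution.

```python
import math

def spectral_centroid(frame):
    # Naive DFT magnitude spectrum; adequate for a short illustrative frame.
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    # Centroid as the magnitude-weighted mean bin index.
    return sum(k * m for k, m in enumerate(mags)) / total

def centroid_trajectory(signal, frame_len=64, hop=32):
    # One centroid value per overlapping frame: a time series, not an average.
    return [spectral_centroid(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, hop)]

def trajectory_distance(a, b):
    # Frame-wise comparison preserves temporal structure, unlike comparing
    # a single time-averaged spectrum per sound.
    n = min(len(a), len(b))
    return sum(abs(x - y) for x, y in zip(a[:n], b[:n])) / n
```

A distance of this kind could rank candidate sounds against a vocal query; the thesis's learned auto-encoder features play an analogous role with a richer representation.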
3

Evaluation of a Computer-Based Observer-Effect Training on Mothers' Vocal Imitation of Their Infant

Shea, Kerry A. 01 December 2019
Infants begin to learn important skills, such as contingency learning, social referencing, and joint attention, through everyday interactions with their environment. When infants learn that their behavior produces a change in the environment (e.g., attention from others), they engage in behavior that produces that effect (e.g., increases in smiling and sustained engagement). When mothers and other caregivers respond immediately to infant behavior, they help their infant learn that the infant's own behavior is effective in producing a change in the environment. The current investigation evaluated the effects of a computer-based training aimed at teaching mothers to play a vocal-imitation contingency-learning game. The training used observer-effect methodology, meaning the mothers observed and evaluated other mothers engaging in vocal imitation but did not themselves receive any direct coaching or feedback. All mothers completed the training in one session lasting less than 45 min. Results indicate that all mothers increased their use of vocal imitation post-training and maintained their performance at a two-week follow-up. Results are discussed in terms of how computer training may facilitate dissemination of responsive-caregiver training.
4

Infant vocal imitation of music

Pereira da Cruz Benetti, Lucia 02 August 2017
No description available.
5

Caractérisation acoustique et perceptive du bruit moteur dans un habitacle automobile

Sciabica, Jean-Francois 19 September 2011
Progress in sound insulation and the introduction of quieter power-trains have brought the sound ambiance of the car interior increasingly under control. For combustion-engine vehicles, it is now possible to modify or add engine components to make the engine sound more expressive and thereby strengthen the sensation of acceleration. Sound synthesis makes it possible to simulate these engine adjustments and study how they are perceived; to be simple and effective, such synthesis must meet the expectations of sound designers, for example giving the engine a sporty character. Interior car noise is a complex noise, since its timbre varies with the dynamics of the vehicle. Its perceptual description is known, notably through onomatopoeia ("ON", "AN" and "REU"), but the characterisation of these descriptors remains incomplete. It is therefore difficult to manipulate signal parameters during synthesis so as to reproduce these perceptual attributes in the created sounds. Our goal is to propose a new synthesis that establishes this missing link between perception and signal. A first experiment on a driving simulator studies the coupling between the acoustic perception and the perceived dynamics of the vehicle. We then seek to link the perception of engine noise to synthesis through vocal imitations of engine noise based on the onomatopoeia "ON" and "AN". From these results, a subtractive model of interior car noise is built, inspired by a source/filter model, and tested in two experiments in an acoustics laboratory. The engine noise can then be metaphorically regarded as the vehicle's "vocal cords", while the cabin resonances play the role of its "vocal tract".
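The source/filter metaphor in this abstract (engine as "vocal cords", cabin as "vocal tract") can be sketched as a toy subtractive model; the function names, firing-frequency formula, and parameter values below are illustrative assumptions, not the thesis's actual synthesis.

```python
import math

def engine_source(rpm, duration_s, sr=8000, harmonics=5):
    # Harmonic source at a nominal firing frequency: the car's "vocal cords".
    # The rpm-to-f0 mapping here is a placeholder, not a real engine model.
    f0 = rpm / 60.0 * 2.0
    n = int(duration_s * sr)
    return [sum(math.sin(2 * math.pi * f0 * (h + 1) * t / sr) / (h + 1)
                for h in range(harmonics))
            for t in range(n)]

def cabin_resonance(signal, freq_hz, bandwidth_hz, sr=8000):
    # Two-pole resonator standing in for one cabin mode: the "vocal tract".
    r = math.exp(-math.pi * bandwidth_hz / sr)      # pole radius from bandwidth
    theta = 2 * math.pi * freq_hz / sr              # pole angle from frequency
    a1, a2 = -2 * r * math.cos(theta), r * r
    out, y1, y2 = [], 0.0, 0.0
    for x in signal:
        y = x - a1 * y1 - a2 * y2                   # y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
        out.append(y)
        y1, y2 = y, y1
    return out
```

Chaining `cabin_resonance(engine_source(rpm, dur), freq, bw)` gives the basic source/filter structure; shaping the filter to favour the formant regions of "ON" or "AN" is the step the thesis's vocal-imitation experiments inform.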
6

Comparing Contingent Vocal Imitation and Contingent Vocal Responses to Increase Verbal Communication in Young Children with Autism Spectrum Disorder

Jaffar, Zehra January 2021
Individuals diagnosed with Autism Spectrum Disorder (ASD) have difficulties in forming functional communication. The purpose of this study was to replicate Ishizuka and Yamamoto (2016) to determine which intervention, contingent vocal imitation or contingent vocal responses, produced the highest level of vocalizations in young children diagnosed with ASD in a play-based setting. In the contingent vocal response treatment phase, the experimenter vocally responded to each child vocalization with a response that was topographically different from the child's response. In the contingent vocal imitation treatment phase, the experimenter vocally imitated the child's vocalization with a topographically identical response. Two children diagnosed with ASD, ages 41 and 57 months, participated in this study. An alternating treatments design was used to compare the effects of each treatment on increasing child vocalizations. Results indicated that contingent vocal imitation resulted in a higher number of child vocal imitations for both children. Results also indicated that contingent vocal responses and contingent vocal imitation produced comparable levels of overall vocalizations, which replicated the findings of Ishizuka and Yamamoto (2016). / Applied Behavioral Analysis