81

Development of seriation of pitch in musical perception

Gray, Janice January 1977 (has links)
No description available.
82

An Intonational Description of Mayan Q'eqchi'

Wagner, Karl Olaw Christian 02 July 2014 (has links) (PDF)
Q'eqchi' is one of many Mayan languages spoken in Guatemala, C.A. This study provides the first Tones and Break Indices (ToBI) transcription system (Silverman et al., 1992) labeling of Q'eqchi' within the Autosegmental-Metrical (AM) model of intonation (Liberman, 1975; Pierrehumbert, 1980; Ladd, 1996). As an exploratory study of the basic intonation patterns of the language, it makes observations on a variety of phenomena relating to the intonational structure and contour patterns of the language. Three native male speakers of Q'eqchi' each provided 75 spoken sentences designed to best observe the basic patterns of intonation in the language. Each spoken utterance was analyzed through the labeling of pitch accents, phrase accents, and boundary tones in accordance with ToBI transcription guidelines (Beckman & Hirschberg, 1994; Beckman & Elam, 1997). The study reinforces previous observations on the stress pattern in the language, identifies the pitch accents and boundary tones which best describe the behavior of the intonational contour of the Q'eqchi' speakers, and demonstrates the existence of prosodic phrases which dictate the intonational patterns of speech. In addition, the different patterns observed in declarative, imperative, and interrogative sentences are exemplified and discussed along with other phenomena observed in the spoken data.
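As a rough illustration of the AM/ToBI representation this abstract describes, the sketch below models an intonational phrase as pitch accents aligned with stressed syllables plus a phrase accent and boundary tone at the right edge. This is a hypothetical Python data structure, not anything from the thesis; the labels shown come from the standard ToBI inventory, and their applicability to Q'eqchi' is assumed only for the example.

```python
from dataclasses import dataclass
from typing import List

# Standard ToBI label sets (Beckman & Hirschberg, 1994); which of these occur
# in Q'eqchi' is an assumption made only for this illustration.
PITCH_ACCENTS = {"H*", "L*", "L+H*", "L*+H"}
PHRASE_ACCENTS = {"L-", "H-"}
BOUNDARY_TONES = {"L%", "H%"}

@dataclass
class ToBIEvent:
    time: float   # seconds into the utterance
    label: str    # e.g. "H*" for a pitch accent, "L%" for a boundary tone

@dataclass
class IntonationalPhrase:
    """One intonational phrase in the AM model: pitch accents align with
    stressed syllables; a phrase accent and boundary tone mark the right edge."""
    pitch_accents: List[ToBIEvent]
    phrase_accent: ToBIEvent
    boundary_tone: ToBIEvent

    def contour(self) -> List[str]:
        """Tonal string characterising the phrase's F0 contour."""
        return [e.label for e in self.pitch_accents] + [
            self.phrase_accent.label, self.boundary_tone.label]

# A hypothetical declarative phrase ending in a fall (H* ... L- L%).
phrase = IntonationalPhrase(
    pitch_accents=[ToBIEvent(0.42, "H*"), ToBIEvent(1.10, "L+H*")],
    phrase_accent=ToBIEvent(1.55, "L-"),
    boundary_tone=ToBIEvent(1.60, "L%"),
)
print(phrase.contour())   # ['H*', 'L+H*', 'L-', 'L%']
```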
83

The Influence of Musical Training and Maturation on Pitch Perception and Memory

Weaver, Aurora J. 25 August 2015 (has links)
No description available.
84

The kappa effect in pitch/time context

MacKenzie, Noah 08 March 2007 (has links)
No description available.
85

Discrimination of pitch direction : a developmental study

Descombes, Valérie. January 1999 (has links)
No description available.
86

The effect of three vocal models on uncertain singers' ability to match and discriminate pitches

Gratton, Martine January 1989 (has links)
No description available.
87

Acoustic Structure of Early Infant Babble

Lily Braedenrose Berlstein (13204803) 08 August 2022 (has links)
There is a plethora of information surrounding the stages of infant vocal development, and canonical babble's predictive power concerning future language outcomes. However, there is less information regarding how the acoustic features of early babble differ between canonical and non-canonical syllable types over the course of development. Furthermore, previous studies rely on small sample sizes which limit their findings' generalizability. This project examined the pitch range, mean pitch, and syllabic nuclei duration of monosyllabic canonical and non-canonical infant vocalizations over the course of development.

Audio files of monosyllabic utterances were obtained from 29 infants at low risk for developing a speech or language disorder, aged 10-26 months. The infants were divided into three age bands: 10-12 months (M=11.74, N=10, 5=F), 13-22 months (M=16.08, N=9, 6=F), and 23-26 months (M=24.67, N=9, 2=F). We listened to each utterance and marked syllable nucleus boundaries prior to running scripts to measure acoustic cues. Between 6 and 15 utterances were selected from each participant. The number of canonical utterances was matched to the number of non-canonical utterances (e.g., if 13 canonical utterances were selected for a specific participant, 13 non-canonical utterances were also selected). We then ran a Praat script which yielded the mean pitch, pitch range, and duration of the syllabic nucleus for each audio file.

We found that there was a significant effect of syllable type on duration, as canonical syllables were shorter in duration than non-canonical syllables (F(1, 618.34) = 10.64, p = .001), and on mean pitch, as canonical syllables were lower in mean pitch than non-canonical syllables (F(1, 618.57) = 7.18, p = .008). We did not find an effect of syllable type on pitch range, an effect of age on mean pitch or duration, or any interaction effects between syllable type and age. However, we did find an effect of age on pitch range, because infants in the oldest age bracket (23-26 months) were more likely to have a wider pitch range than younger infants (F(2, 44.77) = 5.05, p = .011).

This provides preliminary evidence that there are pitch and duration distinctions between canonical and non-canonical syllable types and suggests that as infants age they are more likely to use greater pitch variation within their vocalizations. However, as our study only examined monosyllabic utterances, further research is necessary in order to thoroughly investigate pitch and duration distinctions present in canonical and non-canonical syllables.
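The abstract mentions a Praat script that returned mean pitch, pitch range, and nucleus duration for each clip. A minimal sketch of those same three measurements, written here in Python with the praat-parselmouth wrapper rather than the thesis's own (unpublished) script, might look like the following; the file name, nucleus boundaries, and pitch floor/ceiling values are assumptions for illustration.

```python
import numpy as np
import parselmouth  # Python wrapper around Praat's analysis routines

def nucleus_measures(wav_path, t_start, t_end, floor_hz=75.0, ceiling_hz=600.0):
    """Mean pitch (Hz), pitch range (Hz), and duration (s) of one syllable
    nucleus, delimited by hand-marked boundaries t_start..t_end."""
    snd = parselmouth.Sound(wav_path).extract_part(from_time=t_start, to_time=t_end)
    pitch = snd.to_pitch(pitch_floor=floor_hz, pitch_ceiling=ceiling_hz)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                      # drop unvoiced frames (reported as 0 Hz)
    if f0.size == 0:
        return None                      # no voiced frames detected in the nucleus
    return {
        "mean_pitch_hz": float(np.mean(f0)),
        "pitch_range_hz": float(np.max(f0) - np.min(f0)),
        "duration_s": t_end - t_start,
    }

# Hypothetical usage with one hand-segmented nucleus:
print(nucleus_measures("infant_010_utt03.wav", t_start=0.21, t_end=0.47))
```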
88

African American vernacular English : origins and issues

Sutcliffe, David January 1998 (has links)
No description available.
89

Computer Recognition of Pitch for Musical Applications

Clendinning, Jane Piper 08 1900 (has links)
No description available.
90

Multiple Fundamental Frequency Pitch Detection for Real Time MIDI Applications

Hilbish, Nathan 18 July 2012 (has links)
This study aimed to develop a real-time multiple fundamental frequency detection algorithm for pitch-to-MIDI conversion applications. The algorithm uses neural network classifiers to identify a chord pattern (a combination of multiple fundamental frequencies). The first classification stage uses a binary decision tree to determine the root note (first note) in a combination of notes; this is achieved with a neural network binary classifier. At each node of the binary tree, a classifier narrows the root note to a frequency group (low or high frequency) until only two frequencies are left to choose from. The second classifier determines the amount of polyphony, or number of notes played; it is designed in the same fashion as the first, using a binary tree made up of neural network classifiers. The third classifier identifies the chord pattern that has been played. The chord classifier is chosen based on the root note and amount of polyphony: the first two classifiers constrain the third to chords containing only a specific root note and a set polyphony, which allows that classifier to be more focused and more accurate. To further increase accuracy, an error correction scheme was devised based on repetitive coding, a technique that holds out multiple frames and compares them in order to detect and correct errors. Repetitive coding significantly increases the classifiers' accuracy; holding out three frames was found to be suitable for real-time operation in terms of throughput, while holding out more frames further increases accuracy but is not suitable for real-time operation. The algorithm was tested on a common embedded platform, and benchmarking showed it to be well suited for real-time operation.
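The repetitive-coding step described above amounts to buffering several consecutive frame classifications and emitting a label only after comparing them, trading a little latency for accuracy. The sketch below shows one common way to realize that idea, a sliding majority vote over a three-frame window as in the thesis; the per-frame labels and the voting rule are illustrative assumptions, not the author's implementation.

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=3):
    """Repetitive-coding style error correction: hold out `window` consecutive
    frame classifications and emit the majority label for each full window."""
    buf = deque(maxlen=window)
    corrected = []
    for label in frame_labels:
        buf.append(label)
        if len(buf) == window:
            # Majority vote across the held-out frames; with window == 3,
            # a single misclassified frame is out-voted by its two neighbours.
            corrected.append(Counter(buf).most_common(1)[0][0])
    return corrected

# Hypothetical per-frame chord labels from the neural-network classifiers,
# with one spurious frame ("A7") corrected by its neighbours.
frames = ["Am", "Am", "A7", "Am", "Am", "C", "C", "C"]
print(smooth_predictions(frames))   # ['Am', 'Am', 'Am', 'Am', 'C', 'C']
```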
