71

Reduplication in Lexical Phonology: Javanese Plural Reduplication

Schlindwein, Debbie January 1989 (has links)
No description available.
72

Preface (Arizona Phonology Conference, Volume 2, 1989)

January 1989 (has links)
No description available.
73

Deriving Abstract Representations Directly from the Level of Connected Speech

Bourgeois, Thomas C. January 1990 (has links)
No description available.
74

Is Voicing a Privative Feature?

Cho, Young-mee Yu January 1990 (has links)
A typology of voicing assimilation has been presented in Cho (1990a), whose result will be summarized in section 2. Like many other marked assimilations, voicing assimilation is characterized as spreading of only one value of the feature [voice]. The main body of this paper will compare a privative theory of voicing with a binary theory. It has often been noted that assimilation rules are natural rules since they are cross-linguistically very common. It has also been observed that they are asymmetric in nature (Schachter 1969, Schane 1972). For example, nasalization, palatalization, and assimilation of coronals to noncoronals are extremely common, but the reverse processes are not frequently found in natural languages. On the other hand, voicing assimilation has been known to be relatively free in choosing its propagating value. Whereas the other assimilation rules are sensitive to the marked and the unmarked value of a given feature, assimilating a voiced consonant to a voiceless consonant has been assumed to be as natural as the reverse process (Anderson 1979, Mohanan (forthcoming)). I have argued that voicing assimilation is no different in its asymmetry from the other types of assimilation by demonstrating the need for two parameters and one universal delinking rule. A universal typology emerges from the possible interaction among the values associated with delinking and spreading parameters. The following theoretical assumptions will be utilized throughout this paper. First, I follow the standard assumption in Autosegmental Phonology that assimilation rules involve not a change or a copy but a reassociation of the features. This operation of reassociation, called spreading, is assumed to be the sole mechanism of assimilation rules (Goldsmith 1979, Steriade 1982, Hayes 1986). Second, I assume Underspecification Theory, which requires that some feature values be unspecified in the underlying representation (Kiparsky 1982, Archangeli and Pulleyblank (forthcoming)). Distinguishing different versions of Underspecification Theory will not be relevant in the discussion since I will discuss whether voicing is universally a privative feature or a binary opposition. Third, I assume the principle of Structure Preservation (Kiparsky 1985, Borowsky 1986), which is expressed in terms of constraints that apply in underlying representations and to each stage in the derivation up to the level at which they are turned off (usually in the lexicon). Structure Preservation will be invoked to classify obstruents on the one hand, and sonorants and the other redundantly voiced segments on the other. Last, I translate the Classical Praguean conception of the relation between neutralization and assimilation into the autosegmental framework, and assume that assimilation is always feature-filling. All instances of the effect of feature-changing assimilation rules, then, are the result of two independent rules of (1) delinking and (2) spreading (Poser 1982, Mascaró 1987).
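To make the delinking-plus-spreading interaction described in this abstract concrete, here is a minimal sketch, not Cho's formalism, of regressive voicing assimilation treated as a universal delinking step followed by feature-filling spreading of a privative [voice]; the segment inventory and the cluster shapes are hypothetical.

```python
# Minimal illustrative sketch (not Cho's formalism): voicing assimilation as
# two independent steps, a universal delinking rule followed by feature-filling
# spreading of a privative [voice]. Segment classes and clusters are hypothetical.

OBSTRUENTS = set("ptkbdgszfv")
VOICED_OBSTRUENTS = set("bdgzv")   # obstruents lexically bearing [voice]

def underlying(seg):
    # Privative underspecification: only voiced obstruents carry [voice];
    # voiceless obstruents are simply unspecified for the feature.
    return {"voice"} if seg in VOICED_OBSTRUENTS else set()

def assimilate(c1, c2):
    """Regressive voicing assimilation over a two-obstruent cluster."""
    f1, f2 = underlying(c1), underlying(c2)
    if c1 in OBSTRUENTS and c2 in OBSTRUENTS:
        f1.discard("voice")                 # (1) universal delinking rule
        if "voice" in f2 and not f1:        # (2) feature-filling spreading:
            f1.add("voice")                 #     reassociation, not copying
    return (c1, f1), (c2, f2)

print(assimilate("t", "b"))   # /tb/: first obstruent acquires [voice]
print(assimilate("d", "k"))   # /dk/: first obstruent surfaces without [voice]
```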
75

Tunica Partial Vowel Harmony as Support for a Height Node

Wiswall, Wendy J. January 1991 (has links)
No description available.
76

Palatalization in Biscayan Basque and Feature Geometry

Hualde, Jose January 1988 (has links)
Archangeli (1987) has pointed out that the hierarchical model of feature representation, combined with the statement of phonological rules in terms of conditions and parameters, offers the advantage that it allows unitary processes that must be stated as multiple operations within other frameworks to be expressed as a single rule. In this paper I will offer an example of this (cf. Hualde, 1987 for another example). I will show that a seemingly complex process of palatalization that must be stated as two related but different operations within a linear model can be straightforwardly captured in the hierarchical/parametrical approach by taking into account the geometrical structures on which the palatalization rule applies; in particular, the branching structures created by a rule of place assimilation. I will assume that assimilatory processes have the effect of creating complex structures where features or nodes are shared by several segments. From this assumption we can make predictions about how other rules may apply to the output of a process of assimilation. These predictions are in some cases very different from what one would expect from a formulation of the rules in a linear, feature-changing framework. In the case to be examined here, the predictions made by taking into account derived geometrical structures receive very strong confirmation. I will consider a rule of palatalization in two Basque dialects. In one of them, the process of palatalization can be captured quite simply by a linear rule. In the other dialect, the facts appear more complex, requiring several operations within a linear framework, but are actually simpler to state within a geometrical/parametrical framework. Only within such a theory can we capture the fact that the more pervasive palatalization observed in this second dialect arose from a simplification in the rule that other dialects possess.
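As a rough illustration of why a rule stated on shared (branching) structure can cover what a linear account needs two operations for, here is a small sketch, not Hualde's analysis, in which place assimilation is modeled as reassociation to a shared place node, so that a later palatalization rule applied once to that node affects every segment linked to it; the class names and the toy cluster are hypothetical.

```python
# Illustrative sketch only (not Hualde's analysis): shared nodes created by
# place assimilation let one rule application affect several segments.

class PlaceNode:
    def __init__(self, articulator):
        self.articulator = articulator   # e.g. "coronal", "labial"
        self.palatal = False

class Segment:
    def __init__(self, symbol, place):
        self.symbol = symbol
        self.place = place               # segments may come to share one node

def place_assimilation(nasal, consonant):
    # Spreading as reassociation: the nasal relinks to the consonant's node,
    # creating a branching (shared) structure.
    nasal.place = consonant.place

def palatalize(place_node):
    # Palatalization stated once, on the place node itself.
    if place_node.articulator == "coronal":
        place_node.palatal = True

# Hypothetical /n + t/ cluster: after place assimilation the two segments
# share a single coronal place node, so palatalizing it affects both at once.
n = Segment("n", PlaceNode("coronal"))
t = Segment("t", PlaceNode("coronal"))
place_assimilation(n, t)
palatalize(t.place)
print(n.place.palatal, t.place.palatal)   # True True
```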
77

Is Plane Conflation Bracket Erasure?

Kang, Hyunsook January 1988 (has links)
No description available.
78

Vowel Reduction in Tiberian Biblical Hebrew as Evidence for a Sub-foot Level of Maximally Trimoraic Metrical Constituents

Churchyard, Henry January 1989 (has links)
No description available.
79

Floating Accent in Mayo

Hagberg, Larry January 1989 (has links)
A major claim of this paper is that the distinctive features of lexical accent are formally identical to those of tone, or at least to a subset of tonal features. The terms accent and tone have been used in many different ways in the literature, but throughout this paper I will use both terms to refer only to lexical features that surface as contrastive pitch, length, volume and/or other features of prominence. By lexical I mean features whose phonetic realization cannot be predicted by any regular metrical structure or phonological rule. I am assuming that the placement of stress is always determined by a set of language-particular (but parameter-based) rules which build metrical structure, with the location of exceptional stress indicated by a lexical diacritic called accent. Examples of such systems of rules are described in Hayes (1982), Hammond (1986) and Halle and Vergnaud (1987a, b). Although metrical structure has generally been associated with non-tonal languages, there are also some tonal languages which exhibit the presence of metrical structure. Examples of such languages include Creek (Haas (1977)), Malayalam (Mohanan (1982)) and Copala Trique (Hollenbach (1988)). Thus the presence of metrical structure is not sufficient in itself to distinguish a non-tonal language from a tonal language. What, then, distinguishes these two categories from one another? There are two general distinctions which have traditionally been made in classifying languages as tonal versus non-tonal. One distinction is that many tonal languages exhibit a variety of lexically contrastive tones, while most, if not all, of the degrees of stress in a non-tonal language can usually be explained using only one kind of lexical accent. Thus, tonal languages can have more than one kind of lexical tone, whereas non-tonal languages can have lexical accent but not tone, and there is apparently only one kind of lexical accent. I will discuss this apparent asymmetry in section three. The other distinction between tonal and non-tonal, for which I present counterevidence in this paper, is that autosegmental status has been attributed to tone, but not to accent, in a number of languages; see, for example, Goldsmith (1976), Williams (1976) and Pulleyblank (1983). For all such languages, the Universal Association Convention (UAC) (Goldsmith (1976)) predicts the location of most tones, with the remaining tones accounted for by lexical pre-linking. From an examination of the literature it appears, then, that the main distinction between the terms tonal and non-tonal is that tonal languages have lexical tone while non-tonal languages have lexical accent. Formally, both of these devices are lexical diacritics, but they appear to differ in that tone can be an autosegment, while no such status has ever been claimed for accent. Therefore, the question to be addressed in this paper is this: Can an accentual diacritic have autosegmental status? Using data from Mayo, a Uto-Aztecan language of northwestern Mexico, I will show that the answer is yes. The implication, then, is that accent is formally the same as tone, or at least the same as one variety of tone. A significant claim follows from this: if accent is formally the same as a tone, then no language can exist in which lexical accent occurs independently of all tonal features. As far as I know, no such language has been shown to exist. The paper is organized as follows. Section one presents the data and provides two possible analyses of Mayo stress using the theory of Halle and Vergnaud (1987a, b) (henceforth H&V). I show that Mayo has lexical accent which floats in underlying representation (UR), just like an autosegmental tone. Section two demonstrates that stress assignment crucially has to precede and follow reduplication, thus indicating that the rules of stress assignment are cyclic and that lexical accent refloats at the end of each cycle. In section three I explore the theoretical implications of this analysis and propose that accent is formally the same as tone.
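For readers unfamiliar with autosegmental association, the following toy sketch, which is not Hagberg's analysis, treats the Universal Association Convention as a left-to-right, one-to-one linking procedure and handles a lexically floating accent exactly as it would a floating tone; the word shapes and the "*" accent diacritic are hypothetical.

```python
# Toy sketch (not Hagberg's analysis): the Universal Association Convention
# as left-to-right, one-to-one linking, with a floating accent treated like
# a floating tone. Syllable strings and the "*" diacritic are hypothetical.

def associate(syllables, autosegments):
    """Link autosegments (tones or accents) to syllables left to right."""
    links = {}
    for i, auto in enumerate(autosegments):
        if i < len(syllables):
            links[syllables[i]] = auto   # one-to-one docking
    return links

# A floating accent has no underlying association line; the convention docks
# it on the first free syllable, just as it would a floating H tone.
print(associate(["ta", "ka", "ri"], ["*"]))        # {'ta': '*'}
print(associate(["ta", "ka", "ri"], ["H", "L"]))   # {'ta': 'H', 'ka': 'L'}
```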
80

Against [lateral]: Evidence from Chinese Sign Language and American Sign Language

Ann, Jean January 1990 (has links)
American Sign Language (ASL) signs are claimed to be composed of four parameters: handshape, location, movement (Stokoe 1960) and palm orientation (Battison 1974). This paper focuses solely on handshape, that is, the configuration of the thumb and the fingers in a given sign. Handshape is significant in ASL and Chinese Sign Language (CSL); that is, minimal pairs exist for handshape in each. Thus, the two ASL signs in (1) differ in one parameter: the handshapes are different, but the location, palm orientation and movement are the same. Similarly, the two CSL signs in (2) differ in one parameter: handshape. A logical next question asks if handshapes are further divisible into parts; more specifically, are handshapes composed of distinctive features? This question is not new; in fact, researchers have made many proposals for ASL handshape features (Lane, Boyes-Braem and Bellugi, 1979; Mandel, 1981; Liddell and Johnson, 1985; Sandler, 1989; Corina and Sagey, 1988 and others). This paper focuses on the proposal of Corina and Sagey (1988). In Section 2, I outline the proposed system for the distinctive handshapes of ASL, of which [lateral] is a part. Then, using data from ASL and CSL, I give three arguments in support of the claim that there is not sufficient justification in ASL or CSL for the feature [lateral]. First, I show in Section 3 that the prediction which follows from the claim that [lateral] applies only to the thumb, namely that the thumb behaves differently from the other fingers, is not borne out by CSL data. Second, I argue in Section 4 that since other features (proposed by Corina and Sagey, 1988) can derive the same phonetic effects as [lateral], [lateral] is unnecessary to describe thumb features in either ASL or CSL. Third, in Section 5, I use ASL and CSL data to argue that the notion of fingers as "specified" or "unspecified", although intuitively pleasing, should be discarded. If this notion cannot be used, the feature [lateral] does not uniquely identify a particular set of handshapes. I show that CSL data suggest that two other features, [contact to palm] and [contact to thumb], are independently needed. With these two features, and the exclusion of [lateral], the handshapes of both ASL and CSL can be explained. In Section 6, the arguments against [lateral] are summarized.
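As a rough way to see the final argument, the sketch below, which is not Ann's feature system, represents handshapes as small feature bundles and checks which features distinguish two shapes; the handshape labels and every feature value are invented, and the point is only that contrasts can be stated with [contact to palm] and [contact to thumb] (plus selected fingers) without invoking [lateral].

```python
# Hypothetical sketch (not Ann's system): handshapes as feature bundles,
# using only selected fingers, [contact to palm] and [contact to thumb].
# All labels and values below are invented for illustration.

handshapes = {
    "shape1": {"selected": frozenset({"thumb"}),
               "contact_palm": False, "contact_thumb": False},
    "shape2": {"selected": frozenset(),
               "contact_palm": True,  "contact_thumb": False},
    "shape3": {"selected": frozenset({"index"}),
               "contact_palm": False, "contact_thumb": True},
}

def contrastive_features(h1, h2):
    """Return the features on which two handshapes differ."""
    return {f for f in handshapes[h1] if handshapes[h1][f] != handshapes[h2][f]}

# Each pair is already distinguished without any appeal to [lateral].
print(contrastive_features("shape1", "shape2"))
print(contrastive_features("shape2", "shape3"))
```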
