This dissertation presents Error-Selective Learning (ESL), an error-driven model of phonological acquisition in Optimality Theory that is both restrictive and gradual. Together, these two properties yield a model that derives many attested intermediate stages in phonological development while also explaining how learners eventually converge on the target grammar. Error-Selective Learning is restrictive because its ranking algorithm is a version of Biased Constraint Demotion (BCD: Prince and Tesar 2004). BCD learners store their errors in a table called the Support and use ranking biases to build the most restrictive ranking compatible with that Support. The version of BCD adopted here has three such biases: (i) one for high-ranking Markedness (Smolensky 1996); (ii) one for high-ranking OO-Faith constraints (McCarthy 1998; Hayes 2004); and (iii) one for ranking specific IO-Faith constraints above general ones (Smith 2000; Hayes 2004). Error-Selective Learning is gradual because it uses a novel mechanism for introducing errors into the Support. Errors are not used immediately to learn new rankings; instead, they are stored temporarily in an Error Cache. Learning via BCD is triggered only once some constraint has caused too many errors to be ignored. Once learning is triggered, the learner chooses one best error in the Cache to add to the Support: an error that will cause minimal changes to the current grammar. The first main chapter synthesizes the existing arguments for this BCD algorithm and emphasizes the necessity of the Support's stored errors. The subsequent chapter presents Error-Selective Learning, using cross-linguistic examples of attested intermediate stages that can be accounted for in this approach. The third chapter compares ESL to a well-known alternative, the Gradual Learning Algorithm (GLA: Boersma 1997; Boersma and Hayes 2001), and argues that the GLA is overall not well-suited to learning restrictively because it does not store its errors and because it cannot reason from errors to rankings as BCD does. The final chapter presents an artificial language learning experiment designed to test for high-ranking OO-Faith in children's grammars; its results are consistent with the biases and stages of Error-Selective Learning.
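The control flow described in the abstract (cache errors, trigger learning at a threshold, commit one best error to the Support, re-rank) can be sketched in code. The Python below is an illustrative sketch only: the error-count threshold, the "fewest loser-preferring constraints" proxy for a minimal-change error, and the plain recursive constraint demotion step (standing in for the biased BCD version with its Markedness, OO-Faith, and specific-over-general Faith preferences) are assumptions made for exposition, not the dissertation's actual formalization.

```python
from collections import defaultdict

THRESHOLD = 3  # hypothetical: errors a constraint may cause before learning fires

# An "error" is a winner-loser pair in comparative-tableau style:
# {constraint: 'W' | 'L' | 'e'}, where 'W' = prefers the learner's winner,
# 'L' = prefers the loser, 'e' = no preference.

class ErrorSelectiveLearner:
    def __init__(self, constraints):
        self.constraints = list(constraints)
        self.support = []                      # errors committed to learning
        self.cache = []                        # errors stored temporarily (Error Cache)
        self.error_counts = defaultdict(int)   # loser-preferring tallies per constraint
        self.ranking = list(constraints)       # current ranking, most to least dominant

    def observe_error(self, error_row):
        """Cache an error; learn only if some constraint has caused
        more errors than the threshold tolerates."""
        self.cache.append(error_row)
        for constraint, mark in error_row.items():
            if mark == 'L':
                self.error_counts[constraint] += 1
        if any(count >= THRESHOLD for count in self.error_counts.values()):
            self.learn()

    def learn(self):
        """Move one 'best' error from the Cache to the Support and re-rank.
        Here 'best' = fewest loser-preferring constraints, a stand-in for
        'causes minimal change to the current grammar'."""
        best = min(self.cache, key=lambda row: sum(m == 'L' for m in row.values()))
        self.cache.remove(best)
        self.support.append(best)
        self.ranking = self.rerank()
        self.error_counts.clear()

    def rerank(self):
        """Plain recursive constraint demotion over the Support; the actual model
        uses Biased Constraint Demotion, which chooses among the rankable
        constraints at each stratum according to its three ranking biases."""
        remaining = set(self.constraints)
        rows = [dict(r) for r in self.support]
        order = []
        while remaining:
            # A constraint is rankable if it prefers no loser in any active row.
            rankable = [c for c in remaining
                        if all(r.get(c, 'e') != 'L' for r in rows)]
            if not rankable:  # inconsistent Support; stop rather than loop forever
                order.extend(sorted(remaining))
                break
            order.extend(sorted(rankable))
            # Rows whose winner is preferred by a newly ranked constraint are explained.
            rows = [r for r in rows if not any(r.get(c, 'e') == 'W' for c in rankable)]
            remaining -= set(rankable)
        return order

# Example use with hypothetical constraint names:
# learner = ErrorSelectiveLearner(['*CODA', 'MAX-IO', 'IDENT-IO'])
# learner.observe_error({'*CODA': 'L', 'MAX-IO': 'W', 'IDENT-IO': 'e'})
```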
Identifier | oai:union.ndltd.org:UMASS/oai:scholarworks.umass.edu:dissertations-4636 |
Date | 01 January 2007 |
Creators | Tessier, Anne-Michelle |
Publisher | ScholarWorks@UMass Amherst |
Source Sets | University of Massachusetts, Amherst |
Language | English |
Detected Language | English |
Type | text |
Source | Doctoral Dissertations Available from Proquest |