1

Neighbor frequency effects during reading: Is there a parallel with lexical ambiguity?

Slattery, Timothy James 01 January 2007
The following four eye movement experiments examined the hypothesis that sentence context has a similar effect on words with higher frequency neighbors and lexically ambiguous words. This would be consistent with the notion that lexically ambiguous words can be thought of as extreme examples of word neighbors (word roommates). Experiment 1 presented words with higher frequency neighbors (birch, birth) in sentences that provided either a neutral context (i.e., the target word and its higher frequency neighbor could both fit equally well into the sentence) or a biased context (i.e., the target word was a better fit than its higher frequency neighbor). Experiment 2 used the items from Experiment 1 with a group of elderly readers (65 years of age or older) to investigate age-related differences in the neighbor frequency effect. A prior study by Rayner, Reichle, Stroud, Williams & Pollatsek (2006) concluded that elderly readers adopt a riskier reading strategy that relies heavily on partial parafoveal information; therefore, elderly readers may be more likely to miscode words that have higher frequency neighbors. Experiment 3 examined the role that syntax plays in the neighbor frequency effect during reading. Prior research by Folk and Morris (2003), using ambiguous word stimuli that spanned syntactic category, suggests that syntax can mediate the meaning resolution process. A critical difference between lexically ambiguous words and the words used in Experiments 1-3 is that the two meanings of lexically ambiguous words share the same phonological code. Therefore, Experiment 4 used words that are homonyms with their higher frequency neighbor (beech, beach).
2

Investigating the role of stimulus- and goal-driven factors in the guidance of eye movements

Dahlstrom-Hakki, Ibrahim H 01 January 2008
Three experiments investigated the influence and timing of various goal- and stimulus-driven factors in the guidance of eye movements in a simple visual search task. Participants were asked to detect the presence of an object of a given color among various distractor objects that could share either the color or the shape of the target object. The contrast of one or more objects was manipulated to investigate the influence of an irrelevant salience cue on eye movements. A time-dependent analysis showed that participants' early eye movements were generally directed toward the upper-left object in the display. The analysis further indicated that color then quickly became the primary guiding factor for eye movements, with salience and shape having minimal effects in early processing. Further analyses indicated that shape also influenced eye movement behavior, but largely to cancel eye movements to the target object and to end the trial without an eye movement. These analyses also indicated that shape was only processed when an object was attended because it had the target color. A model was developed and fit to the data of Experiment 1.
3

Prosodic parsing: The role of prosody in sentence comprehension

Schafer, Amy Jean 01 January 1997
This work presents an investigation of how prosodic information is used in natural language processing and how prosody should be incorporated into models of sentence comprehension. It is argued that the processing system builds a prosodic representation in the early stages of processing and is guided by this prosodic representation through multiple stages of analysis. Specifically, the results of four sentence comprehension experiments demonstrate that prosodic phrasing influences syntactic attachment decisions, focus interpretation, and the availability of contextual information in the resolution of lexical ambiguity. Two explicit hypotheses of how prosodic structure is used in processing are proposed to account for these effects: one which accounts for effects of phonological phrasing on syntactic processing decisions, and a second which accounts for effects of intonational phrasing on semantic/pragmatic interpretation. Three sources of evidence are provided in support of the central claim that the processor must build and use a prosodic representation from the early stages of processing. First, an experiment on the resolution of prepositional phrase attachment ambiguity demonstrates that syntactic attachment decisions are influenced by the overall pattern of phonological phrasing in the utterance, and not simply by prosodic boundaries located at the point of syntactic ambiguity. Thus, the effects of a single kind of prosodic element, at a single level in the prosodic hierarchy, must be accounted for with respect to the larger prosodic structure. A second experiment shows that the interpretation of focus depends on both the pattern of pitch accents in the utterance and the pattern of prosodic phrasing, establishing that different kinds of prosodic elements in the prosodic structure are used jointly in processing decisions. Two additional experiments, one on the interpretation of context-sensitive adjectives and a second on the resolution of within-category lexical ambiguity, demonstrate that phonologically distinct levels of prosodic phrasing have separable effects on language processing. Taken together, the four experiments suggest that prosody has a much broader role in sentence comprehension than previously recognized, and that models of sentence processing should be modified to incorporate prosodic structure.
4

Three-year-olds' reasoning about deceptive objects: Can actions speak louder than words?

Sylvia, Monica R 01 January 2002
The appearance-reality distinction refers to the understanding that objects can have misleading appearances that contradict reality. Traditionally, studies investigating children's ability to make this distinction have used a verbal-based task that requires children to answer two questions regarding the appearance and reality of a target object whose appearance has been altered. In general, these studies have found that children are not successful in this task until 4–5 years of age. The purpose of the current study was to investigate three different hypotheses regarding why 3-year-olds fail the traditional verbal-based task, in order to determine whether their poor performance truly represents an inability to distinguish appearance from reality. Experiment 1 examined the hypothesis that 3-year-olds fail the traditional task simply because they are unfamiliar with the property-distorting devices typically used to alter the appearances of target objects, rather than because they cannot distinguish appearance from reality. Experiments 1 and 2 also examined the hypothesis that 3-year-olds' failure in this task may be due to an inability to assign conflicting, dual representations to a single object. Finally, the role of the language used in making the appearance-reality distinction was also examined in both experiments; here, the hypothesis that 3-year-olds may be able to distinguish appearances from reality in an action-based, but not a verbal-based, task was evaluated. Experiment 1 used a property-distorting device typical of traditional appearance-reality studies, whereas Experiment 2 used a completely new method for altering the appearances of objects. No supporting evidence for the familiarity or dual-representation hypotheses was found in either experiment; however, children in both experiments performed better on an action-based task than on two verbal-based tasks. Children went from answering the traditional appearance-reality questions on the basis of misleading perceptual information to overriding this misleading information in an action-based task. Together, these results provide evidence that 3-year-olds have some competence in distinguishing appearances from reality that is masked by the language demands of the traditional verbal-based task.
5

The development in children of future time perspective

Silverman, Joseph L 01 January 1996
Little is known about how children develop their concepts of the future. However, future time perspective (FTP) is considered important in the development of abilities such as planning, goal setting, and the delay of gratification. FTP has also been related to mental health in adults and academic achievement in adolescents. This study explored FTP, defined as the ability to temporally locate and organize future events, and compared participants' ability to locate and organize the same events with respect to their past occurrences. There were 167 participants from four grade levels, with group average ages ranging from 7.4 to 10.5 years. Participants located five recurrent events on four timelines representing a past day, a past year, a future day, and a future year. Participants also took tests to assess their knowledge of conventional time (i.e., clocks and calendars). Hypotheses were proposed that: (a) participants would show a general developmental improvement on all tasks, (b) participants would perform better on day-scale than year-scale timelines, (c) participants would perform better on past than future timelines, and (d) knowledge of conventional time would be used by older participants to structure year-scale, but not day-scale, timelines. Results supported the first two hypotheses but, contrary to expectations, participants performed better on future than past timelines. The author proposed that locating sequences in the past is more cognitively challenging because it moves counter to the unidirectional flow of time: events that are more distant from the present are earlier in the sequence. Results supported the hypothesis that more sophisticated representations of conventional time are needed to locate events over longer durations, and that such representations are developmentally acquired, but a causal relationship could not be established. Participants relied heavily on event schemas in locating events; these schemas helped participants produce a correct sequence, but often with an incorrect starting point given the instructions to use the present as a reference point. Results also suggested that children might have a different concept of the relationship between the present and the past and future than adults do.
