1. Perseveration Errors in the Performance of Dichotic Listening Tasks by Schizophrenics: The Role of Stimulus Fusion
Gard, Diane M., December 1900
The purpose of the present study was to compare the number of perseverations on fused (no delay) versus unfused (0.5 msec delay) consonant-vowel dichotic listening (CV-DL) tasks with measures from a battery of executive-function tests across three groups: schizophrenic patients (SCZ), manic-depressive patients (MD), and normal controls (NC).
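As a rough illustration of the fusion manipulation, the sketch below builds a dichotic stimulus pair in which the right-channel syllable either starts simultaneously with the left (fused) or lags it by 0.5 msec (unfused). The sampling rate, function names, and the pure-tone placeholders standing in for recorded CV syllables are assumptions for illustration, not the study's actual materials.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumed, not taken from the original study)

def dichotic_pair(left_syllable: np.ndarray,
                  right_syllable: np.ndarray,
                  lag_ms: float = 0.0) -> np.ndarray:
    """Return a stereo array with the right channel delayed by lag_ms.

    lag_ms = 0.0 corresponds to the fused (no delay) condition;
    lag_ms = 0.5 corresponds to the unfused (0.5 msec delay) condition.
    """
    lag_samples = int(round(lag_ms / 1000.0 * FS))
    right = np.concatenate([np.zeros(lag_samples), right_syllable])
    # Pad both channels to equal length before interleaving into stereo.
    n = max(len(left_syllable), len(right))
    left = np.pad(left_syllable, (0, n - len(left_syllable)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=1)  # columns: left ear, right ear

# Placeholder "syllables": pure tones standing in for recorded CV tokens.
t = np.linspace(0, 0.3, int(0.3 * FS), endpoint=False)
ba = 0.5 * np.sin(2 * np.pi * 220 * t)
ga = 0.5 * np.sin(2 * np.pi * 330 * t)

fused = dichotic_pair(ba, ga, lag_ms=0.0)    # no onset delay
unfused = dichotic_pair(ba, ga, lag_ms=0.5)  # 0.5 msec onset delay
```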
2. Executive control in speech comprehension: bilingual dichotic listening studies
Miura, Takayuki, January 2014
In this dissertation, the traditional dichotic listening paradigm was integrated with the notion of working memory capacity (WMC) to explore the cognitive mechanisms of bilingual speech comprehension at the passage level. A bilingual dichotic listening (BDL) task was developed and administered to investigate characteristics of bilingual listening comprehension, including semantic relatedness, unattended language, ear preference, auditory attentional control, executive control, voluntary note-taking, and language switching. The central concept of the BDL paradigm is that the auditory stimuli are presented in the bilinguals' two languages and attention is directed to one ear while listeners have to overcome cognitive and linguistic conflicts caused by information in the other ear. Different experimental manipulations were employed in the BDL task to examine these characteristics of bilingual listening comprehension. The bilingual population examined was Japanese-English bilinguals with relatively high second language (L2) proficiency and WMC. Seven experiments and seven cross-experimental comparisons are reported. Experiment 1 employed the BDL task with pairs of passages that had different semantic relationships (i.e., related or unrelated) and were heard in different languages (i.e., L1 or L2). The semantically related passages were found to interfere with comprehension of the attended passage more than the semantically unrelated passages, whether the attended and unattended languages were the same or different. Contrary to theories of bilingual language control, an unattended L1 was found to enhance comprehension of the attended passage, regardless of the semantic relationship and the language of the attended passage. L2 proficiency and WMC served as good predictors of the resolution of the cognitive and linguistic conflicts. The BDL task is therefore suggested to serve as an experimental paradigm for exploring executive control and language control in bilingual speech comprehension. Experiment 2 investigated language lateralisation (i.e., ear preference) in bilingual speech comprehension: the participants in Experiment 1 had used their preferred ear, whereas participants in Experiment 2 used their non-preferred ear, whether left or right, in the BDL task. Comprehension was better through the preferred ear, indicating that there is a favourable ear-to-hemisphere route for understanding bilinguals' two languages. Most of the participants were found to be left-lateralised (i.e., right-eared) and some to be right-lateralised (i.e., left-eared), presumably depending on their L2 proficiency and WMC. Experiment 3 was concerned with auditory attentional control and explored whether there would be a right-ear advantage (REA). The participants showed an REA whether the attended and unattended languages were L1 or L2. When they listened to Japanese in the left ear, they found it more difficult to suppress Japanese than English in the right ear. WMC was not required as much as expected for auditory attentional control, probably because the passages in Experiment 3 did not yield as much semantic competition as those in Experiment 1. L2 proficiency was crucial for resolving within- and between-language competition in each ear. Experiments 4, 5, and 6 replicated Experiments 1, 2, and 3, respectively, but added the effect of note-taking, which is commonly performed in everyday listening situations.
Note-taking contributed to better performance and to a clearer understanding of the role of WMC in bilingual speech comprehension. A cross-experimental analysis of Experiments 1, 2, 4, and 5 revealed not only a facilitatory role of note-taking in bilingual listening comprehension in general, but also a hampering role when listening through the preferred ear. Experiment 7 addressed the effect of the predictability of language switching by presenting L1 and L2 in a systematic order while attention was switched between ears, and comparing the results with those of Experiment 6, where language switching was unpredictable. The effect of the predictability of language switching differed between ears. When language switches were predictable, higher comprehension was observed in the left ear than in the right ear; when language switches were unpredictable, higher comprehension was observed in the right ear than in the left ear, suggesting a mechanism of asymmetrical language control. WMC was more strongly related to the processing of predictable than of unpredictable language switches. The dissertation ends with a discussion of the implications of the seven BDL experiments and possible applications, along with experimental techniques from other relevant disciplines that might be used in future research to yield additional insight into how bilingual listeners sustain their listening performance in their two languages in real-life situations.
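For readers who want a concrete picture of the design space described above, the sketch below enumerates a full crossing of the factors mentioned in the abstract (attended language, unattended language, semantic relatedness, attended ear). The factor names and the complete 2 x 2 x 2 x 2 crossing are an illustrative reconstruction; the individual experiments manipulated only subsets of these cells, and the thesis's actual trial lists are not reproduced here.

```python
from itertools import product

# Factor levels are labels only, taken from the abstract's description of the BDL task.
attended_language = ["L1 (Japanese)", "L2 (English)"]
unattended_language = ["L1 (Japanese)", "L2 (English)"]
semantic_relation = ["related", "unrelated"]
attended_ear = ["left", "right"]

conditions = [
    {
        "attended_language": al,
        "unattended_language": ul,
        "semantic_relation": sr,
        "attended_ear": ear,
    }
    for al, ul, sr, ear in product(
        attended_language, unattended_language, semantic_relation, attended_ear
    )
]

# 2 x 2 x 2 x 2 = 16 cells in the full crossing.
print(len(conditions))
for cell in conditions[:4]:
    print(cell)
```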
3. The Impact of Degraded Speech and Stimulus Familiarity in a Dichotic Listening Task
Sinatra, Anne M., 1 January 2012
It has previously been established that when engaged in a difficult, attention-intensive task that involves repeating information while blocking out other information (the dichotic listening task), participants are often able to report hearing their own names in an unattended audio channel (Moray, 1959). This phenomenon, called the cocktail party effect, occurs because words that are important to oneself have a lower threshold, so less attention is necessary to process them (Treisman, 1960). The current studies examined the ability of a person engaged in an attention-demanding task to hear and recall low-threshold words from a fictional story. These low-threshold words included a traditional alert word, "fire," and fictional character names from a popular franchise, Harry Potter. Further, the role of stimulus degradation was examined by including synthetic and accented speech in the task to determine how it would impact attention and performance. In Study 1, participants repeated passages from a novel that was largely unfamiliar to them, The Secret Garden, while blocking out a passage from a much more familiar source, Harry Potter and the Deathly Hallows. Each unattended Harry Potter passage was edited so that it included four names from the series and the word "fire" twice. The type of speech present in the attended and unattended ears (Natural or Synthetic) was varied to examine the impact that processing degraded speech would have on performance. The speech type that the participant shadowed did not impact unattended recall; however, it did impact shadowing accuracy. The speech type present in the unattended ear did impact the ability to recall low-threshold Harry Potter information. When the unattended speech type was synthetic, significantly less Harry Potter information was recalled. Interestingly, while Harry Potter information was recalled by participants with both high and low Harry Potter experience, the traditional low-threshold word "fire" was not noticed by participants. Study 2 was designed to determine whether synthetic speech impeded the ability to report low-threshold Harry Potter names because it was degraded or simply because it differed from natural speech. In Study 2, the attended (shadowed) speech was held constant as American Natural speech, and the unattended ear was manipulated. An accent different from the participants' native accent was included as a mild form of degradation. There were four experimental stimuli, each containing one of the following in the unattended ear: American Natural, British Natural, American Synthetic, or British Synthetic. Overall, more unattended information was reported when the unattended channel was Natural than Synthetic. This implies that synthetic speech requires more working memory processing power than even accented natural speech. Further, it was found that experience with the Harry Potter franchise played a role in the ability to report unattended Harry Potter information. Those who had high levels of Harry Potter experience, particularly with the audiobooks, were able to process and report Harry Potter information from the unattended stimulus when it was British Natural, while those with low Harry Potter experience were not able to report unattended Harry Potter information from this slightly degraded stimulus.
Therefore, it is believed that the previous audiobook experience of those in the high Harry Potter experience group acted as training and resulted in less working memory being necessary to encode the unattended Harry Potter information. A pilot study was designed to examine the impact of story familiarity in the attended and unattended channels of a dichotic listening task. In the pilot study, participants shadowed a Harry Potter passage (familiar) in one condition with a passage from The Secret Garden (unfamiliar) playing in the unattended ear. A second condition had participants shadowing The Secret Garden (unfamiliar) with a passage from Harry Potter (familiar) present in the unattended ear. There was no significant difference in the number of unattended names recalled. Those with low Harry Potter experience reported significantly less attended information when they shadowed Harry Potter than when they shadowed The Secret Garden. Further, there appeared to be a trend such that those with high Harry Potter experience reported more attended information when they shadowed Harry Potter than when they shadowed The Secret Garden. This implies that experience with a franchise and its characters may make it easier to recall information about a passage, while lack of experience provides no assistance. Overall, the results of the studies indicate that we treat fictional characters in much the same way as we treat ourselves. Names and information about fictional characters were able to break through into attention during a task that required a great deal of attention. The experience one had with the characters also served to assist working memory in processing the information under degraded circumstances. These results have important implications for training, the design of alerts, and the use of popular media in the classroom.
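The sketch below lays out the four unattended-channel conditions of Study 2 and a simple way to score how many low-threshold target words appear in a participant's free report. The specific target names and the tokenised-matching scoring rule are hypothetical illustrations, not the study's actual materials or scoring procedure.

```python
# Study 2 held the attended channel constant (American Natural speech)
# and varied only the unattended channel across these four conditions.
UNATTENDED_CONDITIONS = [
    ("American", "Natural"),
    ("British", "Natural"),
    ("American", "Synthetic"),
    ("British", "Synthetic"),
]

# Hypothetical low-threshold targets embedded in the unattended passage
# (the abstract names only "fire"; the character names here are placeholders).
TARGET_WORDS = {"harry", "ron", "hermione", "dumbledore", "fire"}

def score_unattended_recall(free_report: str) -> int:
    """Count how many distinct low-threshold targets appear in a free report."""
    reported = {token.strip(".,!?").lower() for token in free_report.split()}
    return len(reported & TARGET_WORDS)

print(score_unattended_recall("I think I heard Harry and something about fire"))  # -> 2
```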
4. The role of semantic context and attentional resource distribution in semantic comprehension in Swedish pre-schoolers
Schelhaas, Johanna Renate, January 2016
Research on semantic processing has focused mainly on isolated linguistic units, which does not reflect the complexity of language. In order to understand how semantic information is processed in a wider context, the first goal of this thesis was to determine whether Swedish pre-school children are able to comprehend semantic context and whether that context is built up semantically over time. The second goal was to investigate how the brain distributes attentional resources, as indexed by brain activation amplitude and processing type. Swedish pre-school children were tested in a dichotic listening task with longer children's narratives. The development of the N400 event-related potential component and its amplitude were used to investigate both goals. The decrease of the N400 in both the attended and unattended channels indicated semantic comprehension and that semantic context was built up over time. The attended stimulus received more resources, was processed in more of a top-down manner, and displayed a more prominent N400 amplitude than the unattended stimulus. The N400 and the late positivity were more complex than expected, since the endings of utterances longer than nine words were not accounted for. More research on wider linguistic contexts is needed in order to understand how the human brain comprehends natural language. / Previous research on semantic processing has focused on isolated linguistic units, which does not reflect the complexity of language. In order to understand how semantic information is processed in a wider context, the first aim of this study was to investigate whether Swedish pre-school children are able to comprehend semantic context and whether this context is built up over time. The second aim was to investigate how the brain distributes attentional resources in terms of brain activation amplitude and different processing types. For this purpose, Swedish pre-school children were tested in a dichotic listening task with different children's stories. The development of the N400 component, an event-related potential, was used for this. The decrease of the N400 component and the late positivity was observed in both the attended and unattended channels, indicating semantic comprehension and that semantic context was built up over time. In addition, a larger N400 amplitude was observed in the attended channel, indicating that it received more brain resources and relied on top-down processing to a greater extent than bottom-up processes. The N400 component and the late positivity turned out to be more complex than expected, possibly because the final words of utterances longer than nine words were excluded from the analysis. There is a need for research that uses longer linguistic contexts and examines their effects in the human brain.
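As a rough illustration of the dependent measure, the sketch below computes mean amplitude in a typical N400 window (300-500 ms after word onset) from already-epoched single-channel EEG and compares early versus late epochs to index a decrease over time. The time window, sampling rate, baseline length, and simulated data are assumptions; the thesis's actual preprocessing and analysis window are not given in the abstract.

```python
import numpy as np

FS = 500          # sampling rate in Hz (assumed)
BASELINE_S = 0.2  # each epoch starts 200 ms before word onset (assumed)

def n400_mean_amplitude(epochs: np.ndarray,
                        window=(0.300, 0.500)) -> np.ndarray:
    """Mean amplitude per epoch in the N400 window.

    epochs: array of shape (n_epochs, n_samples), one channel,
            time-locked to word onset with a 200 ms baseline.
    """
    start = int((BASELINE_S + window[0]) * FS)
    stop = int((BASELINE_S + window[1]) * FS)
    return epochs[:, start:stop].mean(axis=1)

# Simulated data: 40 word-locked epochs of 1 s (baseline included).
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 1.0, size=(40, int(1.0 * FS)))

amps = n400_mean_amplitude(epochs)
early, late = amps[:20].mean(), amps[20:].mean()
print(f"early N400 amplitude: {early:.2f}  late: {late:.2f}")
```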