321

Word-form recognition in 6-month-olds? Using event-related potentials to study the influence of infant-directed speech

Sand Aronsson, Bente January 2023 (has links)
By 4.5 months of age, infants listen longer to their own names than to matched foils, which is the earliest empirically demonstrated sign of word-form recognition. This ability develops gradually over the first year of life and becomes increasingly advanced. The present study investigated word-form recognition in 6-month-olds using event-related potentials (ERPs). To date, few studies have demonstrated word-form recognition at this age, and only one has presented electrophysiological evidence. In addition, the present study investigated the effect of speech register on word-form recognition. Studies on language acquisition indicate that the adjustments adults and older children make when interacting with infants are relevant for language learning. This speech register, commonly referred to as infant-directed speech (IDS), differs from adult-directed speech (ADS) in several respects. Previous studies on word-form recognition have typically not compared recognition effects for word forms familiarized (i.e., trained) in IDS with word forms familiarized in ADS; the present study did. No recognition effects were found for either IDS or ADS, and there were no differences in ERP responses between word forms familiarized in IDS and those familiarized in ADS. The main conclusion is that word-form recognition is still unstable at 6 months.
322

Error Awareness and Apathy in Moderate-to-Severe Traumatic Brain Injury

Logan, Dustin Michael 01 June 2014 (has links) (PDF)
Moderate-to-severe traumatic brain injury (M/S TBI) is a growing public health concern with significant impact on the cognitive functioning of survivors. Cognitive control and deficits in awareness have been linked to poor recovery and rehabilitation outcomes. One way to study cognitive control is through awareness of errors, using electroencephalography and event-related potentials (ERPs). Both the error-related negativity (ERN) and the post-error positivity (Pe) components of the ERP are linked to error awareness and cognitive control processes. Attentional capacity and levels of apathy influence error awareness in those with M/S TBI, and there are strong links between awareness, attention, and apathy. However, limited research has used electrophysiological indices of error awareness to examine the roles of attention, awareness, and apathy in cognitive control in an M/S TBI sample. The current study sought to elucidate the role of apathy in error awareness in those with M/S TBI. Participants included 75 neurologically healthy controls (divided randomly into two control groups) and 24 individuals with M/S TBI. All participants completed self-report measures of mood, apathy, and executive functioning, as well as a brief neuropsychological battery measuring attention and cognitive ability. To measure awareness, participants completed the error awareness task (EAT), a modified Stroop go/no-go task in which participants signaled awareness of errors committed on the previous trial. Over time, the M/S TBI group decreased in accuracy while improving or maintaining error awareness relative to controls. There were no significant between-group differences in ERN and Pe amplitudes. Levels of apathy in the M/S TBI group were entered into three multiple regression analyses predicting the proportion of unaware errors, ERN amplitude, and Pe amplitude. Apathy was predictive of error awareness, although not in the predicted direction. 
The major analyses were replicated with two distinct control groups to check for sample effects, and results were consistent across both comparisons. The findings show variable levels of awareness and accuracy over time in those with M/S TBI relative to controls: awareness of errors improved as they happened, but participants were unable to regulate performance sufficiently to improve accuracy. Levels of apathy play a role in error awareness, although not in the predicted directions. The study supports the role of attentional impairments in error awareness and encourages future studies to look for varying levels of performance within a given task when studying populations linked to elevated levels of apathy and attentional deficits.
323

Brain Mapping of the Mismatch Negativity Response to Vowel Variances of Natural and Synthetic Phonemes

Smith, Lyndsy Marie 26 November 2013 (has links) (PDF)
The mismatch negativity (MMN) is a specific event-related potential (ERP) component frequently used to observe auditory processing. The MMN is elicited by a deviant stimulus presented randomly among repeating stimuli. The current study used the MMN response to examine the temporal (timing) and linguistic processing of natural and synthetic vowel stimuli. It was hypothesized that a significant MMN response would be elicited by both natural and synthetic vowel stimuli; that brain mapping of the MMN response would yield temporal resolution information detailing sequential processing differences between natural and synthetic vowel stimuli; and that the location of dipoles within the cortex would reveal differences in the cortical localization of processing for natural versus synthetic stimuli. Vowel stimuli were presented to twenty participants (10 females and 10 males, aged 18 to 26 years) in a three-forced-choice response paradigm. Behavioral responses, reaction times, and ERPs were recorded for each participant. Results demonstrated differences in the behavioral and electrophysiological responses to natural and synthesized vowels in young, normal-hearing adults. Significant MMN responses were evoked by both natural and synthetic vowel stimuli. Reaction times were longer for the synthetic vowel phonemes than for the natural vowel phonemes. Electrophysiological differences were seen primarily in the processing of the synthetic /u/ stimuli. The scalp distribution of cognitive processing was essentially the same for naturally produced phonemes. Processing of synthetic phonemes showed similar scalp distributions; however, the synthetic /u/ phoneme required more complex processing than the synthetic /æ/ phoneme. 
The most significant processing localizations were in the superior temporal gyrus, which is known for its role in linguistic processing. Continued processing in the frontal lobe was observed, suggesting ongoing evaluation of natural and synthetic phonemes throughout processing.
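The deviant-minus-standard logic behind the MMN can be illustrated with a small sketch; the trial counts, sampling, and simulated negativity below are hypothetical, showing only how a difference wave is computed from trial-averaged epochs:

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs):
    """Deviant-minus-standard difference wave from single-trial epochs.

    Each input is shaped (n_trials, n_samples); averaging across trials
    yields the ERPs, and their difference is the MMN waveform.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Toy data: deviants carry an extra negativity peaking at sample 40.
rng = np.random.default_rng(0)
n_samples = 100
samples = np.arange(n_samples)
standard = rng.normal(0.0, 1.0, size=(200, n_samples))
negativity = -2.0 * np.exp(-0.5 * ((samples - 40) / 5.0) ** 2)
deviant = rng.normal(0.0, 1.0, size=(50, n_samples)) + negativity

mmn = mismatch_negativity(standard, deviant)
print(int(mmn.argmin()))  # most negative deflection lands near sample 40
```

In real data the same subtraction is done per channel on baseline-corrected epochs; the toy arrays here only make the averaging step explicit.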
324

Robust Deep Learning Under Application Induced Data Distortions

Rajeev Sahay (10526555) 21 November 2022 (has links)
Deep learning has been increasingly adopted in a multitude of settings. Yet, its strong performance relies on processing data during inference that is in-distribution with its training data. Input data during deployment, however, is not guaranteed to be in-distribution with the model's training data and can often be distorted, either intentionally (e.g., by an adversary) or unintentionally (e.g., by a sensor defect), leading to significant performance degradation. In this dissertation, we develop algorithms for a variety of applications to improve the performance of deep learning models in the presence of distorted data. We begin by designing feature engineering methodologies that increase classification performance in noisy environments; we demonstrate the efficacy of the proposed algorithms on two target detection tasks and show that our framework outperforms a variety of state-of-the-art baselines. Next, we develop mitigation algorithms to improve the performance of deep learning in the presence of adversarial attacks and nonlinear signal distortions, demonstrating their effectiveness on a variety of wireless communications tasks including automatic modulation classification, power allocation in massive MIMO networks, and signal detection. Finally, we develop an uncertainty quantification framework that produces distributive estimates, as opposed to point predictions, from deep learning models in order to characterize samples with uncertain predictions as well as samples that are out-of-distribution from the model's training data. The uncertainty quantification framework is evaluated on a hyperspectral image target detection task as well as on a counter unmanned aircraft systems (cUAS) model. Ultimately, our proposed algorithms improve the performance of deep learning in several environments in which the data seen during inference has been distorted out-of-distribution from the training data.
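The dissertation's own uncertainty-quantification framework is not detailed in the abstract; as a generic illustration of how a model can produce distributive estimates instead of point predictions, here is a Monte-Carlo dropout sketch. The network, its weights, and the inputs are all toy assumptions, not the author's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer network with fixed random weights stands in for a
# trained model; the shapes are arbitrary, for illustration only.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def mc_dropout_predict(x, n_passes=200, p_drop=0.5):
    """Keep dropout active at inference and collect a distribution of
    outputs; the spread of that distribution serves as an uncertainty
    estimate."""
    outputs = []
    for _ in range(n_passes):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop    # random dropout mask
        h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
        outputs.append((h @ W2).item())
    outputs = np.asarray(outputs)
    return outputs.mean(), outputs.std()

x_in = rng.normal(size=(1, 8))          # input at the "training" scale
x_ood = 10.0 * rng.normal(size=(1, 8))  # grossly out-of-scale input

_, std_in = mc_dropout_predict(x_in)
_, std_ood = mc_dropout_predict(x_ood)
print(std_ood > std_in)  # the out-of-scale input spreads the predictions more
```

Thresholding such a predictive spread is one common way to flag out-of-distribution samples, in the same spirit as the framework described above.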
325

Electrophysiological evidence for the integral nature of tone in Mandarin spoken word recognition

Ho, Amanda 11 1900 (has links)
Current models of spoken word recognition have been based predominantly on studies of Indo-European languages. As a result, little is known about the recognition processes involved in the perception of tonal languages (e.g., Mandarin Chinese) or about the role of lexical tone in speech perception. One view is that tonal languages are processed phonologically through individual segments; another is that they are processed lexically as a whole. Moreover, a recent study claimed to be the first to discover an early phonological processing stage in Mandarin (Huang et al., 2014). Investigations of tonal languages remain scarce: no clear conclusions have been reached about the nature of tonal processing or about which model of spoken word recognition best incorporates lexical tone. The current study addressed these issues by presenting 18 native Mandarin speakers with aural sentences containing medial target words that either matched or mismatched the medial target words of preceding visually presented sentences (e.g., 家 /jia1/ “home”). Violation conditions involved target words that differed in the following ways: tone violations, where only the tone differed (e.g., 价 /jia4/ “price”); onset violations, where only the onset differed (e.g., 虾 /xia1/ “shrimp”); and syllable violations, where both the tone and the onset differed (e.g., 糖 /tang2/ “candy”). We did not find evidence for an early phonological processing stage in Mandarin. Instead, our findings indicate that Mandarin syllables are processed incrementally through phonological segments and that lexical tone is strongly associated with semantic access. These results are discussed with respect to modifying existing models of spoken word recognition to incorporate the processes involved in tonal language recognition. / Thesis / Master of Science (MSc)
326

ERP Analyses of Perceiving Emotions and Eye Gaze in Faces: Differential Effects of Motherhood and High Autism Trait

Bagherzadeh-Azbari, Shadi 08 May 2023 (has links)
Eye gaze and its direction are important non-verbal cues for establishing social interactions and perceiving others’ emotional facial expressions. Gaze direction itself, whether the eyes look straight at the viewer (direct gaze) or away (averted gaze), affects our social attention and emotional responses. This implies that both emotion and gaze carry informational value, and that the two might interact at early or later stages of neurocognitive processing. Despite a proposed theoretical basis for this interaction, the shared signal hypothesis (Adams & Kleck, 2003), there is a lack of structured electrophysiological investigation into the interactions between emotion and gaze, their neural correlates, and how they vary across populations. Addressing this need, the present doctoral dissertation used event-related brain potentials (ERPs) to study responses to emotional expressions and gaze direction in a novel paradigm combining static and dynamic gaze with facial expressions. The N170 and EPN were selected as ERP components believed to reflect gaze perception and reflexive attention, respectively. Three different populations were investigated. Study 1, in a typical adult sample, examined the amplitudes of the ERP components elicited by the initial presentation of faces and by subsequent changes of gaze direction in half of the trials. Study 2, motivated by the atypical face processing and diminished responses to eye gaze in autism, examined ERPs and eye movements in two samples of children varying in the severity of their autism traits. 
Study 3, in a large sample, addressed the putatively increased sensitivity of emotion processing and responses to eye gaze in mothers during the postpartum period, with a particular focus on infants’ faces. Taken together, the results of the three studies demonstrate that in social interactions, the emotional effects of faces are modulated by dynamic gaze direction.
327

The Brain Differentially Prepares Inner and Overt Speech Production: Electrophysiological and Vascular Evidence

Stephan, Franziska, Saalbach, Henrik, Rossi, Sonja 13 April 2023 (has links)
Speech production relies not only on spoken output (overt speech) but also on silent output (inner speech). Little is known about whether inner and overt speech are processed differently and which neural mechanisms are involved. By simultaneously applying electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), we tried to disentangle executive control from motor and linguistic processes. In addition to examining overt and inner speech directly during naming (i.e., speech execution), a preparation phase was introduced: participants completed a picture-naming paradigm in which the pure preparation phase of a subsequent speech production could be differentiated from the actual speech execution phase. fNIRS results revealed larger activation for overt than for inner speech at bilateral prefrontal to parietal regions during the preparation phase and at bilateral temporal regions during the execution phase. EEG results showed a larger negativity for inner compared to overt speech between 200 and 500 ms during the preparation phase and between 300 and 500 ms during the execution phase. Findings from the preparation phase indicate that differences between inner and overt speech are not exclusively driven by specific linguistic and motor processes but are also affected by inhibitory mechanisms. Results from the execution phase suggest that inhibitory processes operate during phonological code retrieval and encoding.
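EEG effects like the 200–500 ms negativity reported above are typically quantified as the mean amplitude inside a latency window; the sampling rate and simulated waveform in this sketch are assumptions for illustration, not the study's data:

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean ERP amplitude inside a latency window (window in seconds)."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return erp[mask].mean()

fs = 500.0  # assumed sampling rate in Hz
times = np.arange(-0.2, 0.8, 1.0 / fs)
# Toy ERP: a negativity peaking around 350 ms, inside the 200-500 ms window.
erp = -3.0 * np.exp(-0.5 * ((times - 0.35) / 0.08) ** 2)

print(round(mean_amplitude(erp, times, (0.2, 0.5)), 2))
```

Condition differences (e.g., inner vs. overt speech) would then be tested statistically on these per-participant window means.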
328

Inner versus Overt Speech Production: Does This Make a Difference in the Developing Brain?

Stephan, Franziska, Saalbach, Henrik, Rossi, Sonja 13 April 2023 (has links)
Studies in adults have shown differential neural processing of overt and inner speech. So far, it is unclear whether inner and overt speech are processed differentially in children. The present study examined the pre-activation of the speech network in order to disentangle domain-general executive control from linguistic control of inner and overt speech production in 6- to 7-year-olds, by simultaneously applying electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Children underwent a picture-naming task in which the pure preparation of a subsequent speech production could be differentiated from the actual execution of speech. The preparation phase does not represent speech per se but resembles the setting up of the language production network. Only fNIRS revealed a larger activation for overt, compared to inner, speech over bilateral prefrontal to parietal regions during the preparation phase. The findings suggest that the children’s brain can prepare the subsequent speech production, and that preparing overt versus inner speech engages domain-general executive control differently. In contrast to adults, the children’s brain did not show differences between inner and overt speech once concrete linguistic content was present and actual execution was required. This might indicate that domain-specific executive control processes are still under development.
329

Emotions in visual word processing: time course and boundary conditions

Schacht, Annekathrin 08 February 2008 (has links)
In recent cognitive and neuroscientific research, the influence of emotion on information processing is of special interest. As shown in several studies of affective picture processing and facial emotional expression processing, emotional stimuli tend to draw attentional resources involuntarily and receive preferential and sustained processing, possibly because of their high intrinsic relevance. However, evidence for emotion effects in visual word processing is scant and heterogeneous, and little is known about at which stage, and under what conditions, the specific emotional content of a word is activated. A series of experiments, summarized and discussed here, aimed to localize the effects of the emotional valence of German verbs within the word processing stream by recording event-related potentials (ERPs). Distinct effects of emotional valence on ERPs were found that were distinguishable with regard to their temporal and spatial distribution and might therefore be related to different stages within the processing stream. As a main result, the present findings indicate that the activation of the emotional valence of verbs occurs at a (post-)lexical stage. The neural mechanisms underlying this early registration appear to be domain-unspecific and largely independent of processing resources and task demands. At later stages, emotional processes are modulated by several different factors. Furthermore, the finding that early, but not late, emotion effects were accelerated by non-valent context information and depended on stimulus domain indicates a flexible, temporally variable dynamic of emotional processing that is hard to reconcile with strictly serial processing models and that may serve flexible behavioral adaptation to varying environmental conditions.
330

Prédictions dans le domaine auditif : études électrophysiologiques

Simal, Amour 06 1900 (has links)
The goal of this thesis was to study predictive processes in the auditory domain and to identify the electrophysiological signatures associated with them. We used electroencephalography to measure the electrical activity of the brain, together with the event-related potentials (ERP) technique, which allows brain activity related to processes of interest to be measured with millisecond precision. The great majority of existing studies examine predictions only indirectly, by observing prediction-error or prediction-confirmation signals. In contrast, we developed novel paradigms that create contexts in which one or more specific auditory stimuli allow predictions to be generated. In a first study, we showed that a tone allowing the prediction of subsequent tones elicits larger auditory ERP components, the N1 and especially the P2, at frontocentral electrodes. In a second study, we showed that this modulation is driven by a marked change in oscillatory activity in the theta frequency band, between 4 and 7 Hz. A third study focused on ERP activity related to temporal predictions. By creating contextual rhythmic patterns within the experiment, we observed that a tone that identifies the current pattern, and thus allows the listener to anticipate when subsequent tones will occur, generates an early positivity, even when the rhythmic pattern is irrelevant to the task. These studies are a proof of concept and lay a solid foundation for further research on the dynamic processes underlying the generation of predictions.
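Isolating the 4–7 Hz theta band mentioned in the second study can be sketched with a simple FFT-mask band-pass; the sampling rate and toy signal are assumptions, and real EEG pipelines would typically use a proper filter (e.g., a zero-phase Butterworth) instead:

```python
import numpy as np

def theta_band(signal, fs, low=4.0, high=7.0):
    """Crude band-pass: zero every FFT bin outside the 4-7 Hz theta band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 250.0  # assumed sampling rate in Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
# Toy signal: a 5 Hz theta component mixed with a 40 Hz gamma component.
sig = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 40 * t)
theta = theta_band(sig, fs)

# Only the 5 Hz component should survive the band-pass.
print(bool(np.abs(theta - np.sin(2 * np.pi * 5 * t)).max() < 1e-6))
```

Band power or phase extracted from such a filtered trace is what oscillatory analyses in this frequency band typically quantify.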
