691

EEG koreláty egocentrických a allocentrických odhadů vzdáleností ve virtuálním prostředí u lidí / EEG correlates of egocentric and allocentric distance estimates in virtual environment in humans

Kalinová, Jana January 2019 (has links)
Cognitive processes associated with spatial orientation can use different reference frames: egocentric, centered on the observer, and allocentric, centered on objects in the environment. In this thesis, we use EEG to investigate the dynamics of brain processes accompanying spatial orientation based on these reference frames. Participants were instructed to estimate distances between objects, or between themselves and objects, located in a virtual circular arena; the task was presented on both 2D and 3D displays. Task-related EEG changes were analyzed using time-frequency analysis and event-related potential analysis of 128-channel EEG recordings. Time-frequency analysis revealed significant power differences in the delta, theta, alpha, beta, and gamma bands among the control, egocentric, and allocentric testing conditions. We noted a decrease in alpha power in occipital and parietal regions, with a significantly stronger decrease for the allocentric condition compared to both the egocentric and control conditions. A similar pattern was detectable for the beta band. We also report an increase in theta and delta power in temporal, fronto-temporal, and lateral frontal regions that was significantly stronger for the egocentric condition compared to control and, in some electrodes, even...
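To illustrate the kind of band-power contrast reported above, the sketch below computes per-channel band power with Welch's method and compares two conditions; the epoch arrays, sampling rate, and band edges are hypothetical placeholders, not the thesis's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power(epochs, band, fs=FS):
    """Mean power in a frequency band.

    epochs: array of shape (n_epochs, n_channels, n_samples).
    Returns band power per channel, averaged over epochs.
    """
    lo, hi = BANDS[band]
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= lo) & (freqs < hi)
    # integrate the PSD over the band, then average across epochs
    return np.trapz(psd[..., mask], freqs[mask], axis=-1).mean(axis=0)

# Hypothetical epoched data for two conditions (e.g., allocentric vs. control)
rng = np.random.default_rng(0)
allo = rng.standard_normal((40, 128, 2 * FS))
ctrl = rng.standard_normal((40, 128, 2 * FS))

# Alpha desynchronization would appear as lower power in the task condition
alpha_diff = band_power(allo, "alpha") - band_power(ctrl, "alpha")
print("channels with alpha decrease:", int((alpha_diff < 0).sum()))
```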
692

Wavelet analysis of EEG signals as a tool for the investigation of the time architecture of cognitive processes

Der, Ralf, Steinmetz, Ulrich 15 July 2019 (has links)
Cognitive processes rely heavily on a dedicated spatio-temporal architecture of the underlying neural system - the brain. The spatial aspect is substantiated by modularization, as has been brought to light in much detail by recent sophisticated neuroimaging investigations. The temporal aspect is less well investigated, although the role of time is prominent in several approaches to understanding the organization of information processing in the brain. By way of example we mention (i) the synchronization hypothesis for the resolution of the binding problem, cf. [5], [4], [3], and (ii) the efforts to relate the information contained in observed spike rates back to the neuronal mechanisms underlying the cognitive event. In particular, in Refs. [1], [2] Amit et al. tried to bridge the gap between the Miyashita data [10] and the hypothesis that associative memory is realized by the (strange) attractor states of dynamical systems.
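As a minimal sketch of the wavelet time-frequency analysis the abstract refers to, the following computes a Morlet continuous wavelet transform of a single toy "EEG" trace with PyWavelets; the signal, sampling rate, and frequency grid are illustrative assumptions.

```python
import numpy as np
import pywt

fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)   # 2 s of toy "EEG": a 10 Hz burst in noise
x = 0.5 * np.random.randn(t.size)
x[t > 1] += np.sin(2 * np.pi * 10 * t[t > 1])

# Map target frequencies (2-40 Hz) onto Morlet wavelet scales
freqs_target = np.linspace(2, 40, 60)
fc = pywt.central_frequency("morl")
scales = fc * fs / freqs_target

# Continuous wavelet transform; power = squared magnitude of coefficients
coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
power = np.abs(coefs) ** 2

# The 10 Hz burst should dominate the second half of the recording
row = np.argmin(np.abs(freqs - 10))
print("10 Hz power, first vs second half:",
      power[row, : t.size // 2].mean(), power[row, t.size // 2:].mean())
```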
693

The Effects of Neurocognitive Aging on Sentence Processing

Beese, Caroline 16 December 2019 (has links)
Across the lifespan, successful language comprehension is crucial for continued participation in everyday life. The success of language comprehension relies on the intact functioning of language-specific processes as well as of the domain-general cognitive processes that support language comprehension in general. This two-sided nature of successful language comprehension may contribute to two diverging observations in healthy aging: the preservation and the decline of language comprehension, on both the cognitive and the neural level. To date, our understanding of these two competing facets is incomplete. While language experience grows with increasing age, most domain-general cognitive functions, like verbal working memory, decline in healthy aging. The thesis presented here shows that when the electrophysiological network relevant for verbal working memory is already compromised at rest, language comprehension declines in older adults. Moreover, it could be shown that, as verbal working memory capacity declines with age, resources may become insufficient to successfully encode language-specific information into memory, yielding language comprehension difficulties in old age. Age differences in the electrophysiological dynamics underlying sentence encoding indicate that the encoding of detailed information may increasingly be inhibited throughout the lifespan, possibly to avoid overloading verbal working memory. However, limitations in verbal working memory could be attenuated by the use of language-specific constraints. That is, semantic and syntactic constraints can be used to establish relations between words, which reduces the memory load from information about individual words to information about word groups. Here, it was found that older adults do not benefit from syntactic constraints as much as younger adults, while the benefit of semantic constraints was comparable across age. Overall, this thesis suggests that previous findings on language comprehension in healthy aging are not contradictory but rather converge on a simultaneous combination of selective preservation and decline of various language-specific processes, burdened by domain-general neurocognitive aging.
694

Auditory motion: perception and cortical response

Sarrou, Mikaella 10 April 2019 (has links)
Summary: The localization of sound sources in humans is based on the binaural cues, interaural time and level differences (ITDs, ILDs), and on the spectral cues (Blauert 1997). The ITDs relate to the timing of sound arrival at the two ears. For example, a sound located at the right side will arrive at the right ear earlier than at the left ear. The ILDs refer to the difference in sound pressure level between the two ears. In the example mentioned above, if the sound located at the right has a short wavelength, it will arrive at the right ear with higher sound pressure than at the left ear. This is because a sound with a short wavelength cannot bypass the head; the head creates an obstacle that diffracts the waves, which is why the ear closer to the sound source receives the sound with higher sound pressure. Due to the association of each binaural cue with the wavelength of a sound, Rayleigh (1907) proposed the 'duplex theory' of sound source localization, suggesting that, on the azimuth, the ITDs are the main localization cue for low-frequency sounds and the ILDs are the main localization cue for high-frequency sounds. The spectral cues are based on the shape of the pinna's folds; they are very useful for sound source localization in elevation, but they also help in azimuthal localization (Schnupp et al. 2012). The contribution of the spectral cues to azimuthal localization arises from the fact that, due to the symmetrical position of the ears on the head, the binaural cues vary symmetrically as a function of spatial location (King et al. 2001). Whereas the ITDs have a very symmetrical distribution, the ILDs become more symmetrical the higher the sound frequency. Thus, certain locations within the left-frontal and left-posterior hemifields, as well as the right-frontal and right-posterior hemifields, share the same binaural cues, which makes the binaural cues ambiguous, so the auditory system cannot depend solely on them for sound source localization. To resolve this ambiguity, our auditory system uses the spectral cues, which help disambiguate front-back confusion (King et al. 2001, Schnupp et al. 2012). The role of these cues in localizing sounds in our environment is well established, but their role in acoustic motion localization is not yet clear. This is the topic of the current thesis.

The auditory localization cues are processed at the subcortical and cortical levels. The ITDs and ILDs are processed by different neurons along the auditory pathway (Schnupp et al. 2012). Their parallel processing stages seem to converge at the inferior colliculus, as evidence from cat experiments shows (Chase and Young 2005). In humans, however, an electroencephalographic (EEG) study measuring the mismatch negativity (MMN; Schröger 1996) and a study using magnetoencephalography (MEG; Salminen et al. 2005) showed that these cues are not integrated. One model of the spatial representation of sound sources is Jeffress' place code (1948). This model suggests that each location in azimuthal space is encoded differently, hence the name 'place code'. Evidence in support of this model comes from studies on the cat (Yin and Chan 1990). However, arguments against this model come from studies in gerbils, whose subcortical neurons respond maximally to locations that lie outside the physiological range given the size of their heads (Pecka et al. 2008).
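To make the binaural-cue geometry above concrete, here is a small sketch computing the ITD with Woodworth's spherical-head approximation, ITD ≈ (r/c)(sin θ + θ); the head radius and azimuths are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (s) for a spherical head.

    Woodworth's approximation: ITD = (r / c) * (sin(theta) + theta),
    with theta the source azimuth in radians (0 = straight ahead).
    """
    theta = np.radians(azimuth_deg)
    return head_radius_m / c * (np.sin(theta) + theta)

# ITD grows from 0 at the midline to ~0.66 ms at 90 degrees azimuth
for az in (0, 30, 60, 90):
    print(f"azimuth {az:3d} deg -> ITD = {itd_woodworth(az) * 1e6:6.1f} us")
```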
An alternative model of auditory spatial encoding is the hemifield code (von Bekesy 1960). This model proposes that subcortical neurons are separated into two populations, one tuned to the left hemifield and another tuned to the right. Thus, the receptive fields of the neurons are wide, and the estimate of the sound source location is derived from the balance of activity between these two populations (a toy readout of this balance is sketched after this passage). Evidence from human studies supports this model. Salminen and colleagues (2009) employed an adaptation paradigm during MEG recording. They presented sets of adaptor and probe stimuli that either had the same or different spatial locations. Their results showed that the response to the probe was reduced more when the adaptor was located at the far-left location than when the adaptor and probe shared the exact same location. Also, an EEG study on auditory motion showed that sounds moving from central to lateral locations elicit higher amplitudes than sounds moving in the opposite direction (Magezi and Krumbholz 2010). The authors concluded that these results reflect the movement of the sound source towards the location of maximal neuronal activity (also in Salminen et al. 2012).

The ability to detect moving objects is well embedded in our nature. Whereas it equips predators and prey with the skills to survive, in everyday life it enables us to interact with our environment. For example, the task of crossing a street (without traffic signs) safely is based on the encoding of visual and auditory features of moving vehicles. In the visual modality, the capability of the system to encode motion is based on motion-specific neurons (Mather 2011). In the auditory modality, the debate over whether such sensors exist is still ongoing. One theory of how the auditory system encodes motion is the 'snapshot' theory (Chandler and Grantham 1991, Grantham 1986). In a series of experiments, Grantham (1986) showed that auditory perception was not affected by features of motion such as velocity, but was more sensitive to distance as a spatial cue. Thus, he suggested that the encoding of auditory motion is based on the mechanisms that encode stationary sounds. In other words, a moving sound activates the neurons corresponding to the points along its trajectory, but in a serial manner. This way, the perception of auditory motion is based on 'snapshots' instead of processing motion as a complete feature. This mechanism of auditory motion processing is consistent with Jeffress' place code (1948). Animal studies on monkeys (Ahissar et al. 1992) and owls (Wagner et al. 1994) showed that neurons responded similarly to moving and stationary sounds. Evidence against this theory comes from a recent behavioural study that introduced velocity changes within acoustic motion and showed that participants were able to detect them (Locke et al. 2016). The authors concluded that if the 'snapshot' theory were true, these detections of velocity change would not occur. Another theory of auditory motion evolved that supports motion-specific mechanisms in the brain (Warren et al. 2002, Ducommun et al. 2004, Poirier et al. 2017). A human study using functional magnetic resonance imaging (fMRI) and positron-emission tomography (PET) showed evidence of a motion-specific cortical network that includes the planum temporale and the parietotemporal operculum (Warren et al. 2002).
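Referring back to the hemifield code at the top of this passage: below is a toy sketch of an opponent-channel readout, in which azimuth is estimated from the balance of two broadly tuned populations. The tuning curves and decoding rule are illustrative assumptions, not a model from the thesis.

```python
import numpy as np

def population_rate(azimuth_deg, preferred_side, slope_deg=30.0):
    """Firing rate (0..1) of one broadly tuned hemifield population.

    A sigmoid of azimuth: the 'right' population fires more for sources
    on the right; the 'left' population mirrors it.
    """
    sign = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + np.exp(-sign * azimuth_deg / slope_deg))

def decode_azimuth(azimuth_deg):
    """Location estimate from the balance of the two populations."""
    r = population_rate(azimuth_deg, "right")
    l = population_rate(azimuth_deg, "left")
    return (r - l) / (r + l)  # -1 (far left) .. +1 (far right)

# The balance varies smoothly and monotonically with azimuth
for az in (-90, -30, 0, 30, 90):
    print(f"azimuth {az:4d} deg -> balance {decode_azimuth(az):+.2f}")
```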
The authors suggested that these areas are part of a posterior processing stream responsible for the analysis of auditory moving objects. Moreover, a recent primate fMRI study provided evidence of motion specificity in the activity of the posterior belt and parabelt regions of the auditory cortex (Poirier et al. 2017). The authors contrasted the cortical response to auditory motion with stationary and spectrotemporally modulated sounds and found that the aforementioned cortical areas were activated only by moving sounds. All in all, the neuronal mechanism underlying auditory motion perception has been only vaguely described. However, a growing body of evidence shows that specialized motion areas and mechanisms exist in the cortex. To study how exactly these mechanisms function, it is important to know which aspects of the stimulus paradigm affect the response.

Study 1. In this study, I focused on eliciting the cortical motion-onset response (MOR) in the free field. This specific response is measured with EEG and is elicited when sound motion follows a stationary sound without any temporal gap between them. The stationary part serves as an adapting sound, and the onset of motion provides a release from adaptation, which gives rise to the MOR. One focus was to investigate the effect on the MOR when the initial part moves in space instead of being stationary. A secondary focus was the effect of stimulus frequency on the MOR. I hypothesized that, due to the adaptation provided by the initial stimulus part, the motion response would be smaller after moving than after stationary adaptation. I also expected that the effects of frequency would follow the literature: since the motion response is a late response, the amplitude would be smaller after high-frequency than after low-frequency stimulus presentation. The results showed that the current paradigm did not elicit the MOR. Comparison of the current experimental settings with those used previously in the literature showed that the MOR depends strongly on the adaptation time provided by the first part of the stimulus.

Study 2. In this study, the stimulus characteristics were adapted after the failure to elicit the response in the previous study. In addition, I employed an active instead of a passive paradigm, since data from the literature show that the motion response is strongly dependent on the allocation of attention to auditory motion. With these changes, the elicitation of the MOR was successful. This study examined the modulation of the MOR by the frequency range of the sound stimuli. A higher motion-response amplitude was expected after the presentation of stimuli with a high-frequency spectrum. I also studied the effects of hemifield presentation and the direction of motion on the MOR. The results showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with stronger amplitudes elicited by stimuli with a high frequency range.

Study 3. This study focused on further analysis of the data collected in the previous study, here concentrating on the effects of the stimulus paradigm on the MOR. I hypothesized that, after the adaptation provided by an initial moving part, a lower amplitude would be observed in comparison to stimuli with an initial stationary part. These responses were also analysed with respect to the effects of stimulus frequency.
The results showed that the stimulus paradigm with the initial moving part elicited a response that resembles the MOR but has a lower amplitude. In addition, the effects of stimulus frequency evident in the previous analysis applied here as well, with high-frequency stimuli eliciting higher MOR amplitudes than low-frequency stimuli.

Study 4. This study examined further the effects of stimulus characteristics on the MOR. Since the latency of the MOR in the previous study was somewhat later than what is usually reported in the literature, the focus here was to test the effects of motion velocity and adaptation duration on the MOR. The results showed that faster velocities elicited higher amplitudes in the peak-to-peak comparison. Separate analysis of the MOR components showed that this effect was driven by a higher cN1 amplitude. A separate analysis of the electrodes over the left and right hemispheres showed that the peak-to-peak amplitude was stronger at the electrodes over the right hemisphere. Lastly, the strong adaptation created by the long duration of the initial stationary part provided abundant evidence of auditory motion, which led to the separation of the cP2 into its constituent parts.

Study 5. This behavioural study focused on the effect of motion adaptation in the rear field on the perception of motion presented in the frontal field. The presentation of adaptors and probes within the left-frontal and left-rear fields targeted locations that share the same ITDs and ILDs, so that the disambiguation of auditory motion localization rests on how these interaural cues interact with the spectral cues. A moving probe was presented in the left hemifield, following an adaptor that spanned either the same trajectory or a trajectory located in the opposite field (frontal/rear). Participants had to indicate the direction of the probe. The results showed that performance was worse when adaptor and probe shared the same binaural cues, even if they were in different hemifields and their directions were opposite. However, the magnitude of the adaptation effect was smaller when the pair was in different hemifields, showing that motion-direction detection depends on the integration of interaural and spectral cues.
695

Taste Perception in Obesity

Hardikar, Samyogita 15 May 2019 (has links)
No description available.
696

A Multi-Channel EEG Mini-Cap for Recording Auditory Brainstem Responses in Chinchillas

Hannah M Ginsberg (9757334) 14 December 2020 (has links)
According to the World Health Organization, disabling hearing loss affects nearly 466 million people worldwide. Sensorineural hearing loss (SNHL), which is characterized as damage to the inner ear (e.g., cochlear hair cells) and/or to the neural pathways connecting the inner ear and brain, accounts for 90% of all disabling hearing loss. One important clinical measure of SNHL is an auditory evoked potential called the auditory brainstem response (ABR). The ABR is a non-invasive measure of synchronous neural activity across the peripheral auditory pathway (auditory nerve to the midbrain), comprising a series of waves occurring within the first 10 milliseconds after stimulus onset. In humans, ABRs are often recorded using a high-density EEG electrode cap (e.g., with 32 channels). In our lab, a long-term goal is to establish and characterize reliable and efficient non-invasive measures of hearing loss in our pre-clinical chinchilla models of SNHL that can be directly related to human clinical measures. Thus, bridging the gap between chinchilla and human data collection by using analogous measures is imperative.

For this project, a 32-channel EEG electrode mini-cap for recording ABRs in chinchillas was studied. First, the feasibility of this new method for recording ABRs was demonstrated. Second, the sources of bias and variability relevant to the mini-cap were investigated; in this investigation, the ability of the mini-cap to produce highly reliable, repeatable, reproducible, and valid ABRs was verified. Finally, the benefits of this new method, in comparison to our current approach using three subdermal electrodes, were characterized. ABR responses were comparable across channels, both in magnitude and morphology, when referenced to a tiptrode in the ipsilateral ear canal. Consequently, averaging across several channels led to a reduction in overall noise and the need for fewer repetitions (in comparison to the subdermal method) to obtain a reliable response. Other methodological benefits of the mini-cap included closer alignment with human ABR data collection, more efficient data collection, and the capability for more in-depth data analyses, like source localization (e.g., in cortical responses). Future work will include collecting ABRs using the EEG mini-cap before and after noise exposure, as well as exploring the potential to leverage different channels to isolate brainstem and midbrain contributions to evoked responses from simultaneous recordings.
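The noise-reduction benefit described above is averaging statistics: uncorrelated noise shrinks roughly as 1/sqrt(N) with the number of averaged sweeps and channels. A toy sketch with a synthetic ABR-like waveform and made-up noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                          # sampling rate in Hz (assumed)
t = np.arange(0, 0.010, 1 / fs)      # first 10 ms after stimulus onset

# Synthetic "ABR": a small damped oscillation standing in for waves I-V
abr = 0.2 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004)

def averaged_snr(n_sweeps, n_channels, noise_sd=1.0):
    """Empirical SNR after averaging sweeps and channels of i.i.d. noise."""
    sweeps = abr + noise_sd * rng.standard_normal((n_sweeps, n_channels, t.size))
    mean = sweeps.mean(axis=(0, 1))      # grand average across sweeps/channels
    noise = mean - abr                   # residual noise in the average
    return abr.std() / noise.std()

# Quadrupling the number of averaged recordings roughly doubles the SNR
for n in (250, 1000, 4000):
    print(f"{n:5d} sweeps x 4 channels -> SNR ~ {averaged_snr(n, 4):.1f}")
```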
697

Anxious Apprehension, Anxious Arousal, and Asymmetrical Brain Activity

Kolnogorova, Kateryna 01 June 2020 (has links)
No description available.
698

A Deep Learning Approach to Brain Tracking of Sound

Hermansson, Oscar January 2022 (has links)
Objectives: Development of accurate auditory attention decoding (AAD) algorithms, capable of identifying the attended sound source from speech-evoked electroencephalography (EEG) responses, could lead to new solutions for hearing-impaired listeners: neuro-steered hearing aids. Many of the existing AAD algorithms are either inaccurate or very slow. Therefore, there is a need to develop new EEG-based AAD methods. The first objective of this project was to investigate deep neural network (DNN) models for AAD and compare them to the state-of-the-art linear models. The second objective was to investigate whether generative adversarial networks (GANs) could be used for speech-evoked EEG data augmentation to improve AAD performance. Design: The proposed methods were tested on a dataset of 34 participants who performed an auditory attention task. They were instructed to attend to one of two talkers in the front and to ignore the talker on the other side and the background noise behind them, while high-density EEG was recorded. Main Results: The linear models had an average attended-vs-ignored speech classification accuracy of 95.87% and 50% for ∼30-second and 8-second time windows, respectively. A DNN model designed for AAD resulted in an average classification accuracy of 82.32% and 58.03% for ∼30-second and 8-second time windows, respectively, when trained only on the real EEG data. The results show that GANs generated relatively realistic speech-evoked EEG signals. A DNN trained with GAN-generated data resulted in an average accuracy of 90.25% for 8-second time windows. On shorter trials, GAN-generated EEG data significantly improved classification performance compared to models trained only on real EEG data. Conclusion: The results suggest that DNN models can outperform linear models in AAD tasks, and that GAN-based EEG data augmentation can be used to further improve DNN performance. These results extend prior work and bring us closer to the use of EEG for decoding auditory attention in next-generation neuro-steered hearing aids.
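For context on the models compared above: state-of-the-art linear AAD models are typically backward (stimulus-reconstruction) models that map lagged EEG to the attended speech envelope with ridge regression and classify by correlation. Below is a minimal sketch of this idea under assumed, synthetic data shapes; it is an illustration, not the thesis's actual pipeline.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of EEG (n_samples, n_channels) into a design matrix."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        X[k:, k * c:(k + 1) * c] = eeg[:n - k]
    return X

def train_decoder(eeg, envelope, n_lags=32, lam=1e3):
    """Ridge regression from lagged EEG to the attended speech envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def classify(eeg, env_a, env_b, w, n_lags=32):
    """Pick the talker whose envelope correlates best with the reconstruction."""
    rec = lagged(eeg, n_lags) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

# Hypothetical toy data: 8 s at 64 Hz, 16 EEG channels driven by talker A
rng = np.random.default_rng(2)
n = 8 * 64
env_a, env_b = np.abs(rng.standard_normal((2, n)))
eeg = env_a[:, None] * rng.standard_normal(16) + 0.5 * rng.standard_normal((n, 16))
w = train_decoder(eeg, env_a)
print("decoded talker:", classify(eeg, env_a, env_b, w))
```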
699

Correspondence Between TOVA Test Results and Characteristics of EEG Signals Acquired Through the Muse Sensor in Positions AF7–AF8

Castillo, Ober, Sotomayor, Simy, Kemper, Guillermo, Clement, Vincent 01 January 2021 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / This paper studies the correspondence between the results of the Test of Variables of Attention (TOVA) and the signals acquired by the Muse electroencephalography (EEG) headband at positions AF7 and AF8 of the cerebral cortex. A variety of research papers estimate an index of attention from different characteristics of discrete brain-activity signals. However, many of these results were obtained without contrasting them against standardized tests. For this reason, in the present work the results are compared with the score of the TOVA, which aims to identify an attention disorder in a person. The indicators obtained from the test are the response-time variability, the average response time, and the d′ (d-prime) score. During the test, characteristics of the EEG signals in the alpha, beta, theta, and gamma subbands, such as energy, average power, and standard deviation, were extracted. For this purpose, the acquired signals were filtered to reduce the effect of the movement of muscles near the cerebral cortex and then underwent subband decomposition via the wavelet packet transform. The results show a well-marked correspondence between the parameters of the EEG signal in the indicated subbands and the visual attention indicators provided by the TOVA. This correspondence was measured through Pearson's correlation coefficient, which had an average result of 0.8. / Peer reviewed
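As an illustration of the pipeline described above, the following sketch extracts energy, average power, and standard deviation from wavelet-packet subbands (via PyWavelets) and correlates one feature with TOVA d′ scores using Pearson's coefficient; the traces, wavelet choice, and decomposition depth are hypothetical stand-ins, not the study's actual parameters.

```python
import numpy as np
import pywt
from scipy.stats import pearsonr

def subband_features(x, wavelet="db4", level=4):
    """Energy, average power, and standard deviation per wavelet-packet node."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    feats = {}
    for node in wp.get_level(level, order="freq"):
        d = np.asarray(node.data)
        feats[node.path] = (np.sum(d ** 2),   # energy
                            np.mean(d ** 2),  # average power
                            np.std(d))        # standard deviation
    return feats

# Hypothetical: one AF7 trace per participant plus their TOVA d' scores
rng = np.random.default_rng(3)
n_subj, fs = 20, 256
dprime = rng.normal(4.0, 1.0, n_subj)
traces = [rng.standard_normal(60 * fs) * (1 + 0.1 * d) for d in dprime]

# Correlate one feature (energy of the lowest subband, path 'aaaa') with TOVA
energies = [subband_features(x)["a" * 4][0] for x in traces]
r, p = pearsonr(energies, dprime)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```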
700

Digital Manipulation of Human Faces: Effects on Emotional Perception and Brain Activity

Knoll, Martin 01 May 2022 (has links)
The study of human face processing has granted insight into key adaptations across various social and biological functions. However, there is an overall lack of consistency regarding digital alteration styles of human-face stimuli. To investigate this, two independent studies were conducted examining the unique effects of image construction and presentation. In the first study, three primary stimulus presentation styles (color, black and white, cutout) were used across iterations of non-thatcherized/thatcherized and non-inverted/inverted presentations. Outcome measures included subjective reactions, measured via ratings of perceived "grotesqueness," and objective outcomes, measured via the N170 event-related potential (ERP) recorded with electroencephalography. Results of the subjective measures indicated that thatcherized images were associated with an increased level of grotesque perception, regardless of overall condition variant and inversion status. A significantly larger N170 component was found in response to cutout-style images of human faces, thatcherized images, and inverted images. The results suggest that cutout image morphology may be a well-suited presentation style when examining ERPs and facial processing of otherwise unaltered human faces. Moreover, less emphasis can be placed on decisions about the main condition morphology of human-face stimuli as it relates to negatively valenced reactions. The second study explored commonalities between thatcherized and uncanny images. Its purpose was to explore commonalities between these two styles of digital manipulation and establish a link between previously disparate areas of human face-processing research. Subjective reactions to stimuli were measured via participant ratings of how "off-putting" they were. ERP data were gathered to explore whether any unique effects emerged in the N170 and N400 components. Two main "morph continuums" of stimuli with uncanny features, provided by Eduard Zell (see Zell et al., 2015), were utilized, together with a novel approach of thatcherizing images along these continuums. Thatcherized images across both continuums were regarded as more off-putting than non-thatcherized images, indicating a robust subjective effect of thatcherization that was relatively unaffected by additional manipulation of key featural components. Conversely, results from brain activity indicated no significant N170 differences between levels of shape stylization and their thatcherized counterparts. Unique effects between continuums and exploratory N400 results are discussed.
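To make the N170 outcome measure concrete, here is a minimal sketch of how such a component is typically quantified: average the epochs into an ERP and take the most negative deflection in a 130-200 ms post-onset window at an assumed occipito-temporal channel. All data below are synthetic, and the window and shapes are assumptions rather than the study's parameters.

```python
import numpy as np

fs = 500                           # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch time axis, -100..500 ms

def n170_amplitude(epochs, t=t):
    """Peak N170 amplitude: most negative ERP value, 130-200 ms post-onset.

    epochs: (n_trials, n_samples) at one occipito-temporal electrode.
    """
    erp = epochs.mean(axis=0)          # average over trials -> ERP
    window = (t >= 0.13) & (t <= 0.20)
    return erp[window].min()

# Hypothetical single-electrode epochs for two conditions
rng = np.random.default_rng(4)
base = -5.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))   # N170-like dip
upright = base + rng.standard_normal((100, t.size))
cutout = 1.4 * base + rng.standard_normal((100, t.size))     # larger N170

print("upright N170:", round(n170_amplitude(upright), 2), "uV")
print("cutout  N170:", round(n170_amplitude(cutout), 2), "uV")
```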
