31

Where symbols meet meanings: The organization of gestures and words in the middle temporal gyrus

Agostini, Beatrice January 2017 (has links)
Every day we use actions, gestures and words to interact with other people and with the environment. Being able to understand people's movements and communicative intentions is critical to our ability to act successfully in the world. Here we present three studies investigating the relationship between actions, gestures and words in the brain. In the first study, we describe and offer a standardized data set of 230 well-controlled stimuli of meaningful (pantomimes and emblems) and meaningless gestures, together with norms, with the aim of promoting replicability across studies. One hundred and thirty raters (Italian and non-Italian speakers) rated the meaningfulness of the gestures and provided a name and a description for each of them. To our knowledge, this is the first data set of meaningful and meaningless gestures presented in the literature. In the second study, we aimed 1) to characterize the neural network associated with the processing of different categories of gestures (pantomimes, emblems and meaningless gestures) using fMRI, and 2) to contrast the role of precentral and temporal areas in action understanding using rTMS. In particular, we applied rTMS to the posterior middle temporal gyrus (pMTG) and to the ventral premotor cortex (PMv) in different sessions while participants performed either a semantic or a perceptual judgment task. According to motor theories of action understanding, rTMS applied to the PMv, but not to the pMTG, should impair performance during the semantic judgment task. By contrast, according to cognitive theories of action understanding, rTMS applied to the pMTG, but not to the PMv, should impair performance in this task. Results from the fMRI experiment revealed sensitivity of the MTG to meaningful as compared to meaningless gestures.
Additionally, three different brain areas seemed to contribute to the processing of pantomimes and emblems: the superior parietal lobe (SPL) and precentral gyrus (PCG) in the case of pantomimes, and the inferior frontal gyrus (IFG) in the case of emblems. Unfortunately, we did not observe any significant effect of rTMS in any condition. The third study investigated how pantomimes, emblems and words are organized in the middle temporal gyrus, using fMRI. We observed a posterior-to-anterior structure, in both the left and the right hemisphere, that might reflect the input modality as well as the arbitrariness of the relationship between form and meaning.
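The norming procedure described above (130 raters scoring meaningfulness and naming each gesture) reduces to a simple aggregation per stimulus. The sketch below is illustrative only, not the authors' code: the column names ("gesture_id", "rating", "name") and the toy ratings are assumptions, and the name-agreement measure (proportion of raters giving the modal name) is one common convention, not necessarily the one used in the data set.

```python
# Hypothetical sketch of per-gesture norm aggregation: mean meaningfulness
# rating and modal-name agreement across raters. Field names are assumed.
from collections import defaultdict
from statistics import mean

ratings = [
    {"gesture_id": "g001", "rating": 6, "name": "thumbs up"},
    {"gesture_id": "g001", "rating": 7, "name": "thumbs up"},
    {"gesture_id": "g001", "rating": 5, "name": "ok sign"},
    {"gesture_id": "g002", "rating": 1, "name": ""},  # meaningless: no name given
]

def gesture_norms(rows):
    """Mean meaningfulness and modal name (name agreement) per gesture."""
    by_gesture = defaultdict(list)
    for r in rows:
        by_gesture[r["gesture_id"]].append(r)
    norms = {}
    for gid, rs in by_gesture.items():
        scores = [r["rating"] for r in rs]
        names = [r["name"] for r in rs if r["name"]]
        modal = max(set(names), key=names.count) if names else None
        agreement = names.count(modal) / len(rs) if modal else 0.0
        norms[gid] = {"mean_rating": mean(scores),
                      "modal_name": modal,
                      "name_agreement": agreement}
    return norms

print(gesture_norms(ratings))
```

In this convention, a low mean rating together with a missing modal name would mark a gesture as meaningless, while high ratings and high name agreement would mark a well-recognized pantomime or emblem.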
32

The neurophysiology of internally-driven actions

Ficarella, Stefania January 2015 (has links)
Acting in the world in a way that matches our goals, overriding impulses, is one of the first abilities we must learn while growing up. We often change the course of our actions because of external influences or because we simply "change our mind". As John H. Patterson said, "Only fools and dead men don't change their minds. Fools won't. Dead men can't". An important distinction must first be made between the impact of internal and external sources on action decisions, and the first part of the introduction will be devoted to this topic. In the second part, I will discuss inhibitory control. In the scientific literature, action inhibition is often treated as a unitary phenomenon, whereas distinguishing among different types of inhibition might explain the diverse results and be useful for future studies. My experimental work has been devoted to both externally-triggered and internally-driven voluntary action inhibition: Experiment 1 comprised a set of studies aimed at understanding the cortical circuits underlying internally-driven action inhibition, whereas Experiment 2 focused on proactive inhibition mechanisms. While it is beyond the scope of this manuscript to cover the entire literature on inhibitory control, I would like to propose a common view that unifies the different theories of how the brain exerts voluntary inhibitory control, and to provide some suggestions for future investigations into the way we flexibly control our actions to cope with the constantly changing external, and internal, environment.
33

Word Recognition in Predictive Contexts

Zandomeneghi, Paolo January 2012 (has links)
Over the last few years, several results have demonstrated that context-based expectations about both word class and concepts influence word processing at very early stages, namely at the level of sensory analysis. Given that these early effects are modulations of the process of stimulus analysis, they depend on the physical and orthographic properties of critical words in interaction with linguistic expectations. This evidence on early effects contrasts with a syntax-first approach, in which the cognitive system first builds the syntactic structure by exploiting word-class information only. This strong syntax-first assumption, put forward by Friederici's (2002) model, is based on a very early ERP effect with a latency of around 150 ms that is elicited by word-class violations (eLAN: early left-anterior negativity). I studied three linguistic violations with an ERP sentence-processing paradigm. In two studies in Italian, word-class violations on prepositions and verbs were implemented, overcoming the most important methodological limitations of previous studies on word-class violations. In a third study, we investigated determiner-noun gender agreement in Italian using nouns for which grammatical gender is expressed unambiguously by a long derivational morpheme that is very salient at the orthographic and visual level. ERP results show a LAN (300 ms latency) followed by a P600 (500 ms latency) for all conditions. The lack of replicability of the eLAN, already discussed in the literature, makes Friederici's (2002) model difficult to maintain. The ERPs elicited by gender-disagreeing nouns also show an effect on the amplitude of the N250 (200 ms onset), an effect specific to morphological processing, since a previous study with no control over how gender was expressed reported a LAN+P600 pattern only (Molinaro et al., 2008).
The latter result shows that gender agreement can affect word recognition (at least morphological parsing) during sentence processing earlier than the violation detection indexed by the LAN. This result enriches the evidence on early top-down context effects that are distinct from syntagmatic structural processing.
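The components discussed above (N250, LAN, P600) are conventionally quantified as mean voltage within a post-stimulus time window. The following is a minimal sketch of that windowing step with synthetic data, not the study's analysis pipeline; the sampling rate, epoch limits, and toy deflections are all assumptions.

```python
# Illustrative sketch: mean ERP amplitude in a latency window, the usual
# way LAN- and P600-like effects are measured. Data here are synthetic.
import numpy as np

def mean_amplitude(erp, times, t_start, t_end):
    """Mean voltage of an ERP trace within [t_start, t_end) seconds."""
    mask = (times >= t_start) & (times < t_end)
    return erp[mask].mean()

fs = 1000                                    # 1 kHz sampling (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)         # -200..800 ms epoch
erp = np.zeros_like(times)
erp[(times >= 0.3) & (times < 0.4)] = -2.0   # toy LAN-like negativity
erp[(times >= 0.5) & (times < 0.7)] = 3.0    # toy P600-like positivity

lan = mean_amplitude(erp, times, 0.3, 0.4)   # ~300 ms window
p600 = mean_amplitude(erp, times, 0.5, 0.7)  # ~500 ms window
print(lan, p600)
```

A condition effect on a component is then the difference in these window means between, say, agreeing and disagreeing nouns, averaged over trials and subjects.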
34

Selectivity for Movement Direction in the Human Brain

Fabbri, Sara January 2011 (has links)
In daily life, we frequently execute reaching movements, for example to grasp our mobile phone. The processing of movement direction is fundamental to efficiently reaching the target object. Many neurophysiological studies have reported neuronal populations selective for movement direction in many regions of the monkey brain. In my thesis, I investigated which areas in the human brain show directional selectivity. Moreover, I measured to what extent directionally selective regions are sensitive to changes in other movement parameters, such as the type of motor act and movement amplitude. In three functional magnetic resonance imaging (fMRI) experiments, participants were adapted to executing reaching movements in the adaptation direction. Occasionally, test trials were presented. Test trials differed from adaptation trials in movement direction only, or in movement direction as well as in another movement parameter (Experiments 1 and 2: type of motor act; Experiment 3: movement amplitude). By comparing the fMRI signal in conditions where only movement direction was manipulated with conditions where other movement parameters also changed, we were able to measure the sensitivity of directionally tuned neuronal populations to these additional movement parameters. Multiple regions in the human visuomotor system showed selectivity for movement direction. This selectivity was modulated by the type of motor act to varying degrees, with the largest effect in M1 and the smallest modulation in the parietal reach region. Moreover, directional selectivity was also clearly sensitive to changes in movement amplitude. These results extend current knowledge on the representation of actions from monkey physiology to the human brain and may also have important practical implications for restoring lost motor functions in tetraplegic patients.
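The adaptation logic described above can be sketched in a few lines. This is a toy illustration of the inference, with made-up percent-signal-change numbers, not the study's data or model: a region tuned to direction should show a larger response ("release from adaptation") when the test trial changes direction, and any extra release when a second parameter also changes indexes sensitivity to that parameter.

```python
# Toy sketch of the fMRI-adaptation comparison (hypothetical BOLD values).
def release_from_adaptation(adapt_response, test_response):
    """Signal increase on a test trial relative to the adapted baseline."""
    return test_response - adapt_response

adapted = 0.4                  # mean % BOLD change on adaptation trials
test_direction_only = 0.9      # test: new direction, same motor act
test_direction_and_act = 1.3   # test: new direction AND new motor act

rel_dir = release_from_adaptation(adapted, test_direction_only)
rel_both = release_from_adaptation(adapted, test_direction_and_act)
# Extra release beyond the direction-only change suggests the directionally
# tuned population is also sensitive to the second parameter.
extra = rel_both - rel_dir
print(rel_dir, rel_both, extra)
```

On this logic, a large `extra` in M1 and a small one in the parietal reach region would correspond to the pattern of modulation by motor-act type reported above.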
35

Motor Resonance meets Motor Performance: Neurocognitive investigations with transcranial magnetic stimulation

Barchiesi, Guido January 2012 (has links)
The classical mirror neuron theory of action understanding asserts that when we observe an action, the representations engaged for performing it are automatically activated. In order to gain information about the role of simulation in action understanding, a state-dependent TMS experiment was carried out. The fundamental idea is to adapt a neural population in the motor system and then test the effects of this adaptation when participants categorize visually presented actions. The second aim of the present work is to find a paradigm, or a particular cognitive set, that does not allow the simulation process to take place while participants are observing actions. This step will be important in testing whether the simulation process is necessary in order to understand a visually presented action.
36

Attentional Mechanisms in Natural Scenes

Battistoni, Elisa January 2018 (has links)
The visual analysis of the world around us is an incredibly complex neural process that allows humans to function appropriately within the environment. When one considers the intricacy of both the visual input and the (currently known) neural mechanisms necessary for its analysis, it is difficult not to remain enchanted by the fact that, even though the signal that hits the retina contains a tremendous number of simple visual features and is ever-changing, ambiguous and incomplete, we experience the world around us in a very easy, stable and straightforward manner. Much effort has been put into the study of vision, and despite the enormous scientific advances and important findings, many questions still need answers. During my years as a Ph.D. student, I investigated some questions related to top-down attentional mechanisms in real-world visual search. Specifically, Chapters 2 and 3 address the processing stage of preparation, by investigating the characteristics of attentional templates when preparing to search for objects in scenes; Chapter 4 addresses the stage of guidance and selection, by investigating the temporal course of spatial attention guidance; and finally, Chapter 5 addresses the identification phase, by investigating the temporal dynamics of size-constancy mechanisms in real-world scenes. To anticipate some results, we propose that attentional templates in real-world visual search tasks are based on category-diagnostic features and code the expected target size/distance. In the context of the attentional guidance and selection stage, we demonstrate that attention spatially focuses on targets around 240 ms, following category-based attentional modulations appearing at 180 ms after scene onset. Finally, we propose that size-constancy mechanisms operate before 200 ms post-scene. This is in line with the expectation that a coarse identification of an object, including its size, should be computed before spatially focusing attention onto the target.
Together, these studies improve our understanding of the top-down attentional processes engaged in real-world visual search, and raise questions that future research could address.
37

Object Individuation in Domestic Chicks (Gallus gallus)

Fontanari, Laura January 2011 (has links)
Object individuation is the process by which organisms establish the number of distinct objects present in an event. The ability to individuate objects was investigated in two- to three-day-old chicks (Gallus gallus). A first series of experiments (Exp. 1 - Exp. 6) assessed the role in object individuation of the property information provided by colour, shape, size or individually distinctive features, as well as of spatiotemporal information. A second series (Exp. 7 - Exp. 10) investigated the ability to use property/kind information, using imprinting objects and food items (i.e. mealworms) as stimuli of different categories. Newborn chicks were exposed (i.e., imprinted) to sets of objects that differed in, or were identical for, property and property/kind information, and the chicks' spontaneous tendency to approach the larger group of imprinting objects and food items was exploited. Each chick underwent a free-choice test in which two groups of events were shown: one group comprised two different stimuli (i.e. differing in property or in kind); the second group was composed of a single stimulus presented twice. Every stimulus in each group of events was sequentially presented and concealed in the same spatial location, and the number of events taking place at each location was equalized (Sequential Presentation test). Chicks spontaneously approached the two different objects rather than the single object seen twice. A possible preference for the more varied set of stimuli was excluded by testing chicks in a simultaneous presentation of two different objects vs. two identical objects (Simultaneous Presentation test). Moreover, the use of spatiotemporal information was assessed through the simultaneous presentation of three identical objects vs. two different objects.
When the number of presentations of the single stimulus was increased (up to 3 times) and compared with two different stimuli presented once each, chicks correctly individuated the larger group of imprinting objects only if the objects were all different from one another (i.e. distinctive features had been put on each object). Any role of experience was excluded by presenting chicks with stimuli of a completely novel colour with respect to the original colour of the imprinting stimuli. The results show that chicks are able to use the property information provided by colour, shape, size or individually distinctive features, spatiotemporal information, and the property/kind information provided by social and food categories for object individuation. The fact that object individuation is precociously available in the young of a vertebrate species suggests it may depend on inborn biological predispositions rather than on experiential or language-related processes.
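Free-choice data of the kind described above (each chick approaches one of two displays) are naturally tested against chance with an exact binomial test. The sketch below uses hypothetical counts, not the thesis's actual numbers or statistics, and the two-sided p-value convention shown (doubling the smaller tail) is one standard choice among several.

```python
# Illustrative exact binomial test for a two-alternative free-choice test.
# Counts are hypothetical; only the statistical logic is being shown.
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n, p=0.5):
    """Two-sided binomial p-value, doubling the smaller tail (capped at 1)."""
    lower = sum(binom_pmf(i, n, p) for i in range(0, k + 1))
    upper = sum(binom_pmf(i, n, p) for i in range(k, n + 1))
    return min(1.0, 2 * min(lower, upper))

# e.g. if 16 of 20 chicks approached the two-different-object display:
print(round(two_sided_p(16, 20), 4))
```

A p-value well below .05, as in this toy example, would support the conclusion that the chicks' preference for the two different objects is not due to chance.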
38

On the fate and consequences of conscious and non-conscious vision

Kaunitz, Lisandro Nicolas January 2011 (has links)
What we consciously see in our everyday life is not an exact copy of the information that our eyes receive from the external world. Our brain actively elaborates and transforms the light that impinges on the two-dimensional surface of our retinas to create complex three-dimensional scenes full of colorful objects of different shapes and sizes, motion and depth. Our visual perception is not a passive reception of information: our brain actively decodes and separates the retinal information into relevant and significant objects and compares this information with previous memories. One remarkable example is the ability of the visual system to decode the information that arrives from the eyes into recognizable visual objects and scenes. We are able to recognize objects even under conditions of low illumination or low contrast, when these objects are partially occluded, presented among other objects, or when they are defined by textures, for example. In addition to this process of recognition, the visual system generates the conscious sensations of those objects and scenes. Even though we need our eyes to see the world, they are not by themselves enough to generate visual perception. The light impinging on our retinas needs further elaboration in higher areas of the brain to generate perception. Several situations in which vision is separated from perception demonstrate this. For example, we can imagine an object with our eyes closed in our "mind's eye", or we can generate images in our dreams that are completely independent of external stimulation. Experimentally, it is possible to manipulate visual perception while keeping the visual stimulation to the eyes constant. Examples of this are the changes in perception that occur with multi-stable phenomena (e.g., the Necker cube) or when subjects are presented with dissimilar images to each eye, a condition called binocular rivalry. Without a functioning visual brain we are not able to properly see.
Lesions to the visual cortex produce a wide variety of visual deficits, ranging from blindness to achromatopsia (the impossibility of perceiving color), akinetopsia (the impossibility of perceiving motion) and/or visual agnosias (difficulty in recognizing objects through vision). Extensive neuroimaging experiments have also shown that visual information activates wide areas of the brain, and the role of many of these areas has been studied over the last two decades. However, the neuroscientific study of perception can currently be addressed only with limited resources. We cannot measure the activity of the 10 billion neurons that constitute our brain. Neither can scientists manipulate the human brain by disrupting, modifying or altering the activity of neuronal circuits. The current imaging methods for studying the brain provide only incomplete information, at different spatial levels of analysis and with different temporal resolutions. Thus, the conclusions that we neuroscientists can extract from these data are only partial attempts to reach a better understanding of the functioning of the brain. Despite these limitations, few neuroscientists would today disagree with the idea that visual perception is based on distributed neuronal circuits in the brain. In the same line of thought, it is agreed that there must be circuits of neurons that code for the conscious perception of those objects. Consciousness has always been considered one of the major mysteries of life. The study of how sensations and "feelings" arise from the operations of our brain is for many scientists one of the final challenges for the biological sciences (Koch, 2003). The mystery of consciousness is considered to be on the same level as the mystery of the creation of the universe and the mystery of the origin of life out of inanimate matter. The topic of this thesis is the study of visual consciousness.
Considering its complexity, we do not intend to provide a final answer to the explanation of consciousness. Instead, this thesis will focus on the study of vision, describing some properties of conscious vision as opposed to unconscious vision. We will explore the fate of unseen vision: the information that reaches the retina but does not generate any conscious sensation. We will analyze the processing and limits of unseen visual stimuli and compare them with the conscious processing of the same objects. In doing this, we expect to shed light on some of the properties of conscious and unconscious visual perception and on the role that visual awareness might have played in evolution.
39

From perceptual to semantic representations in the human brain

Viganò, Simone January 2019 (has links)
Humans are capable of recognizing a myriad of objects in everyday life. To do so, they have evolved the ability to detect commonalities and differences among objects, moving from perceptual details to construct more abstract representations that we call concepts, which span entire categories (such as that of people) or refer to very specific, individual entities (such as our parents). Organizing our knowledge of the world around concepts, rather than around individual experiences, allows for more rapid access to behaviourally relevant information (for instance, how to behave when we encounter a dangerous animal), and lets us quickly generalize this information to what we have never encountered before. In a few words, this is what permeates the world with meaning. The present work is about the neural bases of learning novel object concepts, a process that in our species is vastly supported by symbols and language: for this reason, I talk about semantic representations. The word "semantics" generally refers to the study of meaning (and of what a "meaning" ultimately is) as conveyed by a symbol; in the specific case of cognitive neuroscience, it deals with the neural mechanisms that allow symbols to re-present in the brain the meanings or concepts they refer to. For instance, we can easily describe what the meaning of the word "DOG" is, pretty much as we can explain what "DEMOCRACY" means. However, although cognitive neuroscience has focused on the neuro-cognitive bases of semantic representations for decades, the neural mechanisms underlying their acquisition remain elusive. How does the human brain change when learning novel concepts using symbols? How does a symbol acquire its meaning in the brain? Does this learning generate novel neural representations and/or does it modify pre-existing ones? What internal representational format (neural code) supports the representation of newly learnt concepts in the human brain? The contribution of this work is three-fold.
First, I show how new semantic representations, learned by categorizing novel objects (defined through a combination of multisensory perceptual features), emerge in memory systems. Second, I show results converging on the idea that brain regions that evolved in lower-level mammals to represent spatial relationships between objects and locations, such as the hippocampal formation and medial prefrontal cortex, are in humans recruited to encode relationships between words and concepts by means of the same neural codes used to represent and navigate the physical environment. Finally, I present preliminary data on the cognitive effects of using symbols when learning novel object concepts, showing how language supports the construction of generalizable semantic representations.
40

The development of number processing and its relation to other parietal functions in early childhood

Chinello, Alessandro January 2010 (has links)
The project explored the developmental trajectories of several cognitive functions related to different brain regions: the parietal cortex (quantity manipulation, finger gnosis, visuo-spatial memory and grasping abilities) and the occipito-temporal cortex (face and object processing), in order to investigate their contributions to the acquisition of formal arithmetic in the first year of schooling. We tested preschoolers, first graders and adults, using correlational cross-sectional and longitudinal approaches. Results show that anatomical proximity is a strong predictor of behavioural correlations and of the segregation between dorsal- and ventral-stream functions. This observation is particularly prominent in children: within parietal functions, there is a progressive separation across functions during development. During preschool age, presymbolic and symbolic number systems follow distinct developmental trajectories that converge during the first year of primary school. A possible cause of this phenomenon could be the refinement of numerosity acuity during the acquisition of symbolic knowledge for numbers. Among the tested parietal functions, we observe a strong association between the numerical and the finger domain, especially in children. In preschoolers, finger gnosis is strongly associated with non-symbolic quantity processing, while in first graders it is linked to symbolic mental arithmetic. This finding may reflect a pre-existing anatomical connection between the cortical regions supporting quantity- and finger-related functions in early childhood. In contrast, first graders exhibit a finger-arithmetic association more influenced by functional factors and culture-based strategies (e.g. finger counting). Longitudinal data allowed us to identify which cognitive functions measured in kindergarteners best predict success in mental arithmetic in the first year of school.
Results show that finger gnosis, as well as quantity- and space-related abilities, all concur in shaping success in mental calculation in first graders. These results are important because, first, they are the first to show a strong relation between visuo-spatial, finger-related and quantity-related abilities in young children, and, second, because the longitudinal design provides strong evidence for a causal link between these functions and success in formal arithmetic. These results suggest that educational programs should include training in each of these cognitive domains in mathematics classes. Finally, specific applications of these findings can be found within the domain of educational neuroscience and in the rehabilitation of children with numerical deficits (dyscalculia).
