61

The Neurobehavioral Basis of the Parallel Individuation (PI) and Approximation Number System (ANS)

Tang, Jean Ee January 2023 (has links)
Research on numerical cognition proposes that there are two systems for the perception of numerical quantity, a small-number system (1~3) invoking parallel individuation, or “subitizing”, and a large-number system (4+) that is based on Weberian magnitude estimation (Hyde, 2011). Many numerical cognitive neuroscientists have focused on studying how the magnitude of numerosities (small vs. large numbers) and numerical distance (close vs. far differences between numbers) are influential factors when processing numbers and detecting change. However, is there a difference when numerosities are increasing or decreasing? The effects of direction on numerical change processing are less well known. This 128-channel EEG study investigated the neurobehavioral basis of the differentiation between small vs. large-number perception and the effects of change directionality. During EEG data collection, participants were sequentially presented with stimulus arrays of 1 to 6 dots, with parameters like size and location controlled for, to minimize varying non-numerical visual cues during habituation. Participants were instructed to press a key whenever they detected a change in the number of dots presented. The current study adapts a dot-stimuli numerical change study design from Hyde and Spelke (2009, 2012). In their EEG study, the researchers examined event-related-potential (ERP) differences during the processing of small (1, 2, 3) and large (8, 16, 24) numbers. For this study, we chose to examine a narrower numerical range from 1~6, so that small (1, 2, 3) vs. large (4, 5, 6) contrasts were along a numerical continuum. In contrast to Hyde and Spelke (2009, 2012), where participants passively viewed the sequential presentation of dot arrays, this study employed an active change detection paradigm, in which participants’ reaction time (RT) and accuracy in detecting a change in the number of dots were recorded. We investigated the effects of Direction and Size in numerical change detection, where Direction is operationally defined as Decreasing and Increasing change in numeric set size, while Size is divided into Small-to-Small, Large-to-Large and Crossovers. Numerical change conditions were categorized into six groups: “Increasing Small-to-Small” (e.g., 1-to-2, 2-to-3), “Decreasing Small-to-Small” (e.g., 2-to-1, 3-to-2), “Increasing Large-to-Large” (e.g., 4-to-6, 5-to-6), “Decreasing Large-to-Large” (e.g., 5-to-4, 6-to-5), “Increasing Small-to-Large” (e.g., 2-to-4, 3-to-5, 3-to-6) and “Decreasing Large-to-Small” (e.g., 4-to-2, 5-to-2, 6-to-3), where the last two groups are operationally defined as Crossovers. There was also a “No Change” condition, where the number of dots remained the same for up to five presentations. ERP analyses were conducted for the N1 component (125-200 ms) over the left and right parietal-occipital-temporal (POT) junction and for the P3b component (435-535 ms) over the midline parietal area (Pz). During the No Change condition, results show that the N1 amplitude was modulated by the cardinal values of the habituated numbers 1~6. Within this continuous range, we found N1 amplitudes commensurate with cardinal values in the small range (1, 2, 3), but not in the large range (4, 5, 6), suggesting that numbers in the subitizing range are individuated as objects in working memory. Meanwhile, in the Change condition, there was a significant main effect of Direction on N1 peak latency, where the Increasing condition showed earlier peaks.
In the Decreasing Small-to-Small condition, N1 amplitudes were the lowest (even lower than N1 peaks for No Change conditions), while the other five Change conditions all produced higher N1 negativities than No Change conditions. These results imply that when the number of dots gets small enough to be parallel individuated, instead of encoding items into visual short-term memory, the brain is “off-loading” items from our perceptual load. Intriguingly, although the Decreasing Small-to-Small condition had the lowest N1 negativities, it produced the highest P3b positivity. Distinctions in P3b waveforms reflect a clear categorical break between small vs. large numbers, where easier, small-number change conditions have higher amplitudes than harder, large-number conditions, suggesting more difficulty with updating the context in the latter. However, in contrast to the earlier N1, there was no main effect of Direction on P3b peak latency, but there was an interaction effect of Direction by Size. Interestingly, there was a similar Direction-by-Size interaction for reaction times, with Decreasing conditions producing shorter reaction times in the Large-to-Large and Crossover conditions, yet this pattern was reversed in the Small-to-Small condition. This lends further support to the proposed “off-loading” phenomenon when processing decreases in numerosity within the small range (1~3). Meanwhile, when it comes to context-updating at later stages, and a behavioral response is required for this change detection task, the Large-to-Large condition proved to be the most difficult, showing lower accuracy, longer reaction times, and later, lower P3b peaks. N1 and P3b amplitudes are complementary to each other, with the early N1 being more sensitive to Direction and the later P3b being more sensitive to Size. This suggests that the posterior parietal cortex might encode Direction first, followed by Size. This study proposes a model that is an adaptation of the P3b context-updating model (Donchin, 1981), in which the early, sensory N1 interplays with the later, cognitive P3b. These findings suggest a neurobehavioral basis for the differentiation of small vs. large number perception: an early stage of processing that is sensitive to encoding vs. off-loading objects from perceptual load and visual short-term memory, and a later stage that involves higher-order cognitive processing of the magnitude of set size employed in numerical change detection tasks.
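For readers unfamiliar with how window-based ERP measures such as the N1 (125-200 ms) and P3b (435-535 ms) amplitudes are extracted, the sketch below illustrates the general idea on hypothetical epoched data; the sampling rate, channel indices, and use of mean (rather than peak) amplitude are assumptions, not details taken from the thesis.

```python
import numpy as np

# A minimal sketch (not the thesis's pipeline): compute window-averaged ERP
# amplitudes from hypothetical epoched EEG data.  Sampling rate, epoch length,
# and channel indices are illustrative assumptions.
fs = 500.0                                              # Hz
epochs = np.random.randn(200, 128, int(0.8 * fs))       # (trials, channels, samples), µV
times = np.arange(epochs.shape[-1]) / fs                # seconds from stimulus onset

def mean_amplitude(epochs, times, t_min, t_max, channels):
    """Average voltage over a time window and a set of channels, per trial."""
    window = (times >= t_min) & (times <= t_max)
    return epochs[:, channels, :][:, :, window].mean(axis=(1, 2))

# Stand-ins for left/right parietal-occipital-temporal sites (N1) and midline
# parietal Pz (P3b); real indices depend on the 128-channel montage used.
pot_channels = [60, 61, 62, 85, 86, 87]
pz_channels = [72]

n1 = mean_amplitude(epochs, times, 0.125, 0.200, pot_channels)   # N1 window
p3b = mean_amplitude(epochs, times, 0.435, 0.535, pz_channels)   # P3b window
print(f"Mean N1 amplitude:  {n1.mean():.2f} µV over {len(n1)} trials")
print(f"Mean P3b amplitude: {p3b.mean():.2f} µV over {len(p3b)} trials")
```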
62

Comprehension and recall of stories following left temporal lobectomy

Frisk, Virginia January 1988 (has links)
No description available.
63

The effects of two types of frontal lesions on reversal learning and activity level in rats

Davison, Meredith Ann 01 January 1972 (has links)
The purpose of this experiment was to compare traditional frontal pole lesions (FP) with lesions of the median dorsal nucleus projection (MDNP) described by Leonard. First, a comparison was made on the retention of spatial discrimination learning and the new learning of spatial discrimination reversals between these two groups of frontally lesioned rats. It was hypothesized that the most severe deficits in spatial reversal learning would be shown in rats receiving MDNP lesions since this area of the rat cortex appears to be homologous to the frontal cortex of higher species according to Leonard’s results. Second, activity was measured on two post-operative occasions, before and after the reversal learning tasks, in both a familiar and an unfamiliar environment.
64

The effects of anteromedial frontal and caudate lesions on DRL performance in the rat

Boysen, Sarah Till, January 1984 (has links)
No description available.
65

Beyond the FFA: Understanding Face Representation within the Anterior Temporal Lobes

Collins, Jessica Ann January 2014 (has links)
Extensive research has supported the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has been aimed at characterizing the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which together are thought to constitute the core network of brain areas responsible for facial identification. Although accruing evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs), within perirhinal cortex, play a necessary role in facial identification, the relative contribution of these brain areas to the core face-processing network has remained unarticulated. The current study assessed the relative sensitivity of the anterior face patch, the OFA, and the FFA to different aspects of person information. Participants learned to associate a name and occupation label, or a name only, with different facial identities. The sensitivity of the face processing areas to facial identity, occupation, and the amount of information associated with a face was then assessed. The results of a multivoxel pattern analysis (MVPA) revealed that distributed activity patterns in the anterior face patch contained information about facial identity, occupation, and the amount of information associated with a face, with the sensitivity of the anterior face patch to occupation and amount of information being greater than that of the more posterior face processing regions. When a similar analysis was conducted that included all voxels in the perirhinal cortex, sensitivity to every aspect of person information increased. These results suggest that the human ventral anterior temporal lobes may be critically involved in representing social, categorical information about individual identities. / Psychology
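As an illustration of the multivoxel pattern analysis (MVPA) logic described above, the following hedged sketch decodes a trial label from simulated ROI activity patterns with a linear classifier; the data shapes, classifier choice, and cross-validation scheme are assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical single-trial activity patterns for one ROI (e.g., the anterior
# face patch): rows are trials, columns are voxels; labels code facial identity.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_identities = 120, 300, 6
patterns = rng.standard_normal((n_trials, n_voxels))
identity = rng.integers(0, n_identities, size=n_trials)

# Linear classifier with cross-validation; above-chance decoding accuracy is
# taken as evidence that the ROI's distributed activity carries information
# about the label (chance here is 1 / n_identities).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, patterns, identity, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance ~ {1 / n_identities:.2f})")
```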
66

Neurophysiological Differences in Pain Reactivity: Why Some People are Tolerant to Pain

Daugherty, Susan AtLee 11 October 2005 (has links)
Pain is a complex, ubiquitous phenomenon that can be debilitating and costly. Although it is well known that some individuals can easily tolerate pain while others are more intolerant to pain, little is known of the neurophysiological bases of these differences. Because differences in sensory information processing may underlie variability in tolerance to pain, and because measures of sensory gating are used to explore differences in sensory information processing, sensory gating among college students (N = 14) who are tolerant or intolerant to pain was investigated. This investigation explored the hypothesis that those who were more tolerant to pain would evidence greater sensory gating. Pain tolerance was first determined using a cold pressor task. Sensory gating was then determined by the amount of attenuation of the amplitude of a second painful, electrical, somatosensory stimulus (S2) relative to the amplitude of an identical first stimulus (S1) in a paired-stimulus evoked potential (EP) paradigm. The results showed that the intolerant group exhibited greater physiological reactivity than the tolerant group, indicating that the tolerant group attained greater sensory gating than the intolerant group. / Ph. D.
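Sensory gating in paired-stimulus paradigms is often summarized as the ratio of the S2 to the S1 evoked-potential amplitude, with smaller ratios indicating stronger gating; the abstract does not state its exact metric, so the minimal sketch below uses that common ratio with made-up amplitudes.

```python
import numpy as np

# Hypothetical peak-to-peak EP amplitudes (µV) for the first (S1) and second
# (S2) stimulus in each trial; values are illustrative only.
s1 = np.array([12.4, 10.8, 11.9, 13.1, 12.2])
s2 = np.array([5.1, 6.3, 4.8, 7.0, 5.6])

# S2/S1 gating ratio: the smaller the ratio, the more the response to the
# repeated stimulus is attenuated, i.e. the stronger the sensory gating.
gating_ratio = s2.mean() / s1.mean()
suppression_pct = 100 * (1 - gating_ratio)
print(f"Gating ratio (S2/S1): {gating_ratio:.2f}")
print(f"Amplitude suppression: {suppression_pct:.0f}%")
```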
67

EEG activation patterns in the frontal lobes of stutterers and nonstutterers during working memory tasks

Baird, Brenda Ratcliff 10 November 2005 (has links)
Developmental stuttering is a physiological disorder of speech motor control. Unlike acquired conditions, developmental stuttering is responsive to fluency-inducing conditions involving the manipulation or elimination of auditory feedback. It was hypothesized that stutterers experience interference effects from competing sensory feedback during the working memory interval in which contextual information is held on-line in order to prepare subsequent motor responses. Behavior and EEG activity of stutterers and nonstutterers were compared during working memory tasks. Participants were 22 male, right-handed stutterers, mean age 28.2 years, age-matched with 22 male, right-handed nonstutterers. Behavioral measures included a written verbal fluency task, an auditory delayed match-to-sample key press task, and a written digit span task. As hypothesized, there were no group differences in verbal fluency. Also as hypothesized, stutterers had higher error scores (more false positives) on the auditory delayed match-to-sample key press task. This suggests increased sensitivity to auditory stimuli and difficulty inhibiting responses to stimulation. Groups did not differ in digit span, but there was a trend toward significance (p=.07). If stutterers do experience overlapping or excessive sensory stimulation during the working memory phase of speech motor plan assembly, the EEG of stutterers should evidence differences consistent with excessive or inefficient processing of "extra" sensory stimuli. Monopolar recordings were collected from 19 sites in accordance with the international 10-20 system of electrode placement. EEG was recorded during 60 seconds of resting-eyes-closed and resting-eyes-open; 60 seconds during a silent backwards-subtraction math task; and 120 seconds during an auditory delayed match-to-sample key press task. As hypothesized, stutterers exhibited more theta activity than nonstutterers in frontal regions in all conditions, both in the low theta subband (3-5 Hz) and the high theta subband (5.5-7.5 Hz). Also as hypothesized, stutterers produced more alpha activity in the low alpha subband (8-10 Hz) in frontal regions in all conditions. There were no group differences in the high alpha subband (10.5-13 Hz). There were no hemispheric differences in frontal regions. Increased cortical activity and increased sensitivity to stimuli support the proposed hypothesis that stutterers experience excess sensory stimulation while attempting motor plan assembly, suggestive of stuttering as a disorder of attention. / Ph. D.
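The band-power comparisons described above (low/high theta and low/high alpha subbands) can be illustrated with a short spectral sketch; the sampling rate, segment length, and Welch parameters below are assumptions, not the study's recording settings.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical single-channel EEG segment (e.g., a frontal site), 60 s at 256 Hz.
fs = 256.0
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(60 * fs))

# Subbands used in the study (Hz).
bands = {
    "low theta": (3.0, 5.0),
    "high theta": (5.5, 7.5),
    "low alpha": (8.0, 10.0),
    "high alpha": (10.5, 13.0),
}

# Welch power spectral density, then absolute power per band by integrating
# the PSD over each frequency range.
freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    power = np.trapz(psd[mask], freqs[mask])
    print(f"{name:>10s} ({lo:>4.1f}-{hi:>4.1f} Hz): {power:.3f} µV²")
```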
68

Caractérisation architecturale haute-résolution des lobes turbiditiques sableux confinés : exemple de la formation des Grès d'Annot (Eocène-Oligocène, SE France). / High-resolution architectural characterization of sand-rich confined turbidite lobes : Examples from the Annot Sandstone Formation (Eocene-Oligocene, SE France)

Etienne, Samuel 13 December 2012 (has links)
The siliciclastic Eocene-Oligocene Annot Sandstone Formation (SE France) is composed of a thick series of gravity-flow deposits and represents the Late Eocene to Early Oligocene syntectonic, northward infill of relatively small, complex foreland basins developed in front of the Alpine orogen. The relatively distal deposits of this turbidite system correspond to submarine sand-rich lobes: sandy bodies with a tabular, near-isopachous geometry at the multi-kilometre scale. This study brings new quantitative data on these terminal deposits and focuses on their internal architecture from the depositional-event scale down to the elementary-object scale. It reveals an extreme complexity in the distribution of facies, sedimentary structures and bedforms, linked to a wide variability of transport and depositional processes that generate elementary objects with distinct geometries and fills. A longitudinal distribution model of these elementary objects (from proximal, vertically stacked channelized lobes to distal tabular lobes) and of their associated heterogeneities has been established; such features had not previously been described in detail in sand-rich turbidite deposits. This high internal variability necessarily implies heterogeneities in petrophysical characteristics (porosity, permeability) and reservoir connectivity that may significantly affect fluid circulation in analogous reservoir systems. For comparison, a complementary study of sheet-like calciturbidite lobes from a carbonate gravity-flow system (Guwayza Formation, Middle Jurassic, northern Oman), deposited in a passive-margin setting, is presented in order to discuss the relative importance of sedimentary processes versus geodynamic context in controlling lobe variability. Together, these results call for a reconsideration of both the sedimentary processes involved in sand-rich lobes and the reservoir models that can be built from field analogues.
69

Study and optimization of 2D matrix arrays for 3D ultrasound imaging / Etude et optimisation de sondes matricielles 2D pour l'imagerie ultrasonore 3D

Diarra, Bakary 11 October 2013 (has links)
3D ultrasound imaging is a fast-growing medical imaging modality. In addition to its numerous advantages (low cost, non-ionizing beam, portability), it allows anatomical structures to be represented in their natural form, which is always three-dimensional. Relatively slow mechanical scanning probes tend to be replaced by two-dimensional matrix arrays, which extend the conventional 1D probe in both the lateral and elevation directions. This 2D arrangement of the elements allows the ultrasonic beam to be steered throughout space. Usually, the piezoelectric elements of a 2D array probe are aligned on a regular grid and spaced by a distance (the pitch) subject to the spatial sampling law (the inter-element distance must be shorter than half a wavelength) to limit the impact of grating lobes. This physical constraint leads to a multitude of small elements: the 2D equivalent of a 1D probe of 128 elements contains 128 x 128 = 16,384 elements. Connecting such a high number of elements is a real technical challenge, as the number of channels in current ultrasound scanners rarely exceeds 256. The solutions proposed to control this type of probe implement multiplexing or element-count reduction techniques, generally based on a random selection of the elements ("sparse array"). These methods suffer from a low signal-to-noise ratio due to the energy loss linked to the small number of active elements. To limit this loss of performance, optimization remains the most suitable solution. The first contribution of this thesis is an extension of the sparse-array technique combined with an optimization method based on the simulated annealing algorithm. This optimization reduces the number of elements that need to be connected according to the expected characteristics of the ultrasound beam and limits the energy loss compared with the initial dense array. The second contribution is a completely new approach that adopts an off-grid positioning of the elements of the matrix array, removing the grating lobes and freeing the design from the spatial sampling condition. This new strategy allows larger elements to be used, leading to a much smaller number of elements for the same probe surface. The active surface of the probe is maximized, which results in a higher output energy and thus a better sensitivity. It also allows a wider sector to be scanned, since the grating lobes are very small relative to the main lobe. The random choice of the element positions and of their apodization (weighting) is again optimized by simulated annealing. The proposed methods are systematically compared with the dense array through numerical simulations under realistic conditions; these simulations demonstrate a real potential of the developed techniques for 3D imaging. A 2D probe of 8 x 24 = 192 elements was manufactured by Vermon (Vermon SA, Tours, France) to test the element-selection methods in an experimental setting. The comparison between simulations and experimental results validates the proposed methods and proves their feasibility.
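The spatial-sampling constraint and the sparse-array optimization mentioned above can be illustrated with a small, self-contained sketch: a 1D cut through the aperture, an assumed 3 MHz center frequency, and a peak-sidelobe cost function stand in for the thesis's actual 2D geometry and optimization criterion.

```python
import numpy as np

# Illustrative parameters (not from the thesis): 3 MHz probe in soft tissue.
c, f0 = 1540.0, 3e6
lam = c / f0                      # wavelength ~0.51 mm
pitch = lam / 2                   # spatial-sampling limit on the inter-element pitch
k = 2 * np.pi / lam

# 1D cut through the aperture: 64 element positions, of which only 16 can be
# wired to scanner channels (a stand-in for the 2D sparse-array problem).
n_elem, n_active = 64, 16
x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch
theta = np.linspace(-np.pi / 2, np.pi / 2, 1441)

def peak_sidelobe_db(mask):
    """Peak sidelobe (dB) of the broadside array factor for a boolean element mask."""
    af = np.abs(np.exp(1j * k * np.outer(np.sin(theta), x[mask])).sum(axis=1))
    af /= af.max()
    main = np.abs(np.sin(theta)) < 2 * lam / (n_elem * pitch)  # crude main-lobe zone
    return 20 * np.log10(af[~main].max())

# Simulated annealing over element selections: swap one active and one inactive
# element per step, always accept improvements, accept degradations with a
# temperature-dependent probability.
rng = np.random.default_rng(0)
mask = np.zeros(n_elem, dtype=bool)
mask[rng.choice(n_elem, n_active, replace=False)] = True
cost, temp = peak_sidelobe_db(mask), 5.0
for step in range(2000):
    trial = mask.copy()
    on = rng.choice(np.flatnonzero(trial))
    off = rng.choice(np.flatnonzero(~trial))
    trial[on], trial[off] = False, True
    new_cost = peak_sidelobe_db(trial)
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
        mask, cost = trial, new_cost
    temp *= 0.998                 # geometric cooling schedule
print(f"Peak sidelobe of the optimized sparse layout: {cost:.1f} dB")
```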
70

Techniques de contrôle de la réflexion d’une onde plane à l’aide de l’optique de transformation et la modulation d’impédance de surface - application à l’aplatissement du réflecteur rétro-directif / Reflection control techniques of a plane wave using transformation optics and surface impedance modulation - Application to the flattening of the retro-directive reflector

Haddad, Hassan 27 November 2018 (has links)
In recent years, there has been growing interest in flattened retro-directive reflectors intended to replace the conventional dihedral reflector, which is too bulky for many applications. This thesis first investigates two different techniques for reducing the thickness of a dihedral reflector: transformation optics, which modifies the material constitution of its interior volume, and surface impedance modulation, which introduces an impedance distribution over its surface. The possibility of combining these two techniques to take advantage of their complementary benefits is also examined. The second part of the thesis investigates the origin of the parasitic lobes produced by surface-impedance-modulated panels and proposes new design rules to mitigate their levels. Finally, a practical implementation is proposed for a generalized surface impedance modulation that uses complex impedances and outperforms a panel implementing the classical modulation.
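The abstract does not spell out the modulation law, but a first-order way to see how a periodic reflection-phase (impedance) modulation can emulate a retro-directive reflector is the generalized law of reflection; the sketch below uses an assumed design frequency and incidence angle, and only tracks the main reflected lobe, not the higher-order harmonics behind the parasitic-lobe study.

```python
import numpy as np

# Generalized law of reflection for a surface with reflection-phase gradient
# dphi/dx:  sin(theta_r) = sin(theta_i) + (lambda / (2*pi)) * dphi/dx.
# Retro-reflection (theta_r = -theta_i) at a chosen design incidence therefore
# needs dphi/dx = -2*k0*sin(theta_i), i.e. a modulation period of
# Lambda = lambda / (2*sin(theta_i)).  Illustrative numbers, not from the thesis.
f0 = 10e9                                 # assumed 10 GHz design frequency
c0 = 299_792_458.0
lam = c0 / f0                             # free-space wavelength ~30 mm
theta_design = np.radians(30.0)           # assumed design incidence angle

period = lam / (2 * np.sin(theta_design))
print(f"Required modulation period: {period * 1e3:.1f} mm")

# At incidence angles other than the design angle, the same fixed gradient
# steers the main reflected lobe away from the source direction.
for deg in (20.0, 30.0, 40.0):
    s_r = np.sin(np.radians(deg)) - lam / period
    if abs(s_r) <= 1:
        print(f"incidence {deg:>4.1f} deg -> main reflected lobe at {np.degrees(np.arcsin(s_r)):6.1f} deg")
    else:
        print(f"incidence {deg:>4.1f} deg -> main lobe evanescent (no propagating reflection)")
```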
