1

Two conceptions of the mind

Aguda, Benjamin J. 01 December 2011
Since the cognitive revolution of the last century, the mind has been conceived of as computer-like. Like a computer, the brain was assumed to be a physical structure (hardware) upon which a computational mind (software) was built. The mind was seen as a collection of independent programs, or modules, each with its own specific task. These modules took sensory input ("data") and transduced it into language-like representations used in mental computations. Recently, a new conception of the mind has developed: grounded cognition. According to this model, sensory stimuli are stored in the original format in which they were received and are recalled through association mechanisms. Rather than being language-like, representations are multimodal. The manipulation of these multimodal representations requires processing distributed throughout the brain. A new holistic model of mental architecture has developed in which the concerted activity of the brain's modal systems produces functional systems that are intimately codependent with one another. The purpose of this thesis is to explore both the modular and the multimodal theories of mental architecture. Each will be described in detail along with its supporting paradigm, cognitivism or grounded cognition. After these expositions I will offer support for my own position regarding the two theories before suggesting avenues for future research.
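As a rough illustration of the architectural contrast this thesis explores, the toy sketch below caricatures a modular pipeline that transduces sensory input into amodal, language-like symbols, versus a grounded system that stores multimodal traces in their original format and recalls them by association. It is a deliberately simplistic sketch for intuition only; none of it is drawn from the thesis, and all names are hypothetical.

```python
# Toy caricature of the two conceptions of the mind (illustrative only).

# Modular / cognitivist view: specialized modules transduce sensory "data"
# into amodal symbols, which a central processor then manipulates.
def modular_mind(sensory_input: dict) -> str:
    symbols = [f"{modality}:{value}" for modality, value in sensory_input.items()]  # transduction
    return " AND ".join(symbols)  # symbol-level computation

# Grounded view: experiences are stored as multimodal traces in their
# original format and later recalled by association (here, crude key overlap).
class GroundedMind:
    def __init__(self):
        self.traces = []  # multimodal episodes, stored as-is

    def store(self, episode: dict):
        self.traces.append(episode)

    def recall(self, cue: dict) -> dict:
        # Re-activate the stored episode most similar to the current cue.
        return max(self.traces, key=lambda t: len(set(t) & set(cue)), default={})

if __name__ == "__main__":
    print(modular_mind({"vision": "red", "sound": "bark"}))
    gm = GroundedMind()
    gm.store({"vision": "red", "sound": "bark", "action": "pet"})
    print(gm.recall({"sound": "bark"}))
```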
2

Mesure des tendances comportementales d'approche/évitement : le rôle des processus sensorimoteurs / Measure of approach/avoidance behavioral tendencies: the role of sensorimotor processes

Rougier, Marine 26 September 2017
This work focuses on the processes underlying the reactivation of approach/avoidance behavioral tendencies. In this thesis, we defend the idea that the reactivation of these tendencies depends on the sensorimotor information present in the situation and on its similarity with past sensorimotor information that was associated with approach/avoidance behaviors. In the first part, we tested whether reproducing in the task the sensorimotor (here, visual) information most representative of approach/avoidance led to a better reactivation of these behavioral tendencies. Across eight experiments, approach/avoidance effects were stronger when the visual information displayed in the task was representative of these actions than when this visual information was absent. Moreover, the effects obtained with this visual information in the task were strong and replicable. In the next two parts, we tested whether the reactivation of approach/avoidance tendencies depends on personal characteristics theoretically associated with individuals' actual behavior. Across seven experiments, we showed that the reactivated approach/avoidance tendencies varied as a function of personal characteristics theoretically associated with behavior toward social groups and toward a consumer product. Overall, this work is consistent with the idea that the reactivation of approach/avoidance tendencies relies on the sensory information that accompanied these past actions.
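In approach/avoidance tasks of this kind, the strength of the effect is commonly quantified as a compatibility effect on reaction times. The sketch below shows that computation on invented data, comparing a condition with representative visual information to one without; condition names, data layout and values are all hypothetical and not taken from the thesis.

```python
import statistics

# Hypothetical trial records: (condition, compatibility, reaction time in ms).
# "compatible" = approach positive / avoid negative; "incompatible" = the reverse.
trials = [
    ("visual_flow", "compatible", 612), ("visual_flow", "incompatible", 688),
    ("visual_flow", "compatible", 598), ("visual_flow", "incompatible", 701),
    ("no_flow", "compatible", 634),     ("no_flow", "incompatible", 661),
    ("no_flow", "compatible", 641),     ("no_flow", "incompatible", 655),
]

def compatibility_effect(condition: str) -> float:
    """Mean RT on incompatible trials minus mean RT on compatible trials."""
    mean_rt = lambda comp: statistics.mean(
        rt for cond, c, rt in trials if cond == condition and c == comp
    )
    return mean_rt("incompatible") - mean_rt("compatible")

for cond in ("visual_flow", "no_flow"):
    # A larger value indicates a stronger approach/avoidance effect.
    print(cond, compatibility_effect(cond))
```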
3

The Perceptual Basis of Abstract Concepts in Polysemy Networks – An Interdisciplinary Study

Zhao, Tinghao 01 February 2018
No description available.
4

Simulation de scènes sonores environnementales : Application à l’analyse sensorielle et l’analyse automatique / Simulation of environmental acoustic scenes: Application to sensory and computational analyses

Lafay, Grégoire 08 December 2016
This thesis deals with the analysis of environmental sound scenes, the auditory result of mixing distinct but concurrent emitting sources. The sound environment is a complex object that opens the field of possible research beyond the more specific domains of speech and music. For a listener to make sense of a sonic environment, the process involved relies on both the perceived data and the context in which they are perceived. Whether the field of investigation is perception or machine learning, the experimenter must be, as far as possible, in control of the evaluated stimuli. Nevertheless, the sound environment needs to be studied in an ecological framework, using real sound recordings as stimuli rather than synthetic pure tones. We therefore propose a model of sound scenes that allows us to simulate complex sound environments from recordings of isolated sounds. The high-level structural properties of the simulated scenes -- such as the type of sources, their sound levels and the event density -- are set by the experimenter. Based on knowledge of the human auditory system, the model abstracts the sound environment as a composite object, a sum of sound sources. The usefulness of the proposed model is assessed in two areas of investigation. The first concerns soundscape perception: the model is used to propose an experimental protocol for studying the perceived pleasantness of urban soundscapes, where simulated data make it possible to assess precisely the impact of each sound source. The second tackles the issue of evaluation in machine listening, for which we use simulated data to rigorously assess the generalization capacities of automatic sound event detection systems.
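As a rough sketch of the composite-scene principle described above -- a scene built as a sum of isolated source recordings whose levels and event densities the experimenter controls -- consider the following. This is an illustrative approximation, not the simulator developed in the thesis; the sampling rate, function names and parameters are assumptions made for the example.

```python
import numpy as np

SR = 44100  # assumed sampling rate (Hz)

def place_events(scene, event, onsets_s, gain_db):
    """Mix copies of an isolated recording into the scene at given onsets and level."""
    gain = 10 ** (gain_db / 20.0)
    for onset in onsets_s:
        start = int(onset * SR)
        end = min(start + len(event), len(scene))
        scene[start:end] += gain * event[: end - start]
    return scene

def simulate_scene(duration_s, sources, rng=np.random.default_rng(0)):
    """sources: list of dicts with an isolated 'signal', an 'events_per_min' density
    and a 'gain_db' level, i.e. the structural properties set by the experimenter."""
    scene = np.zeros(int(duration_s * SR))
    for src in sources:
        n_events = rng.poisson(src["events_per_min"] * duration_s / 60.0)
        onsets = rng.uniform(0, duration_s, size=n_events)
        place_events(scene, src["signal"], onsets, src["gain_db"])
    return scene

# Toy usage: two synthetic signals standing in for isolated source recordings.
bird = 0.1 * np.sin(2 * np.pi * 3000 * np.arange(0, 0.3, 1 / SR))
traffic = 0.2 * np.random.default_rng(1).standard_normal(int(1.5 * SR))
scene = simulate_scene(10.0, [
    {"signal": bird, "events_per_min": 12, "gain_db": -6.0},
    {"signal": traffic, "events_per_min": 4, "gain_db": 0.0},
])
```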
5

Reality-based brain-computer interaction

Sjölie, Daniel January 2011
Recent developments within human-computer interaction (HCI) and cognitive neuroscience have come together to motivate and enable a framework for HCI with a solid basis in brain function and human reality. Human cognition is increasingly considered to be critically related to the development of human capabilities in the everyday environment (reality). At the same time, increasingly powerful computers continue to make the development of complex applications with realistic interaction easier. Advances in cognitive neuroscience and brain-computer interfaces (BCIs) make it possible to use an understanding of how the brain works in realistic environments to interpret brain measurements and adapt interaction in computer-generated virtual environments (VEs). Adaptive and realistic computer applications have great potential for training, rehabilitation and diagnosis. Realistic interaction environments are important to facilitate transfer to everyday reality and to gain ecological validity. The ability to adapt the interaction is very valuable, as any training or learning must be done at the right level in order to optimize the development of skills. The use of brain measurements as input to computer applications makes it possible to obtain direct information about how the brain reacts to aspects of a VE. This provides a basis for the development of realistic and adaptive computer applications that target cognitive skills and abilities. Theories of cognition and brain function describe how such cognitive skills develop through internalization of interaction with the current environment. By considering how internalization leads to the neural implementation and continuous adaptation of mental simulations in the brain, it is possible to relate designed phenomena in a VE to brain measurements. The work presented in this thesis contributes to a foundation for the development of reality-based brain-computer interaction (RBBCI) applications by combining VR with emerging BCI methods, based on an understanding of the human brain in human reality. RBBCI applications can be designed and developed to interact directly with the brain by interpreting brain measurements as responses to deliberate manipulations of a computer-generated reality. As the application adapts to these responses, an interaction loop is created that excludes the conscious user. The computer interacts with the brain, through (the virtual) reality.
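The interaction loop described above -- manipulate the virtual environment, interpret a brain measurement as the response to that manipulation, adapt, and repeat without involving the conscious user -- can be sketched schematically. The sketch below is illustrative only; the signal source, the interpretation rule and the adapted parameter are stand-ins, not the thesis's implementation.

```python
import random

def read_brain_measure():
    """Stand-in for a BCI measurement (e.g. a band power or classifier output in [0, 1])."""
    return random.random()

def interpret(measure, threshold=0.5):
    """Crude interpretation of the measurement as engagement vs. overload."""
    return "engaged" if measure >= threshold else "overloaded"

def adapt_environment(difficulty, state, step=0.1):
    """Adapt a single VE parameter so the task stays at the right level."""
    if state == "engaged":
        return min(1.0, difficulty + step)   # push further
    return max(0.0, difficulty - step)       # ease off

def rbbci_loop(n_steps=10):
    difficulty = 0.5
    for _ in range(n_steps):
        # 1. Deliberate manipulation of the computer-generated reality (render at `difficulty`).
        # 2. Interpret the brain measurement as a response to that manipulation.
        state = interpret(read_brain_measure())
        # 3. Adapt, closing the loop without a conscious decision from the user.
        difficulty = adapt_environment(difficulty, state)
    return difficulty

if __name__ == "__main__":
    print(rbbci_loop())
```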
6

Experientially grounded language production: Advancing our understanding of semantic processing during lexical selection

Vogt, Anne 05 April 2023
The process of lexical selection, i.e. choosing the right words to get an intended message across, is not well understood. In particular, meaning aspects grounded in sensorimotor experience and their role during lexical selection have barely been investigated. Here, we examined the role of experientially grounded meaning aspects in two studies in which participants produced a noun to complete sentences describing scenes. In Study 1, the visual appearance of the sentence fragments was manipulated so that they appeared to move upwards or downwards on the screen. In Study 2, participants moved their head up or down while listening to the sentence fragments. We investigated whether the spatial properties of the freely chosen nouns were influenced by these spatial manipulations as well as by the spatial properties of the sentences. The vertical visual manipulation in Study 1 did not influence the spatial properties of the produced words. The head movements in Study 2, however, did: after upward movements the referents of the produced words were located higher in space than after downward movements (and vice versa). Moreover, this effect of movement grew stronger with participants' interoceptive sensibility. In addition, the spatial properties of the stimulus sentences influenced the spatial properties of the produced words in both studies.
Thus, experientially grounded meaning aspects, whether embedded in language or reactivated through bodily activity, can influence which words we choose when speaking, and interindividual differences can moderate these effects. The findings are related to current theories of semantics. Furthermore, this dissertation extends the methodological repertoire of language production research by showing, in Study 3, how language production studies with overt articulation in picture naming tasks can be run online.
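The moderation pattern reported for Study 2 -- the effect of head movement on the verticality of the produced words increasing with interoceptive sensibility -- can be illustrated with a toy computation. The data and variable names below are invented for the example; the thesis's actual analysis is not reproduced here.

```python
import statistics

# Invented records: (interoceptive sensibility score, head movement, verticality of produced word).
data = [
    (0.2, "up", 0.52), (0.2, "down", 0.50), (0.3, "up", 0.55), (0.3, "down", 0.51),
    (0.8, "up", 0.68), (0.8, "down", 0.42), (0.9, "up", 0.71), (0.9, "down", 0.40),
]

def movement_effect(rows):
    """Mean verticality after upward movements minus after downward movements."""
    up = statistics.mean(v for _, m, v in rows if m == "up")
    down = statistics.mean(v for _, m, v in rows if m == "down")
    return up - down

median_sens = statistics.median(s for s, _, _ in data)
low = [r for r in data if r[0] <= median_sens]
high = [r for r in data if r[0] > median_sens]
print("movement effect, low sensibility:", movement_effect(low))
print("movement effect, high sensibility:", movement_effect(high))  # larger, mirroring the moderation
```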
