  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Scene statistics in 3D natural environments

Liu, Yang, 1976- 13 December 2010 (has links)
In this dissertation, we conducted a stereoscopic eye-tracking experiment using naturalistic stereo images. We analyzed low-level 2D and 3D scene features at binocular fixations and at randomly selected locations. The results reveal that humans tend to fixate on regions with higher luminance variation but lower disparity variation. Because luminance and depth changes often co-occur in natural environments, this dichotomy between luminance and disparity features motivated a careful study of the statistics of 2D and 3D scene properties. Using a range-map database, we studied the distribution of disparity in natural scenes. The natural disparity distribution has a high peak at zero and heavy tails, similar to a Laplace distribution. The relevance of the natural disparity distribution to other studies in neurobiology and visual psychophysics is discussed in detail. We also studied luminance, range, and disparity statistics in natural scenes using a co-registered luminance-range database. The distributions of bandpass 2D and 3D scene features are well modeled by generalized Gaussian distributions. There are positive correlations between bandpass luminance and depth, which can be captured by varying the shape parameters in the generalized Gaussian probability density functions. In another study, on suprathreshold luminance and depth discontinuities, we show that a significant luminance edge is much more likely at a significant depth edge than on a homogeneous depth surface; likewise, a significant depth edge occurs at a significant luminance edge with greater probability than in homogeneous luminance regions. Again, the dependency between luminance and depth discontinuities can be modeled successfully by generalized Gaussians. We applied our statistical models of 3D natural scenes to stereo correspondence.
A Bayesian framework is proposed that incorporates the bandpass disparity prior and the luminance-disparity dependency in the likelihood function. We compared our algorithm with a classical simulated-annealing method based on heuristically defined energy functions. The computed disparity maps show clear improvements, both perceptually and objectively.
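The generalized-Gaussian modeling the abstract describes can be sketched as follows. This is an illustrative fit on synthetic Laplace-distributed samples, not the dissertation's code or data; the sample size and scale are assumptions.

```python
# Illustrative sketch: fitting a generalized Gaussian to heavy-tailed,
# zero-peaked samples, as the abstract describes for bandpass disparity
# statistics. Laplace noise stands in for real disparity data.
from scipy.stats import gennorm, laplace

# Stand-in "disparity" samples drawn from a Laplace distribution (a
# generalized Gaussian with shape parameter beta = 1).
samples = laplace.rvs(loc=0.0, scale=1.0, size=20000, random_state=0)

# Fit with the location fixed at zero, matching the peak-at-zero observation.
beta, loc, scale = gennorm.fit(samples, floc=0.0)
print(f"fitted shape beta = {beta:.2f}, scale = {scale:.2f}")
```

A recovered shape parameter near 1 confirms Laplace-like tails; a Gaussian would instead yield a shape parameter near 2.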
42

Investigating Memory for Spatial and Temporal Relations with Eye Movement Monitoring

Rondina II, Renante 26 November 2012 (has links)
Using eye movement monitoring (EMM) techniques, investigators have been able to examine the processes that support relational memory as they occur online. However, EMM studies have focused only on memory for spatial relations, leaving a lack of EMM evidence for temporal relations. In the present study, participants therefore performed a recognition memory task with stimuli that varied in their spatial and temporal relations. They were presented with a sequence of objects in a unique spatial configuration and were instructed to detect changes in either the spatial or the temporal relations between study and test presentations. The results provide novel EMM evidence for an interaction between spatial and temporal memory, and for the obligatory effects of relational memory processes on eye movement behaviours. The current study was also able to test predictions of the temporal context model (Howard & Kahana, 2002), and found evidence for a temporal contiguity effect.
44

Out of this word: the effect of parafoveal orthographic information on central word processing

Dare, Natasha January 2010 (has links)
The aim of this thesis is to investigate the effect of parafoveal information on central word processing. This topic bears on two controversial areas of research: the allocation of attention during reading, and letter processing during word recognition. Researchers into the role of attention during reading are split into two camps, with some believing that attention is allocated serially to consecutive words and others that it is spread across multiple words in parallel. This debate has been informed by recent experiments testing a key prediction of the parallel-processing theory: that parafoveal and foveal processing occur concurrently. However, there is a gap in the literature for tightly controlled experiments to further test this prediction. In contrast, the study of the processing that letters undergo during word recognition has a long history, with many researchers concluding that letter identity is processed only conjointly with letter 'slot' position within a word, known as 'slot-based' coding. However, recent innovative studies have demonstrated that more word priming is produced by prime letter strings containing letter transpositions than by primes containing letter substitutions, although this work has not been extended to parafoveal letter prime presentations. This thesis also discusses the neglected question of how research into these separate topics of text reading and isolated word recognition can be integrated via parafoveal processing. It presents six experiments designed to investigate how our responses to a central word are affected by varying its relationship with simultaneously presented parafoveal information.
Experiment 1 introduced the Flanking Letters Lexical Decision task, in which a lexical decision was made to words flanked by bigrams either orthographically related or unrelated to the response word; the results indicated that there is parafoveal orthographic priming, but did not support the 'slot-based' coding theory, as letter order was unimportant. Experiments 2-4 involved eye-tracking of participants who read sentences containing a boundary change that allowed the presentation of an orthographically related word in parafoveal vision. Experiment 2 demonstrated that an orthographically related word at position n+1 reduces first-pass fixations on word n, indicating parallel processing of these words. Experiment 4 replicated this result, and also showed that altering the letter identity of word n+1 reduced orthographic priming whereas altering letter order did not, indicating that slot-based coding of letters does not occur during reading. However, Experiment 3 found that an orthographically related word presented at position n-1 did not prime word n, signifying the influence of reading direction on parafoveal processing. Experiment 5 investigated whether the parallel processing that words undergo during text reading conditions our representations of isolated words; lexical decision times to words flanked by bigrams that formed plausible or implausible contexts did not differ. Lastly, one possible cause of the reading disorder dyslexia is under- or over-processing of parafoveal information. Experiment 6 therefore replicated Experiment 1 with a sample of dyslexia sufferers, but found no interaction between reading ability and parafoveal processing.
Overall, the results of this thesis lead to the conclusion that there is extensive processing of parafoveal information during both reading (indicating parallel processing) and word recognition (contraindicating slot-based coding), and that underpinning both our reading and word recognition processes is the flexibility of our information-gathering mechanisms.
45

Effects of Gender and Gaze Direction on the Visual Exploration of Male and Female Bodies

Palanica, Adam January 2011 (has links)
The present study used eye-tracking to investigate whether a model’s gaze direction influences the way observers look at the entire body of the model and how this interacts with the observer and the model’s gender. Participants viewed individual male and female computer agents during both a free-viewing task and a rating task to evaluate the attractiveness of each character. The results indicated that both male and female participants primarily gazed at the models’ faces. Participants also spent more time scanning the face when rating the attractiveness of each model. Observers tended to scan faces with a direct gaze longer than faces with an averted gaze for both the free-viewing and attractiveness rating tasks. Lastly, participants evaluated models with a direct gaze as more attractive than models with an averted gaze. As these results occurred for pictures of computer agents, and not actual people, this suggests that direct gaze, and faces in general, are powerful for engaging attention. In summary, both task requirements and gaze direction modified face viewing preference.
46

The use of facial features in facial expression discrimination

Neath, Karly January 2012 (has links)
The present four studies are the first to examine the effect of presentation time on accurate facial expression discrimination while using concurrent eye movement monitoring to ensure fixation on specific features during brief presentation of the entire face. Recent studies using backward masking and evaluating accuracy with signal detection methods (A′) have identified a happy-face advantage; however, differences between other facial expressions of emotion have not been reported. In each study, a specific exposure time before the mask (150, 100, 50, or 16.67 ms) and eight different fixation locations were used during the presentation of neutral, disgusted, fearful, happy, and surprised expressions. An effect of emotion was found across all presentation times, such that performance was greatest for happiness, followed by neutral, disgust, and surprise, with the lowest performance for fear. Fixation on facial features specific to an emotion did not improve performance and did not account for the accuracy differences between emotions. Rather, the results suggest that accuracy depends on the integration of facial features, and that this varies across emotions and with presentation time.
47

Probing the Representation of Decision Variables Using EEG and Eye Tracking

Morales, Pablo 06 September 2018 (has links)
Value-based decisions are among the most common types of decisions made by humans. A considerable body of work has investigated how different types of information guide such decisions, as well as how evaluations of their outcomes retroactively inform the parameters that guided them. Several open questions remain regarding the nature of the underlying representations of decision-relevant information. Of particular relevance is whether positive and negative information (i.e., rewards/gains vs. punishments/losses/costs) are treated as categorically distinct, or whether they are represented on a common scale. This question was examined across three studies using a variety of methods (traditional event-related potentials, multivariate pattern classification, and eye tracking) to obtain a more comprehensive picture of how decision-relevant information is represented. A common theme among the three studies was that positive and negative types of information seem to be, at least initially, represented as categorically distinct (whether gains vs. losses, or value vs. effort). Additionally, integration of different types of information appears to take place during the later phases of the decision period, which may also be when distortions in the representation of value information (e.g., loss aversion) occur. Overall, this body of work advances our understanding of the underpinnings of value-based decisions by providing additional insight into how decision-relevant information is represented in a dynamic and flexible manner.
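The loss aversion mentioned above is commonly formalized, outside this thesis, by a prospect-theory value function that treats gains and losses on separate curves. A minimal sketch, using the conventional Kahneman-Tversky parameter estimates as assumed values rather than anything fitted in these studies:

```python
# Illustrative sketch, not the thesis's model: a prospect-theory value
# function. Gains follow x**alpha; losses are scaled by lam > 1, so a
# loss looms larger than an equally sized gain.
def subjective_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function with conventional parameters."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A loss of 10 is subjectively larger in magnitude than a gain of 10:
print(subjective_value(10), subjective_value(-10))
```

The kink at zero and the steeper loss limb are exactly the kind of representational distortion the abstract suggests may arise late in the decision period.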
48

Machine learning-based human observer analysis of video sequences

Al-Raisi, Seema F. A. R. January 2017 (has links)
The research contributes to the field of video analysis by proposing novel approaches to automatically generating human-observer performance patterns that can be used to advance modern video analytic and forensic algorithms. Eye trackers and eye movement analysis are employed in medical research, psychology, cognitive science, and advertising, and the eye movement data they collect can be analyzed with machine learning and statistical approaches. The study therefore attempts to understand the visual attention patterns of people observing captured CCTV footage. It examines whether observers' gaze behaviour depends on the instructions they are given or on the knowledge they acquire during the surveillance task; whether observers' attention to human targets differs across areas of the tracked person's body; and whether pattern analysis and machine learning can effectively replace the current conceptual and statistical approaches to analyzing eye-tracking data captured during a CCTV surveillance task. A pilot study, taking around 30 minutes per participant, involved observing 13 different pre-recorded CCTV clips of public spaces. Participants were given a clear written description of the targets they should find in each video. The study included 24 participants with varying levels of experience in analyzing CCTV video. A Tobii eye-tracking system recorded the participants' eye movements, and the captured data were analyzed using statistical tools (SPSS) and machine learning algorithms (WEKA).
The research concluded that differences in behavioural patterns exist that could be used to classify the study's participants if appropriate machine learning algorithms are employed. Previous research on video analytics was limited to a few projects in which the observed human was treated as a single object, so detailed analysis of observer attention patterns based on the articulation of human body parts had not been investigated. All previous attempts at analyzing observers' visual attention in CCTV video analytics and forensics used either conceptual or statistical approaches, which are limited in making predictions and detecting hidden patterns. A novel approach of articulating the human targets to be identified and tracked in a visual surveillance task led to constrained results, which demanded advanced machine learning algorithms for classifying participants. The research conducted within this thesis encountered several practical data-collection and analysis challenges in formal CCTV-operator surveillance tasks, which made it difficult to obtain cooperation from expert CCTV operators for data collection. Had expert operators rather than novices been employed, a more discriminative and accurate classification might have been achieved. Machine learning approaches such as ensemble learning and tree-based algorithms can be applied where a more detailed analysis of human behaviour is needed. Traditional machine learning approaches are also being challenged by recent advances in convolutional neural networks and deep learning, so future research could replace the traditional approaches employed in this study with convolutional neural networks.
The current research was limited to 13 videos, with different descriptions given to participants for identifying and tracking different individuals. It can be expanded to accommodate more complex task demands and changes to the analysis process.
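The participant-classification idea above can be sketched as follows. The thesis used WEKA; this is an equivalent scikit-learn illustration on synthetic data, and every feature name and numeric value below is an assumption, not a result from the study.

```python
# Illustrative sketch: classifying observers (novice vs. expert) from
# eye-movement features, as the thesis does with WEKA. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 120  # hypothetical observation windows per group
# Hypothetical per-window features: fixation count, mean fixation
# duration (ms), mean saccade amplitude (deg), dwell time on target (%).
novice = rng.normal([24, 260, 4.5, 30], [4, 40, 1.0, 8], size=(n, 4))
expert = rng.normal([18, 310, 3.2, 45], [4, 40, 1.0, 8], size=(n, 4))
X = np.vstack([novice, expert])
y = np.array([0] * n + [1] * n)  # 0 = novice, 1 = expert

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

A tree ensemble is used here because the thesis itself points to ensemble and tree-based algorithms as the appropriate family for this kind of behavioural classification.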
49

Description of elements of the articulation of the visual (eye-tracking) and verbal registers in maintaining schizophrenic interaction

Padroni, Stéphanie 27 April 2015 (has links)
The research presented here is one application of the "InterHumain" research project, and its objective is twofold: to develop the knowledge currently available on how interaction functions in general, and to describe more accurately the interactional skills of schizophrenic patients in order to contribute to the development of care techniques suited to their cognitive disorders. To design a model of multimodal interaction, we analyzed face-to-face interactions between an experimenter (a psychologist) and a schizophrenic patient. This led us to compare properties of the interactional skills of "normal" subjects with those of schizophrenic patients, in order to identify the capabilities and difficulties they manifest in language and eye movements. In this thesis, we focus the analyses on two aspects of interaction: the sequentiality of speech and saccadic eye movements. The model was tested using the faceLAB5 system, an eye-movement recording and tracking ("eye-tracking") system that we duplicated in order to obtain data on both interlocutors in an interaction.
In addition, the results are put into perspective with data from neuropsychological evaluations, in particular to identify the possible role of the frontal lobe in maintaining the interaction. The main results are consistent with those of many earlier studies that used a single eye-tracking system, notably that schizophrenic patients produce more saccadic eye movements than control participants. The dual-system design, however, also allows analysis of the interlocutor's behaviour, that is, the experimenter's: the experimenter shows a decrease in saccade production when interacting with schizophrenic patients. Moreover, the neuropsychological test results show that, despite some previously identified impairments, certain cognitive abilities of schizophrenic patients appear preserved. This observation could anchor the restoration of cognitive capacities that are declining or deficient in some schizophrenic patients, through the development of specific, adapted therapies. This would allow them to make the best use of all their cognitive abilities, whether in daily life or with a view to long-term social and professional integration.
50

Development of a Multisensorial System for Emotion Recognition

FLOR, H. R. 17 March 2017 (has links)
Automated reading and analysis of human emotion has the potential to be a powerful tool for a wide variety of applications, such as human-computer interaction systems, but at the same time it is a very difficult problem because human communication is very complex. Humans employ multiple sensory systems in emotion recognition; in the same way, an emotionally intelligent machine requires multiple sensors to create an affective interaction with users. This Master's thesis therefore proposes the development of a multisensorial system for automatic emotion recognition. The system is composed of three sensors, each exploring different emotional aspects: the eye tracker, using the IR-PCR technique, supported studies of visual social attention; the Kinect, in conjunction with the FACS-AU technique, supported a tool for facial expression recognition; and the thermal camera, using the FT-RoI technique, was employed to detect facial thermal variation. Multisensorial integration of the system yielded a more complete and varied analysis of emotional aspects, allowing evaluation of focal attention, valence comprehension, valence expression, facial expression, valence recognition, and arousal recognition.
Experiments were performed with sixteen healthy adult volunteers and 105 healthy child volunteers. The resulting system was able to detect eye gaze, recognize facial expressions, and estimate valence and arousal for emotion recognition. The system also shows potential to analyze people's emotions from facial features using contactless sensors in semi-structured environments such as clinics, laboratories, or classrooms, and to become an embedded tool in robots, endowing these machines with emotional intelligence for more natural interaction with humans. Keywords: emotion recognition, eye tracking, facial expression, facial thermal variation, multisensorial integration
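The multisensorial integration described above can be illustrated with a simple late-fusion rule: averaging per-sensor class probabilities. This is an assumed fusion scheme for illustration, not the thesis's exact method, and the labels and probability values below are hypothetical.

```python
# Illustrative sketch (assumed fusion scheme): combining emotion-class
# probabilities from the three sensors by averaging (late fusion).
import numpy as np

labels = ["happy", "neutral", "sad"]
# Hypothetical per-sensor probability estimates for one stimulus:
eye_tracker = np.array([0.50, 0.30, 0.20])  # IR-PCR gaze-based classifier
kinect_facs = np.array([0.70, 0.20, 0.10])  # FACS-AU expression classifier
thermal_cam = np.array([0.40, 0.40, 0.20])  # FT-RoI thermal classifier

fused = np.mean([eye_tracker, kinect_facs, thermal_cam], axis=0)
prediction = labels[int(np.argmax(fused))]
print(prediction, fused.round(3))  # "happy" wins after fusion
```

Averaging keeps a single uncertain sensor (here the thermal camera) from overriding agreement between the other two, which is one reason multisensor systems can outperform any single modality.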
