631

Reading second language subtitles : a case study of South African viewers reading in their native language and L2-English / Esté Hefer

Hefer, Esté January 2011 (has links)
Most South African subtitles are produced and broadcast in English despite the fact that English is the first language of only 8.2% of the entire population (Statistics South Africa, 2004). Therefore, current English subtitles are predominantly received as second language text. This poses questions as to how people perceive these subtitles, and if and how their reading of English second language (L2) subtitles differs from their reading of L1 (non-English) subtitles. In recent years, eye tracking has proven to be a valuable method in observing and measuring the eye movements of people watching and reading subtitles. In order to explain the use of eye tracking and to answer the question at hand, this study comprises a literature review and an empirical study. The literature review gives an in-depth account of previous studies that used eye tracking to study reading and elaborates on the parameters used to account for various findings. The two empirical components of this study examined the accessibility and effectiveness of English L2 subtitles by presenting native speakers of Afrikaans and Sesotho with subtitles displayed (a) in their native language, Afrikaans or Sesotho, and (b) in L2 English, while monitoring their eye movements with an SMI iViewX™ Hi-Speed eye tracker and comparing the data with that of English L1 speakers reading English subtitles. Participants were also given static text to read (accompanied by a corresponding comprehension test) in order to see if there was a relation between participants' first- and second-language reading of static text and their reading of subtitles. Additionally, participants were given a questionnaire on their reading behaviour, reading preferences, access to subtitled television programming and reading of subtitles, in order to find explanations for patterns in the data. The initial hypothesis was that there would be a difference in L1 and L2 subtitle reading and attention allocation as measured by key eye-tracking parameters. Using ANOVAs, statistically significant differences were indeed found, but the differences were much more pronounced for the Sesotho L1 speakers than for the Afrikaans L1 speakers. After excluding possible confounding factors that were analysed in an attempt to refute this hypothesis, the conclusion was that participants inherently read L1 and L2 subtitles differently. The hypothesis is therefore supported. However, the difference in L1 and L2 subtitle reading was not the only significant finding: the Sesotho L1 speakers' reading data revealed a greater underlying issue, namely literacy. The problem of low literacy levels can be attributed to the participants' socioeconomic background and history, and needs to be addressed urgently. Recommendations for future research include that the current study be broadened in terms of scope, sample size, representativeness and experimental material; that the focus be shifted to the other languages spoken in South Africa, for which users do not have a shared sense of bilingualism and for which L1 skills and levels of L1 literacy vary; and that the relation between the reading of static text and subtitle reading be explored further in order to ensure adequate subtitle reading in terms of proportional attention allocation.
However, the issue of low literacy levels will have to be addressed urgently; only then will the South African viewing public be able to gain full access to any form of broadcast communicative material or media, and only then will they be able to benefit from subtitling and all that it offers. / North-West University (South Africa). Vaal Triangle Campus.
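The kind of group comparison described in the abstract above can be illustrated with a minimal, hypothetical sketch: a one-way ANOVA on a single eye-tracking reading measure across three viewer groups. The group names, sample sizes and fixation-duration values below are invented placeholders, not data from the study.

```python
# Hypothetical sketch: one-way ANOVA comparing a single eye-tracking reading
# measure (mean fixation duration per subtitle, in ms) across three viewer
# groups, analogous to the L1/L2 comparisons described in the abstract.
# All data below are invented placeholders, not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Simulated mean fixation durations (ms) for three hypothetical groups.
afrikaans_l1 = rng.normal(loc=210, scale=25, size=30)   # reading Afrikaans subtitles
sesotho_l1   = rng.normal(loc=245, scale=30, size=30)   # reading Sesotho subtitles
english_l1   = rng.normal(loc=200, scale=22, size=30)   # reading English subtitles

f_stat, p_value = f_oneway(afrikaans_l1, sesotho_l1, english_l1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```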
633

An Embodied Account of Action Prediction

Elsner, Claudia January 2015 (has links)
Being able to generate predictions about what is going to happen next while observing other people’s actions plays a crucial role in our daily lives. Different theoretical explanations for the underlying processes of humans’ action prediction abilities have been suggested. Whereas an embodied account posits that predictive gaze relies on embodied simulations in the observer’s motor system, other accounts do not assume a causal role of the motor system for action prediction. The general aim of this thesis was to augment current knowledge about the functional mechanisms behind humans’ action prediction abilities. In particular, the present thesis outlines and tests an embodied account of action prediction. The second aim of this thesis was to extend prior action prediction studies by exploring infants’ online gaze during observation of social interactions. The thesis reports three eye-tracking studies that were designed to measure adults’ and infants’ predictive eye movements during observation of different manual and social actions. The first two studies used point-light displays of manual reaching actions as stimuli to isolate human motion information. Additionally, Study II used transcranial magnetic stimulation (TMS) to directly modify motor cortex activity. Study I showed that kinematic information from biological motion can be used to anticipate the goal of other people’s point-light actions and that the presence of biological motion is sufficient for anticipation to occur. Study II demonstrated that TMS-induced temporary lesions in the primary motor cortex selectively affected observers’ gaze latencies. Study III examined 12-month-olds’ online gaze during observation of a give-and-take interaction between two individuals. The third study showed that already at one year of age infants shift their gaze from a passing hand to a receiving hand faster when the receiving hand forms a give-me gesture compared to an inverted hand shape. The reported results from this thesis make two major contributions. First, Studies I and II provide evidence for an embodied account of action prediction by demonstrating a direct connection between anticipatory eye movements and motor cortex activity. These findings support the interpretation that predictive eye movements are driven by a recruitment of the observer’s own motor system. Second, Study III indicates that properties of social action goals influence infants’ online gaze during action observation. It further suggests that at one year of age infants begin to show sensitivity to social goals within the context of give-and-take interactions while observing from a third-party perspective.
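As a hypothetical illustration of how predictive gaze is commonly quantified in paradigms like those above, the sketch below computes a gaze anticipation latency: the time at which gaze first enters a goal area minus the time at which the observed hand arrives there, so that negative values indicate prediction. The area-of-interest coordinates, arrival time and fixations are invented placeholders, not stimuli or data from these studies.

```python
# Hedged sketch: gaze anticipation latency relative to action completion.
# Negative values mean the gaze reached the goal area before the hand did
# (predictive gaze); the AOI and all numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Fixation:
    t_start: float  # seconds from trial onset
    x: float        # gaze position in pixels
    y: float

GOAL_AOI = (800, 1000, 300, 500)   # xmin, xmax, ymin, ymax (hypothetical)
HAND_ARRIVAL_TIME = 1.40           # s, when the hand reaches the goal (hypothetical)

def in_aoi(fix, aoi):
    xmin, xmax, ymin, ymax = aoi
    return xmin <= fix.x <= xmax and ymin <= fix.y <= ymax

def gaze_anticipation_latency(fixations, aoi, arrival_time):
    """First gaze entry into the goal AOI minus the hand's arrival time."""
    for fix in fixations:
        if in_aoi(fix, aoi):
            return fix.t_start - arrival_time
    return None  # gaze never reached the goal area

trial = [Fixation(0.2, 400, 420), Fixation(0.9, 850, 410), Fixation(1.6, 860, 400)]
print(gaze_anticipation_latency(trial, GOAL_AOI, HAND_ARRIVAL_TIME))  # about -0.5 s: predictive
```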
634

Importance Prioritised Image Coding in JPEG 2000

Nguyen, Anthony Ngoc January 2005 (has links)
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROI) that are crucial to the interpretation of an image, and (2) compressing them in such a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme the importance in parts of an image containing ROIs would be higher than in other parts of the image. The encoding and prioritisation of ROIs means that the interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an 'importance map'. The map can be automatically generated using image processing algorithms that result in a limited number of ROIs, or manually constructed by hand-marking ROIs using a priori knowledge. The proposed importance prioritised coder provides a user of the encoder with great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes. The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
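As a rough, hypothetical illustration of the importance-map idea (not the IMP-J2K or JPEG 2000 implementation), the sketch below prioritises image blocks for a limited bit budget by weighting each block's estimated distortion reduction with an importance map, so that ROI blocks are encoded first. The block grid, weighting scheme and values are invented.

```python
# Toy sketch (not JPEG 2000 / IMP-J2K): prioritise image blocks for a limited
# bit budget by scaling each block's distortion-reduction estimate with an
# importance map, so ROI blocks are encoded first / with higher fidelity.
import numpy as np

def prioritise_blocks(distortion_gain, importance_map, budget_blocks):
    """Return indices of blocks to encode, highest (gain * importance) first.

    distortion_gain : 2-D array, estimated distortion reduction per block
    importance_map  : 2-D array in [0, 1], 1 = most important (ROI)
    budget_blocks   : how many blocks fit into the bit budget
    """
    priority = distortion_gain * (0.1 + 0.9 * importance_map)  # background never fully zero
    order = np.argsort(priority, axis=None)[::-1]              # flat indices, descending
    return np.unravel_index(order[:budget_blocks], priority.shape)

rng = np.random.default_rng(1)
gain = rng.uniform(0.0, 1.0, size=(8, 8))   # hypothetical per-block distortion gains
imp = np.zeros((8, 8))
imp[2:5, 3:6] = 1.0                         # hand-marked ROI (hypothetical)

rows, cols = prioritise_blocks(gain, imp, budget_blocks=10)
print(list(zip(rows.tolist(), cols.tolist())))  # blocks chosen first cluster in the ROI
```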
635

Dépression et Stimulation Magnétique Transcrânienne : à la Recherche de biomarqueurs (Oculométrie et Excitabilité Corticale) / Depression and Transcranial Magnetic Stimulation : looking for biomarkers (Eye-Tracking and Cortical Excitability)

Beynel, Lysianne 08 December 2015 (has links)
The aim of this doctoral thesis was to identify biomarkers of mood disorders (unipolar depression and bipolar disorder). Given the etiology of these disorders (hypometabolism of the dorsolateral prefrontal cortex and deficits in GABA/glutamatergic neurotransmission), we studied two candidate biomarkers: saccadic performance and cortical excitability. Our results show that saccadic performance (notably on antisaccades) makes it possible (i) to discriminate patients with mood disorders from healthy subjects, (ii) to objectify patients' mood improvement following treatment, and (iii) to assess the short-term neuromodulatory effect of a session of repetitive transcranial magnetic stimulation (rTMS). For the cortical excitability measures, no significant differences emerged, either between patients and healthy controls or between responders and non-responders to treatment (ketamine injection or rTMS). We suggested that the lack of control over "State-Dependency" (i.e., the subjects' neurocognitive state during stimulation) could be one reason for these null results, and we validated this hypothesis by manipulating the subjects' cognitive and emotional states. The second part of this thesis examined the efficacy of rTMS as a non-pharmacological treatment for mood disorders. Although the literature reports a significant but moderate antidepressant effect of rTMS, our data did not show any superiority of active over sham treatment in the case of iTBS neurostimulation. One reason for this lack of efficacy may be methodological, such as the choice of stimulation parameters. More generally, these null results call into question the theoretical postulate of basing the study of DLPFC reactivity, and its neuromodulation, on the properties of the motor cortex. Our TMS-EEG coupling experiment, which examined the reactivity of different cortical areas, supports this concern by showing that the reactivity of the motor cortex differs from that of other cortices. TMS-EEG coupling should allow a better understanding of how rTMS neuromodulation affects the targeted cortical area, making it possible to adapt stimulation parameters to the stimulated brain region and, ultimately, to treat mood disorders more effectively.
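As a hedged illustration of the saccadic-performance biomarker discussed above, the sketch below summarises an antisaccade task with two conventional measures, direction-error rate and latency of correct antisaccades. The trial records are invented placeholders and the scoring rule is a simplification.

```python
# Minimal sketch: summarising antisaccade performance from trial records.
# A direction error is a first saccade made toward the cue instead of the
# mirror location; latency is the time from cue onset to saccade onset.
# All trial data below are invented placeholders.
from statistics import mean

# (cue_side, first_saccade_side, saccade_latency_ms) -- hypothetical trials
trials = [
    ("left", "right", 265), ("left", "left", 190),    # second trial: direction error
    ("right", "left", 280), ("right", "right", 175),  # fourth trial: direction error
    ("left", "right", 301),
]

errors = [t for t in trials if t[0] == t[1]]    # saccade went toward the cue
correct = [t for t in trials if t[0] != t[1]]   # saccade went to the mirror location

error_rate = len(errors) / len(trials)
mean_correct_latency = mean(t[2] for t in correct)

print(f"direction-error rate: {error_rate:.0%}")
print(f"mean correct antisaccade latency: {mean_correct_latency:.0f} ms")
```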
636

Méthodologie de traitement conjoint des signaux EEG et oculométriques : applications aux tâches d'exploration visuelle libre / Methodology for EEG signal and eye tracking joint processing : applications on free visual exploration tasks

Kristensen, Emmanuelle 12 June 2017 (has links)
Our work addresses the problem of temporal overlap in the estimation of evoked potentials. This problem is a major limitation for the estimation of eye-fixation-related and eye-saccade-related potentials in joint EEG and eye-tracking experiments. The usual estimation method, averaging the signal time-locked to the event of interest, assumes that a single evoked potential occurs per trial. Depending on the inter-stimulus intervals, this assumption is not always verified, and this is especially true for fixation- and saccade-related potentials, since the intervals between fixations (or saccades) are not controlled by the experimenter and can be shorter than the latencies of the potentials of interest. When the assumption is violated, the estimate of the evoked potential is biased by the overlap between successive evoked potentials. We therefore used the General Linear Model (GLM), a well-known linear regression method, to estimate the potentials evoked by eye movements while accounting for overlap. First, we introduced a Tikhonov regularization term into this model in order to improve the signal-to-noise ratio of the estimate for a small number of trials. We then compared the GLM to the ADJAR algorithm in a joint EEG and eye-tracking recording during a free visual exploration task on natural scenes. The ADJAR ("ADJAcent Response") algorithm is a classical algorithm for iterative estimation of temporal overlap, developed in 1993 by M. Woldorff. The results showed that the GLM was more flexible and robust than ADJAR for estimating eye-fixation-related potentials. Next, two GLM configurations were compared for estimating the potential evoked by stimulus onset and the potential evoked by fixations at the beginning of exploration. Both configurations accounted for overlap between evoked potentials, but one additionally distinguished the potential evoked by the first fixation of the exploration from the potential evoked by subsequent fixations. The choice of GLM configuration turned out to be a compromise between the quality of the estimated potentials and the assumptions made about the underlying cognitive processes. Finally, we conducted an extensive joint EEG and eye-tracking experiment on the exploration of static and dynamic natural emotional facial expressions, and we present the first results for the static modality. After discussing the choice of estimation method for the evoked potentials according to the impact of eye movements on their latency window, we studied the effect of the type of emotion. We found modulations of the differential EPN (Early Posterior Negativity) potential, between 230 and 350 ms after stimulus onset, and of the LPP (Late Positive Potential), between 400 and 600 ms after stimulus onset. We also observed variations in the eye-fixation-related potentials. For the LPP, a marker of conscious recognition of emotion, we showed that it is important to dissociate the information that is encoded immediately at the onset of the emotional stimulus from the information brought by the first fixation. This reveals a differentiated activation pattern for negatively and positively valenced emotional stimuli, consistent with the hypothesis that negative emotional stimuli are processed faster than positive ones.
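The overlap-corrected GLM with Tikhonov regularisation described above can be sketched, in simplified form, as a ridge regression on a design matrix of time-lagged event predictors. The code below is an illustrative toy example on synthetic data, not the thesis's pipeline; the sampling rate, epoch length, event spacing and regularisation weight are all assumptions.

```python
# Simplified toy sketch (not the thesis's pipeline): overlap correction with a
# Tikhonov-regularised (ridge) GLM. Each event contributes a set of
# time-lagged predictors, and the estimated regression weights form the
# deconvolved evoked response shared by all events.
import numpy as np

def deconvolve_erp(eeg, event_samples, n_lags, lam=1.0):
    """Ridge estimate of a single evoked response common to all events."""
    n = len(eeg)
    X = np.zeros((n, n_lags))
    for ev in event_samples:
        for lag in range(n_lags):
            if ev + lag < n:
                X[ev + lag, lag] += 1.0
    XtX = X.T @ X + lam * np.eye(n_lags)   # Tikhonov / ridge regularisation
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic data: a known kernel evoked by closely spaced, overlapping events.
rng = np.random.default_rng(2)
n_lags = 60                                          # e.g. 600 ms epochs at 100 Hz (assumed)
kernel = np.sin(np.linspace(0, np.pi, n_lags))
events = np.cumsum(rng.integers(20, 50, size=40))    # inter-event gaps shorter than the epoch
eeg = np.zeros(events[-1] + n_lags)
for ev in events:
    eeg[ev:ev + n_lags] += kernel                    # overlapping responses
eeg += rng.normal(0, 0.3, size=eeg.size)             # additive noise

estimate = deconvolve_erp(eeg, events, n_lags, lam=5.0)
print(np.corrcoef(estimate, kernel)[0, 1])           # close to 1 if overlap is handled
```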
637

Pupil Tracking and Control of a Laser Based Power System for a Vision Restoring Retinal Implant

Mailhot, Nathaniel 17 January 2019 (has links)
For elderly Canadians, the prevalence of vision impairment caused by degenerative retinal pathologies, such as age-related macular degeneration and retinitis pigmentosa, is at an occurrence rate of 14 percent, and on the rise. It has been shown that visual function can be restored by electrically stimulating intact retinal tissue with suitable signals delivered by an array of micro-electrodes. Commercial retinal implants carrying such a micro-electrode array achieve this, but to date must receive power and data over a copper wire cable passing through a permanent surgical incision in the eye wall (sclera). This project is defined by a collaboration with iBIONICS, who are developing retinal implants for the treatment of such conditions. iBIONICS has developed the Diamond Eye retinal implant, along with several technology sub-systems, to form a comprehensive and viable medical solution. Notably, the Diamond Eye system can be powered wirelessly, with no need for a permanent surgical incision. The thesis work is focused on the formulation, simulation and hardware demonstration of a powering system, mounted on a glasses frame, for a retinal implant. The system includes a Micro-Electro-Mechanical System (MEMS) mirror that directs a laser beam to the implant through the pupil opening. The work presented here is built on two main components: an iterative predictor-corrector algorithm (Kalman filter) that estimates pupil coordinates from measurements provided by an image-based eye tracking algorithm; and a misalignment compensation algorithm that maps eye pupil coordinates into mirror coordinates and compensates for misalignment caused by rigid body motions of the glasses lens mirror and the MEMS mirror with respect to the eye. Pupil tracker and misalignment compensation control performance are illustrated through simulated scenarios. The project also involves the development of a hardware prototype that is used to test algorithms and related software.
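The predictor-corrector stage mentioned above can be illustrated with a minimal constant-velocity Kalman filter over 2-D pupil coordinates. This is a generic textbook sketch, not the project's implementation; the sampling rate, noise covariances and measurements are invented placeholders.

```python
# Hedged sketch: constant-velocity Kalman filter smoothing/predicting 2-D
# pupil coordinates from a noisy image-based tracker. All parameters are
# illustrative placeholders.
import numpy as np

dt = 1.0 / 120.0                              # hypothetical 120 Hz tracker
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)    # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # only position is measured
Q = np.eye(4) * 1e-3                          # process noise (assumed)
R = np.eye(2) * 2.0                           # measurement noise (assumed)

x = np.zeros(4)            # initial state
P = np.eye(4) * 10.0       # initial uncertainty

def kalman_step(x, P, z):
    # Predict with the constant-velocity model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct with the new pupil measurement z = [px, py]
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in np.array([[100.0, 120.0], [101.2, 119.5], [102.9, 118.8]]):
    x, P = kalman_step(x, P, z)
print("estimated pupil position:", x[:2], "velocity:", x[2:])
```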
638

Computational Methods for Perceptual Training in Radiology

January 2012 (has links)
Medical images constitute a special class of images that are captured to allow diagnosis of disease, and their "correct" interpretation is vitally important. Because they are not "natural" images, radiologists must be trained to visually interpret them. This training process includes implicit perceptual learning that is gradually acquired over an extended period of exposure to medical images. This dissertation proposes novel computational methods for evaluating and facilitating perceptual training in radiologists. Part 1 of this dissertation proposes an eye-tracking-based metric for measuring the training progress of individual radiologists. Six metrics were identified as potentially useful: time to complete task, fixation count, fixation duration, consciously viewed regions, subconsciously viewed regions, and saccadic length. Part 2 of this dissertation proposes an eye-tracking-based entropy metric for tracking the rise and fall in the interest level of radiologists, as they scan chest radiographs. The results showed that entropy was significantly lower when radiologists were fixating on abnormal regions. Part 3 of this dissertation develops a method that allows extraction of Gabor-based feature vectors from corresponding anatomical regions of "normal" chest radiographs, despite anatomical variations across populations. These feature vectors are then used to develop and compare transductive and inductive computational methods for generating overlay maps that show atypical regions within test radiographs. The results show that the transductive methods produced much better maps than the inductive methods for 20 ground-truthed test radiographs. Part 4 of this dissertation uses an Extended Fuzzy C-Means (EFCM) based instance selection method to reduce the computational cost of transductive methods. The results showed that EFCM substantially reduced the computational cost without a substantial drop in performance. The dissertation then proposes a novel Variance Based Instance Selection (VBIS) method that also reduces the computational cost, but allows for incremental incorporation of new informative radiographs, as they are encountered. Part 5 of this dissertation develops and demonstrates a novel semi-transductive framework that combines the superior performance of transductive methods with the reduced computational cost of inductive methods. The results showed that the semi-transductive approach provided both an effective and efficient framework for detection of atypical regions in chest radiographs. / Dissertation/Thesis / Ph.D. Computer Science 2012
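As a hypothetical illustration of a gaze-entropy measure like the one described in Part 2, the sketch below bins fixation positions on a radiograph into a coarse grid and computes the Shannon entropy of the resulting distribution, so that lower entropy corresponds to gaze concentrated on fewer regions. The grid size, image dimensions and fixation data are invented and the metric is a simplification of whatever the dissertation actually used.

```python
# Minimal sketch of a gaze-entropy measure: bin fixation positions on an
# image into a coarse grid and compute the Shannon entropy of the fixation
# distribution. Lower entropy = gaze concentrated on few regions.
# The grid size and fixation data are invented placeholders.
import numpy as np

def gaze_entropy(fix_x, fix_y, img_w, img_h, grid=(8, 8)):
    hist, _, _ = np.histogram2d(fix_x, fix_y,
                                bins=grid, range=[[0, img_w], [0, img_h]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())   # bits

rng = np.random.default_rng(3)
# Concentrated scanpath (e.g., dwelling on a suspected abnormality)
fx1 = rng.normal(512, 30, 50)
fy1 = rng.normal(512, 30, 50)
# Dispersed scanpath covering the whole radiograph
fx2 = rng.uniform(0, 1024, 50)
fy2 = rng.uniform(0, 1024, 50)

print(gaze_entropy(fx1, fy1, 1024, 1024))   # lower entropy
print(gaze_entropy(fx2, fy2, 1024, 1024))   # higher entropy
```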
639

Comparing the meaning of the learnability principle for children and adults

Chimbo, Bester 06 1900 (has links)
The learnability principle relates to improving the usability of software, performance and productivity. It was formulated mainly for the adult user group. Children represent an important user group, but fewer guidelines exist for their educational and entertainment applications. This study compares these groups, addressing the question: “Does learnability of software interfaces have a different meaning for children and adults?”. A literature survey conducted on learnability and learning processes considered the meaning of learnability across generations. Users learning software systems were observed in a usability laboratory where eye-tracking data could also be recorded. Insights emerged from data analysis, showing different tactics when children and adults approached unfamiliar software and revealing aspects of interfaces they approached differently. The findings will help designers distinguish the varying needs of users and improve learnability. An additional subprinciple of learnability, 'engageability', is proposed. Factors that make products engaging for children are different from those engaging adults. / Computing / M. Sc. (Information Systems)
640

Etude des processus attentionnels mis en jeu lors de l'exploration de scènes naturelles : enregistrement conjoint des mouvements oculaires et de l'activité EEG / The study of attentional processes involved during the exploration of natural scenes : joint registration of eye movements and EEG activity

Queste, Hélène 27 February 2014 (has links)
In everyday life, as we look at the world around us, we constantly move our eyes. Our gaze lands successively on different locations of the visual field in order to capture visual information: our eyes stabilise on two to three different regions per second, during periods called fixations. Between two fixations, we make rapid eye movements, called saccades, to shift our gaze to another region. These eye movements are closely linked to attention. What attentional processes are involved during scene exploration? How do factors related to the scene, or to the instruction given for the exploration, modify eye movement parameters? How do these changes evolve over the course of exploration? In this thesis, we propose to jointly analyse eye-tracking and electroencephalographic (EEG) data to better understand the attentional processes involved in processing the visual information acquired during scene exploration. We study the influence of both low-level factors, i.e. the visual information contained in the scene, and high-level factors, i.e. the instruction given to observers. In a first study, we considered high-level factors by manipulating the task to be performed during scene exploration. We chose four tasks: free exploration, categorisation, visual search and spatial organisation. These tasks were chosen because they involve visual information processing of a different nature and can be ranked by level of difficulty or attentional demand. In a second study, we focused more specifically on visual search and the influence of a time constraint. Finally, in a third study, we considered low-level factors through the influence of a visual distractor disturbing free exploration. For the first two studies, we jointly recorded the eye movements and EEG signals of a large number of observers. The joint analysis of EEG and eye-tracking data takes advantage of both methods. Eye tracking gives access to eye movements and thus to the deployment of visual attention over the scene: it indicates when, and which locations of, the scene are looked at. EEG reveals, with high temporal resolution, differences in attentional processes depending on the experimental condition. We found differences between tasks in the potentials evoked by scene onset and by fixations during exploration. Furthermore, we showed a strong link between the global EEG activity observed over frontal regions and fixation durations, as well as markers of task completion in the potentials evoked by fixations of interest. The joint analysis of EEG and eye-tracking data therefore makes it possible to account for processing differences related to different attentional demands.
