1

Assessing text and web accessibility for people with autism spectrum disorder

Yaneva, Victoria. January 2016.
People with Autism Spectrum Disorder experience difficulties with reading comprehension and information processing, which affect their school performance, employability and social inclusion. The main goal of this work is to investigate new ways to evaluate and improve text and web accessibility for adults with autism.

The first stage of this research used eye-tracking technology and comprehension testing to collect data from a group of participants with autism and a control group of participants without autism. This series of studies resulted in the development of the ASD corpus, the first multimodal corpus of text and gaze data obtained from participants with and without autism. We modelled text complexity and sentence complexity using sets of features matched to the reading difficulties people with autism experience. For document-level classification, we trained a readability classifier on a generic corpus with known readability levels (easy, medium and difficult) and then used the ASD corpus to evaluate it on unseen, user-assessed data. For sentence-level classification, we used gaze data and comprehension testing, for the first time, to define a gold standard of easy and difficult sentences, which then served as training and evaluation sets. The results showed that both classifiers outperformed other measures of complexity and were more accurate predictors of comprehension in people with autism.

We then conducted a series of experiments evaluating easy-to-read documents for people with cognitive disabilities. Easy-to-read documents are written in an accessible way, following specific writing guidelines, and contain both text and images. We focused mainly on the image component of these documents, a topic significantly under-studied compared to the text component; we were also motivated by the fact that people with autism are strong visual thinkers, so inserting images could leverage this strength to compensate for their reading difficulties. We investigated the effects that images in text have on attention, comprehension, memorisation and user preferences in people with autism, examining each of these both objectively and subjectively. The results of these experiments were synthesised into a set of guidelines for improving text accessibility for people with autism.

Finally, we evaluated the accessibility of web pages with different levels of visual complexity. We provide evidence of the barriers that people with autism face when trying to find relevant information on web pages, and we explore their subjective experiences of searching the web through survey questions.
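The document-level classification described above (a classifier trained on easy/medium/difficult documents, then applied to unseen data) can be illustrated with a minimal sketch. This is not the thesis's actual model or feature set: the nearest-centroid rule, the two features (average sentence length and an average word-frequency score) and all numeric values are invented for illustration.

```python
from statistics import mean

# Toy training data: (avg. sentence length, avg. word-frequency score)
# for documents with known readability levels. Values are illustrative,
# not taken from the thesis.
TRAIN = {
    "easy":      [(12.0, 4.1), (11.3, 4.3), (13.5, 3.9)],
    "medium":    [(19.5, 3.2), (21.0, 3.0), (18.7, 3.4)],
    "difficult": [(27.8, 2.4), (30.2, 2.1), (28.5, 2.3)],
}

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return tuple(mean(dim) for dim in zip(*vectors))

CENTROIDS = {level: centroid(vs) for level, vs in TRAIN.items()}

def classify(doc_features):
    """Assign the readability level whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(doc_features, c))
    return min(CENTROIDS, key=lambda level: dist(CENTROIDS[level]))

print(classify((12.5, 4.0)))   # near the "easy" centroid
print(classify((29.0, 2.2)))   # near the "difficult" centroid
```

Evaluation on user-assessed data would then compare these predicted levels against the comprehension scores collected from participants.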
2

Posture and Space in Virtual Characters: application to Ambient Interaction and Affective Interaction

Tan, Ning. 31 January 2012.
Multimodal communication is key to smooth interactions between people. However, multimodality remains limited in current human-computer interfaces. For example, posture is less explored than other modalities, such as speech and facial expressions, yet the postural expressions of others have a strong impact on how we situate and interpret an interaction. Devices and interfaces for representing full-body interaction are available (e.g., the Kinect and full-body avatars), but systems still lack computational models relating these modalities to spatial and emotional communicative functions.

The goal of this thesis is to lay the foundation for computational models that enable better use of posture in human-computer interaction. This raises several research questions: How can we symbolically represent postures used in interpersonal communication? How can these representations inform the design of virtual characters' postural expressions? What are the requirements of a model of postural interaction for application to interactive virtual characters? How can such a model be applied in different spatial and social contexts?

In our approach, we start with the manual annotation of video corpora featuring postural expressions. We define a coding scheme for the manual annotation of posture at several levels of abstraction and for different body parts. These representations were used for analyzing the spatial and temporal relations between postures displayed by two human interlocutors during spontaneous conversations. Next, the representations were used to inform the design of postural expressions displayed by virtual characters. For studying postural expressions, we selected one particularly relevant component of emotions: the action tendency. Animations featuring action tendencies were designed for a female character and used as a social context in perception tests.

Finally, postural expressions were designed for a virtual character used in an ambient interaction system. These postural and spatial behaviors were used to help users locate real objects in an intelligent room (iRoom). The impact of these bodily expressions on the user's performance, subjective perception and behavior was evaluated in a user study.

Further studies of bodily interaction are called for, involving, for example, motion-capture techniques, integration with other spatial modalities such as gaze, and consideration of individual differences in bodily interaction.
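A coding scheme that annotates posture "at several levels of abstraction and for different body parts" can be sketched as a small data structure. The abstraction levels below (geometric description, posture label, communicative function), the field names and the overlap test are hypothetical simplifications, not the thesis's actual scheme.

```python
from dataclasses import dataclass, field

@dataclass
class BodyPartAnnotation:
    body_part: str       # e.g. "head", "trunk", "arms"
    geometry: str        # low level: joint/segment description
    posture_label: str   # mid level: conventional posture name
    function: str        # high level: communicative interpretation

@dataclass
class PostureAnnotation:
    start_s: float       # time span within the video corpus
    end_s: float
    speaker: str
    parts: list = field(default_factory=list)

# One annotated posture for speaker A, covering the trunk only.
ann = PostureAnnotation(start_s=12.4, end_s=14.1, speaker="A")
ann.parts.append(BodyPartAnnotation(
    body_part="trunk",
    geometry="lean forward ~15 deg",
    posture_label="forward lean",
    function="approach / engagement",
))

def overlaps(a, b):
    """True if two annotation spans overlap in time - a starting point
    for analyzing temporal relations between two speakers' postures."""
    return a.start_s < b.end_s and b.start_s < a.end_s
```

Spatial and temporal relations between two interlocutors' postures can then be computed by comparing annotations whose time spans overlap.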
3

Gesture and eventuality: a cross-linguistic study

Jehlička, Jakub. January 2021.
Speakers of typologically different languages choose different strategies when describing the same event, depending on the language-specific grammatical means available to them. These strategies manifest themselves, for example, in different ways of conceptualising event frames during linguistic expression, but also in non-linguistic cognition. One of the phenomena observed in this context is language-specific gesticulation accompanying spoken descriptions of events, which reflects (or manifests) the embodied conceptual schemas on which our perception of events is based. The topic of this thesis is the multimodal construal of events in Czech and English. Specifically, the thesis focuses on the links between formal features of gestures (the manner of movement and its termination) and the semantic features that constitute the so-called aspectual contours of events (the construal of the temporal and qualitative course of an event). The first part of the presented research consists of an analysis of material from Czech and English multimodal corpora. Both corpora contain recordings of spontaneous speech in interactions captured during working meetings in an academic setting. Quantitative analyses (classification trees and random forests) showed that (a) in English, the Aktionsart category of achievement is a significant predictor of the occurrence of gestures with the boundedness feature...
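The quantitative step mentioned in this abstract (tree-based models predicting a gesture feature from the event's Aktionsart category) can be reduced to a single-split "classification stump" over a toy contingency table. The data below is entirely invented; it only mirrors the reported direction of the English finding (achievements predicting bounded gestures), not the corpus counts.

```python
# Toy annotated events: (Aktionsart category, gesture carries the
# boundedness feature). Illustrative data only - the thesis used
# classification trees and random forests over real corpus annotations.
ENGLISH_EVENTS = [
    ("achievement", True), ("achievement", True), ("achievement", True),
    ("achievement", False),
    ("activity", False), ("activity", False), ("activity", True),
    ("state", False), ("state", False),
]

def bounded_rate(events, category):
    """Proportion of gestures with the boundedness feature in a category."""
    subset = [bounded for cat, bounded in events if cat == category]
    return sum(subset) / len(subset)

def predict_bounded(category):
    """One-split stump: predict a bounded gesture when the category's
    observed rate exceeds chance level."""
    return bounded_rate(ENGLISH_EVENTS, category) > 0.5

print(predict_bounded("achievement"))  # True
print(predict_bounded("state"))        # False
```

A random forest generalises this idea by aggregating many such splits over bootstrapped samples and random feature subsets.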