1.
Living with Lipoedema: Designing Objects for the Body and Mind through First-Person Methods. Hettich, Sophia Anna Maria, January 2022.
This project follows a Research through Design approach and, through autobiographical design, explores the question of how Interaction Design can support Lipoedema patients by helping them cope with their body image in everyday life. Building on the concept of self-management for people with chronic medical conditions and a conscious connection between body and mind, I created a set of artefacts, each tied to a specific interaction that gives it a more meaningful purpose. Through living with these three artefacts, I was able to identify tensions revolving around themes of self-acceptance, discomfort, and vulnerability. These are important when designing not only for people diagnosed with Lipoedema, but also for any user group struggling with similar issues, such as body image.
2.
Multi-Object Modelling of the Face. Salam, Hanan, 20 December 2013.
The work in this thesis deals with the problem of face modelling for the purpose of facial analysis. In the first part of this thesis, we propose the Multi-Object Facial Actions Active Appearance Model (AAM). The specificity of the proposed model is that different parts of the face are treated as separate objects and eye movements (gaze and blink) are extrinsically parameterized; this increases the generalization capability of the classical AAM. The second part of the thesis concerns the use of face modelling in the context of expression and emotion recognition. First, we propose a system for the recognition of facial expressions in the form of Action Units (AUs). Our contribution concerns mainly the extraction of AAM features, for which we opted to use local models. The second system concerns multi-modal recognition of four continuously valued affective dimensions. We propose a system that fuses audio, context, and visual features and outputs the four emotional dimensions. We contribute to this system by finding a precise localization of the facial features. Accordingly, we propose the Multi-Local AAM, which extrinsically combines a global model of the face with a local model of the mouth through the computation of projection errors on the same global AAM.
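The selection rule this abstract describes (combine a global face fit and a local mouth fit by comparing their projection errors on the same global AAM) can be sketched as follows. The thesis itself is not reproduced here, so the function names, the PCA-subspace formulation of the appearance model, and the "pick the lower-error fit" rule are illustrative assumptions, not the author's actual implementation:

```python
import numpy as np

def projection_error(basis, mean, sample):
    """Residual norm after projecting an appearance sample onto an AAM's
    PCA subspace. basis: (d, k) orthonormal columns; mean, sample: (d,).
    The residual is the part of the sample the model cannot explain."""
    centered = sample - mean
    coeffs = basis.T @ centered            # coordinates in the subspace
    residual = centered - basis @ coeffs   # unexplained component
    return float(np.linalg.norm(residual))

def fuse_mouth_fit(global_fit, local_fit, basis, mean):
    """Hypothetical fusion step: keep whichever mouth appearance estimate
    the *global* AAM reconstructs with the smaller projection error."""
    e_global = projection_error(basis, mean, global_fit)
    e_local = projection_error(basis, mean, local_fit)
    return ("global", global_fit) if e_global <= e_local else ("local", local_fit)
```

Under this sketch, an estimate lying inside the model's subspace has near-zero projection error and is always preferred over one with an unexplained component; the real system would compute these errors from warped image textures rather than raw vectors.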
3.
Vision-Based Techniques for Cognitive and Motor Skill Assessments. Floyd, Beatrice K., 24 August 2012.
No description available.