  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

The systematic analysis and innovative design of the essential cultural elements with Peking Opera Painted Faces (POPF)

Wang, Ding January 2016 (has links)
Peking Opera (‘Jingju’) is one of the most iconic traditional theatre forms in China, marketed as a global signifier of Chinese theatre and national identity. The research examines currently recognised illustrations of Peking Opera Painted Faces (POPF). Through new culture-based product design solutions and design-inspired visual communication solutions, the work aims to apply the semantic features of traditional POPF to modern design and to connect them with many aspects of social life. It also proposes a series of development plans spanning product design, interaction design, system design and service design in China and Western countries, proceeding from POPF and integrating other elements of traditional Chinese culture and art.
292

Real-time animation of facial wrinkles exploring modern GPUs

Reis, Clausius Duque Gonçalves 16 August 2018 (has links)
Advisors: José Mario De Martino, Harlen Costa Batagelo / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The modeling and animation of facial wrinkles are challenging tasks, owing to the variety of conformations and subtlety of detail that wrinkles can display. This work describes two methods for rendering wrinkles in real time on modern GPUs. Both methods are based on GPU shaders and a normal-mapping approach to apply wrinkles to virtual models. The first method uses influence areas described by texture maps to compute the display of wrinkles on the model, controlled by an "Activation Vector" that tells the shader how visible the wrinkles are in each influence area. The second method presents wrinkles on facial models using vertex displacement along predefined directions, supplied through a "Wrinkle Direction Vector" that indicates the direction in which the displacement of a vertex causes wrinkles to appear. / Master's degree - Computer Engineering (Mestre em Engenharia Elétrica e de Computação)
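The "Activation Vector" idea in the first method can be illustrated with a short sketch. This is not the dissertation's code: the blending rule, the example region normals and the function name `blend_wrinkle_normal` are all invented for illustration, and a real implementation would run per fragment in a GPU shader rather than in NumPy.

```python
import numpy as np

def blend_wrinkle_normal(base_normal, wrinkle_normals, activation):
    """Blend wrinkle normals into a base normal, weighted per influence area.

    base_normal     : (3,) surface normal without wrinkles
    wrinkle_normals : (k, 3) wrinkle normal for each influence area
    activation      : (k,) activation vector, 0 = no wrinkle, 1 = full wrinkle
    """
    n = np.asarray(base_normal, dtype=float)
    for wn, a in zip(np.asarray(wrinkle_normals, dtype=float), activation):
        # Lerp toward the wrinkled normal in proportion to the activation.
        n = (1.0 - a) * n + a * wn
    return n / np.linalg.norm(n)  # renormalize, as a shader would

flat = [0.0, 0.0, 1.0]                                  # unwrinkled normal
areas = [[0.3, 0.0, 0.95], [0.0, -0.3, 0.95]]           # two influence areas
print(blend_wrinkle_normal(flat, areas, [0.0, 0.0]))    # inactive: stays flat
print(blend_wrinkle_normal(flat, areas, [1.0, 0.2]))    # area 0 fully active
```

Animating a wrinkle then amounts to driving one entry of the activation vector over time while the geometry and texture maps stay fixed, which is what makes the approach cheap enough for real time.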
293

Social Agent: Facial Expression Driver for an e-Nose

Widmark, Jörgen January 2003 (has links)
This thesis shows that it is possible to drive the synthetic emotions of an interface agent with an electronic nose system developed at AASS. The e-Nose can be used for quality control, and the detected distortion from a known smell sensation prototype is interpreted as a 3D representation of emotional states, which in turn points to a set of pre-defined muscle contractions. This extension of a rule-based motivation system, which we call the Facial Expression Driver, is incorporated into a model for sensor fusion with active perception, to provide a general design for a more complex system with additional senses. To be consistent with the biologically inspired sensor fusion model, a muscle-based animated facial model was chosen as a test bed for the expression of the current emotion. The social agent's facial expressions demonstrate its tolerance of the detected distortion, in order to manipulate the user into restoring the system to functional balance. Only a few known projects use chemically based sensing to drive a face in real time, whether virtual characters or animatronics. This work may inspire a future android implementation of a head with electroactive polymers as synthetic facial muscles.
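The distortion-to-expression pipeline described above can be sketched in a few lines. Everything here is a hypothetical stand-in, not the AASS system: the distance-based mapping, the choice of a (valence, arousal, stance) 3D state space, and the muscle contraction sets are illustrative assumptions only.

```python
import math

# Pre-defined muscle contraction sets, keyed by a coarse emotional state.
# Muscle names and contraction levels are invented for illustration.
EXPRESSIONS = {
    "content":    {"zygomaticus_major": 0.6, "orbicularis_oculi": 0.3},
    "displeased": {"corrugator": 0.7, "levator_labii": 0.4},
}

def distortion_to_emotion(reading, prototype):
    """Map e-Nose distortion to a point in an assumed 3D emotional space."""
    d = math.sqrt(sum((r - p) ** 2 for r, p in zip(reading, prototype)))
    valence = max(-1.0, 1.0 - d)   # larger distortion -> lower valence
    arousal = min(1.0, d)          # larger distortion -> higher arousal
    stance = valence               # simplistic: approach when pleased
    return (valence, arousal, stance)

def emotion_to_muscles(emotion):
    """Select a pre-defined muscle contraction set from the 3D state."""
    valence = emotion[0]
    return EXPRESSIONS["content" if valence >= 0 else "displeased"]

proto = [0.2, 0.5, 0.1]    # known smell sensation prototype
sample = [0.2, 0.5, 0.1]   # an undistorted reading
print(emotion_to_muscles(distortion_to_emotion(sample, proto)))
```

The key design point carried over from the thesis is the indirection: the sensor never drives muscles directly, so additional senses can feed the same emotional state space.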
294

Design and evaluation of an avatar-mediated system for child interview training

Johansson, David January 2015 (has links)
Children are abused in different ways in their everyday lives, and there is an apparent lack of education related to these issues among the adults working around them, for example social workers or teachers. Formal courses in child interview training teach participants how to talk to children in a correct manner. Avatar mediation enables new methods of practicing this communication without having to involve a real child or role-play face-to-face with another adult. This study explored how a system could be designed to enable educational practice sessions in which a child interview expert is mediated through avatars in the form of virtual children. Prototypes were developed to evaluate the feasibility of the scenario, both regarding methods for controlling the avatar and regarding how the avatar was perceived by the participants. A clear value was found in the educational approach of using avatar mediation. From the perspective of the interactor, a circular radial interface graphically representing different emotions made it possible to control a video-based avatar while simultaneously holding a conversation with the participant. The results of the study include a proposed interface design, a description of the underlying system functionality, and suggestions on how avatar behavior can be characterized to achieve a high level of presence for the participant.
295

Multi-modal expression recognition

Chandrapati, Srivardhan January 1900 (has links)
Master of Science / Department of Mechanical and Nuclear Engineering / Akira T. Tokuhiro / Robots will eventually become common everyday items. However, before this becomes a reality, robots need to learn to be socially interactive. Since humans communicate much more information through expression than through the actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that make sure an operator is alert at all times, or for psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can occur without the person being aware of them. Recognizing these involuntary expressions provides insight into the person's thoughts and state of mind, and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and the voice. This is achieved by extracting features from each modality using signal processing techniques, and then classifying these features with artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose; this is done using image processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. The features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as mean and median and the rate of change of these properties. These features are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel.
The features extracted from the face and the voice are independently classified into emotions using two separate feed-forward artificial neural networks. The toolbox then presents the outputs of the networks from one or both modalities on a synchronized time scale. One interesting result of this research is the consistent misclassification of facial expressions between two databases, suggesting a cultural basis for the confusion. Adding the voice component was shown to partially improve classification.
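The fusion output stage described above, two independently trained feed-forward networks whose per-modality decisions are presented side by side for each synchronized time step, can be sketched as follows. The network sizes, random stand-in weights, feature dimensions and emotion labels are assumptions for illustration; the toolbox's actual networks were trained on the extracted face and speech features.

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happy", "angry", "sad"]  # illustrative label set

def feedforward(x, w1, w2):
    """One-hidden-layer network: tanh hidden layer, softmax output."""
    h = np.tanh(x @ w1)
    logits = h @ w2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Independent stand-in networks for the face modality (geometric features)
# and the voice modality (pitch, formants, mel-spectrum statistics).
w_face = (rng.normal(size=(6, 8)), rng.normal(size=(8, 4)))
w_voice = (rng.normal(size=(5, 8)), rng.normal(size=(8, 4)))

def classify_frame(face_feats, voice_feats):
    """Classify one synchronized time step, keeping the modalities separate."""
    p_face = feedforward(np.asarray(face_feats, dtype=float), *w_face)
    p_voice = feedforward(np.asarray(voice_feats, dtype=float), *w_voice)
    return {
        "face": EMOTIONS[int(p_face.argmax())],
        "voice": EMOTIONS[int(p_voice.argmax())],
    }

print(classify_frame(rng.normal(size=6), rng.normal(size=5)))
```

Keeping the two classifiers separate, rather than fusing features early, is what lets the toolbox display where the modalities agree and where they conflict.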
296

Robust recognition of facial expressions on noise degraded facial images

Sheikh, Munaf January 2011 (has links)
Magister Scientiae - MSc / We investigate the use of noise-degraded facial images in facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expressions in images with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt-and-pepper noise, and speckle noise to noiseless facial images. Classifiers were first trained on images without noise and tested on images with noise. Next, classifiers were trained on images with noise and tested on both noisy and noiseless images. Finally, classifiers were tested on images with increasing levels of salt-and-pepper noise in the test set. Our results reflected a distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than those achieved on normal, noiseless images. We attribute this effect to the Gaussian envelope component of the Gabor filters being sympathetic to Gaussian-like noise of similar variance. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images. / South Africa
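The salt-and-pepper degradation applied to the test sets can be sketched as follows. The flat gray image, the noise levels, and the helper name `salt_and_pepper` are illustrative assumptions, not the thesis's actual data or code.

```python
import numpy as np

def salt_and_pepper(image, level, rng):
    """Corrupt a `level` fraction of pixels to pure black or pure white."""
    noisy = image.copy()
    mask = rng.random(image.shape) < level   # which pixels are corrupted
    salt = rng.random(image.shape) < 0.5     # coin flip: salt vs. pepper
    noisy[mask & salt] = 1.0                 # salt: white pixels
    noisy[mask & ~salt] = 0.0                # pepper: black pixels
    return noisy

rng = np.random.default_rng(42)
clean = np.full((64, 64), 0.5)  # flat gray stand-in for a face image
for level in [0.05, 0.10, 0.20]:
    noisy = salt_and_pepper(clean, level, rng)
    corrupted = float(np.mean(noisy != 0.5))
    print(f"level={level:.2f} -> {corrupted:.1%} of pixels corrupted")
```

Sweeping `level` upward while holding the trained classifier fixed reproduces the shape of the experiment: accuracy measured as a function of the corruption fraction, to which a regression model can then be fitted.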
297

The faces of sportsmen: analysis of facial expressions and sociomotor sub-roles by selected observers

Lecroisey, Loïc 15 November 2017 (has links)
More than forty years ago, Parlebas (1970) asserted that "affectivity is the key to motor behaviour." Many studies have concentrated on the verbalization of emotions in a sporting context: before or after the game, it is the athlete who reports what he feels. Is it possible to propose an approach, complementary to existing ones, that addresses the emotions arising during motor action itself? The aim of this work is to implement an observational methodology that preserves the ecological character of sporting emotion. We approach it by deciphering emotional facial expressions and the player's actions. Building on the work of Frijda and Tcherkassof (1997), Tcherkassof (2008), Parlebas (1999), Collard (2004) and Oboeuf (2010), we decode the emotions and the sociomotor sub-roles enacted by the player using facial expressions and action tendencies.
As a first step, we created a test for the recognition of sporting facial expressions and administered it to sport-science (STAPS) students in order to measure their face-decoding ability. Sporting specialty has an impact on performance. The results suggest that specialists in cooperative activities are good decoders; specialists in opposition activities "with limited motor aggressiveness" are fairly good; fighters are poor decoders even though they recognize fear very well; and specialists in psychomotor activities are rather poor, though this has little connection to the decoding skill itself. As a second step, we used the excellent decoders selected through our test to analyze videos of athletes in motor action. Thanks to an embedded camera, the face of each player could be recorded and analyzed in two sociomotor games: seated ball ("balle assise") and the bear and his keeper ("l'ours et son gardien"). In a situational analysis grid, the trained and selected observers transcribed the facial expressions prototypical of an emotion and the sociomotor sub-roles enacted by the player (Oboeuf, 2010). This grid is an emotional ludogram (Parlebas, 1999). This second set of results leads us to think that certain actions have typical emotions. When it precedes a sub-role, anger is associated with powerful striking or shooting, while fear is associated with dodging. When it follows a repositioning or a favourable scoring interaction, joy marks a return to the aim of the game, and pain is consented to under the play contract. As a process, emotion allows the athlete to choose the motor behaviour to implement; as a result of action, emotion is information about whether the goal of the game has been reached. Taken together, our results lead us to validate this methodology. It will now be necessary to replicate this type of study in many sports.
298

Perception of Trustworthiness and Valence of Emotional Expressions in Virtual Characters

Blomqvist, Niklas January 2016 (has links)
Knowledge of how to design trustworthy virtual characters is of importance as these become more and more common interaction partners. This study takes a closer look at the relationship between valence and trustworthiness suggested by previous research, by constructing virtual characters with different non-verbal behaviours and letting participants rate them in a pre-study. A second question, how the perception of trustworthiness of virtual characters is formed, is investigated by letting participants play a trust game with life-sized virtual characters on a large 4K screen. Results indicated that valence is not necessarily a factor influencing trustworthiness, and that positive valence together with mutual gaze is not enough to produce a clearly trustworthy virtual character. Results also indicated that the perception of trustworthiness is based not solely on a virtual character's previous decisions of trust in a longer interaction but also on its non-verbal behaviour. The outcome of this study will help when constructing virtual characters in different scenarios, especially when the goal is to make them as trustworthy as possible. The study also gives insight into tools and software that can be used to create virtual characters and set up scenarios of trust.
299

Facial expressions of pain in cats : the development and validation of the Feline Grimace Scale

Cayetano Evangelista, Marina 08 1900 (has links)
Pain assessment in cats is challenging for a number of reasons, including their discreet nature and potential behavioral changes in unfamiliar and stressful situations, such as the veterinary environment. Different pain assessment instruments (i.e. pain scales) that rely on the observation of behaviors have been proposed for cats; however, most lack validity, reliability and/or generalizability testing. Additionally, adherence to their use in clinical practice is low and warrants improvement. Simple, practical and reliable tools such as grimace scales (facial-expression-based pain assessment instruments) have the potential to change this scenario. They have been developed for several species, but not for the cat. The overall aim of this thesis was to develop a novel facial-expression-based instrument for acute pain assessment in cats, the Feline Grimace Scale (FGS), and to explore its applications and limitations.
Our hypotheses were that the FGS would accurately identify pain in cats (in different conditions, such as naturally occurring and postoperative pain); that it would be valid and reliable among different raters; that it would detect the response to analgesics; and that its application in real time in the clinical context would be feasible. The FGS was developed and validated using a comprehensive psychometric approach to detect acute pain in cats. It demonstrated high discriminative ability between painful and non-painful cats, detected the response to different analgesic drugs, and correlated strongly with another pain scoring system. Furthermore, it showed good inter- and intra-rater reliability, not only among veterinarians but also among cat owners, veterinary students and animal health technicians. Real-time scoring using the FGS proved feasible. On the other hand, our results suggested that rater gender may influence pain assessment, as female raters assigned higher scores than male raters. The FGS is a valid, reliable and practical tool for both research and clinical use, in real time or via image assessment; it may be applicable to a wide range of painful conditions, by raters with different degrees of expertise, and potentially at home by cat owners. This represents substantial progress in the identification and management of feline pain, towards the highest standards of veterinary care.
300

Machine Learning Algorithms for Efficient Acquisition and Ethical Use of Personal Information in Decision Making

Tkachenko, Yegor January 2022 (has links)
Across the three chapters of this doctoral dissertation, I explore how machine learning algorithms can be used to efficiently acquire personal information and responsibly use it in decision making, in marketing and beyond. In the first chapter, I show that machine learning on consumer facial images can reveal a variety of personal information, provide evidence that such information can be profitably used by marketers, and investigate the mechanism by which facial images reveal personal information. In the second chapter, I propose a new self-supervised deep reinforcement learning approach to question prioritization and questionnaire shortening and show that it is competitive against benchmark methods. I use the proposed method to show that typical consumer data sets can be reconstructed well from relatively small, well-chosen subsets of their columns. Reconstruction quality grows logarithmically in the relative size of the column subset, implying diminishing returns on measurement. Thus, many long questionnaires could be shortened with minimal information loss, increasing consumer-research efficiency and enabling previously impossible multi-scale omnibus studies. In the third chapter, I present a method to speed up ranking under constraints for live ethical content recommendation by predicting, rather than finding exactly, the solution to the underlying time-intensive optimization problem. The approach enables solving larger-than-previously-reported constrained content-ranking problems in real time, within 50 milliseconds, as required to avoid the perception of latency by users. The approach could also help speed up general assignment and matching tasks.
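The diminishing-returns claim in the second chapter can be illustrated with a toy experiment, which is not the dissertation's self-supervised method: here a synthetic low-rank "survey" matrix is reconstructed from nested column subsets by ordinary least squares, and the explained fraction rises quickly and then flattens as columns are added. The data, sizes, and column-selection rule (just the first k columns) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, rank = 500, 40, 5
# Synthetic respondents-by-questions matrix: low-rank structure plus noise,
# mimicking correlated questionnaire answers.
X = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, d))
X += 0.1 * rng.normal(size=(n, d))

def reconstruction_quality(X, cols):
    """Fraction of the matrix's mean squared magnitude explained when every
    column is predicted, via least squares, from the selected columns."""
    S = X[:, cols]
    beta, *_ = np.linalg.lstsq(S, X, rcond=None)
    return 1.0 - np.mean((X - S @ beta) ** 2) / np.mean(X ** 2)

for k in [1, 2, 5, 10, 20]:
    print(k, round(reconstruction_quality(X, list(range(k))), 3))
```

Because the subsets are nested, quality is monotone in k, and once k reaches the latent rank the remaining columns add almost nothing: the diminishing-returns pattern that motivates questionnaire shortening.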
