151

Determinants of Effort and Associated Cardiovascular Response to a Behavioral Restraint Challenge

Agtarap, Stephanie
This study directly tested implications of motivation intensity theory for effort to restrain a behavioral urge or impulse (i.e., restraint intensity). Two factors were manipulated, the magnitude of an urge and the importance of successfully resisting it, with cardiovascular (CV) responses related to active coping measured. Male and female undergraduate students were presented with a mildly or strongly evocative film clip with instructions to refrain from showing any facial response. Success was made more or less important through coordinated manipulations of outcome expectancy, ego-involvement, and performance assessment. As expected, systolic blood pressure responses assessed during the performance period were proportional to the evocativeness of the clip when importance was high, but were low regardless of evocativeness when importance was low. These findings support a new conceptual analysis concerned with the determinants and CV correlates of restraint intensity. Implications of the study and connections with the current self-regulatory literature are discussed.
152

Facial Expression Decoding Deficits Among Psychiatric Patients: Attention, Encoding, and Processing

Hoag, David Nelson
Psychiatric patients, particularly schizophrenics, tend to be less accurate decoders of facial expressions than normal controls. The involvement of three basic information-processing stages in this deficit was investigated: attention, encoding, and processing. Psychiatric inpatients, classified by diagnosis and severity of pathology, and nonpatient controls were administered seven facial cue decoding tasks. Orientation of attention was assessed through the rate of diversion of gaze from the stimuli. Encoding was assessed using simple tasks requiring one contrast of two facial stimuli and selection from two response alternatives. Processing was assessed using a more complex task requiring several contrasts between stimulus faces and selection from numerous response alternatives. Residualized error scores were used to statistically control for effects of attention on task performance. Processing task performance was evaluated using ANCOVA to control for effects of encoding. Schizophrenics were characterized by a generalized information-processing deficit, while affective disorder subjects evidenced impairment only in attending. Attention impairments in both groups were related to severity of psychopathology. Problems in encoding and processing were related only to a schizophrenic diagnosis. Schizophrenics' decoding deficits appeared attributable to a general visuospatial discrimination impairment rather than to repression-sensitization defenses or the affective connotation of cues. Adequacy of interpersonal functioning was associated with measures of attending and processing but not encoding. The measures of encoding, however, may have lacked adequate discriminating power due to low difficulty.
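The residualized-scores control described above can be sketched as a simple linear regression step: regress task performance on the attention measure and keep the residuals, which removes the linear effect of attention from the scores. The data below are synthetic and purely illustrative, not the study's.

```python
# Sketch of residualized scores: regress performance on attention and
# keep the residuals, removing attention's linear contribution.
import numpy as np

rng = np.random.default_rng(0)
attention = rng.normal(0, 1, 100)                       # attention measure
performance = 0.6 * attention + rng.normal(0, 1, 100)   # partly attention-driven

# Fit performance = slope * attention + intercept, then subtract the fit.
slope, intercept = np.polyfit(attention, performance, 1)
residualized = performance - (slope * attention + intercept)

# By construction, the residuals are uncorrelated with attention.
corr = np.corrcoef(attention, residualized)[0, 1]
```

Group comparisons on `residualized` then reflect performance differences that attention alone cannot explain.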
153

Loughborough University Spontaneous Expression Database and baseline results for automatic emotion recognition

Aina, Segun, January 2015
The study of facial expressions in humans dates back to the 19th century, and the study of the emotions that these facial expressions portray dates back even further. It is a natural part of non-verbal communication for humans to convey messages through facial expressions, either consciously or subconsciously, and it is equally routine for other humans to recognize these expressions and deduce the underlying emotions they represent. Over two decades ago, following technological advances, particularly in the area of image processing, research began into the use of machines to recognize facial expressions from images with the aim of inferring the corresponding emotion. Given a previously unseen test sample, the supervised learning problem is to accurately determine the facial expression class to which the test sample belongs, using the known class memberships of the images in a set of training images. The solution to this problem, building an effective classifier to recognize the facial expression, hinges on the availability of representative training data. To date, much of the research in Facial Expression Recognition (FER) is still based on posed (acted) facial expression databases, which are often exaggerated and therefore not representative of real-life affective displays; as such, there is a need for more publicly accessible spontaneous databases that are well labelled. This thesis therefore reports on the development of the newly collected Loughborough University Spontaneous Expression Database (LUSED), designed to bolster the development of new recognition systems and to provide a benchmark with more natural expression classes than most existing databases against which researchers can compare results. To collect the database, an experiment was set up in which volunteers were discreetly videotaped while they watched a selection of emotion-inducing video clips.
The utility of the new LUSED dataset is validated using both traditional and more recent pattern recognition techniques. (1) Baseline results are presented using Principal Component Analysis (PCA), Fisher Linear Discriminant Analysis (FLDA) and their kernel variants, Kernel Principal Component Analysis (KPCA) and Kernel Fisher Discriminant Analysis (KFDA), combined with a nearest neighbour-based classifier. These results are compared to performance on an existing natural expression database, the Natural Visible and Infrared Expression (NVIE) database. A scheme for the recognition of encrypted facial expression images is also presented. (2) Benchmark results are presented by combining PCA, FLDA, KPCA and KFDA with a Sparse Representation-based Classifier (SRC). A maximum accuracy of 68% was obtained when recognizing five expression classes, which compares favourably with the known maximum for a natural database: around 70%, obtained on NVIE when recognizing only three classes.
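The baseline pipeline described above (PCA for dimensionality reduction, FLDA for a class-discriminative projection, then a nearest-neighbour classifier) can be sketched as follows. The data are synthetic stand-ins for face images; the class counts, image size, and component numbers are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal PCA -> FLDA -> 1-NN pipeline on synthetic "image" vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, n_per_class, n_pixels = 5, 40, 64 * 64   # assumed sizes
# Each class gets a distinct mean intensity, standing in for expression classes.
X = np.vstack([rng.normal(c, 1.0, (n_per_class, n_pixels)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# PCA reduces pixel dimensionality; FLDA projects onto class-discriminative
# axes; a 1-nearest-neighbour classifier labels samples in that space.
clf = make_pipeline(PCA(n_components=50),
                    LinearDiscriminantAnalysis(),
                    KNeighborsClassifier(n_neighbors=1))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)   # fraction of correctly classified test faces
```

The kernel variants (KPCA, KFDA) would replace the linear projections with kernelized ones; the overall structure of the pipeline stays the same.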
154

Evolving Credible Facial Expressions with Interactive GAs

Smith, Nancy T., 01 January 2012
A major focus of research in computer graphics is the modeling and animation of realistic human faces. Modeling and animation of facial expressions is a very difficult task, requiring extensive manual manipulation by computer artists. Our primary hypothesis was that the use of machine learning techniques could reduce the manual labor by providing some automation to the process. The goal of this dissertation was to determine the effectiveness of using an interactive genetic algorithm (IGA) to generate realistic variations in facial expressions. An IGA's effectiveness is measured by satisfaction with the end results, including acceptable levels of user fatigue. User fatigue was measured by the rate of successful convergence, defined as achieving a sufficient fitness level as determined by the user. Upon convergence, the solution with the highest fitness value was saved for later evaluation by participants with questionnaires. The participants also rated animations that were manually created by the user for comparison. The animation of our IGA is performed by interpolating between successive face models, also known as blendshapes. The position of each blendshape's vertices is determined by a set of blendshape controls. Chromosomes map to animation sequences, where genes correspond to blendshapes. The manually created animations were also produced by manipulating the blendshape control values of successive blendshapes. Due to user fatigue, IGAs typically use a small population with the user evaluating each individual. This is a serious limitation since there must be a sufficient number of building blocks in the initial population to converge to a good solution. One method that has been used to address this problem in the music domain is a surrogate fitness function, which serves as a filter to present a small subpopulation to the user for subjective evaluation. 
Our secondary hypothesis was that an IGA for the high-dimensional problem of facial animation would benefit from a large population, made possible by using a neural network (NN) as a surrogate fitness function. The NN assigns a fitness value to every individual in the population, and the phenotypes of the highest-rated individuals are presented to receive subjective fitness values from the user. This is a unique approach to the problem of automatic generation of facial animation. Experiments were conducted for each of the six emotions, using the optimal parameters that had been discovered. The average convergence rate was 85%. The quality of the NNs, as measured by their true positive and false positive rates, showed evidence of a correlation with convergence rates. The animations with the highest subjective fitness from the final set of experiments were saved for participant evaluation. The participants gave the IGA animations an average credibility rating of 69% and the manual animations an average credibility rating of 65%. The participants preferred the IGA animations to the manual animations an average of 54% of the time. The results of these experiments indicated that an IGA is effective at generating realistic variations in facial expressions that are comparable to manually created ones. Moreover, experiments that varied population size indicated that a larger population results in a higher convergence rate.
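The surrogate-assisted loop described above can be sketched as follows. A small MLP regressor plays the neural-network surrogate, and a hidden target vector simulates the human rater; the population size, number of blendshape controls, and GA operators are illustrative assumptions, not the dissertation's actual settings.

```python
# Surrogate-assisted interactive GA sketch: the surrogate scores the whole
# population, only the top few phenotypes are "shown" for subjective rating.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_controls = 12            # assumed number of blendshape controls per chromosome
pop_size, n_show = 200, 8  # large population; only 8 candidates rated per round
target = rng.uniform(0, 1, n_controls)  # hidden stand-in for the user's ideal

def user_fitness(x):
    # Simulated subjective rating: closer to the target scores higher.
    return 1.0 / (1.0 + np.linalg.norm(x - target))

pop = rng.uniform(0, 1, (pop_size, n_controls))
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
rated_X, rated_y = [], []

for gen in range(15):
    if rated_X:
        # Retrain the surrogate on all user-rated individuals so far.
        surrogate.fit(np.array(rated_X), np.array(rated_y))
        scores = surrogate.predict(pop)
    else:
        scores = rng.uniform(size=pop_size)  # no model yet: random filter
    # Present only the surrogate's top candidates for subjective rating.
    shown = np.argsort(scores)[-n_show:]
    for i in shown:
        rated_X.append(pop[i])
        rated_y.append(user_fitness(pop[i]))
    # Breed the next generation from the best user-rated individuals.
    elite = pop[shown[np.argsort([user_fitness(pop[i]) for i in shown])[-4:]]]
    parents = elite[rng.integers(0, 4, (pop_size, 2))]
    mask = rng.random((pop_size, n_controls)) < 0.5   # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop = pop + rng.normal(0, 0.05, pop.shape)        # Gaussian mutation
    pop = np.clip(pop, 0, 1)                          # keep controls in range

best = max(rated_X, key=user_fitness)   # highest subjectively rated individual
```

The key design point is the filter: the user rates only `n_show` individuals per generation regardless of population size, which is what makes a large population feasible despite user fatigue.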
155

Face processing in persons with and without Alzheimer's disease

Unknown Date
This study aimed to understand the differences in strength or coordination of brain regions involved in processing faces in the presence of aging and/or progressing neuropathology (Alzheimer's disease). To this end, Experiment 1 evaluated age-related differences in basic face processing and the effects of familiarity on face processing. Overall, face processing in younger (22-35 yrs) and older participants (63-83 yrs) recruited a broadly distributed network of brain activity, but the distribution of activity varied depending on the age of the individual. The younger population utilized regions of the occipitotemporal, medial frontal and posterior parietal cortices, while the older population recruited a concentrated occipitotemporal network. The younger participants were also sensitive to the type of face presented, as Novel faces were associated with greater mean BOLD activity than either the Famous or Relatives faces. Interestingly, Relatives faces were associated with greater mean BOLD activity in more regions of the brain than found in any other analysis in Exp. 1, spanning the inferior frontal, medial temporal and inferior parietal cortices. In contrast, the older adults were not sensitive to the type of face presented, which could reflect a difference in cognitive strategies used by the older population when presented with this type of face stimuli. Experiment 2 evaluated face processing and familiarity in face processing, and also emphasized the interactive roles that autobiographical processing and memory recency play in processing familiar faces in mature adults (MA; 45-55 yrs), older adults (OA; 70-92 yrs) and patients suffering from Alzheimer's disease (AD; 70-92 yrs). MA participants had greater mean BOLD activity values in more regions of the brain than observed in either of the older adult populations, spanning regions of the medial frontal, medial temporal, inferior parietal and occipital cortices.
OA, in contrast, utilized a concentrated frontal and medial temporal network, and AD participants had the greatest deficit in BOLD activity overall. Age-related differences in processing faces, in processing the type of face presented, in autobiographical information processing and in processing the recency of a memory were noted, as well as differences due to the deleterious effects of AD. By Jeanna Winchester. Thesis (Ph.D.), Florida Atlantic University, 2009.
156

The Happiness/Anger Superiority Effect: the influence of the gender of perceiver and poser in facial expression recognition

Unknown Date
Two experiments were conducted to investigate the impact of poser and perceiver gender on the Happiness/Anger Superiority effect and the Female Advantage in facial expression recognition. Happy, neutral, and angry facial expressions were presented on male and female faces under Continuous Flash Suppression (CFS). Participants of both genders indicated when the presented faces broke through the suppression. In the second experiment, angry and happy expressions were reduced to 50% intensity. At full intensity, there was no difference in the reaction time for female neutral and angry faces, but male faces showed a difference in detection between all expressions. Across experiments, male faces were detected later than female faces for all facial expressions. Happiness was generally detected faster than anger, except on female faces at 50% intensity. No main effect of perceiver gender emerged. It was concluded that happiness is superior to anger under CFS and that poser gender affects facial expression recognition. By Sophia Peaco. Thesis (M.A.), Florida Atlantic University, 2013.
157

Efeitos do escitalopram sobre a identificação de expressões faciais / Effects of escitalopram on the processing of emotional faces.

Alves Neto, Wolme Cardoso, 16 May 2008
ALVES NETO, W.C. Effects of escitalopram on the processing of emotional faces. Ribeirão Preto, SP: Faculty of Medicine of Ribeirão Preto, University of São Paulo; 2008. Selective serotonin reuptake inhibitors (SSRIs) have been used successfully in the treatment of various psychiatric disorders. Their clinical efficacy is attributed to an enhancement of serotonergic neurotransmission, but little is known about the neuropsychological mechanisms underlying this process. Several lines of evidence suggest that serotonin is involved in, among other functions, the regulation of social behavior, learning and memory processes, and emotional processing. The recognition of basic emotions in facial expressions is a valuable paradigm for studying emotional processing, since such expressions are condensed, uniform stimuli of great relevance to social functioning. The aim of the study was to assess the effects of acute oral administration of the SSRI escitalopram on the recognition of facial expressions of basic emotions. Twelve healthy male volunteers each completed two experimental sessions in a randomized, balanced-order, double-blind, placebo-controlled crossover design.
An oral dose of 10 mg of escitalopram was administered 3 hours before they performed an emotion recognition task with six basic emotions (anger, fear, sadness, disgust, happiness and surprise) plus a neutral expression. The faces were digitally morphed between 10% and 100% of each emotional standard, creating a gradient in 10% steps. Subjective mood and anxiety states were recorded throughout the task, and performance was measured by accuracy (the number of correct answers divided by the total number of stimuli presented). In general, escitalopram interfered with the recognition of all the emotions tested except fear. Specifically, it facilitated the recognition of sadness and impaired the identification of happiness. When the gender of the faces was analyzed, this effect was seen for male but not female faces, for which escitalopram did not affect the recognition of sadness and improved the recognition of happiness. In addition, escitalopram improved the recognition of angry and disgusted faces when administered in the second session and impaired the identification of surprised faces at intermediate intensity levels. It also had a global positive effect on task performance when administered in the second session. The results indicate a serotonergic modulation of the recognition of emotional faces and of the recall of previously learned material.
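The accuracy measure described above (correct answers divided by total stimuli) can be illustrated with a toy breakdown by morph intensity; the trial data here are fabricated purely for the example, not the study's.

```python
# Toy accuracy-by-intensity computation for a morph-gradient recognition task.
from collections import defaultdict

# (intensity %, true emotion, participant response) triples; hypothetical trials.
trials = [
    (10, "sadness", "neutral"),   (50, "sadness", "sadness"),
    (100, "sadness", "sadness"),  (10, "happiness", "happiness"),
    (50, "happiness", "neutral"), (100, "happiness", "happiness"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for intensity, truth, response in trials:
    totals[intensity] += 1
    hits[intensity] += (truth == response)   # count correct identifications

# Accuracy per morph intensity, and overall accuracy across all stimuli.
accuracy = {i: hits[i] / totals[i] for i in sorted(totals)}
overall = sum(hits.values()) / sum(totals.values())
```

Plotting `accuracy` against intensity is what reveals effects confined to intermediate gradations, like the impairment for surprised faces reported above.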
158

"I distinctly remember you!": an investigation of memory for faces with unusual features

Unknown Date
Many errors in recognition are made because various features of a stimulus are attended to inefficiently. Those features are not bound together and can then be confused with other information. One of the most common types of these errors is the conjunction error, which occurs when mismatched features of memories are combined to form a composite memory. This study tests how likely conjunction errors, along with other recognition errors, are to occur when participants watch videos of people with and without unusual facial features performing actions, tested after a one-week time lag. It was hypothesized that participants would falsely recognize actresses in the conjunction item condition more often than in the other conditions. The likelihood of falsely recognizing a new person increased when she was presented with an unusual feature, but conjunction items overall were most often falsely recognized. By Autumn Keif. Thesis (M.A.), Florida Atlantic University, 2012.
159

Human expression and intention via motion analysis: learning, recognition and system implementation

January 2004
By Ka Keung Caramon Lee. Thesis (Ph.D.), Chinese University of Hong Kong, March 29, 2004. Includes bibliographical references (p. 188-210). Abstracts in English and Chinese.
160

Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers

Weber, Marlene, January 2018
The automotive industry is facing significant technological and sociological shifts, calling for an improved understanding of driver and passenger behaviours, emotions and needs, and a transformation of the traditional automotive design process. This research takes a human-centred approach to automotive research, investigating users' emotional states during automobile driving, with the goal of developing a framework for automotive emotion research and thus enabling the integration of technological advances into the driving environment. A literature review of human emotion and emotion in an automotive context was conducted, followed by three driving studies investigating emotion through Facial-Expression Analysis (FEA). An exploratory study investigated whether emotion elicitation can be applied in driving simulators and whether FEA can detect the emotions triggered. The results supported the applicability of emotion elicitation in a lab-based environment to trigger emotional responses, and of FEA to detect them. An on-road driving study was then conducted in a natural setting to investigate whether the natures and frequencies of emotion events could be automatically measured, and whether triggers could be assigned to them. Overall, 730 emotion events were detected during a total driving time of 440 minutes, and event triggers were assigned to 92% of the emotion events. A similar second on-road study was conducted in a partially controlled setting on a planned road circuit. In 840 minutes, 1947 emotion events were measured, and triggers were successfully assigned to 94% of those. The differences in the natures, frequencies and causes of emotions on different road types were investigated; comparison of emotion events across roads demonstrated substantial variation in the natures, frequencies and triggers of emotions by road type. The results showed that emotions play a significant role during automobile driving.
The possibility of assigning triggers can be used to create a better understanding of causes of emotions in the automotive habitat. Both on-road studies were compared through statistical analysis to investigate influences of the different study settings. Certain conditions (e.g. driving setting, social interaction) showed significant influence on emotions during driving. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment through which systems and factors causing positive and negative emotional effects can be identified. The methodology and results can be applied to design and research processes, allowing the identification of issues and opportunities in current automotive design to address challenges of future automotive design. Suggested future research includes the investigation of a wider variety of road types and situations, testing with different automobiles and the combination of multiple measurement techniques.
