About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

The drama of Senkatana by S.M. Mofokeng : a speech act exploration

Kock, L. J. (Levina Jacoba) 11 1900 (has links)
The drama of Senkatana by S.M. Mofokeng is analysed by applying the principles of speech act theory, taking as its basis the explication of the theory by Bach and Harnish (1979). The socio-cultural context of the play has as its starting point the realm of myth and legend. From here, all categories of relationships within the protagonist/antagonist encounter unfold, as do the opposing sets of contextual beliefs the characters rely on; these are primarily responsible for the growing conflict in the drama. Enhancing the mythical character of the play is the absorbing role played by the diboni, acting as seers, as prophets and as an additional 'authorial voice'. Their speech acts, and those of the other characters, reflect this and more; they operate within a substantiated sign-system which provides a framework for evaluating each semiotic act along the locutionary, illocutionary and perlocutionary dimensions of meaning. Chapter 1 comprises a historical survey of studies on speech act theory and includes a brief summary of the position of the theory in the field of semiotics. The micro speech act analysis of the play is facilitated by dividing the text into smaller action units (summarised in Addendum 1). Chapter 2, containing the greater part of the exposition, begins the narration of the folktale and offers a clear rendering of the epic rise of the hero. Chapter 3 portrays the rise and progress of the antagonists challenging the hero, coupled with intensifying anxiety among the protagonists. Chapter 4 provides a vivid overview of how the values of the hero triumph over those of the antagonist despite the physical slaying of the hero. Chapter 5 offers a graphic outline of how the macro speech act is accomplished in the play. It is shown how an investigation of the speech act profiles of the characters, coupled with an evaluation of illocutionary tactics and illocutionary/perlocutionary dynamics, communicates significant information pertaining to characterisation. A graph illustrating the rise and fall of micro speech acts within the larger macro speech act is provided in Addendum 2. Suggestions are made regarding future research on literary texts. / African Languages / D.Lit. et Phil. (African Languages)
2

Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks

Chu, Wen-Sheng 01 January 2017 (has links)
Automatic analysis of facial actions (AFA) can reveal a person's emotion, intention, and physical state, and make possible a wide range of applications. To enable reliable, valid, and efficient AFA, this thesis investigates automatic analysis of facial actions through transductive, supervised and unsupervised learning. Supervised learning for AFA is challenging, in part, because of individual differences among persons in face shape and appearance and variation in video acquisition and context. To improve generalizability across persons, we propose a transductive framework, the Selective Transfer Machine (STM), which personalizes generic classifiers through joint sample reweighting and classifier learning. By personalizing classifiers, STM offers improved generalization to unknown persons. As an extension, we develop a variant of STM for use when partially labeled data are available. Additional challenges for supervised learning include learning an optimal representation for classification, variation in base rates of action units (AUs), correlation between AUs, and temporal consistency. While these challenges could be partly accommodated with an SVM or STM, a more powerful alternative is afforded by an end-to-end supervised framework (i.e., deep learning). We propose a convolutional network with long short-term memory (LSTM) and multi-label sampling strategies. We compared SVM, STM and deep learning approaches with respect to AU occurrence and intensity within and between the BP4D+ [282] and GFT [93] databases, which consist of around 0.6 million annotated frames. Annotated video is not always possible or desirable. We introduce an unsupervised Branch-and-Bound framework to discover correlated facial actions in un-annotated video. We term this approach Common Event Discovery (CED). We evaluate CED in video and motion capture data. CED achieved moderate convergence with supervised approaches and enabled discovery of novel patterns occult to supervised approaches.
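The core move in STM, reweighting the generic training pool toward the unlabeled test subject before fitting the classifier, can be illustrated with a minimal sketch. The single-pass Gaussian weighting around the test subject's feature mean below is a simplification (the actual STM optimizes the sample weights and the classifier jointly), and the function name, bandwidth heuristic and toy data are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def personalize_classifier(X_train, y_train, X_test_unlabeled, sigma=None):
    """Simplified STM-style personalization: weight the generic training
    samples by their similarity to the (unlabeled) test subject, then fit a
    weighted SVM. The real STM optimizes weights and classifier jointly."""
    mu_test = X_test_unlabeled.mean(axis=0)
    sq_dist = ((X_train - mu_test) ** 2).sum(axis=1)
    if sigma is None:                                   # median heuristic for the bandwidth
        sigma = np.sqrt(np.median(sq_dist))
    weights = np.exp(-sq_dist / (2.0 * sigma ** 2))     # samples near the test subject count more
    weights *= len(weights) / weights.sum()             # keep the average weight at 1
    clf = SVC(kernel="linear")
    clf.fit(X_train, y_train, sample_weight=weights)
    return clf

# Toy usage with random stand-in "appearance features" for one unseen subject.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 16)), rng.integers(0, 2, size=200)
X_te = rng.normal(loc=0.3, size=(50, 16))
clf = personalize_classifier(X_tr, y_tr, X_te)
print(clf.predict(X_te[:5]))                            # per-frame AU on/off guesses
```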
3

3D face analysis : landmarking, expression recognition and beyond

Zhao, Xi 13 September 2010 (has links) (PDF)
This Ph.D. thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and, in particular, is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge of the location of face landmarks, which is required by many face analysis methods such as face segmentation and feature extraction, used for instance for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, and finally to propose an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network whose structure describes the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the state with the maximum belief. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
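As a rough, hypothetical illustration of the PCA backbone of such a morphable model, the sketch below learns a statistical model of 3D landmark configurations and synthesizes new instances by varying the model parameters. It deliberately omits SFAM's local texture and geometry terms, and the names and toy data are stand-ins rather than the thesis's code.

```python
import numpy as np

def fit_landmark_model(landmark_sets, n_modes=4):
    """Toy statistical model of 3D landmark configurations (PCA via SVD),
    in the spirit of the global part of a morphable model such as SFAM.
    landmark_sets: array of shape (n_faces, n_landmarks, 3)."""
    X = landmark_sets.reshape(len(landmark_sets), -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                        # principal modes of variation
    stddev = S[:n_modes] / np.sqrt(len(X) - 1)  # spread captured by each mode
    return mean, modes, stddev

def synthesize(mean, modes, stddev, coeffs):
    """Generate a new landmark configuration by varying the model parameters
    (coeffs are expressed in standard deviations per mode)."""
    flat = mean + (coeffs * stddev) @ modes
    return flat.reshape(-1, 3)

# Hypothetical training data: 40 faces with 15 landmarks each.
rng = np.random.default_rng(1)
faces = rng.normal(size=(40, 15, 3))
mean, modes, std = fit_landmark_model(faces)
new_face = synthesize(mean, modes, std, coeffs=np.array([1.0, -0.5, 0.0, 0.3]))
print(new_face.shape)                           # (15, 3)
```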
4

CONTENT UNDERSTANDING FOR IMAGING SYSTEMS: PAGE CLASSIFICATION, FADING DETECTION, EMOTION RECOGNITION, AND SALIENCY BASED IMAGE QUALITY ASSESSMENT AND CROPPING

Shaoyuan Xu (9116033) 12 October 2021 (has links)
This thesis consists of four sections, each related to one of four research projects.

The first section is about page classification. In this section, we extend our previous approach, which could classify three classes of pages (Text, Picture and Mixed), to five classes: Text, Picture, Mixed, Receipt and Highlight. We first design new features to define the two new classes and then use a DAG-SVM to classify the five classes of pages (a minimal sketch of the DAG-SVM decision procedure follows this abstract). Based on the results, our algorithm performs well and is able to classify all five page types.

The second section is about fading detection. In this section, we develop an algorithm that can automatically detect fading in both text and non-text regions. For text regions, we first perform global alignment and then local alignment. After that, we create a 3D color node system, assign each connected component to a color node, and compute the color difference between each raster page connected component and the corresponding scanned page connected component. For non-text regions, after global alignment, we divide the page into "super pixels" and compute the color difference between the raster super pixels and the testing super pixels. Compared with the traditional method that uses a diagnostic page, our method is more efficient and effective.

The third section is about CNN-based emotion recognition. In this section, we build our own emotion recognition classification and regression system from scratch, including data set collection, data preprocessing, model training and testing. We extend the model to a real-time video application, where it performs accurately and smoothly. We also try another approach to the emotion recognition problem using facial action unit detection. By extracting facial landmark features and adopting an SVM training framework, the facial action unit approach achieves accuracy comparable to the CNN-based approach.

The fourth section is about saliency-based image quality assessment and cropping. In this section, we propose a method for image quality assessment and recomposition with the help of image saliency information. Saliency is the remarkable region of an image that attracts people's attention easily and naturally. Through everyday examples as well as our experimental results, we demonstrate that utilizing saliency information is beneficial for both tasks.
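The DAG-SVM mentioned in the first section can be sketched as follows: one binary SVM is trained per class pair, and prediction walks a decision DAG that eliminates one candidate class per node. The random features and names below are placeholders for the page features designed in the thesis; this is a minimal illustration of the decision procedure, not the thesis's classifier.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

CLASSES = ["Text", "Picture", "Mixed", "Receipt", "Highlight"]

def train_pairwise(X, y):
    """One binary SVM per class pair: these are the nodes of the decision DAG."""
    models = {}
    for a, b in combinations(range(len(CLASSES)), 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
    return models

def dag_predict(models, x):
    """DAG-SVM decision: compare the first and last remaining candidates and
    drop the losing class at each node until one class is left."""
    candidates = list(range(len(CLASSES)))
    while len(candidates) > 1:
        a, b = candidates[0], candidates[-1]
        winner = models[(a, b)].predict(x.reshape(1, -1))[0]
        candidates.remove(b if winner == a else a)
    return CLASSES[candidates[0]]

# Toy usage with random stand-in page features and labels.
rng = np.random.default_rng(2)
X, y = rng.normal(size=(250, 10)), rng.integers(0, len(CLASSES), size=250)
models = train_pairwise(X, y)
print(dag_predict(models, X[0]))
```

A DAG evaluation needs only four pairwise decisions per page for five classes, which is what makes it attractive compared with voting over all ten class pairs.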
5

Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology

Kim, Leejin 19 November 2013 (has links)
The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to effectively construct an anatomy-driven 3D virtual face customization and action model. In order to gain a broad perspective on all aspects of the face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action models were designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, the “kot-mi-nam” (flower-like beautiful guy), was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is surveyed in its textual, visual, and contextual aspects, revealing the gender and sexuality fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and facial attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), both based on the analysis of human anatomy, to achieve cost-effective yet realistic facial animation without using 3D scanned data. In the experiments, results for facial customization by gender, race, fat, and age showed that BCFC achieved enhanced performance of 25.20% compared to the existing program Facegen, and 44.12% compared to Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend-shape technique, enhancing the facial area for happiness and anger expressions by 2.87% and 0.03% per second, respectively. In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%) and Caucasian (66.42-66.40%). Culturally specific images can be misinterpreted in different cultures due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of their makers and can also be interpreted differently by viewers in different cultures.
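For readers unfamiliar with the primitive a Bezier-curve-based customization system is built on, the sketch below simply samples one cubic Bezier curve, the kind of smooth contour (for example a jawline) whose shape is controlled by moving a handful of control points. The specific points and names here are illustrative assumptions, not drawn from the dissertation.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve: the basic primitive a curve-based facial
    customization system can use for a smooth contour such as a jawline.
    Moving the control points reshapes the contour without any 3D scan."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# Illustrative 3D control points; widening p1 and p2 widens the contour.
p0, p1, p2, p3 = (np.array(p, dtype=float)
                  for p in ([0, 0, 0], [1, 2, 0.2], [3, 2, 0.2], [4, 0, 0]))
contour = cubic_bezier(p0, p1, p2, p3)
print(contour.shape)   # (50, 3)
```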
