1

A parts classification and coding system utilizing functional and shape characteristics in a matrix-code structure

Anderson, Ricky D. January 1992 (has links)
No description available.
2

Development and psychometric evaluation of an observational coding system measuring person-centred care in spouses of people with dementia

Ellis-Gray, S.L., Riley, G.A., Oyebode, J. 31 May 2014 (has links)
Yes / The notion of person-centered care has been important in investigating relationships between people with dementia and paid carers, and measures are available to assess this. It has been suggested that person-centered care may be a useful construct for understanding family-care relationships. However, no measures of person-centered care in this context exist. The study aimed to develop an observational measure of person-centered care for this purpose. Method: First, a coding system incorporating a range of behaviors that could be considered person-centered or non-person-centered was constructed. Examples included a code relating to whether the person with dementia was involved in planning a task, and a code relating to how the spouse responded to confusion/distress. Second, 11 couples, where one partner had dementia, were recruited and videotaped cooperating on an everyday task. The system was applied to the care-giving spouse's behaviors, labeling examples of behavior as person-centered or non-person-centered. The final step involved assessing the inter-rater reliability of the system. Results: The system captured nine categories of behavior, which were each divided into person-centered and non-person-centered types. The system had good reliability (Cohen's κ coefficients were: 0.65 for category together with the decision about whether behaviors needed to be placed in a category; 0.81 for category excluding that decision; and 0.79 for whether behaviors were person-centered or non-person-centered). Conclusions: Although the small sample size limits the implications of the results, the system is a promising quantitative measure of spousal person-centered care.
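The reliability figures above are Cohen's κ agreement coefficients between independent raters. A minimal sketch of how such a coefficient can be computed with scikit-learn; the rater labels below are invented for illustration, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Each element is one observed behaviour, coded independently by two raters
# as person-centred ("PC") or non-person-centred ("NPC"); labels are invented.
rater_a = ["PC", "PC", "NPC", "PC", "NPC", "PC", "NPC", "NPC", "PC", "PC"]
rater_b = ["PC", "PC", "NPC", "NPC", "NPC", "PC", "NPC", "PC", "PC", "PC"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```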
3

Spatio-temporal representation and analysis of facial expressions with varying intensities

Sariyanidi, Evangelos January 2017 (has links)
Facial expressions convey a wealth of information about our feelings, personality and mental state. In this thesis we seek efficient ways of representing and analysing facial expressions of varying intensities. Firstly, we analyse state-of-the-art systems by decomposing them into their fundamental components, in an effort to understand which practices are common to successful systems. Secondly, we address the problem of sequence registration, which emerged as an open issue in our analysis. The encoding of the (non-rigid) motions generated by facial expressions is facilitated when the rigid motions caused by irrelevant factors, such as camera movement, are eliminated. We propose a sequence registration framework that is based on pre-trained regressors of Gabor motion energy. Comprehensive experiments show that the proposed method achieves very high registration accuracy even under difficult illumination variations. Finally, we propose an unsupervised representation learning framework for encoding the spatio-temporal evolution of facial expressions. The proposed framework is inspired by the Facial Action Coding System (FACS), which predates computer-based analysis. FACS encodes an expression in terms of localised facial movements and assigns an intensity score for each movement. The framework we propose mimics those two properties of FACS. Specifically, we propose to learn from data a linear transformation that approximates the facial expression variation in a sequence as a weighted sum of localised basis functions, where the weight of each basis function relates to movement intensity. We show that the proposed framework provides a plausible description of facial expressions, and leads to state-of-the-art performance in recognising expressions across intensities; from fully blown expressions to micro-expressions.
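The representation step described above, approximating the expression variation in a sequence as a weighted sum of localised basis functions, can be sketched in a few lines of numpy. The basis and data here are random placeholders standing in for the learned model, purely to illustrate the weighted-sum formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis = 4096, 32   # flattened face patch; number of basis functions

# Placeholder basis: in the thesis the localised bases are learned from data.
Phi = rng.normal(size=(n_pixels, n_basis))

# Variation between an expressive frame and a neutral reference frame.
delta = rng.normal(size=n_pixels)

# Least-squares weights: each weight plays the role of a movement intensity,
# mimicking the intensity scores of FACS.
w, *_ = np.linalg.lstsq(Phi, delta, rcond=None)

# Reconstruct the variation as a weighted sum of basis functions.
print("residual norm:", np.linalg.norm(delta - Phi @ w))
```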
4

The Influence of Implementation of TW-DRGs on the Hospital Management

Liu, Hsin-Hua 31 August 2012 (has links)
Increase in the cost of medical care services has become an important issue in many countries that have implemented national health insurance, including Taiwan. In July 2002, the National Health Insurance of Taiwan implemented a global budgeting system for all hospital payments. It was hoped that such a system would keep the growth of medical expenses within an expected range. However, in the absence of reasonable payment bases and an effective utilization management and control mechanism, the outcome of implementing this new payment system has been difficult to measure. Therefore, the National Health Insurance (NHI) studied the possibility of implementing DRGs (diagnosis related groups) for all in-patient payments. To evaluate the impact of the new payment system, medical data collected one year before and one year after implementation of TW-DRGs were analyzed. The study target was the orthopaedic department of a public medical center. The tested items included average length of hospital stay, medical costs, National Health Insurance applications, and the sub-items total knee replacement (TKR) and total hip replacement (THR). For the orthopaedic department overall, our findings revealed that implementation of TW-DRGs significantly reduced the average length of stay and average medical costs. However, implementation of TW-DRGs had only a slight influence on National Health Insurance applications. Among the common surgeries TKR and THR, only the average length of stay for TKR was significantly decreased by implementation of TW-DRGs. In addition, other specific TW-DRGs-numbered items were examined for changes in the factors described above. Our results showed that implementation of TW-DRGs significantly reduced the length of stay, the medical costs, and the National Health Insurance applications for the selected TW-DRGs-numbered items. However, quality of health care did not change significantly after implementation of TW-DRGs. More complete data pools are needed for more precise analysis of the influence of the TW-DRGs system on hospital management and other medical factors in Taiwan.
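The before/after comparisons reported above are standard two-sample tests; a minimal scipy sketch of one such comparison on length-of-stay data (the samples are invented placeholders, not the study's records):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical length-of-stay samples (days), one year before and one year
# after TW-DRGs implementation; values are invented for illustration.
los_before = rng.normal(loc=9.5, scale=2.0, size=120)
los_after = rng.normal(loc=8.2, scale=1.8, size=120)

# Welch's two-sample t-test: did mean length of stay change significantly?
t_stat, p_value = stats.ttest_ind(los_before, los_after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```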
5

POKERFACE: EMOTION BASED GAME-PLAY TECHNIQUES FOR COMPUTER POKER PLAYERS

Cockerham, Lucas 01 January 2004 (has links)
Numerous algorithms/methods exist for creating computer poker players. This thesis compares and contrasts them. A set of poker agents for the system PokerFace is then introduced. A survey of the problem of facial expression recognition is included in the hope that it may be used to build a better computer poker player.
6

Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition

Vadapalli, Hima Bindu January 2011 (has links)
Philosophiae Doctor - PhD / This research investigated the application of recurrent neural networks (RNNs) for recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of action units (AUs) that are defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences was 82.75% and 7.61%, respectively, while the classification using single static images yielded a RR and FAR of 79.47% and 9.22%, respectively. The better performance by the use of image sequences can be attributed to RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve the performance provided by the use of image sequences. Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study examined the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact aid classification models to gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time. This suggests the need for a larger database of AUs, which could provide both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to the use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
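A minimal PyTorch sketch of the per-AU sequence classifier the abstract describes: one recurrent network consuming a sequence of frame features and emitting a binary AU decision. The feature dimension stands in for a Gabor-filter feature vector, and the layer sizes are illustrative assumptions rather than the thesis configuration:

```python
import torch
import torch.nn as nn

class AUSequenceClassifier(nn.Module):
    """Binary classifier for one FACS action unit over a frame sequence."""
    def __init__(self, feat_dim=120, hidden_dim=64):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        _, h_n = self.rnn(x)              # final hidden state summarises the sequence
        return torch.sigmoid(self.head(h_n[-1]))   # P(AU active)

# One hypothetical batch: 8 sequences of 30 frames of Gabor features; the
# difference-image variant would feed frames[1:] - frames[:-1] instead.
model = AUSequenceClassifier()
print(model(torch.randn(8, 30, 120)).shape)       # torch.Size([8, 1])
```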
7

Method of modelling facial action units using partial differential equations

Ugail, Hassan, Ismail, N.B. January 2016 (has links)
No / In this paper we discuss a novel method of mathematically modelling facial action units for accurate representation of human facial expressions in three dimensions. Our method utilizes the approach of the Facial Action Coding System (FACS). It is based on a boundary-value approach, which utilizes a solution to a fourth-order elliptic Partial Differential Equation (PDE) subject to a suitable set of boundary conditions. Here the PDE surface generation method for human facial expressions is utilized in order to generate a wide variety of facial expressions in an efficient and realistic way. For this purpose, we identify a set of boundary curves corresponding to the key features of the face which in turn define a given facial expression in three dimensions. The action units (AUs) relating to FACS are then efficiently represented in terms of Fourier coefficients relating to the boundary curves, which enables us to store both the face and the facial expressions in an efficient way.
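The abstract does not give the equation itself; a common choice in this line of work, assumed here, is the Bloor-Wilson style fourth-order elliptic PDE

\[
\left(\frac{\partial^{2}}{\partial u^{2}} + a^{2}\,\frac{\partial^{2}}{\partial v^{2}}\right)^{2} \mathbf{X}(u,v) = 0,
\]

whose solution, for boundary curves periodic in \(v\), separates into Fourier modes \(\mathbf{X}(u,v) = \mathbf{A}_{0}(u) + \sum_{n}\big[\mathbf{A}_{n}(u)\cos(nv) + \mathbf{B}_{n}(u)\sin(nv)\big]\). Under that assumption, storing the Fourier coefficients of the boundary curves suffices to reconstruct both the face surface and its expressions, which matches the compact AU representation the abstract describes.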
8

Facial Analysis for Real-Time Application: A Review in Visual Cues Detection Techniques

Yap, Moi Hoon, Ugail, Hassan, Zwiggelaar, R. 30 August 2012 (has links)
Yes / Emerging applications in surveillance, the entertainment industry and other human computer interaction applications have motivated the development of real-time facial analysis research covering detection, tracking and recognition. In this paper, the authors present a review of recent facial analysis for real-time applications, providing an up-to-date review of research efforts in human computing techniques in the visible domain. The main goal is to provide a comprehensive reference source for researchers involved in real-time facial analysis, regardless of specific research areas. First, the authors undertake a thorough survey and comparison of face detection techniques, discussing some prominent face detection methods presented in the literature. The performance of the techniques is evaluated using benchmark databases. Subsequently, the authors provide an overview of the state of the art in facial expression analysis and the importance of psychology inherent in facial expression analysis. Over recent decades, facial expression analysis has slowly evolved into automatic facial expression analysis due to the popularity of digital media and the maturity of computer vision. Hence, the authors review some existing automatic facial expression analysis techniques. Finally, the authors provide an exemplar for the development of a facial analysis real-time application and propose a model for facial analysis. This review shows that facial analysis for real-time application involves multi-disciplinary aspects and it is important to take all domains into account when building a reliable system.
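As a concrete anchor for the real-time pipeline such a review covers, here is a minimal OpenCV sketch of per-frame face detection on a webcam stream. The Haar-cascade detector and its parameters are one illustrative choice among the techniques surveyed, not the paper's proposed model:

```python
import cv2

# Frontal-face Haar cascade shipped with OpenCV; detector choice is illustrative.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                     # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor=1.3, minNeighbors=5 trade speed against robustness.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```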
9

Application of Automated Facial Expression Analysis and Facial Action Coding System to Assess Affective Response to Consumer Products

Clark, Elizabeth A. 17 March 2020 (has links)
Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through hedonic testing but often does not capture affectual response as it pertains to product-generated (PG) and product-associated (PA) emotions. This research sought to assess the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing literature was performed that focused on the Facial Action Coding System (FACS) and its use to investigate consumer affect and characterize human emotional response to product-based stimuli, which revealed inconsistencies in how FACS is carried out as well as how emotional response is inferred from Action Unit (AU) activation. Automatic Facial Expression Analysis (AFEA), which automates FACS and translates the facial muscular positioning into the basic universal emotions, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). The ST-IAT did not yield significant PA emotions for packaged milk (p>0.05), but correspondence analysis of CATA data produced PA emotion insights including term selection based on arousal and underlying approach/withdrawal motivation related to packaging pigmentation. Time series statistical analysis of AFEA data provided increased insights into significant emotion expression, but the lack of difference (p>0.05) between certain expressed emotions that share no related AUs, such as happy and disgust, indicates that AFEA software may not be identifying AUs and determining emotion-based inferences in agreement with FACS. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine if the sensory-engaging nature of control stimuli could impact time series statistical analysis of AFEA data. When compared against the limited sensory engaging (blank screen) control, contempt, happy, and angry were expressed more intensely (p<0.025) and with greater incidence for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (albeit fleeting) expressions of happy, sad, or contempt for the sensory engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken while comparing the control and experimental stimuli in time series analysis, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product-related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or facial actions related to sensory perception for consumer products such as foods and beverages.
/ Doctor of Philosophy / Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through consumer testing but often does not capture consumer response as it pertains to emotions such as those experienced while directly interacting with a product (i.e., product-generated emotions, PG) or those attributed to the product based on external information such as branding, marketing, nutrition, social environment, physical environment, memories, etc. (product-associated emotions, PA). This research investigated the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing scientific literature was performed that focused on the Facial Action Coding System (FACS), a process used to determine facially expressed emotion from facial muscular positioning, and its use to investigate consumer behavior and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as how emotional response is determined from facial muscular activation. Automatic Facial Expression Analysis (AFEA), which automates FACS, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). While the ST-IAT did not yield significant results (p>0.05), CATA data illustrated term selection based on motivation to approach and/or withdraw from milk based on packaging color. Additionally, the lack of difference (p>0.05) between emotions that do not produce similar facial muscle activations, such as happy and disgust, indicates that AFEA software may not be determining emotions as outlined in the established FACS procedures. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine if the nature of the control stimulus itself could impact the analysis of AFEA data. When compared against the limited sensory engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and consistently for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (although fleeting) expressions of happy, sad, or contempt for the sensory engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product-related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or facial actions related to sensory perception for sensory engaging consumer products such as foods and beverages.
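The time series statistical analysis referred to above compares emotion-intensity traces point by point between a stimulus and a control. A minimal sketch of that idea, applying a paired test at each time point at the study's alpha of 0.025 (the traces are invented placeholders, not AFEA output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical happy-intensity traces: 48 participants x 100 time points,
# for a milk stimulus and a blank-screen control; values are invented.
milk = rng.beta(2, 6, size=(48, 100)) + 0.05
control = rng.beta(2, 8, size=(48, 100))

alpha = 0.025
hits = 0
for t in range(milk.shape[1]):
    # Paired Wilcoxon signed-rank test across participants at time point t.
    _, p = stats.wilcoxon(milk[:, t], control[:, t])
    hits += p < alpha

print(f"{hits} of {milk.shape[1]} time points significant at alpha={alpha}")
```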
10

FACIAL EXPRESSION DISCRIMINATES BETWEEN PAIN AND ABSENCE OF PAIN IN THE NON-COMMUNICATIVE, CRITICALLY ILL ADULT PATIENT

Arif-Rahu, Mamoona 03 December 2010 (has links)
BACKGROUND: Pain assessment is a significant challenge in critically ill adults, especially those unable to communicate their pain level. At present there is no universally accepted pain scale for use in the non-communicative (cognitively impaired, sedated, paralyzed or mechanically ventilated) patient. Facial expressions are considered among the most reflexive and automatic nonverbal indices of pain. The facial expression components of pain assessment tools include a variety of facial descriptors (wincing, frowning, grimacing, smile/relaxed) with inconsistent pain intensity ratings or checklists of behaviors. The lack of consistent facial expression description and quantification of pain intensity makes standardization of pain evaluation difficult. Although use of facial expression is an important behavioral measure of pain intensity, precise and accurate methods for interpreting the specific facial actions of pain in critically ill adults have not been identified. OBJECTIVE: The three specific aims of this prospective study were: 1) to describe facial actions during pain in non-communicative critically ill patients; 2) to determine facial actions that characterize the pain response; 3) to describe the effect of patient factors on facial actions during the pain response. DESIGN: Descriptive, correlational, comparative. SETTING: Two adult critical care units (Surgical Trauma ICU-STICU and Medical Respiratory ICU-MRICU) at an urban university medical center. SUBJECTS: A convenience sample of 50 non-communicative critically ill intubated, mechanically ventilated adult patients. Fifty-two percent were male, 48% Euro-American, with mean age 52.5 years (±17.2). METHODS: Subjects were video-recorded while in an intensive care unit at rest (baseline phase) and during endotracheal suctioning (procedure phase). Observer-based pain ratings were gathered using the Behavioral Pain Scale (BPS). Facial actions were coded from video using the Facial Action Coding System (FACS) over a 30-second time period for each phase. Pain scores were calculated from FACS action units (AUs) following the Prkachin and Solomon metric. RESULTS: Fourteen facial action units were associated with the pain response and found to occur more frequently during the noxious procedure than during baseline. These included brow raiser, brow lower, orbit tightening, eye closure, head movements, mouth opening, nose wrinkling, nasal dilatation, and chin raise. The sum of intensity of the 14 AUs was correlated with the BPS (r=0.70, P<0.0001) and with the facial expression component of the BPS (r=0.58, P<0.0001) during the procedure. A stepwise multivariate analysis identified 5 pain-relevant facial AUs [brow raiser (AU 1), brow lower (AU 4), nose wrinkling (AU 9), head turned right (AU 52), and head turned up (AU 53)] that accounted for 71% of the variance (Adjusted R2=0.682) in the pain response (F=21.99, df=49, P<0.0001). The FACS pain intensity score based on the 5 pain-relevant facial AUs was associated with the BPS (r=0.77, P<0.0001) and with the facial expression component of the BPS (r=0.63, P<0.0001) during the procedure. Patient factors (e.g., age, gender, race, diagnosis, duration of endotracheal intubation, ICU length of stay, analgesic and sedative drug usage, and severity of illness) were not associated with the FACS pain intensity score. CONCLUSIONS: Overall, the FACS pain intensity score composed of inner brow raiser, brow lower, nose wrinkle, and head movements reflected a general pain action in our study.
Upper facial expression provides an important behavioral measure of pain which may be used in the clinical evaluation of pain in non-communicative critically ill patients. These results provide preliminary evidence that the Facial Action Coding System can discriminate a patient’s acute pain experience.
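A minimal sketch of an additive AU-based pain score over the five pain-relevant action units the study identified. The simple summation and the intensity values are illustrative assumptions; the study itself follows the Prkachin and Solomon metric:

```python
# FACS codes intensity on an ordinal A-E scale, mapped here to 1-5 (0 = absent).
# The five pain-relevant AUs are those reported in the abstract; the observed
# intensities below are invented for illustration.
PAIN_RELEVANT_AUS = {
    1: "inner brow raiser",
    4: "brow lower",
    9: "nose wrinkler",
    52: "head turn right",
    53: "head up",
}

observed = {1: 3, 4: 4, 9: 2, 52: 1, 53: 0}    # AU number -> coded intensity

# Additive pain-intensity score over the pain-relevant AUs.
pain_score = sum(observed.get(au, 0) for au in PAIN_RELEVANT_AUS)
print(f"FACS pain intensity score: {pain_score}")
```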
