1

Pose-Invariant Face Recognition Using Real and Virtual Views

Beymer, David 28 March 1996 (has links)
The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.
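The view-based matching idea described above can be sketched as a nearest-neighbour search over all stored views of all people. This is only an illustrative toy: the 2-D feature vectors, the `recognize` helper, and the Euclidean metric are assumptions for the sketch, not the thesis's actual image-template representation.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(probe, database):
    """Match a probe vector against every stored view of every person
    and return the identity whose view is closest."""
    best_id, best_dist = None, float("inf")
    for person_id, views in database.items():
        for view in views:
            d = euclidean(probe, view)
            if d < best_dist:
                best_id, best_dist = person_id, d
    return best_id

# Each person is stored as several pose views; for case (ii) all but
# one of these would be synthetic "virtual views".
db = {
    "alice": [[0.0, 0.0], [0.2, 0.1]],   # frontal + rotated view
    "bob":   [[1.0, 1.0], [0.9, 1.2]],
}
print(recognize([0.15, 0.05], db))  # → alice
```

Because a single database entry holds many views, recognition across viewpoints reduces to ordinary nearest-neighbour matching; virtual views simply populate the entry when only one real view exists.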
2

A Neuro-Fuzzy Approach for Multiple Human Objects Segmentation

Huang, Li-Ming 03 September 2003 (has links)
We propose a novel approach for segmenting human objects, including face and body, in image sequences. In modern video coding techniques, e.g., MPEG-4 and MPEG-7, human objects are usually the main focus for multimedia applications. We combine temporal and spatial information and employ a neuro-fuzzy mechanism to extract human objects. A fuzzy self-clustering technique is used to divide the video frame into a set of segments. The existence of a face within a candidate face region is confirmed by searching for possible constellations of eye-mouth triangles and verifying each eye-mouth combination against a predefined template. Rough foreground and background regions are then formed based on a combination of multiple criteria. Finally, human objects in the base frame and the remaining frames of the video stream are precisely located by a fuzzy neural network trained with an SVD-based hybrid learning algorithm. Through experiments, we compare our system with two other approaches; the results show that our system detects face locations and extracts human objects more accurately.
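The combination of temporal and spatial cues under a fuzzy mechanism can be illustrated with a single toy rule. The Gaussian membership functions, the specific cues ("moving", "skin-like"), and all centers and spreads below are invented for illustration; the paper's actual system learns its clusters and network weights from data.

```python
import math

def gaussian_membership(x, center, sigma):
    """Degree (0..1) to which x belongs to a fuzzy set with the
    given center and spread."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def foreground_degree(motion, skin_likeness):
    """Toy fuzzy rule: a pixel is foreground if it is MOVING (temporal
    cue) AND SKIN-LIKE (spatial cue).  Fuzzy AND is the minimum of the
    two membership degrees."""
    moving = gaussian_membership(motion, center=1.0, sigma=0.5)
    skin = gaussian_membership(skin_likeness, center=1.0, sigma=0.5)
    return min(moving, skin)

# A moving skin-like pixel scores higher than a static one.
print(foreground_degree(0.9, 0.8) > foreground_degree(0.1, 0.8))  # → True
```

The point of the fuzzy formulation is that both cues contribute graded evidence rather than hard yes/no decisions, so borderline pixels can be resolved later by the trained network.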
3

A method for location based search for enhancing facial feature design

Al-dahoud, Ahmad, Ugail, Hassan January 2016 (has links)
In this paper we present a new method for accurate real-time facial feature detection. Our method is based on local feature detection and enhancement. Previous work in this area, such as that of Viola and Jones, requires examining the face as a whole. Consequently, such approaches have an increased chance of reporting false hits. Furthermore, such algorithms demand greater processing power, which makes them especially unattractive for real-time applications. Through our recent work, we have devised a method to identify the face in real-time images and divide it into regions of interest (ROI). Firstly, based on a face detection algorithm, we identify the face and divide it into four main regions. Then, we undertake a local search within those ROI, looking for specific facial features. This enables us to locate the desired facial features more efficiently and accurately. We have tested our approach using the Extended Cohn-Kanade (CK+) facial expression database. The results show that applying the ROI approach yields a relatively low false positive rate and provides a marked gain in overall computational efficiency. In particular, we show that our method achieves a 4-fold increase in accuracy when compared to existing algorithms for facial feature detection.
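The divide-into-four-regions step can be sketched as simple bounding-box arithmetic. The particular proportions and region names below are assumptions for illustration; the paper derives its own region layout from the face detection step.

```python
def face_regions(face_box):
    """Split a detected face bounding box (x, y, w, h) into four
    illustrative ROIs so that each facial feature is searched only
    within its own region, not across the whole frame."""
    x, y, w, h = face_box
    return {
        "left_eye":  (x,              y,              w // 2, h // 2),
        "right_eye": (x + w // 2,     y,              w // 2, h // 2),
        "nose":      (x + w // 4,     y + h // 2,     w // 2, h // 4),
        "mouth":     (x + w // 4,     y + 3 * h // 4, w // 2, h // 4),
    }

rois = face_regions((0, 0, 100, 100))
print(rois["mouth"])  # → (25, 75, 50, 25)
```

Restricting each detector to its ROI is what yields both the lower false-positive rate and the computational saving: each search covers a quarter or less of the face instead of the full frame.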
4

A computational framework for measuring the facial emotional expressions

Ugail, Hassan, Aldahoud, Ahmad A.A. 20 March 2022 (has links)
The purpose of this chapter is to discuss and present a computational framework for detecting and analysing facial expressions efficiently. The approach is to identify the face and estimate the regions of facial features of interest using the optical flow algorithm. Once the regions and their dynamics are computed, a rule-based system can be utilised for classification. Using this framework, we show how it is possible to accurately identify and classify facial expressions, match them with FACS coding, and infer the underlying basic emotions in real time.
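The rule-based classification stage might look like the toy rule base below, which maps per-region motion (as would be estimated by optical flow) to a basic emotion. The region names, thresholds, and rules are invented for illustration; the chapter's actual rules are grounded in FACS action units.

```python
def classify_expression(region_motion):
    """Toy rule base: map signed per-region motion magnitudes
    (positive = upward movement, from optical flow) to a basic
    emotion label."""
    mouth_up = region_motion.get("mouth_corners", 0.0) > 0.5
    brow_up = region_motion.get("brows", 0.0) > 0.5
    brow_down = region_motion.get("brows", 0.0) < -0.5
    if mouth_up:
        return "happiness"   # raised mouth corners
    if brow_up:
        return "surprise"    # raised brows
    if brow_down:
        return "anger"       # lowered brows
    return "neutral"

print(classify_expression({"mouth_corners": 0.8, "brows": 0.0}))  # → happiness
```

Because each rule reads only a handful of region dynamics, the classification step is cheap enough to run per frame, which is what makes real-time inference feasible.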
5

Automatic Dynamic Tracking of Horse Head Facial Features in Video Using Image Processing Techniques

Doyle, Jason Emory 11 February 2019 (has links)
The wellbeing of horses is very important to their caretakers, trainers, veterinarians, and owners. This thesis describes the development of a non-invasive image processing technique that allows for automatic detection and tracking of horse head and ear motion in videos or camera feeds, both of which may provide indications of horse pain, stress, or wellbeing. The algorithm developed here can automatically detect and track head and ear motion in videos of a standing horse. Results demonstrating the technique for nine different horses are presented, where the data from the algorithm are used to plot absolute motion, velocity, and acceleration versus time for the head and ear motion of a variety of horses and ponies. Two-dimensional plotting of x and y motion over time is also presented. Additionally, results of pilot work on eye detection in light-colored horses are presented. Detection of pain in horses is particularly difficult because they are prey animals with mechanisms to disguise their pain, and these instincts may be particularly strong in the presence of an unknown human, such as a veterinarian. The current state of the art for detecting pain in horses primarily involves invasive methods, such as heart rate monitors around the body, drawing blood for cortisol levels, and pressing on painful areas to elicit a response, although some work has been done in which humans subjectively sort and score photographs according to a "horse grimace scale." The algorithms developed in this thesis are the first the author is aware of that exploit proven image processing approaches from other applications to develop an automatic tool for detecting and tracking horse facial indicators.
The algorithms were implemented in the common open-source tools Python and OpenCV, and standard image processing approaches, including Canny edge detection; Hue, Saturation, Value (HSV) color filtering; and contour tracking, were utilized in algorithm development. The work in this thesis provides the foundational development of a non-invasive and automatic detection and tracking program for horse head and ear motion, including demonstration of the viability of this approach using videos of standing horses. This approach lays the groundwork for robust tool development for monitoring horses non-invasively and without the required presence of humans in such applications as post-operative monitoring, foaling, and evaluation of performance horses in competition and/or training, as well as for providing data for research on animal welfare, among other scenarios. / MS / There are many things that cause pain in horses, including improper saddle fit, inadequate care, laminitis, lameness, surgery, and colic, among others. The well-being of horses is very important to their caretakers, trainers, veterinarians, and owners. Monitoring the well-being of horses is particularly important in many scenarios, including post-operative monitoring, therapeutic riding programs, racing, dressage, and rodeo events, among numerous other activities. This thesis describes the development of a computer-based image processing technique for automatic detection and tracking of both horse head and ear motion in videos of standing horses. The techniques developed here allow for the collection of data on head and ear motion over time, facilitating analysis of motions that may provide reliable indicators of horse pain, stress, or well-being. Knowing whether a horse is in pain is difficult because horses are prey animals with mechanisms that minimize the display of pain so that they do not become easy targets for predators.
Computer vision systems, like the one developed here, may be well suited to detect subtle changes in horse behavior for detecting distress in horses. The ability to remotely and automatically monitor horse well-being by exploiting computer-based image-processing techniques will create significant opportunities to improve the welfare of horses. The work presented here looks at the first use of image-processing approaches to detect and track facial features of standing horses in videos to help facilitate the development of automatic pain and stress detection in videos and camera feeds for owners, veterinarians, and horse-related organizations, among others.
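The velocity and acceleration plots described in this abstract come from differencing tracked positions over time. A minimal sketch of that step, assuming uniformly sampled positions (the function name and sampling interval are illustrative, not the thesis's actual code):

```python
def finite_differences(positions, dt=1.0):
    """Given tracked head (or ear) positions sampled every dt seconds,
    return per-step velocity and acceleration via first differences."""
    velocity = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accel = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
    return velocity, accel

# Positions from four consecutive frames of a tracked head.
v, a = finite_differences([0.0, 1.0, 3.0, 6.0])
print(v)  # → [1.0, 2.0, 3.0]
print(a)  # → [1.0, 1.0]
```

Applying this separately to the x and y coordinates of the tracked features gives exactly the motion-, velocity-, and acceleration-versus-time data the thesis plots for each horse.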
