1 |
Robust Classification of Head Pose from Low Resolution Images Under Various Lighting Conditions. Khaki, Mohammad January 2017 (has links)
Companies have long been interested in gauging customers' level of interest in their advertisements. By analyzing the gaze direction of individuals viewing a public advertisement, we can infer their level of engagement. Head pose detection allows us to deduce pertinent information about gaze direction. Using video sensors, machine learning methods, and image processing techniques, we can automatically collect and mine information pertaining to the head pose of people viewing advertisements.
We propose a method for the coarse classification of head pose from low-resolution images in crowded scenes captured through a single camera and under different lighting conditions. Our method improves on the technique described in [1]; we introduce several modifications to the latter scheme to improve classification accuracy. First, we devise a mechanism that uses a cascade of three binary Support Vector Machine (SVM) classifiers instead of a single multi-class classifier. Second, we employ a bigger dataset for training by combining eight publicly available databases. Third, we use two sets of appearance features, Similarity Distance Map (SDM) and Gabor Wavelet (GW), to train the SVM classifiers. The scheme is tested with cross-validation on the dataset and on videos we collected in a lab experiment. We found a significant improvement in the results achieved by the proposed method over existing schemes, especially for video pose classification. The results show that the proposed method is more robust under varying lighting conditions and facial expressions, and in the presence of facial accessories, compared to [1].
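A cascade of binary SVMs of the kind described above can be sketched as follows. The four-way class layout (front, left, right, back), the RBF kernels, and the routing order are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Hedged sketch: a cascade of three binary SVM classifiers for coarse
# head-pose classes. Class layout 0=front, 1=left, 2=right, 3=back and
# the routing order are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

class PoseCascade:
    def __init__(self):
        self.front_vs_rest = SVC(kernel="rbf")   # stage 1
        self.back_vs_sides = SVC(kernel="rbf")   # stage 2
        self.left_vs_right = SVC(kernel="rbf")   # stage 3

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # Each stage is trained only on the samples that reach it.
        self.front_vs_rest.fit(X, (y == 0).astype(int))
        rest = y != 0
        self.back_vs_sides.fit(X[rest], (y[rest] == 3).astype(int))
        sides = (y == 1) | (y == 2)
        self.left_vs_right.fit(X[sides], (y[sides] == 1).astype(int))
        return self

    def predict(self, X):
        X = np.asarray(X)
        out = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            x = x.reshape(1, -1)
            if self.front_vs_rest.predict(x)[0] == 1:
                out[i] = 0
            elif self.back_vs_sides.predict(x)[0] == 1:
                out[i] = 3
            else:
                out[i] = 1 if self.left_vs_right.predict(x)[0] == 1 else 2
        return out
```

A cascade lets each binary classifier specialize on an easier sub-problem than a single multi-class SVM faces, which is one plausible reason for the accuracy gain reported.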
|
2 |
STUDENT ATTENTIVENESS CLASSIFICATION USING GEOMETRIC MOMENTS AIDED POSTURE ESTIMATION. Gowri Kurthkoti Sridhara Rao (14191886) 30 November 2022 (has links)
<p> Body posture provides substantial information about a person's current state of mind. This idea is used to implement a system that gives lecturers feedback on how engaging a class has been by identifying the attentiveness levels of students. This is carried out using posture information extracted with the help of MediaPipe. A novel method of extracting features from the keypoints returned by MediaPipe is proposed. Classification using geometric-moments-aided features performs better than classification using generic distance and angle features. In order to extend single-person pose classification to multi-person pose classification, object detection is implemented. Feedback covering the entire lecture is generated and provided as the output of the system. </p>
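One way to compute moment-based features from 2D pose keypoints (such as the normalized (x, y) landmark pairs MediaPipe Pose returns) is sketched below. The thesis does not specify its exact feature set here; central moments up to second order are one plausible, translation-invariant choice.

```python
# Hedged sketch: central geometric moments of a 2D keypoint set.
# Treating the keypoints as a point cloud is an illustrative assumption.
import numpy as np

def central_moments(keypoints, max_order=2):
    """keypoints: (N, 2) array of (x, y). Returns central moments mu_pq
    for all p + q <= max_order, computed about the keypoint centroid
    (hence invariant to translation of the whole pose)."""
    pts = np.asarray(keypoints, dtype=float)
    cx, cy = pts.mean(axis=0)
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    feats = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats[(p, q)] = float(np.sum(dx**p * dy**q))
    return feats
```

Second-order moments capture the spread and tilt of the body configuration, which is the kind of shape summary that distance and angle features only capture pairwise.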
|
3 |
Dynamic Headpose Classification and Video Retargeting with Human Attention. Anoop, K R January 2015 (has links) (PDF)
Over the years, extensive research has been devoted to the study of people's head pose due to its relevance in security, human-computer interaction and advertising, as well as in cognitive, neuro and behavioural psychology. One of the main goals of this thesis is to estimate people's 3D head orientation as they move freely in naturalistic settings such as parties, supermarkets etc. Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult because the captured faces are low-resolution and blurred in appearance. Labelling sufficient training data for head-pose estimation in such settings is also difficult, owing to the motion of targets and the large possible range of head orientations. Domain adaptation approaches are useful for transferring knowledge from the training source to test target data with different attributes, minimizing target-data labelling effort in the process. This thesis examines the use of transfer learning for efficient multi-view head pose classification. The relationship between head pose and facial appearance is first learned from many labelled examples in the source data. Domain adaptation techniques are then employed to transfer this knowledge to the target data. Three challenging situations are addressed: (I) the ranges of head poses in the source and target images differ, (II) the source images capture a stationary person while the target images capture a moving person whose facial appearance varies with perspective and scale, and (III) a combination of (I) and (II). All proposed transfer learning methods are thoroughly tested and benchmarked on DPOSE, a newly compiled dataset for head-pose classification.
This thesis also introduces Covariance Profiles (CPs), a novel signature representation describing object sets through covariance descriptors. A CP is well suited to representing a set of similarly related objects: it posits that the covariance matrices pertaining to a specific entity share the same eigen-structure. Such a representation is not only compact but also eliminates the need to store all the training data. Experiments using CPs are presented on images as well as videos, for applications such as object-track clustering and head-pose estimation.
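The shared eigen-structure assumption behind CPs can be illustrated numerically: a basis recovered from one covariance matrix of an entity should nearly diagonalize the other covariances of the same entity. The off-diagonal-energy score below is an illustrative choice, not the thesis's actual formulation.

```python
# Hedged illustration of the Covariance Profile premise: covariances of the
# same entity share eigenvectors, differing only in eigenvalues.
import numpy as np

def off_diagonal_energy(cov, basis):
    """Relative off-diagonal mass of cov expressed in the given orthonormal
    basis; near 0 means the basis (almost) diagonalizes cov."""
    d = basis.T @ cov @ basis
    off = d - np.diag(np.diag(d))
    return float(np.linalg.norm(off) / np.linalg.norm(d))

# Two covariances built on the same eigenvectors ("same profile"),
# with different eigenvalue spectra.
shared, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
cov_a = shared @ np.diag([5.0, 2.0, 1.0]) @ shared.T
cov_b = shared @ np.diag([9.0, 3.0, 0.5]) @ shared.T
profile_basis = np.linalg.eigh(cov_a)[1]  # eigenvectors of one member
```

Because the basis need only be stored once per entity, this is also where the compactness claim comes from: new covariances are summarized by their eigenvalues in the shared basis.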
In the second part, human gaze is explored as a cue for interest-point detection in video retargeting. Regions in video streams that attract human interest contribute significantly to human understanding of the video. Predicting salient and informative Regions of Interest (ROIs) through a sequence of eye movements is a challenging problem. This thesis proposes an interactive human-in-the-loop framework to model eye movements and predict visual saliency in yet-unseen frames. Eye-tracking data and video content are used to model visual attention in a manner that accounts for temporal discontinuities due to sudden eye movements, noise and behavioural artefacts. Gaze buffering, a technique for eye-gaze analysis, and its fusion with content-based features are proposed. The method uses eye-gaze information along with bottom-up and top-down saliency to boost the importance of image pixels. The resulting robust visual saliency prediction is instantiated for content-aware video retargeting.
|
4 |
Pose Classification of Horse Behavior in Video : A deep learning approach for classifying equine poses based on 2D keypoints / Pose-klassificering av Hästbeteende i Video : En djupinlärningsmetod för klassificering av hästposer baserat på 2D-nyckelpunkter. Söderström, Michaela January 2021 (links)
This thesis investigates whether computer vision can be a useful tool in interpreting the behaviours of monitored horses. In recent years, research in the field of computer vision has primarily focused on people, where pose estimation and action recognition are popular research areas. The thesis presents a pose classification network whose input features are estimated 2D keypoints of horse body parts. The network classifies three poses: 'Head above the withers', 'Head aligned with the withers' and 'Head below the withers'. The 2D keypoints are obtained using DeepLabCut applied to raw video surveillance data of a single horse. The estimated keypoints are then fed into a multi-layer perceptron, which is trained to classify the three classes. The network shows promising results with good performance. When we spot-checked random samples of predicted poses against the ground truth, we found label noise: some of the labelled data consisted of false ground-truth samples. Despite this, the conclusion is that satisfactory results are achieved with our method. In particular, the keypoint estimates were sufficient for the model to successfully classify a held-out set of poses.
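The keypoints-to-classifier pipeline described above can be sketched with a single illustrative feature. The feature choice (vertical offset of the head keypoint relative to the withers keypoint), the synthetic data, and the network size are assumptions; the thesis's actual multi-layer perceptron consumes the full DeepLabCut keypoint set.

```python
# Hedged sketch: classify 'head above / aligned with / below the withers'
# from a keypoint-derived feature using a small MLP. Feature definition,
# class offsets and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def head_height_feature(head_xy, withers_xy):
    """Vertical offset of head vs. withers keypoint (image coordinates:
    negative means the head is above the withers)."""
    return [head_xy[1] - withers_xy[1]]

# Synthetic training data standing in for DeepLabCut keypoint estimates.
rng = np.random.default_rng(0)
offsets = [-0.3, 0.0, 0.3]            # above / aligned / below
X = np.vstack([o + 0.05 * rng.normal(size=(30, 1)) for o in offsets])
y = np.repeat([0, 1, 2], 30)

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    random_state=0, max_iter=1000).fit(X, y)
```

With real data, each frame's feature vector would come from `head_height_feature` applied to the tracked keypoints, and label noise of the kind reported in the thesis would show up as overlapping feature distributions.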
|
5 |
Non-linear dimensionality reduction and sparse representation models for facial analysis / Réduction de la dimension non-linéaire et modèles de la représentation parcimonieuse pour l'analyse du visage. Zhang, Yuyao 20 February 2014 (links)
Face analysis techniques commonly require a proper representation of images, often by means of dimensionality reduction leading to embedded manifolds, which aims at capturing the relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art in embedded manifold models. Then, we introduce a novel non-linear embedding method, Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models (AAMs), in order to model face appearance under variable illumination. The proposed algorithm clearly outperforms the traditional linear PCA transform, both in capturing the salient features generated by different illuminations and in reconstructing the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first is Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model, Incremental Principal Component Analysis (Incremental PCA), which tends to decrease the intra-class redundancy that may hurt classification performance while keeping the inter-class redundancy that is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which integrates the classification criterion into the dictionary-learning process; this training goal is achieved with the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, based respectively on a linear transform and on a sparse representation model. Finally, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN), based on sparse representation over coupled dictionaries. The dictionary pairs are jointly optimized from pairs of normally and irregularly illuminated face images. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modelling data with complex distributions: the GMM adapts each component model to a part of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
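The classification rule common to sparse-representation frameworks like those above can be sketched in simplified form: assign a sample to the class whose sub-dictionary reconstructs it best. The least-squares coding below is a stand-in for proper sparse coding, and the hand-built dictionaries replace the Incremental-PCA- or K-SVD-learned ones.

```python
# Hedged sketch: residual-based classification with per-class dictionaries.
# Least-squares coding is a simplification of sparse coding, and the
# dictionaries here are illustrative, not learned by Incremental PCA or K-SVD.
import numpy as np

def residual_classify(sample, class_dictionaries):
    """Return the index of the class whose dictionary yields the smallest
    reconstruction residual for the sample."""
    residuals = []
    for D in class_dictionaries:
        coef, *_ = np.linalg.lstsq(D, sample, rcond=None)
        residuals.append(float(np.linalg.norm(sample - D @ coef)))
    return int(np.argmin(residuals))
```

In the actual frameworks, the dictionaries are trained so that samples of one class are poorly reconstructed by the other classes' atoms, which is exactly what makes the residual a discriminative quantity.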
|
6 |
Human Pose and Action Recognition using Negative Space Analysis. Janse Van Vuuren, Michaella 12 1900 (links)
This thesis proposes a novel approach to extracting pose information from image sequences. Current state-of-the-art techniques focus exclusively on the image space occupied by the body for pose and action recognition. The method proposed here, however, focuses on the negative spaces: the areas surrounding the individual. This has resulted in the colour-coded negative space approach, an image preprocessing step that circumvents the need for complicated model fitting or template matching. The approach can be described as follows: the negative spaces surrounding the human silhouette are extracted using horizontal and vertical scanning processes. These negative-space areas are more numerous, and undergo more radical changes in shape, than the single area occupied by the figure of the person performing an action. The colour-coded negative space representation is formed from the four binary images produced by the scanning processes. Features are then extracted from the colour-coded images, based on the percentage of area occupied by distinct coloured regions as well as the bounding-box proportions. Pose clusters are identified using feedback from an independent action set. Subsequent images are classified using a simple Euclidean distance measure. An image sequence is thus temporally segmented into its corresponding pose representations, and action recognition simply becomes the detection of a temporally ordered sequence of poses that characterises the action. The method is purely vision-based, utilising monocular images with no need for body markers or special clothing. Two datasets were constructed using several actors performing different poses and actions, including waving their arms, sitting down and kicking a leg. These actions were recorded against a monochrome background to simplify segmentation of the actors from the background, then captured on DV cam and digitised into a database.
The silhouette images from these actions were isolated and placed in a frame or bounding box. The next step was to highlight the negative spaces using a directional scanning method, which colour-codes the negative spaces of each action. What became immediately apparent is that very distinctive colour patterns formed for different actions. To emphasise the action, different colours were allocated to the negative spaces surrounding the figure. For example, the space between the legs of an actor standing in a T-pose with legs apart would be allocated yellow, while the spaces below the arms were allocated different shades of green and the space surrounding the head different shades of purple. During an action in which the actor moves one leg up in a kicking fashion, the yellow area increases; inversely, when the actor puts his legs back together, the yellow negative space shrinks substantially. It also became apparent that these coloured negative spaces are interdependent and influence each other during the course of an action. For example, when an actor lifts one leg, increasing the yellow-coded negative space, the green space between that leg and the arm decreases. This interrelationship between colours holds true for all poses and actions presented in this thesis. For pose recognition, it is significant that these colour-coded negative spaces, and the way they change during an action or movement, are substantial and instantly recognisable: compare watching someone lift an arm with watching a vast negative space change shape. In a controlled research environment, several actors were instructed to perform a number of different actions. After colour-coding the negative spaces, it became apparent that every action can be recognised by a unique colour-coded pattern.
The challenge is to ascribe a numerical representation that extracts the essence of what is so visually apparent. The essence of pose recognition, and its measurability, lies in the relationship between the colours in these negative spaces and how they affect each other during a pose or action. The simplest way of measuring this relationship is to calculate the percentage of each colour present during an action; these calculated percentages become the basis of pose and action recognition. Plotting these percentages on a graph confirms that the essence of these different actions and poses can in fact be captured and recognised. Despite variations in these traces caused by timing differences, personal appearance and mannerisms, what emerged is a clear, recognisable pattern that can be married to an action or to different parts of an action. Actors might lift their left leg, some slightly higher than others, some more slowly than others, and these variations would be recorded in the colour-percentage traces; but there are very specific stages during the action where the traces correspond, making the action recognisable. In conclusion, using negative space as a tool in human pose recognition and tracking presents an exciting research avenue because it is less influenced by variations such as differences in personal appearance and changes in the angle of observation. The approach is also simple and does not rely on complicated models or templates.
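A minimal sketch of the scanning-and-percentage idea is given below. The straight-line edge scans and the 4-bit colour coding are assumptions standing in for the thesis's directional scanning method; only the overall shape of the pipeline (four binary masks, combined codes, per-colour area percentages) follows the description above.

```python
# Hedged sketch: colour-coded negative spaces from a binary silhouette.
# Background pixels are labelled by which image edges (left/right/top/bottom)
# can reach them along straight scan lines; the exact scanning rule in the
# thesis may differ.
import numpy as np

def negative_space_masks(silhouette):
    """silhouette: 2D bool array, True = figure. Returns four boolean masks of
    background pixels reachable from each edge along straight scan lines."""
    bg = ~np.asarray(silhouette, dtype=bool)
    left = np.logical_and.accumulate(bg, axis=1)
    right = np.logical_and.accumulate(bg[:, ::-1], axis=1)[:, ::-1]
    top = np.logical_and.accumulate(bg, axis=0)
    bottom = np.logical_and.accumulate(bg[::-1, :], axis=0)[::-1, :]
    return left, right, top, bottom

def colour_percentages(silhouette):
    """Combine the four masks into a 4-bit colour code per pixel and return
    the fraction of the frame covered by each code (the feature vector)."""
    masks = negative_space_masks(silhouette)
    code = sum(m.astype(int) << i for i, m in enumerate(masks))
    codes, counts = np.unique(code, return_counts=True)
    return dict(zip(codes.tolist(), (counts / code.size).tolist()))
```

Tracking `colour_percentages` frame by frame yields exactly the kind of per-colour traces described above, which can then be matched to pose clusters with a Euclidean distance measure.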
|