21
Face pose estimation in monocular images. Shafi, Muhammad, January 2010.
People use the orientation of their faces to convey rich interpersonal information. For example, a person will direct his face to indicate who the intended target of a conversation is. Similarly, in a conversation, face orientation is a non-verbal cue to the listener about when to switch roles and start speaking, and a nod indicates that a person understands, or agrees with, what is being said. Furthermore, face pose estimation plays an important role in human-computer interaction, virtual reality applications, human behaviour analysis, pose-independent face recognition, driver vigilance assessment, gaze estimation, etc. Robust face recognition has been a focus of research in the computer vision community for more than two decades. Although substantial research has been done and numerous methods have been proposed for face recognition, challenges remain in this field. One of these is face recognition under varying poses, which is why face pose estimation is still an important research area. In computer vision, face pose estimation is the process of inferring the face orientation from digital imagery. It requires a series of image processing steps to transform a pixel-based representation of a human face into a high-level concept of direction. An ideal face pose estimator should be invariant to a variety of image-changing factors such as camera distortion, lighting conditions, skin colour, projective geometry, facial hair, facial expressions, the presence of accessories like glasses and hats, etc. Face pose estimation has been a focus of research for about two decades and numerous research contributions have been presented in this field. However, face pose estimation techniques in the literature still have shortcomings and limitations in terms of accuracy, applicability to monocular images, autonomy, identity and lighting variations, image resolution variations, range of face motion, computational expense, presence of facial hair, presence of accessories like glasses and hats, etc. These shortcomings of existing face pose estimation techniques motivated the research work presented in this thesis. The main focus of this research is to design and develop novel face pose estimation algorithms that improve automatic face pose estimation in terms of processing time, computational expense, and invariance to different conditions.
22
Multichannel Pulse Oximetry: Effectiveness in Reducing HR and SpO2 Error due to Motion Artifacts. Warren, Kristen Marie, 02 February 2016.
Pulse oximetry is used to measure heart rate (HR) and arterial oxygen saturation (SpO2) from photoplethysmographic (PPG) waveforms. PPG waveforms are highly sensitive to motion artifact (MA), limiting the implementation of pulse oximetry in mobile physiological monitoring using wearable devices. Previous studies have shown that multichannel pulse oximetry can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this study, we introduce a multichannel forehead-mounted pulse oximeter and investigate the performance of this novel sensor under a variety of intense motion artifacts. We have developed a multichannel template-matching algorithm that chooses the channel with the least amount of motion artifact to calculate HR and SpO2 every 2 seconds. We show that for a wide variety of random motion, channels respond differently to motion, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and HR and SpO2 error. Based on 31 data sets of PPG waveforms corrupted by random motion, the mean relative HR error was decreased by an average of 5.6 bpm when the multichannel-switching algorithm was compared to the worst performing channel. The percentage of HR measurements with absolute errors ≤ 5 bpm during motion increased by an average of 27.8 % when the multichannel-switching algorithm was compared to the worst performing channel. Similarly, the mean relative SpO2 error was decreased by an average of 4.3 % during motion when the multichannel-switching algorithm was compared to each individual channel. The percentage of SpO2 measurements with absolute error ≤ 3 % during motion increased by an average of 40.7 % when the multichannel-switching algorithm was compared to the worst performing channel. Implementation of this multichannel algorithm in a wearable device will decrease dropouts in HR and SpO2 measurements during motion. Additionally, the differences in motion frequency introduced across channels observed in this study set a precedent for future multichannel-based algorithms that make pulse oximetry measurements more robust during a greater variety of intense motion.
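The channel-selection step described above can be illustrated with a short sketch. This is not the authors' implementation; the sampling rate, the template source, and the FFT-based HR estimate are assumptions made purely for illustration: for every 2-second window, each PPG channel is scored by normalized correlation against a pulse template and HR is computed from the best-scoring (least motion-corrupted) channel.

```python
import numpy as np

FS = 100  # assumed PPG sampling rate in Hz

def template_score(segment, template):
    """Normalized cross-correlation between a PPG window and a pulse template."""
    seg = (segment - segment.mean()) / (segment.std() + 1e-9)
    tpl = (template - template.mean()) / (template.std() + 1e-9)
    return float(np.correlate(seg, tpl, mode="valid").max() / len(tpl))

def heart_rate_bpm(segment, fs=FS):
    """Crude HR estimate from the dominant spectral peak of the window."""
    spectrum = np.abs(np.fft.rfft(segment - segment.mean()))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)        # roughly 42-210 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

def multichannel_hr(channels, template, fs=FS):
    """channels: one 2-second PPG window per channel, equal lengths."""
    scores = [template_score(c, template) for c in channels]
    best = int(np.argmax(scores))                 # channel least corrupted by motion
    return best, heart_rate_bpm(channels[best], fs)
```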
23
Towards Developing Computer Vision Algorithms and Architectures for Real-world Applications. January 2018.
abstract: Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. In this dissertation, we focus on developing algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video data, sparse feature selection algorithms for medical image analysis, and automated feature extraction using a convolutional neural network for blood cancer grading.
To detect and classify objects in video, the objects have to be separated from the background, and discriminant features are then extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application specific and pose major challenges for object detection and classification tasks. In this dissertation, we present an effective object-flow-based ROI generation algorithm for segmenting moving objects in video data, which can be applied in surveillance and self-driving vehicle applications. Optical flow can also be used as a feature in human action recognition, and we present the use of optical flow features in a pre-trained convolutional neural network to improve the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at the time.
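As a rough illustration of flow-based ROI generation (not the dissertation's implementation; the Farneback flow, the magnitude threshold, and the area filter are assumptions), a moving-object ROI can be obtained by thresholding the dense optical-flow magnitude between consecutive frames:

```python
import cv2
import numpy as np

def moving_object_rois(prev_gray, curr_gray, mag_thresh=1.5, min_area=200):
    """Return bounding boxes of regions that move between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```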
Medical images and videos pose unique challenges for image understanding, mainly because tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is difficult, so an automated feature selection method is desired. Sparse learning is a technique for extracting the most discriminant and representative features from raw visual data. However, sparse learning with L1 regularization only takes sparsity in the feature dimension into consideration; we improve the algorithm so that it also selects the type of features: less important or noisy feature types are entirely removed from the feature set. We demonstrate this algorithm by analyzing endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the loss function in sparse learning to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key frame extraction from gastroscopic video data. With recent advances in multi-layer neural networks, automatic end-to-end feature learning has become feasible. A convolutional neural network mimics the mammalian visual cortex and can automatically extract the most discriminant features from training samples. We present the use of a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer; it reaches 91% accuracy, on par with analysis by expert pathologists.
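The feature-type selection described above is in the spirit of a group-sparse (L2,1) penalty, which zeroes out whole groups of features rather than individual dimensions. The toy proximal-gradient solver below is a hedged sketch, not the dissertation's algorithm; the grouping, step size, and regularization weight are illustrative assumptions:

```python
import numpy as np

def group_sparse_fit(X, y, groups, lam=0.1, lr=0.01, n_iter=500):
    """groups: list of index arrays, one per feature type (e.g. color, texture)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n          # gradient of the squared loss
        v = w - lr * grad
        for g in groups:                       # proximal step: group soft-thresholding
            norm = np.linalg.norm(v[g])
            shrink = max(0.0, 1.0 - lr * lam * np.sqrt(len(g)) / (norm + 1e-12))
            w[g] = shrink * v[g]
    return w  # feature types whose weights are all zero are removed entirely
```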
Developing real-world computer vision applications is more than just developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, ease of use and deployment, etc. The general processing pipelines and system architectures of computer-vision-based applications share many design principles. We developed common processing components and a generic framework for computer vision applications, and a versatile scale-adaptive template matching algorithm for object detection. We demonstrate these design principles and best practices by developing and deploying a complete computer vision application in real life, a multi-channel water level monitoring system, whose techniques and design methodology can be generalized to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
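A scale-adaptive template matcher can be sketched as normalized correlation evaluated over a range of template scales; this is a generic illustration under assumed scale bounds, not the system's actual algorithm:

```python
import cv2
import numpy as np

def match_template_multiscale(image_gray, template_gray,
                              scales=np.linspace(0.5, 2.0, 16)):
    """Return (score, top-left corner, scale) of the best match across scales."""
    best = (-1.0, None, None)
    for s in scales:
        interp = cv2.INTER_AREA if s < 1.0 else cv2.INTER_LINEAR
        tpl = cv2.resize(template_gray, None, fx=s, fy=s, interpolation=interp)
        if tpl.shape[0] > image_gray.shape[0] or tpl.shape[1] > image_gray.shape[1]:
            continue                            # skip scales larger than the image
        result = cv2.matchTemplate(image_gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best
```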
24
Reconnaissance de partitions musicales par modélisation floue des informations extraites et des règles de notation / Recognition of Musical Scores by Fuzzy Modelling of Extracted Information and Notation Rules. Rossant, Florence, 12 1900 (PDF).
This thesis presents a complete method for the recognition of printed music scores, in the monophonic case. The system proceeds in two distinct phases: (1) segmentation and symbol analysis (essentially by correlation), designed to overcome the difficulties caused by interconnected symbols and printing defects, which produces recognition hypotheses; (2) high-level interpretation, based on fuzzy modelling of the information extracted from the image and of the notation rules, which leads to the decision. In this approach, the decision is deferred until the context is fully known. All configurations of hypotheses are evaluated in turn, and the most consistent one is retained by optimizing over all the criteria. The formalism used, based on fuzzy set and possibility theory, makes it possible to account for the different sources of imprecision and uncertainty, as well as for the looseness and flexibility of music notation. To improve reliability, we also propose methods for automatically flagging potential recognition errors, as well as a learning procedure that optimizes the system parameters for a particular score. The performance obtained on a large database demonstrates the value of the proposed method.
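The deferred, context-wide decision can be illustrated with a hedged sketch: each symbol hypothesis carries fuzzy degrees for several criteria (correlation score, position on the staff, compatibility with notation rules; all names here are assumptions), the degrees are combined conjunctively, and the configuration with the highest combined degree is retained. The brute-force enumeration below is for illustration only:

```python
from itertools import product

def combined_degree(criteria_degrees):
    """Conjunctive fuzzy combination; the minimum is a common choice."""
    return min(criteria_degrees)

def best_configuration(hypotheses_per_symbol):
    """hypotheses_per_symbol: list (one entry per symbol) of (label, [degree, ...]) tuples."""
    best_score, best_config = -1.0, None
    for config in product(*hypotheses_per_symbol):   # every combination of hypotheses
        score = min(combined_degree(degrees) for _, degrees in config)
        if score > best_score:
            best_score, best_config = score, [label for label, _ in config]
    return best_config, best_score
```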
25
Semi-Automated Bullet Group Analysis for Shooting Target Training. Machiraju, Naga Kiran, January 2018.
Competitive shooting as a sport is becoming increasingly popular. Bullet group analysis is the process of analysing the locations of bullet holes from one shooting session; it serves as a metric for the precision of the weapon and the shooter's accuracy and consistency, and it helps in finding an accurate load for the cartridge. Knowledge of these factors can help in improving one's shooting and in fine-tuning skills as a shooter. The bullet group is also influenced by the accuracy of the rifle, the optimal hand load, free run distance, environmental conditions such as humidity, temperature, ambient light and wind speed, and the shooter's position. Analysing the bullet group can be done in various ways; one way is to take a digital image, detect the positions of the bullet holes in it, and calculate metrics from these positions, such as the geometry of the bullet group, the largest distance between two bullets, and the compactness of the bullet group on the target. In this work, detection of bullet holes is done using the following techniques: template matching, histogram equalization, white balancing, median and Gaussian filtering, and peak detection algorithms, after which the positions of the bullet holes in the image are obtained. Complete automation can be achieved by training the algorithm within a machine learning framework using artificial neural networks. Existing bullet group analysis software requires the bullet group to be shot on a specific target, which prevents shooters from shooting on a target of their own choice, and those targets are not universal and vary from place to place. This algorithm aims to work on various types of targets, taking a step towards a more generalized and more versatile algorithm.
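Once the bullet-hole centres have been detected, the group metrics mentioned above can be computed directly from their coordinates. The snippet below is an illustrative sketch (not the thesis code) of two common metrics, the extreme spread and a mean-radius compactness measure:

```python
import numpy as np
from itertools import combinations

def group_metrics(holes):
    """holes: (N, 2) array of detected bullet-hole centres in target coordinates."""
    holes = np.asarray(holes, dtype=float)
    extreme_spread = max(np.linalg.norm(a - b) for a, b in combinations(holes, 2))
    centre = holes.mean(axis=0)
    mean_radius = np.linalg.norm(holes - centre, axis=1).mean()
    return {"extreme_spread": extreme_spread, "mean_radius": mean_radius}
```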
26
Automatic Eartag Recognition on Dairy Cows in Real Barn Environment. Ilestrand, Maja, January 2017.
All dairy cows in Europe wear unique identification tags in their ears. These eartags are standardized and contain the cow's identification number; today they are only used for visual identification by the farmer. The cow also needs to be identified by an automatic identification system connected to milking machines and other robotics used on the farm. Currently this is solved with a non-standardized radio transmitter, which can be placed in different positions on the cow, and different receivers need to be used on different farms. Other drawbacks of the currently used identification system are that it is expensive and unreliable. This thesis explores the possibility of replacing this non-standardized radio-frequency-based identification system with a standardized computer vision based system. The method proposed in this thesis uses a colour-threshold approach for detection, a flood-fill approach followed by a Hough transform and a projection method for segmentation, and evaluates template matching, k-nearest neighbour and support vector machines as optical character recognition methods. The results of the thesis show that the quality of the data used as input to the system is vital. With good data, k-nearest neighbour, which showed the best results of the three OCR approaches, correctly recognizes 98 % of the digits.
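The k-nearest-neighbour OCR step can be sketched as follows; the patch size, the feature (raw resized pixels), and the scikit-learn classifier are assumptions for illustration and not the thesis implementation:

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

PATCH = (20, 20)  # assumed normalized digit size

def to_feature(digit_img):
    """Resize a segmented digit patch and flatten it into a feature vector."""
    patch = cv2.resize(digit_img, PATCH, interpolation=cv2.INTER_AREA)
    return patch.astype(np.float32).ravel() / 255.0

def train_digit_knn(train_images, train_labels, k=3):
    X = np.stack([to_feature(im) for im in train_images])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X, train_labels)
    return clf

def read_eartag(clf, segmented_digits):
    """Classify each segmented digit and concatenate into the identification number."""
    X = np.stack([to_feature(im) for im in segmented_digits])
    return "".join(str(d) for d in clf.predict(X))
```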
27
Computer-Aided Optically Scanned Document Information Extraction System. Mei, Zhijie, January 2020.
This paper introduces a computer-aided optically scanned document information extraction system. It can extract information such as invoice number, issue date, buyer, etc., from optically scanned documents to meet the needs of customs declaration companies. The system outputs the structured information to a relational database. In detail, a software architecture for information extraction from optically scanned documents of diverse structure is designed. In this system, the original document is classified first; if its template is pre-defined in the system, it is passed to template-based extraction to improve extraction performance. Then, a method of image enhancement to improve the image classification is proposed. This method aims to optimize the accuracy of the neural network model by extracting template-related features and actively removing unrelated features. Lastly, the above system is implemented in this paper. The extraction modules are programmed in Python, a cross-platform language. The system comprises three parts: a classification module, template-based extraction and non-template extraction, all of which have APIs and can be run independently. This makes the system flexible and easy to customize for further demands. 445 real-world customs document images were used to evaluate the system. The results revealed that the system supports diverse documents through non-template extraction and reaches high overall performance with template-based extraction, showing that the goal was essentially achieved.
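The routing logic between template-based and non-template extraction can be sketched as below; the registry, class names, and extractor interfaces are assumptions made for illustration, not the system's actual API:

```python
TEMPLATES = {}   # e.g. {"invoice_vendor_a": invoice_template_a}, registered at startup

def extract(document_image, classifier, non_template_extractor):
    """Classify a scanned document, then route it to the matching extractor."""
    doc_class = classifier.predict(document_image)
    template = TEMPLATES.get(doc_class)
    if template is not None:
        fields = template.extract(document_image)          # template-based extraction
    else:
        fields = non_template_extractor.extract(document_image)
    fields["document_class"] = doc_class
    return fields   # structured fields, ready to be written to the relational database
```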
28
Rozpoznání obličeje / Face Recognition. Kopřiva, Adam, January 2010.
This master's thesis considers methods of face recognition. Methods with different approaches are described: knowledge-based methods, feature-invariant approaches, template matching methods and appearance-based methods. The thesis focuses particularly on template matching methods and on statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA). Template-matching-related methods such as active shape models (ASM) and active appearance models (AAM) are described in detail.
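A minimal sketch of the PCA ("eigenfaces") approach discussed in the thesis is shown below; the component count and the nearest-neighbour matching rule are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_eigenfaces(train_faces, n_components=50):
    """Learn a PCA subspace from flattened training face images."""
    X = np.stack([f.ravel().astype(np.float64) for f in train_faces])
    pca = PCA(n_components=n_components, whiten=True)
    return pca, pca.fit_transform(X)

def recognise(pca, train_proj, train_labels, face):
    """Project a probe face into the subspace and return the closest training identity."""
    proj = pca.transform(face.ravel().astype(np.float64)[None, :])
    distances = np.linalg.norm(train_proj - proj, axis=1)
    return train_labels[int(np.argmin(distances))]
```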
29
Rozpoznání kódu z kontrolního obrázku / Code Detection from Control Image. Růžička, Miloslav, January 2009.
This work deals with code detection from a control image. The document presents relevant image processing techniques dealing with noise reduction, thresholding, colour models, object segmentation and OCR. The project examines the advantages and disadvantages of two selected methods for object segmentation and introduces a developed system for object segmentation. The developed system for object segmentation and classification is implemented and evaluated, and the results are discussed in detail.
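The thresholding and segmentation steps discussed above can be sketched as Otsu binarization followed by connected-component extraction; this is a generic illustration with assumed parameters, not the system described in the thesis:

```python
import cv2

def segment_characters(image_gray, min_area=30):
    """Binarize a grayscale control image and return candidate character boxes."""
    blurred = cv2.GaussianBlur(image_gray, (3, 3), 0)            # noise reduction
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = [tuple(int(v) for v in stats[i, :4])                 # (x, y, w, h)
             for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return binary, sorted(boxes)                                  # left-to-right by x
```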
30
A Fast Localization Method Based on Distance Measurement in a Modeled Environment. Deo, Ashwin P., 03 August 2009.
No description available.