1.
An architecture and interaction techniques for handling ambiguity in recognition-based input. Mankoff, Jennifer C. January 2001 (has links)
No description available.
2.
Unconstrained iris recognition. Al Rifaee, Mustafa Moh'd Husien January 2014 (has links)
This research focuses on iris recognition, the most accurate form of biometric identification. The robustness of iris recognition comes from the unique characteristics of the human iris and the permanency of its texture: the iris is stable over a human lifetime, and environmental effects cannot easily alter its shape. Most iris recognition systems assume ideal image acquisition conditions, including a near infrared (NIR) light source to reveal the clear iris texture, look-and-stare constraints, and a close distance from the capturing device. However, the recognition accuracy of state-of-the-art systems decreases significantly when these constraints are relaxed. Recent advances have proposed different methods to process iris images captured in unconstrained environments. While these methods improve the accuracy of the original iris recognition system, they still have segmentation and feature selection problems, which result in a high FRR (False Rejection Rate) and FAR (False Acceptance Rate), or in recognition failure.

In the first part of this thesis, a novel segmentation algorithm for detecting the limbus and pupillary boundaries of human iris images, combined with a quality assessment process, is proposed. The algorithm first searches over the HSV colour space to detect the local-maxima sclera region, as this is the most easily distinguishable part of the human eye. The parameters from this stage are then used for eye area detection, upper/lower eyelid isolation and rotation angle correction. The second step is the iris image quality assessment process, since iris images captured under unconstrained conditions have heterogeneous characteristics. In addition, the probability of obtaining a mis-segmented sclera portion around the outer ring of the iris is very high, especially in the presence of reflections caused by a visible wavelength light source. Therefore, quality assessment procedures classify the images from the first step into seven categories based on their average RGB colour intensity, and an appropriate filter is applied according to the detected quality. In the third step, a binarization process is applied to the eye portion detected in the first step in order to locate the iris outer ring, using a threshold value defined from the image quality determined in the second step. Finally, for pupil area segmentation, the method searches the HSV colour space for local-minima pixels, as the pupil contains the darkest pixels in the human eye.

In the second part, a novel discriminating feature extraction and selection method based on the Curvelet transform is introduced. Most state-of-the-art iris recognition systems use textural features extracted from the iris images. While these fine features are very robust when extracted from high resolution, clear images captured at very close distances, they show major weaknesses when extracted from degraded images captured over long distances. Using the Curvelet transform to extract 2D geometrical features (curves and edges) from degraded iris images addresses the weakness of the 1D texture features extracted by classical methods based on textural analysis with the wavelet transform. Our experiments show significant improvements in segmentation and recognition accuracy when compared to state-of-the-art results.
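As an illustration of the two colour-space searches described above (a bright, desaturated sclera region found as a local maximum; the pupil found among the darkest pixels) and the intensity-based quality binning, the following sketch uses OpenCV and NumPy. The thresholds, morphology step and equal-width quality bins are assumptions for illustration, not values or procedures taken from the thesis.

# A minimal sketch, not the thesis implementation: HSV-based sclera/pupil
# localisation and a simple mean-intensity quality category.
import cv2
import numpy as np

def locate_sclera_and_pupil(bgr_image, sclera_sat_max=40, sclera_val_min=170,
                            pupil_val_max=50):
    """Return rough binary masks for the sclera and pupil regions (illustrative thresholds)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    _h, s, v = cv2.split(hsv)

    # Sclera: bright (high value) and desaturated pixels, i.e. the white of the eye.
    sclera_mask = ((s < sclera_sat_max) & (v > sclera_val_min)).astype(np.uint8) * 255

    # Pupil: the darkest pixels in the eye region (local minima of the value channel).
    pupil_mask = (v < pupil_val_max).astype(np.uint8) * 255

    # Morphological opening removes small speckles so the large regions dominate.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    sclera_mask = cv2.morphologyEx(sclera_mask, cv2.MORPH_OPEN, kernel)
    pupil_mask = cv2.morphologyEx(pupil_mask, cv2.MORPH_OPEN, kernel)
    return sclera_mask, pupil_mask

def quality_category(bgr_image, n_categories=7):
    """Bin an eye image into one of n categories by its mean RGB intensity (equal-width bins assumed)."""
    mean_intensity = float(bgr_image.mean())        # average over all channels
    edges = np.linspace(0, 255, n_categories + 1)
    return int(np.clip(np.digitize(mean_intensity, edges) - 1, 0, n_categories - 1))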
3.
Minimal kernel classifiers for pattern recognition problems. Hooper, Richard January 1996 (has links)
No description available.
4.
An incoherent correlator-based star tracking system for satellite navigation. Kouris, Aristodimos January 2002 (has links)
No description available.
5.
An approach to high performance image classifier design using a moving window principle. Hoque, Md. Sanaul January 2001 (has links)
No description available.
6.
Multi-modal prediction and modelling using artificial neural networks. Lee, Gareth E. January 1991 (has links)
No description available.
7.
Automatic drawing recognition. Mahmood, A. January 1987 (has links)
No description available.
8.
Unfamiliar facial identity registration and recognition performance enhancement. Adam, Mohamad Z. January 2013 (has links)
The work in this thesis aims at studying the problems related to the robustness of a face recognition system, with specific attention given to the issues of handling image variation complexity and the inherently limited Unique Characteristic Information (UCI) within the scope of an unfamiliar identity recognition environment. These issues are the main themes in developing a mutual understanding of extraction and classification strategies, and are carried out as two interdependent but related blocks of research work.

The complexity of the image variation problem is built up from factors including viewing geometry, illumination, occlusion and other kinds of intrinsic and extrinsic image variation. Ideally, recognition performance increases whenever the variation is reduced and/or the UCI is increased. However, variation reduction on 2D facial images may result in the loss of important clues or UCI data for a particular face; conversely, increasing the UCI may also increase the image variation. To reduce the loss of information while reducing or compensating for the variation complexity, a hybrid technique is proposed in this thesis, derived from three conventional approaches to the variation compensation and feature extraction tasks. In the first research block, transformation, modelling and compensation approaches are combined to deal with the variation complexity. The ultimate aim of this combination is to represent the UCI (transformation) without losing the important features (modelling), and to discard or reduce (compensation) the level of variation complexity in a given face image. Experimental results have shown that discarding certain obvious variations enhances the desired information rather than risking the loss of the UCI of interest. The modelling and compensation stages benefit both variation reduction and UCI enhancement. Colour, gray level and edge image information are used to manipulate the UCI, involving analysis of skin colour, facial texture and feature measurement respectively. The Derivative Linear Binary Transformation (DLBT) technique is proposed for feature measurement consistency. Prior knowledge of the input image, its symmetrical properties, the informative region and the consistency of some features is fully utilized in preserving the UCI feature information. As a result, similarity and dissimilarity representations for identity parameters or classes are obtained from the selected UCI representation, which involves derivative feature size and distance measurements, facial texture and skin colour. These are mainly used to support the strategy of unfamiliar identity classification in the second block of the research work.

Since all faces share a similar structure, the classification technique should increase the similarity within a class while increasing the dissimilarity between classes. Furthermore, a smaller class places less burden on the identification or recognition processes. The collateral classification strategy of identity representation introduced in this thesis manipulates the availability of collateral UCI for classifying identity parameters into regional appearance, gender and age classes. In this regard, the registration of collateral UCIs has been designed to collect more identity information. As a result, the performance of unfamiliar identity recognition is improved by exploiting the specific UCI for class recognition and, possibly, the small size of each class. The experiments were conducted using data from our own database and an open database comprising three regional appearances, two age groups and two genders, incorporating pose and illumination image variations.
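The collateral classification idea above can be illustrated with a small sketch: restrict the gallery to the probe's collateral class (regional appearance, gender, age group) before matching UCI feature vectors. The data structures, attribute labels and Euclidean distance measure below are assumptions for illustration, not the thesis's DLBT-based implementation.

# A minimal, hypothetical sketch of collateral-class filtering before matching.
from dataclasses import dataclass
import numpy as np

@dataclass
class GalleryEntry:
    identity: str
    region: str                # e.g. one of three regional-appearance classes
    gender: str                # e.g. "female" / "male"
    age_group: str             # e.g. "young" / "adult"
    uci_features: np.ndarray   # skin colour, texture and geometry measurements

def recognise(probe_features, probe_region, probe_gender, probe_age, gallery):
    """Match only within the probe's collateral class; return the best-matching identity."""
    candidates = [g for g in gallery
                  if (g.region, g.gender, g.age_group) ==
                     (probe_region, probe_gender, probe_age)]
    if not candidates:
        candidates = gallery   # fall back to the full gallery if the class is empty
    # Nearest neighbour on Euclidean distance over the UCI feature vectors.
    distances = [np.linalg.norm(g.uci_features - probe_features) for g in candidates]
    return candidates[int(np.argmin(distances))].identity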
9.
Image segmentation on the basis of texture and depth. Booth, David M. January 1991 (has links)
No description available.
10.
Machine learning for handprinted character perception. Malyan, R. R. January 1989 (has links)
Humans are well suited to the reading of textual information, but it has not yet been possible to develop a machine that emulates this form of human behaviour. In the past, machines have been characterised by static forms of the specific knowledge necessary for character recognition, and the resulting reading behaviour is most uncharacteristic of the way humans perceive textual information. The major problem with handprinted character recognition is the infinite variability in character shapes and the ambiguities many of these shapes exhibit. Human perception of handprinted characters makes extensive use of "world knowledge" to remove such ambiguities, and humans continually modify their world knowledge, acquiring new knowledge as they read, to further enhance their reading behaviour. An information processing model for the perception and learning of handprinted characters is proposed. The function of the model is to enable ambiguous character descriptions to converge to single character classifications, and the accuracy of this convergence improves with reading experience on handprinted text. The model consists of three component parts. The first is a character classifier that recognises character patterns; these patterns may be both distorted and noisy, where distortion is defined as a consistent variability from known archetypal character descriptions and noise as a random, inconsistent variability in character shape. The second is a perceptive mechanism that makes inferences from an incomplete linguistic world model of an author, or of a specific domain of discourse drawn from many authors. Finally, an incremental learning capability is integrated into the character classifier and the perceptive mechanism, enabling the internal world model to adapt continually to changes in the domain of discourse or to different authors. A demonstrator is described, together with a summary of experimental results that clearly show the improvement in machine perception resulting from continuous incremental learning.
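The convergence from ambiguous character candidates to a single classification, guided by a lexicon-style world model that is extended incrementally, can be sketched as follows. The lexicon, candidate sets and fallback rule are illustrative assumptions rather than the demonstrator described in the abstract.

# A minimal illustrative sketch: ambiguous per-character candidates are
# resolved against a lexicon (the "world model"), which grows as new words
# are confirmed (incremental learning).
from itertools import product

class HandprintPerceiver:
    def __init__(self, lexicon):
        self.lexicon = set(lexicon)          # world knowledge: known words

    def disambiguate(self, candidates_per_char):
        """candidates_per_char: list of candidate-label lists, one per character position."""
        readings = [''.join(chars) for chars in product(*candidates_per_char)]
        known = [w for w in readings if w in self.lexicon]
        # Converge to a single classification when the world model allows it;
        # otherwise fall back to the classifier's top-ranked candidates.
        return known[0] if known else ''.join(c[0] for c in candidates_per_char)

    def learn(self, confirmed_word):
        """Incremental learning: extend the world model with newly confirmed knowledge."""
        self.lexicon.add(confirmed_word)

# Example: 'c'/'e' and 't'/'f' are visually ambiguous in handprint.
perceiver = HandprintPerceiver(lexicon={"cat", "clap", "eel"})
print(perceiver.disambiguate([["c", "e"], ["a"], ["t", "f"]]))   # -> "cat"
perceiver.learn("eat")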