1

Affine invariant object recognition by voting match techniques

Hsu, Tao-i 12 1900 (has links)
Approved for public release; distribution is unlimited / This thesis begins with a general survey of different model-based systems for object recognition. The advantages and disadvantages of those systems are discussed. A system is then selected for study because of its effective affine invariant matching [Ref. 1] characteristic. This system involves two separate phases, modeling and recognition; one is done off-line and the other on-line. A hashing technique is implemented to achieve fast accessing and voting. Different test data sets are used in experiments to illustrate the recognition capabilities of this system, demonstrating partial matching, recognition of objects under similarity transformations applied to the models, and the effect of noise perturbation. The testing results are discussed, and related experiences and recommendations are presented. / http://archive.org/details/affineinvarianto00hsut / Captain, Taiwan Republic of China Army
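The off-line hashing phase and on-line voting phase described above are in the spirit of affine-invariant geometric hashing; the following Python sketch is a generic, hypothetical rendering of that idea (the affine-frame construction, the quantization step, and all function names are illustrative assumptions, not the thesis implementation).

    from collections import defaultdict
    from itertools import permutations
    import numpy as np

    def affine_coords(p, b0, b1, b2):
        # Coordinates of p in the affine frame of (b0, b1, b2); assumes the basis is not collinear.
        M = np.column_stack([b1 - b0, b2 - b0])   # 2x2 basis matrix
        return np.linalg.solve(M, p - b0)         # (alpha, beta), invariant under affine maps

    def build_table(models, step=0.1):
        # Off-line phase: hash every model point in every ordered 3-point basis.
        table = defaultdict(list)
        for name, pts in models.items():
            for basis in permutations(range(len(pts)), 3):
                b0, b1, b2 = pts[basis[0]], pts[basis[1]], pts[basis[2]]
                for j, p in enumerate(pts):
                    if j in basis:
                        continue
                    key = tuple(np.round(affine_coords(p, b0, b1, b2) / step).astype(int))
                    table[key].append((name, basis))
        return table

    def recognize(scene_pts, table, step=0.1):
        # On-line phase: pick a scene basis, cast votes, return the best (model, basis) pair.
        votes = defaultdict(int)
        b0, b1, b2 = scene_pts[0], scene_pts[1], scene_pts[2]   # in practice many bases are tried
        for p in scene_pts[3:]:
            key = tuple(np.round(affine_coords(p, b0, b1, b2) / step).astype(int))
            for entry in table.get(key, []):
                votes[entry] += 1
        return max(votes.items(), key=lambda kv: kv[1]) if votes else None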
2

Contour Matching Using Local Affine Transformations

Bachelder, Ivan A. 01 April 1992 (has links)
Partial constraints are often available in visual processing tasks requiring the matching of contours in two images. We propose a non-iterative scheme to determine contour matches using locally affine transformations. The method assumes that contours are approximated by the orthographic projection of planar patches within oriented neighborhoods of varying size. For degenerate cases, a minimal matching solution is chosen closest to the minimal pure translation. Performance on noisy synthetic and natural contour imagery is reported.
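As a small illustration of what a locally affine transformation amounts to computationally, the sketch below fits an affine map A, t to a neighborhood of corresponding contour points by least squares; it is an assumed, generic formulation rather than the scheme proposed in the abstract.

    import numpy as np

    def fit_local_affine(src, dst):
        # Solve dst_i ~ A @ src_i + t in the least-squares sense; src and dst are N x 2 arrays.
        X = np.hstack([src, np.ones((src.shape[0], 1))])   # design matrix [x y 1]
        params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # 3 x 2 solution
        return params[:2].T, params[2]                     # (A, t)

    # Example: a neighbourhood of contour points and its sheared, translated image
    src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    A_true = np.array([[1.0, 0.3], [0.0, 1.0]])
    dst = src @ A_true.T + np.array([2.0, -1.0])
    A_est, t_est = fit_local_affine(src, dst)              # recovers A_true and the translation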
3

Medical Image Segmentation by Transferring Ground Truth Segmentation

Vyas, Aseem January 2015 (has links)
The segmentation of medical images is a difficult task due to the inhomogeneous intensity variations that occur during digital image acquisition, the complicated shape of the object, and the medical expert's lack of semantic knowledge. Automated segmentation algorithms work well for some medical images, but no algorithm has been general enough to work for all medical images. In practice, most of the time the segmentation results are corrected by experts before actual use. In this work, we are motivated to determine how to make use of manually segmented data in automatic segmentation. The key idea is to transfer the ground truth segmentation from a database of training images to a given test image. The ground truth segmentation of MR images is done by experts. The process includes a hierarchical image decomposition approach that performs shape matching of the test image at several levels, starting with the image as a whole (i.e. level 0) and then going through a pyramid decomposition (i.e. level 1, level 2, etc.) with the database of training images and the given test image. The goal of pyramid decomposition is to find the section of a training image that best matches a section of the test image at a given level. After that, a re-composition approach is taken to place the best-matched sections of the training images into the original test image space. Finally, the ground truth segmentation is transferred from the best training images to the corresponding locations in the test image. We have tested our method on a hip joint MR image database, and the experiments show successful results for level 0, level 1 and level 2 re-compositions. Results improve with deeper level decompositions, which supports our hypothesis.
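A minimal sketch of the two generic ingredients named above, pyramid decomposition and best-matching section search, is given below; the 2x2-averaging pyramid and the SSD matching criterion are stand-in assumptions, since the abstract does not specify them.

    import numpy as np

    def pyramid(img, levels):
        # Level 0 is the full image; each deeper level halves the resolution by 2x2 averaging.
        out = [img.astype(float)]
        for _ in range(levels):
            h, w = out[-1].shape
            cropped = out[-1][:h - h % 2, :w - w % 2]
            out.append(cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
        return out

    def best_match(train_img, test_patch):
        # Slide test_patch over train_img; return the top-left corner with the lowest SSD.
        th, tw = test_patch.shape
        best, best_pos = np.inf, (0, 0)
        for r in range(train_img.shape[0] - th + 1):
            for c in range(train_img.shape[1] - tw + 1):
                ssd = np.sum((train_img[r:r + th, c:c + tw] - test_patch) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (r, c)
        return best_pos, best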
4

Modelling Distance Functions Induced by Face Recognition Algorithms

Chaudhari, Soumee 09 November 2004 (has links)
Face recognition has in the past few years become a very active area of research in the fields of computer vision, image processing, and cognitive psychology. This has spawned various algorithms of different complexities. Principal component analysis (PCA) is a popular face recognition algorithm and has often been used to benchmark other face recognition algorithms for identification and verification scenarios. In this thesis, however, we try to analyze different face recognition algorithms at a deeper level. The objective is to model the distances output by any face recognition algorithm as a function of the input images. We achieve this by creating an affine eigenspace from the PCA space such that it approximates the results of the face recognition algorithm under consideration as closely as possible. Holistic template matching algorithms such as the Linear Discriminant Analysis (LDA) algorithm and the Bayesian Intrapersonal/Extrapersonal Classifier (BIC), as well as local feature based algorithms such as the Elastic Bunch Graph Matching (EBGM) algorithm and a commercial face recognition algorithm, are selected for our experiments. We experiment on two different data sets, the FERET data set and the Notre Dame data set. The FERET data set consists of images of subjects with variation in both time and expression. The Notre Dame data set consists of images of subjects with variation in time. We train our affine approximation algorithm on 25 subjects and test with 300 subjects from the FERET data set and 415 subjects from the Notre Dame data set. We also analyze the effect of the distance metric used by the face recognition algorithm on the accuracy of the approximation. We study the quality of the approximation in the context of recognition for the identification and verification scenarios, characterized by cumulative match score curves (CMC) and receiver operating characteristic (ROC) curves, respectively. Our studies indicate that both the holistic template matching algorithms and the feature based algorithms can be well approximated. We also find that the affine approximation training can be generalized across covariates. For the data with time variation, we find that the rank order of approximation performance is BIC, LDA, EBGM, and commercial. For the data with expression variation, the rank order is LDA, BIC, commercial, and EBGM. Experiments approximating PCA with distance measures other than Euclidean also performed very well; PCA+Euclidean distance is best approximated, followed by PCA+MahL1, PCA+MahCosine, and PCA+Covariance.
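One plausible way to realize such an affine approximation, sketched below under assumptions not stated in the abstract, is to embed the algorithm's distance matrix with classical MDS and then fit a least-squares affine map from the PCA coordinates to that embedding, so that Euclidean distances after the map approximate the algorithm's distances; this is an illustration of the idea, not the thesis procedure.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def classical_mds(D, dim):
        # Classical MDS: recover coordinates whose pairwise distances approximate D (n x n).
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J                 # double-centred squared distances
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:dim]
        return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

    def fit_affine(X, Y):
        # Least-squares affine map X -> Y (rows are samples).
        Xh = np.hstack([X, np.ones((X.shape[0], 1))])
        W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
        return W

    # X: PCA coordinates of faces; D_alg: distance matrix from the black-box algorithm
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    D_alg = squareform(pdist(X @ rng.normal(size=(10, 10))))   # stand-in for an algorithm's distances
    Y = classical_mds(D_alg, dim=10)
    W = fit_affine(X, Y)
    D_approx = squareform(pdist(np.hstack([X, np.ones((50, 1))]) @ W))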
5

An Indepth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on Principal Components Analysis (PCA) have often been used as a baseline. The objective of this thesis is to develop a strategy to estimate the best affine transformation which, when applied to the eigenspace of the PCA face recognition algorithm, can approximate the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by any given face recognition algorithm. The affine approximation strategy helps in comparing how close a face recognition algorithm is to the PCA-based face recognition algorithm. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian intrapersonal/extrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were arrived at based on the results produced by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: one in which both the face recognition and the affine approximation algorithms were trained on the same data set, and another in which different data sets were used to train the two algorithms. Gross error measures such as the average RMS error and the Stress-1 error were used to directly compare the raw similarity scores. The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and by the affine approximation strategy is not statistically significant. The results were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second training scenario.
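The two gross error measures named above can be written compactly; the sketch below uses standard definitions (RMS over off-diagonal entries and Kruskal's Stress-1), which may differ slightly in normalisation from the thesis.

    import numpy as np

    def rms_error(D_true, D_approx):
        # Root-mean-square difference between corresponding off-diagonal entries.
        mask = ~np.eye(D_true.shape[0], dtype=bool)
        return np.sqrt(np.mean((D_true[mask] - D_approx[mask]) ** 2))

    def stress1(D_true, D_approx):
        # Kruskal's Stress-1: normalised residual between the two distance matrices.
        mask = ~np.eye(D_true.shape[0], dtype=bool)
        num = np.sum((D_true[mask] - D_approx[mask]) ** 2)
        den = np.sum(D_true[mask] ** 2)
        return np.sqrt(num / den)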
6

A Methodology for the Integration of Hopfield Network and Genetic Algorithm Schemes for Graph Matching Problems

Huang, Chin-Chung 14 February 2005 (has links)
Object recognition is of great interest in modern industrial automation. Although a variety of approaches have been proposed to tackle the recognition problem, some cases, such as overlapping objects, articulated objects, and low-resolution images, are still not easy for existing schemes. Coping with these more complex images has remained a challenging task in the field. This dissertation, aiming to recognize objects from such images, proposes a new integrated method. For images with overlapping or articulated objects, graph matching methods are often used, treating recognition as a combinatorial optimization problem. Both the Hopfield network and the genetic algorithm (GA) are suitable tools for combinatorial optimization problems. Unfortunately, they both have serious drawbacks. The Hopfield network is sensitive to its initial state and stops at a local minimum if that state is not properly chosen. The GA, on the other hand, only finds a near-global solution, and it is time-consuming for large-scale tasks. This dissertation proposes to combine these two methods, eliminating their drawbacks while keeping their strengths, to solve some complex recognition problems. Before the integration, some arrangements are required. For instance, specialized 2-D GA operators are used to accelerate convergence. Also, the "seeds" of the GA solution are extracted as the initial state of the Hopfield network. By doing so, the efficiency of the system is greatly improved. Additionally, several fine-tuning post-matching algorithms are also needed. In order to solve the homomorphic graph matching problem, i.e., multiple occurrences in a single scene image, the Hopfield network has to repeat itself until the stopping criteria are met. The method can not only be used to obtain the homomorphic mapping between the model and the scene graphs, but it can also be applied to articulated object recognition. Here we do not need to know in advance whether the model is really an articulated object. The proposed method has been applied to measure kinematic properties of some simple machines, such as the positions of the joints and relative linear and angular displacements. The subject of articulated object recognition has rarely been addressed in the literature, particularly under affine transformations. Another unique application of the proposed method is also included in the dissertation: using low-resolution images, where the contour of an object is easily affected by noise. To increase performance, we use a hexagonal grid in dealing with such low-resolution images. A hexagonal FFT simulation is first presented to pre-process the hexagonal images for recognition. A feature vector matching scheme and a similarity matching scheme are also devised to recognize simpler images with only isolated objects. For complex low-resolution images with occluded objects, the integrated method has to be tailored to the hexagonal grid. The low-resolution, hexagonal version of the integrated scheme has also been shown to be suitable and robust.
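A minimal, hypothetical sketch of the integration idea follows: a GA-derived assignment is used as the seed (initial state) of a Hopfield-style relaxation over a model-to-scene assignment matrix. The energy terms, constants, and the uniform seed used here are illustrative assumptions, not the dissertation's formulation.

    import numpy as np

    def hopfield_match(C, V0, iters=200, lr=0.1, penalty=2.0):
        # Relax an assignment matrix V (model nodes x scene nodes).
        # C[i, a, j, b] scores matching i->a together with j->b; V0 is the seed state.
        V = V0.copy()
        for _ in range(iters):
            support = np.einsum('iajb,jb->ia', C, V)          # compatibility-driven input
            row_excess = V.sum(axis=1, keepdims=True) - 1.0   # soft one-to-one constraints
            col_excess = V.sum(axis=0, keepdims=True) - 1.0
            u = support - penalty * (row_excess + col_excess)
            V = np.clip(V + lr * (1.0 / (1.0 + np.exp(-u)) - V), 0.0, 1.0)
        return V

    # Toy problem with a random symmetric compatibility tensor and a uniform seed;
    # in the integrated scheme the seed would come from the GA instead.
    n = 4
    rng = np.random.default_rng(1)
    C = rng.normal(size=(n, n, n, n))
    C = 0.5 * (C + C.transpose(2, 3, 0, 1))
    V_final = hopfield_match(C, np.full((n, n), 1.0 / n))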
7

The recovery of 3-D structure using visual texture patterns

Loh, Angeline M. January 2006 (has links)
[Truncated abstract] One common task in Computer Vision is the estimation of three-dimensional surface shape from two-dimensional images. This task is important as a precursor to higher level tasks such as object recognition - since the shape of an object gives clues to what the object is - and object modelling for graphics. Many visual cues have been suggested in the literature to provide shape information, including the shading of an object, its occluding contours (the outline of the object that slants away from the viewer) and its appearance from two or more views. If the image exhibits a significant amount of texture, then this too may be used as a shape cue. Here, ‘texture’ is taken to mean the pattern on the surface of the object, such as the dots on a pear, or the tartan pattern on a tablecloth. This problem of estimating the shape of an object based on its texture is referred to as shape-from-texture and it is the subject of this thesis . . . The work in this thesis is likely to have an impact in a number of ways. The second shape-from-texture algorithm provides one of the most general solutions to the problem. On the other hand, if the assumptions of the first shape-from-texture algorithm are met, this algorithm provides an extremely usable method, in that users should be able to input images of textured objects and click on the frontal texture to quickly reconstruct a fairly good estimation of the surface. And lastly, the algorithm for estimating the transformation between textures can be used as a part of many shape-from-texture algorithms, as well as being useful in other areas of Computer Vision. This thesis gives two examples of other applications for the method: re-texturing an object and placing objects in a scene.
8

Analysis of Affine Equivalent Boolean Functions for Cryptography

Fuller, Joanne Elizabeth January 2003 (has links)
Boolean functions are an important area of study for cryptography. These functions, consisting merely of ones and zeros, are at the heart of numerous cryptographic systems and their ability to provide secure communication. Boolean functions have application in a variety of such systems, including block ciphers, stream ciphers and hash functions. The continued study of Boolean functions for cryptography is therefore fundamental to the provision of secure communication in the future. This thesis presents an investigation into the analysis of Boolean functions and, in particular, analysis of affine transformations with respect to both the design and application of Boolean functions for cryptography. Past research has often been limited by the difficulties arising from the magnitude of the search space. The research presented in this thesis will be shown to provide an important step towards overcoming such restrictions and hence forms the basis for a new analysis methodology. The new perspective allows a reduced view of the Boolean space in which all Boolean functions are grouped into connected equivalence classes, so that only one function from each class need be established. This approach is a significant development in Boolean function research with many applications, including class distinguishing, class structures, self-mapping analysis and finite-field-based s-box analysis. The thesis will begin with a brief overview of Boolean function theory, including an introduction to the main theme of the research, namely the affine transformation. This will be followed by the presentation of a fundamental new theorem describing the connectivity that exists between equivalence classes. The theorem of connectivity will form the foundation for the remainder of the research presented in this thesis. A discussion of efficient algorithms for the manipulation of Boolean functions will then be presented. The ability of Boolean function research to achieve new levels of analysis and understanding is centered on the availability of computer-based programs that can perform various manipulations. The development and optimisation of efficient algorithms specifically for execution on a computer will be shown to have a considerable advantage compared to those constructed using a more traditional approach to algorithm optimisation. The theorem of connectivity will be shown to be fundamental in providing many avenues of new analysis and application. These applications include the first non-exhaustive test for determining equivalent Boolean functions, a visual representation of the connected equivalence class structure to aid in the understanding of the Boolean space, and a self-mapping constant that enables enumeration of the functions in each equivalence class. A detailed survey of the classes with six inputs is also presented, providing valuable insight into their range and structure. This theme is then continued in the application of Boolean function construction. Two important new methodologies are presented: the first to yield bent functions and the second to yield the best currently known balanced functions of eight inputs with respect to nonlinearity. The implementation of these constructions is extremely efficient. The first construction yields bent functions of a variety of algebraic orders and input sizes. The second construction provides better results than previously proposed heuristic techniques. Each construction is then analysed with respect to its ability to produce functions from a variety of equivalence classes. Finally, in a further application of affine equivalence analysis, the impact on both s-box design and construction will be considered. The effect of linear redundancy in finite-field-based s-boxes will be examined, and in particular it will be shown that the AES s-box possesses complete linear redundancy. The effect of such analysis will be discussed, and an alternative approach to s-box design that ensures removal of all linear redundancy will be presented, in addition to the best known example of such an s-box.
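For concreteness, the affine transformation underlying these equivalence classes can be written as g(x) = f(Ax + b) + c·x + d over GF(2), with A invertible; the sketch below applies such a transformation to a truth table. The example function, matrix and vectors are assumptions chosen for demonstration, not taken from the thesis.

    import numpy as np

    def bits(x, n):
        # Integer x as a length-n 0/1 vector (least significant bit first).
        return np.array([(x >> i) & 1 for i in range(n)], dtype=int)

    def affine_image(f, A, b, c, d):
        # Truth table of g(x) = f(Ax ^ b) ^ c.x ^ d, where f is a truth table of length 2^n.
        n = A.shape[0]
        g = np.zeros_like(f)
        for x in range(len(f)):
            xv = bits(x, n)
            y = (A @ xv + b) % 2                     # affine change of the input variables
            y_int = int(sum(bit << i for i, bit in enumerate(y)))
            g[x] = (f[y_int] + int(c @ xv) + d) % 2  # affine change of the output
        return g

    # Example: a 3-variable function and an invertible A over GF(2)
    f = np.array([0, 1, 0, 0, 1, 0, 1, 1])           # truth table indexed by the integer x
    A = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])  # invertible mod 2
    g = affine_image(f, A, b=np.array([1, 0, 0]), c=np.array([0, 1, 1]), d=1)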
9

Construction d'un Atlas 3D numérique de la cornée humaine par recalage d'images / Construction of a 3D digital atlas of the human cornea by image registration

Haddeji, Akram 12 1900 (has links)
We propose to build a 3D digital atlas that contains the average characteristics and variability of the morphology of an organ. In particular, our work consists in the construction of a 3D digital atlas of the entire human cornea, including the anterior and posterior surfaces, built from the topographic maps provided by the Orbscan II topographer. First, we normalized the given population of corneas using a variant of the ICP (iterative closest point) algorithm for shape registration, fitting the anterior and posterior surfaces of each cornea simultaneously to the anterior and posterior surfaces of a reference cornea. Indeed, we developed a specific algorithm for corneal topographies that accounts for scaling during registration and that is based on a neighborhood search via the Euclidean distance to find the correspondence between points. After that, we built the corneal atlas by averaging the elevations of the registered anterior and posterior surfaces and by calculating their associated standard deviations. A population of 100 healthy corneas was used to construct the normal corneal atlas. To visualize the atlas, we used topographic color maps similar to those already offered by existing topographic systems. Finally, observations were made on the corneal atlas that reflect its precision and allow a better understanding of corneal anatomy.
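A generic sketch of an ICP variant that also estimates a scale factor, in the spirit of the registration step described above, follows; it registers a single point set with an Umeyama-style similarity fit and Euclidean nearest-neighbour correspondence, whereas the thesis registers anterior and posterior surfaces simultaneously, so this is an assumed simplification rather than the thesis algorithm.

    import numpy as np
    from scipy.spatial import cKDTree

    def similarity_fit(P, Q):
        # Least-squares scale s, rotation R, translation t with Q ~ s * R @ P + t (Umeyama-style).
        mp, mq = P.mean(axis=0), Q.mean(axis=0)
        Pc, Qc = P - mp, Q - mq
        U, S, Vt = np.linalg.svd(Qc.T @ Pc)
        R = U @ Vt
        if np.linalg.det(R) < 0:                      # avoid reflections
            U[:, -1] *= -1
            R = U @ Vt
        s = S.sum() / np.sum(Pc ** 2)
        t = mq - s * R @ mp
        return s, R, t

    def icp_with_scale(src, ref, iters=30):
        # Iterate Euclidean nearest-neighbour matching and similarity fitting.
        tree = cKDTree(ref)
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)                  # closest reference point for each source point
            s, R, t = similarity_fit(src, ref[idx])
            cur = s * src @ R.T + t
        return s, R, t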
10

Kvantifikace vícerozměrných rizik / Quantification of multivariate risk

Hilbert, Hynek January 2013 (has links)
In the present work we study multivariate extreme value theory. Our main focus is on exceedances over linear thresholds; a smaller part is devoted to exceedances over elliptical thresholds. We consider extreme values as those which belong to remote regions and investigate convergence of their distribution to the limit distribution. The regions are either halfspaces or ellipsoids. Working with halfspaces we distinguish between two setups: we either assume that the distribution of extreme values is directionally homogeneous and let the halfspaces diverge in any direction, or we assume that there are some irregularities in the sample cloud which show us the fixed direction in which we should let the halfspaces drift out. In the first case there are three limit laws, and the domains of attraction contain unimodal and rotund-exponential distributions. In the second case there exist many limit laws without a general form, and the domains of attraction also fail to have a common structure. A similar situation occurs for the exceedances over elliptical thresholds; the task here is to investigate convergence of the random vectors living in the complements of ellipsoids. In all cases, the limit distributions are determined by affine transformations and the distribution of a spectral measure.
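As a purely illustrative aside (not taken from the thesis), the exceedances of a sample cloud over a linear threshold are simply the points falling in a halfspace, which the following small sketch computes:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10000, 2))        # sample cloud
    u = np.array([1.0, 1.0]) / np.sqrt(2.0)    # direction defining the halfspace
    t = 3.0                                    # threshold; letting t grow makes the halfspace drift out
    exceedances = X[X @ u > t]                 # points beyond the linear threshold
    print(len(exceedances), "exceedances out of", len(X))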
