About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
921

Hierarchical fingerprint verification

Yager, Neil Gordon, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Fingerprints have been an invaluable tool for law enforcement and forensics for over a century, motivating research into automated fingerprint-based identification in the early 1960s. More recently, fingerprints have found an application in the emerging industry of biometric systems. Biometrics is the automatic identification of an individual based on physiological or behavioral characteristics. Due to its security-related applications and the current world political climate, biometrics is presently the subject of intense research by private and academic institutions. Fingerprints are emerging as the most common and trusted biometric for personal identification. However, despite decades of research, significant challenges remain for the developers of automated fingerprint verification systems. This thesis examines all major stages of the fingerprint verification process, with contributions made at each step. The primary focus is fingerprint registration: the challenging problem of aligning two prints so that their corresponding features can be compared for verification. A hierarchical approach is proposed consisting of three stages, each of which employs novel features and techniques for alignment. Experimental results show that the hierarchical approach is robust and outperforms competing state-of-the-art registration methods from the literature. However, like most algorithms, it has limitations. Therefore, a novel method of information fusion at the registration level has been developed. The technique dynamically selects registration parameters from a set of competing algorithms using a statistical framework, allowing the relative advantages of the different approaches to be exploited. The results show a significant improvement in alignment accuracy for a wide variety of fingerprint databases.
Given a robust alignment of two fingerprints, it still remains to be verified whether or not they have originated from the same finger. This is a non-trivial problem, and a close examination of fingerprint features available for this task is conducted with extensive experimental results.
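The fusion-by-selection idea above — scoring each candidate registration and keeping the best — can be sketched in a few lines. This is an illustrative simplification, not the thesis's statistical framework: `alignment_score`, the rigid-transform parameterization, and the pixel tolerance are all assumptions, and minutiae are reduced to bare 2D points.

```python
import math

def alignment_score(minutiae_a, minutiae_b, params, tol=8.0):
    """Score a candidate rigid alignment (dx, dy, theta): count minutiae of
    print A that land within `tol` pixels of some minutia of print B."""
    dx, dy, theta = params
    c, s = math.cos(theta), math.sin(theta)
    hits = 0
    for x, y in minutiae_a:
        xr, yr = c * x - s * y + dx, s * x + c * y + dy
        if any(math.hypot(xr - bx, yr - by) <= tol for bx, by in minutiae_b):
            hits += 1
    return hits

def fuse_registrations(minutiae_a, minutiae_b, candidates, tol=8.0):
    """Registration-level fusion by selection: given transforms proposed by
    competing algorithms, keep the one that scores best on this print pair."""
    return max(candidates,
               key=lambda p: alignment_score(minutiae_a, minutiae_b, p, tol))
```

The point of the selection step is that no single registration algorithm wins on every print pair; a per-pair score lets each algorithm contribute where it is strongest.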
922

The detection of 2D image features using local energy

Robbins, Benjamin John January 1996 (has links)
Accurate detection and localization of two-dimensional (2D) image features (or 'key-points') is important for vision tasks such as structure from motion, stereo matching, and line labeling. 2D image features are ideal for these tasks because they carry high information content yet occur sparsely in typical images. Several methods for the detection of 2D image features have already been developed. However, it is difficult to assess their performance because no adequate definition of corners has been produced that encompasses all the types of 2D luminance variation that make up 2D image features. This lack of consensus is not surprising given the confusion surrounding the definition of 1D image features. The general perception of 1D image features has been that they correspond to 'edges' in an image, and so are points where the intensity gradient in some direction is a local maximum. The Sobel [68], Canny [7] and Marr-Hildreth [37] operators all use this model of 1D features, either implicitly or explicitly. However, other profiles in an image also make up valid 1D features, such as spike and roof profiles, as well as combinations of all these feature types. Spike and roof profiles can also be found by looking for points where the rate of change of the intensity gradient is locally maximal, as Canny did in defining a 'roof-detector' in much the same way he developed his 'edge-detector'. While this allows the detection of a wider variety of 1D feature profiles, it comes no closer to the goal of unifying these different feature types into an encompassing definition of 1D features. The introduction of the local energy model of image features by Morrone and Owens [45] in 1987 provided a unified definition of 1D image features for the first time.
They postulated that image features correspond to points in an image where there is maximal phase congruency in the frequency-domain representation of the image; that is, to points of maximal order in the phase domain of the image signal. These points of maximal phase congruency correspond to step-edge, roof, and ramp intensity profiles, and combinations thereof. They also correspond to the Mach bands perceived by humans in trapezoidal feature profiles. This thesis extends the notion of phase congruency to 2D image features. Just as 1D image features correspond to points of maximal 1D order in the phase domain of the image signal, this thesis contends that 2D image features correspond to points of maximal 2D order in this domain. These points of maximal 2D phase congruency include all the different types of 2D image features: grey-level corners, line terminations, blobs, and a variety of junctions. Early attempts at 2D feature detection were simple 'corner detectors' based on a model of a grey-level corner, in much the same way that early 1D feature detectors were based on a model of step edges. Some recent attempts have included more complex models of 2D features, although these amount to more complex a priori judgements of the types of luminance profile that are to be labeled as 2D features. This thesis develops the 2D local energy feature detector based on a new, unified definition of 2D image features that marks points of locally maximal 2D order in the phase-domain representation of the image as 2D image features. The performance of an implementation of 2D local energy is assessed and compared to several existing methods of 2D feature detection. This thesis also shows that, in contrast to most other methods of 2D feature detection, 2D local energy is an idempotent operator. The extension of phase congruency to 2D image features also unifies the detection of image features.
1D and 2D image features correspond to 1D and 2D order, respectively, in the phase-domain representation of the image. This definition imposes a hierarchy of image features, with 2D image features being a subset of 1D image features. This ordering of image features has been implied ever since 1D features were used as candidate points for 2D feature detection by Kitchen [28] and others. Local energy enables the extraction of both 1D and 2D image features in a consistent manner: 2D image features are extracted from the 1D image features using the same operations that are used to extract 1D image features from the input image. The consistent approach to the detection of image features presented in this thesis allows the hierarchy of primitive image features to be naturally extended to higher-order image features, which can then be extracted from higher-order image data using the same hierarchical approach. This thesis shows how local energy can be naturally extended to the detection of 1D (surface) and higher-order image features in 3D data sets. Results are presented for the detection of 1D image features in 3D confocal microscope images, showing superior performance to the 3D extension of the Sobel operator [74].
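The local energy idea can be illustrated in 1D with a quadrature filter pair: an even (cosine) and an odd (sine) Gaussian-windowed filter, whose combined response peaks at steps, roofs, and spikes alike. This is a minimal sketch of the general principle, not the thesis's detector; the filter shape, wavelength, and sigma are arbitrary choices made for the example.

```python
import math

def local_energy(signal, wavelength=8.0, sigma=4.0):
    """1D local energy via a quadrature pair: an even (cosine) and an odd
    (sine) Gaussian-windowed filter.  E(x) = sqrt(even^2 + odd^2) is large
    wherever local phase is congruent -- at steps, roofs, and spikes --
    and near zero on featureless regions."""
    half = int(3 * sigma)
    ts = range(-half, half + 1)
    gauss = [math.exp(-t * t / (2 * sigma * sigma)) for t in ts]
    even = [g * math.cos(2 * math.pi * t / wavelength) for g, t in zip(gauss, ts)]
    odd = [g * math.sin(2 * math.pi * t / wavelength) for g, t in zip(gauss, ts)]
    mean_even = sum(even) / len(even)
    even = [e - mean_even for e in even]  # zero DC: flat regions give no response

    def correlate(kernel, x):
        # replicate border samples so the signal's ends stay featureless
        return sum(kernel[i] * signal[min(max(x + i - half, 0), len(signal) - 1)]
                   for i in range(len(kernel)))

    return [math.hypot(correlate(even, x), correlate(odd, x))
            for x in range(len(signal))]
```

Running this on a step profile (say, 32 zeros followed by 32 ones) produces an energy trace that is near zero on both flat runs and peaks at the discontinuity, which is exactly the behavior a gradient-based edge detector shows for steps but fails to show for roofs and spikes.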
923

3D reconstruction of road vehicles based on textural features from a single image

Lam, Wai-leung, William. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
924

Advanced coding techniques and applications to CDMA

Guemghar, Souad 29 January 2004 (has links) (PDF)
This work proposes coding and decoding schemes of reduced complexity that approach the capacity of binary-input symmetric-output channels and of code-division multiple-access channels. In the first part of this thesis, we study the random ensemble of infinite-length irregular repeat-accumulate codes, transmitted over a binary-input symmetric-output channel and decoded with the sum-product algorithm. Using density evolution, we write a recursive system describing the evolution of the densities of the messages propagated on the Tanner graph that represents the code ensemble. We then formulate a general framework in which density evolution is approximated by a dynamical system whose variables are real numbers. Within this framework, we propose four reduced-complexity methods for optimizing repeat-accumulate codes, based on the Gaussian approximation, the reciprocal (dual) approximation, and the extrinsic mutual information transfer function. These methods construct codes of various rates whose error rates tend to zero, provided the local stability condition is satisfied. The decoding thresholds, evaluated by exact density evolution, are very close to the Shannon limit of the binary-input Gaussian channel and of the binary symmetric channel. For the binary-input Gaussian channel, we examine the finite-length performance of these codes, with a Tanner graph conditioned to maximize the lengths of the shortest cycles or of certain so-called "blocking" cycles. Their performance is compared with that of the maximum-likelihood-decoded random ensemble, and with the best Gallager codes of the same rate and conditioning level.
The second part of this thesis develops a reduced-complexity coding/decoding scheme to approach the capacity of a random code-division multiple-access channel with Gaussian noise, in the limit of an infinitely large system. Our approach combines quaternary phase-shift keying, capacity-achieving binary error-correcting codes, minimum mean-square error filters, and successive decoding. We optimize the power profile (respectively, the rate profile) under the assumption that all users of the multiple-access system have the same rate (respectively, the same power). In the equal-rate case, the spectral efficiency of the optimized system is very close to the optimal spectral efficiency. Numerical simulations show that the optimization method carries over from the infinite-size system to a practical finite-size system in which successive decoding does not propagate decoding errors.
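The equal-rate power optimization can be illustrated with a textbook simplification: if successively decoded users are cancelled perfectly and every user must see the same post-cancellation SINR, the required powers form a geometric profile. This sketch ignores the MMSE filters, the QPSK modulation, and the large-system analysis of the actual scheme; it only shows why successive decoding favors unequal powers.

```python
def successive_decoding_powers(num_users, target_sinr, noise_power=1.0):
    """Power profile for equal-rate users under ideal successive decoding:
    the first-decoded user sees all others as noise, and each decoded user
    is cancelled before the next is decoded.  Solving
        P_k = target_sinr * (noise_power + sum of not-yet-decoded powers)
    from the last-decoded (interference-free) user backwards gives a
    geometric profile."""
    powers = []
    remaining = 0.0
    for _ in range(num_users):
        p = target_sinr * (noise_power + remaining)
        powers.append(p)
        remaining += p
    powers.reverse()  # powers[0] = first-decoded (largest-power) user
    return powers
```

For example, four users with unit noise and a target SINR of 1 need powers 8, 4, 2, 1 in decoding order: each user's power equals its SINR target times the noise plus the total power of the users decoded after it.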
925

High resolution digital imaging of bacterial cells

Siebold, William A. 02 April 2001 (has links)
The most abundant clone found in ribosomal RNA clone libraries obtained from the world's oceans belongs to the SAR11 phylogenetic group of environmental marine bacteria. Imaging and counting SAR11 bacterial cells in situ has been an important research objective for the past decade. This objective has been especially challenging due to the extremely small size of the cells and, hypothetically, the low abundance of ribosomes they contain. To facilitate the imaging of small, dim oligotrophic bacterial cells, digital imaging technology featuring very small pixel size, high quantum yield, scientific-grade CCD chips was integrated with the use of multiple oligonucleotide probes on cells mounted on a non-fluorescing solid substrate. Research into the composition of bacterioplankton populations in natural marine systems follows a two-fold path: one path is increasing the culturability of microbes found in the natural environment; the other is identifying and enumerating the relative fractions of microorganisms in situ by culture-independent methods. The accumulation and systematic comparison of ribosomal RNA clones from the marine environment has resulted in a philosophical shift in marine microbiology, away from dependence upon cultured strains and toward investigations of in situ molecular signals. The design and use of oligonucleotide DNA probes targeting rRNA has matured along with the growth in size and complexity of the public sequence databases. Hybridizing a fluorescently labeled oligonucleotide probe to an rRNA target inside an intact cell provides both phylogenetic and morphological information, a technique called fluorescence in situ hybridization (FISH).
This research develops the protocols necessary to acquire and analyze digital images of marine bacterial cells. Experiments were conducted with Bermuda Atlantic Time Series (BATS) environmental samples obtained during cruise BV21 (1998) and B138 (2000). The behavior of the SAR11⁴*Cy3 probe set when hybridized to bacterial cells from these samples was investigated to determine the optimal hybridization reaction conditions. The challenges of bacterial cell counting after cell transfer from PCTE membrane to treated microslides were addressed. Experiments with aged Oregon Coast seawater were performed to investigate the protocol used to transfer cells from membrane to microslides, and examined the distribution of cells and the statistics of counting cells using traditional epifluorescence microscopy and image analysis techniques. / Graduation date: 2002
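The cell-counting step can be illustrated with the standard threshold-and-label approach from basic image analysis: binarize the image, then count connected components of bright pixels. This is a generic sketch, not the protocol developed in the thesis, and the threshold value is an assumption.

```python
def count_cells(image, threshold):
    """Count bright blobs (candidate cells) in a 2D grey-scale image given
    as a list of rows: threshold, then label 4-connected components with an
    iterative flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                count += 1  # a new, previously unvisited blob
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

Real counting pipelines add size filters and background correction on top of this, since a fixed global threshold misses dim cells — which is precisely why the high-quantum-yield CCD imaging above matters.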
926

Content-based color image retrieval

Varanguien de Villepin, Audrey 24 September 1999 (has links)
A fully automated method for content-based color image retrieval is developed to extract the color and shape content of an image. A color segmentation algorithm based on k-means clustering is used, and a saturated distance is proposed to discriminate between two color points in the HSV color space. The feature set describing an image includes the main object shape, which is extracted using morphological operations. The computed image features are tagged within the image, and a graphical user interface is presented for retrieving images based on the color and shape of the objects. Experimental results using natural color images demonstrate the effectiveness of the proposed method. / Graduation date: 2000
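A minimal sketch of the segmentation idea: k-means clustering in HSV under a hue-circular, saturation-weighted distance. The thesis's exact "saturated distance" is not reproduced here; `hsv_distance` below is an illustrative stand-in, and the per-channel mean used for center updates deliberately ignores hue wrap-around.

```python
import math
import random

def hsv_distance(p, q):
    """Illustrative saturation-aware HSV distance: hue (in [0, 1)) is
    circular, and the hue term is scaled by both saturations, so hue
    differences between washed-out colors count for little."""
    h1, s1, v1 = p
    h2, s2, v2 = q
    dh = min(abs(h1 - h2), 1.0 - abs(h1 - h2))
    return math.sqrt((dh * s1 * s2) ** 2 + (s1 - s2) ** 2 + (v1 - v2) ** 2)

def kmeans(points, k, iters=20, init=None):
    """Plain Lloyd iterations under hsv_distance.  `init` lets the caller
    fix the starting centers; otherwise they are sampled at random."""
    centers = list(init) if init is not None else random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: hsv_distance(p, centers[j]))
            clusters[nearest].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # naive per-channel mean; ignores hue wrap-around
                centers[j] = tuple(sum(p[d] for p in cl) / len(cl)
                                   for d in range(3))
    return centers, clusters
```

Scaling the hue term by saturation is what keeps near-grey pixels from being split across arbitrary hue clusters, which is the practical problem any HSV metric has to solve.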
927

A high-performance, low power and memory-efficient VLD for MPEG applications

Zhang, Haowei 14 January 1997 (has links)
Digital video compression is an extremely important area that has enabled, or will enable, many digital video services and applications such as VideoCD, DVD, DVC, HDTV, video conferencing, and DSS. Its great success rests mainly on two factors: the state of the art in very large scale integrated (VLSI) circuits, and a considerable body of knowledge accumulated over the last several decades in applying video compression algorithms such as the discrete cosine transform (DCT), motion estimation (ME), motion compensation (MC), and entropy coding techniques. The MPEG (Moving Pictures Expert Group) standards reflect the second factor. In this thesis, the MPEG standards are discussed thoroughly and interpreted, and a VLSI chip implementation (0.35 μm CMOS technology with 3-layer metal) of a variable length decoder (VLD) for MPEG applications is developed. The VLD developed here achieves high performance by using a parallel, pipelined architecture. Furthermore, MPEG bitstream patterns are carefully analyzed in order to drastically improve the VLD's memory efficiency. Finally, a special clock scheme is applied to reduce the chip's power consumption. / Graduation date: 1998
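At the core of any VLD is prefix-code decoding. The sketch below shows the bit-serial logic on a made-up code table (the symbol names and bitstrings are illustrative, not the MPEG tables); a hardware VLD like the one in the thesis instead decodes a whole codeword per cycle using parallel table lookups.

```python
def build_decoder(code_table):
    """Invert a VLC table mapping symbol -> bitstring.  Any valid VLC table
    is prefix-free: no codeword is a prefix of another."""
    return {bits: sym for sym, bits in code_table.items()}

def decode(bitstream, inverse_table, max_len=16):
    """Bit-serial prefix decode: grow a prefix one bit at a time until it
    matches a codeword, emit the symbol, and start over."""
    out, prefix = [], ""
    for bit in bitstream:
        prefix += bit
        if prefix in inverse_table:
            out.append(inverse_table[prefix])
            prefix = ""
        elif len(prefix) > max_len:
            raise ValueError("invalid bitstream")
    if prefix:
        raise ValueError("truncated codeword")
    return out

# A made-up, prefix-free table in the spirit of MPEG run/level coding.
table = {"EOB": "10", "run0_level1": "11", "run0_level2": "0100",
         "escape": "000001"}
```

Because codeword boundaries are only known after decoding, the serial loop above is inherently sequential; the parallel, pipelined architecture in the thesis exists precisely to break that dependency and sustain one symbol (or more) per clock.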
928

Compact Representations for Fast Nonrigid Registration of Medical Images

Timoner, Samson 04 July 2003 (has links)
We develop efficient techniques for the non-rigid registration of medical images by using representations that adapt to the anatomy found in such images. Images of anatomical structures typically have uniform intensity interiors and smooth boundaries. We create methods to represent such regions compactly using tetrahedra. Unlike voxel-based representations, tetrahedra can accurately describe the expected smooth surfaces of medical objects. Furthermore, the interior of such objects can be represented using a small number of tetrahedra. Rather than describing a medical object using tens of thousands of voxels, our representations generally contain only a few thousand elements. Tetrahedra facilitate the creation of efficient non-rigid registration algorithms based on finite element methods (FEM). We create a fast, FEM-based method to non-rigidly register segmented anatomical structures from two subjects. Using our compact tetrahedral representations, this method generally requires less than one minute of processing time on a desktop PC. We also create a novel method for the non-rigid registration of gray scale images. To facilitate a fast method, we create a tetrahedral representation of a displacement field that automatically adapts to both the anatomy in an image and to the displacement field. The resulting algorithm has a computational cost that is dominated by the number of nodes in the mesh (about 10,000), rather than the number of voxels in an image (nearly 10,000,000). For many non-rigid registration problems, we can find a transformation from one image to another in five minutes. This speed is important as it allows use of the algorithm during surgery. We apply our algorithms to find correlations between the shape of anatomical structures and the presence of schizophrenia. We show that a study based on our representations outperforms studies based on other representations. 
We also use the results of our non-rigid registration algorithm as the basis of a segmentation algorithm. That algorithm also outperforms other methods in our tests, producing smoother segmentations and more accurately reproducing manual segmentations.
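The reason a few thousand tetrahedra can stand in for millions of voxels is linear FEM interpolation: inside each element, the displacement field is simply the barycentric blend of the four node displacements. A minimal sketch of that interpolation (not the thesis's solver) follows.

```python
def barycentric(p, a, b, c, d):
    """Barycentric coordinates of point p in tetrahedron (a, b, c, d),
    computed as ratios of signed volumes.  Inside the element all four
    coordinates lie in [0, 1] and sum to 1."""
    def vol(p1, p2, p3, p4):
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        w = [p4[i] - p1[i] for i in range(3)]
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
                - u[1] * (v[0] * w[2] - v[2] * w[0])
                + u[2] * (v[0] * w[1] - v[1] * w[0])) / 6.0
    total = vol(a, b, c, d)
    return (vol(p, b, c, d) / total, vol(a, p, c, d) / total,
            vol(a, b, p, d) / total, vol(a, b, c, p) / total)

def interpolate_displacement(p, verts, disps):
    """Linear FEM interpolation: the displacement anywhere inside the
    tetrahedron is the barycentric blend of the four node displacements."""
    w = barycentric(p, *verts)
    return tuple(sum(w[i] * disps[i][k] for i in range(4)) for k in range(3))
```

Because the field is fully determined by its node values, the cost of solving for a deformation scales with the roughly 10,000 mesh nodes rather than the nearly 10,000,000 voxels, which is what makes the five-minute, intra-surgery registration times feasible.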
929

Rosetta stones: deciphering the real

Cho, Jae-Man. January 2007 (has links)
Thesis (M.F.A.)--Rochester Institute of Technology, 2007. / Typescript. Includes bibliographical references (leaf 38).
930

Segmentation of medical image volumes using intrinsic shape information

Shiffman, Smadar. January 1900 (has links)
Thesis (Ph.D)--Stanford University, 1999. / Title from pdf t.p. (viewed April 3, 2002). "January 1999."
