  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Affine invariant object recognition by voting match techniques

Hsu, Tao-i 12 1900 (has links)
Approved for public release; distribution is unlimited / This thesis begins with a general survey of different model-based systems for object recognition. The advantages and disadvantages of those systems are discussed. A system is then selected for study because of its effective affine-invariant matching [Ref. 1] characteristic. This system involves two separate phases, modeling and recognition; one is done off-line and the other on-line. A hashing technique is implemented to achieve fast access and voting. Different test data sets are used in experiments to illustrate the recognition capabilities of this system, demonstrating partial matching, recognition of objects under similarity transformations applied to the models, and behavior under noise perturbation. The testing results are discussed, and related experiences and recommendations are presented. / http://archive.org/details/affineinvarianto00hsut / Captain, Taiwan Republic of China Army
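The two-phase scheme this abstract describes — hashing affine-invariant coordinates off-line, then voting on-line — can be sketched as follows. This is a minimal illustration of geometric hashing under assumed models and quantization, not the thesis's implementation:

```python
from collections import defaultdict

def affine_coords(p, b0, b1, b2):
    # Express p in the affine basis (b0, b1, b2): p - b0 = u*(b1-b0) + v*(b2-b0).
    # The pair (u, v) is invariant under any affine transform of all four points.
    ax, ay = b1[0] - b0[0], b1[1] - b0[1]
    bx, by = b2[0] - b0[0], b2[1] - b0[1]
    px, py = p[0] - b0[0], p[1] - b0[1]
    det = ax * by - ay * bx
    if det == 0:
        return None  # degenerate (collinear) basis
    u = (px * by - py * bx) / det
    v = (ax * py - ay * px) / det
    return (round(u, 1), round(v, 1))  # quantize so the pair can index a hash table

def build_table(models):
    # Off-line modeling phase: hash every model point in every ordered basis triple.
    table = defaultdict(list)
    for name, pts in models.items():
        idx = range(len(pts))
        for i in idx:
            for j in idx:
                for k in idx:
                    if len({i, j, k}) < 3:
                        continue
                    for m in idx:
                        if m not in (i, j, k):
                            key = affine_coords(pts[m], pts[i], pts[j], pts[k])
                            if key is not None:
                                table[key].append((name, (i, j, k)))
    return table

def recognize(table, scene):
    # On-line recognition phase: pick a scene basis; remaining points cast votes.
    votes = defaultdict(int)
    b0, b1, b2 = scene[0], scene[1], scene[2]
    for p in scene[3:]:
        for model_and_basis in table.get(affine_coords(p, b0, b1, b2), []):
            votes[model_and_basis] += 1
    return max(votes, key=votes.get) if votes else None
```

Because the hashed coordinates are affine invariants, a scene that is any affine transformation of a stored model votes for the correct (model, basis) pair, and a partial view still accumulates votes from whichever points are present.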
2

Implementations of Different Distance transformation methods with their comparisons

Yu, Yan-Liang 12 September 2007 (has links)
The Euclidean distance transformation is a fundamental technique in image understanding and computer vision. Important characteristics in image analysis, such as the shape factor, skeleton, and medial axis, are based upon the distance transformation computation. The lookup-table algorithm is based upon the recursive computation structure of the 4N method; it is therefore very fast and close to the 4N method, which was the fastest among all the algorithms compared in our experiments. The success of the lookup-table algorithm rests on a checking strategy based on error geometry. The error candidates are arranged in order of their distance to the reference point. In addition, a Local_Array is used to store the y coordinates of the closest foreground pixels above the processing line. We can therefore find the correct feature point by checking the ordered candidates against the information in the Local_Array instead of comparing the candidates against each other. In contrast, all the other error-free Euclidean algorithms compared select their feature points from candidates by time-consuming distance comparisons.
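As a reference for what these algorithms compute, here is a minimal, naive exact Euclidean distance transform. This is only a sketch of the quantity being computed, not the thesis's 4N or lookup-table method, which reach near-linear time:

```python
import math

def euclidean_dt(img):
    # Brute-force exact Euclidean distance transform: for every pixel, the
    # distance to the nearest foreground (value 1) pixel. O(n^2) in the number
    # of pixels; error-free fast algorithms obtain the same values much faster.
    h, w = len(img), len(img[0])
    fg = [(y, x) for y in range(h) for x in range(w) if img[y][x] == 1]
    return [[min(math.hypot(y - fy, x - fx) for fy, fx in fg)
             for x in range(w)]
            for y in range(h)]
```

For example, with a single foreground pixel in the top-left corner, the diagonal neighbor gets distance sqrt(2) and the far corner sqrt(8), which is what any error-free method must reproduce exactly.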
3

Perceptually based methods for robust image hashing

Monga, Vishal 28 August 2008 (has links)
Not available / text
4

Perfect hashing and related problems /

Juvvadi, Ramana Rao. January 1993 (has links)
Thesis (Ph. D.)--Virginia Polytechnic Institute and State University, 1993. / Vita. Abstract. Includes bibliographical references (leaves 133-136). Also available via the Internet.
5

Fast hashing on pentium SIMD architecture /

Acıiçmez, Onur. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2005. / Printout. Includes bibliographical references (leaves 37-38). Also available on the World Wide Web.
6

Perceptually based methods for robust image hashing

Monga, Vishal, Evans, Brian L. January 2005 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2005. / Supervisor: Brian L. Evans. Vita. Includes bibliographical references.
7

Design and Evaluation of the Hamal Parallel Computer

Grossman, J.P. 05 December 2002 (has links)
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management, and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512-node machine.
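The core idea of guaranteed delivery and idempotence over a discarding network can be sketched with sequence numbers, retransmission, and duplicate suppression. This is a generic illustration with assumed names, not Hamal's actual protocol:

```python
class Sender:
    """Retransmits until acknowledged, guaranteeing delivery over a lossy link."""
    def __init__(self):
        self.next_seq = 0
        self.unacked = {}            # seq -> payload, kept until acknowledged

    def send(self, payload):
        self.next_seq += 1
        self.unacked[self.next_seq] = payload
        return (self.next_seq, payload)

    def pending(self):
        # Everything not yet acknowledged; resent because the network may discard.
        return list(self.unacked.items())

    def on_ack(self, seq):
        self.unacked.pop(seq, None)

class Receiver:
    """Applies each message exactly once, making retransmission idempotent."""
    def __init__(self):
        self.seen = set()
        self.delivered = []

    def receive(self, seq, payload):
        if seq not in self.seen:     # duplicates are acked but not re-applied
            self.seen.add(seq)
            self.delivered.append(payload)
        return seq                   # acknowledgement back to the sender
```

The sender may safely resend any unacknowledged message, and the receiver may safely see any message twice; together these two properties are what let a cheap, faster-clocked discarding network behave like a reliable one.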
8

Secure passwords through enhanced hashing /

Strahs, Benjamin. January 2009 (has links)
Thesis (Honors)--College of William and Mary, 2009. / Includes bibliographical references (leaves 27-28).
9

Image compression using locally sensitive hashing

Chucri, Samer Gerges 18 December 2013 (has links)
The problem of archiving photos is becoming increasingly important as image databases grow more popular and larger in size. Consider any social networking website, where users share hundreds of photos, resulting in billions of total images to be stored. Ideally, one would like to archive these images with minimal storage by exploiting the redundancy they share, without sacrificing quality. We suggest a compression algorithm that compresses across images rather than compressing images individually, an approach that, to our knowledge, has not been adopted before. This report presents the design of a new image-database compression tool. In addition, we implement a complete system in C++ and show the significant gains we achieve in some cases, where we compress 90% of the initial data. One of our main tools is Locally Sensitive Hashing (LSH), a relatively new technique mainly used for similarity search in high dimensions. / text
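One common LSH family, random-hyperplane hashing for cosine similarity, can be sketched as below. The hash family and the way it drives the compressor here are illustrative assumptions, not the thesis's design:

```python
import random

def make_lsh(dim, n_bits, seed=0):
    # Draw n_bits random hyperplanes; each contributes one bit of the hash.
    rnd = random.Random(seed)
    planes = [[rnd.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]

    def h(vec):
        # Bit = which side of the hyperplane the vector falls on. Vectors with
        # a small angle between them agree on most bits, so similar image
        # blocks land in the same bucket with high probability.
        return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0.0 else 0
                     for plane in planes)

    return h
```

A cross-image compressor could, for instance, bucket fixed-size image blocks by their hash, store one representative per bucket, and encode the remaining blocks as small deltas against it; that pipeline is one plausible design, not necessarily the one in the report.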
10

FPGA design and performance analysis of SHA-512, Whirlpool and PHASH hashing functions /

Zalewski, Przemysław. January 2008 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2008. / Typescript. Includes bibliographical references (leaves 84-85).
