  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Esquemas de hashing perfeitos, mínimos, práticos, determinísticos e eficientes em tempo e em espaço / Practical, deterministic, time- and space-efficient minimal perfect hashing schemes

Zatesko, Leandro Miranda, 1988- 01 March 2012 (has links)
Abstract (translated from Portuguese): This work proposes deterministic algorithms that, given a set of n keys, build in expected O(n) time a hash function with O(1) worst-case lookup time, mapping the keys collision-free onto the set {0, ..., n-1}. These minimal perfect hashing schemes are simple variants of the randomised schemes of Botelho, Kohayakawa and Ziviani (2005) and Botelho, Pagh and Ziviani (2007), and showed empirical results equivalent to those of the original algorithms. The deterministic variants were implemented on top of the code of the original schemes, developed by their authors in the CMPH library, which is hosted on SourceForge.net. All schemes were fed the same key sets so that they could be compared fairly; tests were run on sets of up to 25,000,000 keys. Moreover, the proposed schemes have the evident advantage of always producing the same hash function for a given key set. This deterministic behaviour may be useful for developing a dynamic hashing scheme, supporting operations such as key insertion and deletion, inspired by one of the excellent static schemes studied. One of the schemes of Botelho, Pagh and Ziviani (2007), for instance, builds hash functions representable in only about 2.62 bits per key, very close to the known tight lower bound of about 1.44 bits per key. Both the proposed deterministic versions and the originals prove practical for real hashing applications. However, the theoretical foundation of the work of Botelho, Kohayakawa and Ziviani (2005) still rested on a conjecture; this dissertation also proposes a proof of that conjecture, settling the correctness of the scheme.
12

Preimages for SHA-1

Motara, Yusuf Moosa January 2018 (has links)
This research explores the problem of finding a preimage — an input that, when passed through a particular function, will result in a pre-specified output — for the compression function of the SHA-1 cryptographic hash. This problem is much more difficult than finding a collision for a hash function, and preimage attacks are known for very few popular hash functions. The research begins by introducing the field and giving an overview of existing work in the area. A thorough analysis of the compression function is made, resulting in alternative formulations for both parts of the function, and in both statistical and theoretical tools for determining the difficulty of the SHA-1 preimage problem. Different representations (And-Inverter Graph, Binary Decision Diagram, Conjunctive Normal Form, Constraint Satisfaction form, and Disjunctive Normal Form) and associated tools to manipulate and/or analyse these representations are then applied and explored, and results are collected and interpreted. In conclusion, the SHA-1 preimage problem remains unsolved and insoluble for the foreseeable future. The primary issue is one of efficient representation: despite a promising theoretical difficulty, both the diffusion characteristics and the depth of the tree stand in the way of efficient search. Despite this, the research served to confirm and quantify the difficulty of the problem both theoretically, using Schaefer's Theorem, and practically, in the context of different representations.
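The forward/inverse asymmetry at the heart of the preimage problem can be illustrated (this is not an attack) with Python's hashlib: computing SHA-1 is cheap, while inverting it generically costs on the order of 2^160 evaluations for an ideal 160-bit hash. The search below succeeds only because the candidate list secretly contains the answer.

```python
import hashlib

# Forward direction: cheap. This is the classic FIPS 180 "abc" test vector.
digest = hashlib.sha1(b"abc").hexdigest()
assert digest == "a9993e364706816aba3e25717850c26c9cd0d89d"

# Inverse direction: a preimage search over a tiny, rigged candidate space.
def toy_preimage_search(target_hex, candidates):
    for c in candidates:
        if hashlib.sha1(c).hexdigest() == target_hex:
            return c
    return None  # generic search over all inputs is infeasible

assert toy_preimage_search(digest, [b"abd", b"acb", b"abc"]) == b"abc"
```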
13

Learning to hash for large scale image retrieval

Moran, Sean James January 2016 (has links)
This thesis is concerned with improving the effectiveness of nearest neighbour search. Nearest neighbour search is the problem of finding the most similar data-points to a query in a database, and is a fundamental operation that has found wide applicability in many fields. In this thesis the focus is placed on hashing-based approximate nearest neighbour search methods that generate similar binary hashcodes for similar data-points. These hashcodes can be used as the indices into the buckets of hashtables for fast search. This work explores how the quality of search can be improved by learning task specific binary hashcodes. The generation of a binary hashcode comprises two main steps carried out sequentially: projection of the image feature vector onto the normal vectors of a set of hyperplanes partitioning the input feature space followed by a quantisation operation that uses a single threshold to binarise the resulting projections to obtain the hashcodes. The degree to which these operations preserve the relative distances between the datapoints in the input feature space has a direct influence on the effectiveness of using the resulting hashcodes for nearest neighbour search. In this thesis I argue that the retrieval effectiveness of existing hashing-based nearest neighbour search methods can be increased by learning the thresholds and hyperplanes based on the distribution of the input data. The first contribution is a model for learning multiple quantisation thresholds. I demonstrate that the best threshold positioning is projection specific and introduce a novel clustering algorithm for threshold optimisation. The second contribution extends this algorithm by learning the optimal allocation of quantisation thresholds per hyperplane. In doing so I argue that some hyperplanes are naturally more effective than others at capturing the distribution of the data and should therefore attract a greater allocation of quantisation thresholds. 
The third contribution focuses on the complementary problem of learning the hashing hyperplanes. I introduce a multi-step iterative model that, in the first step, regularises the hashcodes over a data-point adjacency graph, which encourages similar data-points to be assigned similar hashcodes. In the second step, binary classifiers are learnt to separate opposing bits with maximum margin. This algorithm is extended to learn hyperplanes that can generate similar hashcodes for similar data-points in two different feature spaces (e.g. text and images). Individually the performance of these algorithms is often superior to competitive baselines. I unify my contributions by demonstrating that learning hyperplanes and thresholds as part of the same model can yield an additive increase in retrieval effectiveness.
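The two sequential steps described above — projection onto hyperplane normals, then single-threshold binarisation — can be sketched with random Gaussian hyperplanes and a zero threshold. This is the untrained baseline that the thesis's learned hyperplanes and thresholds improve on; the dimensions, bit count, and data here are arbitrary.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

DIM, BITS = 8, 16
# One hyperplane normal per hashcode bit.
hyperplanes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hashcode(x):
    # Step 1: project x onto each normal; step 2: binarise at threshold 0.
    return tuple(int(sum(w * xi for w, xi in zip(h, x)) > 0) for h in hyperplanes)

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

x = [random.gauss(0, 1) for _ in range(DIM)]
x_near = [xi + random.gauss(0, 0.01) for xi in x]  # a close neighbour
x_far = [random.gauss(0, 1) for _ in range(DIM)]   # an unrelated point

# Locality sensitivity: similar points tend to receive nearby hashcodes,
# so hashcodes can index hashtable buckets for fast nearest neighbour search.
assert hamming(hashcode(x), hashcode(x_near)) <= hamming(hashcode(x), hashcode(x_far))
```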
14

Funções de hashing criptográficas / Cryptographic hash functions

Sendin, Ivan da Silva, 1975- 25 July 2018 (has links)
Advisor: Ricardo Dahab / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 1999 / Abstract (translated from Portuguese): With each passing day the computer plays a larger role in people's lives. Traditional forms of interaction are being replaced by digital, virtual or electronic equivalents; electronic mail, virtual shops and digital cash, among others, are already part of everyday life. These new forms of interaction cannot rely on the traditional means of guaranteeing security: a signature on a letter, a fingerprint on a document, a wax seal on an envelope or a steel safe full of banknotes will also have to be replaced by their electronic equivalents. Modern cryptography has an answer to most of these challenges. Traditional encryption functions were designed to guarantee data privacy, but they are not always sufficient to meet other security requirements. Because they are simple, fast and easily implemented in both hardware and software, cryptographic hash functions are used to generate compact representations of bit strings (called fingerprints, message digests, hash values or simply hashes) that cryptographic protocols treat as their unique identifiers. The main uses of hash functions are in protocols intended to guarantee integrity, authenticity and non-repudiation. This text studies cryptographic hash functions, presenting theoretical concepts, implementations, uses and relevant questions regarding their security. / MSc in Computer Science
15

Perfect hashing and related problems

Juvvadi, Ramana Rao 04 May 2006 (has links)
One of the most common tasks in many computer applications is the maintenance of a dictionary of words. The three most basic operations on the dictionary are find, insert, and delete. An important data structure that supports these operations is a hash table. On a hash table, a basic operation takes 𝑂(1) time in the average case and 𝑂(𝑛) time in the worst case, where 𝑛 is the number of words in the dictionary. While an ordinary hash function maps the words in a dictionary to a hash table with collisions, a perfect hash function maps the words in a dictionary to a hash table with no collisions. Thus, perfect hashing is a special case of hashing, in which a find operation takes 𝑂(1) time in the worst case, and an insert or a delete operation takes 𝑂(1) time in the average case and 𝑂(𝑛) time in the worst case. This thesis addresses the following issues. Mapping, ordering and searching (MOS) is a successful algorithmic approach to finding perfect hash functions for static dictionaries; previously, no analysis had been given for the running time of the MOS algorithm. In this thesis, a lower bound is proved on the tradeoff between the time required to find a perfect hash function and the space required to represent the perfect hash function. A new algorithm for static dictionaries called the musical chairs (MC) algorithm is developed that is based on ordering the hyperedges of a hypergraph. It is found experimentally that the MC algorithm runs faster than the MOS algorithm in all cases for which the MC algorithm is capable of finding a perfect hash function. A new perfect hashing algorithm is developed for dynamic dictionaries. In this algorithm, an insert or a delete operation takes 𝑂(1) time in the average case, and a find operation takes 𝑂(1) time in the worst case. The algorithm is modeled using results from queueing theory. An ordering problem from graph theory, motivated by the hypergraph ordering problem in the MC algorithm, is proved to be NP-complete. / Ph. D.
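Neither MOS nor MC is reproduced here, but the guarantee they target — worst-case O(1) find on a static dictionary — is also what the classic FKS two-level scheme provides, sketched below with CRC32 standing in for a random hash family. The level-one hash sends keys to buckets; each bucket of size b gets a collision-free level-two table of size b², found by trying seeds (expected O(1) tries). Bucket counts and keys are illustrative.

```python
import zlib

def crc(key, seed, m):
    return zlib.crc32(key.encode(), seed) % m

def build_fks(keys):
    n = len(keys)
    buckets = [[] for _ in range(n)]
    for k in keys:                       # level one: hash into n buckets
        buckets[crc(k, 0, n)].append(k)
    tables = []
    for bucket in buckets:               # level two: per-bucket perfect table
        size = len(bucket) ** 2
        seed = 1
        while True:
            slots = {crc(k, seed, size) for k in bucket} if bucket else set()
            if len(slots) == len(bucket):
                break                    # seed is collision-free on this bucket
            seed += 1
        table = [None] * size
        for k in bucket:
            table[crc(k, seed, size)] = k
        tables.append((seed, table))
    return tables

def find(tables, key):
    # Worst-case O(1): two hash evaluations and one comparison.
    seed, table = tables[crc(key, 0, len(tables))]
    return bool(table) and table[crc(key, seed, len(table))] == key

keys = ["find", "insert", "delete", "hash", "table"]
t = build_fks(keys)
assert all(find(t, k) for k in keys)
assert not find(t, "missing")
```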
16

Online hashing for fast similarity search

Cakir, Fatih 02 February 2018 (has links)
In this thesis, the problem of online adaptive hashing for fast similarity search is studied. Similarity search is a central problem in many computer vision applications. The ever-growing size of available data collections and the increasing usage of high-dimensional representations in describing data have increased the computational cost of performing similarity search, requiring search strategies that can explore such collections in an efficient and effective manner. One promising family of approaches is based on hashing, in which the goal is to map the data into the Hamming space where fast search mechanisms exist, while preserving the original neighborhood structure of the data. We first present a novel online hashing algorithm in which the hash mapping is updated in an iterative manner with streaming data. Being online, our method adapts to variations in the data. Moreover, our formulation is orders of magnitude faster to train than state-of-the-art hashing solutions. Secondly, we propose an online supervised hashing framework in which the goal is to map data associated with similar labels to nearby binary representations. For this purpose, we utilize Error Correcting Output Codes (ECOCs) and consider an online boosting formulation in learning the hash mapping. Our formulation does not require any prior assumptions on the label space and is well-suited for expanding datasets that have new label inclusions. We also introduce a flexible framework that allows us to reduce hash table entry updates. This is critical, especially when frequent updates may occur as the hash table grows larger and larger. Thirdly, we propose a novel mutual information measure to efficiently infer the quality of a hash mapping and retrieval performance. This measure has lower complexity than standard retrieval metrics.
With this measure, we first address a key challenge in online hashing that has often been ignored: the binary representations of the data must be recomputed to keep pace with updates to the hash mapping. Based on our novel mutual information measure, we propose an efficient quality measure for hash functions, and use it to determine when to update the hash table. Next, we show that this mutual information criterion can be used as an objective in learning hash functions, using gradient-based optimization. Experiments on image retrieval benchmarks confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions.
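The mutual-information idea can be illustrated with the textbook discrete estimator: treat "is a true neighbour" and the observed Hamming distance as two random variables, and score the hash mapping by how much knowing one tells you about the other. The thesis's estimator over distance distributions is more refined than this sketch, and the data below are made up.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Discrete MI in bits; pairs = iterable of (is_neighbour, hamming_distance)."""
    pairs = list(pairs)
    n = len(pairs)
    pxy = Counter(pairs)                   # joint distribution
    px = Counter(x for x, _ in pairs)      # marginal over neighbour labels
    py = Counter(y for _, y in pairs)      # marginal over distances
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A mapping that separates perfectly: neighbours at distance 0, others at 3.
good = [(1, 0)] * 5 + [(0, 3)] * 5
# A mapping where distance is unrelated to neighbour status.
bad = [(1, 0), (1, 3), (0, 0), (0, 3)] * 3

assert mutual_information(good) == 1.0   # one full bit of information
assert mutual_information(bad) < 1e-9    # no information: update is pointless
```

A quality score like this can be recomputed cheaply as the mapping drifts, which is how a measure of this kind can gate hash table recomputations.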
17

Estudo e implementação de algoritmos de resumo (hash) criptográfico na plataforma Intel® XScale / Study and implementation of cryptographic hash algorithms on the Intel XScale platform

Tavares, Paulo Henrique 21 February 2006 (has links)
Advisor: Ricardo Dahab / Professional master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: In recent years, the number of mobile devices has grown dramatically and so has the complexity of applications their users wish to run. At the same time, security concerns of the general public have also increased, creating a greater demand for cryptographic applications such as digital signatures, which use hash functions. In this context it has become very important to study the performance of hash functions on the new generation of processors developed for these devices.
In this work we study the SHA (Secure Hash Algorithm) family of hash functions and Whirlpool, some of their implementations, and the characteristics of the Intel XScale processors that can be used to improve the performance of such functions, with special attention to the new Wireless MMX extensions. We also apply some of these techniques and report the results of the performance tests executed. / Professional MSc in Computing (Computer Engineering)
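A rough desktop analogue of this kind of measurement can be sketched with the standard library: hashlib wraps optimised C implementations of the SHA family, and timeit yields throughput figures. Whirlpool is not in hashlib's guaranteed algorithm set, so it is omitted; message size and repetition count are arbitrary.

```python
import hashlib
import timeit

data = b"\x00" * (1 << 16)   # one 64 KiB message

def throughput_mb_s(name, reps=50):
    """Hash `data` `reps` times with the named algorithm; return MB/s."""
    ctor = getattr(hashlib, name)
    elapsed = timeit.timeit(lambda: ctor(data).digest(), number=reps)
    return len(data) * reps / elapsed / 1e6

for name in ("sha1", "sha256", "sha512"):
    print(f"{name}: {throughput_mb_s(name):6.1f} MB/s")
# On 64-bit CPUs SHA-512 often outruns SHA-256 on bulk data: a platform
# effect of the same kind the thesis exploits with Wireless MMX on XScale.
```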
18

Implementação em software de algoritmos de resumo criptográfico / Software implementation of cryptographic hash algorithms

Oliveira, Thomaz Eduardo de Figueiredo 18 August 2018 (has links)
Advisor: Julio César López Hernández / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2011 / Abstract: Hash algorithms are an important cryptographic tool used in many applications for secure and efficient information processing. During the 2000s, serious vulnerabilities found in traditional hash functions such as SHA-1 and MD5 prompted the cryptography community to review the advances in the cryptanalysis of these algorithms and their design strategies.
As a result, in November 2007 NIST announced a public competition to develop a new cryptographic hash function standard, SHA-3, which drew competitors from throughout the world. This work focuses on the software implementation aspects of some of the algorithms submitted to the SHA-3 competition, seeking to understand how the authors addressed the computational cost of their designs on distinct platforms, and to understand the new implementation paradigms introduced by current processor technology. As a consequence, we propose new algorithmic techniques for the software implementation of the Luffa and Keccak hash algorithms, improving their performance significantly. / MSc in Computer Science (Theory of Computing)
19

Utilização de funções LSH para busca conceitual baseada em ontologias / Use of LSH functions for conceptual search based on ontologies

Paula, Luciano Bernardes de 18 August 2018 (has links)
Advisor: Maurício Ferreira Magalhães / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2011 / Abstract: The volume of data available on the WWW increases every day. With the emergence of the Semantic Web, data gained a representation of its meaning: each item is classified under a concept of a knowledge domain, generally defined by an ontology.
This representation, supported by the tool set created for the Semantic Web, makes conceptual search possible. In this type of search, the goal is not to retrieve a specific piece of data but many data items, of several types, classified under a concept of an ontology. Using a similarity measure, data referring to other concepts of the same domain can also be retrieved, making the search broader. Distributed indexing of all these data, however, can make conceptual similarity search costly. Several distributed indexing structures, such as P2P networks, are used to distribute and share huge volumes of data. This thesis shows how LSH functions can be used to generate identifiers for the concepts of a domain, defined by an ontology, while preserving their similarity. In this way, similar concepts are stored near each other, with proximity measured in some metric, making conceptual similarity search easier. / PhD in Electrical Engineering (Computer Engineering)
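As an illustration of LSH over concepts (the specific hash family and ontology metric in the thesis may differ), the sketch below represents each concept by its set of ancestor terms in a made-up ontology and compares MinHash signatures, whose agreement rate approximates the Jaccard similarity of the ancestor sets. Signatures of similar concepts then land near each other in a distributed index.

```python
import hashlib

def minhash_signature(features, num_hashes=64):
    """One min-value per seeded hash function; deterministic via BLAKE2b."""
    sig = []
    for i in range(num_hashes):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{i}:{f}".encode(), digest_size=4).digest(), "big")
            for f in features))
    return sig

def estimated_similarity(a, b):
    # Fraction of agreeing positions approximates Jaccard similarity.
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy ontology: each concept carries its ancestor terms.
cat = {"entity", "animal", "mammal", "feline", "cat"}
dog = {"entity", "animal", "mammal", "canine", "dog"}
rock = {"entity", "object", "mineral", "rock"}

s_cat, s_dog, s_rock = map(minhash_signature, (cat, dog, rock))
# Concepts sharing more ancestors get closer signatures.
assert estimated_similarity(s_cat, s_dog) > estimated_similarity(s_cat, s_rock)
```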
20

Privacy Preserving EEG-based Authentication Using Perceptual Hashing

Koppikar, Samir Dilip 12 1900 (has links)
The use of electroencephalogram (EEG), an electrophysiological monitoring method for recording the brain activity, for authentication has attracted the interest of researchers for over a decade. In addition to exhibiting the qualities of biometric-based authentication, EEG-based credentials are revocable, impossible to mimic, and resistant to coercion attacks. However, EEG signals carry a wealth of information about an individual and can reveal private information about the user. This brings significant privacy issues to EEG-based authentication systems as they have access to raw EEG signals. This thesis proposes a privacy-preserving EEG-based authentication system that preserves the privacy of the user by not revealing the raw EEG signals while allowing the system to authenticate the user accurately. To that end, perceptual hashing is utilized: instead of raw EEG signals, their perceptually hashed values are used in the authentication process. In addition to describing the authentication process, algorithms to compute the perceptual hash are developed based on two feature extraction techniques. Experimental results show that an authentication system using perceptual hashing can achieve performance comparable to a system that has access to raw EEG signals if enough EEG channels are used in the process. This thesis also presents a security analysis to show that perceptual hashing can prevent information leakage.
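A minimal sketch of the perceptual-hashing idea on a 1-D signal (far simpler than the EEG feature extraction in the thesis, and the windowing parameters are made up): hash coarse per-window features rather than raw samples, so near-identical signals yield near-identical hashes while the raw signal is not recoverable from the hash.

```python
import math

def perceptual_hash(signal, windows=16):
    """One bit per window: is the window mean above the mean of window means?"""
    step = len(signal) // windows
    means = [sum(signal[i * step:(i + 1) * step]) / step for i in range(windows)]
    overall = sum(means) / windows
    return tuple(int(m > overall) for m in means)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

clean = [math.sin(i / 5) for i in range(160)]                        # enrolment signal
noisy = [s + 0.01 * math.sin(1.7 * i) for i, s in enumerate(clean)]  # slight distortion
other = [math.sin(i / 2) for i in range(160)]                        # different signal

# Matching is done on hashes alone; raw samples never need to be revealed.
assert hamming(perceptual_hash(clean), perceptual_hash(noisy)) <= 2
assert hamming(perceptual_hash(clean), perceptual_hash(other)) > 2
```

Unlike a cryptographic hash, a small perturbation of the input leaves the hash (almost) unchanged: that tolerance is what makes biometric matching on hashed values possible.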
