111
Time Efficient and Quality Effective K Nearest Neighbor Search in High Dimension Space. January 2011 (has links)
abstract: K-Nearest-Neighbors (KNN) search is a fundamental problem in many application domains such as databases and data mining, information retrieval, machine learning, pattern recognition, and plagiarism detection. Locality-sensitive hashing (LSH) is so far the most practical approximate KNN search algorithm for high-dimensional data. Algorithms such as Multi-Probe LSH and LSH-Forest improve upon the basic LSH algorithm by varying the hash bucket size dynamically at query time, so these two algorithms can answer different KNN queries adaptively. However, both require a data access post-processing step after candidate collection in order to produce the final answer to the KNN query. In this thesis, the Multi-Probe LSH with data access post-processing (Multi-Probe LSH with DAPP) algorithm and the LSH-Forest with data access post-processing (LSH-Forest with DAPP) algorithm are improved by replacing the costly data access post-processing (DAPP) step with a much faster histogram-based post-processing (HBPP) step. Two HBPP algorithms, LSH-Forest with HBPP and Multi-Probe LSH with HBPP, are presented in this thesis; both achieve the three goals for KNN search in large-scale, high-dimensional data sets: high search quality, high time efficiency, and high space efficiency. None of the previous KNN algorithms achieves all three goals. More specifically, it is shown that the HBPP algorithms always achieve high search quality (as good as LSH-Forest with DAPP and Multi-Probe LSH with DAPP) with much lower time cost (one to several orders of magnitude speedup) and the same memory usage. It is also shown that, with almost the same time cost and memory usage, the HBPP algorithms always achieve better search quality than LSH-Forest with random pick (LSH-Forest with RP) and Multi-Probe LSH with random pick (Multi-Probe LSH with RP). Moreover, to achieve a very high search quality, Multi-Probe LSH with HBPP is always a better choice than LSH-Forest with HBPP, regardless of the distribution, size, and dimensionality of the data set. / Dissertation/Thesis / M.S. Computer Science 2011
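As background for the abstract above, the sketch below illustrates the basic LSH idea it builds on: random-hyperplane hashing groups nearby points into buckets, and bucket candidates are then re-ranked by true distance (the costly data-access post-processing that the thesis replaces with HBPP). All parameter values and the ranking step are illustrative assumptions, not the thesis's algorithm.

```python
# Illustrative random-hyperplane LSH for approximate KNN candidate retrieval.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_points, n_bits = 64, 10_000, 16

data = rng.standard_normal((n_points, dim))
planes = rng.standard_normal((n_bits, dim))        # one random hyperplane per hash bit

def lsh_key(v):
    """Hash a vector to an n_bits-bit bucket key via hyperplane sign tests."""
    bits = (planes @ v) > 0
    return bits.tobytes()

# Build the hash table: points that collide are near-neighbour candidates.
table = defaultdict(list)
for i, v in enumerate(data):
    table[lsh_key(v)].append(i)

def knn_candidates(query, k=10):
    """Collect bucket candidates, then rank by true distance
    (the 'data access post-processing' step in the abstract)."""
    cand = table[lsh_key(query)]
    dists = np.linalg.norm(data[cand] - query, axis=1)
    order = np.argsort(dists)[:k]
    return [cand[i] for i in order]

print(knn_candidates(rng.standard_normal(dim)))
```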
112
The Design and Analysis of Hash Families For Use in Broadcast Encryption. January 2012 (has links)
abstract: Broadcast Encryption is the task of cryptographically securing communication in a broadcast environment so that only a dynamically specified subset of subscribers, called the privileged subset, may decrypt the communication. In practical applications, it is desirable for a Broadcast Encryption Scheme (BES) to demonstrate resilience against attacks by colluding, unprivileged subscribers. Minimal Perfect Hash Families (PHFs) have been shown to provide a basis for the construction of memory-efficient t-resilient Key Pre-distribution Schemes (KPSs) from multiple instances of 1-resilient KPSs. Using this technique, the task of constructing a large t-resilient BES is reduced to finding a near-minimal PHF of appropriate parameters. While combinatorial and probabilistic constructions exist for minimal PHFs with certain parameters, the complexity of constructing them in general is currently unknown. This thesis introduces a new type of hash family, called a Scattering Hash Family (ScHF), which is designed to allow for the scalable and ingredient-independent design of memory-efficient BESs for large parameters, specifically resilience and total number of subscribers. A general BES construction using ScHFs is shown, which builds t-resilient KPSs from other KPSs of any resilience w ≤ t. In addition to demonstrating how ScHFs can be used to produce BESs, this thesis explores several ScHF construction techniques. The initial technique is a probabilistic, non-constructive proof of existence for ScHFs. This construction is then derandomized into a direct, polynomial-time construction of near-minimal ScHFs using the method of conditional expectations. As an alternative approach to direct construction, representing ScHFs as a k-restriction problem allows for the indirect construction of ScHFs via randomized post-optimization. Using the methods defined, ScHFs are constructed and the parameters' effects on solution size are analyzed. For large strengths, constructive techniques lose significant performance, and as such, asymptotic analysis is performed using the non-constructive existential results. This work concludes with an analysis of the benefits and disadvantages of BESs based on the constructed ScHFs. Due to the novel nature of ScHFs, the results of this analysis are used as the foundation for an empirical comparison between ScHF-based and PHF-based BESs. The primary bases of comparison are construction efficiency, key material requirements, and message transmission overhead. / Dissertation/Thesis / M.S. Computer Science 2012
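To make the hash-family terminology above concrete, here is a toy check (with made-up parameters) of the separation property that defines a perfect hash family: every t-subset of the domain must be mapped injectively by at least one function in the family. The scattering hash families (ScHFs) introduced in this thesis relax and generalize this condition; their exact definition is not reproduced here.

```python
# Toy checker for the PHF(N; n, m, t) separation property.
from itertools import combinations

def is_phf(family, n, t):
    """family: list of functions, each given as a length-n tuple of hash values."""
    for subset in combinations(range(n), t):
        # The subset must be separated (mapped injectively) by some function.
        if not any(len({f[i] for i in subset}) == t for f in family):
            return False, subset      # this t-subset is never separated
    return True, None

# Three functions from a 6-element set into {0, 1, 2}.
family = [
    (0, 1, 2, 0, 1, 2),
    (0, 0, 1, 1, 2, 2),
    (2, 1, 0, 2, 1, 0),
]
print(is_phf(family, n=6, t=3))
```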
113
Uma arquitetura escalável para recuperação e atualização de informações com relação de ordem total. / A scalable architecture for retrieving information with total order relationship. Vladimir Emiliano Moreira Rocha, 17 November 2017 (has links)
Since the beginning of the 21st century, we have experienced explosive growth in the production of information of many kinds, such as photos, audio, and video. Some of this information can be divided into smaller parts that must be related following a total order. For example, a video file can be divided into ten segments identified by the numbers 1 to 10; to play the original video from these segments, their identifiers must be fully ordered. The structure known as a Distributed Hash Table (DHT) has been widely used to store, update, and retrieve this kind of information efficiently in several application domains, such as sensor monitoring and video on demand. However, a DHT faces scalability problems when one of its members cannot handle the requests it receives, making the information inaccessible. This work presents MATe, a layered architecture that addresses the scalability problem on two levels: extending the DHT with utility-based agents and organizing the volume of requests. The first layer handles scalability by allowing new agents to be created to distribute the requests, preventing any single agent from having its scalability compromised. The second layer is composed of groups of devices organized so that only a few of them are chosen to issue requests. The architecture was implemented for two scenarios where scalability problems arise: (i) sensor monitoring; and (ii) video on demand. For both scenarios, the experimental results show that MATe improves scalability when compared with the original DHT implementations.
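For illustration, the sketch below shows the underlying DHT idea referenced in this abstract: totally ordered segment keys placed on a consistent-hashing ring. It does not model MATe's agent or device-grouping layers, and all node and key names are invented for the example.

```python
# Minimal consistent-hashing DHT storing totally ordered video segments.
import hashlib
from bisect import bisect

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class SimpleDHT:
    def __init__(self, nodes):
        # Place nodes on a hash ring; each key is owned by its successor node.
        self.ring = sorted((h(n), n) for n in nodes)
        self.store = {n: {} for n in nodes}

    def _node_for(self, key):
        idx = bisect(self.ring, (h(key),)) % len(self.ring)
        return self.ring[idx][1]

    def put(self, key, value):
        self.store[self._node_for(key)][key] = value

    def get(self, key):
        return self.store[self._node_for(key)].get(key)

dht = SimpleDHT(["node-a", "node-b", "node-c"])
# Store ten ordered segments of one video and read them back in order.
for i in range(1, 11):
    dht.put(f"video42/seg{i:02d}", f"<segment {i} bytes>")
print([dht.get(f"video42/seg{i:02d}") for i in range(1, 11)])
```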
114
Uso de EDPs caóticas em criptografia [Use of chaotic PDEs in cryptography]. Marco, Anderson Gonçalves, January 2017 (has links)
Advisor: Prof. Dr. Jerônimo Cordoni Pellegrini / Dissertation (Master's) - Universidade Federal do ABC, Programa de Pós-Graduação em Ciência da Computação, 2017. / Cryptography is fundamental to assuring people their individual rights and freedoms and to protecting organizations' data. This work describes a new 256-bit hash function. The algorithm is built on chaotic partial differential equations (PDEs), thereby inaugurating a new paradigm in this area. Because it uses chaotic PDEs, the algorithm's performance can be increased through parallel programming, for instance when a GPU is available; this is not possible with classical chaotic systems such as maps and ordinary differential equations. The Gray-Scott equations are the PDEs used in this work. Two major challenges arose during its development. The first was to ensure that the evolution of the Gray-Scott PDEs remains sensitive to initial conditions after discretization. The other was to reduce the fixed computational cost that an earlier version of the algorithm, also developed by the author, incurred in its finalization step. The results show that the developed algorithm performs well on parallel computer architectures, indicating that this new paradigm for designing cryptographic algorithms takes good advantage of the current paradigm in computer architecture.
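As a rough illustration of the chaotic system named in this abstract, the following NumPy sketch performs explicit Euler steps of the Gray-Scott reaction-diffusion PDEs seeded from message bytes. The parameter values, grid size, and message-absorption scheme are common illustrative choices, not the dissertation's actual construction.

```python
# One explicit Euler step of the Gray-Scott reaction-diffusion PDEs on a periodic grid.
import numpy as np

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    uvv = u * v * v
    u_next = u + dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v_next = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u_next, v_next

# Seed the grid from message bytes (a hypothetical absorption scheme), then
# iterate; a real hash would map the final state to a 256-bit digest.
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
msg = np.frombuffer(b"example message", dtype=np.uint8)
v.flat[:msg.size] = msg / 255.0
for _ in range(100):
    u, v = gray_scott_step(u, v)
print(u.mean(), v.mean())
```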
115
Modelo estrutural para compartilhamento de arquivos peer-to-peer [A structural model for peer-to-peer file sharing]. Rezende, Evandro da Silva [UNESP], 27 July 2009 (has links) (PDF)
The file-sharing process has evolved significantly from the creation of the first computers to the present day. This evolution began with data sharing among users of the same mainframe, passed through the creation of removable storage media, and later benefited from advances in computer network technology. In this last phase, two distinct stages can be identified: a first one in which the client/server architecture was predominant, and a second one that seeks to return to the origins of the Internet and make broad use of peer-to-peer computing. Despite all the progress in peer-to-peer file sharing since its emergence in the late 1990s, it has not yet been enough to guarantee the primary goal of this approach, which is to simplify the exchange of files among users and systems, partly due to the existence of multiple incompatible peer-to-peer networks. This dissertation therefore presents a completely decentralized and autonomous model for indexing and accessing shared files. It also provides mechanisms for accessing the shared content through industry standards, offering access to these files to a variety of applications and systems, for example web browsers, which were the focus of a case study in this work that made it possible to verify the benefits and feasibility of the model.
116
MEMS Harsh Environment Sensors for Earth and Space Exploration. January 2013 (has links)
abstract: Harsh environments have conditions that make collecting scientific data difficult with existing commercial-off-the-shelf technology. Micro Electro Mechanical Systems (MEMS) technology is ideally suited to harsh-environment characterization and operation due to the wide range of materials available and an incredible array of different sensing techniques, while providing small device size, low power consumption, and robustness. There were two main objectives of the research conducted. The first objective was to design, fabricate, and test novel sensors that measure the amount of exposure to ionizing radiation for a wide range of applications, including characterization of harsh environments. Two types of MEMS ionizing radiation dosimeters were developed. The first sensor was a passive radiation-sensitive capacitor-antenna design; the antenna's emitted frequency of peak intensity changed as exposure time to radiation increased. The second sensor was a film bulk acoustic-wave resonator whose resonant frequency decreased with increasing ionizing radiation exposure time. The second objective was to develop MEMS sensor systems that could be deployed to gather scientific data and to use that data to address the following research question: do temperature and/or conductivity predict the appearance of photosynthetic organisms in hot springs? To this end, temperature and electrical conductivity sensor arrays were designed and fabricated based on mature MEMS technology. Electronic circuits and the software interface to the electronics were developed for field data collection. The sensor arrays utilized in the hot springs yielded results that support the hypothesis that temperature plays a key role in determining where the photosynthetic organisms occur. Additionally, a cold-film fluidic flow sensor was developed, which is suitable for near-boiling temperature measurement. Future research should focus on (1) developing a MEMS pH sensor array with integrated temperature, conductivity, and flow sensors to provide multi-dimensional data for scientific study and (2) finding solutions to biofouling and self-calibration, which affect sensor performance over long-term deployment. / Dissertation/Thesis / Ph.D. Engineering 2013
117
Evaluation and Implementation for Pushing Automatic Updates to IoT Devices. Min, Menglei, January 2017 (has links)
In recent years, the Internet of Things has developed rapidly and has now penetrated human life and industrial production. It is speculated that the Internet of Things will become ubiquitous in the future, which will bring a series of problems. First, the large number of things will cause operating system and software updates to consume a great deal of manpower and resources. Another problem is that the Internet of Things faces security issues; in recent years, the means and tools for attacking the Internet of Things have been increasing considerably. Therefore, achieving secure automatic updates on the Internet of Things is essential. This report is built around such an automatic update system for the Internet of Things. It first elaborates the main motivation for this problem and analyzes three existing related works and three security methods for communication. Combining the results of this analysis, it then puts forward its own secure automatic update solution: the manager and the devices connect and mutually authenticate in real time, while the manager regularly checks the database to see whether a new application version is available. When the administrator uploads a new version, the manager downloads it and sends it to all devices; each device then installs it and finally restarts itself. Next, the report describes in detail how to implement this system and evaluates it. In the end, the report summarizes the work and introduces future work.
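The flow described in this abstract (mutual authentication, periodic version checks, push, install, restart) can be sketched roughly as below. All components are in-memory stand-ins, and the HMAC-based authentication is an assumed simplification, not the report's exact protocol.

```python
# Schematic simulation of a manager pushing updates to authenticated devices.
import hashlib, hmac

SHARED_KEY = b"demo-key"                 # hypothetical pre-shared key

class Device:
    def __init__(self, name):
        self.name, self.version = name, "1.0"
    def authenticate(self, challenge):
        # Mutual-authentication stand-in: HMAC over a manager challenge.
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    def install(self, version, image):
        self.version = version
        print(f"{self.name}: installed {version} ({len(image)} bytes), restarting")

class Manager:
    def __init__(self, devices, version_db):
        self.devices, self.version_db = devices, version_db
        self.deployed = "1.0"
    def check_and_push(self):
        # Periodic check of the version database for a new application version.
        latest, image = self.version_db["latest"], self.version_db["image"]
        if latest == self.deployed:
            return
        challenge = b"nonce-0001"
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        for dev in self.devices:
            if hmac.compare_digest(dev.authenticate(challenge), expected):
                dev.install(latest, image)
        self.deployed = latest

db = {"latest": "1.1", "image": b"\x00" * 2048}   # administrator uploaded version 1.1
mgr = Manager([Device("sensor-01"), Device("sensor-02")], db)
mgr.check_and_push()
```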
118
Hash Comparison Module for OCFA. Axelsson, Therese; Melani, Daniel, January 2010 (has links)
Child abuse content on the Internet is today an increasing problem and is difficult to deal with. The techniques used by paedophiles are getting more sophisticated, which means it takes more effort for law enforcement to locate this content. To help solve this issue, an EU-funded project named FIVES is developing a set of tools to help investigations involving large amounts of image and video material. One of these tools aims to help identify potentially illegal files by hash signatures derived using classification information from another project. / FIVES
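The hash-signature matching idea mentioned above can be sketched as follows; the signature set, hash algorithm choice, and paths are placeholders, not the actual FIVES/OCFA formats.

```python
# Minimal sketch: screen files against a database of known hash signatures.
import hashlib
from pathlib import Path

KNOWN_ILLEGAL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder value
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def flag_files(directory: str):
    """Yield files whose hash matches a known signature."""
    for p in Path(directory).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_ILLEGAL_HASHES:
            yield p

# Example usage (hypothetical path): print(list(flag_files("/evidence/images")))
```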
119
Differential Power Analysis In-Practice for Hardware Implementations of the Keccak Sponge Function. Graff, Nathaniel, 01 June 2018 (has links)
The Keccak Sponge Function is the winner of the National Institute of Standards and Technology (NIST) competition to develop the Secure Hash Algorithm-3 Standard (SHA-3). Prior work has developed reference implementations of the algorithm and described the structures necessary to harden it against power analysis attacks, which can weaken the cryptographic properties of the hash algorithm. This work demonstrates the architectural changes to the reference implementation necessary to achieve the theoretical side-channel-resistant structures, compares their efficiency and performance characteristics after synthesis and place-and-route when implementing them on Field Programmable Gate Arrays (FPGAs), publishes the resulting implementations under the Massachusetts Institute of Technology (MIT) open source license, and shows that the resulting implementations demonstrably harden the sponge function against power analysis attacks.
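As a rough illustration of the side-channel countermeasures referred to above, the snippet below shows first-order Boolean masking, a standard way to decorrelate intermediate values from secrets. Protecting Keccak's nonlinear chi step requires further techniques (e.g., threshold implementations) not shown here, and this is not the thesis's hardware design.

```python
# First-order Boolean masking: a secret word is split into two random shares.
import secrets

def mask(x):
    r = secrets.randbits(64)
    return r, x ^ r                           # shares (r, x xor r); each alone is uniform

def masked_xor(a_shares, b_shares):
    # Linear operations can be applied share-wise without ever recombining.
    return a_shares[0] ^ b_shares[0], a_shares[1] ^ b_shares[1]

def unmask(shares):
    return shares[0] ^ shares[1]

a, b = 0x0123456789ABCDEF, 0xFEDCBA9876543210
assert unmask(masked_xor(mask(a), mask(b))) == a ^ b
```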
120
Secure and efficient post-quantum cryptographic digital signature algorithms. Mahmoud, Mahmoud Yehia Ahmed, 24 August 2021 (has links)
Cryptographic digital signatures provide authentication to communicating parties over communication networks. They are integral asymmetric primitives in cryptography. The current digital signature infrastructure adopts schemes that rely on the hardness of finding discrete logarithms and of factoring in finite groups. Given the recent advances in physics, which point towards the eventual construction of large-scale quantum computers, these hard problems will be solvable in polynomial time using Shor's algorithm. Hence, there is a clear need to migrate the cryptographic infrastructure to post-quantum secure alternatives. Such an initiative is demonstrated by the PQCRYPTO project and the current Post-Quantum Cryptography (PQC) standardization competition run by the National Institute of Standards and Technology (NIST).
This dissertation considers hash-based digital signature schemes. Such algorithms rely on simple security notions such as preimage resistance and weak and strong collision resistance of hash functions. These notions are well understood, and their security against quantum computers has been well analyzed. However, existing hash-based signature schemes have large signature sizes and high computational costs. Moreover, the signature size increases with the number of messages to be signed by a key pair. The goal of this work is to develop hash-based digital signature schemes that overcome the aforementioned limitations. First, FORS, the underlying few-time signature scheme of the NIST PQC alternate candidate SPHINCS+, is analyzed against adaptive chosen-message attacks, and DFORS, a few-time signature scheme with adaptive chosen-message security, is proposed. Second, a new variant of SPHINCS+ is introduced that improves the computational cost and security level; a security analysis of the new variant is presented. In addition, the hash-based group digital signature schemes Group Merkle (GM) and Dynamic Group Merkle (DGM) are studied and their security is analyzed. Group Merkle Multi-Tree (GMMT) is proposed to solve some of the limitations of the GM and DGM hash-based group signature schemes. / Graduate
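To illustrate the hash-based signature paradigm the dissertation builds on, here is a minimal Lamport one-time signature over SHA-256, whose security rests on preimage resistance; FORS, SPHINCS+, GM/DGM, and GMMT are far more sophisticated schemes and are not reproduced here.

```python
# Minimal Lamport one-time signature: reveal one of two secret preimages per message bit.
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(x) for x in pair] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]   # reveal one preimage per bit

def verify(pk, msg: bytes, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")                    # a key pair must sign only one message
assert verify(pk, b"hello", sig)
```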