  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Efficient Constructions for Deterministic Parallel Random Number Generators and Quantum Key Distribution

Ritchie, Robert Peter 22 April 2021 (has links)
No description available.
72

Reviving Mozart with Intelligence Duplication

Galajda, Jacob E 01 January 2021 (has links)
Deep learning has been applied to many problems that are too complex to solve with an explicit algorithm. Most of these problems have not required the expertise of a specific individual or group; most applied networks learn information that is shared intuitively across humans. Deep learning has encountered very few problems that require the expertise of a particular individual or group, and there is not yet a defined class of networks capable of this. Such networks could duplicate the intelligence of a person relative to a specific task, such as their writing style or music composition style. For this thesis research, we propose to investigate Artificial Intelligence in a new direction: Intelligence Duplication (ID). ID encapsulates neural networks that can solve problems requiring the intelligence of a specific person or collective group. The concept can be illustrated by learning the way a composer positions their musical segments, as in the Deep Composer neural network, which allows the network to generate songs similar to the artist's. One notable obstacle is the limited amount of training data available in some cases: it would be nearly impossible to duplicate the intelligence of a lesser-known artist, or of an artist who did not live long enough to produce many works. Generating many artificial segments in the artist's style can overcome these limitations. In recent years, Generative Adversarial Networks (GANs) have shown great promise in closely related tasks. Generating artificial segments gives the network greater leverage in assembling works similar to the artist's, as there is increased overlap among data points within the hashed embedding. Additional review indicates that current Deep Segment Hash Learning (DSHL) network variations have the potential to optimize this process. Because there are fewer nodes in the input and output layers, DSHL networks do not need to compute nearly as much information as traditional networks. We argue that a synthesis of DSHL and GAN networks provides the framework necessary for future ID research. The contributions of this work aim to inspire a new wave of AI research applicable to many other ID problems.
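The segment-retrieval idea behind hashed embeddings can be sketched with a toy example (hypothetical code, not from the thesis; the names and the random-hyperplane scheme are illustrative stand-ins for learned hash functions): segment embeddings are mapped to binary codes, and candidate segments are retrieved by Hamming distance over those codes.

```python
# Toy sketch: retrieve music segments by Hamming distance over binary
# hash codes, the retrieval idea behind segment-hash-based assembly.
import random

random.seed(7)

def random_hyperplane_hash(vec, planes):
    """Map a real-valued segment embedding to a binary code:
    one bit per random hyperplane (sign of the dot product)."""
    return tuple(1 if sum(v * p for v, p in zip(vec, plane)) >= 0 else 0
                 for plane in planes)

def hamming(a, b):
    """Number of differing bits between two codes."""
    return sum(x != y for x, y in zip(a, b))

DIM, BITS = 8, 16
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

# Toy "segment embeddings" standing in for learned features.
library = {f"segment_{i}": [random.gauss(0, 1) for _ in range(DIM)]
           for i in range(50)}
codes = {name: random_hyperplane_hash(v, planes) for name, v in library.items()}

query = library["segment_3"]
qcode = random_hyperplane_hash(query, planes)
best = min(codes, key=lambda n: hamming(qcode, codes[n]))
print(best, hamming(qcode, codes[best]))  # the query's own code is at distance 0
```

A learned DSHL network would replace the random hyperplanes with trained hash functions, but the retrieval step stays the same.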
73

Similarity Estimation with Non-Transitive LSH

Lewis, Robert R. 29 September 2021 (has links)
No description available.
74

Querying Structured Data via Informative Representations

Bandyopadhyay, Bortik January 2020 (has links)
No description available.
75

Video Integrity through Blockchain Technology

Hemlin Billström, Adam, Huss, Fabian January 2017 (has links)
The increasing capabilities of today's smartphones enable users to live stream video directly from their mobile devices. A growing concern regarding videos found online is their authenticity and integrity. From a consumer standpoint, it is very hard to discern whether a video found online can be trusted, whether it is the original version, or whether it has been taken out of context. This thesis investigates a method for applying integrity protection to live-streamed media. The main purpose was to design and evaluate a proof-of-concept prototype that applies data integrity while simultaneously recording video on an Android device. Additionally, the prototype has an online verification platform that verifies the integrity of the recorded video. Blockchain is a technology with the inherent ability to store data as a chronologically chained sequence of events, establishing a tamper-evident database. Using cryptographic hashes together with a blockchain, an Android device can generate cryptographic hashes of the data content of a video recording and transmit these hashes to a blockchain. The same video is deconstructed in the web client, producing hashes that can subsequently be compared with the ones found in the blockchain. The resulting prototype provides some of the desired functions. However, it cannot yet sign the hashes it produces, it does not employ HTTPS for communication, and the verification process needs to be optimized to make it usable in real applications.
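The chunk-hash-and-chain design can be sketched as follows (an illustrative assumption about the scheme, not the thesis prototype's actual code): each video chunk's SHA-256 digest is linked to the previous digest, and verification recomputes the chain and compares it with the published hashes.

```python
# Minimal sketch (assumed design, not the thesis prototype): hash each
# recorded video chunk, chain the hashes, verify by recomputation.
import hashlib

def chunk_hashes(chunks):
    """SHA-256 digest of each chunk, each linked to the previous digest,
    mimicking how a blockchain fixes the recording order."""
    prev = b"\x00" * 32  # genesis link
    out = []
    for chunk in chunks:
        h = hashlib.sha256(prev + chunk).digest()
        out.append(h)
        prev = h
    return out

def verify(chunks, published):
    """Recompute the chain client-side, compare with published hashes."""
    return chunk_hashes(chunks) == published

recording = [b"frame-data-0", b"frame-data-1", b"frame-data-2"]
published = chunk_hashes(recording)   # what the phone would submit
print(verify(recording, published))   # True: untampered copy
tampered = [recording[0], b"edited!", recording[2]]
print(verify(tampered, published))    # False: integrity broken
```

Because each digest folds in its predecessor, editing any chunk invalidates every later hash, which is what makes the ordering tamper-evident.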
76

A Scalable, Load-Balancing Data Structure for Highly Dynamic Environments

Foster, Anthony 05 June 2008 (has links)
No description available.
77

Vector Instruction Set Extensions for Efficient and Reliable Computation of Keccak

Rawat, Hemendra Kumar 27 August 2016 (has links)
Recent processor architectures such as Intel Westmere (and later) and ARMv8 include instruction-level support for the Advanced Encryption Standard (AES), for the Secure Hash Standards (SHA-1, SHA-2) and for carry-less multiplication. These crypto-instructions are optimized for a single algorithm and provide significant performance improvements over software written using the general-purpose instruction set. However, today's secure systems and protocols rely not on just one but on a suite of cryptographic applications that are expected to work in a correct and reliable manner. In this work, we propose a new instruction set for supporting efficient and reliable cryptography on modern processors. For efficiency, we propose flexible instruction set extensions for Keccak, a cryptographic kernel for hashing, authenticated encryption, key-stream generation and random-number generation. Keccak is the basis of the SHA-3 standard and of the newly proposed Keyak and Ketje authenticated ciphers. For reliability, we propose a set of trusted instructions to verify the integrity of a cryptographic software library, aimed at detecting tampering in the software or in the configurable hardware. We develop the instruction extensions for a 128-bit interface, commonly available in the vector processing unit of many modern processors. Simulation results on the GEM5 architectural simulator show that the proposed instructions not only improve the performance of Keccak applications by 2 times (over NEON programming) and 6 times (over assembly programming), but also improve the reliability of applications at a performance overhead of just 6%. / Master of Science
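The Keccak kernel's versatility is easy to demonstrate in software, since Python's standard library exposes the Keccak-based SHA-3 family standardized by NIST (the thesis's vector-instruction extensions are, of course, out of reach in pure Python):

```python
# Software-only illustration: the Keccak permutation underlies both
# fixed-length SHA-3 hashing and the SHAKE extendable-output functions.
import hashlib

msg = b"Keccak is the basis of SHA-3"

# Fixed-length hashing.
print(hashlib.sha3_256(msg).hexdigest())

# SHAKE-128 is an XOF: output length is chosen at call time, which is
# what makes Keccak usable for key-stream-style generation.
print(hashlib.shake_128(msg).hexdigest(16))
print(hashlib.shake_128(msg).hexdigest(64))
```

The same underlying permutation serving hashing and arbitrary-length output is the property that motivates a single flexible instruction extension rather than one per algorithm.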
78

LSHSIM: A Locality Sensitive Hashing Based Method for Multiple-Point Geostatistics

PEDRO NUNO DE SOUZA MOURA 14 November 2017 (has links)
Reservoir modeling is a very important task that permits the representation of a geological region of interest. Given the uncertainty involved in the process, one wants to generate a considerable number of possible scenarios so as to find those which best represent this region. There is therefore a strong demand for generating each simulation quickly. Since its inception, many methodologies have been proposed for this purpose and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this work, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. We have performed experiments with both categorical and continuous training images which showed that LSHSIM is computationally efficient and produces good-quality realizations, while generating a space of uncertainty of reasonable size. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
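The two ingredients LSHSIM combines can be shown in miniature (a hedged sketch, not the authors' implementation): bit-sampling LSH for Hamming distance, which buckets similar binary patterns under a shared key built from a few sampled positions, and run-length encoding of binary patterns.

```python
# Sketch of LSHSIM's two ingredients (illustrative, not the paper's code).
import random

random.seed(1)

def lsh_key(pattern, positions):
    """Bit-sampling LSH for Hamming distance: patterns that agree on
    most bits usually agree on the sampled positions, so similar
    patterns tend to land in the same bucket."""
    return tuple(pattern[i] for i in positions)

def rle(bits):
    """Run-length encode a binary pattern as (value, run_length) pairs;
    comparing runs instead of bits speeds up Hamming-style comparisons."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

pattern = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
positions = random.sample(range(len(pattern)), 3)  # sampled bit indices
print(lsh_key(pattern, positions))
print(rle(pattern))  # [(0, 3), (1, 2), (0, 1), (1, 4)]
```

In the actual method the "patterns" are TI neighborhoods rather than toy bit lists, but the bucketing-then-compressed-comparison pipeline is the same idea.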
79

Contributions to Guillochage and Photograph Authentication

Rivoire, Audrey 29 October 2012 (has links)
This work aims to develop a new type of guilloché pattern to be inserted in a photograph (guillochage), inspired by in-line digital holography and able to encode a robust image hash value (the method of Mihçak and Venkatesan). Such a combination can allow authentication of the image containing the guilloché pattern in the digital domain and possibly in the print domain. This approach constrains the image hash to be robust to guillochage. The hash value is encoded as a cloud of shapes that is virtually diffracted to produce the mark (named "guilloches de Fresnel") inserted in the original image. The insertion results from a trade-off: the high-density mark should be barely or even not visible, so as not to disturb the perception of the image content, but still detectable, so that the encoded hash can later be read and compared with the hash of the photograph under verification. Print-and-scan makes the task harder. Both the Fresnel guillochage and the associated authentication are tested on a (reduced) image database.
80

Lyra2: Password Hashing Scheme with Improved Security Against Time-Memory Trade-Offs

Andrade, Ewerton Rodrigues 07 June 2016 (has links)
To protect against brute-force attacks, modern password-based authentication systems usually employ mechanisms known as Password Hashing Schemes (PHS). Basically, a PHS is a cryptographic algorithm that generates a sequence of pseudorandom bits from a user-defined password, allowing the user to configure the computational costs involved in the process so as to raise the cost for attackers testing multiple passwords in an attempt to guess the correct one. Traditional schemes such as PBKDF2 and bcrypt, for example, include a configurable parameter that controls the number of iterations performed, allowing the user to adjust the time required by the password hashing process. The more recent scrypt and Lyra algorithms, on the other hand, allow users to control both processing time and memory usage. Despite these advances, there is still considerable interest in the research community in the development of new (and better) alternatives; indeed, this led to the creation of a competition with this specific purpose, the Password Hashing Competition (PHC). In this context, the goal of this research effort is to propose a superior PHS alternative. Specifically, the objective is to improve the Lyra algorithm, a PHS built upon cryptographic sponges whose design counted on the author's participation. The resulting solution, called Lyra2, preserves the security, efficiency and flexibility of Lyra, including: (1) the ability to configure the desired amount of memory and processing time to be used by the algorithm; and (2) the capacity of providing a high memory usage with a processing time similar to that obtained with scrypt. In addition, it brings important improvements over its predecessor: (1) it allows a higher security level against attack strategies involving time-memory trade-offs; (2) it includes tweaks that increase the cost of building dedicated hardware to attack the algorithm; and (3) it balances resistance against side-channel threats and against attacks relying on cheaper (and, hence, slower) storage devices. Besides describing the algorithm's design rationale in detail, this work also includes a detailed analysis of its security and performance on different platforms. It is worth mentioning that Lyra2, as hereby described, received a special recognition in the aforementioned PHC competition.
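The tunable-cost idea behind these schemes can be illustrated with the standard library (Lyra2 itself is not in `hashlib`; PBKDF2 and scrypt stand in): PBKDF2 exposes only a time parameter, while scrypt exposes both time and memory parameters, the distinction the abstract draws.

```python
# Tunable password-hashing cost with the standard library (PBKDF2 and
# scrypt as stand-ins; Lyra2 is not available in hashlib).
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # a fresh random salt per stored password

# PBKDF2: cost is controlled only by the iteration count (time).
h1 = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

# scrypt: cost is controlled by time (n, p) AND memory
# (roughly 128 * r * n bytes; here n=2**14, r=8 gives ~16 MiB).
h2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

print(h1.hex()[:16], h2.hex()[:16])
```

Raising the memory parameter is what makes large-scale guessing on dedicated hardware expensive, which is exactly the axis Lyra2 strengthens further.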
