131. Covering Arrays for Some Equivalence Classes of Words
Cassels, Joshua; Godbole, Anant. 01 August 2019
Covering arrays for words of length t over a d-letter alphabet are k × n arrays with entries from the alphabet such that for each choice of t columns, each of the d^t t-letter words appears at least once among the rows of the selected columns. We study two schemes in which not all words are considered to be different. In the first case, known as partitioning hash families, words are equivalent if they induce the same partition of a t-element set. In the second case, words of the same weight are equivalent. In both cases, we produce logarithmic upper bounds on the minimum size k = k(n) of a covering array. Definitive results for small values of t, as well as general results, are provided.
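For context, a logarithmic upper bound of this kind follows for classical covering arrays from a standard probabilistic (union-bound) argument; the sketch below, in the notation t, d, k, n used above, is a generic illustration and not necessarily the authors' proof for the equivalence-class variants.

```latex
% Union-bound sketch for classical covering arrays (k rows, n columns,
% d-letter alphabet, strength t): fill the array with i.i.d. uniform entries.
\[
  \Pr\bigl[\text{some $t$-set of columns misses some $t$-letter word}\bigr]
    \;\le\; \binom{n}{t}\, d^{t} \bigl(1 - d^{-t}\bigr)^{k}
    \;\le\; n^{t} d^{t} e^{-k d^{-t}},
\]
% which is below 1 once
\[
  k \;>\; t\, d^{t} \ln(nd),
\]
% so covering arrays with k = O(d^{t}\, t \log n) rows exist. Identifying words
% into equivalence classes only shrinks the set of words to be covered, so the
% logarithmic order of the bound persists.
```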
132. Performance Study of Concurrent Search Trees and Hash Algorithms on Multiprocessor Systems
Demuynck, Marie-Anne
This study examines the performance of concurrent algorithms for B-trees and linear hashing. B-trees are widely used as an access method for large, single-key database files stored in lexicographic order on secondary storage devices. Linear hashing is a fast and reliable hash algorithm, suitable for accessing records stored unordered in buckets. This dissertation presents performance results on implementations of concurrent Blink-tree and linear hashing algorithms, using lock-based, partitioned, and distributed methods on the Sequent Symmetry shared-memory multiprocessor system and on a network of distributed processors created with PVM (Parallel Virtual Machine) software. Initial experiments, which started with empty data structures, show good results for the partitioned implementations and for lock-based linear hashing, but poor ones for lock-based Blink-trees. A subsequent test, which started with loaded data structures, shows similar results, but with much improved performance for locked Blink-trees. The data also highlight the high cost of split operations, which reached up to 70% of the total insert time.
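For background, a minimal single-threaded sketch of the linear hashing scheme referenced above is shown below: a split pointer extends the bucket range one bucket at a time as the load factor grows. It is a plain Python illustration, not the lock-based, partitioned, or distributed concurrent implementations studied in the dissertation; the initial bucket count, load factor, and use of Python's built-in hash are arbitrary choices.

```python
class LinearHashTable:
    """Minimal linear hashing sketch: buckets split one at a time,
    driven by a split pointer and the current level."""

    def __init__(self, initial_buckets=4, max_load=0.75):
        self.level = 0                       # current doubling round
        self.split = 0                       # next bucket to split
        self.n0 = initial_buckets            # buckets at level 0
        self.buckets = [[] for _ in range(initial_buckets)]
        self.max_load = max_load
        self.count = 0

    def _addr(self, key):
        h = hash(key)
        b = h % (self.n0 * (2 ** self.level))
        if b < self.split:                   # bucket already split this round
            b = h % (self.n0 * (2 ** (self.level + 1)))
        return b

    def insert(self, key, value):
        self.buckets[self._addr(key)].append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split_one()

    def _split_one(self):
        # Split the bucket at the split pointer by rehashing its records
        # with the next-level hash function, then advance the pointer.
        old = self.buckets[self.split]
        self.buckets.append([])
        self.buckets[self.split] = []
        new_mod = self.n0 * (2 ** (self.level + 1))
        for k, v in old:
            self.buckets[hash(k) % new_mod].append((k, v))
        self.split += 1
        if self.split == self.n0 * (2 ** self.level):
            self.level += 1
            self.split = 0

    def lookup(self, key):
        for k, v in self.buckets[self._addr(key)]:
            if k == key:
                return v
        return None


if __name__ == "__main__":
    table = LinearHashTable()
    for i in range(100):
        table.insert(f"key{i}", i)
    assert table.lookup("key42") == 42
```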
133. Analyse de primitives symétriques / Analysis of symmetric primitives
Karpman, Pierre. 18 November 2016
This thesis is about block ciphers and cryptographic hash functions, two essential primitives of symmetric-key cryptography. In the first part of the manuscript, we study useful building blocks for block cipher design. We first consider large diffusion matrices built from algebraic-geometry codes, and then construct a small S-box with good diffusion; we also show how this S-box can be used to define a compact and efficient block cipher targeting small processors. In the second part, we focus on the SHA-1 hash function, for which we develop a freestart collision attack. We show how classical collision attacks can be made more efficient by exploiting the additional freedom provided by this model, which in particular allows us to compute explicit collisions for the full compression function of SHA-1.
134. Reviving Mozart with Intelligence Duplication
Galajda, Jacob E. 01 January 2021
Deep learning has been applied to many problems that are too complex to solve with explicit algorithms. Most of these problems have not required the specific expertise of a particular individual or group; most applied networks learn information that humans share intuitively. Deep learning has encountered very few problems that require the expertise of a specific individual or group to solve, and there has yet to be a defined class of networks capable of achieving this. Such networks could duplicate the intelligence of a person relative to a specific task, such as their writing style or music composition style. For this thesis research, we propose to investigate artificial intelligence in a new direction: Intelligence Duplication (ID). ID encapsulates neural networks that are capable of solving problems that require the intelligence of a specific person or collective group. The concept can be illustrated by learning the way a composer positions their musical segments, as in the Deep Composer neural network, which allows the network to generate songs similar to those of the artist in question. One notable issue is the limited amount of training data available in some cases; for instance, it would be nearly impossible to duplicate the intelligence of a lesser-known artist or an artist who did not live long enough to produce many works. Generating many artificial segments in the artist's style can overcome these limitations. In recent years, Generative Adversarial Networks (GANs) have shown great promise in closely related tasks. Generating artificial segments gives the network greater leverage in assembling works similar to the artist's, as there will be an increased overlap of data points within the hashed embedding. Additional review indicates that current Deep Segment Hash Learning (DSHL) network variations have the potential to optimize this process: because there are fewer nodes in the input and output layers, DSHL networks do not need to compute nearly as much information as traditional networks. We indicate that a synthesis of DSHL and GAN networks provides the framework necessary for future ID research. The contributions of this work will inspire a new wave of AI research that can be applied to many other ID problems.
135. Similarity Estimation with Non-Transitive LSH
Lewis, Robert R. 29 September 2021
No description available.
136. Video Integrity through Blockchain Technology
Hemlin Billström, Adam; Huss, Fabian. January 2017
The increasing capabilities of today's smartphones enable users to live stream video directly from their mobile devices. A growing concern regarding videos found online is their authenticity and integrity: from a consumer standpoint, it is very hard to discern whether a video can be trusted, whether it is the original version, or whether it has been taken out of context. This thesis investigates a method for applying video integrity to live-streamed media. The main purpose was to design and evaluate a proof-of-concept prototype that applies data integrity protection while video is being recorded on an Android device, together with an online verification platform that verifies the integrity of the recorded video. A blockchain stores data as a chronologically chained sequence of events, establishing a tamper-evident database. Combining cryptographic hashes with a blockchain, the Android device generates hashes of the video data as it is recorded and transmits these hashes to the blockchain. The web client then deconstructs the same video, computes the corresponding hashes, and compares them with the ones found on the blockchain. The resulting prototype provides some of the desired functionality, but it is limited in that it cannot sign the hashes it produces, it does not use HTTPS for communication, and the verification process needs to be optimized before it is usable in real applications.
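To make the record-and-verify flow above concrete, the sketch below hashes a recording segment by segment and later re-derives and compares the digests, with an in-memory list standing in for the blockchain. The segment size, the use of SHA-256, and the ToyChain class are assumptions for illustration rather than details of the prototype.

```python
import hashlib

SEGMENT_SIZE = 1 << 20  # assumed 1 MiB video segments

def segment_hashes(video_bytes, segment_size=SEGMENT_SIZE):
    """Hash the recording segment by segment, as the recorder would
    while the video is being captured."""
    return [
        hashlib.sha256(video_bytes[i:i + segment_size]).hexdigest()
        for i in range(0, len(video_bytes), segment_size)
    ]

class ToyChain:
    """Stand-in for the blockchain: an append-only list of published digests."""
    def __init__(self):
        self.entries = []
    def publish(self, digest):
        self.entries.append(digest)

def verify(video_bytes, chain):
    """Web-client side: re-hash the received video and compare against
    the digests previously published to the chain."""
    return segment_hashes(video_bytes) == chain.entries

if __name__ == "__main__":
    original = bytes(3 * SEGMENT_SIZE + 123)    # dummy "recording"
    chain = ToyChain()
    for digest in segment_hashes(original):
        chain.publish(digest)

    tampered = bytearray(original)
    tampered[SEGMENT_SIZE + 7] ^= 0xFF          # flip one byte in segment 2

    print(verify(original, chain))              # True
    print(verify(bytes(tampered), chain))       # False
```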
137. Exploring Material Representations for Sparse Voxel DAGs
Pineda, Steven. 01 June 2021
Ray tracing is a popular technique used in movies and video games to create compelling visuals. Ray-traced computer images are increasingly realistic and almost indistinguishable from real-world images. Due to the complexity of scenes and the desire for high-resolution images, ray tracing can become very expensive in terms of computation and memory. To address these concerns, researchers have examined data structures that efficiently store geometric and material information. Sparse voxel octrees (SVOs) and directed acyclic graphs (DAGs) have proven to be successful geometric data structures for reducing memory requirements. Moxel DAGs connect material properties to these geometric data structures, but suffer limitations related to memory, build times, and render times. This thesis examines the efficacy of connecting an alternative material data structure to existing geometric representations.
The contributions of this thesis include the creation of a new material representation that uses hashing to accompany DAGs, a method to calculate surface normals using neighboring voxel data, and a demonstration and validation that DAGs can be used to supersample based on proximity. This thesis also validates the visual quality of these methods via a user survey comparing different output images. In comparison to the Moxel DAG implementation, this work increases render time but reduces build times and memory use, and improves the visual quality of output images.
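As one way to realize the normal-from-neighbors idea listed in the contributions, the sketch below estimates a surface normal from the occupancy of a voxel's axis-aligned neighbors via a central-difference gradient. This is a common technique used here as an assumed stand-in, not necessarily the exact method implemented in the thesis.

```python
import math

def estimate_normal(occupied, x, y, z):
    """Estimate a surface normal at voxel (x, y, z) from neighboring
    occupancy using a central-difference gradient of the density field.

    `occupied` is a set of (x, y, z) integer voxel coordinates."""
    def density(px, py, pz):
        return 1.0 if (px, py, pz) in occupied else 0.0

    # Gradient of occupancy; the surface normal points from solid to empty,
    # i.e. opposite the direction in which density increases.
    gx = density(x + 1, y, z) - density(x - 1, y, z)
    gy = density(x, y + 1, z) - density(x, y - 1, z)
    gz = density(x, y, z + 1) - density(x, y, z - 1)

    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    if length == 0.0:
        return None  # interior or isolated voxel: no well-defined normal
    return (-gx / length, -gy / length, -gz / length)


if __name__ == "__main__":
    # A small solid slab occupying z <= 0: a surface voxel at z == 0
    # should get a normal close to +Z.
    slab = {(x, y, z) for x in range(-2, 3)
                      for y in range(-2, 3)
                      for z in range(-3, 1)}
    print(estimate_normal(slab, 0, 0, 0))   # approximately (0.0, 0.0, 1.0)
```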
138. Design and Implementation of a Customized Encryption Algorithm for Authentication and Secure Communication between Devices
Daddala, Bhavana. January 2017
No description available.
139. DHT-based Collaborative Web Translation
Tu, Zongjie. January 2016
No description available.
140. Performance Optimization of Public Key Cryptography on Embedded Platforms
Pabbuleti, Krishna Chaitanya. 23 May 2014
Embedded systems are so ubiquitous that they account for almost 90% of all computing devices. They range from very small devices with an 8-bit microcontroller and a few kilobytes of RAM to large-scale devices featuring PC-like performance with full-blown 32-bit or 64-bit processors, special-purpose acceleration hardware, and several gigabytes of RAM. Each of these classes of embedded systems has a unique set of challenges in terms of hardware utilization, performance, and power consumption. As network connectivity becomes a standard feature in these devices, security becomes an important concern. Public Key Cryptography (PKC) is an indispensable tool for implementing the security features these embedded platforms require. In this thesis, we provide optimized PKC solutions on platforms belonging to two extreme classes of the embedded system spectrum.
First, we target the high-end embedded platforms Qualcomm Snapdragon and Intel Atom. Each of these platforms features a dual-core processor, a GPU, and a gigabyte of RAM. We use the SIMD coprocessor built into these processors to accelerate the modular arithmetic, which accounts for the majority of execution time in Elliptic Curve Cryptography. We exploit the structure of the NIST primes to perform the reduction step as we perform the multiplication. Our implementation runs more than twice as fast as the OpenSSL implementations on the respective platforms.
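For illustration, the sketch below shows the standard Solinas-style fast reduction enabled by the structure of the NIST prime P-192 (p = 2^192 - 2^64 - 1): the high words of a product fold back into the low words using additions only. It is plain Python showing the arithmetic idea, not the interleaved SIMD multiply-and-reduce routine developed in the thesis.

```python
P192 = 2**192 - 2**64 - 1
MASK64 = (1 << 64) - 1

def reduce_p192(c):
    """Fast reduction of a 384-bit product c modulo the NIST prime P-192.

    Uses 2^192 = 2^64 + 1 (mod p), so the high 64-bit words fold back
    into the low three words with additions only (no division)."""
    w = [(c >> (64 * i)) & MASK64 for i in range(6)]   # 64-bit words c0..c5

    s1 = w[0] | (w[1] << 64) | (w[2] << 128)           # (c2, c1, c0)
    s2 = w[3] | (w[3] << 64)                           # ( 0, c3, c3)
    s3 = (w[4] << 64) | (w[4] << 128)                  # (c4, c4,  0)
    s4 = w[5] | (w[5] << 64) | (w[5] << 128)           # (c5, c5, c5)

    r = s1 + s2 + s3 + s4
    while r >= P192:                                   # at most a few subtractions
        r -= P192
    return r


if __name__ == "__main__":
    import random
    for _ in range(1000):
        a, b = random.randrange(P192), random.randrange(P192)
        assert reduce_p192(a * b) == (a * b) % P192
```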
The second platform we targeted is an energy-harvesting wireless sensor node with a 16-bit MSP430 microcontroller and a low-power RF interface. The system derives its power from a solar panel and is constrained in terms of available energy and computational power. We analyze the computation and communication energy requirements of different signature schemes, each with a different trade-off between computation and communication. We investigate the Elliptic Curve Digital Signature Algorithm (ECDSA), the Lamport-Diffie one-time hash-based signature scheme (LD-OTS), and the Winternitz one-time hash-based signature scheme (W-OTS). We demonstrate that there is a trade-off between energy needs, security level, and algorithm selection. However, when we consider the energy needs of the overall system, we show that all schemes are within one order of magnitude of one another.
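To make the hash-based end of that trade-off concrete, the following is a minimal sketch of a Lamport-Diffie one-time signature over SHA-256: key generation and signing cost only hash evaluations and random byte generation, while keys and signatures are large. This is a didactic Python illustration, not the MSP430 implementation evaluated in the thesis.

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()
N = 256  # message-digest length in bits

def keygen():
    """One-time key: two random 32-byte preimages per message-digest bit."""
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(N)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def bits(digest):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]

def sign(message, sk):
    """Reveal one preimage per bit of H(message). The key must never be reused."""
    return [sk[i][b] for i, b in enumerate(bits(H(message)))]

def verify(message, signature, pk):
    return all(H(sig) == pk[i][b]
               for i, (b, sig) in enumerate(zip(bits(H(message)), signature)))

if __name__ == "__main__":
    sk, pk = keygen()
    msg = b"sensor reading #42"
    sig = sign(msg, sk)
    print(verify(msg, sig, pk))                  # True
    print(verify(b"tampered reading", sig, pk))  # False
```

Winternitz one-time signatures shrink the signature by signing several bits per hash chain at the cost of more hash computations, which is precisely the computation-versus-communication balance the energy analysis weighs.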