11 |
Octagon House
Lohr, Jonathan, 22 August 2011 (has links)
No description available.
12 |
Bonsai Merkle Tree Streams: Bulk Memory Verification Unit for Trusted Program Verification System
Rios, Richard J., 01 December 2024 (has links) (PDF)
Today, all modern computing systems are vulnerable to numerous types of attacks that can target any layer of the system, from dedicated hardware to highly abstracted software. Unfortunately, many devices and systems contain inadequately protected components or software modules that undermine their security as a whole. Security also varies heavily from system to system and depends strongly on adequate implementation and ongoing support from device and software manufacturers. To address these issues in a general way, TrustGuard was created: a containment security system built around an external device, called the Sentry, that verifies the activity of the host machine and controls all incoming and outgoing communication accordingly. To do this, TrustGuard uses cryptographic memory protection schemes, a small trusted hardware and software base, and recomputation and checking of the behavior of applications running on the host machine at instruction-by-instruction granularity before allowing external communication to occur. Currently, however, the TrustGuard system only allows one 8-byte chunk to be sent or received externally at a time, limiting overall throughput and heavily polluting the main system caches during large data transfers. To overcome this limitation, this thesis proposes a system for efficient communication of large batches of data at once. In particular, it uses a small dedicated cache and efficient tree-traversal techniques to asynchronously verify large chunks of program memory in a stream-like fashion. This thesis primarily provides a design, a proof of concept, and a collection of information intended to help future students implement such a system.
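To make the streaming-verification idea concrete, here is a minimal sketch, assuming a standard binary Merkle tree and a simple cache of already-authenticated interior nodes; it is not the thesis's Sentry design, and names such as `StreamVerifier` and the caching policy are illustrative assumptions. The verifier trusts only the root plus its node cache, so verifying a run of adjacent chunks stops climbing as soon as it reaches a node it has already authenticated.

```python
import hashlib
from typing import Dict, List, Tuple

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class StreamVerifier:
    """Toy verifier that trusts only the Merkle root plus a small cache of
    interior nodes it has already authenticated (illustrative sketch only)."""

    def __init__(self, root: bytes, tree_height: int, cache_limit: int = 64):
        self.root = root
        self.height = tree_height                      # levels above the leaves
        self.cache: Dict[Tuple[int, int], bytes] = {}  # (level, index) -> verified hash
        self.cache_limit = cache_limit

    def verify_chunk(self, index: int, data: bytes, siblings: List[bytes]) -> bool:
        """Check one memory chunk; `siblings` is the untrusted authentication
        path for leaf `index`, one sibling hash per level."""
        node, level, idx = h(data), 0, index
        path: List[Tuple[int, int, bytes]] = []
        while level < self.height:
            cached = self.cache.get((level, idx))
            if cached is not None:                     # subtree already verified: stop early
                return self._commit(node == cached, path)
            path.append((level, idx, node))
            sib = siblings[level]
            node = h(node, sib) if idx % 2 == 0 else h(sib, node)
            level, idx = level + 1, idx // 2
        return self._commit(node == self.root, path)

    def _commit(self, ok: bool, path: List[Tuple[int, int, bytes]]) -> bool:
        if ok:                                         # remember freshly verified nodes
            for lvl, i, val in path:                   # so neighbouring chunks verify cheaply
                if len(self.cache) < self.cache_limit:
                    self.cache[(lvl, i)] = val
        return ok
```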
13 |
Formas e normas de [jus]validação da informação: das marcas pessoais ao logical e à assinatura digital
Cunha, Mauro Leonardo de Brito Albuquerque, 21 February 2006 (has links)
This dissertation explores the legal validation of information processes, whether the information is legal in itself or rendered legal by the validation process. It therefore has two objectives: to conceptualize legal information processes and to conceptualize the processes of their legal validation. It retraces, step by step, the path from the emergence to the legal validation of information-validation technologies, from prehistoric personal marks up to the asymmetric cryptographic technology that enabled the advent of the digital signature. The concepts of form, norm, pattern and standard are analyzed with a focus on the problem of validation in human processes of information communication.
14 |
LF-PKI: Practical, Secure, and High-Performance Design and Implementation of a Lite Flexible PKI / LF-PKI: Praktisk, säker och Högpresterande design och Implementering av Lite Flexible PKI
Xu, Yongzhe, January 2022 (has links)
Today's Web Public Key Infrastructure (PKI) builds on a homogeneous trust model: all Certificate Authorities (CAs) are equally trusted once they are marked as trusted on the client side. As a result, the security of the Web PKI depends on its weakest CA. Trust heterogeneity and flexibility can be introduced into today's Web PKI to mitigate this problem: each client can place a different level of trust in each trusted CA according to the CA's properties, such as its location, reputation, and scale. The loss caused by the compromise of a less trusted CA is thereby reduced. In this work, we study Flexible-PKI (F-PKI), an enhancement of the Web PKI, and propose Lite Flexible-PKI (LF-PKI) to address the limitations of F-PKI. LF-PKI is designed to securely and efficiently manage domain policies and to enable trust heterogeneity on the client side. A domain owner can issue domain policies for its domains, and the client obtains a complete view of the domain policies issued for a specific domain. Based on the collection of domain policies from LF-PKI, trust heterogeneity can be achieved on the client side: each client chooses domain policies according to the trust levels it assigns to the issuing CAs. On the basis of the LF-PKI design, a high-performance implementation of LF-PKI was developed, optimized, and analyzed. The optimized implementation can provide the LF-PKI services for worldwide domains on a single server with moderate hardware.
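As an illustration of the client-side trust heterogeneity described above, the sketch below shows one way a client could weight CAs and pick among the domain policies it has retrieved. It is a toy model under assumed names (`DomainPolicy`, `CLIENT_CA_TRUST`, `select_policy`), not the LF-PKI implementation itself.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class DomainPolicy:
    domain: str
    issuer_ca: str            # CA that certified this policy
    allowed_cas: List[str]    # CAs the domain owner permits to issue its certificates

# Hypothetical per-client trust assignment: higher means more trusted.
CLIENT_CA_TRUST: Dict[str, int] = {
    "ca-well-audited": 3,
    "ca-regional": 2,
    "ca-unknown": 1,
}

def select_policy(policies: List[DomainPolicy],
                  min_trust: int = 2) -> Optional[DomainPolicy]:
    """Pick the policy backed by the most-trusted CA, ignoring CAs below the
    client's trust threshold; ties are broken by CA name for determinism."""
    acceptable = [p for p in policies
                  if CLIENT_CA_TRUST.get(p.issuer_ca, 0) >= min_trust]
    if not acceptable:
        return None   # e.g. fall back to legacy Web PKI validation
    return max(acceptable,
               key=lambda p: (CLIENT_CA_TRUST.get(p.issuer_ca, 0), p.issuer_ca))
```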
15 |
Approximate Distributed Set Reconciliation with Defined Accuracy
Kruber, Nico, 24 April 2020 (links)
The objective comparison of approximate set reconciliation algorithms is challenging. Each algorithm's behaviour can be tuned for a given use case, e.g. for low bandwidth or low computational overhead, using different sets of parameters. Changes to these parameters, however, often also influence the algorithm's accuracy in recognising differences between the participating sets and thus hinder objective comparisons based on the same level of accuracy.
We develop a method to fairly compare approximate set reconciliation algorithms by enforcing a fixed target accuracy and deriving all accuracy-influencing parameters accordingly. We show the method's universal applicability by applying it to two trivial hash-based algorithms as well as to set reconciliation with Bloom filters and with Merkle trees. Compared to previous work on Merkle trees, we propose choosing hash sizes dynamically within the tree to align the transfer overhead with the desired accuracy, which yields a new Merkle tree reconciliation algorithm with a configurable accuracy target. An extensive evaluation of each of the four algorithms under this accuracy model confirms the method's feasibility and provides a re-assessment of these algorithms.
Our results make it easy to choose an efficient algorithm for practical set reconciliation tasks based on the required level of accuracy. Our way of finding configuration parameters that make different algorithms equally accurate can also be applied to other set reconciliation algorithms and allows their performance to be rated objectively under various metrics. The resulting approximate Merkle tree reconciliation broadens the applicability of Merkle trees and sheds new light on their effectiveness.
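One way to picture the "dynamically sized hashes for a target accuracy" idea is to bound the probability of an undetected difference by a union bound over all hash comparisons and solve for the hash width. The sketch below is an illustrative model only, not the thesis's exact formula; the function names are assumptions.

```python
import hashlib
import math

def node_hash(data: bytes, bits: int) -> int:
    """Truncate a SHA-256 digest to `bits` bits -- a toy stand-in for the
    dynamically sized hashes transferred per tree node."""
    full = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return full >> (256 - bits)

def bits_for_accuracy(expected_comparisons: int, max_failure_prob: float) -> int:
    """Smallest truncated-hash width such that, by a union bound, the chance of
    a single undetected difference across all comparisons stays below target."""
    # P(miss) <= expected_comparisons * 2**(-bits)  =>  bits >= log2(n / p)
    return max(1, math.ceil(math.log2(expected_comparisons / max_failure_prob)))

width = bits_for_accuracy(10_000, 0.001)              # ~10k comparisons, <=0.1% miss chance
print(width, node_hash(b"subtree-summary", width))    # -> 24 and a 24-bit node hash
```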
16 |
Design and implementation of a blockchain shipping application
Bouidani, Maher M., 31 January 2019 (has links)
The emerging blockchain technology has the potential to shift traditional centralized systems toward becoming more flexible, efficient, and decentralized. An important area in which to apply this capability is the supply chain. Supply chain visibility and transparency have become important aspects of a successful supply chain platform as supply chains grow more complex than ever before. The complexity comes from the number of participants involved and the intricate roles and relations among them, which puts more pressure on the system and its customers in terms of system availability and tamper-resistant data. This thesis presents a private, permissioned application that uses blockchain to automate the shipping processes among the different participants in the supply chain ecosystem. Data in this private ledger is governed by the participants' invocations of their smart contracts, which are designed to reflect the participants' different roles in the supply chain. Moreover, this thesis discusses performance measurements of the application in terms of transaction throughput, average transaction latency, and resource utilization.
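The role-gated smart-contract logic described above can be sketched as a simple shipment state machine. The snippet below is a hypothetical model written in plain Python for illustration; the roles, states, and transition table are assumptions, and a real permissioned deployment would express this in the contract API of the chosen blockchain framework.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Which participant role may trigger which shipment transition (hypothetical roles).
ALLOWED_TRANSITIONS: Dict[str, Dict[str, str]] = {
    "created":    {"carrier": "in_transit"},
    "in_transit": {"customs": "cleared"},
    "cleared":    {"carrier": "delivered"},
    "delivered":  {"receiver": "accepted"},
}

@dataclass
class Shipment:
    shipment_id: str
    state: str = "created"
    history: List[str] = field(default_factory=list)   # append-only audit trail

    def invoke(self, role: str, requested_state: str) -> None:
        """Model of a smart-contract invocation: only the role authorised for
        the current state may advance it, mirroring per-participant contracts."""
        nxt = ALLOWED_TRANSITIONS.get(self.state, {}).get(role)
        if nxt != requested_state:
            raise PermissionError(f"{role} may not move {self.state} -> {requested_state}")
        self.history.append(f"{role}:{self.state}->{requested_state}")
        self.state = requested_state

ship = Shipment("SH-001")
ship.invoke("carrier", "in_transit")
ship.invoke("customs", "cleared")
ship.invoke("carrier", "delivered")
print(ship.state, ship.history)
```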
17 |
TCB Minimizing Model of Computation (TMMC)
Bushra, Naila, 13 December 2019 (has links)
The integrity of information systems is predicated on the integrity of the processes that manipulate data. Processes are conventionally executed on the von Neumann (VN) architecture. The VN computation model is plagued by a large trusted computing base (TCB), because memory and input/output devices must be included inside the TCB. This situation is becoming increasingly unjustifiable due to the steady addition of complex features such as platform virtualization and hyper-threading. In this research work, we propose a new model of computation, the TCB-minimizing model of computation (TMMC), which explicitly seeks to minimize the TCB, viz., the hardware and software that need to be trusted to guarantee the integrity of execution of a process. More specifically, in one realization of the model the TCB shrinks to a single low-complexity module; in a second realization it shrinks to nothing, by executing processes in a blockchain network. The practical utilization of TMMC using a low-complexity trusted module, as well as a blockchain network, is detailed in this research work. The utility of the TMMC model in guaranteeing the integrity of execution of a wide range of useful algorithms (graph algorithms, computational geometry algorithms, NP algorithms, etc.), and of complex large-scale processes composed of such algorithms, is investigated.
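A rough way to picture the "low-complexity trusted module" realization is a module that stores nothing but a running hash over the execution trace, so that memory and I/O stay outside the TCB and any verifier can recompute the same commitment from the untrusted trace. The sketch below is only an illustrative hash-chain model under assumed names, not the TMMC construction itself.

```python
import hashlib

def step_digest(prev: bytes, state_before: bytes, state_after: bytes) -> bytes:
    # Chain one execution step into a running commitment (simplified: no length framing).
    return hashlib.sha256(prev + state_before + state_after).digest()

class TinyVerifier:
    """Stand-in for a low-complexity trusted module: it stores only a single
    running hash, so memory and I/O need not be trusted."""
    def __init__(self):
        self.commitment = b"\x00" * 32

    def absorb(self, state_before: bytes, state_after: bytes) -> None:
        self.commitment = step_digest(self.commitment, state_before, state_after)

def untrusted_replay(trace):
    """Re-derive the commitment from an execution trace supplied by the
    (untrusted) executor; a match with the module's value vouches for the run."""
    c = b"\x00" * 32
    for before, after in trace:
        c = step_digest(c, before, after)
    return c

module = TinyVerifier()
trace = [(b"pc=0", b"pc=1"), (b"pc=1", b"pc=2")]
for before, after in trace:
    module.absorb(before, after)
assert untrusted_replay(trace) == module.commitment
```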
18 |
Improved Internet Security Protocols Using Cryptographic One-Way Hash Chains
Alabrah, Amerah, 01 January 2014 (has links)
In this dissertation, new approaches that utilize one-way cryptographic hash functions to design improved network security protocols are investigated. The proposed approaches are designed to be scalable and easy to implement with modern technology. The first contribution explores session cookies, with emphasis on the threat of session hijacking attacks resulting from session cookie theft or sniffing. In the proposed scheme, these cookies are replaced by easily computed authentication credentials based on Lamport's well-known one-time passwords. The basic idea revolves around sparse caching units, where authentication credentials pertaining to cookies are stored and fetched when needed, thereby mitigating the computational overhead generally associated with one-way hash constructions. The second and third proposed schemes rely on dividing the one-way hash construction into a hierarchical two-tier construction, where each tier is responsible for some aspect of authentication and uses a different hash function. By utilizing different cryptographic hash functions arranged in two tiers, the hierarchical two-tier protocol (our second contribution) gives a significant performance improvement over previously proposed solutions for securing Internet cookies. The third contribution achieves further improvement by indexing authentication credentials by their position within the hash chain in a multi-dimensional chain. In the fourth proposed scheme, the one-way hash construction is applied to user and broadcast authentication in wireless sensor networks; because of known energy and memory constraints, the scheme is modified to mitigate computational overhead so that it can be applied in this setting. The fifth scheme combines the benefits of the sparse cache-supported scheme and the hierarchical scheme, achieving efficient performance at the lowest possible caching cost. In the sixth proposal, an authentication scheme tailored for the multi-server single sign-on (SSO) environment is presented; it utilizes the one-way hash construction in a Merkle hash tree and a hash calendar to prevent impersonation and session hijacking attacks, and it also explores the optimal configuration of the one-way hash chain in this environment. All the proposed protocols are validated by extensive experimental analyses, obtained by running simulations of the many scenarios envisioned and supported by analytical models derived from mathematical formulas that take the environment under investigation into account.
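For readers unfamiliar with Lamport-style chains, the sketch below shows the basic one-time-password mechanism with sparse checkpoints, the flavour of caching the first and fifth contributions build on. It is a simplified model under assumed names (`ChainClient`, `ChainVerifier`), not the dissertation's exact sparse-caching layout.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class ChainClient:
    """One-time-password chain with sparse checkpoints: deriving the next
    credential costs at most `stride` hashes instead of re-walking the chain."""

    def __init__(self, seed: bytes, length: int, stride: int = 16):
        self.stride = stride
        self.checkpoints = {0: seed}          # H^i(seed) for i = 0, stride, 2*stride, ...
        v = seed
        for i in range(1, length + 1):
            v = h(v)
            if i % stride == 0:
                self.checkpoints[i] = v
        self.anchor = v                       # H^length(seed), handed to the verifier once
        self.next_index = length - 1          # credentials are revealed in reverse order

    def _value_at(self, i: int) -> bytes:
        base = (i // self.stride) * self.stride
        v = self.checkpoints[base]
        for _ in range(i - base):             # at most stride - 1 hash evaluations
            v = h(v)
        return v

    def next_credential(self) -> bytes:
        cred = self._value_at(self.next_index)
        self.next_index -= 1                  # never reuse a chain position
        return cred

class ChainVerifier:
    def __init__(self, anchor: bytes):
        self.expected = anchor

    def check(self, credential: bytes) -> bool:
        if h(credential) != self.expected:
            return False
        self.expected = credential            # accept, then move the anchor down the chain
        return True

client = ChainClient(b"secret-seed", length=1000)
server = ChainVerifier(client.anchor)
assert server.check(client.next_credential())  # H^999(seed)
assert server.check(client.next_credential())  # H^998(seed)
```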
19 |
Bases de Datos NoSQL: escalabilidad y alta disponibilidad a través de patrones de diseño (NoSQL databases: scalability and high availability through design patterns)
Antiñanco, Matías Javier, 09 June 2014 (has links)
This work presents a catalogue of techniques and design patterns currently applied in NoSQL databases. The proposed approach consists of a presentation of the state of the art of NoSQL databases, an exposition of the related key concepts, and a subsequent presentation of a set of techniques and design patterns aimed at scalability and high availability.
To that end:
• The main characteristics of NoSQL databases are briefly described, together with the factors that motivated their emergence and their differences from their relational counterparts; the CAP theorem is presented and the ACID properties are contrasted with BASE (a small quorum-based sketch illustrating this trade-off follows the list).
• The problems that motivate the techniques and design patterns to be described are introduced.
• Techniques and design patterns that address these problems are presented.
• Finally, the work concludes with an integrative analysis and points out other relevant research topics.
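As a hedged illustration of the consistency/availability trade-off referenced in the list above, the toy quorum store below uses tunable read/write quorums (R and W out of N replicas): with R + W > N a read always overlaps the latest write, while smaller quorums tolerate more unreachable replicas. The class and its parameters are assumptions for illustration, not a pattern taken verbatim from the catalogue.

```python
from typing import Dict, List, Optional, Tuple

class QuorumStore:
    """Toy replicated key-value store with tunable quorums (R, W out of N)."""

    def __init__(self, n: int = 3, w: int = 2, r: int = 2):
        self.replicas: List[Dict[str, Tuple[int, str]]] = [{} for _ in range(n)]
        self.n, self.w, self.r = n, w, r
        self.clock = 0                      # stand-in for per-write version numbers

    def put(self, key: str, value: str, reachable: List[int]) -> bool:
        if len(reachable) < self.w:
            return False                    # not enough replicas: refuse (favour consistency)
        self.clock += 1
        for i in reachable[: self.w]:
            self.replicas[i][key] = (self.clock, value)
        return True

    def get(self, key: str, reachable: List[int]) -> Optional[str]:
        if len(reachable) < self.r:
            return None
        versions = [self.replicas[i][key] for i in reachable[: self.r]
                    if key in self.replicas[i]]
        return max(versions)[1] if versions else None   # newest version wins

store = QuorumStore(n=3, w=2, r=2)
store.put("user:1", "alice", reachable=[0, 1, 2])
print(store.get("user:1", reachable=[1, 2]))   # 'alice' -- read quorum overlaps the write
```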
20 |
Preserving privacy with user-controlled sharing of verified information
Bauer, David Allen, 13 November 2009 (has links)
Personal information, especially certified personal information, can be very valuable to its subject, but it can also be abused by other parties for identity theft, blackmail, fraud, and more. One partial solution to the problem is credentials, whereby personal information is tied to identity, for example by a photo or signature on a physical credential.
We present an efficient scheme for large, redactable, digital credentials that allow certified personal attributes to safely be used to provide identification. A novel method is provided for combining credentials, even when they were originally issued by different authorities. Compared to other redactable digital credential schemes, the proposed scheme is approximately two orders of magnitude faster, due to aiming for auditability over anonymity. In order to expand this scheme to hold other records, medical records for example, we present a method for efficient signatures on redactable data where there are dependencies between different pieces of data. Positive results are shown using both artificial datasets and a dataset derived from a Linux package manager.
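The redaction idea can be illustrated with a hash-tree commitment over salted attributes: the issuer signs only the root, and the holder discloses a subset of attributes while substituting leaf hashes for the rest. This is a minimal sketch of redaction-by-hashing under assumed helper names, not the dissertation's scheme (which additionally targets combining credentials from different authorities and auditability).

```python
import hashlib
import os
from typing import List

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def leaf(name: str, value: str, salt: bytes) -> bytes:
    # Salting hides redacted values from brute-force guessing.
    return h(salt, name.encode(), value.encode())

def build_root(leaves: List[bytes]) -> bytes:
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Issuer: commit to all attributes; the root is what would actually be signed.
attributes = {"name": "A. Example", "dob": "1990-01-01", "licence": "class B"}
salts = {k: os.urandom(16) for k in attributes}
ordered = sorted(attributes)                # fixed order so the root is reproducible
leaves = [leaf(k, attributes[k], salts[k]) for k in ordered]
signed_root = build_root(leaves)

# Holder: disclose only 'licence', replacing the rest with their leaf hashes.
disclosed = {"licence": (attributes["licence"], salts["licence"])}
presented = [leaf(k, *disclosed[k]) if k in disclosed else leaves[i]
             for i, k in enumerate(ordered)]

# Verifier: recompute the root from the mixed disclosed/redacted leaves.
assert build_root(presented) == signed_root
```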
Electronic credentials must of course be held in a physical device with electronic memory. To hedge against the loss or compromise of the physical device holding a user's credentials, the credentials may be split up. An architecture is developed and prototyped for using split-up credentials, with part of the credentials held by a network attached agent. This architecture is generalized into a framework for running identity agents with various capabilities. Finally, a system for securely sharing medical records is built upon the generalized agent framework. The medical records are optionally stored using the redactable digital credentials, for source verifiability.