51 |
Analysis of Non-Interactive Zero Knowledge Proof
Hegde, Suprabha Shreepad, 02 November 2018
No description available.
|
52 |
A Study on Hash-based Signature Schemes / ハッシュ関数に基づく署名方式の研究
YUAN, QUAN, 26 September 2022
Kyoto University / New-system doctoral program / Doctor of Informatics / Degree No. Kō 24258 / Jōhaku No. 802 / call number 新制||情||135 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / Examining committee: Prof. Takayuki Kanda, Prof. Masatoshi Yoshikawa, Prof. Ken Umeno / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
53 |
An Ethnographic Look at Rabbit Hash, Kentucky
Clare, Callie E., 26 June 2007
No description available.
|
54 |
Talk to your neighbors : A study on groupings in distributed hash-tables to provide efficient IoT interactions
Denison, Timothy, January 2022
With the growth of internet-connected devices that accompanies the expanding IoT field, a large amount of research is being conducted on the topic. While much has been done to make these systems more scalable and resilient by replacing the standard centralized architecture with a decentralized one, the applied models mostly focus on the implementation details of such a system, and little thought is given to the algorithms used to structure the architecture itself. Instead, one of the many already-defined protocols is used, and the system is built around it. These protocols, however elegant and ingenious in their own right, were originally intended for other applications and take no advantage of the domain specifics of IoT, so the implemented solutions are sub-optimal in terms of performance and overhead. This thesis attempts to bridge that gap by first providing data on an existing IoT system, and then using the data to motivate modifications of the prevailing protocol for decentralized peer-to-peer architectures. This is done by introducing groups into the ID scheme of the system, substantially modifying the internal structure and forcing devices with an interest in each other to be placed close together in the structure, as sketched below. The consequence is a major reduction of overhead in searching for devices, bringing the total number of devices that must be contacted in normal use-cases down substantially.
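The grouping idea can be illustrated with a minimal sketch (hypothetical names and sizes; the thesis itself does not publish code). A group identifier is hashed into the high-order bits of each node ID, so devices in the same group occupy one contiguous segment of a Chord-style ring, and lookups that stay inside a group only traverse that segment:

```python
import hashlib

ID_BITS = 32       # total identifier width (toy value)
GROUP_BITS = 8     # high-order bits reserved for the group prefix

def _h(data: bytes, bits: int) -> int:
    """Truncate a SHA-1 digest to the requested number of bits."""
    return int.from_bytes(hashlib.sha1(data).digest(), "big") >> (160 - bits)

def node_id(group: str, device: str) -> int:
    """Group-prefixed ID: devices sharing a group land in one ring segment."""
    prefix = _h(group.encode(), GROUP_BITS)
    suffix = _h(device.encode(), ID_BITS - GROUP_BITS)
    return (prefix << (ID_BITS - GROUP_BITS)) | suffix

# Devices that frequently interact share a prefix, so they sit close
# together on the ring and can reach each other in few hops.
for dev in ("lamp-1", "lamp-2", "sensor-7"):
    print(dev, hex(node_id("livingroom", dev)))
```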
|
55 |
Generative models meet similarity search: efficient, heuristic-free and robust retrieval
Doan, Khoa Dang, 23 September 2021
The rapid growth of digital data, especially visual and textual contents, brings many challenges to the problem of finding similar data. Exact similarity search, which exhaustively finds all relevant items through a linear scan of a dataset, is impractical due to its high computational complexity. Approximate-nearest-neighbor (ANN) search methods, especially learning-to-hash or hashing methods, provide principled approaches that balance the trade-off between the quality of the results and the computational cost for web-scale databases. In this era of data explosion, it is crucial for hashing methods to be both computationally efficient and robust to scenarios such as noisy data or data that slightly changes over time (i.e., out-of-distribution data).
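As a toy illustration of the hashing idea (plain random-projection hashing, not the learned methods this dissertation proposes), the sketch below maps vectors to short binary codes and retrieves neighbors by Hamming distance, which is why the lookup cost stays low even for web-scale databases:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 128, 32                    # feature and code dimensions
W = rng.standard_normal((d, n_bits))   # random projections as the "hash function"

def hash_codes(X: np.ndarray) -> np.ndarray:
    """Map real-valued vectors to binary codes via the sign of projections."""
    return (X @ W > 0).astype(np.uint8)

database = rng.standard_normal((10_000, d))
codes = hash_codes(database)

query = database[42] + 0.1 * rng.standard_normal(d)  # noisy duplicate of item 42
q_code = hash_codes(query[None, :])

hamming = (codes != q_code).sum(axis=1)  # cheap bit comparisons only
print("top-5 candidates:", np.argsort(hamming)[:5])  # item 42 should rank near the top
```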
This thesis focuses on the development of practical generative learning-to-hash methods and explainable retrieval models. We first identify and discuss the various aspects where the framework of generative modeling can be used to improve the model designs and generalization of hashing methods. We then show that these generative hashing methods enjoy several appealing empirical and theoretical properties of generative modeling. Specifically, the proposed generative hashing models generalize better, with important properties such as low sample requirements and robustness to out-of-distribution and corrupted data. Finally, in domains with structured data such as graphs, we show that the computational methods of generative modeling have a useful role beyond estimating the data distribution, and we describe a retrieval framework that can explain its decisions by borrowing algorithmic ideas developed in these methods.
Two subsets of generative hashing methods and a subset of explainable retrieval methods are proposed. For the first hashing subset, we propose a novel adversarial framework that can be easily adapted to a new problem domain, along with three training algorithms that learn the hash functions without several hyperparameters commonly found in previous hashing methods. The contributions of our work include: (1) Propose novel algorithms, based on adversarial learning, to learn the hash functions; (2) Design Wasserstein-related adversarial approaches with low computational and sample complexity; (3) Conduct extensive experiments on several benchmark datasets in various domains, including computational advertising and text and image retrieval, for performance evaluation. For the second hashing subset, we propose energy-based hashing solutions that improve the generalization and robustness of existing hashing approaches. The contributions of our work for this task include: (1) Propose data-synthesis solutions to improve the generalization of existing hashing methods; (2) Propose energy-based hashing solutions that exhibit better robustness against out-of-distribution and corrupted data; (3) Conduct extensive experiments for performance evaluation on several benchmark datasets in the image retrieval domain.
Finally, for the last subset of explainable retrieval methods, we propose an optimal alignment algorithm that achieves a better similarity approximation for a pair of structured objects, such as graphs, while capturing the alignment between the nodes of the graphs to explain the similarity calculation. The contributions of our work for this task include: (1) Propose a novel optimal alignment algorithm for comparing two sets of bag-of-vectors embeddings; (2) Propose a differentiable computation to learn the parameters of the proposed optimal alignment model; (3) Conduct extensive experiments, for performance evaluation of both the similarity approximation task and the retrieval task, on several benchmark graph datasets. / Doctor of Philosophy / Searching for similar items, or similarity search, is one of the fundamental tasks in this information age, especially given the rapid growth of visual and textual contents. For example, in a search engine such as Google, a user searches for images with similar content to a referenced image; in online advertising, an advertiser finds new users, and eventually targets these users with advertisements, where the new users have similar profiles to some referenced users who have previously responded positively to the same or similar advertisements; in the chemical domain, scientists search for proteins with a similar structure to a referenced protein. The practical search applications in these domains often face several challenges, especially when these datasets or databases contain a large number (e.g., millions or even billions) of complex-structured items (e.g., texts, images, and graphs). These challenges can be organized into three central themes: search efficiency (the economical use of resources such as computation and time), model-design effort (the ease of building the search model), and explainability (the ability of a search model to explain its results). Explainability is an increasingly common requirement, especially in scientific domains where the items are structured objects such as graphs.
This dissertation tackles the aforementioned challenges in practical search applications by using computational techniques that learn to generate data. First, we overcome the need to scan the entire large dataset for similar items by considering an approximate similarity search technique called hashing. Then, we propose an unsupervised hashing framework that learns the hash functions with simpler objective functions directly from raw data. The proposed retrieval framework can be easily adapted to new domains with significantly lower effort in model design. When labeled data is available but limited (a common scenario in practical search applications), we propose a hashing network that can synthesize additional data to improve the hash function learning process. The learned model also exhibits significant robustness against data corruption and slight changes in the underlying data. Finally, in domains with structured data such as graphs, we propose a computational approach that can simultaneously estimate the similarity of structured objects and capture the alignment between their substructures (e.g., nodes). The alignment mechanism can help explain why two objects are similar or dissimilar. This is a useful tool for domain experts who not only want to search for similar items but also want to understand how the search model makes its predictions.
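As a rough sketch of alignment-based similarity (the dissertation's own optimal-alignment algorithm and its differentiable learning procedure are not reproduced here), an optimal one-to-one matching between two graphs' node embeddings can be computed with the Hungarian algorithm; the returned matching is exactly the kind of evidence that makes the score explainable:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def aligned_similarity(A: np.ndarray, B: np.ndarray):
    """Match each node embedding in A to one in B, maximizing total cosine
    similarity; the matching itself explains the resulting score."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sim = A @ B.T                              # pairwise cosine similarities
    rows, cols = linear_sum_assignment(-sim)   # Hungarian algorithm (maximize)
    return sim[rows, cols].mean(), list(zip(rows, cols))

rng = np.random.default_rng(1)
g1 = rng.standard_normal((5, 16))  # node embeddings of graph 1
g2 = rng.standard_normal((5, 16))  # node embeddings of graph 2
score, alignment = aligned_similarity(g1, g2)
print(f"similarity={score:.3f}, node alignment={alignment}")
```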
|
56 |
Geotechnical Investigation and Characterization of Bivalve-Sediment Interactions
Consolvo, Samuel Thomas, 24 June 2020
Scour around important foundation elements for bridges and other coastal infrastructure is the leading cause of failure and instability of those structures. Traditional scour mitigation methods, such as the placement of riprap, the use of collars or slots, embedding foundations deeper, or a combination thereof, can be costly, require long-term maintenance, and can have detrimental environmental effects downstream. These difficulties with traditional methods are potentially alleviated by implementing self-sustaining bivalve (e.g., mussel, oyster, scallop) farms that could act as mats of interconnected living barriers, protecting the seabed from scour. The mats would help attract larval settlement by making the substrate a more suitable habitat, contributing to the sustainability of the bivalve farms. Colonies of bivalves are already being used as living shorelines for shoreline-retreat mitigation, embankment stabilization, and supporting habitat for other marine life. These applications are enabled, in part, by bivalves' strong attachment capabilities: the bioadhesives they secrete act as a strong underwater glue, adhering their shells to granular substrate. Some species of mussels have been shown to withstand water flow velocities greater than 6 m/s without detaching. For reference, boulder-sized riprap with a median grain size of about 655 mm has been shown to require a flow velocity of at least 1.7 m/s for incipient motion. In addition to the contiguous living bivalve mat offering scour protection, the whole or fragmented shells (i.e., shell hash) left behind by dead bivalves are hypothesized to reduce erosion potential. Shell hash-laden sediments should better withstand shearing, thereby increasing the critical shear stress required to erode material, compared to sediment without shell hash.
Habitat suitability for bivalve colonies is also an important consideration to evaluate what surface enhancements may be needed for a site to be selected for implementation of bivalve scour mats. Bed surfaces that consist of unconsolidated fine-grained sediment are unlikely to be able to support bivalve species as the organisms could sink into the sediment, not allowing solid anchoring points. In contrast, harder substrates typically found in granular sediments offer much more suitable habitats. Along with testing the influence of shell hash and bioadhesive on sediment behavior, this thesis aims to establish a methodology to evaluate whether a section of seafloor can support bivalves or enhancement materials (e.g., shell, shale, or slag fragments) without them sinking, thereby depriving them of oxygen.
Together, examining the geotechnical aspects of bivalve habitat enhancement through seabed soil alteration and the influence of shell hash and bioadhesives on sediment shear behavior forms part of a novel multidisciplinary approach toward this proposed bioengineered scour solution. Consequently, the research objectives explored in this thesis are as follows: (1) characterize the morphology of existing bivalve colonies through acoustic and direct field measurements; (2) evaluate the spatial variation of the sediment shear strength in terms of proximity to bivalve colonies; (3) expand the domain of confining pressures and shell hash weight fractions used in sediment strength testing; (4) quantify the changes in shear strength and erodibility from laboratory tests on sampled material with and without the presence of bioadhesives, as well as shell fragments mixed in with the sediment; and (5) develop a ranking methodology for the suitability of surficial sediments to support seeding material that improves benthic habitat substrates.
Three exploratory field surveys were conducted where colonies of oysters and other benthic life were present: in the Piankatank River in Virginia, in the Northwest Arm of the Sydney Harbour in Nova Scotia, Canada, and at the Rachel Carson Reserve in North Carolina. Field sampling techniques included Ponar grab samples, hand-dug samples, X-ray rectangular prism cores, and cylindrical push cores, all pivotal to understanding sediment composition, particle size and shape distributions, and in-situ depth profiles of shells. Remote sensing and intrusive instrumentation included a rotary scanning sonar, acoustic Doppler current profilers, CTD (Conductivity, Temperature, Depth) probes, underwater cameras, a portable free-fall penetrometer, and in-situ jet erosion testing, which helped characterize the morphology of the bivalve colonies and the spatial variability of sediment strength. Subsequent laboratory experiments included grain size distribution analyses, vacuum triaxial tests to measure changes in shear strength with and without shell hash, and miniature vane and pocket erodometer tests on bioadhesive-treated sediments. The results showed: (1) a significant increase in the standard deviation of the backscatter intensity where the oyster reef was located; (2) the in-situ sediment shear strength increased slightly closer to the oyster reef at the Piankatank River site; (3) samples with a higher oyster density exhibited less uniform particle size distributions; (4) the presence of less than approximately 4% (by weight) of shell fragments increased the secant friction angle by approximately 6° relative to samples with no shell fragments; and (5) the harbor bed of the Northwest Arm of the Sydney Harbour has a suitable stiffness for enhancement with shell hash over about 23% of its area. Preliminary testing showed a subtle increase in torsional shear resistance and a decrease in erodibility for bioadhesive-treated samples; however, further testing is needed before confidence can be placed in the results, owing to bioadhesive supply issues. / Master of Science / Oysters and mussels are aquatic mollusks (i.e., bivalves) known to withstand strong storm flows without detaching from rocks and other hard surfaces. This ability, together with the increasing need for environmentally and ecologically friendly solutions in engineering and construction, a need further accelerated by climate change and sea level rise, motivates studying whether bivalves can be used in this capacity. Traditional methods to protect against bridge failures, caused by individual piers becoming unstable as sediment erodes away from their bases, can be costly, require long-term maintenance efforts, and can have detrimental environmental impacts. As an alternative or supplement to traditional methods, bivalves could be laid down in mats near the base of piers to act as a protective interconnected layer, diverting strong water flows away from the otherwise exposed, erosion-susceptible sediments while strengthening the seabed.
Much is known about the biology of bivalves, but how these organisms influence the sediments near them has not been studied extensively from a geotechnical engineering perspective. Within geotechnical engineering, this study focuses primarily on how the engineering behavior of nearby sediments is influenced by oyster shell fragments, naturally found in the vicinity of bivalve colonies, and by the organic glue that bivalves use to attach themselves to rocks. Secondary to that main objective is establishing a methodology to evaluate whether a section of seafloor can support bivalves without them sinking and thereby suffocating. In summary, this thesis investigates methods to evaluate whether the seafloor is suitable for supporting bivalves and whether their presence changes the way sediments behave under various applied forces.
To accomplish these research goals, three exploratory field surveys were conducted for this thesis: in the Piankatank River in Virginia, in the Northwest Arm of the Sydney Harbour in Nova Scotia, Canada, and at the Rachel Carson Reserve in North Carolina, where bivalves were present. Through field sediment sampling, underwater sonar imagery, penetrating probes, and subsequent geotechnical laboratory testing, shell-sediment interactions were characterized. The results showed: (1) an oyster reef in the Piankatank River could be observed in great detail with sonar imagery; (2) sediment strength increased slightly closer to the oyster reef; samples with more oyster shells in them exhibited (3) a wider range of particle sizes and (4) an increase in sediment strength; and (5) less than a quarter of the harbor bed of the Northwest Arm of the Sydney Harbour is suitable for armoring the seafloor with pieces of shell, shale, and slag to support bivalve growth. Initial tests with the organic underwater glue from bivalves showed promising results with respect to improved sediment strength and decreased erodibility; however, further testing is needed, as the supply of the organic glue was limited.
|
57 |
Fair Comparison of ASIC Performance for SHA-3 Finalists
Zuo, Yongbo, 22 June 2012
In the last few decades, cryptographic algorithms have played an irreplaceable role in protecting private information, with applications ranging from AES in modems to online bank transactions. The increasing deployment of these algorithms in hardware has made ASIC benchmark implementations extremely important. Although secure algorithms of all kinds have been implemented in various devices, the effects of different constraints on ASIC implementation performance have not been systematically explored before.
In order to analyze the effects of different constraints on secure algorithms, the SHA-3 finalists, which include Blake, Groestl, Keccak, JH, and Skein, were chosen for implementation in the experiments of this thesis.
This thesis first explores the effects of different synthesis constraints on ASIC performance, such as the performance obtained when constraining for frequency or for maximum area. After that, the effects of choosing various standard libraries were tested; for instance, the performance of the UMC 130nm and IBM 130nm standard libraries was compared. Additionally, the effects of different technology nodes were analyzed, using the 65nm, 90nm, 130nm, and 180nm UMC libraries. Finally, to further understand the effects, post-layout analysis was explored. While some algorithms remain unaffected by floorplan shape, others show a preference for a specific shape; JH, for example, shows a 12% increase in throughput/area with a 1:2 rectangle compared to a square.
Throughout this thesis, the effects of different ASIC implementation factors are comprehensively explored, along with the details of the methodology, the metrics, and the framework of the experiments. Detailed experimental results and analysis are discussed in the following chapters. / Master of Science
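For readers unfamiliar with the metrics, the sketch below computes throughput and throughput/area the way hardware benchmarking studies commonly define them; the numbers are invented for illustration and are not the thesis's measured results:

```python
def throughput_bps(freq_hz: float, block_bits: int, cycles_per_block: int) -> float:
    """Bits hashed per second at a given clock frequency."""
    return freq_hz * block_bits / cycles_per_block

# Hypothetical post-synthesis figures (NOT measurements from this thesis).
candidates = {
    # name:   (freq Hz, block bits, cycles/block, area in gate equivalents)
    "Keccak": (500e6, 1088, 24, 48_000),
    "JH":     (380e6,  512, 42, 52_000),
}
for name, (f, b, c, area) in candidates.items():
    tp = throughput_bps(f, b, c)
    print(f"{name}: {tp / 1e9:.2f} Gbit/s, "
          f"throughput/area = {tp / area / 1e3:.1f} kbit/s per GE")
```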
|
58 |
Resgate de autoria em esquemas de assinatura em anel / Retrieving authorship from ring signature schemes
Antonio Emerson Barros Tomaz, 23 May 2014
A proposta apresentada nesta dissertação representa uma expansão do conceito original de assinatura em anel. Um esquema de assinatura em anel permite que um membro de um grupo divulgue uma mensagem anonimamente, de tal forma que cada um dos membros do grupo seja considerado o possível autor da mensagem. A ideia principal de uma assinatura em anel é garantir o anonimato do assinante e ainda garantir a autenticidade da informação, mostrando que a mensagem partiu de um dos membros do referido grupo. Esta dissertação apresenta um esquema de assinatura em anel baseado no esquema de Rivest et al. (2001), em que o assinante pode, mais tarde, revogar seu anonimato apresentando valores secretos que provam que somente ele seria capaz de gerar tal assinatura. Esta propriedade será chamada aqui de resgate de autoria. A principal diferença em relação ao trabalho de Rivest et al. (2001) é apresentada antes mesmo de começar a geração da assinatura. Os valores utilizados como entrada para a função trapdoor serão códigos de autenticação de mensagem (MACs) gerados pelo algoritmo HMAC, um algoritmo de autenticação de mensagem baseado em função hash resistente à colisão. Essa modificação simples permitirá que, no futuro, o assinante se revele como o verdadeiro autor da mensagem apresentando os valores secretos que geraram os MACs. / The proposal presented in this thesis represents an expansion of the original concept of a ring signature. A ring signature scheme allows a member of a group to publish a message anonymously, so that each member of the group can be considered the possible author of the message. The main idea of a ring signature is to guarantee the anonymity of the signer while still ensuring the authenticity of the information, showing that the message came from one of the members of that group. This thesis presents a signature scheme based on Rivest et al. (2001), in which the signer can later revoke his anonymity by presenting secret values that prove only he would have been able to generate such a signature. This property will be referred to here as rescue of authorship. The main difference from the proposal of Rivest et al. (2001) appears before signature generation even begins. The values used as input to the trapdoor function are message authentication codes (MACs) generated by the HMAC algorithm, a message authentication algorithm based on a collision-resistant hash function. This simple modification will later allow the signer to reveal himself as the true author of the message by presenting the secret values that generated those MACs.
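The core mechanism can be sketched as follows; this is a deliberate simplification (a real ring signature embeds these values in a Rivest-style ring equation, omitted here), and all names are illustrative:

```python
import hashlib
import hmac
import secrets

message = b"leaked document"
ring = ["alice", "bob", "carol"]  # the apparent possible authors

# At signing time, the true signer derives the ring inputs as HMACs under
# keys only he knows, instead of the usual throwaway random values.
mac_keys = {m: secrets.token_bytes(32) for m in ring}
trapdoor_inputs = {
    m: hmac.new(mac_keys[m], message, hashlib.sha256).digest() for m in ring
}
# ... trapdoor_inputs would feed the ring equation exactly like random x_i ...

def verify_authorship_claim(revealed_keys: dict) -> bool:
    """Rescue of authorship: recomputing every MAC from the revealed keys
    shows the inputs were not random draws, so only their creator could
    have produced the signature."""
    return all(
        hmac.compare_digest(
            hmac.new(revealed_keys[m], message, hashlib.sha256).digest(),
            trapdoor_inputs[m],
        )
        for m in ring
    )

print(verify_authorship_claim(mac_keys))  # True only for the real signer's keys
```

Because HMAC is deterministic under a fixed key, recomputation from the revealed keys convinces a verifier that the claimant, and not another ring member, generated the signature's inputs.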
|
59 |
Ανάπτυξη σε FPGA κρυπτογραφικού συστήματος για υλοποίηση της JH hash function
Μπάρδης, Δημήτριος, 31 May 2012
Στόχος της παρούσας Διπλωματικής Εργασίας είναι ο σχεδιασμός και υλοποίηση ενός Κρυπτογραφικού Συστήματος με βάση τον Αλγόριθμο κατακερματισμού JH. Ο σχεδιασμός του κρυπτογραφικού αυτού συστήματος έγινε με τη χρήση γλώσσας VHDL (Very High Speed Integrated Circuits hardware description language) και στη συνέχεια η υλοποίηση αυτή έγινε πάνω σε πλατφόρμα FPGA (Field Programmable Gate Array).
Ο αλγόριθμος JH είναι ένας αλγόριθμος κατακερματισμού (hash function) ο οποίος σχεδιάστηκε στα πλαίσια του διαγωνισμού κρυπτογραφίας NIST (National Institute of Standards and Technology). Η πρώτη του έκδοση έγινε στις 31 Οκτωβρίου 2008 ενώ η τελική του έκδοση έγινε στις 16 Ιανουαρίου 2011. Ο αλγόριθμος JH έχει τέσσερις υποκατηγορίες: JH-224, JH-256, JH-384 και JH-512.
Βασικό χαρακτηριστικό του αλγορίθμου αυτού είναι το γεγονός ότι οι λειτουργίες που συμβαίνουν σε κάθε γύρο είναι ίδιες. Επίσης σημαντικό γνώρισμα είναι η ασφάλεια που παρέχει ο αλγόριθμος αυτός, καθώς ο μεγάλος αριθμός των ενεργών S-boxes που χρησιμοποιούνται και ταυτόχρονα το γεγονός ότι σε κάθε γύρο χρησιμοποιείται ένα διαφορετικό κλειδί, το οποίο παράγεται εκείνη τη στιγμή και δεν είναι αποθηκευμένο σε ένα σημείο στο οποίο θα μπορούσε κάποιος να επέμβει, κάνουν το σύστημά μας εξαιρετικά δυνατό και ανθεκτικό απέναντι σε επιθέσεις όπως είναι η διαφορική κρυπτανάλυση.
Για την εξακρίβωση της ορθής λειτουργίας του συστήματος χρησιμοποιήθηκε μία υλοποίηση του Αλγορίθμου JH σε γλώσσα C. Χρησιμοποιώντας την υλοποίηση αυτή κάθε φορά που θέλουμε να κρυπτογραφήσουμε ένα μήνυμα το οποίο είναι μία σειρά από bit, λαμβάνουμε το κρυπτογραφημένο μήνυμα. Αυτο το κρυπτογραφημένο μήνυμα το συγκρίνουμε με αυτό που παίρνουμε στην έξοδο του συστήματος JH που σχεδιάσαμε και με αυτό το τρόπο επιβεβαιώνουμε την ορθότητα του αποτελέσματος.
Ύστερα από την non-pipelined υλοποίηση του συστήματος αυτού, χρησιμοποιήθηκε η τεχνική της συσωλήνωσης (pipeline). Πιο συγκεκριμένα εγιναν 4 διαφορετικές pipelined υλοποιήσεις με 2,3,6 και 7 στάδια. Σκοπός είναι για κάθε μία pipelined υλοποίηση να γίνει έλεγχος σε θέματα απόδοσης, κατανάλωσης ισχύος καθώς επίσης και σε θέματα επιφάνειας. Στη συνέχεια γίνεται μία σύγκριση στα προαναφερθέντα θέματα μεταξύ των διαφορετικών pipelined υλοποιήσεων και με την non-pipelined υλοποίηση του κρυπτογραφικού συστήματος JH. Επίσης αξίζει να σημειωθεί πώς γίνεται ιδιαίτερη αναφορά στο throughput και στο throughput per area των pipelined υλοποιήσεων.
Από τα πειραματικά αποτελέσματα που προέκυψαν η JH NON PIPELINED υλοποίηση έχει απόδοση 97 MHz με κατανάλωση ισχύος 137mW και συνολική επιφάνεια 2284 slices σε SPARTAN 3E FPGA συσκευή. Ενώ από την ανάλυση της JH NON PIPELINED υλοποίησης και των 4 pipelined υλοποιήσεων σε 4 διαφορετικά FPGA (2 της οικογένειας SPARTAN και 2 της οικογένειας VIRTEX) συμπεραίνουμε πως στην οικογένεια VIRTEX η κατανάλωση ισχύος είναι πάντα μεγαλύτερη σε σχέση με την οικογένεια SPARTAN. / The purpose of this thesis project is the design and implementation of a cryptographic system based on the JH hash algorithm. The design was written in VHDL (Very High Speed Integrated Circuits Hardware Description Language) and then implemented on an FPGA (Field Programmable Gate Array) platform. The JH algorithm is a hash function developed for the NIST (National Institute of Standards and Technology) cryptographic hash competition. Its first version was released on 31 October 2008 and its final version on 16 January 2011. The JH algorithm has four variants: JH-224, JH-256, JH-384, and JH-512.
A basic characteristic of this algorithm is that the functions executed in each round are identical. Another important characteristic is the security it provides: the large number of active S-boxes, together with the fact that each round uses a different key that is produced on the fly rather than stored in a place a third party could access, makes the system very strong and resistant to attacks such as differential cryptanalysis.
To confirm the correct functionality of the system, an implementation of the JH algorithm in the C language is used as a reference model. For each message to be hashed (a sequence of bits), the digest produced by the reference implementation is compared with the digest produced at the output of the JH system developed in VHDL, thereby confirming the correctness of the result.
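Such a golden-model check might look like the sketch below, where ./jh_ref is a hypothetical command-line wrapper around the reference C code and jh_sim_output.txt holds message/digest pairs dumped by the VHDL simulation; neither name comes from the thesis:

```python
import subprocess

def reference_digest(msg_hex: str) -> str:
    """Hash a message with the reference C implementation (hypothetical CLI)."""
    out = subprocess.run(["./jh_ref", "-256", msg_hex],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip().lower()

def check_simulation(vectors_file: str) -> None:
    """Each line of the file: <message_hex> <digest_from_vhdl_simulation>."""
    with open(vectors_file) as fh:
        for lineno, line in enumerate(fh, 1):
            msg_hex, hw_digest = line.split()
            assert reference_digest(msg_hex) == hw_digest.lower(), \
                f"mismatch on vector {lineno}"
    print("all vectors match the C reference model")

check_simulation("jh_sim_output.txt")
```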
After the non-pipelined implementation of the JH system, the pipelining technique was applied. Specifically, four pipelined implementations with 2, 3, 6, and 7 stages were developed, and each was evaluated for performance, area, and power dissipation. These pipelined implementations were then compared with each other and with the non-pipelined implementation on those criteria, with particular attention to the throughput and throughput-per-area of the pipelined designs.
According to the experimental results, the non-pipelined JH implementation runs at 97 MHz with a power dissipation of 137 mW and a total area of 2284 slices on a SPARTAN 3E FPGA device. From the non-pipelined implementation and the four pipelined implementations on four different FPGA devices (two from the VIRTEX family and two from the SPARTAN family), we concluded that power dissipation is consistently higher on VIRTEX-family devices than on SPARTAN-family devices.
|
60 |
DistroFS: En lösning för distribuerad lagring av filer / DistroFS: A Solution For Distributed File Storage
Hansen, Peter; Norell, Olov, January 2007
Nuvarande implementationer av distribuerade hashtabeller (DHT) har en begränsad storlek för data som kan lagras, t.ex. OpenDHTs datastorleksgräns på 1 kByte. Är det möjligt att lagra filer större än 1 kByte med DHT-tekniken? Finns det någon lösning för att skydda de data som lagrats utan att försämra prestandan? Vår lösning var att utveckla en klient- och servermjukvara. Mjukvaran använder sig av DHT-tekniken för att dela upp filer och distribuera delarna över ett serverkluster. För att se om mjukvaran fungerade som tänkt gjorde vi ett test utifrån de inledande frågorna. Testet visade att det är möjligt att lagra filer större än 1 kByte säkert med DHT-tekniken utan att förlora för mycket prestanda. / Currently existing distributed hash table (DHT) implementations impose a small limit on the size of stored values, such as OpenDHT's limit of 1 kByte. Is it possible to store files larger than 1 kByte using the DHT technique? Is there a way to protect the data without losing too much performance? Our solution was to develop client and server software that uses the DHT technique to split files and distribute their parts across a cluster of servers. To see whether the software worked as intended, we created a test based on our opening questions. The test shows that it is indeed possible to store large files securely using the DHT technique without losing any significant performance.
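The report's approach can be reduced to the following sketch, with dht_put/dht_get as hypothetical stand-ins for the real DHT calls (a local dict here; the report targets an OpenDHT-style store with a 1 kByte value limit). Content-addressed chunk keys also provide the integrity protection asked about in the second question:

```python
import hashlib

CHUNK = 1024   # the 1 kByte per-value limit
store = {}     # local stand-in for the distributed hash table

def dht_put(key: bytes, value: bytes) -> None:
    store[key] = value          # a real client would call the DHT here

def dht_get(key: bytes) -> bytes:
    return store[key]

def put_file(data: bytes) -> bytes:
    """Split into <=1 kB chunks keyed by their hash; return the manifest key."""
    keys = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha1(chunk).digest()  # content address doubles as integrity check
        dht_put(key, chunk)
        keys.append(key)
    manifest = b"".join(keys)   # very large files would need nested manifests
    manifest_key = hashlib.sha1(manifest).digest()
    dht_put(manifest_key, manifest)
    return manifest_key

def get_file(manifest_key: bytes) -> bytes:
    manifest = dht_get(manifest_key)
    keys = [manifest[i:i + 20] for i in range(0, len(manifest), 20)]  # SHA-1 = 20 bytes
    return b"".join(dht_get(k) for k in keys)

blob = b"x" * 5000              # larger than any single DHT value
assert get_file(put_file(blob)) == blob
```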
|