11

Analyse de primitives symétriques / Analysis of symmetric primitives

Karpman, Pierre 18 November 2016
This thesis is about block ciphers and cryptographic hash functions, which are two essential primitives of symmetric-key cryptography. In the first part of this manuscript, we study useful building blocks for block cipher design. We first consider large diffusion matrices built from algebraic-geometry codes, and then construct a small S-box with good diffusion. In the second case, we show how the S-box can be used to define a compact and efficient block cipher targeting small processors. In the second part, we focus on the SHA-1 hash function, for which we develop a free-start collision attack. We show how classical collision attacks can be made more efficient by exploiting the additional freedom provided by this model. This allows us in particular to compute explicit collisions for the full compression function of SHA-1.
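For context, the free-start model mentioned above can be stated in one line (our notation, standard in the literature): the attacker controls the chaining value in addition to the message block.

\[
\text{find } (cv, m) \neq (cv', m') \text{ such that } f(cv, m) = f(cv', m'),
\]

where \(f\) is the SHA-1 compression function and \(cv\) the chaining value. A classical collision fixes \(cv = cv' = IV\), so the free-start setting strictly enlarges the attacker's freedom, which is precisely the extra freedom the attack exploits.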
12

Improving network performance with a polarization-aware routing approach / Förbättra nätverksprestanda med en polarisationsmedveten routingmetod

Pan, Jingyi January 2023
Traffic polarization in networks refers to the phenomenon where traffic concentrates along specific routes or edges under multipath routing, leading to imbalanced flow patterns. This spatial distribution of traffic can leave some links congested and overburdened while other routes remain underutilized. Such imbalanced traffic distribution can cause network bottlenecks, reduced throughput, and compromised Quality of Service for critical applications. These issues underline the urgent need to address traffic polarization and its detrimental impact on network efficiency and resilience. In this master's thesis, we introduce a novel approach to the problem of hash polarization and evaluate the performance of our implementation. Perhaps influenced by RFC 2992, previous works use the whole hash value for multipath routing decisions, and therefore try to mitigate polarization by developing more hash functions or reusing them. We instead investigate whether the polarization issue can be solved by utilizing different parts of the hash result. The critical problem then becomes how to choose the bits of the hash result used for the multipath routing decisions. During our experiments we found that the optimal design depends on many factors in the network topology and traffic demand pattern, making it difficult to state a universal rule. Nevertheless, we propose a mechanism called “bit-awareness”, which significantly alleviates the problem of selecting overlapping bits and hence addresses the polarization issue.
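To make the idea concrete, here is a minimal hypothetical sketch (not the thesis's implementation) of hash-based multipath selection in which each router draws its decision from a different bit slice of the flow hash. Overlapping slices at successive routers reproduce polarization, which is the selection problem "bit-awareness" targets; the function name, SHA-256, and the 16-bit window are illustrative stand-ins.

import hashlib

def path_index(flow_key: bytes, bit_offset: int, num_paths: int) -> int:
    """Choose a next hop from a bit slice of the flow hash.

    Routers along a path use different bit_offset values so that their
    decisions are decorrelated, which avoids hash polarization.
    """
    digest = hashlib.sha256(flow_key).digest()       # stand-in for the router's hash
    value = int.from_bytes(digest, "big")
    window = (value >> bit_offset) & 0xFFFF          # 16-bit slice of the hash
    return window % num_paths

# The same flow hashed at two routers with non-overlapping slices:
flow = b"10.0.0.1,10.0.0.2,6,12345,80"               # flow 5-tuple
print(path_index(flow, bit_offset=0, num_paths=4))   # first router's choice
print(path_index(flow, bit_offset=16, num_paths=4))  # second router's choice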
13

Diverse modules and zero-knowledge

Ben Hamouda--Guichoux, Fabrice 01 July 2016
Smooth (or universal) projective hash functions were first introduced by Cramer and Shoup at Eurocrypt '02 as a tool to construct efficient encryption schemes that are indistinguishable under chosen-ciphertext attacks. Since then, they have found many other applications, including password-authenticated key exchange, oblivious transfer, blind signatures, and zero-knowledge arguments. They can be seen as implicit proofs of membership for certain languages. An important question is to characterize which languages they can handle. In this thesis, we make a step towards this goal by introducing diverse modules. A diverse module is a representation of a language as a submodule of a larger module, where a module is essentially a vector space over a ring. Any diverse module directly yields a smooth projective hash function for the corresponding language, and almost all known smooth projective hash functions are constructed this way. Diverse modules are also valuable in their own right. Thanks to their algebraic structure, we show that they can easily be combined to provide new applications related to zero-knowledge notions, such as implicit zero-knowledge arguments (a lightweight alternative to non-interactive zero-knowledge arguments) and very efficient one-time simulation-sound (quasi-adaptive) non-interactive zero-knowledge arguments for linear languages over cyclic groups.
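As background, the defining properties of a smooth projective hash function for a language \(\mathcal{L}\), in standard notation (ours, not necessarily the thesis's):

\[
\begin{aligned}
&\text{Keys: } \mathsf{hk} \leftarrow \mathsf{HashKG}(\mathcal{L}), \qquad \mathsf{hp} = \alpha(\mathsf{hk}) \quad \text{(projection key)}\\
&\text{Correctness: } \forall x \in \mathcal{L} \text{ with witness } w:\ \mathsf{Hash}(\mathsf{hk}, x) = \mathsf{ProjHash}(\mathsf{hp}, x, w)\\
&\text{Smoothness: } \forall x \notin \mathcal{L}:\ \bigl(\mathsf{hp}, \mathsf{Hash}(\mathsf{hk}, x)\bigr) \approx_s \bigl(\mathsf{hp}, U\bigr)
\end{aligned}
\]

where \(U\) is uniform over the hash range: the projection key fixes the hash value on every word of the language (computable with a witness), yet outside the language the hash remains statistically close to uniform even given \(\mathsf{hp}\). Per the abstract, every diverse module induces a function with these properties.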
14

Combined robust and fragile watermarking algorithms for still images : design and evaluation of combined blind discrete wavelet transform-based robust watermarking algorithms for copyright protection using mobile phone numbers and fragile watermarking algorithms for content authentication of digital still images using hash functions

Jassim, Taha Dawood January 2014
This thesis deals with copyright protection and content authentication for still images. New blind transform-domain, block-based algorithms using one-level and two-level Discrete Wavelet Transform (DWT) were developed for copyright protection. A mobile number with international code is used as the watermarking data. The robust algorithms embed the watermarking information in the low-low (LL) frequency coefficients of the DWT, in the green channel of RGB colour images and the Y channel of YCbCr images. The watermarking information is scrambled using a secret key to increase the security of the algorithms. Because the watermarking information is small compared to the host image, the embedding process is repeated several times, which increases the robustness of the algorithms. A shuffling process is applied during the multiple embedding passes to avoid spatial correlation between the host image and the watermarking information. The effects of using one and two DWT levels on robustness and image quality were studied. The Peak Signal to Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Normalized Correlation Coefficient (NCC) are used to evaluate the fidelity of the images. Several greyscale and colour still images were used to test the new robust algorithms. The new algorithms were more robust than DCT-based algorithms against attacks such as JPEG compression, scaling, salt-and-pepper noise, Gaussian noise, filtering, and other image processing operations. The authenticity of the images was assessed with a fragile watermarking algorithm that embeds a hash (MD5) as watermarking information in the spatial domain. The new algorithm showed high sensitivity to any tampering with the watermarked images. The combined fragile and robust watermarking caused minimal distortion to the images, and the combined scheme achieved both copyright protection and content authentication.
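As a rough illustration of LL-band embedding, here is a simplified additive rule (our sketch, not the thesis's exact algorithm; channel selection, key-based scrambling, and the shuffling schedule described above are omitted, and even image dimensions are assumed):

import numpy as np
import pywt

def embed_watermark(channel: np.ndarray, bits: list, strength: float = 8.0) -> np.ndarray:
    """Embed a bit string into the LL sub-band of a one-level Haar DWT.

    The watermark is repeated across the whole sub-band, mirroring the
    multi-embedding idea; a real scheme would first scramble `bits`
    with a secret key and shuffle the embedding positions.
    """
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(float), "haar")
    flat = cA.ravel()                          # view into the LL coefficients
    for i, b in enumerate(bits):
        idx = np.arange(i, flat.size, len(bits))
        flat[idx] += strength if b else -strength
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")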
15

Robust Image Hash Spoofing

Amir Asgari, Azadeh January 2016
With the intensive growth of digital media, new challenges have been created for the authentication and protection of digital intellectual property. A hash function extracts certain features of a multimedia object, e.g. an image, and maps it to a fixed string of bits. A perceptual hash function, unlike a normal cryptographic hash, is tolerant of changes introduced by image processing techniques. A perceptual hash function, also referred to as a robust hash, is like any other algorithm prone to errors. These errors are false negatives and false positives; the false positive error, in which an unknown object is identified as known, is often neglected compared to false negatives. In this work a new method for raising false alarms in a robust hash function is devised for evaluation purposes; that is, the algorithm modifies the hash key of a target image to resemble a different image's hash key without any significant loss of quality in the modified image. The algorithm is implemented in MATLAB using a block mean value based hash function and successfully reduces the Hamming distance between the target image and the modified image, with good results and without significant loss of quality in the attacked image.
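A minimal sketch of a block mean value hash of the kind the attack targets (our own simplified variant; the thesis's MATLAB implementation may differ in block count and thresholding):

import numpy as np

def block_mean_hash(gray: np.ndarray, blocks: int = 16) -> np.ndarray:
    """One bit per block: set when the block mean exceeds the median
    of all block means. Robust to mild processing such as compression."""
    bh, bw = gray.shape[0] // blocks, gray.shape[1] // blocks
    means = np.array([gray[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                      for i in range(blocks) for j in range(blocks)])
    return (means > np.median(means)).astype(np.uint8)   # 256 bits

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    return int(np.count_nonzero(h1 != h2))               # the distance the attack minimizes

The spoofing attack described above then perturbs the target image block by block, nudging block means across the median threshold so that its hash moves toward the other image's hash while preserving visual quality.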
16

Design and implementation of a blockchain shipping application

Bouidani, Maher M. 31 January 2019
The emerging blockchain technology has the potential to shift traditional centralized systems towards more flexible, efficient, and decentralized designs. An important area in which to apply this capability is the supply chain. Supply chain visibility and transparency have become important aspects of a successful supply chain platform as it grows more complex than ever before. The complexity comes from the number of participants involved and the intricate roles and relations among them, which puts more pressure on the system and its customers in terms of system availability and tamper-resistant data. This thesis presents a private, permissioned application that uses blockchain and aims to automate the shipping processes among the different participants in the supply chain ecosystem. Data in this private ledger is governed by the participants' invocation of their smart contracts, which are designed to reflect the participants' different roles in the supply chain. Moreover, this thesis discusses performance measurements of the application in terms of transaction throughput, average transaction latency, and resource utilization.
17

以GeoJSON壓縮技術增進網路資料傳輸效能之研究 / Efficiency improvement of spatial data transmission by GeoJSON compression techniques

陳欣瑜, Chen, Hsin Yu Unknown Date
A standardized GIS data exchange format is an essential part of Open GIS, enabling GIS data providers, software developers, and system integrators to exchange GIS data across different platforms. The simple structure of GeoJSON not only offers data interoperability but is also easy to process and read, properties that directly benefit GIS service software. However, the amount of spatial information encoded in GIS documents has a direct impact on the efficiency of GIS data transmission. To improve transmission efficiency, the amount of spatial data transmitted must be reduced through data compression techniques, and the design of a GeoJSON compression method should follow the spirit of GeoJSON itself: simple, convenient to apply, and easy to understand. In this thesis, we propose a data compression mechanism for spatial data that reduces the amount of data transmitted and improves the efficiency of Web GIS data transmission. We evaluate its compression ratio across geographic data of different scales and types and analyze the factors that affect it. We also measure the reduction in transmission time achieved by the method, alone and in combination with HTTP compression, and assess the overall effect in terms of transfer time and compression ratio. A one-way hash function is used as a verification mechanism to ensure data accuracy and consistency during transmission. In our implementation, experiments on coordinate data size reduction show that the proposed method achieves good compression and transmission performance while avoiding data communication errors.
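A hedged sketch of the general approach (our own illustrative encoding, not necessarily the thesis's exact scheme): quantize coordinates to a fixed precision, delta-encode them, and attach a one-way hash so the receiver can check integrity.

import hashlib
import json

def compress_coords(coords, precision: int = 5):
    """Delta-encode a GeoJSON coordinate list as scaled integers.

    Rounding to ~5 decimal places (about 1 m) and storing successive
    differences shrinks the dominant part of a GeoJSON file: the
    coordinate arrays."""
    scale = 10 ** precision
    ints = [(round(x * scale), round(y * scale)) for x, y in coords]
    deltas, px, py = [], 0, 0
    for x, y in ints:
        deltas.append([x - px, y - py])
        px, py = x, y
    return deltas

def checksum(payload) -> str:
    """One-way hash sent with the payload so the receiver can verify
    accuracy and consistency after transmission."""
    return hashlib.sha1(json.dumps(payload).encode("utf-8")).hexdigest()

line = [(121.56432, 25.03371), (121.56440, 25.03380), (121.56455, 25.03391)]
packed = compress_coords(line)
message = {"deltas": packed, "sha1": checksum(packed)}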
18

Ανάπτυξη σε FPGA κρυπτογραφικού συστήματος για υλοποίηση της JH hash function / FPGA development of a cryptographic system implementing the JH hash function

Μπάρδης, Δημήτριος 31 May 2012
The purpose of this thesis project is the design and implementation of a cryptographic system based on the JH hash algorithm. The design was carried out in VHDL (Very High Speed Integrated Circuits Hardware Description Language) and then implemented on an FPGA (Field Programmable Gate Array) platform. JH is a hash function designed for the NIST (National Institute of Standards and Technology) hash competition; its first version was released on 31 October 2008 and its final version on 16 January 2011.
The JH algorithm has four variants: JH-224, JH-256, JH-384, and JH-512. A basic characteristic of the algorithm is that the operations performed in each round are identical. Another important feature is the security it provides: the large number of active S-boxes used, together with the fact that each round uses a different key that is generated on the fly rather than stored where a third party could tamper with it, makes the system strong and resistant to attacks such as differential cryptanalysis. To verify the correct functionality of the system, a C implementation of the JH algorithm was used: each time a message (a sequence of bits) is hashed, the resulting digest is compared with the output of the VHDL design, confirming the correctness of the result. After the non-pipelined implementation of the system, the pipelining technique was applied: four pipelined implementations with 2, 3, 6, and 7 stages were developed. The goal was to evaluate each pipelined implementation in terms of performance, power dissipation, and area, and then to compare the pipelined implementations with each other and with the non-pipelined implementation on these criteria, with particular attention to throughput and throughput per area. According to the experimental results, the non-pipelined JH implementation achieves 97 MHz with a power dissipation of 137 mW and a total area of 2284 slices on a Spartan-3E FPGA device. From the analysis of the non-pipelined implementation and the four pipelined implementations on four different FPGAs (two from the Spartan family and two from the Virtex family), we conclude that the power dissipation is always higher on the Virtex family than on the Spartan family.
19

Binární znaménkové reprezentace celých čísel v kryptoanalýze hashovacích funkcí / Binary Signed Digit Representations of Integers in Cryptanalysis of Hash Functions

Vábek, Jiří January 2014
Title: Binary Signed Digit Representations of Integers in Cryptanalysis of Hash Functions
Author: Jiří Vábek
Department: Department of Algebra
Supervisor: doc. RNDr. Jiří Tůma, DrSc., Department of Algebra
Abstract: The work summarizes two main papers, A New Type of 2-block Collisions in MD5 and On the Number of Binary Signed Digit Representations of a Given Weight, while also giving a wider introduction to the cryptanalysis of MD5 and to binary signed digit representations (BSDRs). In the first paper we implemented and applied Stevens' algorithm to newly proposed initial message differences and constructed a new type of collision in MD5. In the second paper we introduced and proved a new improved bound for the number of optimal BSDRs, as well as a new recursive bound for the number of BSDRs of a given integer with a given overweight. In addition to the results in the mentioned papers, a generalized result is stated with a new bound for the number of optimal D-representations of natural numbers with D = {0, 1, 3}.
Keywords: hash function, MD5, binary signed digit representation (BSDR), non-adjacent form (NAF)
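Since the abstract leans on BSDRs and the non-adjacent form, here is a small illustrative sketch (the standard textbook algorithm, not code from the thesis) computing the NAF, the canonical minimal-weight BSDR:

def naf(n: int) -> list:
    """Non-adjacent form of n: digits in {-1, 0, 1}, least significant
    first, with no two adjacent nonzero digits. Among all binary signed
    digit representations of n it has minimal weight (fewest nonzeros)."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)    # +1 if n mod 4 == 1, -1 if n mod 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

assert naf(7) == [-1, 0, 0, 1]  # 7 = -1 + 8: weight 2, versus 3 for binary 111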
20

OPTIMALIZACE ALGORITMŮ A DATOVÝCH STRUKTUR PRO VYHLEDÁVÁNÍ REGULÁRNÍCH VÝRAZŮ S VYUŽITÍM TECHNOLOGIE FPGA / OPTIMIZATION OF ALGORITHMS AND DATA STRUCTURES FOR REGULAR EXPRESSION MATCHING USING FPGA TECHNOLOGY

Kaštil, Jan Unknown Date
This dissertation deals with fast regular expression matching in network traffic using FPGA technology. Regular expression matching in network traffic is a computationally demanding operation used mainly in network security and in the monitoring of high-speed computer networks. Current solutions cannot reach the required multi-gigabit throughput while satisfying all the requirements placed on matching units. The highest throughputs are achieved by implementations based on innovative hardware architectures realized in FPGAs or ASICs. This dissertation describes new matching-unit architectures suitable for implementation in both FPGA and ASIC. The basic idea of the proposed architectures is to use a perfect hash function to implement the transition table of a finite automaton. An architecture was also proposed that lets the user introduce a small probability of error into the matching process and thereby reduce the memory requirements of the matching unit. The dissertation analyzes the influence of this error probability on the overall reliability of the system and compares it with currently used solutions. As part of the dissertation, the properties of regular expressions used in the analysis of modern network traffic were measured; the analysis shows that a large proportion of these regular expressions are suitable for implementation with the proposed architectures. To achieve high matching-unit throughput, the thesis proposes a new alphabet transformation algorithm that allows the matching unit to process several characters in one step. Unlike current methods, the proposed algorithm enables the construction of an automaton processing an arbitrary number of symbols per clock cycle. Compared to current methods, the implemented architectures achieve memory savings of up to 200 MB.
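To illustrate the central idea, here is a deliberately tiny sketch; the Python dict below stands in for the perfect hash function, which in the dissertation guarantees collision-free, single-access transition lookups in hardware. The regex and table are made up for illustration.

# Transition table of a finite automaton stored in a hash table keyed
# by (state, symbol); the dissertation implements this table with a
# perfect hash function so every lookup costs one memory access.
transitions = {
    (0, "a"): 1,
    (1, "b"): 2,
    (2, "c"): 2,   # accepting self-loop: matches the toy regex "abc*"
}
accepting = {2}

def matches(text: str) -> bool:
    state = 0
    for ch in text:
        nxt = transitions.get((state, ch))
        if nxt is None:          # missing entry means no transition
            return False
        state = nxt
    return state in accepting

print(matches("abccc"))  # True
print(matches("abx"))    # False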
