1

Towards practical fully homomorphic encryption

Alperin-Sheriff, Jacob 21 September 2015 (has links)
Fully homomorphic encryption (FHE) allows for computation of arbitrary functions on encrypted data by a third party, while keeping the contents of the encrypted data secure. This area of research has exploded in recent years following Gentry's seminal work. However, the early realizations of FHE, while very interesting from a theoretical and proof-of-concept perspective, are unfortunately far too inefficient to be of any use in practice. The bootstrapping step is the main bottleneck in current FHE schemes. This step refreshes the noise level present in the ciphertexts by homomorphically evaluating the scheme's decryption function over encryptions of the secret key. Bootstrapping is necessary in all known FHE schemes in order to allow an unlimited amount of computation, as without bootstrapping, the noise in the ciphertexts eventually grows to a point where decryption is no longer guaranteed to be correct. In this work, we present two new bootstrapping algorithms for FHE schemes. The first works on packed ciphertexts, which encrypt many bits at a time, while the second works on unpacked ciphertexts, which encrypt a single bit at a time. Our algorithms lie at the heart of the fastest currently existing implementations of fully homomorphic encryption for packed ciphertexts and for single-bit encryptions, respectively, running hundreds of times faster than the previous best implementations for practical parameters.
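To make the noise problem concrete, the following toy Python sketch (not a real FHE scheme, and unrelated to the algorithms in this thesis) tracks an explicit noise term through homomorphic gates and shows why, without bootstrapping, correct decryption is only guaranteed up to a limited multiplicative depth.

```python
# Toy illustration: why ciphertext noise forces bootstrapping. A "ciphertext"
# here is just a plaintext bit plus an explicit noise term; homomorphic gates
# make the noise grow, and decryption fails once it passes a fixed bound.
import random

Q = 2 ** 15              # toy modulus; decryption is correct while |noise| < Q/4
NOISE_BOUND = Q // 4

def encrypt(bit, fresh_noise=8):
    # fresh ciphertexts carry small noise; sign and magnitude are random
    noise = random.choice([-1, 1]) * random.randint(2, fresh_noise)
    return (bit, noise)

def decrypt(ct):
    bit, noise = ct
    if abs(noise) >= NOISE_BOUND:
        raise ValueError("noise too large: decryption no longer guaranteed")
    return bit

def hom_xor(c1, c2):
    # addition-like gate: noise terms add
    return (c1[0] ^ c2[0], c1[1] + c2[1])

def hom_and(c1, c2):
    # multiplication-like gate: noise terms multiply, so noise grows much faster
    return (c1[0] & c2[0], c1[1] * c2[1])

ct, depth = encrypt(1), 0
try:
    while True:
        ct = hom_and(ct, encrypt(1))
        depth += 1
        decrypt(ct)
except ValueError:
    print(f"decryption failed after {depth} multiplicative levels")
# Bootstrapping refreshes ct by homomorphically evaluating the decryption
# function on encryptions of the secret key, resetting the noise so that
# computation can continue indefinitely.
```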
2

Privacy Preserving Distributed Data Mining

Lin, Zhenmin 01 January 2012 (has links)
Privacy-preserving distributed data mining aims to design secure protocols that allow multiple parties to conduct collaborative data mining while protecting data privacy. My research focuses on the design and implementation of privacy-preserving two-party protocols based on homomorphic encryption. I present new results in this area, including new secure protocols for basic operations and two fundamental privacy-preserving data mining protocols. I propose a number of secure protocols for basic operations in the additive secret-sharing scheme based on homomorphic encryption. I derive a basic relationship between a secret number and its shares, with which I develop efficient secure comparison and secure division with public divisor protocols. I also design a secure inverse square root protocol based on Newton's iterative method and hence propose a solution to the secure square root problem. In addition, I propose a secure exponential protocol based on Taylor series expansions. All these protocols are implemented using secure multiplication and can be used to develop privacy-preserving distributed data mining protocols. In particular, I develop efficient privacy-preserving protocols for two fundamental data mining tasks: multiple linear regression and EM clustering. Both protocols work for arbitrarily partitioned datasets. The two-party privacy-preserving linear regression protocol is provably secure in the semi-honest model, and the EM clustering protocol discloses only the number of iterations. I provide a proof-of-concept implementation of these protocols in C++, based on the Paillier cryptosystem.
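For illustration, here is a minimal Python sketch of the two-party additive secret sharing that such protocols operate on; it shows why linear operations are free while multiplication, comparison, and the other basic operations need the interactive, homomorphic-encryption-assisted protocols described above. The modulus and values are arbitrary choices, not taken from the thesis.

```python
# Minimal sketch of two-party additive secret sharing over Z_N.
import random

N = 2 ** 64                      # public modulus for the sharing

def share(x):
    """Split secret x into two additive shares: x = (x1 + x2) mod N."""
    x1 = random.randrange(N)
    x2 = (x - x1) % N
    return x1, x2

def reconstruct(x1, x2):
    return (x1 + x2) % N

# Linear operations need no interaction: each party adds its own shares.
a1, a2 = share(25)
b1, b2 = share(17)
c1, c2 = (a1 + b1) % N, (a2 + b2) % N
assert reconstruct(c1, c2) == 42

# Multiplication, comparison, division, square root, and exponentiation are
# *not* local like this; they require interactive protocols (e.g. Paillier-
# assisted secure multiplication), which is what the secure basic-operation
# protocols above provide.
print("sum of secrets reconstructed:", reconstruct(c1, c2))
```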
3

A Study on Cryptographic Protocols: Achieving Strong Security for Zero-knowledge Proofs and Secure Computation

Kiyoshima, Susumu 26 March 2018 (has links)
Kyoto University / 0048 / Doctoral degree by dissertation under the new system / Doctor of Informatics / 乙第13184号 / 論情博第94号 / 新制||情||116 (University Library) / Examination committee: Professor Toru Ishida (chief examiner), Professor Yoshimasa Nakamura, Professor Yasuo Okabe, Professor Tatsuaki Okamoto / Qualified under Article 4, Paragraph 2 of the Degree Regulations / Doctor of Informatics / Kyoto University / DGAM
4

Search over Encrypted Data in Cloud Computing

Wang, Bing 25 June 2016 (has links)
Cloud computing, which provides computation and storage resources in a pay-per-use manner, has emerged as the dominant computation model. Under this paradigm, users can request computation resources dynamically in real time to accommodate their workload requirements. This flexible resource allocation allows cloud services to offer affordable and efficient computation. However, moving data and applications into the cloud exposes user data to privacy-leakage risks. With growing awareness of data privacy, more and more users choose to protect their data in the cloud proactively through encryption. One major problem with data encryption is that it hinders many necessary data utilization functions, since most of them cannot be applied directly to encrypted data. This problem could jeopardize the adoption of cloud computing; achieving efficient data utilization over encrypted data while preserving user data privacy is therefore an important research problem. The focus of this dissertation is to design secure and efficient schemes that address essential data utilization functions over encrypted data in cloud computing. To this end, we study three problems in this area. The first is fuzzy multi-keyword search over encrypted data. As fuzzy search is one of the most useful and essential data utilization functions in daily life, we propose a novel design that combines Bloom filters and locality-sensitive hashing to meet the security and functionality requirements of the problem. Second, we propose a secure index based on the most popular index structure, the inverted index. Our design protects the index, the user query, the search pattern, and the search result, and lets users verify the correctness of the search results to ensure that the cloud performed the computation properly. Finally, we focus on a privacy-sensitive data application in cloud computing: genetic testing over DNA sequences. To provide secure and efficient genetic testing in the cloud, we employ predicate encryption and design a bilinear-pairing-based secure sequence matching scheme that achieves strong privacy guarantees while fulfilling the functionality requirements efficiently. In all three research thrusts, we present thorough theoretical security analysis and extensive simulation studies to evaluate the performance of the proposed schemes. The results demonstrate that the proposed schemes can effectively and efficiently address these challenging problems in practice. / Ph. D.
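As a rough illustration of the first thrust, the Python sketch below shows how a Bloom filter built over character bigrams tolerates small spelling variations, which is the property a fuzzy keyword scheme can exploit. The actual construction in the dissertation additionally uses locality-sensitive hashing and keeps the filters and queries encrypted, none of which is shown here; all parameters are illustrative.

```python
# Illustrative sketch: a Bloom filter over character bigrams gives a rough,
# typo-tolerant similarity score between keywords.
import hashlib

M = 256          # filter length in bits
K = 3            # number of hash functions

def bigrams(word):
    w = f"_{word.lower()}_"
    return {w[i:i + 2] for i in range(len(w) - 1)}

def bloom(elements):
    bits = [0] * M
    for e in elements:
        for k in range(K):
            h = int(hashlib.sha256(f"{k}:{e}".encode()).hexdigest(), 16) % M
            bits[h] = 1
    return bits

def overlap(b1, b2):
    """Fraction of set bits shared - a rough similarity score."""
    inter = sum(x & y for x, y in zip(b1, b2))
    union = sum(x | y for x, y in zip(b1, b2))
    return inter / union if union else 0.0

index = bloom(bigrams("network"))
print(overlap(index, bloom(bigrams("netwrok"))))   # typo: still high overlap
print(overlap(index, bloom(bigrams("privacy"))))   # unrelated word: low overlap
# The actual scheme hides the filters and queries behind secure computation so
# the cloud learns neither the keywords nor the search pattern.
```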
5

Zero-knowledge proofs for secure computation

Couteau, Geoffroy 30 November 2017 (has links)
In this thesis, we study zero-knowledge proofs, a cryptographic primitive that allows one to prove a statement while revealing nothing beyond its truth, and their applications to secure computation. We first introduce a new type of zero-knowledge proof, called implicit zero-knowledge arguments, that stands between two existing notions, interactive zero-knowledge proofs and non-interactive zero-knowledge proofs. The new notion provides the same efficiency benefits as the latter when used to design round-efficient secure computation protocols, but it can be built from essentially the same cryptographic assumptions as the former, which yields improved efficiency and security guarantees. Second, we revisit a zero-knowledge proof system that is particularly useful for secure computation protocols manipulating integers, and show that its known security analysis can be improved to base the proof system on a more well-studied assumption. Finally, we introduce a new method for building zero-knowledge proof systems over the integers, which improves on existing methods particularly in a client-server model, where a weak client executes a secure computation protocol with a powerful server.
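To make the notion of an interactive zero-knowledge proof concrete, here is the textbook Schnorr protocol in Python; it is a standard example and not the implicit zero-knowledge arguments or integer proof systems introduced in the thesis. The group parameters are toy values chosen only for readability.

```python
# Textbook Schnorr protocol: an interactive honest-verifier zero-knowledge
# proof of knowledge of a discrete logarithm.
import random

# Toy group parameters (far too small for real security): g has prime order q mod p.
p, q, g = 23, 11, 2

x = random.randrange(1, q)        # prover's witness (secret)
h = pow(g, x, p)                  # public statement: "I know x with g^x = h"

# One protocol run: commit -> challenge -> response.
r = random.randrange(q)           # prover's randomness
a = pow(g, r, p)                  # commitment
e = random.randrange(q)           # verifier's random challenge
z = (r + e * x) % q               # prover's response

# Verifier checks g^z == a * h^e, learning nothing about x beyond the fact
# that the prover knows it.
assert pow(g, z, p) == (a * pow(h, e, p)) % p
print("proof accepted; witness x never revealed")
```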
6

Feasibility, Efficiency, and Robustness of Secure Computation

Hai H Nguyen (14206922) 02 December 2022 (has links)
Secure computation allows mutually distrusting parties to compute over private data. Such collaborations have widespread applications in social, scientific, commercial, and security domains. However, the overhead of achieving security is a major bottleneck to the adoption of such technologies. In this context, this thesis aims to design the most secure protocol within budgeted computational or network resources by mathematically formulating it as an optimization problem.

With the rise in CPU power and cheap RAM, the offline-online model for secure computation has become the prominent model for real-world security systems. This thesis investigates the above-mentioned optimization problem in the information-theoretic offline-online model. In particular, this thesis presents the following selected sample of our research in greater detail.

Round and communication complexity: Chor, Kushilevitz, and Beaver characterized the round and communication complexity of secure two-party computation. Since then, the case of functions with randomized output has remained unexplored. We prove the decidability of determining these complexities. Next, if such a protocol exists, we construct the optimal protocol; otherwise, we present an obstruction to achieving security.

Rate and capacity of secure computation: The efficiency of converting offline samples into secure computation during the online phase is essential. However, investigating this "production rate" for general secure computations seems analytically intractable. Towards this objective, we introduce a new model of secure computation -- one without any communication -- that has several practical applications. We lay the mathematical foundations for formulating rate and capacity questions in this framework. Our research identifies the first tight rate and capacity results (a la Shannon) in secure computation.

Reverse multiplication embedding: We identify a new problem in algebraic complexity theory that unifies several efficiency objectives in cryptography. Reverse multiplication embedding seeks to implement as many (base field) multiplications as possible using one extension field multiplication. We present an optimal construction using algebraic function fields. This embedding has subsequently led to efficiency improvements in secure computation, homomorphic encryption, proof systems, and leakage-resilient cryptography.

Characterizing robustness to side-channel attacks: Side-channel attacks present a significant threat to the offline phase. We introduce the cryptographic analog of common information to quantitatively characterize the offline phase's robustness. We build a framework for security and attack analysis. In the context of robust threshold cryptography, we present a state-of-the-art attack, threat assessment, and security fix for Shamir's secret sharing.
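As a concrete illustration of the offline-online model discussed above, the following Python sketch multiplies two secret-shared values using a Beaver triple: the triple is produced in an offline phase, and the online phase needs only two cheap openings and local arithmetic. Beaver triples are a standard technique used here purely for illustration; the modulus and inputs are arbitrary.

```python
# Offline-online secure multiplication with a Beaver triple.
import random

P = 2 ** 61 - 1                              # prime modulus for the sharing

def share(v):
    s1 = random.randrange(P)
    return s1, (v - s1) % P

def open_value(s1, s2):
    return (s1 + s2) % P

# Offline phase: a dealer (or an OT/HE protocol) distributes shares of a
# random triple (a, b, c) with c = a*b.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)

# Online phase: multiply secret-shared x and y.
x, y = 1234, 5678
x_sh, y_sh = share(x), share(y)
d = open_value(*[(xs - as_) % P for xs, as_ in zip(x_sh, a_sh)])  # open x - a
e = open_value(*[(ys - bs) % P for ys, bs in zip(y_sh, b_sh)])    # open y - b

# Each party computes its share of x*y locally from d, e and its triple shares.
z_sh = []
for i in range(2):
    zi = (c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P
    if i == 0:                               # the public d*e term is added once
        zi = (zi + d * e) % P
    z_sh.append(zi)

assert open_value(*z_sh) == x * y % P
print("secure multiplication via one offline triple:", open_value(*z_sh))
```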
7

Lattice Codes for Secure Communication and Secret Key Generation

Vatedka, Shashank January 2017 (has links) (PDF)
In this work, we study two problems in information-theoretic security. First, we study a wireless network where two nodes want to securely exchange messages via an honest-but-curious bidirectional relay. There is no direct link between the user nodes, and all communication must take place through the relay. The relay behaves like a passive eavesdropper, but otherwise follows the protocol it is assigned. Our objective is to design a scheme where the user nodes can reliably exchange messages such that the relay gets no information about the individual messages. We first describe a perfectly secure scheme using nested lattices, and show that our scheme achieves secrecy regardless of the distribution of the additive noise, and even if this distribution is unknown to the user nodes. Our scheme is explicit, in the sense that for any pair of nested lattices, we give the distribution used for randomization at the encoders to guarantee security. We then give a strongly secure lattice coding scheme, and we characterize the performance of both these schemes in the presence of Gaussian noise. We then extend our perfectly secure and strongly secure schemes to obtain a protocol that guarantees end-to-end secrecy in a multi-hop line network. We also briefly study the robustness of our bidirectional relaying schemes to channel imperfections.

In the second problem, we consider the scenario where multiple terminals have access to private correlated Gaussian sources and a public noiseless communication channel. The objective is to generate a group secret key using their sources and public communication in a way that an eavesdropper having access to the public communication can obtain no information about the key. We give a nested lattice-based protocol for generating strongly secure secret keys from independent and identically distributed copies of the correlated random variables. Under certain assumptions on the joint distribution of the sources, we derive achievable secret key rates.

The tools used in designing protocols for both these problems are nested lattice codes, which have been widely used in several problems of communication and security. In this thesis, we also study lattice constructions that permit polynomial-time encoding and decoding. In this regard, we first look at a class of lattices obtained from low-density parity-check (LDPC) codes, called Low-density Construction-A (LDA) lattices. We show that high-dimensional LDA lattices have several “goodness” properties that are desirable in many problems of communication and security. We also present a new class of low-complexity lattice coding schemes that achieve the capacity of the AWGN channel. Codes in this class are obtained by concatenating an inner Construction-A lattice code with an outer Reed-Solomon code or an expander code. We show that this class of codes can achieve the capacity of the AWGN channel with polynomial encoding and decoding complexities. Furthermore, the probability of error decays exponentially in the block length for a fixed transmission rate R that is strictly less than the capacity. To the best of our knowledge, this is the first capacity-achieving coding scheme for the AWGN channel which has an exponentially decaying probability of error and polynomial encoding/decoding complexities.
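A noiseless toy version of the bidirectional-relay idea may help: in the sketch below the relay observes only the modular sum of the two messages, which on its own reveals nothing about either individual message, and each node recovers the other's message by subtracting its own. The thesis achieves this over a noisy Gaussian channel with nested lattice codes and explicit randomization; the Python code is purely illustrative.

```python
# Toy, noiseless illustration of secure bidirectional relaying: the relay only
# ever sees the modulo-q sum of the two transmissions, which acts like a
# one-time pad with respect to each individual message.
import random

q = 2 ** 16                       # toy alphabet size

m1 = random.randrange(q)          # node A's message
m2 = random.randrange(q)          # node B's message

# Multiple-access phase: the channel delivers the (modular) sum to the relay.
relay_observation = (m1 + m2) % q

# Broadcast phase: the relay forwards exactly what it decoded.
broadcast = relay_observation

# Each node removes its own message to recover the other's.
recovered_at_A = (broadcast - m1) % q
recovered_at_B = (broadcast - m2) % q
assert recovered_at_A == m2 and recovered_at_B == m1

# Since m1 is uniform and independent of m2, the sum (m1 + m2) mod q is
# uniform regardless of m2 (and vice versa), so the honest-but-curious relay
# learns nothing about either message individually.
print("exchange complete; relay saw only", relay_observation)
```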
8

Secure and Efficient Comparisons between Untrusted Parties

Beck, Martin 11 September 2018 (has links)
A vast number of online services are based on users contributing their personal information. Examples are manifold, including social networks, electronic commerce, sharing websites, lodging platforms, and genealogy. In all cases, user privacy depends on a collective trust in all involved intermediaries, such as service providers, operators, administrators, or even help desk staff. A single adversarial party in the whole chain of trust voids user privacy, and the number of intermediaries is ever growing. Thus, user privacy must be preserved at all times and stages, independent of the intrinsic goals of any involved party. Furthermore, alongside these new services, traditional offline analytic systems are being replaced by online services run in large data centers. Centralized processing of electronic medical records, genomic data, and other health-related information is anticipated due to advances in medical research, better analytic results based on large amounts of medical information, and lowered costs. In these scenarios privacy is of utmost concern due to the large amount of personal information contained within the centralized data.

We focus on the challenge of privacy-preserving processing of genomic data, specifically comparing genomic sequences. The problem that arises is how to efficiently compare the private sequences of two parties while preserving the confidentiality of the compared data. It follows that the privacy of the data owner must be preserved, which means that as little information as possible must be leaked to any party participating in the comparison. Leakage can happen at several points during a comparison. The secured inputs for the comparing party might leak some information about the original input, or the output might leak information about the inputs. In the latter case, the results of several comparisons can be combined to infer information about the confidential input of the party under observation. Genomic sequences serve as a use case, but the proposed solutions are more general and apply to the generic field of privacy-preserving comparison of sequences. The solution should be efficient, with comparison runtimes linear in the length of the input sequences, and thus with acceptable costs for a typical use case.

To tackle the problem of efficient, privacy-preserving sequence comparisons, we propose a framework consisting of three main parts. a) The basic protocol is an efficient sequence comparison algorithm, which transforms a sequence into a set representation, allowing distance measures over input sequences to be approximated by distance measures over sets. The sets are then represented by an efficient data structure, the Bloom filter, which allows the evaluation of certain set operations without storing the actual elements of the possibly large set. This representation yields low distortion when comparing similar sequences. Operations on the set representation are carried out using efficient, partially homomorphic cryptosystems to keep the inputs confidential. The output can be adjusted to return either the actual approximated distance or the result of an in-range check of the approximated distance. b) Building upon this efficient basic protocol, we introduce the first mechanism to reduce the success of inference attacks by detecting and rejecting similar queries in a privacy-preserving way. This is achieved by generating generalized commitments for inputs. The generalization treats inputs as messages received over a noisy channel, to which error correction from coding theory is applied. In this way, similar inputs are defined as inputs whose generalizations have a Hamming distance below a predefined threshold. We present a zero-knowledge proof protocol to assess whether the generalized input is indeed a generalization of the actual input. Furthermore, we generalize a very efficient inference attack on privacy-preserving sequence comparison protocols and use it to evaluate our inference-control mechanism. c) The third part of the framework lightens the computational load of the client taking part in the comparison protocol by presenting a compression mechanism for partially homomorphic cryptographic schemes. It reduces the transmission and storage overhead induced by the semantically secure homomorphic encryption schemes, as well as the encryption latency. The compression is achieved by constructing an asymmetric stream cipher such that the generated ciphertext can be converted into a ciphertext of an associated homomorphic encryption scheme without revealing any information about the plaintext. This is the first compression scheme available for partially homomorphic encryption schemes; compression schemes for fully homomorphic encryption are several orders of magnitude slower at converting the transmission ciphertext into the homomorphically encrypted ciphertext, whereas our scheme achieves optimal conversion performance. It further allows keystreams to be generated offline and thus supports offloading to trusted devices, improving transmission, storage, and power efficiency.

We give security proofs for all relevant parts of the proposed protocols and algorithms. A performance evaluation of the core components, comprising a theoretical analysis and practical experiments, demonstrates the practicability of our solutions and shows the accuracy and efficiency of the approximations and probabilistic algorithms. Several variations and configurations for detecting similar inputs are studied in an in-depth discussion of the inference-control mechanism. A human mitochondrial genome database is used for the practical evaluation to compare genomic sequences and detect similar inputs as described by the use case. In summary, we show that it is indeed possible to construct an efficient and privacy-preserving (genomic) sequence comparison while controlling the amount of information that leaves the comparison. To the best of our knowledge, we also contribute the first efficient privacy-preserving inference detection and control mechanism, as well as the first ciphertext compression scheme for partially homomorphic cryptosystems.
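To illustrate the query-generalization idea in part b), the Python sketch below snaps inputs to nearby codewords of a simple repetition code, so near-duplicate queries collapse to the same generalized value and repeated similar queries can be detected. This is only a minimal stand-in: the thesis uses proper error-correcting codes, commitments, and a zero-knowledge proof that the generalization is well formed.

```python
# Sketch of input generalization for inference control: each query is decoded
# to a nearby codeword, so queries within a small Hamming distance collide.

def generalize(bits, block=3):
    """Decode each length-`block` chunk of a repetition code by majority vote."""
    out = []
    for i in range(0, len(bits) - len(bits) % block, block):
        chunk = bits[i:i + block]
        out.append(1 if sum(chunk) * 2 > block else 0)
    return tuple(out)

query1 = [0, 0, 1, 1, 1, 1, 0, 0, 0]     # original query (as bits)
query2 = [0, 0, 0, 0, 1, 1, 1, 0, 0]     # differs from query1 in one bit per block
query3 = [1, 1, 1, 0, 0, 0, 1, 1, 1]     # genuinely different query

seen = set()
for q in (query1, query2, query3):
    g = generalize(q)
    verdict = "rejected (similar to earlier query)" if g in seen else "accepted"
    seen.add(g)
    print(q, "->", g, verdict)
# query2 generalizes to the same codeword as query1, so the near-repeat is
# caught; query3 decodes to a different codeword and is accepted.
```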
9

Privacy-Preserving Public Verification via Homomorphic Encryption

Becher, Kilian 07 February 2024 (has links)
Sustainable and ethical sourcing and production are major challenges arising from rapid climate change and a growing world population. The EU's Renewable Energy Directive II and the German Supply Chain Act are just two examples of the multitude of laws and regulations that define standards for sustainable and ethical sourcing and production. They imply a need for supply chain transparency, traceability, and verification. Public verification of supply chain transactions allows any third-party verifier to evaluate compliance and the correctness of claims based on supply chain transaction details. Public verification can therefore help customers, buyers, regulators, and non-governmental organizations uncover non-compliance and fraud committed by supply chain actors, which in turn can increase the pressure to comply with applicable standards and regulations. Supply chain transactions often involve confidential data such as amounts or prices. Transparency of such data could leak trade secrets and affect companies' competitive advantages; reconciling transparency with confidentiality therefore seems contradictory at first glance. This thesis takes up the challenge of enabling privacy-preserving public verification of confidential supply chain transactions. Starting from two real-world use cases for supply chain verification, the thesis first investigates requirements for valid solutions and derives five research questions. It then designs a universal solution that combines transparency with confidentiality. The proposed system model achieves privacy-preserving public verification by employing fully homomorphic encryption (FHE) and proxy re-encryption (PRE). To demonstrate the suitability of the system model for a wide variety of realistic supply chain verification scenarios, the thesis designs privacy-preserving protocols for different verification functions. This includes the verification of balances, using the trade in sustainable palm oil as an example, as well as the verification of ratios, motivated by different forms of cobalt sourcing. These protocols are evaluated both theoretically and empirically; the evaluation shows that they enable privacy-preserving public verification for the considered supply chain scenarios in practical time. Additionally, the thesis investigates the security implications of the proposed system model and protocols and formally analyzes the risk of leaking information through repeated similar verifications. Based on the identified vulnerability to such attacks when protocol outputs are obfuscated probabilistically, the thesis introduces and investigates the paradigm of data-dependent deterministic obfuscation (D3O). D3O is a universal concept, independent of the field of supply chain verification, and can reduce the leakage of confidential information in a large class of privacy-preserving protocols.
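As a rough sketch of the balance-verification idea, the Python snippet below uses a textbook (deliberately tiny and insecure) Paillier instance to aggregate encrypted inflows and outflows of sustainable palm oil so that a verifier learns only the aggregate balance. This is an assumption-laden simplification: the thesis builds on FHE, proxy re-encryption, and output obfuscation rather than Paillier, and the amounts here are invented for illustration.

```python
# Toy additively homomorphic balance check (textbook Paillier, insecure primes).
import math, random

p, q = 1789, 1999                       # toy primes - far too small for security
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def add(c1, c2):                        # E(a) * E(b) = E(a + b)
    return c1 * c2 % n2

# Supply chain actors report encrypted amounts; no individual amount is revealed.
inflows = [enc(120), enc(80), enc(50)]          # sustainable palm oil bought
outflows = [enc(100), enc(90)]                  # sustainable palm oil sold

balance_ct = enc(0)
for c in inflows:
    balance_ct = add(balance_ct, c)
for c in outflows:
    balance_ct = add(balance_ct, pow(c, -1, n2))  # homomorphic subtraction

# Only the aggregate balance is disclosed to the verifier, e.g. after proxy
# re-encryption toward the verifier's key.
print("aggregate sustainable-palm-oil balance:", dec(balance_ct))   # -> 60
```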
