31

Optimization of the Mainzelliste software for fast privacy-preserving record linkage

Rohde, Florens, Franke, Martin, Sehili, Ziad, Lablans, Martin, Rahm, Erhard 11 February 2022 (has links)
Background: Data analysis for biomedical research often requires a record linkage step to identify records from multiple data sources referring to the same person. Due to the lack of unique personal identifiers across these sources, record linkage relies on the similarity of personal data such as first and last names or birth dates. However, the exchange of such identifying data with a third party, as is the case in record linkage, is generally subject to strict privacy requirements. This problem is addressed by privacy-preserving record linkage (PPRL) and pseudonymization services. Mainzelliste is an open-source record linkage and pseudonymization service used to carry out PPRL processes in real-world use cases. Methods: We evaluate the linkage quality and performance of the linkage process using several real and near-real datasets that differ in size and in the error rate of matching records. We conduct a comparison between (plaintext) record linkage and PPRL based on encoded records (Bloom filters). Furthermore, since the Mainzelliste software offers no blocking mechanism, we extend it by phonetic blocking as well as novel blocking schemes based on locality-sensitive hashing (LSH) to improve runtime for both standard and privacy-preserving record linkage. Results: The Mainzelliste software achieves high linkage quality for PPRL using field-level Bloom filters due to the use of an error-tolerant matching algorithm that can handle variations in names, in particular missing or transposed name compounds. However, due to the absence of blocking, the runtimes are unacceptable for real use cases with larger datasets. The newly implemented blocking approaches improve runtimes by orders of magnitude while retaining high linkage quality. Conclusion: We conduct the first comprehensive evaluation of the record linkage facilities of the Mainzelliste software and extend it with blocking methods to improve its runtime. We observe very high linkage quality for both plaintext and encoded data, even in the presence of errors. The added blocking methods yield order-of-magnitude runtime improvements, facilitating use in research projects with large datasets and many participants.
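To make the Bloom-filter encoding mentioned above concrete, the following minimal Python sketch (illustrative only, not code from the thesis or from Mainzelliste) maps a name's character bigrams into a field-level Bloom filter and compares two encodings with the Dice coefficient, the similarity measure commonly used in Bloom-filter-based PPRL; all function names and parameters are assumptions chosen for the example.

```python
import hashlib

def bloom_encode(value: str, num_bits: int = 128, num_hashes: int = 4) -> set:
    """Encode a string's character bigrams into a Bloom filter (set of bit positions)."""
    padded = f"_{value.lower()}_"                      # pad so edge characters form bigrams
    bigrams = {padded[i:i + 2] for i in range(len(padded) - 1)}
    bits = set()
    for gram in bigrams:
        for seed in range(num_hashes):                 # k seeded hashes per bigram
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % num_bits)
    return bits

def dice_similarity(bits_a: set, bits_b: set) -> float:
    """Dice coefficient of two Bloom filters; 1.0 for identical bit patterns."""
    if not bits_a and not bits_b:
        return 1.0
    return 2 * len(bits_a & bits_b) / (len(bits_a) + len(bits_b))

# Similar names share most bigrams and hence most bit positions.
print(dice_similarity(bloom_encode("meier"), bloom_encode("meyer")))    # high despite the typo
print(dice_similarity(bloom_encode("meier"), bloom_encode("schmidt")))  # much lower
```

Because the encoding preserves similarity, matching stays error-tolerant; LSH-based blocking of the kind added to Mainzelliste then groups records by small subsets of such encodings so that only candidates within a block need to be compared.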
32

Rigorous and Flexible Privacy Protection Framework for Utilizing Personal Spatiotemporal Data / 個人時空間データ利活用のための厳密で柔軟なプライバシ保護フレムワーク

Yang, Cao 23 March 2017 (has links)
Kyoto University / 0048 / New degree system, doctoral program / Doctor of Informatics / Degree No. Kō 20508 / Jōhaku No. 636 / Call no. 新制||情||110 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Masatoshi Yoshikawa, Professor Katsumi Tanaka, Professor Yasuo Okabe / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
33

Hardware accelerators for post-quantum cryptography and fully homomorphic encryption

Agrawal, Rashmi 16 January 2023 (has links)
With the monetization of user data, data breaches have become very common these days. In the past five years, there were more than 7000 data breaches involving theft of personal information of billions of people. In the year 2020 alone, the global average cost per data breach was $3.86 million, and this number rose to $4.24 million in 2021. Therefore, the need for maintaining data security and privacy is becoming increasingly critical. Over the years, various encryption schemes, including RSA, ECC, and AES, have been used to provide data security and privacy. However, these schemes are considered vulnerable to quantum computers and their enormous processing power. As quantum computers are expected to become mainstream in the near future, post-quantum secure encryption schemes are required. To this end, through NIST's standardization efforts, code-based and lattice-based encryption schemes have emerged as plausible ways forward. Both code-based and lattice-based encryption schemes enable public-key cryptosystems, key exchange mechanisms, and digital signatures. In addition, lattice-based encryption schemes support fully homomorphic encryption (FHE), which enables computation on encrypted data. Over the years, there have been several efforts to design efficient FPGA-based and ASIC-based solutions for accelerating code-based and lattice-based encryption schemes. The conventional code-based McEliece cryptosystem uses a binary Goppa code, which has a good code rate and error-correction capability but suffers from high encoding and decoding complexity. Moreover, the generated public key is several megabytes in size, leading to cryptosystem designs that cannot be accommodated on low-end FPGAs. In lattice-based encryption schemes, large polynomial ring operations form the core compute kernel and remain a key challenge for many hardware designers. Supporting large modular-arithmetic operations on an FPGA while keeping latency and hardware resource utilization low requires substantial design effort. Moreover, prior FPGA solutions for lattice-based FHE accelerate only basic FHE primitives for impractical parameter sets, without support for the bootstrapping operation that is critical to building real-time privacy-preserving applications. Similarly, prior ASIC proposals of FHE that include bootstrapping are heavily memory-bound, leading to long execution times and underutilized compute resources, and cost millions of dollars. To respond to these challenges, in this dissertation we focus on the design of efficient hardware accelerators for code-based and lattice-based public-key cryptosystems (PKC). For code-based PKC, we propose the design of a fully parameterized en/decryption co-processor based on a new variant of the McEliece cryptosystem. This co-processor takes advantage of the non-binary Orthogonal Latin Square Code (OLSC) to achieve lower computational complexity and a smaller key size than the binary Goppa code. Our FPGA-based implementation of the co-processor is ∼3.5× faster than an existing classic McEliece cryptosystem implementation. For lattice-based PKC, we propose the design of a co-processor that implements large polynomial ring operations. It uses a fully pipelined NTT polynomial multiplier to perform fast polynomial multiplications. We also propose the design of a highly optimized Gaussian noise sampler, capable of producing millions of high-precision samples per second.
Through an FPGA-based implementation of this lattice-based PKC co-processor, we achieve a speedup of 6.5× while using 5× fewer hardware resources than state-of-the-art implementations. Leveraging our work on lattice-based PKC, we explore the design of hardware accelerators that perform FHE operations using the Cheon-Kim-Kim-Song (CKKS) scheme. Here, we first perform an in-depth architectural analysis of the various FHE operations in the CKKS scheme to explore ways to accelerate an end-to-end FHE application. For this analysis, we develop a custom architecture-modeling tool, SimFHE, to measure the compute and memory-bandwidth requirements of hardware-accelerated CKKS. Our analysis using SimFHE reveals that, without a prohibitively large cache, all FHE operations exhibit low arithmetic intensity (<1 Op/byte). To address the memory bottleneck resulting from this low arithmetic intensity, we propose several memory-aware design (MAD) techniques, including caching and algorithmic optimizations, to reduce the memory requirements of CKKS-based application execution. We show that our MAD techniques can yield an ASIC design that is at least 5-10× cheaper than the large-cache proposals while only ∼2-3× slower. We also design FAB, an FPGA-based accelerator for bootstrappable FHE. FAB is the first to accelerate bootstrapping (along with the basic FHE primitives) on an FPGA for a secure and practical parameter set. FAB tackles the memory-bound nature of bootstrappable FHE through judicious datapath modification, smart operation scheduling, and on-chip memory management techniques to maximize the overall FHE-based compute throughput. FAB outperforms all prior CPU/GPU works by 9.5× to 456× and provides practical performance for our target application: secure training of logistic regression models. / 2025-01-16T00:00:00Z
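As a point of reference for the "large polynomial ring operations" discussed above, the following Python sketch shows the negacyclic ring arithmetic that lattice-based schemes build on; the parameters (n = 8, q = 257) are toy values chosen here for illustration, and the schoolbook loop stands in for the NTT-based multiplier that the co-processor implements in hardware.

```python
# Toy parameters for illustration; real lattice schemes use n in the thousands
# and much larger moduli.
n, q = 8, 257  # ring R_q = Z_q[x] / (x^n + 1)

def ring_mul(a, b):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1) (negacyclic convolution).

    Hardware accelerators replace this O(n^2) loop with an O(n log n) NTT
    butterfly pipeline, which is the core kernel of the co-processor."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:                  # x^n = -1 in this ring, so wrap with a sign flip
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 8, 0, 0, 0, 0]
print(ring_mul(a, b))   # coefficients of a(x)*b(x) reduced mod (x^8 + 1, 257)
```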
34

An Architecture For High-performance Privacy-preserving And Distributed Data Mining

Secretan, James 01 January 2009 (has links)
This dissertation discusses the development of an architecture and associated techniques to support Privacy Preserving and Distributed Data Mining. The field of Distributed Data Mining (DDM) attempts to solve the challenges inherent in coordinating data mining tasks with databases that are geographically distributed, through the application of parallel algorithms and grid computing concepts. The closely related field of Privacy Preserving Data Mining (PPDM) adds the dimension of privacy to the problem, trying to find ways that organizations can collaborate to mine their databases collectively, while at the same time preserving the privacy of their records. Developing data mining algorithms for DDM and PPDM environments can be difficult, and there is little software to support it. In addition, because these tasks can be computationally demanding, taking hours or even days to complete, organizations should be able to take advantage of high-performance and parallel computing to accelerate these tasks. Unfortunately, there is no framework that provides all of these services easily for a developer. In this dissertation such a framework is developed to support the creation and execution of DDM and PPDM applications, called APHID (Architecture for Private, High-performance Integrated Data mining). The architecture allows users to flexibly and seamlessly integrate cluster and grid resources into their DDM and PPDM applications. The architecture is scalable, and is split into highly decoupled services to ensure flexibility and extensibility. This dissertation first develops a comprehensive example algorithm, a privacy-preserving Probabilistic Neural Network (PNN), which serves as a basis for analysis of the difficulties of DDM/PPDM development. The privacy-preserving PNN is the first such PNN in the literature, and provides not only a practical algorithm ready for use in privacy-preserving applications, but also a template for other data-intensive algorithms and a starting point for analyzing APHID's architectural needs. After analyzing the difficulties in the PNN algorithm's development, as well as the shortcomings of the systems surveyed, this dissertation presents the first concrete programming model joining high-performance computing resources with a privacy-preserving data mining process. Unlike many existing PPDM development models, the platform of services is language-independent, allowing layers and algorithms to be implemented in popular languages (Java, C++, Python, etc.). An implementation of a PPDM algorithm is developed in Java using the new framework. Performance results are presented, showing that APHID can enable highly simplified PPDM development while speeding up resource-intensive parts of the algorithm.
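For readers unfamiliar with the Probabilistic Neural Network mentioned above, here is a minimal plaintext (non-private) Python sketch of PNN classification; it illustrates only the underlying Parzen-window classifier, and the comment merely gestures at where the dissertation's privacy-preserving decomposition across sites would apply.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Plain (non-private) Probabilistic Neural Network: Parzen-window class scores.

    In a distributed, privacy-preserving variant, each site would compute its local
    per-class kernel sums and only aggregated sums would be combined, so raw
    records never leave their owners."""
    scores = {}
    for label in np.unique(y_train):
        pts = X_train[y_train == label]
        dists = np.sum((pts - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-dists / (2 * sigma ** 2)))
    return max(scores, key=scores.get), scores

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(X, y, np.array([0.95, 1.0])))   # predicts class 1
```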
35

PULMONARY FUNCTION MONITORING USING PORTABLE ULTRASONOGRAPHY AND PRIVACY-PRESERVING LEARNING

Liu, Menghan 08 February 2017 (has links)
No description available.
36

Efficient Building Blocks for Secure Multiparty Computation and Their Applications

Donghang Lu (13157568) 27 July 2022 (has links)
Secure multi-party computation (MPC) enables mutually distrusting parties to compute securely over their private data. It is a natural approach for building distributed applications with strong privacy guarantees, and it has been used in more and more real-world privacy-preserving solutions such as privacy-preserving machine learning, secure financial analysis, and secure auctions.

The typical method of MPC is to represent the function as an arithmetic or binary circuit, after which MPC can be applied to compute each gate privately. The practicality of MPC has been extensively analyzed and improved over the past decade; however, we are hitting the limits of efficiency of the traditional approaches as circuits become more complicated. Therefore, we follow the design principle of identifying and constructing fast, provably secure MPC protocols that evaluate useful high-level algebraic abstractions, thus improving the efficiency of all applications relying on them.

To begin with, we construct an MPC protocol to efficiently evaluate the powers of a secret value. We then use it as a building block to form a secure mixing protocol, which can be directly used for anonymous broadcast communication. We propose two different protocols to achieve secure mixing, offering different tradeoffs between local computation and communication. We also study the necessity of robustness and fairness in many use cases and show how to provide these properties in general MPC protocols. As a follow-up in this direction, we design more efficient MPC protocols for anonymous communication through the use of permutation matrices. We provide three variants targeting different MPC frameworks and input volumes. Because the core of our protocols is a secure random permutation, they are of independent interest for further applications such as secure sorting and secure two-way communication.

We also propose a solution and analysis for another useful arithmetic operation: secure multi-variable, high-degree polynomial evaluation over both scalars and matrices. Secure polynomial evaluation is a basic operation in many applications including (but not limited to) privacy-preserving machine learning, secure Markov process evaluation, and non-linear function approximation. We illustrate how our protocol can be used to efficiently evaluate decision tree models, with both the client input and the tree models being private. We implement prototypes of this idea, and the benchmarks show that polynomial evaluation becomes significantly faster, making secure comparison the only bottleneck. Therefore, as follow-up work, we design novel protocols that evaluate secure comparisons efficiently with the help of pre-computed function tables. We implement and test this idea in Falcon, a state-of-the-art privacy-preserving machine learning framework, and the benchmark results show a significant performance improvement from simply replacing its secure comparison protocol with ours.
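As background for the secret-value powers and polynomial evaluations discussed above, the sketch below shows additive secret sharing and Beaver-triple multiplication, the textbook building block that such protocols typically start from; it is not the dissertation's protocol, and the triple is produced by an in-process "dealer" purely for illustration (real MPC deployments generate triples without any single party learning them).

```python
import random

P = 2**61 - 1  # prime modulus for the arithmetic secret-sharing field

def share(x, n=3):
    """Split x into n additive shares that sum to x mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_shares, y_shares):
    """Multiply two secret-shared values using a dealer-generated Beaver triple.

    Parties locally compute shares of d = x - a and e = y - b, open d and e,
    and then z = c + d*b + e*a + d*e reconstructs to x*y."""
    n = len(x_shares)
    a, b = random.randrange(P), random.randrange(P)   # dealer's triple (a, b, c = a*b)
    c = (a * b) % P
    a_sh, b_sh, c_sh = share(a, n), share(b, n), share(c, n)
    d = reconstruct([(xs - ash) % P for xs, ash in zip(x_shares, a_sh)])
    e = reconstruct([(ys - bsh) % P for ys, bsh in zip(y_shares, b_sh)])
    z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(n)]
    z_sh[0] = (z_sh[0] + d * e) % P   # the public d*e term is added by one party only
    return z_sh

x_sh, y_sh = share(6), share(7)
print(reconstruct(beaver_multiply(x_sh, y_sh)))   # 42
# Repeated secure multiplications of this kind underlie "powers of a secret value"
# and secret-shared polynomial evaluation.
```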
37

Transmitter Authentication in Dynamic Spectrum Sharing

Kumar, Vireshwar 02 February 2017 (has links)
Recent advances in spectrum access technologies, such as software-defined radios, have made dynamic spectrum sharing (DSS) a viable option for addressing the spectrum shortage problem. However, these advances have also increased the possibility of "rogue" transmitter radios, which may cause significant interference to other radios in DSS. One approach for countering such threats is to employ a transmitter authentication scheme at the physical (PHY) layer. In PHY-layer authentication, an authentication signal is generated by the transmitter and embedded into the message signal. This enables a regulatory enforcement entity to extract the authentication signal from the received signal, uniquely identify a transmitter, and collect verifiable evidence of a rogue transmission that can be used later during an adjudication process. There are two primary technical challenges in devising a transmitter authentication scheme for DSS: (1) how to generate and verify the authentication signal such that the required security and privacy criteria are met; and (2) how to embed and extract the authentication signal without negatively impacting the performance of the transmitters and the receivers in DSS. Regarding the first challenge, the privacy-preserving authentication schemes in the prior art have limited practical value for use in large networks due to the high computational complexity of their revocation-check procedures. This dissertation proposes novel approaches that significantly improve the scalability of transmitter authentication with respect to revocation. Regarding the second challenge, in existing PHY-layer authentication techniques the authentication signal is embedded into the message signal in such a way that the authentication signal appears as noise to the message signal and vice versa. Hence, existing schemes are constrained by a fundamental tradeoff between the message signal's signal-to-interference-and-noise ratio (SINR) and the authentication signal's SINR. This dissertation also proposes novel approaches that are not constrained by this tradeoff between the message and authentication signals. / Ph. D.
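The SINR tradeoff described above can be illustrated with a small numpy sketch: a low-power authentication sequence is superimposed on the message signal, and each signal degrades the other's SINR. The power split rho, the BPSK symbols, and the noise power are assumptions made for this example, not parameters from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
message = rng.choice([-1.0, 1.0], size=n)    # unit-power message symbols (assumed BPSK)
auth = rng.choice([-1.0, 1.0], size=n)       # unit-power authentication sequence (assumed)
noise_power = 0.01
rho = 0.1                                    # fraction of transmit power given to the auth signal

# Superposition-style embedding: the auth signal rides underneath the message signal.
tx = np.sqrt(1 - rho) * message + np.sqrt(rho) * auth
rx = tx + np.sqrt(noise_power) * rng.standard_normal(n)

def sinr_db(signal_power, interference_power):
    return 10 * np.log10(signal_power / (interference_power + noise_power))

# To a message receiver the auth signal is interference, and vice versa:
print("message SINR (dB):", sinr_db(1 - rho, rho))   # about  9.1 dB
print("auth SINR (dB):   ", sinr_db(rho, 1 - rho))   # about -9.6 dB
# A verifier that knows the auth sequence can still detect it by correlation:
print("auth correlation:", np.dot(rx, auth) / n)     # close to sqrt(rho), ~0.32
# Raising rho strengthens authentication only at the expense of the message SINR.
```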
38

Study of Sensing Issues in Dynamic Spectrum Access

Ye, Yuxian 14 June 2019 (has links)
Dynamic Spectrum Access (DSA) is now a commonly used spectrum sharing paradigm to mitigate the spectrum shortage problem. DSA technology allows unlicensed secondary users to access unused frequency bands without interfering with incumbent users. The key technical challenges in DSA systems lie in spectrum allocation and in the security of spectrum users. This thesis focuses on spectrum monitoring technology for spectrum allocation and on the privacy of incumbent users (IUs). Spectrum monitoring is a powerful tool in DSA that helps commercial users access unused bands. We propose a crowdsourcing-based monitoring scheme for unknown IU patterns that leverages the power of masses of portable mobile devices to reduce the cost of spectrum monitoring, and we demonstrate the ability of our system to capture not only existing spectrum access patterns but also unknown patterns for which no historical spectrum information exists. Because of the energy limits of the battery-based system, we then leverage solar energy harvesting and develop an energy-management scheme to support our spectrum monitoring system. We also provide privacy-protection strategies for both static and mobile IUs that hide their true locations from detection by Environmental Sensing Capability (ESC) systems. The heuristic approaches for our mathematical formulations and the simulation results are described in detail in this thesis. The simulation results show that our spectrum monitoring system obtains high monitoring coverage with low energy consumption, and that our IU privacy scheme provides strong protection for IU location privacy. / Master of Science / Spectrum refers to the radio frequencies allocated to federal and commercial users for communication over the airwaves. It is a sovereign asset overseen by the government of each country, which manages the radio spectrum and issues spectrum licenses. Spectrum bands are used for various purposes because different bands have different characteristics. However, the overly crowded US frequency allocation chart shows the scarcity of usable radio frequencies. Actual spectrum usage measurements show that many prized spectrum bands lie idle at most times and locations, which indicates that the spectrum shortage is caused by spectrum management policies rather than by a physical scarcity of available frequencies. Dynamic spectrum access (DSA) was proposed as a new spectrum sharing paradigm that allows commercial users to access the abundant white spaces in licensed spectrum bands, mitigating the spectrum shortage problem and increasing spectrum utilization. Two key technical challenges in DSA are how to dynamically allocate the spectrum and how to protect spectrum users' security. This thesis focuses on the development of two types of mechanisms for addressing these challenges: (1) efficient spectrum monitoring schemes that help secondary users (SUs) accurately and dynamically access the white space in spectrum allocation, and (2) privacy-preservation schemes that protect the location privacy of incumbent users (IUs). Specifically, we propose a monitoring scheme for unknown IU patterns that leverages the power of masses of portable mobile devices to reduce the cost of common spectrum monitoring systems. We demonstrate that our system can track not only existing IU spectrum access patterns but also unknown patterns for which no historical spectrum information exists. We then leverage solar energy harvesting and design an energy-management scheme to support our spectrum monitoring system. Finally, we provide a strategy for both static and mobile IUs to hide their true locations from the monitoring of Environmental Sensing Capability (ESC) systems.
39

Towards Secure Outsourced Data Services in the Public Cloud

Sun, Wenhai 25 July 2018 (has links)
The past few years have witnessed a dramatic shift of IT infrastructures from a self-sustained model to a centralized, multi-tenant elastic computing paradigm -- cloud computing -- which significantly reshapes the landscape of existing data utilization services. Public cloud service providers (CSPs), e.g., Google and Amazon, offer unprecedented benefits such as ubiquitous and flexible access, considerable capital expenditure savings, and on-demand resource allocation. The cloud has also become the virtual "brain" that supports and propels many important applications and system designs, such as artificial intelligence and the Internet of Things. On the flip side, security and privacy are among the primary concerns with the adoption of cloud-based data services, because the user loses control of her/his outsourced data. Encrypting sensitive user information certainly ensures confidentiality. However, encryption adds an extra layer of ambiguity, and its direct use may be at odds with practical requirements and defeat the purpose of cloud computing technology. We believe that security should not, by its nature, be in contravention of the cloud outsourcing model. Rather, it is expected to complement current achievements and further fuel the wide adoption of public cloud services. This, in turn, requires us not to decouple the two from the very beginning of the system design. Drawing on successes and failures from both academia and industry, we attempt to answer the challenges of realizing efficient and useful secure data services in the public cloud. In particular, we pay attention to security and privacy in two essential functions of the cloud "brain": data storage and processing. Our first work centers on secure chunk-based deduplication of encrypted data for cloud backup; it achieves performance comparable to plaintext cloud storage deduplication while effectively mitigating the information leakage from low-entropy chunks. We then comprehensively study the promising yet challenging issue of search over encrypted data in the cloud environment, which allows a user to delegate her/his search task to a CSP server that hosts a collection of encrypted files while still guaranteeing some measure of query privacy. To accomplish this vision, we explore both software-based secure computation research, which often relies on cryptography and concentrates on algorithmic design and theoretical proofs, and trusted execution solutions, which depend on hardware-based isolation and trusted computing. We hope that, through the lens of our efforts, insights can be furnished for future research in related areas. / Ph. D.
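For context on the encrypted-deduplication work described above, the sketch below shows classic convergent (message-locked) encryption, in which the chunk key is derived from the chunk itself so that identical chunks deduplicate to identical ciphertexts; this is the textbook baseline, not the dissertation's scheme, which additionally mitigates the leakage that such deterministic encryption causes for low-entropy chunks. It assumes the third-party cryptography package.

```python
# Requires the `cryptography` package (pip install cryptography).
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(chunk: bytes) -> tuple[bytes, bytes]:
    """Encrypt a chunk under a key derived from its own hash (message-locked encryption).

    Identical plaintext chunks produce identical ciphertexts, so the cloud server can
    deduplicate without ever seeing plaintext. A fixed nonce is tolerable here only
    because each derived key is used for exactly one distinct chunk."""
    key = hashlib.sha256(chunk).digest()                 # 32-byte AES key derived from the chunk
    ciphertext = AESGCM(key).encrypt(b"\x00" * 12, chunk, None)
    chunk_id = hashlib.sha256(ciphertext).hexdigest()    # server-side deduplication index
    return chunk_id, ciphertext

id1, _ = convergent_encrypt(b"backup chunk 0001")
id2, _ = convergent_encrypt(b"backup chunk 0001")
print(id1 == id2)   # True: duplicate chunks collapse to one stored ciphertext
```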
40

Computing Compliant Anonymisations of Quantified ABoxes w.r.t. EL Policies

Baader, Franz, Kriegel, Francesco, Nuradiansyah, Adrian, Peñaloza, Rafael 28 December 2021 (has links)
We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL EL. We provide a characterization of compliance of such ABoxes w.r.t. EL policies, and show how optimal compliant anonymisations of non-compliant ABoxes can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
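A toy instance (constructed here for illustration, not taken from the paper) shows what compliance of a quantified ABox w.r.t. an EL policy means:

```latex
% Requires amsmath. The individual x is existentially quantified (anonymised);
% the policy forbids revealing that a named individual is r-related to an instance of A.
\[
  \mathcal{A} \;=\; \exists x.\,\bigl\{\, r(a,x),\; A(x),\; B(a) \,\bigr\},
  \qquad
  \mathcal{P} \;=\; \{\, \exists r.A \,\}
\]
\[
  \mathcal{A} \models (\exists r.A)(a)
  \quad\Longrightarrow\quad
  \mathcal{A} \text{ is not compliant with } \mathcal{P}.
\]
```

Here the quantified ABox entails (∃r.A)(a) for the named individual a, so it violates the policy; deleting the assertion A(x) removes that entailment and yields a compliant anonymisation, and the paper characterizes such anonymisations and shows how to compute optimal ones.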
