1

Confidential Computing in Public Clouds: Confidential Data Transactions in hardware-based TEEs: Intel SGX with Occlum support

Yulianti, Sri. January 2021
As enterprises migrate their data to cloud infrastructure, they increasingly need a flexible, scalable, and secure marketplace for collaborative data creation, analysis, and exchange. Security is a prominent research challenge in this context, and one specific question is how two mutually distrusting data owners can share their data. Confidential Computing helps address this question by allowing data to be processed inside hardware-based Trusted Execution Environments (TEEs), which we refer to as enclaves: regions of secured memory allocated by the CPU. Examples of hardware-based TEEs are Advanced Micro Devices (AMD) Secure Encrypted Virtualization (SEV), Intel Software Guard Extensions (SGX), and Intel Trust Domain Extensions (TDX). Intel SGX is considered the most popular hardware-based TEE, since it is widely available in processors targeting desktop and server platforms. Intel SGX can be programmed using a Software Development Kit (SDK) as the development framework and Library Operating Systems (Library OSes) as runtimes. However, communication between software inside the enclave, such as the Library OS, and the outside world through system calls may result in performance overhead. In this project, we design confidential data transactions among multiple users, using Intel SGX as the TEE hardware and Occlum as the Library OS. We implement the design by letting two clients, acting as data owners, share their data with a server that owns an Intel SGX-capable platform. On the server side, we run machine learning model inference with inputs from both clients inside an enclave. We aim to evaluate Occlum as a memory-safe Library Operating System (OS) that enables secure and efficient multitasking on Intel SGX, measuring two aspects: performance overhead and security benefits. To evaluate the measurement results, we compare Occlum with other runtimes: baseline Linux and Graphene-SGX. The evaluation results show that our design with Occlum outperforms Graphene-SGX by 4x in terms of performance. To evaluate the security aspects, we propose 11 threat scenarios potentially launched by both internal and external attackers against the design on the SGX platform. The results show that Occlum's security features mitigate 10 of the 11 threat scenarios.
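
The data flow described in this abstract can be pictured with a short sketch. This is a minimal illustration, not code from the thesis: the Fernet symmetric cipher stands in for whatever channel encryption the design uses, the per-client key table stands in for secrets that would only be released after the server's enclave passes remote attestation, and the names client_submit and enclave_infer (and the trivial "model") are hypothetical.

    # Illustrative sketch: two data owners send encrypted inputs to an inference
    # server that would run inside an SGX enclave under Occlum. Keys here stand in
    # for secrets provisioned only after remote attestation of the enclave succeeds.
    from cryptography.fernet import Fernet  # assumed available; any AEAD scheme works

    # Performed once per client, after verifying the enclave's attestation evidence.
    client_keys = {"client_a": Fernet.generate_key(), "client_b": Fernet.generate_key()}

    def client_submit(client_id: str, features: list[float]) -> bytes:
        """Client side: encrypt the model input before it leaves the client machine."""
        payload = ",".join(str(x) for x in features).encode()
        return Fernet(client_keys[client_id]).encrypt(payload)

    def enclave_infer(ciphertexts: dict[str, bytes]) -> float:
        """Server side (conceptually inside the enclave): decrypt both inputs and
        run the model, so plaintext only ever exists in enclave memory."""
        inputs = []
        for client_id, blob in ciphertexts.items():
            raw = Fernet(client_keys[client_id]).decrypt(blob)
            inputs.extend(float(x) for x in raw.decode().split(","))
        # Stand-in for the real machine learning model inference.
        return sum(inputs) / len(inputs)

    result = enclave_infer({
        "client_a": client_submit("client_a", [0.1, 0.2]),
        "client_b": client_submit("client_b", [0.3, 0.4]),
    })
    print(result)
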
2

Systems Support for Trusted Execution Environments

Trach, Bohdan. 09 February 2022
Cloud computing has become a default choice for data processing by both large corporations and individuals due to its economy of scale and ease of system management. However, the question of trust and trustworthy computing inside cloud environments has long been neglected in practice and is further exacerbated by the proliferation of AI and its use for processing sensitive user data. Attempts to implement mechanisms for trustworthy computing in the cloud have previously remained theoretical due to the lack of hardware primitives in commodity CPUs, while the combination of Secure Boot, TPMs, and virtualization has seen only limited adoption. The situation changed in 2016, when Intel introduced Software Guard Extensions (SGX) and its enclaves to x86 CPUs: for the first time, it became possible to build trustworthy applications relying on a commonly available technology. However, Intel SGX posed challenges to practitioners, who discovered the limitations of this technology, from limited support for legacy applications and for integrating SGX enclaves into existing systems, to performance bottlenecks in communication, startup, and memory utilization. In this thesis, our goal is to enable trustworthy computing in the cloud by relying on these imperfect SGX primitives. To this end, we develop and evaluate solutions to issues stemming from the limited systems support for Intel SGX: we investigate mechanisms for runtime support of POSIX applications with SCONE, an efficient SGX runtime library developed with the performance limitations of SGX in mind. We further develop this topic with FFQ, a concurrent queue for SCONE's asynchronous system call interface. ShieldBox is our study of the interplay of kernel bypass and trusted execution technologies for Network Function Virtualization (NFV), which also tackles the problem of low-latency clocks inside the enclave. The last two systems, Clemmys and T-Lease, are built on the more recent SGXv2 ISA extension. In Clemmys, SGXv2 allows us to significantly reduce the startup time of SGX-enabled functions inside a Function-as-a-Service platform. Finally, in T-Lease we solve the problem of trusted time by introducing a trusted lease primitive for distributed systems. We evaluate all of these systems and show that they can be used in practice with minimal overhead and can be combined with both legacy systems and other SGX-based solutions. In the course of the thesis, we enable trusted computing for individual applications, high-performance network functions, and distributed computing frameworks, making the vision of trusted cloud computing a reality.
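
The communication bottleneck mentioned in this abstract is commonly worked around with an asynchronous system-call interface: threads inside the enclave enqueue requests, and an untrusted thread outside the enclave executes the real system calls and posts results back, so the enclave does not exit on every call. The sketch below is a plain Python analogue of that pattern; it is not SCONE's or FFQ's actual implementation, and FFQ's lock-free queue is replaced here by an ordinary thread-safe queue.

    # Illustrative sketch of an asynchronous system-call interface: the "enclave"
    # side enqueues requests and polls for results; the untrusted worker performs
    # the real syscalls outside. Not the SCONE/FFQ code, just the shape of the idea.
    import os
    import queue
    import threading

    request_q: "queue.Queue[tuple[int, str, tuple]]" = queue.Queue()
    results: dict[int, object] = {}
    done = threading.Event()

    def untrusted_worker() -> None:
        """Runs outside the (conceptual) enclave and performs the real syscalls."""
        while not done.is_set() or not request_q.empty():
            try:
                req_id, name, args = request_q.get(timeout=0.1)
            except queue.Empty:
                continue
            if name == "getpid":
                results[req_id] = os.getpid()
            elif name == "write":
                fd, data = args
                results[req_id] = os.write(fd, data)

    def enclave_syscall(req_id: int, name: str, *args) -> object:
        """Enclave side: enqueue the request and poll for the result instead of
        performing a costly enclave exit for every system call."""
        request_q.put((req_id, name, args))
        while req_id not in results:
            pass  # busy-wait stands in for the enclave-side completion check
        return results[req_id]

    worker = threading.Thread(target=untrusted_worker, daemon=True)
    worker.start()
    print("pid:", enclave_syscall(1, "getpid"))
    enclave_syscall(2, "write", 1, b"hello from the enclave side\n")
    done.set()
    worker.join()
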
3

Confidential Federated Learning with Homomorphic Encryption

Wang, Zekun. January 2023
Federated Learning (FL), a variant of Machine Learning (ML) technology, has emerged as a prevalent method for multiple parties to collaboratively train ML models in a distributed manner with the help of a central server, normally supplied by a Cloud Service Provider (CSP). Nevertheless, many existing vulnerabilities threaten the advantages of FL and pose risks to data security and privacy, such as data leakage, misuse of the central server, or eavesdroppers illicitly seeking sensitive information. Promising advanced cryptographic technologies such as Homomorphic Encryption (HE) and Confidential Computing (CC) can be used to enhance the security and privacy of FL. However, developing a framework that seamlessly combines these technologies to provide confidential FL while retaining efficiency remains an ongoing challenge. In this degree project, we develop a lightweight and user-friendly FL framework called Heflp, which integrates HE and CC to ensure data confidentiality and integrity throughout the entire FL lifecycle. Heflp supports four HE schemes to fit diverse user requirements: three pre-existing schemes and one optimized scheme of our own design, named Flashev2, which achieves the highest time and space efficiency in most scenarios. The time and memory overheads of all four HE schemes are also evaluated, and their respective pros and cons are compared. To validate its effectiveness, Heflp is tested on the MNIST dataset and the Threat Intelligence dataset provided by CanaryBit; the results demonstrate that it preserves data privacy without compromising model accuracy.
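
The core idea of combining HE with FL can be illustrated with additively homomorphic aggregation: clients encrypt their model updates, the server adds the ciphertexts without ever decrypting them, and only a key holder recovers the aggregate. The sketch below uses the python-paillier (phe) package as a stand-in additive scheme under that assumption; it is not one of the four schemes (including Flashev2) implemented in Heflp, and the helper names are illustrative.

    # Illustrative sketch of homomorphically encrypted aggregation in federated
    # learning: the server sums encrypted client updates without the decryption key.
    from phe import paillier  # assumed stand-in additively homomorphic scheme

    public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

    def client_update(weights: list[float]) -> list:
        """Client side: encrypt each weight of the local model update."""
        return [public_key.encrypt(w) for w in weights]

    def server_aggregate(updates: list[list]) -> list:
        """Server side: add encrypted updates element-wise; no key material needed."""
        total = updates[0]
        for upd in updates[1:]:
            total = [a + b for a, b in zip(total, upd)]
        return total

    encrypted_sum = server_aggregate([
        client_update([0.10, -0.20, 0.30]),   # client 1's local update
        client_update([0.05,  0.15, -0.10]),  # client 2's local update
    ])
    # Decryption happens on the client side (or inside a TEE), never on the server.
    averaged = [private_key.decrypt(c) / 2 for c in encrypted_sum]
    print(averaged)  # approximately [0.075, -0.025, 0.1]
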
