1.
Confidential Computing in Public Clouds: Confidential Data Translations in hardware-based TEEs: Intel SGX with Occlum support
Yulianti, Sri (January 2021)
As enterprises migrate their data to cloud infrastructure, they increasingly need a flexible, scalable, and secure marketplace for collaborative data creation, analysis, and exchange among enterprises. Security is a prominent research challenge in this context, with a specific question of how two mutually distrusting data owners can share their data. Confidential Computing helps address this question by allowing data computation to be performed inside hardware-based Trusted Execution Environments (TEEs), which we refer to as enclaves: secure memory regions allocated by the CPU. Examples of hardware-based TEEs are Advanced Micro Devices (AMD) Secure Encrypted Virtualization (SEV), Intel Software Guard Extensions (SGX), and Intel Trust Domain Extensions (TDX). Intel SGX is considered the most popular hardware-based TEE since it is widely available in processors targeting desktop and server platforms. Intel SGX can be programmed using a Software Development Kit (SDK) as the development framework and Library Operating Systems (Library OSes) as runtimes. However, communication with software inside the enclave, such as the Library OS, through system calls may incur performance overhead. In this project, we design confidential data transactions among multiple users, using Intel SGX as the TEE hardware and Occlum as the Library OS. We implement the design by letting two clients, acting as data owners, share their data with a server that owns an Intel SGX-capable platform. On the server side, we run machine learning model inference with inputs from both clients inside an enclave. We aim to evaluate Occlum as a memory-safe Library Operating System (OS) that enables secure and efficient multitasking on Intel SGX by measuring two evaluation aspects: performance overhead and security benefits. To evaluate the measurement results, we compare Occlum with other runtimes: baseline Linux and Graphene-SGX. The evaluation results show that our design with Occlum outperforms Graphene-SGX by 4x in terms of performance. To evaluate the security aspects, we propose 11 threat scenarios potentially launched by internal and external attackers against the design on the SGX platform. The results show that Occlum's security features mitigate 10 of the 11 threat scenarios.
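
To make the design concrete, the following is a minimal, self-contained Python sketch of the two-owner inference flow described above. It is illustrative only and not code from the thesis: the function and variable names, the toy linear model, and the Fernet-based symmetric encryption are assumptions that stand in for the attestation-based key provisioning and the real inference workload executed inside the Occlum enclave on the SGX platform.

    # Illustrative sketch (not thesis code) of two data owners sharing encrypted
    # inputs with a server that runs inference on the joint data inside an enclave.
    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    # --- Data owners (clients): each holds its own symmetric key, assumed to have
    # --- been provisioned to the enclave only after successful remote attestation.
    key_a, key_b = Fernet.generate_key(), Fernet.generate_key()
    payload_a = Fernet(key_a).encrypt(json.dumps([5.1, 3.5]).encode())  # owner A's features
    payload_b = Fernet(key_b).encrypt(json.dumps([1.4, 0.2]).encode())  # owner B's features

    # --- Server side (conceptually running inside the Occlum enclave on SGX) ---
    def infer(ciphertext_a: bytes, ciphertext_b: bytes) -> float:
        """Decrypt both owners' inputs and score the joint features with a placeholder model."""
        features = json.loads(Fernet(key_a).decrypt(ciphertext_a)) + \
                   json.loads(Fernet(key_b).decrypt(ciphertext_b))
        weights = [0.4, -0.2, 0.7, 0.1]  # stand-in for the trained ML model
        return sum(w * x for w, x in zip(weights, features))

    print("joint inference result:", infer(payload_a, payload_b))

The key point the sketch tries to capture is that each owner's plaintext is only ever reconstructed inside the enclave boundary; everything outside sees ciphertext.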
2.
Network and storage stack specialisation for performance
Marinos, Ilias (January 2018)
In order to serve hundreds of millions of users, contemporary content providers employ tens of thousands of servers to scale their systems. The system software in these environments, however, is struggling to keep up with the increase in demand: contemporary network and storage stacks, as well as related APIs (e.g., the BSD socket API), follow a `one-size-fits-all' design, heavily emphasising generality and feature richness at the cost of performance, leaving crucial hardware resources unexploited. Despite considerable prior research into improving I/O performance for conventional stacks, substantial hardware potential still remains unexploited because most of these proposals are fundamentally limited in their scope and effectiveness, as they still have to fit in a general-purpose design. In this dissertation, I argue that specialisation and microarchitectural awareness are necessary in system software design to effectively exploit hardware capabilities and scale I/O performance. In particular, I argue that trading off generality and compatibility allows us to radically re-architect the stack, emphasising application-specific optimisations and efficient data movement throughout the hardware to improve performance. I first demonstrate that conventional general-purpose stacks fail to effectively utilise contemporary hardware while serving critical Internet workloads, and show why modern microarchitectural properties play a critical role in scaling I/O performance. I then identify core decisions in operating system design that, although originally introduced to optimise performance, are now proven redundant or even detrimental. I propose clean-slate, specialised architectures for network and storage stacks designed to exploit modern hardware properties and application domain-specific knowledge in order to sidestep historical bottlenecks in systems I/O performance and achieve great scalability. With thorough evaluation of my systems, I illustrate how specialisation and greater microarchitectural awareness could lead to dramatic performance improvements, which could ultimately translate to improved scalability and reduced capital expenditure simultaneously.
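
As a point of reference (not code from the dissertation), the sketch below shows the conventional BSD-socket request loop, written here in Python, whose socket module wraps that API. Its generic accept/receive/send pattern, with a system call and a kernel-to-user data copy at every step regardless of the workload, is an instance of the one-size-fits-all behaviour the dissertation argues leaves hardware potential unexploited.

    # Conventional, general-purpose request loop over the BSD socket API (illustrative only).
    import socket

    def serve(host: str = "127.0.0.1", port: int = 8080) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, _addr = srv.accept()        # system call per connection
                with conn:
                    request = conn.recv(4096)     # data copied from kernel to user space
                    body = str(len(request)).encode()
                    # Application-agnostic response path; a specialised stack would fuse
                    # these steps with the application and avoid the generic copies.
                    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                                 + str(len(body)).encode() + b"\r\n\r\n" + body)

    if __name__ == "__main__":
        serve()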