41.
Hardening High-Assurance Security Systems with Trusted Computing. Ozga, Wojciech, 12 August 2022.
We are living in the time of a digital revolution in which the world we know changes beyond recognition every decade. The positive aspect is that these changes also drive progress in the quality and availability of digital assets crucial to our societies. To name a few examples: broadly available communication channels allowing quick exchange of knowledge over long distances, systems controlling the automatic sharing and distribution of renewable energy in international power grid networks, easily accessible applications for early disease detection enabling self-examination without burdening the health service, and governmental systems assisting citizens in settling official matters without leaving their homes. Unfortunately, digitalization also opens opportunities for malicious actors to threaten our societies if they gain control over these assets by exploiting vulnerabilities in the complex computing systems that underpin them. Protecting these systems, which are called high-assurance security systems, is therefore of utmost importance.
For decades, humanity has struggled to find methods to protect high-assurance security systems. Advances in the computing systems security domain have led to the popularization of hardware-assisted security techniques, nowadays available in commodity computers, which open perspectives for building more sophisticated defense mechanisms at lower cost. However, none of these techniques is a silver bullet: each targets particular use cases, suffers from limitations, and is vulnerable to specific attacks. I argue that some of these techniques are synergistic and, when used together, help overcome limitations and mitigate specific attacks. My reasoning is supported by regulations that legally bind owners of high-assurance security systems to provide strong security guarantees. These requirements can be fulfilled with the help of diverse technologies that have been standardized in recent years.
In this thesis, I introduce new techniques for hardening high-assurance security systems that execute in remote execution environments, such as public and hybrid clouds. I implemented these techniques as part of a framework that provides technical assurance that high-assurance security systems execute in a specific data center, on top of a trustworthy operating system, in a virtual machine controlled by a trustworthy hypervisor, or in strong isolation from other software. I demonstrated the practicality of my approach by leveraging the framework to harden real-world applications, such as machine learning applications in the eHealth domain. The evaluation shows that the framework is practical: it incurs low performance overhead (<6%), supports software updates, requires no changes to the legacy application's source code, and can be tailored to individual trust boundaries with the help of security policies.
The framework consists of a decentralized monitoring system that offers better scalability than traditional centralized monitoring systems. Each monitored machine runs a piece of code that verifies that the machine's integrity and geolocation conform to the given security policy. This piece of code, which serves as a trusted anchor on that machine, executes inside a trusted execution environment (Intel SGX) to protect itself from the untrusted host, and uses trusted computing techniques, such as the Trusted Platform Module (TPM), secure boot, and the Integrity Measurement Architecture (IMA), to attest to the load-time and runtime integrity of the surrounding operating system running on a bare-metal machine or inside a virtual machine. The trusted anchor implements my novel, formally proven protocol enabling detection of the TPM cuckoo attack.
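To make the attestation step concrete, the sketch below shows, in simplified Python, how a verifier might replay a measured-boot event log into a PCR value and check it against a policy whitelist. It is an illustration under stated assumptions, not the framework's actual code: verification of the TPM quote's signature and the cuckoo-attack defense are omitted, and all function names are hypothetical.

```python
import hashlib
import hmac

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new_value = SHA-256(old_value || measurement)
    return hashlib.sha256(pcr + measurement).digest()

def replay_event_log(event_log: list) -> bytes:
    # Recompute the PCR value implied by the measured-boot event log.
    pcr = b"\x00" * 32
    for measurement in event_log:
        pcr = extend_pcr(pcr, measurement)
    return pcr

def conforms_to_policy(quoted_pcr: bytes, event_log: list,
                       allowed_digests: set) -> bool:
    # The log must replay to the PCR value the TPM signed in its quote...
    if not hmac.compare_digest(replay_event_log(event_log), quoted_pcr):
        return False
    # ...and every measured component must be on the policy's whitelist.
    return all(m in allowed_digests for m in event_log)
```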
The framework also implements a key distribution protocol that, depending on the individual security requirements, shares cryptographic keys only with high-assurance security systems executing in predefined security settings, i.e., inside trusted execution environments or inside an integrity-enforced operating system. Such an approach is particularly appealing in the context of machine learning systems, where some algorithms, like machine learning model training, require temporary access to large computing power. These algorithms can execute inside a dedicated, trusted data center at higher performance because they are not limited by the security features required in a shared execution environment. The evaluation of the framework showed that training a machine learning model on real-world datasets achieved 0.96x native execution performance on the GPU and a speedup of up to 1560x compared to the state-of-the-art SGX-based system.
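The decision logic of such conditional key release can be sketched as follows. This is a minimal, hypothetical rendition: the report fields, policy structure, and function names are illustrative, and a real protocol would wrap the key under a session key negotiated during attestation rather than returning it in the clear.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttestationReport:
    environment: str        # e.g., "sgx-enclave" or "ima-enforced-os"
    measurement: bytes      # enclave measurement or OS integrity digest
    signature_valid: bool   # outcome of verifying the report's signature

@dataclass
class KeyPolicy:
    allowed_environments: set
    allowed_measurements: set

def release_key(report: AttestationReport, policy: KeyPolicy,
                key: bytes) -> Optional[bytes]:
    # Hand out the key only to environments that satisfy the policy.
    if not report.signature_valid:
        return None
    if report.environment not in policy.allowed_environments:
        return None
    if report.measurement not in policy.allowed_measurements:
        return None
    return key  # in practice, wrapped under a key agreed during attestation
```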
Finally, I tackled the problem of software updates, which make operating system integrity monitoring unreliable due to false positives: a software update moves the system to an unknown (untrusted) state, which is then reported as an integrity violation. I solved this problem by introducing a proxy to a software repository that sanitizes software packages so that they can be safely installed. Sanitization consists of predicting and certifying the operating system's future state, i.e., its state after the specific updates are installed. The evaluation of this approach showed that it supports 99.76% of the packages available in the Alpine Linux main and community repositories.
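The prediction step can be illustrated with a short sketch that digests every file a package would install, producing the measurements the monitor should whitelist after the update. This is a simplification under assumptions: it treats the package as a plain gzipped tarball and ignores package metadata, install scripts, and signature checks, which a real sanitizing proxy would have to handle.

```python
import hashlib
import tarfile

def predicted_measurements(package_path: str) -> dict:
    # Digest every regular file the package would install; these digests
    # describe the post-update state the integrity monitor should accept.
    digests = {}
    with tarfile.open(package_path, "r:gz") as pkg:
        for member in pkg.getmembers():
            if member.isfile():
                f = pkg.extractfile(member)
                if f is not None:
                    digests[member.name] = hashlib.sha256(f.read()).hexdigest()
    return digests  # merged into the expected-state whitelist before install
```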
The framework proposed in this thesis is a step forward in verifying and enforcing that high-assurance security systems execute in an environment compliant with regulations. I anticipate that the framework might be further integrated with industry-standard security information and event management tools, as well as other security monitoring mechanisms, to provide a comprehensive solution for hardening high-assurance security systems.
42.
Trusted Execution Environment deployment through cloud virtualization: A project on scalable deployment of virtual machines. Staboli, Luca, January 2022.
In the context of cloud computing, Trusted Execution Environments (TEEs) are isolated areas of application software that can be executed with better security, building a trusted and secure environment that is detached from the rest of the memory. The Trusted Execution Environment is a technology that became available only in the last few years, and it is not widespread yet. This thesis investigates the most popular approaches to building a TEE, namely the process-based and the virtualization-based approaches, and abstracts them as much as possible to design a common infrastructure that can deploy TEEs on an external cloud provider, regardless of which technological approach is used. The thesis is relevant and novel because the project gives the possibility to use different technologies for the deployment, such as Intel SGX and AMD SEV, which are the two main solutions, without being reliant on any particular one. If new technologies or vendors' solutions become popular in the future, they can simply be added to the list of options. The same can be said for the choice of cloud provider. The results show that it is possible to abstract the common features of different TEE technologies and to use a single Application Programming Interface (API) to deploy them. We also ran a performance and quality evaluation, and the results show that the API is performant and meets common quality standards. This tool is useful for the problem owner and for future work on the topic of cloud security.
43.
Using ARM TrustZone for Secure Resource Monitoring of IoT Devices Running Contiki-NG. Georgiou, Nikolaos, January 2023.
The rapid development of Internet of Things (IoT) devices has brought unparalleled convenience and efficiency to our daily lives. However, with this exponential growth comes the pressing need to address the critical security challenges posed by these interconnected devices. IoT devices are typically resource-constrained, lacking the robust computing power and memory capacity of traditional computing systems, which often leads to a lack of adequate security mechanisms and leaves them vulnerable to various attacks. This master's thesis contributes by investigating a secure mechanism that utilizes the hardware isolation provided by the TrustZone technology found in ARM's Cortex-M processors. TrustZone is a hardware-based security extension in ARM processors that enables a secure, isolated environment for executing sensitive code alongside a regular, non-secure operating system. This thesis uses this mechanism to implement a Trusted Execution Environment (TEE) in the secure environment of TrustZone that monitors the resource usage of applications running in the non-secure operating system. The aim of the TEE is to monitor the network communication and the CPU usage of the applications running on the IoT device, protecting its integrity and detecting any abnormal behavior. The implementation is done inside Contiki-NG, a well-known operating system designed for constrained IoT devices. The thesis conducts a comprehensive evaluation of the developed security solution through extensive experiments using two micro-benchmarks. It analyzes the impact of the security mechanism on various aspects of the IoT device, such as runtime overhead, energy consumption, and memory requirements, while taking the resource constraints into account. Furthermore, the effectiveness of the security solution in identifying malicious activities and abnormal behaviors is thoroughly assessed. The findings demonstrate that the TrustZone-based security mechanism introduces relatively minimal overhead to the device's operation, making it a viable option for IoT devices that can accommodate such slight performance impacts. The research sheds light on the critical issue of IoT device security, emphasizing the need for tailored solutions that consider the resource constraints of these devices. It presents an alternative solution that utilizes TrustZone's hardware isolation to effectively monitor the applications running on IoT devices, opening a new approach to securing such devices.
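The monitoring logic such a TEE performs can be sketched at a high level. The Python snippet below is only a conceptual illustration with hypothetical names (AppUsage, UsagePolicy); the actual implementation lives in C inside Contiki-NG's TrustZone secure world and reads real scheduler and network counters.

```python
from dataclasses import dataclass

@dataclass
class AppUsage:
    name: str
    cpu_ticks: int       # CPU time consumed since the last sample
    packets_sent: int    # network packets sent since the last sample

@dataclass
class UsagePolicy:
    max_cpu_ticks: int
    max_packets: int

def check_sample(sample: AppUsage, policy: UsagePolicy) -> list:
    # Compare one application's counters against the policy thresholds.
    alerts = []
    if sample.cpu_ticks > policy.max_cpu_ticks:
        alerts.append(f"{sample.name}: abnormal CPU usage")
    if sample.packets_sent > policy.max_packets:
        alerts.append(f"{sample.name}: abnormal network activity")
    return alerts
```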
44.
Authoritative and Unbiased Responses to Geographic Queries. Adhikari, Naresh, 01 May 2020.
Trust in information systems stems from two key properties of responses to queries regarding the state of the system, viz., i) authoritativeness and ii) unbiasedness (AU). That a response is authoritative implies that i) the provider (source) of the response and ii) the chain of delegations through which the provider obtained the authority to respond can both be verified. Unbiasedness implies that no system data relevant to the query is deliberately or accidentally suppressed. The need to guarantee these two properties stems from the impracticality for the verifier of exhaustively verifying the correctness of every system process and the integrity of the platform on which system processes are executed. For instance, the integrity of a process may be jeopardized by i) bugs (attacks) in computing hardware like Random Access Memory (RAM), input/output channels (I/O), and the Central Processing Unit (CPU), ii) exploitable defects in an operating system, iii) logical bugs in program implementation, and iv) a wide range of other embedded malfunctions. A first step in ensuring AU properties of geographic queries is to ensure AU responses to a specific type of geographic query, viz., point-location. The focus of this dissertation is on strategies to leverage assured point-location for i) ensuring authoritativeness and unbiasedness of responses to a wide range of geographic queries, and ii) useful applications like Secure Queryable Dynamic Maps (SQDM) and a trustworthy redistricting protocol. The specific strategies used for guaranteeing AU properties of geographic services include i) the use of novel Merkle hash tree-based data structures, and ii) blockchain networks to guarantee the integrity of the processes.
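The Merkle hash tree mechanics behind such authoritative responses can be sketched as follows: the responder commits to all records under a single root hash, and each response carries a membership proof the querier can verify. This is the generic construction, shown under simplifying assumptions, not the dissertation's exact data structure.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    # Build the tree bottom-up; the root commits to every record.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    # proof: list of (sibling_hash, side) pairs from leaf to root,
    # where side is "L" if the sibling sits to the left.
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# Example: in a 4-leaf tree, leaves[2] is proven by
# [(h(leaves[3]), "R"), (h(h(leaves[0]) + h(leaves[1])), "L")]
```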
45.
ADVANCED TELEMETRY PROCESSING SYSTEM (ATPS). Finegan, Brian H.; Singer, Gary, October 1994.
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California

The Advanced Telemetry Processing System (ATPS) is the result of a joint development project between Harris Corporation and Veda Systems, Incorporated. The mission of the development team was to produce a high-performance, cost-effective, supportable telemetry system, one that would utilize commercial-off-the-shelf (COTS) hardware and software, thereby eliminating the costly customization typically required for range and telemetry applications. A critical element in the 'cost-effective, supportable' equation was the ability to easily incorporate system performance upgrades as well as future hardware and software technology advancements.

The ATPS combines advanced hardware and software technology, including a high-speed, top-down data management environment, a mature man-machine interface, a B1-level trusted operating system and network, and stringent real-time multiprocessing capabilities, into a single, fully integrated, 'open' platform. In addition, the system incorporates a unique direct memory transfer feature that allows incoming data to pass directly into local memory space, where it can be displayed and analyzed, thereby reducing I/O bottlenecks and freeing processors for other specialized tasks.
46.
Trustworthy services through attestation. Lyle, John, January 2011.
Remote attestation is a promising mechanism for assurance of distributed systems. It allows users to identify the software running on a remote system before trusting it with an important task. This functionality is arriving at exactly the right time, as security-critical systems, such as healthcare and financial services, are increasingly being hosted online. However, attestation has limitations and has been criticized for being impractical. Too much effort is required for too little reward: a large, rapidly-changing list of software must be maintained by users, who then have insufficient information to make a trust decision. As a result, attestation is rarely used today. This thesis evaluates attestation in a service-oriented context to determine whether it can be made practical for assurance of servers rather than client machines. There are reasons to expect that it can: servers run fewer programs, and the overhead of integrity reporting is more appropriate on a server which may be protecting important assets. However, a literature review and new experiments show that problems remain, many stemming from the large trusted computing base as well as the lack of information linking software identity to expected behaviour. Three novel solutions are proposed. Web service middleware is restructured to minimize the software running at the endpoint, thus lowering the effort for the relying party. A key advantage of the proposed two-tier structure is that strong integrity guarantees can be made without loss of conformance with service standards. Secondly, a program modelling approach is investigated to further automate the attestation and verification process and add more information about system behaviour. Several sets of programs are modelled, including the bootloader, a web service and a menu-based shell. Finally, service behaviour is attested through source code properties established during compilation. This provides a trustworthy and verifiable connection between the identity of the software on a service platform and its expected runtime behaviour. This approach is applicable to any programming language and verification method, and has the advantage of not requiring a runtime monitor. These contributions are evaluated using an example e-voting service to show the level of assurance attestation can provide. Overall, this thesis demonstrates that attestation can be made significantly more practical through the described new techniques. Although some problems remain, with further improvements to operating systems and better software engineering methods, attestation may become a trustworthy and reliable assurance mechanism for web services.
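The idea of linking software identity to expected behaviour can be sketched as a lookup from attested measurements to compile-time property certificates. The snippet below is a hypothetical illustration of that mapping, not the thesis's actual scheme; the store contents, property names, and composition rule (a simple union) are all assumptions.

```python
# Hypothetical certificate store mapping a build's measurement to the
# source-code properties certified for it during compilation.
PROPERTY_CERTIFICATES = {
    "digest-of-service-build": {"input-validation-proved", "no-raw-db-access"},
    "digest-of-runtime-build": {"memory-safe-runtime"},
}

def certified_properties(measurements: list):
    # Refuse a trust decision if any measured component is uncertified;
    # otherwise report the combined properties. Real schemes define the
    # composition of per-component properties more carefully than this union.
    props = set()
    for m in measurements:
        cert = PROPERTY_CERTIFICATES.get(m)
        if cert is None:
            return None          # unknown software: no basis for trust
        props |= cert
    return props
```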
47.
Towards a trusted grid architecture. Cooper, Andrew, January 2010.
The malicious host problem is challenging in distributed systems such as grids and clouds. Rival organisations may share the same physical infrastructure. Administrators might deliberately or accidentally compromise users' data. The thesis concerns the development of a security architecture that allows users to place a high degree of trust in remote systems to process their data securely. The problem is tackled through a new security layer that ensures users' data can only be accessed within a trusted execution environment. Access to encrypted programs and data is authorised by a key management service using trusted computing attestation. Strong data integrity and confidentiality protection on remote hosts is provided by the job security manager virtual machine. The trusted grid architecture supports the enforcement of digital rights management controls. Subgrids allow users to define a strong trusted boundary for delegated grid jobs. Recipient keys enforce a trusted return path for job results to help users create secure grid workflows. Mandatory access controls allow stakeholders to mandate the software that is available to grid users. A key goal of the new architecture is backwards compatibility with existing grid infrastructure and data. This is achieved using a novel virtualisation architecture where the security layer is pushed down to the remote host, so it does not need to be pre-installed by the service provider. A new attestation scheme, called origin attestation, supports the execution of unmodified, legacy grid jobs. These features will ease the transition to a trusted grid and help make it practical for deployment on a global scale.
48.
Trusted Computing & Digital Rights Management: Theory & Effects. Gustafsson, Daniel; Stewén, Tomas, January 2004.
Trusted Computing Platform Alliance (TCPA), now known as the Trusted Computing Group (TCG), is a trusted computing initiative created by IBM, Microsoft, HP, Compaq, Intel and several other smaller companies. Their goal is to create a secure trusted platform with the help of new hardware and software. TCG has developed a new chip, the Trusted Platform Module (TPM), that is the core of this initiative and is attached to the motherboard. An analysis is made of TCG's specifications, and a summary is written of the different parts and functionalities implemented by this group. Microsoft is developing an operating system that can make use of TCG's TPM and other new hardware. This operating system initiative is called NGSCB (Next Generation Secure Computing Base), formerly known as Palladium. This implementation makes use of TCG's main functionalities with a few additions. An analysis is made of Microsoft's NGSCB specifications, and a summary is written of how this operating system will work. NGSCB is expected in version 2 of Microsoft's next operating system, Longhorn, which will be released no sooner than 2006. Intel has developed the hardware needed for a trusted platform and has produced a template for how operating system developers should implement their OS to make use of this hardware. TCG's TPM is also a part of the system. This system is called LaGrande. An analysis is also made of this trusted computing initiative and a summary of it is written. This initiative is very similar to NGSCB, but Microsoft and Intel are not willing to comment on that. DRM (Digital Rights Management) is a technology that protects digital content (audio, video, documents, e-books, etc.) with rights. A DRM system is a system that manages the rights connected to the content and secures them by encryption. First, Microsoft's RMS (Rights Management System), which controls the rights of documents within a company, is considered. Second, a general digital media DRM system is considered that handles e-commerce for digital content. Analysis and discussion are made of the effects TC (Trusted Computing) and DRM could have for home users, companies, and suppliers of TC hardware and software. The different questions stated in the problem formulation are also discussed and considered. There are good and bad effects for every group, but if TC works as stated today, the pros will outweigh the cons. The same goes for DRM on a TC platform. Since the benefits outweigh the drawbacks, we think that TC should be completed and taken into production. TC and DRM provide a good base of security, and it is then up to developers to use this in a good and responsible way.
49.
Improving System Security Through TCB Reduction. Kauer, Bernhard, 16 April 2015.
The OS (operating system) is the primary target of today's attacks. A single exploitable defect can be sufficient to break the security of the system and give full control over all the software on the machine. Because current operating systems are too large to be defect-free, the best approach to improving system security is to reduce their code to more manageable levels. This work shows how the security-critical part of the OS, the so-called TCB (Trusted Computing Base), can be reduced from millions to fewer than a hundred thousand lines of code to achieve these security goals. Shrinking the software stack by more than an order of magnitude is an open challenge, since no single technique can currently achieve this.
We therefore followed a holistic approach and improved the design as well as the implementation of several system layers, starting with a new OS called NOVA. NOVA provides a small TCB both for newly written applications and for legacy code running inside virtual machines. Virtualization is thereby the key technique to ensure that compatibility requirements will not increase the minimal TCB of our system. The main contribution of this work is to show how the virtual machine monitor for NOVA was implemented with significantly fewer lines of code without affecting the performance of its guest OS. To reduce the overall TCB of our system, other parts had to be improved as well. Additional contributions are the simplification of the OS debugging interface, the reduction of the boot stack, and a new programming language called B1 that can be more easily compiled.
50.
Protection of Hardware Accelerators for Symmetric Cryptography. Guilley, Sylvain, 14 December 2012.
Masking and hiding countermeasures make attacks on symmetric encryption implementations harder to mount. Both are also easily implementable (in an automatable way) in EDA (Electronic Design Automation) flows for ASICs (Application Specific Integrated Circuits) or FPGAs (Field Programmable Gate Arrays), albeit with different levels of expertise required depending on the countermeasure. Masking provides 'dynamic' protection that relies on mixing randomness into the computation. We show how to optimize the use of this randomness through an encoding that compresses information leakage (leakage squeezing). The limits of masking are studied with statistical tools, by analyzing probability distributions. The primary tool for evaluating the imperfections of DPL (Dual-rail with Precharge Logic) styles is stochastic analysis, which attempts to model 'static' leakage combining several bits. The drawback of masking is that attacks are structural to the use of randomness: if an attack succeeds on one part of the key (e.g., one byte), then a priori all the other bytes are consistently vulnerable to the same attack. The situation is different with DPL: in case of an implementation problem, only the key bytes involved in the unbalanced parts are compromised, not the whole key. An even less costly way to protect cryptographic implementations against physical attacks is resilience: a clever use of a priori unprotected primitives that still ensures the protection of secrets. The advantage of resilient approaches is their simplicity of implementation and, ideally, their provability. The main drawback is that their usage constraints are often incompatible with current standards. We therefore believe that more research in this area could, on the whole, benefit the embedded systems security industry.
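As an illustration of the masking principle discussed above, the sketch below implements first-order Boolean masking of an 8-bit value in Python: the secret is split into two random shares, linear (XOR) operations are applied share-wise, and neither share alone reveals the secret. It illustrates only the basic principle, not the leakage-squeezing encodings of the thesis; nonlinear operations (e.g., S-boxes) require dedicated masked gadgets not shown here.

```python
import secrets

def mask(x: int) -> tuple:
    # Split an 8-bit secret x into two shares; each share alone is
    # uniformly distributed and statistically independent of x.
    r = secrets.randbits(8)
    return (x ^ r, r)

def masked_xor(a: tuple, b: tuple) -> tuple:
    # Linear operations apply share-wise: the result shares a XOR b
    # without ever recombining either secret.
    return (a[0] ^ b[0], a[1] ^ b[1])

def unmask(shares: tuple) -> int:
    return shares[0] ^ shares[1]
```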