About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Hardware-Assisted Dependable Systems

Kuvaiskii, Dmitrii 22 March 2018 (has links) (PDF)
Unpredictable hardware faults and software bugs lead to application crashes, incorrect computations, unavailability of internet services, data losses, malfunctioning components, and consequently financial losses or even loss of human lives. In particular, faults in microprocessors (CPUs) and memory corruption bugs are among the major unresolved issues of today. CPU faults may result in benign crashes and, more problematically, in silent data corruptions that can lead to catastrophic consequences, silently propagating from component to component and finally shutting down the whole system. Similarly, memory corruption bugs (memory-safety vulnerabilities) may result in a benign application crash but may also be exploited by a malicious hacker to gain control over the system or leak confidential data. Both these classes of errors are notoriously hard to detect and tolerate. The usual mitigation strategy is to apply ad-hoc local patches: checksums to protect specific computations against hardware faults and bug fixes to protect programs against known vulnerabilities. This strategy is unsatisfactory since it is prone to errors, requires significant manual effort, and protects only against anticipated faults. At the other extreme, Byzantine Fault Tolerance solutions defend against all kinds of hardware and software errors, but are prohibitively expensive in terms of resources and performance overhead. In this thesis, we examine and propose five techniques to protect against hardware CPU faults and software memory-corruption bugs. All these techniques are hardware-assisted: they use recent advancements in CPU designs and modern CPU extensions. Three of these techniques target hardware CPU faults and rely on specific CPU features: ∆-encoding efficiently utilizes instruction-level parallelism of modern CPUs, Elzar re-purposes Intel AVX extensions, and HAFT builds on Intel TSX instructions. The remaining two target software bugs: SGXBounds detects vulnerabilities inside Intel SGX enclaves, and “MPX Explained” analyzes the recent Intel MPX extension to protect against buffer overflow bugs. Our techniques achieve three goals: transparency, practicality, and efficiency. All our systems are implemented as compiler passes which transparently harden unmodified applications against hardware faults and software bugs. They are practical since they rely on commodity CPUs and require no specialized hardware or operating system support. Finally, they are efficient because they use hardware assistance in the form of CPU extensions to lower performance overhead.
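To give a flavour of the instruction-level redundancy that techniques such as ∆-encoding and HAFT apply at the compiler level, the sketch below duplicates a computation and compares the two results before they are allowed to leave the replicated sphere. This is a hypothetical Python illustration of the general idea only; the actual systems transform LLVM IR and exploit CPU features such as AVX and TSX, and all names here are illustrative.

```python
# Minimal sketch of duplicate-and-compare fault detection, the principle behind
# instruction-level redundancy. Hypothetical illustration; real systems
# duplicate machine instructions rather than Python calls.

class SilentDataCorruption(Exception):
    """Raised when the two redundant copies of a computation disagree."""

def harden(fn):
    """Run fn twice and compare the results before releasing them."""
    def hardened(*args):
        primary = fn(*args)
        shadow = fn(*args)          # redundant shadow execution
        if primary != shadow:       # check before the result leaves the sphere
            raise SilentDataCorruption(f"{fn.__name__}: {primary!r} != {shadow!r}")
        return primary
    return hardened

@harden
def checksum(values):
    return sum(v * 31 for v in values)

if __name__ == "__main__":
    print(checksum([1, 2, 3, 4]))   # both copies agree -> result is released
```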
2

Handling Tradeoffs between Performance and Query-Result Quality in Data Stream Processing

Ji, Yuanzhen 27 March 2018 (has links) (PDF)
Data streams in the form of potentially unbounded sequences of tuples arise naturally in a large variety of domains including finance markets, sensor networks, social media, and network traffic management. The increasing number of applications that require processing data streams with high throughput and low latency has promoted the development of data stream processing systems (DSPS). A DSPS processes data streams with continuous queries, which are issued once and return query results to users continuously as new tuples arrive. For stream-based applications, both the query-execution performance (in terms of, e.g., throughput and end-to-end latency) and the quality of produced query results (in terms of, e.g., accuracy and completeness) are important. However, a DSPS often needs to make tradeoffs between these two requirements, either because of the data imperfection within the streams, or because of the limited computation capacity of the DSPS itself. Performance versus result-quality tradeoffs caused by data imperfection are inevitable, because the quality of the incoming data is beyond the control of a DSPS, whereas tradeoffs caused by system limitations can be alleviated—even erased—by enhancing the DSPS itself. This dissertation seeks to advance the state of the art on handling the performance versus result-quality tradeoffs in data stream processing caused by these two factors. For tradeoffs caused by data imperfection, this dissertation focuses on the typical data-imperfection problem of stream disorder and proposes the concept of quality-driven disorder handling (QDDH). QDDH enables a DSPS to make flexible and user-configurable tradeoffs between the end-to-end latency and the query-result quality when dealing with stream disorder. Moreover, compared to existing disorder handling approaches, QDDH can significantly reduce the end-to-end latency, and at the same time provide users with desired query-result quality. In this dissertation, a generic buffer-based QDDH framework and three instantiations of the generic framework for distinct query types are presented. For tradeoffs caused by system limitations, this dissertation proposes a system-enhancement approach that combines the row-oriented and the column-oriented data layout and processing techniques in data stream processing to improve the throughput. To fully exploit the potential of such hybrid execution of continuous queries, a static, cost-based query optimizer is introduced. The optimizer works at the operator level and takes the unique property of execution plans of continuous queries—feasibility—into account.
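As a rough illustration of buffer-based disorder handling, the sketch below reorders late tuples inside a bounded buffer before they reach a downstream operator; the buffer size is the knob that trades end-to-end latency against result completeness. This is a hypothetical simplification, not the quality-driven controller of the dissertation, which sizes the buffer from a user-specified result-quality target.

```python
import heapq

class ReorderBuffer:
    """Toy disorder-handling buffer: holds up to `size` tuples and releases
    them in timestamp order. A larger buffer yields more complete, in-order
    results at the cost of higher end-to-end latency. Hypothetical sketch."""

    def __init__(self, size):
        self.size = size
        self.heap = []   # min-heap ordered by tuple timestamp

    def insert(self, timestamp, payload):
        heapq.heappush(self.heap, (timestamp, payload))
        released = []
        while len(self.heap) > self.size:
            released.append(heapq.heappop(self.heap))
        return released  # tuples emitted to the downstream operator

    def flush(self):
        return [heapq.heappop(self.heap) for _ in range(len(self.heap))]

if __name__ == "__main__":
    stream = [(1, "a"), (3, "c"), (2, "b"), (6, "f"), (4, "d")]  # disordered
    buf = ReorderBuffer(size=2)
    out = []
    for ts, val in stream:
        out += buf.insert(ts, val)
    out += buf.flush()
    print(out)  # timestamp order restored within the buffer's reach
```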
3

Virtualized Reconfigurable Resources and Their Secured Provision in an Untrusted Cloud Environment

Genßler, Paul R. 09 January 2018 (has links) (PDF)
The cloud computing business grows year after year. To keep up with increasing demand and to offer more services, data center providers are always searching for novel architectures. One of them is the FPGA, reconfigurable hardware with high compute power and energy efficiency. But some clients cannot make use of the remote processing capabilities. Not every involved party is trustworthy and the complex management software has potential security flaws. Hence, clients’ sensitive data or algorithms cannot be sufficiently protected. In this thesis, state-of-the-art hardware, cloud, and security concepts are analyzed and combined. On one side are reconfigurable virtual FPGAs. They are a flexible resource and fulfill the cloud characteristics at the price of security. But on the other side is a strong requirement for said security. To provide it, an immutable controller is embedded, enabling a direct, confidential, and secure transfer of clients’ configurations. This establishes a trustworthy compute space inside an untrusted cloud environment. Clients can securely transfer their sensitive data and algorithms without involving vulnerable software or a data center provider. This concept is implemented as a prototype. Based on it, necessary changes to current FPGAs are analyzed. To fully enable reconfigurable yet secure hardware in the cloud, a new hybrid architecture is required. / Das Geschäft mit dem Cloud Computing wächst Jahr für Jahr. Um mit der steigenden Nachfrage mitzuhalten und neue Angebote zu bieten, sind Betreiber von Rechenzentren immer auf der Suche nach neuen Architekturen. Eine davon sind FPGAs, rekonfigurierbare Hardware mit hoher Rechenleistung und Energieeffizienz. Aber manche Kunden können die ausgelagerten Rechenkapazitäten nicht nutzen. Nicht alle Beteiligten sind vertrauenswürdig und die komplexe Verwaltungssoftware ist anfällig für Sicherheitslücken. Daher können die sensiblen Daten dieser Kunden nicht ausreichend geschützt werden. In dieser Arbeit werden modernste Hardware, Cloud und Sicherheitskonzept analysiert und kombiniert. Auf der einen Seite sind virtuelle FPGAs. Sie sind eine flexible Ressource und haben Cloud Charakteristiken zum Preis der Sicherheit. Aber auf der anderen Seite steht ein hohes Sicherheitsbedürfnis. Um dieses zu bieten ist ein unveränderlicher Controller eingebettet und ermöglicht eine direkte, vertrauliche und sichere Übertragung der Konfigurationen der Kunden. Das etabliert eine vertrauenswürdige Rechenumgebung in einer nicht vertrauenswürdigen Cloud Umgebung. Kunden können sicher ihre sensiblen Daten und Algorithmen übertragen ohne verwundbare Software zu nutzen oder den Betreiber des Rechenzentrums einzubeziehen. Dieses Konzept ist als Prototyp implementiert. Darauf basierend werden nötige Änderungen von modernen FPGAs analysiert. Um in vollem Umfang eine rekonfigurierbare aber dennoch sichere Hardware in der Cloud zu ermöglichen, wird eine neue hybride Architektur benötigt.
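The core of the proposed architecture is that the client seals its FPGA configuration for the embedded controller, so neither the cloud management software nor the provider can read or tamper with it in transit. The sketch below shows this end-to-end pattern with an authenticated cipher; it is a hypothetical illustration using the third-party `cryptography` package, and the function names, key handling, and bitstream format are assumptions rather than the controller's actual protocol.

```python
# Hypothetical sketch: a client seals its FPGA configuration (bitstream) so
# that only the immutable controller next to the virtual FPGA can decrypt it.
# Requires the third-party `cryptography` package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_bitstream(bitstream: bytes, controller_key: bytes) -> bytes:
    """Client side: encrypt and authenticate the configuration."""
    nonce = os.urandom(12)
    return nonce + AESGCM(controller_key).encrypt(nonce, bitstream, b"vFPGA-slot-0")

def unseal_bitstream(sealed: bytes, controller_key: bytes) -> bytes:
    """Controller side: decrypt; tampering by the untrusted cloud stack fails here."""
    nonce, ciphertext = sealed[:12], sealed[12:]
    return AESGCM(controller_key).decrypt(nonce, ciphertext, b"vFPGA-slot-0")

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # assumed pre-shared with the controller
    sealed = seal_bitstream(b"<partial reconfiguration bitstream>", key)
    print(unseal_bitstream(sealed, key))
```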
4

Entwicklung und Betrieb eines Anonymisierungsdienstes für das WWW / Development and Operation of an Anonymisation Service for the WWW

Köpsell, Stefan 10 March 2010 (has links) (PDF)
Die Dissertation erläutert, wie ein Anonymisierungsdienst zu gestalten ist, so daß er für den durchschnittlichen Internetnutzer benutzbar ist. Ein Schwerpunkt dabei war die Berücksichtigung einer möglichst holistischen Sichtweise auf das Gesamtsystem "Anonymisierungsdienst". Es geht daher um die ingenieurmäßige Berücksichtigung der vielschichtigen Anforderungen der einzelnen Interessengruppen. Einige dieser Anforderungen ergeben sich aus einem der zentralen Widersprüche: auf der einen Seite die Notwendigkeit von Datenschutz und Privatheit für den Einzelnen, auf der anderen Seite die ebenso notwendige Überwachbarkeit und Zurechenbarkeit, etwa für die Strafverfolgung. Die Dissertation beschäftigt sich mit dem Aufzeigen und Entwickeln von technischen Möglichkeiten, die zur Lösung dieses Widerspruches herangezogen werden können. / The dissertation explains how an anonymisation service should be designed so that it is usable by the average Internet user. One focus was on taking a view of the overall system "anonymisation service" that is as holistic as possible. The work is therefore concerned with the engineering-oriented consideration of the multi-layered requirements of the individual stakeholder groups. Some of these requirements arise from one of the central contradictions: on the one hand, the need for data protection and privacy for the individual; on the other hand, the equally necessary monitorability and accountability, for example for law enforcement. The dissertation is concerned with identifying and developing technical means that can be used to resolve this contradiction.
5

Secure Virtualization of Latency-Constrained Systems

Lackorzynski, Adam 16 April 2015 (has links) (PDF)
Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems as well as saving resources such as energy, space, and costs compared to multiple single systems. Looking at embedded environments reveals that many systems internally use multiple separate computing systems, including requirements for real-time and isolation properties. For example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The base of the architecture is a microkernel-based operating system that supports a variety of different virtualization approaches through a generic interface, supporting hardware-assisted virtualization and paravirtualization as well as multiple architectures. Studying guest systems with latency constraints with regard to virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach. Generally, guest systems are a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism to export those relevant events that is secure, flexible, performs well, and is easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as evaluating the export mechanism.
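The key idea of the export mechanism is that a guest tells the hypervisor when latency-critical work is pending, instead of the hypervisor blindly time-slicing all guests at a high rate. The toy model below is a hypothetical Python sketch of that interaction; the class and method names are illustrative, and the real mechanism lives inside a microkernel-based hypervisor rather than in application code.

```python
# Toy model of a guest exporting scheduling-relevant events to the hypervisor,
# so that latency-constrained work is scheduled ahead of best-effort guests.
# Hypothetical sketch; names and priorities are illustrative only.

class Hypervisor:
    def __init__(self):
        self.runqueue = []                  # (priority, guest name) pairs

    def export_event(self, guest, urgent):
        """Guest-visible call: announce pending work and its criticality."""
        priority = 0 if urgent else 10      # lower number = scheduled earlier
        self.runqueue.append((priority, guest))

    def schedule(self):
        self.runqueue.sort()
        order = [guest for _, guest in self.runqueue]
        self.runqueue.clear()
        return order

if __name__ == "__main__":
    hv = Hypervisor()
    hv.export_event("linux-guest", urgent=False)      # best-effort work
    hv.export_event("freertos-guest", urgent=True)    # latency-critical work
    print(hv.schedule())                              # real-time guest runs first
```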
6

Reducing Size and Complexity of the Security-Critical Code Base of File Systems

Weinhold, Carsten 09 July 2014 (has links) (PDF)
Desktop and mobile computing devices increasingly store critical data, both personal and professional in nature. Yet, the enormous code bases of their monolithic operating systems (hundreds of thousands to millions of lines of code) are likely to contain exploitable weaknesses that jeopardize the security of this data in the file system. Using a highly componentized system architecture based on a microkernel (or a very small hypervisor) can significantly improve security. The individual operating system components have smaller code bases running in isolated address spaces so as to provide better fault containment. Their isolation also allows for smaller trusted computing bases (TCBs) of applications that comprise only a subset of all components. In my thesis, I built VPFS, a virtual private file system that is designed for such a componentized system architecture. It aims at reducing the amount of code and complexity that a file system implementation adds to the TCB of an application. The basic idea behind VPFS is similar to that of a VPN, which securely reuses an untrusted network: The core component of VPFS implements all functionality and cryptographic algorithms that an application needs to rely upon for confidentiality and integrity of file system contents. This security-critical core reuses a much more complex and therefore untrusted file system stack for non-critical functionality and access to the storage device. Additional trusted components ensure recoverability.
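A minimal sketch of the VPN-like layering: a small trusted core authenticates (and, in the real system, also encrypts) file contents before handing them to a much larger untrusted file-system stack, and verifies them on the way back. The code below is a hypothetical illustration that uses only an HMAC for integrity and a plain dictionary as the "untrusted" store; it is not the VPFS design itself, and confidentiality is deliberately omitted for brevity.

```python
import hmac, hashlib, os

class UntrustedStorage:
    """Stands in for the large, untrusted file-system stack."""
    def __init__(self):
        self.blocks = {}
    def write(self, name, data):
        self.blocks[name] = data
    def read(self, name):
        return self.blocks[name]

class TrustedCore:
    """Small trusted component: adds and verifies integrity tags around the
    untrusted stack. (A real trusted core would also encrypt the contents.)"""
    def __init__(self, storage):
        self.key = os.urandom(32)
        self.storage = storage

    def write_file(self, name, data: bytes):
        tag = hmac.new(self.key, name.encode() + data, hashlib.sha256).digest()
        self.storage.write(name, tag + data)

    def read_file(self, name) -> bytes:
        blob = self.storage.read(name)
        tag, data = blob[:32], blob[32:]
        expected = hmac.new(self.key, name.encode() + data, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise IOError(f"integrity violation in untrusted storage for {name!r}")
        return data

if __name__ == "__main__":
    core = TrustedCore(UntrustedStorage())
    core.write_file("secrets.txt", b"personal data")
    print(core.read_file("secrets.txt"))
```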
7

Untersuchungen zur Risikominimierungstechnik Stealth Computing für verteilte datenverarbeitende Software-Anwendungen mit nutzerkontrollierbar zusicherbaren Eigenschaften / Investigations of the risk minimisation technique Stealth Computing for distributed data-processing software applications with user-controllable guaranteed properties

Spillner, Josef 05 July 2016 (has links) (PDF)
Die Sicherheit und Zuverlässigkeit von Anwendungen, welche schutzwürdige Daten verarbeiten, lässt sich durch die geschützte Verlagerung in die Cloud mit einer Kombination aus zielgrößenabhängiger Datenkodierung, kontinuierlicher mehrfacher Dienstauswahl, dienstabhängiger optimierter Datenverteilung und kodierungsabhängiger Algorithmen deutlich erhöhen und anwenderseitig kontrollieren. Die Kombination der Verfahren zu einer anwendungsintegrierten Stealth-Schutzschicht ist eine notwendige Grundlage für die Konstruktion sicherer Anwendungen mit zusicherbaren Sicherheitseigenschaften im Rahmen eines darauf angepassten Softwareentwicklungsprozesses. / The security and reliability of applications processing sensitive data can be significantly increased, and controlled by the user, through a combination of techniques. These encompass targeted data coding, continuous multiple service selection, service-specific optimal data distribution, and coding-specific algorithms. The combination of these techniques into an application-integrated stealth protection layer is a necessary precondition for the construction of secure applications with guaranteeable security properties in the context of a correspondingly adapted software development process.
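One simple instance of data coding combined with multi-service distribution is XOR-based splitting, where a record is dispersed over n storage providers such that any proper subset of them learns nothing about it. The sketch below is a hypothetical illustration of this building block only; the dissertation's stealth layer combines several codings with continuous service selection and coding-aware processing, which are not reproduced here.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data: bytes, n: int):
    """Split `data` into n shares; every share is required for reconstruction,
    and fewer than n shares reveal nothing about the data."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def reconstruct(shares):
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    payload = b"sensitive record"
    clouds = disperse(payload, n=3)          # one share per storage service
    assert reconstruct(clouds) == payload
    print("any single provider sees only:", clouds[0].hex())
```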
8

Secure Network Coding: Dependency of Efficiency on Network Topology

Pfennig, Stefan, Franz, Elke 25 November 2013 (has links) (PDF)
Network coding is a new approach to transmitting data through a network. By combining different packets instead of simply forwarding them, network coding offers the opportunity to reach the Min-Cut/Max-Flow capacity in multicast data transmissions. However, the basic schemes are vulnerable to so-called pollution attacks, where an attacker can jam large parts of the transmission by infiltrating only one bogus message. In the literature we found several approaches which aim at handling this kind of attack with different amounts of overhead. Yet the cost of a specific secure network coding scheme depends highly on the underlying network. The goal of this paper is, on the one hand, to describe which network parameters influence the efficiency of a certain scheme and, on the other hand, to provide concrete suggestions for selecting the most efficient secure network coding scheme for a given network. We will illustrate that there does not exist “the best” secure network coding scheme concerning efficiency; rather, each of the selected schemes is more or less suited to certain network topologies.
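For intuition, the fragment below shows the simplest form of network coding: an intermediate node forwards the XOR of two packets instead of the packets themselves, and a receiver that already holds one packet recovers the other. It also hints at why a single tampered combination "pollutes" everything decoded from it. This is a hypothetical didactic sketch, not one of the secure schemes compared in the paper.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length packets, as a coding node in a butterfly
    network would, instead of forwarding them one after another."""
    return bytes(x ^ y for x, y in zip(a, b))

if __name__ == "__main__":
    p1 = b"packet-from-source-A"
    p2 = b"packet-from-source-B"

    coded = xor_packets(p1, p2)            # sent over the shared bottleneck link
    assert xor_packets(coded, p1) == p2    # receiver knowing p1 recovers p2
    assert xor_packets(coded, p2) == p1    # receiver knowing p2 recovers p1

    # Pollution attack: a single tampered combination corrupts what is decoded.
    tampered = xor_packets(coded, b"\x01" * len(coded))   # attacker flips bits
    print(xor_packets(tampered, p1))                      # no longer p2
```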
9

Dudle: Mehrseitig sichere Web 2.0-Terminabstimmung / Dudle: Multilateral Secure Web 2.0-Event Scheduling

Kellermann, Benjamin 21 December 2011 (has links) (PDF)
Es existiert eine Vielzahl an Web 2.0-Applikationen, welche es einer Gruppe von Personen ermöglichen, einen gemeinsamen Termin zu finden (z. B. doodle.com, moreganize.ch, whenisgood.net, agreeadate.com, meetomatic.com, etc.). Der Ablauf ist simpel: Ein Initiator legt eine Terminumfrage an und schickt den Link zu der Umfrage zu den potentiellen Teilnehmern. Nachdem jeder Teilnehmer der Anwendung seine Verfügbarkeiten mitgeteilt hat, kann anhand dieser Informationen ein Termin gefunden werden, der am besten passt. Maßnahmen um die Vertraulichkeit und Integrität der Daten zu schützen finden in allen bestehenden Applikationen zu wenig Beachtung. In dieser Dissertation wurde eine Web 2.0-Applikation entwickelt, welche es zulässt Terminabstimmungen zwischen mehreren Teilnehmern durchzuführen und dabei möglichst wenige Vertrauensannahmen über alle Beteiligten zu treffen. / Applications which help users to schedule events are becoming more and more important. A drawback of most existing applications is that the preferences of all participants are revealed to the others. We propose schemes which are able to schedule events in a privacy-enhanced way. In addition, Dudle, a Web 2.0 application which implements these schemes, is presented.
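A common building block for tallying votes on time slots without revealing individual availabilities is additive secret sharing: each participant splits a 0/1 availability per slot into random shares so that only the slot totals become known. The sketch below is a hypothetical, trivially simplified illustration of that principle; Dudle's actual protocol additionally addresses verifiability and changing participant sets, which are not modelled here.

```python
import random

P = 2**31 - 1   # arithmetic modulo a public prime

def share(availability, n_shares):
    """Split one participant's 0/1 vote for a slot into random shares."""
    shares = [random.randrange(P) for _ in range(n_shares - 1)]
    shares.append((availability - sum(shares)) % P)
    return shares

def tally(partial_sums):
    """Summing the published partial sums reveals only the slot total."""
    return sum(partial_sums) % P

if __name__ == "__main__":
    # Three participants, one time slot: availabilities 1, 0, 1 stay hidden.
    votes = [1, 0, 1]
    shares = [share(v, n_shares=3) for v in votes]
    # Each participant sends share j to holder j; holders publish partial sums.
    partial_sums = [sum(s[j] for s in shares) % P for j in range(3)]
    print("participants available in this slot:", tally(partial_sums))  # -> 2
```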
10

Run-time Variability with Roles

Taing, Nguonly 11 April 2018 (has links) (PDF)
Adaptability is an intrinsic property of software systems that require adaptation to cope with dynamically changing environments. Achieving adaptability is challenging. Variability is a key solution as it enables a software system to change its behavior which corresponds to a specific need. The abstraction of variability is to manage variants, which are dynamic parts to be composed into the base system. Run-time variability realizes these variant compositions dynamically at run time to enable adaptation. Adaptation relying on variants specified at build time is called anticipated adaptation, which allows the system behavior to change with respect to a set of predefined execution environments. This implies the inability to solve practical problems in which the execution environment is not completely fixed and often unknown until run time. Enabling unanticipated adaptation, which allows variants to be dynamically added at run time, alleviates this inability, but it carries several risks of system instability such as inconsistency and run-time failures. Adaptation should be performed only when a system reaches a consistent state to avoid inconsistency. Inconsistency arises when adaptation changes the system's state and behavior while a series of method invocations is still in progress. A software bug is another source of system instability. It often appears in a variant composition and is brought into the system during adaptation. The problem is even more critical for unanticipated adaptation as the system has no prior knowledge of the new variants. This dissertation aims to achieve anticipated and unanticipated adaptation. In achieving adaptation, the issues of inconsistency and software failures, which may happen as a consequence of run-time adaptation, are explicitly addressed as well. Roles encapsulate dynamic behavior used to adapt players representing the base system, which is the rationale for selecting roles as the software system's variants. Based on the role concept, this dissertation presents three mechanisms to comprehensively address adaptation. First, a dynamic instance binding mechanism is proposed to loosely bind players and roles. Dynamic binding of roles enables anticipated and unanticipated adaptation. Second, an object-level tranquility mechanism is proposed to avoid inconsistency by allowing a player object to adapt only when it has reached a consistent state. Last, a rollback recovery mechanism is proposed as a proactive mechanism to embrace and handle failures resulting from a defective composition of variants. A checkpoint of a system configuration is created before adaptation. If a specialized bug sensor detects a failure, the system rolls back to the most recent checkpoint. These mechanisms are integrated into a role-based runtime, called LyRT. LyRT was validated with three case studies to demonstrate its practical feasibility. This validation showed that LyRT is more advanced than existing variability approaches with respect to adaptation due to its consistency control and failure handling. In addition, several benchmarks were set up to quantify the overhead of LyRT concerning the execution time of adaptation. The results revealed the overhead introduced to achieve anticipated and unanticipated adaptation to be small enough for practical use in adaptive software systems. Thus, LyRT is suitable for adaptive software systems that frequently require the adaptation of large sets of objects.
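To make the role mechanism concrete, the sketch below mimics dynamic instance binding in plain Python: role objects are bound to an individual player at run time, and method lookups are delegated to the most recently bound role that provides the behavior, so variants can be attached without touching the player's class. This is a hypothetical simplification with illustrative names; LyRT itself is a dedicated runtime that adds consistency control (tranquility) and rollback recovery on top of this idea.

```python
class Player:
    """Base object whose behavior can be adapted by binding roles to it."""
    def __init__(self, name):
        self.name = name
        self._roles = []                      # dynamic instance binding table

    def bind(self, role):
        self._roles.append(role)              # variants can be added at run time

    def unbind(self, role):
        self._roles.remove(role)

    def __getattr__(self, attr):
        # Delegate unknown calls to the most recently bound role providing them.
        for role in reversed(self._roles):
            if hasattr(role, attr):
                return getattr(role, attr)
        raise AttributeError(attr)

class Persistent:
    """A role adding persistence behavior, e.g. an unanticipated variant."""
    def save(self):
        return "state written to disk"

if __name__ == "__main__":
    person = Player("alice")
    person.bind(Persistent())                 # adaptation at run time
    print(person.save())                      # behavior acquired from the role
```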
