<p>Computer systems provide unprecedented efficiency and numerous benefits, but they also offer powerful tools for abuse. This reality becomes increasingly evident as deployed software spans trust domains and mediates the interactions of self-interested participants with potentially conflicting goals. As systems grow more complex and interdependent, the need to localize, identify, and isolate faults and unfaithful behavior grows with them.</p><p>Conventional techniques for building secure systems, such as secure perimeters and Byzantine fault tolerance, are insufficient to ensure that trusted users and software components are indeed <italic>trustworthy</italic>. Secure perimeters do not extend across trust domains and fail when a participant acts within the limits of the existing security policy yet deliberately manipulates the system to her own advantage. Byzantine fault tolerance provides techniques to tolerate misbehavior, but offers no protection when replicas collude or are under the control of a single entity.</p><p>Complex interdependent systems therefore require new mechanisms that complement existing solutions: mechanisms to identify improper behavior, limit the propagation of incorrect information, and assign responsibility when things go wrong. This thesis addresses the problems of misbehavior and abuse by offering tools and techniques to integrate <italic>accountability</italic> into computer systems. A system is accountable if it offers the means to identify and expose <italic>semantic</italic> misbehavior by its participants. An accountable system can construct undeniable evidence to demonstrate its correctness---evidence that serves as explicit proof of misbehavior and can be strong enough to support social sanctions external to the system.
</p><p>Accountability offers strong disincentives for abuse and misbehavior, but it may have to be ``designed-in'' to an application's specific protocols, logic, and internal representation; achieving accountability with general techniques is a challenge. Extending responsibility to end users for actions performed by software components on their behalf is not trivial, since it requires the ability to determine whether a component faithfully represents a user's intentions. Leaks of private information are yet another concern---even correctly functioning applications can leak sensitive information for which their owners may be held accountable. Important infrastructure services, such as distributed virtual resource economies, raise a range of application-specific issues, including fine-grained resource delegation, virtual currency models, and complex workflows.</p><p>This thesis addresses these problems by designing, implementing, applying, and evaluating a general methodology for integrating accountability into network services and applications. Our <italic>state-based</italic> approach decouples application state management from application logic so that services can demonstrate that they maintain their state in compliance with user requests, i.e., that requested state changes do take place and that the service presents a consistent view to all clients and observers. Internal state managed in this way can then feed application-specific verifiers that determine the correctness of the service's logic and identify the responsible party. The state-based approach supports <italic>strong</italic> accountability---any detected violation can be proven to a third party without relying on replication and voting.
</p><p>In addition to the general state-based approach, this thesis explores how to leverage application-specific knowledge to integrate accountability into an example application---a lease-based virtual resource economy. We study the invariants and accountability requirements of this application and present the design and implementation of several key elements needed to make the system accountable. In particular, we describe solutions to the problems of resource delegation, currency spending, and lease protocol compliance. These solutions illustrate a technique complementary to the general-purpose state-based approach developed in the earlier parts of this thesis.</p><p>Separating the actions of software from those of its user is at the heart of the third component of this dissertation. We design, implement, and evaluate an approach to detecting information leaks in a commodity operating system. Our novel OS abstraction---a <italic>doppelganger</italic> process---helps track information flow without requiring applications to be rewritten or instrumented. Doppelganger processes identify sensitive data as it is about to leave the confines of the system. Users can then be alerted to the potential breach and can choose to prevent the leak, avoiding accountability for the actions of software acting on their behalf.</p> / Dissertation
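As a loose illustration of the state-based idea described in the abstract---a service committing each state change so that rewriting history becomes provable misbehavior---the sketch below builds a tamper-evident hash chain using only Python's standard library. All names are invented for illustration; the dissertation's actual design may differ.

```python
import hashlib
import json


class AccountableStateLog:
    """Minimal sketch of a tamper-evident state log (hypothetical design).

    Each committed state change is chained to its predecessor by a hash,
    so the service cannot silently alter or reorder history: any
    modification breaks the chain and is detectable by verification.
    """

    def __init__(self):
        self.entries = []  # list of (seq, prev_hash, request, entry_hash)

    def commit(self, request):
        """Record a state change and return its hash as a client receipt."""
        prev_hash = self.entries[-1][3] if self.entries else "0" * 64
        seq = len(self.entries)
        payload = json.dumps(
            {"seq": seq, "prev": prev_hash, "req": request}, sort_keys=True
        )
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((seq, prev_hash, request, entry_hash))
        return entry_hash

    def verify(self):
        """Recompute the chain; True iff no entry was altered or reordered."""
        prev = "0" * 64
        for seq, prev_hash, request, entry_hash in self.entries:
            if prev_hash != prev:
                return False
            payload = json.dumps(
                {"seq": seq, "prev": prev_hash, "req": request}, sort_keys=True
            )
            if hashlib.sha256(payload.encode()).hexdigest() != entry_hash:
                return False
            prev = entry_hash
        return True
```

In a full design each receipt would also carry the service's digital signature; a client holding two conflicting signed receipts for the same sequence number could then prove the violation to a third party without replication or voting, matching the strong-accountability property the abstract claims.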
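The doppelganger mechanism summarized above operates inside the operating system; as a purely illustrative analogue (all names hypothetical, and far simpler than real OS-level information-flow tracking), one can tag sensitive values and check them at an egress point so the user is alerted before the data leaves the system.

```python
class Tainted(str):
    """A string subclass marking data as sensitive (illustrative only)."""
    pass


def egress(payload, alert):
    """Gate at the system boundary: if the outgoing payload is tainted,
    invoke the user's alert callback, which returns True to allow the
    release or False to block it. Untainted data flows through freely."""
    if isinstance(payload, Tainted):
        return alert(payload)
    return True
```

A real system must also track taint as data propagates through computation (plain string operations here would silently drop the `Tainted` mark), which is precisely the flow-tracking problem the doppelganger abstraction addresses at the OS level.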
Identifier | oai:union.ndltd.org:DUKE/oai:dukespace.lib.duke.edu:10161/1236 |
Date | January 2009 |
Creators | Yumerefendi, Aydan Rafet |
Contributors | Chase, Jeffrey S |
Source Sets | Duke University |
Language | en_US |
Detected Language | English |
Type | Dissertation |
Format | 2937376 bytes, application/pdf |