321 |
System Support for Strong Accountability. Yumerefendi, Aydan Rafet. January 2009 (has links)
<p>Computer systems not only provide unprecedented efficiency and</p><p>numerous benefits, but also offer powerful means and tools for</p><p>abuse. This reality is increasingly evident as deployed software</p><p>spans trust domains and enables the interactions of</p><p>self-interested participants with potentially conflicting goals. With</p><p>systems growing more complex and interdependent, there is an increasing</p><p>need to localize, identify, and isolate faults and unfaithful behavior. </p><p>Conventional techniques for building secure systems, such as secure</p><p>perimeters and Byzantine fault tolerance, are insufficient to ensure</p><p>that trusted users and software components are indeed</p><p><italic>trustworthy</italic>. Secure perimeters do not work across trust domains and fail</p><p>when a participant acts within the limits of the existing security</p><p>policy and deliberately manipulates the system to her own</p><p>advantage. Byzantine fault tolerance offers techniques to tolerate</p><p>misbehavior, but offers no protection when replicas collude or are</p><p>under the control of a single entity. </p><p>Complex interdependent systems necessitate new mechanisms that</p><p>complement the existing solutions to identify improper behavior and</p><p>actions, limit the propagation of incorrect information, and assign</p><p>responsibility when things go wrong. This thesis </p><p>addresses the problems of misbehavior and abuse by offering tools and</p><p>techniques to integrate <italic>accountability</italic> into computer systems. A</p><p>system is accountable if it offers means to identify and expose</p><p><italic>semantic</italic> misbehavior by its participants. An accountable system</p><p>can construct undeniable evidence to demonstrate its correctness---the</p><p>evidence serves as explicit proof of misbehavior and can be strong enough</p><p>to be used as a basis for social sanction external to the</p><p>system. 
</p><p>Accountability offers strong disincentives for abuse and</p><p>misbehavior but may have to be "designed-in" to an application's</p><p>specific protocols, logic, and internal representation; achieving</p><p>accountability using general techniques is a challenge. Extending</p><p>responsibility to end users for actions performed by software</p><p>components on their behalf is not trivial, as it requires an ability </p><p>to determine whether a component correctly represents a</p><p>user's intentions. Leaks of private information are yet another</p><p>concern---even correctly functioning</p><p>applications can leak sensitive information, for which their owners</p><p>may be accountable. Important infrastructure services, such as</p><p>distributed virtual resource economies, offer a range of application-specific</p><p>issues such as fine-grain resource delegation, virtual</p><p>currency models, and complex work-flows.</p><p>This thesis work addresses the aforementioned problems by designing,</p><p>implementing, applying, and evaluating a generic methodology for</p><p>integrating accountability into network services and applications. Our</p><p><italic>state-based</italic> approach decouples application state management from</p><p>application logic to enable services to demonstrate that they maintain</p><p>their state in compliance with user requests, i.e., state changes do take</p><p>place, and the service presents a consistent view to all clients and</p><p>observers. Internal state managed in this way can then be used to feed</p><p>application-specific verifiers to determine the correctness of the service's</p><p>logic and to identify the responsible party. The state-based approach</p><p>provides support for <italic>strong</italic> accountability---any detected violation</p><p>can be proven to a third party without depending on replication and</p><p>voting. 
</p><p>In addition to the generic state-based approach, this thesis explores how</p><p>to leverage application-specific knowledge to integrate accountability in</p><p>an example application. We study the invariants and accountability</p><p>requirements of this application---a lease-based virtual resource</p><p>economy. We present the design and implementation of several key elements</p><p>needed to provide accountability in the system. In particular, we describe</p><p>solutions to the problems of resource delegation, currency spending, and</p><p>lease protocol compliance. These solutions illustrate a complementary</p><p>technique to the general-purpose state-based approach developed in the</p><p>earlier parts of this thesis. </p><p>Separating the actions of software and its user is at the heart of the</p><p>third component of this dissertation. We design, implement, and evaluate</p><p>an approach to detect information leaks in a commodity operating system.</p><p>Our novel OS abstraction---a <italic>doppelganger</italic> process---helps track</p><p>information flow without requiring applications to be rewritten or instrumented.</p><p>Doppelganger processes help identify sensitive data as they are about to</p><p>leave the confines of the system. Users can then be alerted about the</p><p>potential breach and can choose to prevent the leak to avoid becoming</p><p>accountable for the actions of software acting on their behalf.</p> / Dissertation
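The notion of undeniable evidence can be made concrete with a tamper-evident log: if a service commits each state change to a hash chain, any later rewriting of history is detectable and provable to a third party. The sketch below is illustrative only; the `TamperEvidentLog` class and its methods are hypothetical stand-ins for the thesis's own state-based mechanism, which is more involved.

```python
import hashlib
import json

class TamperEvidentLog:
    """Append-only log where each entry commits to the previous one via a hash chain."""

    def __init__(self):
        self.entries = []  # list of (serialized record, chain hash)

    def append(self, record):
        # Each hash covers the previous chain hash plus the new record,
        # so rewriting any earlier entry invalidates every later hash.
        prev = self.entries[-1][1] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append((payload, h))
        return h

    def verify(self):
        """Recompute the chain; any tampered entry breaks verification."""
        prev = "0" * 64
        for payload, h in self.entries:
            if hashlib.sha256((prev + payload).encode()).hexdigest() != h:
                return False
            prev = h
        return True
```

A verifier holding only the latest chain hash can detect any retroactive change to earlier state, which is the flavor of evidence strong accountability relies on.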
|
322 |
Enabling Scalable Information Sharing for Distributed Applications Through Dynamic Replication. Chang, Tianying. 29 November 2005 (has links)
As broadband connections to the Internet become more common, new
information sharing applications that provide rich services to
distributed users will emerge. Furthermore, as computing devices
become pervasive and better connected, the scalability requirements
for Internet-based services are also increasing. Distributed object
middleware has been widely used to build such applications because
it simplifies rapid development for heterogeneous computing and
communication systems. As the
application's scale increases, however, the client/server
architecture limits the performance due to the bottleneck at the
centralized servers. The recent development in peer-to-peer
technologies creates a new opportunity for addressing scalability
and performance problems for services that are used by many nodes.
In a peer-to-peer system, peer nodes can contribute a fraction of
their resources to the system, enabling more flexible and extended
sharing between the entities in the system. When peer nodes are
required to contribute their resources by replicating a service for
self and others, however, several new challenges arise.
Our thesis is that non-dedicated resources in a distributed system
can be utilized to replicate shared objects dynamically, so that a
distributed service achieves quality and scalability at lower cost
by replicating objects in the right places, and updates to those
shared objects are disseminated efficiently and quickly. The
following contributions of our work validate this thesis.
1. A new fair and self-managing replication algorithm that
allows distributed non-dedicated resources to be used to improve
service performance with lower cost.
2. A multicast grouping algorithm that is used to disseminate
updates to the shared objects among a large set of heterogeneous
peer nodes to keep a consistent view across all peer nodes. It groups
nodes with similar interests into the same group and multicasts the
required data to that group, minimizing the unwanted data received
by each node.
3. An overlay construction algorithm that aims at reducing both
network latency and total network traffic when delivering data
through the built overlay network.
4. An implementation of a distributed object framework, GT-RMI,
that allows peer nodes to invoke dynamically replicated objects
transparently. The framework can be configured for a particular peer
node through a policy file.
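The interest-based grouping idea behind contribution 2 can be sketched as clustering peers by the similarity of their interest sets, so a multicast group carries mostly data its members want. This is a minimal illustration, assuming a Jaccard similarity measure, a greedy assignment, and an arbitrary 0.5 threshold; it is not the dissertation's actual algorithm.

```python
def jaccard(a, b):
    """Similarity of two interest sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def group_by_interest(nodes, threshold=0.5):
    """Greedily place each node into the first group whose representative
    interest set is similar enough; otherwise start a new group."""
    groups = []  # each group: {"rep": representative interests, "members": names}
    for name, interests in nodes.items():
        for g in groups:
            if jaccard(interests, g["rep"]) >= threshold:
                g["members"].append(name)
                break
        else:
            groups.append({"rep": set(interests), "members": [name]})
    return groups
```

Multicasting updates per group then limits how much unwanted data reaches nodes whose interests differ.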
|
323 |
Distributed Beamforming with Compressed Feedback in Time-Varying Cooperative Networks. Jian, Miao-Fen. 27 August 2010 (has links)
This thesis proposes a distributed beamforming technique for wireless networks with half-duplex amplify-and-forward relays. With full channel state information, transmit beamforming has been shown to achieve significant diversity and coding gains; however, it requires a large amount of feedback overhead. First, we adopt the Generalized Lloyd Algorithm to design codebooks that minimize the average SNR loss, reducing the feedback rate by quantizing the channel state information. Furthermore, we exploit the correlation property of time-varying channels to compress the size of the feedback message required to accomplish distributed beamforming. We model time-varying channels as a first-order finite-state Markov chain, namely the channel state Markov chain. We then propose two methods to compress the feedback bits according to the transition probabilities among channel states. The first method compresses the feedback by discarding channel states that are unlikely to be reached from the current state. The second method retains all channel states and applies Huffman coding to the feedback bits based on the transition probabilities. Simulations show that distributed beamforming with compressed feedback performs close to beamforming with infinite-rate feedback.
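The Huffman-based method can be sketched as building one codebook per current channel state from that state's row of transition probabilities, so likely next states get short codewords. A minimal illustration, with made-up three-state transition probabilities:

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a prefix-free code for symbols with the given probabilities."""
    tiebreak = count()  # unique counter so heapq never compares the dicts
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

# Per-state codebooks: given the current channel state, code the next state
# using the transition probabilities of the channel state Markov chain.
# The probabilities here are illustrative, not from the thesis.
transitions = {
    "s0": {"s0": 0.7, "s1": 0.2, "s2": 0.1},
    "s1": {"s0": 0.25, "s1": 0.5, "s2": 0.25},
    "s2": {"s0": 0.1, "s1": 0.2, "s2": 0.7},
}
codebooks = {state: huffman_code(p) for state, p in transitions.items()}
```

Because likely transitions dominate in a slowly varying channel, the expected codeword length falls below the fixed-rate cost of indexing all states.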
|
324 |
On the Design and Implementation of Thread Migration for CDPthread-based System. Chiang, Yi-huang. 10 November 2010 (has links)
One of the primary goals of Distributed Shared Memory (DSM) research is to minimize network traffic and reduce latency. One way to achieve this is thread migration. In this thesis, we show how thread migration is implemented in a CDPthread-based system. To maintain high portability and flexibility, a generic thread migration package is implemented as a user library. This mechanism can be used to better utilize system resources and improve the performance of a CDPthread-based system. It also provides programmers an easy way to migrate threads between different nodes. Moreover, we use thread migration to implement dynamic load balancing. Our experimental results show that dynamic load balancing improves system performance significantly in the average case.
|
325 |
Utilizing Distributed Temperature Sensors in Predicting Flow Rates in Multilateral Wells. Al Mulla, Jassim Mohammed A. May 2012 (has links)
Recent advancements in well monitoring tools have increased the amount of data that can be retrieved with great accuracy. Downhole pressure and temperature can now be precisely determined using modern instruments. The new challenge we face today is to maximize the benefits of the large amount of data provided by these tools and thus justify investing more capital in such instruments. One of these benefits is to utilize the continuous stream of temperature and pressure data to determine the flow rate out of a multilateral well in real time. Temperature and pressure changes are harder to predict in horizontal laterals than in vertical wells because of the lack of variation in elevation and geothermal gradient. Thus the need for accurate, high-precision gauges becomes critical. The trade-off of high-resolution sensors is their cost and the resulting complication in modeling. Interpreting measured data in real time into a downhole flow profile in multilateral and horizontal wells for production optimization is another challenge.
In this study, a theoretical model is developed to predict temperature and pressure in trilateral wells under given flow conditions. The model is used as a forward engine in the study, and an inversion procedure is then added to interpret the data into flow profiles. The forward model starts from an assumed flowing well pressure in a specified reservoir with a defined well structure. Pressure, temperature, and flow rate in the well system are calculated in the motherbore and in the laterals. These predicted temperature and pressure profiles provide the connection between the flow conditions and the temperature and pressure behavior.
We then use an inverse model to interpret flow rate profiles from the temperature and pressure data measured by the downhole sensors. A gradient-based inversion algorithm is used in this work, which is fast and applicable to real-time monitoring of production performance. In the inverse model, candidate flow profiles are adjusted iteratively until the one that generates matching temperature and pressure profiles in the well is identified. The production distribution from each lateral is determined based on this approach.
At the end of the study, the results showed that we were able to successfully predict flow rates in the field within 10% of the actual rate. We then used the model to optimize completion design in the field.
In conclusion, we were able to build a dependable model capable of predicting flow rates in trilateral wells using pressure and temperature data provided by downhole sensors.
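A gradient-based inversion of the kind described can be sketched as Gauss-Newton iteration: adjust the flow rates until the forward model reproduces the measured profiles. The `forward` function below is a hypothetical stand-in for the wellbore model, mapping two lateral flow rates to three sensor readings; it is not the model developed in the study.

```python
def forward(q):
    """Hypothetical forward model: maps lateral flow rates (q1, q2) to
    temperature/pressure-like sensor readings (stand-in for the wellbore model)."""
    q1, q2 = q
    return [2.0 * q1 + 0.5 * q2, 0.3 * q1 + 1.5 * q2, 0.1 * q1 * q2]

def invert(measured, q0, iters=50, eps=1e-6):
    """Gauss-Newton: adjust the two-lateral flow profile until the forward
    model matches the measured temperature/pressure readings."""
    q = list(q0)
    for _ in range(iters):
        f = forward(q)
        r = [m - fi for m, fi in zip(measured, f)]
        # Finite-difference Jacobian, one column per unknown flow rate.
        J = []
        for j in range(len(q)):
            qp = list(q)
            qp[j] += eps
            J.append([(fp - fi) / eps for fp, fi in zip(forward(qp), f)])
        # Normal equations for 2 unknowns, solved in closed form (J^T J is symmetric).
        JtJ = [[sum(J[a][k] * J[b][k] for k in range(len(r))) for b in range(2)]
               for a in range(2)]
        Jtr = [sum(J[a][k] * r[k] for k in range(len(r))) for a in range(2)]
        det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
        dq = [(JtJ[1][1] * Jtr[0] - JtJ[0][1] * Jtr[1]) / det,
              (JtJ[0][0] * Jtr[1] - JtJ[0][1] * Jtr[0]) / det]
        q = [qi + d for qi, d in zip(q, dq)]
    return q
```

The real inversion works the same way in spirit: the gradient of the mismatch drives updates to the per-lateral rates until predicted and measured profiles agree.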
|
326 |
Intrusion Detection on Distributed Attacks. Cheng, Wei-Cheng. 29 July 2003 (has links)
The number of significant security incidents has increased steadily in recent years. Distributed denial-of-service attacks and worm attacks extensively influence the network and cause serious damage.
In this thesis, we analyze these two critical distributed attacks. We propose an intrusion detection approach against such attacks and implement an attack detection system based on it. We use anomaly-based intrusion detection techniques, observing anomalous distributions of packet fields to perform the detection. The proposed approach records the characteristics of normal traffic volumes to make detection more flexible and precise. Finally, we evaluate our approach through experiments.
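One common way to detect anomalous packet-field distributions, sketched here purely for illustration, is to compare the entropy of a field (e.g., destination address) in a traffic window against a recorded normal profile: a DDoS flood concentrates the distribution on one victim, while spoofed sources disperse it. The thresholds below are made up, not taken from the thesis.

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of the empirical distribution, normalized to [0, 1]."""
    counts = Counter(values)
    n = len(values)
    if len(counts) <= 1:
        return 0.0
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))

def is_anomalous(window, baseline_low=0.3, baseline_high=0.95):
    """Flag a traffic window whose field distribution is far more concentrated
    (one DDoS victim address) or more dispersed (spoofed sources) than the
    recorded normal profile. Thresholds are illustrative."""
    h = normalized_entropy(window)
    return h < baseline_low or h > baseline_high
```

The baseline band would in practice be learned from the recorded characteristics of normal traffic rather than fixed by hand.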
|
327 |
Closed-loop real-time control on distributed networks. Ambike, Ajit Dilip. 15 November 2004 (has links)
This thesis is an effort to develop closed-loop control strategies on computer networks and study their stability in the presence of network delays and packet losses. An algorithm using predictors was designed to ensure system stability in the presence of network delays and packet losses. A single-actuator magnetic ball levitation system was used as a test bed to validate the proposed algorithm. A brief study of the real-time requirements of the networked control system is presented, and a client-server architecture is developed in a real-time operating environment to implement the proposed algorithm. The real-time performance of communication on Ethernet using the user datagram protocol (UDP) was explored, and UDP is presented as a suitable protocol for networked control systems. Predictors were designed based on parametric estimation models. Autoregressive (AR) and autoregressive moving average (ARMA) models of various orders were designed using MATLAB, and an eighth-order AR model was adopted based on the best-fit criterion. The system output was predicted several steps ahead using these predictors, and the control output was calculated from the predictions. This control output was used in the event of excessive network delays to maintain system stability. Experiments employing simulations of consecutive packet losses and network delays were performed to validate the satisfactory performance of the predictor-based algorithm. The current system compensates for up to 20 percent data loss in the network without losing stability.
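The predictor mechanism can be sketched as fitting an AR model to past outputs by least squares and recursing it forward to cover delayed or lost samples. The thesis adopts an eighth-order AR model; order 2 is used below for brevity, and the fitting code is a generic illustration, not the MATLAB design described above.

```python
def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] (2x2 normal equations)."""
    rows = [(x[t - 1], x[t - 2], x[t]) for t in range(2, len(x))]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

def predict_ahead(history, coeffs, steps):
    """Recursively predict `steps` samples ahead; the controller can compute
    its output from these predictions when measurements are delayed or lost."""
    a1, a2 = coeffs
    h = list(history)
    for _ in range(steps):
        h.append(a1 * h[-1] + a2 * h[-2])
    return h[len(history):]
```

During an excessive delay, the controller substitutes `predict_ahead` values for the missing measurements instead of holding the last received sample.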
|
328 |
Layered Wyner-Ziv video coding for noisy channels. Xu, Qian. 01 November 2005 (has links)
The growing popularity of video sensor networks and video cellular phones has generated the need for low-complexity and power-efficient multimedia systems that can handle multiple video input and output streams. While standard video coding techniques fail to satisfy these requirements, distributed source coding is a promising technique for "uplink" applications. Wyner-Ziv coding refers to lossy source coding with side information at the decoder. Based on recent theoretical results on successive Wyner-Ziv coding, we propose in this thesis a practical layered Wyner-Ziv video codec using the DCT, a nested scalar quantizer (NSQ), and irregular LDPC code based Slepian-Wolf coding (lossless source coding with side information) for the noiseless channel. The DCT is applied as an approximation to the conditional KLT, which makes the components of the transformed block conditionally independent given the side information. NSQ is a binning scheme that facilitates layered bit-plane coding of the bin indices while reducing the bit rate. LDPC code based Slepian-Wolf coding exploits the correlation between the quantized version of the source and the side information to achieve further compression. Unlike previous work, an attractive feature of our proposed system is that video encoding is done only once, while decoding is allowed at many lower bit rates without quality loss. For Wyner-Ziv coding over discrete noisy channels, we present a Wyner-Ziv video codec using IRA codes for Slepian-Wolf coding, based on the idea of two equivalent channels. For video streaming applications where the channel is packet based, we apply an unequal error protection scheme to the embedded Wyner-Ziv coded video stream to find the optimal source-channel coding trade-off for a target transmission rate over a packet erasure channel.
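The binning idea behind nested scalar quantization can be illustrated in a few lines: the encoder sends only the coset of the quantization bin (saving rate), and the decoder resolves the resulting ambiguity using the correlated side information. The step size, coset count, and search window below are arbitrary choices for the sketch, not the codec's parameters.

```python
def nsq_encode(x, step=1.0, num_cosets=4):
    """Quantize x and transmit only the coset (bin index mod num_cosets)."""
    q = round(x / step)
    return q % num_cosets

def nsq_decode(coset, y, step=1.0, num_cosets=4, search=16):
    """Among quantizer cells in the received coset, pick the one whose
    reconstruction lies closest to the side information y."""
    base = round(y / step)
    candidates = [q for q in range(base - search, base + search + 1)
                  if q % num_cosets == coset]
    best = min(candidates, key=lambda q: abs(q * step - y))
    return best * step
```

Decoding succeeds as long as the side information stays within roughly half the coset spacing of the source, which is exactly the correlation the Slepian-Wolf layer exploits further.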
|
329 |
A reliability assessment methodology for distribution systems with distributed generation. Duttagupta, Suchismita Sujaya. 16 August 2006 (links)
Reliability assessment is of primary importance in designing and planning distribution
systems that operate in an economic manner with minimal interruption of
customer loads. With the advances in renewable energy sources, Distributed Generation
(DG) is forecast to increase in distribution networks. The reliability
evaluation of such networks is a relatively new area. This research presents a new
methodology that can be used to analyze the reliability of such distribution systems
and can be applied in preliminary planning studies for such systems. The method uses
a sequential Monte Carlo simulation of the distribution system's stochastic model to
generate the operating behavior and combines that with a path augmenting Max flow
algorithm to evaluate the load status for each state change of operation in the system.
Overall system and load point reliability indices such as hourly loss of load, frequency
of loss of load and expected energy unserved can be computed using this technique.
When DG is added in standby mode of operation at specific locations in the network,
the reliability indices can be compared across different scenarios, and strategies
for DG placement and capacity can be determined using this methodology.
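The overall flow can be sketched under heavy simplifying assumptions: hourly two-state (up/down) component transitions instead of the methodology's event-driven stochastic model, and a simple capacity-versus-load check in place of the max-flow load evaluation. All unit parameters below are invented for illustration.

```python
import random

def simulate_lole(units, load, hours=8760, years=50, seed=1):
    """Sequential Monte Carlo estimate of expected loss-of-load hours per year.
    Each unit is (capacity, hourly failure prob, hourly repair prob)."""
    rng = random.Random(seed)
    total_lol = 0
    for _ in range(years):
        up = [True] * len(units)
        for _ in range(hours):
            # Sample each unit's up/down transition for this hour.
            for i, (_, p_fail, p_repair) in enumerate(units):
                if up[i] and rng.random() < p_fail:
                    up[i] = False
                elif not up[i] and rng.random() < p_repair:
                    up[i] = True
            capacity = sum(c for (c, _, _), u in zip(units, up) if u)
            if capacity < load:
                total_lol += 1  # one hour of loss of load
    return total_lol / years
```

Comparing the index with and without a hypothetical standby DG unit shows the kind of placement study the methodology supports: the DG covers the load during feeder outages, so loss-of-load hours drop sharply.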
|
330 |
Design and analysis of distributed primitives for mobile ad hoc networks. Chen, Yu. 30 October 2006 (has links)
This dissertation focuses on the design and analysis of distributed primitives for
mobile ad hoc networks, in which mobile hosts are free to move arbitrarily. Arbitrary
mobility adds unpredictability to the topology changes experienced by the network, which
poses a serious challenge for the design and analysis of reliable protocols. In this work,
three different approaches are used to handle mobility. The first part of the dissertation
employs the simple technique of ignoring the mobility and showing a lower bound for the
static case, which also holds in the mobile case. In particular, a lower bound on the worst-case
running time of a previously known token circulation algorithm is proved. In the
second part of the dissertation, a self-stabilizing mutual exclusion algorithm is proposed
for mobile ad hoc networks, which is based on dynamic virtual rings formed by circulating
tokens. The difficulties resulting from mobility are dealt with in the analysis by showing
which properties hold for several kinds of mobile behavior; in particular, it is shown that
mutual exclusion always holds and different levels of progress hold depending on how
the mobility affects the token circulation. The third part of the dissertation presents two
broadcasting protocols which propagate a message from a source node to all of the nodes in
the network. Instead of relying on the frequently changing topology, the protocols depend
on a less frequently changing and more stable characteristic: the distribution of mobile
hosts. Constraints on distribution and mobility of mobile nodes are given which guarantee
that all the nodes receive the broadcast data.
|