11

Modeling, early detection, and mitigation of Internet worm attacks

Zou, Changchun 01 January 2005 (has links)
In recent years, fast-spreading worms have become one of the major threats to the security of the Internet. Code Red, Nimda, Slammer, Blaster, MyDoom: these worms kept hitting the Internet and caused severe damage to our society. However, until now we have not fully understood the behavior of Internet worms, and our defense techniques remain one step behind the attack techniques deployed by attackers. In this dissertation, we present our research on modeling, analysis, and mitigation of Internet worm attacks. In modeling and analysis of Internet worms, we first present a "two-factor" worm model, which considers the impact of human countermeasures and network congestion on a worm's propagation behavior. Through infinitesimal analysis, we derive a uniform-scan worm propagation model that is described by concrete parameters of worms instead of the abstract parameter in traditional epidemic models. Then, based on this model, we derive and analyze how a worm propagates under various scanning strategies, such as uniform scan, routing scan, hit-list scan, cooperative scan, local-preference scan, sequential scan, divide-and-conquer scan, and targeted scan. We also provide an analytical model that accurately captures the Witty worm's destructive behavior. By using the same modeling framework, we reveal the underlying similarities and relationships between different worm scanning strategies. For mass-mailing email worms, we use simulation experiments to study their propagation behavior and the effectiveness of partial immunization. To ensure that we have enough time for defense, it is critical to detect the presence of a worm in the Internet as early as possible. In this research area, we present a novel model-based detection methodology, "trend detection", which deploys Kalman filter estimation to detect the exponential growth trend, not the traffic burst, of monitored malicious traffic, since we believe a fast-spreading Internet worm propagates exponentially at its beginning stage. In addition, we can accurately predict the total vulnerable population in the Internet at the early stage of a worm's propagation, and estimate the number of globally infected hosts based on our limited monitoring resources. In the area of worm mitigation, we derive two fundamental defense principles: "preemptive quarantine" and "feedback adjustment". Based on the first principle, we present a novel "dynamic quarantine" defense system. Based on the second principle, we present an adaptive defense system to defend against various network attacks, including worm attacks and Distributed Denial-of-Service (DDoS) attacks.
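The uniform-scan propagation behavior mentioned in this abstract can be illustrated with a short simulation. The sketch below is not taken from the dissertation; it assumes a scan rate eta, a scanned address space of size omega, and n vulnerable hosts, and numerically integrates the epidemic-style equation dI/dt = (eta/omega) * I * (n - I), whose early exponential growth is the kind of trend a Kalman-filter "trend detection" scheme would look for.

```python
# Minimal sketch of a uniform-scan worm propagation model (assumed parameters,
# not the dissertation's exact ones): each infected host scans eta addresses
# per second out of an address space of size omega containing n vulnerable hosts.
def simulate_uniform_scan(n=360_000, omega=2**32, eta=4000.0,
                          i0=10, dt=1.0, t_end=600.0):
    """Euler integration of dI/dt = (eta / omega) * I * (n - I)."""
    infected = float(i0)
    history = []
    t = 0.0
    while t <= t_end:
        history.append((t, infected))
        # New infections per step: scans that hit still-vulnerable hosts.
        d_infected = (eta / omega) * infected * (n - infected) * dt
        infected = min(float(n), infected + d_infected)
        t += dt
    return history

if __name__ == "__main__":
    for t, i in simulate_uniform_scan()[::60]:
        print(f"t={t:5.0f}s  infected={i:10.1f}")
```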
12

On the interaction among self-interested users of network resources

Zhang, Honggang 01 January 2006 (has links)
We study the interactions among self-interested users of network resources in the context of congestion control and routing protocols in computer networks. In the context of congestion control, we propose a game-theoretic study of the selfish behavior of TCP users when they are allowed to use multiple concurrent TCP connections so as to maximize their goodput (or some other utility function). We refer to this as the TCP connection game. We demonstrate the existence and uniqueness of the Nash equilibrium in several variants of this game. We also generalize this game to model peer-to-peer unstructured file-sharing networks. The bad news is that the loss of efficiency (the price of anarchy) at the Nash equilibrium can be arbitrarily large if users have no resource limitations and are not socially responsible. The good news is that, if either of these two factors is considered, the loss of efficiency is bounded. We then study the interaction between an overlay routing controller and the native (underlay) network routing controller using two approaches. In the first approach, we formulate this interaction as a two-player game, in which the overlay network seeks to minimize the delay of its overlay traffic while the underlay network seeks to minimize the cost of the network as a whole. We show that the selfish behavior of the overlay network can cause large cost increases and oscillations in the entire network. Even worse, we identify cases in which the overlay network's cost increases as the game proceeds, even though the overlay plays optimally in response to the underlay network's routing at each round. To resolve this conflict, we propose that the overlay network play as the leader in a Stackelberg game. In the second approach, we investigate the ability of an overlay network to compensate for "careless" routing in the native network layer, i.e., for network-layer routes not optimized for the performance of application overlays. In particular, we investigate the extent to which an overlay over a careless underlay can achieve performance close to that attainable when underlay routing is performed in an optimal ("careful") manner. We find that the overlay network can compensate for careless underlay routing only when the subgraph formed by the underlay network's routes is sufficiently rich, a property that can be collectively characterized by three graph-theoretic metrics. This result suggests that ISPs can simplify underlay network management by relegating responsibility for performance to application overlays.
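As a rough illustration of the TCP connection game described in this abstract, the sketch below assumes a toy utility model (not the dissertation's exact formulation): each user's goodput on a shared bottleneck is proportional to the number of connections it opens, minus a small per-connection cost, and users repeatedly best-respond to one another. The resulting escalation in connection counts, with no gain in goodput, is the kind of efficiency loss ("price of anarchy") the abstract refers to.

```python
# Toy illustration (assumed utility model, not the dissertation's exact game):
# on a bottleneck of capacity C shared per-connection, a user opening k_i
# connections receives goodput C * k_i / sum_j k_j and pays a small cost per
# extra connection. Best-response dynamics show the escalation that drives
# the loss of efficiency at equilibrium.
def goodput(k, i, capacity=100.0):
    return capacity * k[i] / sum(k)

def utility(k, i, capacity=100.0, per_conn_cost=0.5):
    return goodput(k, i, capacity) - per_conn_cost * (k[i] - 1)

def best_response_dynamics(n_users=3, max_conns=20, rounds=30):
    k = [1] * n_users
    for _ in range(rounds):
        changed = False
        for i in range(n_users):
            best = max(range(1, max_conns + 1),
                       key=lambda c: utility(k[:i] + [c] + k[i + 1:], i))
            if best != k[i]:
                k[i], changed = best, True
        if not changed:       # no user wants to deviate: approximate equilibrium
            break
    return k

if __name__ == "__main__":
    k = best_response_dynamics()
    print("connections per user at equilibrium:", k)
    print("goodputs:", [round(goodput(k, i), 1) for i in range(len(k))])
```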
13

File access characterization and analysis of nonvolatile write caches

Biswas, Prabuddha 01 January 1993 (has links)
The I/O subsystem of computer systems is becoming the bottleneck as a result of recent dramatic improvements in processor speed. Enhancing the performance of the disk and file system is becoming increasingly important to alleviate the effect of the I/O bottleneck. Disk and file system caches have been effective in closing the performance gap between the processor and the I/O subsystem. However, their benefit in commercial systems is restricted to read operations, as write operations are committed to disk to maintain consistency and to allow crash recovery. Consequently, write operations begin to dominate the traffic to the disk. Therefore, it has become increasingly important to address the performance of write operations. To modify an existing I/O subsystem or to architect a new one, it is important to understand current file access patterns. Therefore, in the first part of the dissertation we provide a comprehensive analysis of several file I/O traces collected at commercial, production computer systems. Next, we investigate the use of non-volatile disk caches to address the write I/O bottleneck. Non-volatile caches can be easily incorporated into an existing file system. We address the issues of managing such a cache using a detailed trace-driven simulation. We propose cache management policies that reduce the number of disk writes to a small fraction of the total number of application write operations. Our schemes also improve the write response time by cleaning the write cache asynchronously when the disk is idle and by piggybacking dirty blocks on disk read operations. Finally, we extend the use of non-volatile write caches to the distributed file system environment. We develop a synthetic file access workload model based on the I/O traces and use it to evaluate alternative write caching policies. We show that small non-volatile caches at both the clients and the server are quite effective. They reduce the write response time and the load on the file server, thus improving the performance and scalability of the system.
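The effect of a non-volatile write cache on disk write traffic can be sketched with a small trace-driven simulation. The code below uses an assumed LRU destage policy and a hypothetical synthetic trace (it is not the dissertation's simulator): repeated writes to hot blocks are absorbed in NVRAM, so the number of disk writes is a small fraction of the application writes.

```python
from collections import OrderedDict

# Sketch of a trace-driven non-volatile write-cache simulation (assumed LRU
# destage policy, hypothetical trace): overwrites of cached blocks are absorbed
# in NVRAM, so disk writes can be far fewer than application writes.
def simulate_write_cache(trace, cache_blocks=64):
    cache = OrderedDict()              # block -> dirty flag, kept in LRU order
    app_writes = disk_writes = 0
    for block in trace:
        app_writes += 1
        if block in cache:
            cache.move_to_end(block)   # overwrite absorbed in the cache
        else:
            if len(cache) >= cache_blocks:
                cache.popitem(last=False)   # destage the coldest dirty block
                disk_writes += 1
            cache[block] = True
    disk_writes += len(cache)          # final flush of remaining dirty blocks
    return app_writes, disk_writes

if __name__ == "__main__":
    import random
    random.seed(1)
    # Hypothetical skewed workload: 90% of writes go to 10% of the blocks.
    hot, cold = list(range(32)), list(range(32, 1024))
    trace = [random.choice(hot if random.random() < 0.9 else cold)
             for _ in range(10_000)]
    w, d = simulate_write_cache(trace)
    print(f"application writes: {w}, disk writes: {d} ({d / w:.1%})")
```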
14

Algorithms for distributed systems under imperfect status information

Choi, Myungwhan 01 January 1990 (has links)
A recent trend in computer system design has been to distribute tasks among multiple processors, and a wide variety of such systems can be proposed. The potential benefits of this trend include modular expansion, increased cost-performance, availability, and reliability. To fully exploit these potential advantages, efficient control of resources is essential. In this dissertation, we attack some of the important problems that arise in the distributed system environment. Central to this dissertation is the issue of imperfect information, and how to make control decisions in the face of imperfect information. While a great deal of attention has been lavished on token-ring systems over the past few years, little has been said about synthesizing protocols to satisfy complex priority requirements. We present an algorithm that partly fills this gap. It is a simple, adaptive algorithm which can ensure prescribed differential service to the various job classes. Applications include maintaining a throughput differential between job classes, integrating voice and data packets, and maintaining the mean waiting times of the various job classes in prescribed ratios. Such an algorithm is likely to be useful whenever disparate traffic classes must coexist on the same ring. We also study the effect of the decay of status information on the goodness of load balancing algorithms. We study empirically, using regression analysis, the relationship between the characterizing parameters of a very simple model using the join-the-shortest-queue algorithm. Our results indicate that these parameters--number of queues, status update rate, and coefficient of variation of the job service time--are far from orthogonal in their impact on system performance. We have empirically obtained expressions for the sensitivity of the response time to the characterizing parameters. Designers can use these expressions to determine appropriate status update rates. We also present a comparison of some simple distributed load balancing algorithms which update status information regularly with those which do so adaptively. The adaptive algorithms broadcast status only when it changes by more than a threshold. They are almost as simple to implement as algorithms which periodically exchange status information, and provide better performance. Finally, we introduce a simple, adaptive buffer allocation algorithm and study its performance in the face of imperfect status information, for distributed systems in which multiple classes of customers contend for a finite buffer served by a single server. We believe that this algorithm is especially useful when the number of classes or the input loading changes dynamically, and when buffer sizes are small.
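The effect of stale status information on join-the-shortest-queue routing can be illustrated with a small discrete-time simulation. The sketch below uses hypothetical parameters and is not the dissertation's model: the dispatcher sees queue lengths only at periodic status updates, so its view decays between updates and performance degrades as the update interval grows.

```python
import random

# Sketch of join-the-shortest-queue routing with stale status information
# (assumed discrete-time model, hypothetical parameters): the dispatcher sees
# queue lengths only every `update_interval` steps.
def simulate_jsq(n_queues=4, arrival_prob=0.7, service_prob=0.2,
                 update_interval=10, steps=50_000, seed=0):
    rng = random.Random(seed)
    queues = [0] * n_queues
    snapshot = list(queues)                 # dispatcher's possibly stale view
    total_backlog = 0
    for t in range(steps):
        if t % update_interval == 0:
            snapshot = list(queues)         # fresh status broadcast
        if rng.random() < arrival_prob:     # route a new job by the stale view
            target = min(range(n_queues), key=lambda i: snapshot[i])
            queues[target] += 1
        for i in range(n_queues):           # each server may finish one job
            if queues[i] > 0 and rng.random() < service_prob:
                queues[i] -= 1
        total_backlog += sum(queues)
    return total_backlog / steps            # mean number of jobs in the system

if __name__ == "__main__":
    for interval in (1, 10, 100):
        print(f"update every {interval:3d} steps -> "
              f"mean backlog {simulate_jsq(update_interval=interval):.2f}")
```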
15

Sharing patterns and the single cachable coherence scheme for shared memory systems

Mendelson, Abraham 01 January 1990 (has links)
This thesis presents a new cache coherence protocol for shared-bus multicache systems, and addresses the mutual relations among sharing patterns, the performance of multicache systems, and hardware-level support for synchronization primitives. First, we look at the impact of the software environment on the performance of multicache systems. We present new models for predicting the ratio between the number of dead and live lines in an interleaved environment, and use these results to define and model two important classes of shared data in multicache systems: truly and falsely shared data. An optimal cache coherence protocol, termed Par-Opt, is defined based on the observation that only modifications of truly shared data have to be propagated to other caches. The new coherence protocol is presented in the second part. The protocol is termed Single Cache Data Coherence (SCCDC), and calls for keeping at most a single copy of any shared writable datum in the multicache system. The protocol incurs overhead only when truly shared data is accessed, at the price of extra overhead for multiple reads of the same data. We show that in software environments where the amount of falsely shared data is significant, the proposed protocol outperforms the other cache coherence protocols. For many other environments, an integration of software-based mechanisms to tag read-only shared data with the SCCDC protocol can handle cache coherence efficiently. Hardware support for "critical-section-free" access to multi-access shared data is also presented. We show that if the software guarantees to access the shared data only through two hardware-based mechanisms, many algorithms can be developed to manipulate commonly used data structures such as trees, linked lists, and hash tables without locking the entire data structure. These mechanisms, when implemented for multicache architectures, can be supported efficiently only by SCCDC-based systems. The support for "critical-section-free" access gives the SCCDC protocol a further advantage when used to support software environments such as databases, AI, and data retrieval applications.
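The single-copy idea behind SCCDC can be sketched as a tiny directory that tracks, for each shared writable line, the one cache that currently holds it; an access from another processor migrates that copy instead of broadcasting invalidations. This is an assumed simplification for illustration only, not the thesis's protocol specification, and it also exposes the trade-off the abstract mentions: truly shared data read by several processors keeps bouncing between caches.

```python
# Sketch of the single-copy idea behind SCCDC (assumed, simplified semantics):
# a shared-writable line lives in at most one cache at a time, so an access
# from another processor migrates the single copy instead of broadcasting
# invalidations to every cache on the bus.
class SingleCopyDirectory:
    def __init__(self):
        self.owner = {}            # line address -> id of the one caching CPU

    def access(self, cpu, line):
        prev = self.owner.get(line)
        if prev is None:
            event = f"cpu{cpu}: miss on line {line:#x}, fetched from memory"
        elif prev != cpu:
            event = (f"cpu{cpu}: line {line:#x} migrated from cpu{prev} "
                     f"(no invalidation broadcast)")
        else:
            event = f"cpu{cpu}: hit on line {line:#x}"
        self.owner[line] = cpu     # the single cached copy now lives here
        return event

if __name__ == "__main__":
    directory = SingleCopyDirectory()
    for cpu, line in [(0, 0x40), (1, 0x40), (0, 0x40), (0, 0x80)]:
        print(directory.access(cpu, line))
```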
16

Mapping of algorithms onto programmable data-driven arrays

Mendelson, Bilha 01 January 1991 (has links)
Control-driven arrays (e.g., systolic arrays) provide high levels of parallelism and pipelining for inherently regular computations. Data-driven arrays can provide the same for algorithms with no internal regularity. The purpose of this research is to establish a method for speeding up an algorithm by mapping and executing it on a data-driven array. The array being used is a homogeneous, hexagonal, data-driven processor array. Mapping a general algorithm, given in the dataflow language SISAL, consists of translating the algorithm into a data flow graph (DFG) and assigning every node in the DFG to a processing element (PE) in the array. This research aims to find an efficient mapping that minimizes area and maximizes the performance of the given algorithm, or finds a tradeoff between the two.
17

Polymorphic multiple-processor networks

Rana, Deepak 01 January 1991 (has links)
Many existing multiple-processor architectures are designed to efficiently exploit parallelism in a specific narrow range, where the extremes are fine-grained data parallelism and coarse-grained control parallelism. Most real-world problems are composed of multiple tasks which vary in their range of parallelism. Future multiple-processor architectures must be "flexible" in supporting multiple forms of parallelism to tackle these problems. This thesis addresses issues related to communication in "flexible" multiple-processor systems. Intermediate-level vision is chosen as the application domain for demonstrating multi-modal parallelism. The specific problem addressed is to determine the communication requirements of the intermediate-level processors of the Image Understanding Architecture, explore the design space of potential solutions, develop a network design that meets the requirements, demonstrate the feasibility of constructing the design, and show both analytically and empirically that the design meets the requirements. The major contributions of this thesis are: (1) Crossbars and other dense networks are viable design alternatives even for large parallel processors; (2) Central control is viable for reasonably large network sizes, which is contrary to conventional wisdom; (3) It is shown that by using a special search memory to implement part of the Clos and Benes network routing algorithm in hardware, it is feasible to quickly reconfigure these networks so that they may be used in fine-grained, data-dependent communication; (4) The feasibility of constructing easily reconfigurable communication networks for "flexible" multiple-processor systems is shown. These networks can quickly reconfigure their topologies to best suit a particular algorithm, can be controlled efficiently (in SIMD as well as MIMD mode), and can efficiently route messages (especially with low overhead in SIMD mode); (5) During the course of this investigation it was discovered that flexible communication, as well as shared-memory support, is much more critical for supporting intermediate-level vision than providing a variety of fixed communication patterns. This observation may also have implications for general-purpose parallel processing; and (6) It was also discovered that supporting a symbolic token database at the intermediate level is a more fundamental requirement than supporting particular algorithms.
18

Spatial and temporal grouping in the interpretation of image motion

Sawhney, Harpreet S 01 January 1992 (has links)
Interpretation of the three-dimensional (3D) world from two-dimensional (2D) images is a primary goal of vision. Image motion is an important source of 3D information. The 3D motion (relative to a camera) and the 3D structure of the environment manifest themselves as image motion. The primary focus of this thesis is the reliable derivation of scene structure through the interpretation of motion in monocular images over extended time intervals. An approach is presented for the integration of spatial models of the 3D world with temporal models of 3D motion in order to robustly derive 3D structure and perform tracking and segmentation. Two specific problems are addressed to illustrate this approach. First, a model of a class of 3D objects is combined with smooth 3D motion to track, identify, and reconstruct shallow structures in the scene. Second, a specific model of 3D motion along with general spatial constraints is employed for 3D reconstruction of motion trajectories. Both parts rely fundamentally upon quantitative modeling of the common fate of a structure in motion. In many man-made environments, obstacles in the path of a mobile robot can be characterized as shallow, i.e., they have relatively small extent in depth compared to their distance from the camera. Shallowness can be quantified as affine describability. This is embedded in a tracking system to discriminate between shallow and non-shallow structures based on their affine trackability. The temporal evolution of a structure, derived using affine constraints, is used to verify its identity as a shallow structure and for its 3D reconstruction. Spatio-temporal analysis and integration is further demonstrated through a two-stage technique for the reconstruction of the 3D structure and motion of a scene undergoing rotational motion with respect to the camera. First, the spatio-temporal grouping of discrete point correspondences into a set of conic trajectories in the image plane is obtained by exploiting their common motion. This leads to a description that is more reliable than the independent fitting of trajectories. The second stage uses a new closed-form solution for computing the 3D trajectories from the computed image trajectories under perspective projection.
19

Connection management in very-high-speed interconnection systems

Kienzle, Martin G 01 January 1992 (has links)
The requirement to connect high-performance components of supercomputer systems has caused an increasing demand for connection-oriented, high-speed peer-to-peer communication. The importance of this type of communication has led to the advent of the HIPPI standard, designed to support interconnection systems for this environment. While much research has been directed towards the physical layer of these systems, not much effort has been expended to address system control issues. This thesis explores the connection management, service disciplines, configuration, and performance of very-high-speed interconnection systems. We propose several connection management policies that represent different trade-offs of cost, efficiency, and system performance. The centralized connection management policy assumes knowledge of the connection state, requires a complex implementation, but gives the best performance. At the other end of the spectrum, the distributed connection management policy assumes no knowledge of the connection state and is very simple to implement. However, its performance is lower, and its best performance can be achieved only at the cost of significant overhead to the node systems. For the implementation of the centralized policy, we introduce a connection management algorithm that gives preferential access to important connections according to a non-preemptive priority discipline, that treats connections within the same priority class equitably, and that achieves these objectives at high performance and low node overhead. We discuss the implementation of this connection management algorithm in two concrete system contexts. To support the evaluation of these policies and of the priority algorithm, we develop analytic performance models that capture the salient features of the interconnection systems, their configurations, their connection management policies, and their service disciplines. Using these models, we compare the performance and the node overhead of the connection management policies and interconnection system configurations, and we demonstrate the impact of the choice of service discipline.
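A centralized connection manager with a non-preemptive priority discipline, of the kind described in this abstract, might be sketched as follows. The interface and data structures are assumptions for illustration, not the thesis's algorithm: pending requests are ordered by priority class and served FIFO within a class, a connection is granted only when both endpoints are idle, and granted connections are never preempted.

```python
import heapq
import itertools

# Sketch of a centralized connection manager with a non-preemptive priority
# discipline (assumed interface, not the thesis's exact algorithm): requests
# are ordered by priority class, FIFO within a class, granted only when both
# endpoints are free, and never preempted once granted.
class ConnectionManager:
    def __init__(self):
        self._pending = []                 # heap of (priority, seq, src, dst)
        self._seq = itertools.count()      # FIFO tie-breaker within a class
        self._busy = set()                 # endpoints with an active connection

    def request(self, src, dst, priority):
        heapq.heappush(self._pending, (priority, next(self._seq), src, dst))
        return self._grant()

    def release(self, src, dst):
        self._busy.discard(src)
        self._busy.discard(dst)
        return self._grant()

    def _grant(self):
        granted, deferred = [], []
        while self._pending:
            prio, seq, src, dst = heapq.heappop(self._pending)
            if src not in self._busy and dst not in self._busy:
                self._busy.update((src, dst))
                granted.append((src, dst, prio))
            else:
                deferred.append((prio, seq, src, dst))
        for item in deferred:
            heapq.heappush(self._pending, item)
        return granted

if __name__ == "__main__":
    cm = ConnectionManager()
    print(cm.request("supercomputer", "disk-array", priority=1))
    # Higher priority (0), but it cannot preempt the active connection.
    print(cm.request("frame-buffer", "disk-array", priority=0))
    print(cm.release("supercomputer", "disk-array"))   # now granted
```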
20

Design, modeling, and evaluation of high-performance I/O subsystems

Chen, Shenze 01 January 1992 (has links)
In today's computer systems, the disk I/O subsystem is often identified as a major bottleneck to system performance. Removing this bottleneck has proven to be a challenging research problem. The research reported in this dissertation is motivated by the design and performance issues of disk I/O subsystems for a variety of today's computing systems. The goal is to design and evaluate high-performance I/O subsystems for computing systems which support various applications. Our contributions are three-fold: for different application areas, we (1) propose new architectures and scheduling policies for disk I/O subsystems; (2) develop analytic models and simulators for these disk subsystems; and (3) study and compare the performance of these architectures and scheduling policies. First, we study a mirrored disk, which can be found in various fault-tolerant systems, where each data item is duplicated. While the primary purpose of disk mirroring is to provide fault tolerance, we are interested in how to achieve performance gains by taking advantage of the two data copies. In particular, we propose and examine several policies which differ in the manner in which read requests are scheduled to one of the two copies. Analytic models are developed to capture the behavior of these policies, and the best policies are identified. In addition, the modeling techniques developed in this study are of independent interest and can be used or extended in other system studies. Second, we investigate existing and propose new disk array architectures for high-performance computing systems. Depending on the applications, we propose scheduling policies suitable for these architectures. In particular, we study three variations of RAID 1: mirrored declustering, chained declustering, and group-rotate declustering. We compare the performance of RAID 5 versus Parity Striping. Finally, we examine the performance/cost trade-off of RAID 1 and RAID 5. The performance studies of the various disk array architectures, coupled with the proposed scheduling policies, are based on our analytic models and simulators. Third, we examine disk I/O subsystems for real-time systems, where each I/O is expected to complete before a deadline. The goal is to minimize the fraction of requests that miss their deadlines. To this end, we first propose two new real-time disk scheduling algorithms and compare them with other known real-time and conventional algorithms in an integrated real-time transaction system model. We then extend this study to a mirrored disk and to disk arrays by combining the real-time algorithms with the architectures and policies studied before. Last, we study disk I/O in a non-removal real-time system environment.
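One family of read-scheduling policies for a mirrored disk pair, of the kind compared in this abstract, can be illustrated with a toy discrete-time model. The parameters and policies below are assumptions for illustration, not the dissertation's analytic models: reads are sent either to a random copy or to the copy with the shorter queue, while writes must go to both copies.

```python
import random

# Toy model of read scheduling on a mirrored disk pair (assumed policies and
# parameters, not the dissertation's analytic models): reads go either to a
# random copy or to the copy with the shorter queue; writes go to both copies.
def simulate_mirror(policy="shortest_queue", n_steps=100_000,
                    read_fraction=0.7, service_prob=0.75, seed=0):
    rng = random.Random(seed)
    queues = [0, 0]                        # outstanding requests per disk
    total_outstanding = 0
    for _ in range(n_steps):
        if rng.random() < read_fraction:   # read: served by a single copy
            if policy == "shortest_queue":
                target = 0 if queues[0] <= queues[1] else 1
            else:
                target = rng.randrange(2)
            queues[target] += 1
        else:                              # write: must update both copies
            queues[0] += 1
            queues[1] += 1
        for d in range(2):                 # each disk may complete one request
            if queues[d] > 0 and rng.random() < service_prob:
                queues[d] -= 1
        total_outstanding += sum(queues)
    return total_outstanding / n_steps

if __name__ == "__main__":
    for pol in ("random", "shortest_queue"):
        print(f"{pol:15s} mean outstanding requests: "
              f"{simulate_mirror(policy=pol):.2f}")
```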
