11

Flow and congestion control for reliable multicast communication in wide-area networks

Bhattacharyya, Supratik 01 January 2000 (has links)
Applications involving the reliable transfer of large volumes of data from a source to multiple destinations across wide-area networks are expected to become increasingly important in the near future. A few examples are point-to-multipoint FTP, news distribution, Web caching and software updates. Multicasting technology promises to enhance the capabilities of wide-area networks for supporting these applications. Flow and congestion control has emerged as one of the biggest challenges in the large-scale deployment of reliable multicast in the Internet. This thesis addresses two specific problems in the design of transport-level flow/congestion control schemes for reliable multicast data transfer: (1) How should feedback-based congestion control schemes be designed for regulating the rate of transmission of data to a single multicast group? (2) How can multiple multicast groups be used to improve the efficiency of bulk data delivery? The first part of this thesis focuses on the design and evaluation of multicast congestion control schemes in which a source regulates its transmission rate in response to packet loss indications from its receivers. We identify and analyze an important problem that arises because a transmitted packet may get lost on one or more of the many end-to-end paths in a multicast tree, and we also study its impact on fair bandwidth sharing among co-existing multicast and unicast sessions. An outcome of this work is a fair congestion control approach that scales well to large multicast groups. We also design and examine a prototype protocol that is “TCP-friendly”. The second part of this thesis considers the problem of efficiently transferring data to a large number of destinations in the presence of heterogeneous bandwidth constraints in different parts of a network. We propose a novel approach in which the sender stripes data across multiple multicast groups and transmits it to different sub-groups of receivers at different rates. We also design and evaluate simple and bandwidth-efficient algorithms for determining the transmission rates associated with each multicast group.
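As a rough illustration of the kind of “TCP-friendly” rate regulation described in this abstract, the sketch below derives a single group sending rate from receiver loss reports using the simplified TCP throughput equation. The aggregation rule (tracking the most congested receiver), the MSS value, and the sample reports are illustrative assumptions, not the protocol developed in the thesis.

```python
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Simplified TCP throughput equation: rate ~ MSS / (RTT * sqrt(2p/3)),
    in bytes per second."""
    if loss_rate <= 0:
        return float("inf")   # no observed loss: the rate is limited elsewhere
    return mss_bytes / (rtt_s * math.sqrt(2.0 * loss_rate / 3.0))

def multicast_send_rate(receiver_reports, mss_bytes=1460):
    """Regulate one group sending rate by tracking the most congested
    receiver (a hypothetical aggregation rule, not the thesis's scheme)."""
    return min(tcp_friendly_rate(mss_bytes, rtt, p) for rtt, p in receiver_reports)

# Example: three receivers report (round-trip time in seconds, loss rate).
reports = [(0.05, 0.01), (0.12, 0.002), (0.30, 0.05)]
print(f"group send rate: {multicast_send_rate(reports) / 1e3:.1f} kB/s")
```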
12

On multi-scale differential features and their representations for image retrieval and recognition

Ravela, Srinivas S 01 January 2003 (has links)
Visual appearance is described as a cue with which we discriminate images. It has been conjectured that appearance similarity emerges from similarity between features of image surfaces. However, the design of effective appearance features and their efficient representations is an open problem. In this dissertation, appearance features are developed by decomposing image brightness surfaces differentially in space and in scale. Image representations constructed from multi-scale differential features are compared to determine appearance similarity. The first part of this thesis explores image structure in scale and space. Multi-scale differential features are generated by filtering images with Gaussian derivatives at multiple scales (GMDFs). This provides a robust local characterization of the brightness surface; filtered outputs can be transformed to obtain rotation, illumination, view and scale tolerance. Differential features are also shown to be descriptive; both local and global representations of images can be composed from them. The second part of this thesis begins by illustrating local and global representations, including feature-templates, -graphs, -ensembles and -distributions. It continues by developing one algorithm, CO-1, in detail. In this algorithm, two robust differential features, the orientation of the local gradient and the shape-index, are selected for constructing representations. GMDF distributions of the first type are used to represent images, and the Euclidean distance measure is used to determine similarity between representations. The first application of CO-1 is to image retrieval, a task central to developing search and organization tools for digital multimedia collections. CO-1 is applied to example-based browsing of image collections and trademark retrieval, where appearance similarity can be important for adjudicating relevance. The second application of this work is to image-based and view-based object recognition. Results are demonstrated for face recognition using several standard collections. The central contribution of this work, in the words of a reviewer, is “… in the simplicity and elegance of the approach of using low-level multi-scale differential image structure.” We posit that this thesis highlights the utility of exploring differential image structure to synthesize features effective in a wide range of appearance-based retrieval and recognition tasks.
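The sketch below illustrates the multi-scale Gaussian-derivative idea: filtering a brightness surface with Gaussian derivatives at several scales, then computing the two robust features named above (gradient orientation and shape-index) and summarizing them as joint distributions. The scipy-based filtering, the chosen sigmas, and the histogram binning are illustrative assumptions, not the exact CO-1 filter set or normalization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gmdf_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Gaussian-derivative responses at several scales (a sketch of the GMDF
    idea; the dissertation's exact filters and normalization may differ)."""
    feats = []
    for s in sigmas:
        Ix = gaussian_filter(image, s, order=(0, 1))   # first derivative along x
        Iy = gaussian_filter(image, s, order=(1, 0))   # first derivative along y
        Ixx = gaussian_filter(image, s, order=(0, 2))
        Iyy = gaussian_filter(image, s, order=(2, 0))
        Ixy = gaussian_filter(image, s, order=(1, 1))
        orientation = np.arctan2(Iy, Ix)               # local gradient orientation
        # principal curvatures of the brightness surface from the Hessian,
        # then the shape index S = (2/pi) * arctan((k1 + k2) / (k1 - k2))
        disc = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
        k1, k2 = (Ixx + Iyy) / 2.0 + disc, (Ixx + Iyy) / 2.0 - disc
        shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
        feats.append((orientation, shape_index))
    return feats

img = np.random.rand(64, 64)                    # stand-in for a brightness surface
histograms = [np.histogram2d(o.ravel(), si.ravel(), bins=8)[0]
              for o, si in gmdf_features(img)]  # joint feature distributions
```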
13

Toward quantified control for organizationally situated agents

Wagner, Thomas Anderson 01 January 2000 (has links)
Software agents are situated, embedded in an environment that they can sense and effect; flexible, having choices and responding to the environment; and autonomous, making choices independently. This dissertation focuses on the issue of choice in autonomous agents. We use the expression agent control to denote the choice process—the process of deciding which tasks to perform, when to perform them, and possibly with whom a given agent should cooperate to perform a given task. In this dissertation, we explore a detailed approach to local agent control called Design-to-Criteria Scheduling and then move agent control to a higher level of abstraction, where agents reason about their larger organizational context and the motivations for performing particular tasks. This higher reasoning level is called the Motivational Quantities level.
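As a minimal sketch of criteria-directed choice in the spirit of Design-to-Criteria scheduling, the snippet below scores alternative courses of action against client-supplied weights on quality, cost, and duration. The attributes, weights, and scoring rule are hypothetical, intended only to illustrate the idea of choosing among tasks against criteria.

```python
# Hypothetical criteria-directed task selection: each alternative carries
# estimated quality, cost, and duration; the agent picks the alternative
# that best matches the client's weighted criteria.

def score(alternative, weights):
    q, c, d = alternative["quality"], alternative["cost"], alternative["duration"]
    return weights["quality"] * q - weights["cost"] * c - weights["duration"] * d

alternatives = [
    {"name": "fast-and-cheap", "quality": 0.4, "cost": 1.0, "duration": 2.0},
    {"name": "thorough",       "quality": 0.9, "cost": 4.0, "duration": 8.0},
    {"name": "balanced",       "quality": 0.7, "cost": 2.0, "duration": 4.0},
]
weights = {"quality": 10.0, "cost": 0.5, "duration": 0.3}   # client criteria
best = max(alternatives, key=lambda a: score(a, weights))
print("chosen course of action:", best["name"])
```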
14

Equivalence checking of arithmetic expressions with applications in DSP synthesis

Zhou, Zheng 01 January 1996 (has links)
Numerous formal verification systems have been proposed and developed for FSM-based control units (notably SMV (71), as well as others). However, most research on the equivalence checking of datapaths is still confined to the bit- or word-level. Formal verification of arithmetic expressions and synthesized datapaths, especially considering finite word-length computation, has not been addressed. Thus formal verification techniques have been precluded from more extensive application in numerical and Digital Signal Processing domains. In this dissertation a formal system, called Conditional Term Rewriting on Attribute Syntax Trees (ConTRAST), is developed and demonstrated for verifying the equivalence between two differently synthesized datapaths. This result arises from a sophisticated integration of three key techniques: attribute grammars, which contribute expressive data structures for syntactic and semantic information about designed datapaths; term rewrite systems, which transform functionally equivalent datapaths into the same canonical form; and LR parsing, which provides an efficient tool for integrating the attribute grammar and the term rewriting system. Unlike other canonical representations, such as BDD (15) and BMD* (17), ConTRAST achieves canonicity by manipulating symbolic expressions instead of enumerating values of expressions at the bit- or word-level. Furthermore, the effect of finite word-lengths and their associated arithmetic precision is also considered in the definition of equivalence classes. As a particular application of ConTRAST, a DSP design verification tool called Fixed-Point Verifier (FPV) has been developed. Like present DSP hardware design tools, FPV allows users to describe filters in the form of arithmetic expressions and to specify arbitrary fixed-point word-lengths on various signals. However, unlike simulation-based verification methods such as Cadence/Alta's Fixed Point Optimizer and Mentor's DSPstation, FPV can automatically perform correctness-checking and equivalence-checking for a given filter design under the effect of finite word-length.
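To make the rewrite-to-canonical-form idea concrete, here is a toy equivalence check: two arithmetic expressions are normalized by distributing multiplication over addition, flattening associative operators, and sorting operands (commutativity), then compared. This is only a sketch of rewrite-based canonicalization; ConTRAST's actual rule set, attribute grammar, and finite word-length semantics are far richer and are not reproduced here.

```python
# Toy rewrite-based equivalence check on expression trees represented as
# nested tuples, e.g. ('*', ('+', 'a', 'b'), 'c').

def normalize(e):
    if isinstance(e, str):                      # a signal name / variable
        return e
    op, args = e[0], [normalize(a) for a in e[1:]]
    if op == '*':
        # distribute multiplication over any additive operand
        for i, a in enumerate(args):
            if isinstance(a, tuple) and a[0] == '+':
                rest = args[:i] + args[i + 1:]
                return normalize(('+', *[('*', t, *rest) for t in a[1:]]))
    # flatten nested nodes with the same associative operator
    flat = []
    for a in args:
        flat.extend(a[1:] if isinstance(a, tuple) and a[0] == op else [a])
    return (op, *sorted(flat, key=repr))        # commutativity: sort operands

# (a + b) * c  vs.  c*a + b*c  -- equivalent over exact arithmetic
e1 = ('*', ('+', 'a', 'b'), 'c')
e2 = ('+', ('*', 'c', 'a'), ('*', 'b', 'c'))
print(normalize(e1) == normalize(e2))           # True
```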
15

Modeling, early detection, and mitigation of Internet worm attacks

Zou, Changchun 01 January 2005 (has links)
In recent years, fast-spreading worms have become one of the major threats to the security of the Internet. Code Red, Nimda, Slammer, Blaster, MyDoom... these worms kept hitting the Internet and caused severe damage to our society. However, until now we have not fully understood the behaviors of Internet worms; our defense techniques are still one step behind the attack techniques deployed by attackers. In this dissertation, we present our research on modeling, analysis, and mitigation of Internet worm attacks. In modeling and analysis of Internet worms, we first present a "two-factor" worm model, which considers the impact of human countermeasures and network congestion on a worm's propagation behavior. Through infinitesimal analysis, we derive a uniform-scan worm propagation model that is described by concrete parameters of worms instead of the abstract parameter in traditional epidemic models. Then, based on this model, we derive and analyze how a worm propagates under various scanning strategies, such as uniform scan, routing scan, hit-list scan, cooperative scan, local preference scan, sequential scan, divide-and-conquer scan, target scan, etc. We also provide an analytical model that accurately captures the Witty worm's destructive behavior. By using the same modeling framework, we reveal the underlying similarity and relationship between different worm scanning strategies. For mass-mailing email worms, we use simulation experiments to study their propagation behaviors and the effectiveness of partial immunization. To ensure that we have enough time for defense, it is critical to detect the presence of a worm in the Internet as early as possible. In this research area, we present a novel model-based detection methodology, "trend detection", which deploys Kalman filter estimation to detect the exponential growth trend, not the traffic burst, of monitored malicious traffic, since we believe a fast-spreading Internet worm propagates exponentially in its beginning stage. In addition, we can accurately predict the total vulnerable population in the Internet at the early stage of a worm's propagation, and estimate the number of globally infected hosts based on our limited monitoring resources. In the area of worm mitigation, we derive two fundamental defense principles: "preemptive quarantine" and "feedback adjustment". Based on the first principle, we present a novel "dynamic quarantine" defense system. Based on the second principle, we present an adaptive defense system to defend against various network attacks, including worm attacks and Distributed Denial-of-Service (DDoS) attacks.
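The uniform-scan propagation model mentioned above can be sketched numerically: each infected host scans eta addresses per second in a space of size Omega, giving dI/dt = (eta / Omega) I (N - I) for I infected hosts out of N vulnerable ones. The parameter values below (a roughly Code-Red-sized population and scan rate) are illustrative, not the values fitted in the dissertation.

```python
def simulate_uniform_scan(N=360_000, eta=6, Omega=2**32, I0=10,
                          dt=1.0, t_end=6 * 3600):
    """Discretized uniform-scan worm model: dI/dt = (eta/Omega) * I * (N - I)."""
    I, trace = float(I0), []
    for _ in range(int(t_end / dt)):
        I += (eta / Omega) * I * (N - I) * dt
        trace.append(I)
    return trace

trace = simulate_uniform_scan()
# exponential growth at the early stage -- the signal "trend detection" keys on
print([round(x) for x in trace[::3600]])
```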
16

On the interaction among self-interested users of network resources

Zhang, Honggang 01 January 2006 (has links)
We study the interactions among self-interested users of network resources in the context of congestion control and routing protocols in computer networks. In the context of congestion control, we propose a game-theoretic study of the selfish behavior of TCP users when they are allowed to use multiple concurrent TCP connections so as to maximize their goodputs (or some other utility function). We refer to this as the TCP connection game. We demonstrate the existence and uniqueness of a Nash equilibrium in several variants of this game. We also generalize this game to model peer-to-peer unstructured file-sharing networks. The bad news is that the loss of efficiency (the price of anarchy) at the Nash equilibrium can be arbitrarily large if users have no resource limitations and are not socially responsible. The good news is that, if either of these two factors is considered, the loss of efficiency is bounded. We then study the interaction between an overlay routing controller and the underlying native network's routing controller using two approaches. In the first approach, we formulate this interaction as a two-player game, in which the overlay network seeks to minimize the delay of its overlay traffic while the underlay network seeks to minimize the network cost as a whole. We show that the selfish behavior of the overlay network can cause large cost increases and oscillations in the entire network. Even worse, we have identified cases in which the overlay network's cost increases as the game proceeds, even though the overlay plays optimally in response to the underlay network's routing at each round. To resolve this conflict, we propose that the overlay network play as the leader in a Stackelberg game. In the second approach, we investigate the ability of an overlay network to compensate for "careless" routing in the native network layer, i.e., for network-layer routes not optimized for the performance of application overlays. In particular, we investigate the extent to which overlay-over-careless-underlay can achieve performance close to that attainable when underlay routing is performed in an optimal ("careful") manner. We find that the overlay network can compensate for careless underlay routing only when the sub-graph formed by the underlay network's routes is rich, a property that can be collectively characterized by three graph-theoretic metrics. This result suggests that ISPs can simplify underlay network management by relegating responsibility for performance to application overlays.
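A toy best-response computation gives a feel for the TCP connection game described above: each user opens some number of parallel connections and (roughly) receives a proportional share of the bottleneck capacity, less a per-connection cost. The utility function, capacity, cost, and user count below are illustrative assumptions, not the dissertation's exact formulation; the loop simply iterates best responses until the connection profile settles at a (symmetric) Nash equilibrium.

```python
def utility(n_i, n_others, capacity=100.0, cost_per_conn=1.0):
    """User i's payoff: proportional goodput share minus per-connection cost."""
    total = n_i + n_others
    goodput = capacity * n_i / total if total > 0 else 0.0
    return goodput - cost_per_conn * n_i

def best_response(n_others, max_conns=50):
    """Best integer number of connections against the others' total."""
    return max(range(1, max_conns + 1), key=lambda n: utility(n, n_others))

# iterate best responses for three symmetric users until the profile stabilizes
conns = [1, 1, 1]
for _ in range(20):
    conns = [best_response(sum(conns) - conns[i]) for i in range(len(conns))]
print("equilibrium connection counts:", conns)
```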
17

File access characterization and analysis of nonvolatile write caches

Biswas, Prabuddha 01 January 1993 (has links)
The I/O subsystem of computer systems is becoming the bottleneck as a result of recent dramatic improvements in processor speed. Enhancing the performance of the disk and file system is becoming increasingly important to alleviate the effect of the I/O bottleneck. Disk and file system caches have been effective in closing the performance gap between the processor and the I/O subsystem. However, their benefit in commercial systems is restricted to read operations, as write operations are committed to disk to maintain consistency and to allow crash recovery. Consequently, write operations begin to dominate the traffic to the disk. It has therefore become increasingly important to address the performance of write operations. To modify an existing I/O subsystem or to architect a new one, it is important to understand current file access patterns. Therefore, in the first part of the dissertation we provide a comprehensive analysis of several file I/O traces collected at commercial, production computer systems. Next, we investigate the use of non-volatile disk caches to address the write I/O bottleneck problem. Non-volatile caches can be easily incorporated into an existing file system. We address the issues around managing such a cache using a detailed trace-driven simulation. We propose cache management policies that reduce the number of disk writes to a small fraction of the total number of application write operations. Our schemes also improve the write response time by cleaning the write cache asynchronously when the disk is idle and by piggybacking dirty blocks on disk read operations. Finally, we extend the use of non-volatile write caches to the distributed file system environment. We develop a synthetic file access workload model based on the I/O traces and use it to evaluate alternative write caching policies. We show that small non-volatile caches at both the clients and the server are quite effective. They reduce the write response time and the load on the file server, thus improving the performance and scalability of the system.
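The sketch below shows, in miniature, why a non-volatile write cache can shrink disk write traffic: overwrites to already-cached blocks are absorbed in NVRAM, and dirty blocks are destaged only on eviction or during idle periods. The capacity, synthetic trace, and cleaning policy are illustrative assumptions, not the policies evaluated in the dissertation.

```python
from collections import OrderedDict

class NVWriteCache:
    """Toy non-volatile write cache: overwrites to cached blocks are absorbed
    in NVRAM; dirty blocks are destaged to disk on eviction or when idle."""
    def __init__(self, capacity_blocks=1024):
        self.capacity = capacity_blocks
        self.dirty = OrderedDict()                # block id -> dirty flag, LRU order
        self.disk_writes = 0

    def write(self, block):
        if block in self.dirty:
            self.dirty.move_to_end(block)         # overwrite absorbed, no disk I/O
        else:
            if len(self.dirty) >= self.capacity:
                self.dirty.popitem(last=False)    # destage the oldest dirty block
                self.disk_writes += 1
            self.dirty[block] = True

    def idle_clean(self, batch):
        """Asynchronously clean dirty blocks while the disk is idle."""
        for _ in range(min(batch, len(self.dirty))):
            self.dirty.popitem(last=False)
            self.disk_writes += 1

cache = NVWriteCache()
trace = [b % 200 for b in range(10_000)]          # synthetic trace, heavy re-writing
for blk in trace:
    cache.write(blk)
cache.idle_clean(batch=len(cache.dirty))          # final idle-time flush
print("application writes:", len(trace), "disk writes:", cache.disk_writes)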
18

Algorithms for distributed systems under imperfect status information

Choi, Myungwhan 01 January 1990 (has links)
A recent trend in computer system design has been to distribute tasks among multiple processors, and a wide variety of such computer systems have been proposed. The potential benefits of this trend include modular expansion, increased cost-performance, availability, and reliability. To fully exploit these potential advantages, efficient control of resources is essential. In this dissertation, we attack several important problems which arise in the distributed system environment. Central to this dissertation is the issue of imperfect information, and how to make control decisions in the face of imperfect information. While a great deal of attention has been lavished on token-ring systems over the past few years, little has been said about synthesizing protocols to satisfy complex priority requirements. We present an algorithm that partly fills this gap. It is a simple, adaptive algorithm which can ensure prescribed differential service to the various job classes. Applications include maintaining a throughput differential between job classes, integrating voice and data packets, and maintaining the mean waiting times of the various job classes in prescribed ratios. Such an algorithm is likely to be useful whenever disparate traffic classes must coexist on the same ring. We also study the effect of the decay of status information on the quality of load balancing algorithms. We study empirically the relationship between the characterizing parameters of a very simple model using the join-the-shortest-queue algorithm; this is done using regression analysis. Our results indicate that these parameters--number of queues, status update rate, and coefficient of variation of the job service time--are far from orthogonal in their impact on system performance. We have empirically obtained expressions for the sensitivity of the response time to the characterizing parameters; designers can use these expressions to determine appropriate status update rates. We also present a comparison of some simple distributed load balancing algorithms which update status information regularly with those which do so adaptively. The adaptive algorithms only broadcast status when it changes by more than a threshold. They are almost as simple to implement as algorithms which periodically exchange status information, and they provide better performance. Finally, we introduce a simple, adaptive buffer allocation algorithm and study its performance in the face of imperfect status information for the control of distributed systems in which multiple classes of customers contend for a finite buffer served by a single server. We believe that this algorithm is especially useful when the number of classes or the input loading changes dynamically, and when buffer sizes are small.
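The effect of decaying status information on join-the-shortest-queue routing can be sketched with a short simulation: the dispatcher refreshes its view of queue lengths only every few arrivals, so routing decisions are made on stale state. The arrival/service model, queue count, and update intervals below are illustrative assumptions, not the dissertation's model or its regression study.

```python
import random

def simulate_jsq(n_queues=4, arrivals=50_000, service_p=0.9, update_interval=10):
    """Join-the-shortest-queue under stale status; returns mean number in system."""
    queues = [0] * n_queues
    view = list(queues)                       # dispatcher's (possibly stale) view
    total_backlog = 0
    for t in range(arrivals):
        if t % update_interval == 0:
            view = list(queues)               # status broadcast refreshes the view
        queues[view.index(min(view))] += 1    # route to the apparently shortest queue
        for i in range(n_queues):             # each busy server completes w.p. service_p
            if queues[i] and random.random() < service_p:
                queues[i] -= 1
        total_backlog += sum(queues)
    return total_backlog / arrivals

random.seed(1)
for interval in (1, 10, 100):                 # staler status -> worse balancing
    print(interval, round(simulate_jsq(update_interval=interval), 2))
```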
19

Sharing patterns and the single cachable coherence scheme for shared memory systems

Mendelson, Abraham 01 January 1990 (has links)
This thesis presents a new cache coherence protocol for shared-bus multicache systems, and addresses the mutual relations among sharing patterns, the performance of multicache systems, and hardware-level support for synchronization primitives. First we look at the impact of the software environment on the performance of multicache systems. We present new models for predicting the ratio between the number of dead and live lines in an interleaved environment, and use these results to define and model two important classes of shared data in multicache systems: truly and falsely shared data. An optimal cache coherence protocol, termed Par-Opt, is defined based on the observation that only modifications of truly shared data have to be propagated to other caches. The new coherence protocol is presented in the second part. The protocol is termed Single Cache Data Coherence (SCCDC), and calls for keeping at most a single copy of any shared writable data item in the multicache system. The protocol incurs overhead only for accesses to truly shared data, trading this against the overhead caused for multiple reads. We show that in software environments where the amount of falsely shared data is significant, the proposed protocol outperforms all the other cache coherence protocols. For many other environments, an integration of software-based mechanisms to tag read-only shared data with the use of the SCCDC protocol can handle cache coherence efficiently. Hardware support for "critical section free" access to multi-access shared data is presented. We show that if the software guarantees to access shared data only through two hardware-based mechanisms, many algorithms can be developed to manipulate commonly used data structures such as trees, linked lists and hash tables without locking the entire data structure. When implemented for multicache architectures, these mechanisms require SCCDC-based systems for efficient implementation. The support for "critical section free" access gives the SCCDC protocol another advantage when it is used to support software environments such as databases, AI, and data retrieval applications.
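The single-copy idea behind SCCDC can be illustrated with a toy bookkeeping model: at most one cache ever holds a shared writable line, so an access from another cache migrates the line (invalidate in the old holder, install in the new one) instead of creating additional copies. This is a conceptual sketch under that one invariant, not the dissertation's full protocol, bus actions, or state machine.

```python
class SingleCopyDirectory:
    """Toy model of a single-cached-copy invariant for shared writable lines."""
    def __init__(self, n_caches):
        self.owner = {}                            # line -> cache holding the only copy
        self.caches = [set() for _ in range(n_caches)]
        self.transfers = 0

    def access(self, cache_id, line):
        holder = self.owner.get(line)
        if holder is not None and holder != cache_id:
            self.caches[holder].discard(line)      # invalidate the previous single copy
            self.transfers += 1                    # cache-to-cache migration
        self.caches[cache_id].add(line)
        self.owner[line] = cache_id

bus = SingleCopyDirectory(n_caches=2)
for cache_id, line in [(0, 'X'), (0, 'X'), (1, 'X'), (1, 'X'), (0, 'X')]:
    bus.access(cache_id, line)
print("cache-to-cache transfers:", bus.transfers)  # incurred only on true sharing
```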
20

Mapping of algorithms onto programmable data-driven arrays

Mendelson, Bilha 01 January 1991 (has links)
Control-driven arrays (e.g., systolic arrays) provide high levels of parallelism and pipelining for inherently regular computations. Data-driven arrays can provide the same for algorithms with no internal regularity. The purpose of this research is to establish a method for speeding up an algorithm by mapping and executing it on a data-driven array. The array being used is a homogeneous, hexagonal, data-driven processor array. Mapping a general algorithm, given in the data flow language SISAL, consists of translating the algorithm to a data flow graph (DFG) and assigning every node in the DFG to a processing element (PE) in the array. This research aims to find an efficient mapping that minimizes area and maximizes the performance of the given algorithm, or finds a tradeoff between the two.
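A toy greedy placement conveys the flavor of the mapping problem: DFG nodes are assigned, in topological order, to hexagonal cells near their already-placed producers so that data travels short distances. The axial hex coordinates, the greedy heuristic, and the fallback placement are illustrative assumptions; the dissertation's mapping method and its area/performance objective are more elaborate.

```python
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_distance(a, b):
    """Distance between two cells in axial hexagonal coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def place(dfg_edges, nodes):
    """Greedily place DFG nodes (in topological order) near their producers."""
    placement, used = {}, set()
    for n in nodes:
        preds = [placement[p] for p, c in dfg_edges if c == n and p in placement]
        candidates = {(p[0] + d[0], p[1] + d[1]) for p in preds for d in HEX_NEIGHBORS}
        candidates = [c for c in candidates if c not in used]
        if not candidates:                       # source node, or neighborhood full
            candidates = [(len(used), 0)] if (0, 0) in used else [(0, 0)]
        best = min(candidates, key=lambda c: sum(hex_distance(c, p) for p in preds))
        placement[n] = best
        used.add(best)
    return placement

# a, b feed c; c feeds d
edges = [('a', 'c'), ('b', 'c'), ('c', 'd')]
print(place(edges, ['a', 'b', 'c', 'd']))
```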
