  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Optimization and enhancement strategies for data flow systems

Dunkelman, Laurence William. January 1984 (has links)
No description available.
72

Analytic modelling of agent-based network routing algorithms

Costa, Andre. January 2002 (has links) (PDF)
"November 4, 2002." Includes bibliographical references (leaves 180-814). Applies analytic modelling techniques to the study of agent-based routing algorithms.
73

Analytic modelling of agent-based network routing algorithms.

Costa, Andre January 2002 (has links)
Interest in adaptive and distributed systems for routing control in networks has led to the development of a new class of algorithms, which is inspired by the shortest path finding behaviours observed in biological ant colonies. This class utilizes ant-like agents, which autonomously traverse the network and collectively construct a distributed routing policy. Agent-based routing algorithms belonging to this class do not require a complete model of the network, and are able to adapt autonomously to network changes in dynamic and unpredictable environments. Previous studies of these algorithms have been carried out exclusively via the use of simulation-based models. In this thesis, we apply analytic modelling techniques to the study of agent-based routing algorithms. Our aim is to broaden the research in this field, as well as to gain a greater theoretical understanding of some fundamental properties of this class of algorithms. / Thesis (Ph.D.)--School of Applied Mathematics, 2002.
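The abstract above describes ant-like agents that traverse the network and collectively build a distributed routing policy. A minimal sketch of that idea, with hypothetical names (the thesis analyses such algorithms; this is not its model): each node keeps a per-destination pheromone table of next-hop probabilities, forward ants sample next hops from it, and backward ants reinforce the hops that worked.

```python
import random

class Node:
    """One network node with an ant-style probabilistic routing table."""

    def __init__(self, name, neighbours):
        self.name = name
        self.neighbours = list(neighbours)
        self.pheromone = {}  # destination -> {neighbour: probability}

    def table_for(self, dest):
        # Start with a uniform routing policy for an unseen destination.
        if dest not in self.pheromone:
            p = 1.0 / len(self.neighbours)
            self.pheromone[dest] = {n: p for n in self.neighbours}
        return self.pheromone[dest]

    def choose_next_hop(self, dest):
        # Forward ant: sample a next hop according to the pheromone table.
        table = self.table_for(dest)
        hops, probs = zip(*table.items())
        return random.choices(hops, weights=probs)[0]

    def reinforce(self, dest, hop, r=0.1):
        # Backward ant: shift probability mass toward the rewarded hop,
        # keeping the table a valid probability distribution.
        table = self.table_for(dest)
        for n in table:
            if n == hop:
                table[n] = table[n] + r * (1.0 - table[n])
            else:
                table[n] = table[n] * (1.0 - r)
```

Routing adapts because reinforcement is driven by trip outcomes: hops on short round trips are rewarded more often, so probability mass drifts toward them without any global network model.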
74

Solving multiparty private matching problems using Bloom-filters

Lai, Ka-ying. January 2006 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2007. / Title proper from title frame. Also available in printed format.
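The record above carries no abstract, but the title names the core primitive. A minimal Bloom filter sketch for flavour, illustrative only: a real multiparty private matching protocol wraps this primitive in cryptographic machinery that the sketch entirely omits, and all names here are invented.

```python
import hashlib

class BloomFilter:
    """Bit array with k hash functions; no false negatives, rare false positives."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indexes(self, item):
        # Derive k indexes by salting a single hash function.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

def candidate_matches(my_items, their_filter):
    # Items whose bits are all set in the other party's filter are
    # candidate matches; false positives are possible, misses are not.
    return [x for x in my_items if their_filter.might_contain(x)]
```

The appeal for private matching is that a filter reveals only bit positions, not the items themselves, while still letting another party test membership of its own items.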
75

The job shop problem with constraints: modelling and optimization

Caumond, Anthony 18 December 2006 (has links) (PDF)
The best-performing optimization algorithms for the job shop problem rely on dedicated methods and tools, such as the disjunctive graph model and neighbourhoods based on that graph. To apply these methods to real-world problems, we had to enrich the job shop problem, and we therefore studied the job shop problem with time lags and the job shop problem with transport. For each of these two problems, the disjunctive graph model and its neighbourhoods were modified and adapted. For the job shop problem with time lags, we propose efficient heuristics and metaheuristics, the main difficulty being to produce a solution that satisfies all the maximum time-lag constraints. For the job shop problem with transport, we propose a linear model and a metaheuristic that both address exactly the same problem (i.e. take exactly the same constraints into account). In both cases, a disjunctive-graph model and adapted neighbourhoods are proposed. Moreover, implementing the metaheuristics for each of these problems showed us that a large part of the development effort is redundant, so we propose an object-oriented optimization framework (BCOO) whose goal is to factor out as much shared code as possible.
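The disjunctive graph model mentioned above has a simple core: once every disjunction (machine ordering decision) is oriented, the schedule is a DAG, and the makespan is the longest path through it. A minimal sketch of that evaluation step, not the thesis's framework, with invented operation names:

```python
from collections import defaultdict

def makespan(durations, edges):
    """Longest-path length in an oriented disjunctive graph.

    durations: {operation: processing time}
    edges: (u, v) pairs, both job-precedence and oriented machine edges.
    """
    succ = defaultdict(list)
    indeg = {op: 0 for op in durations}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Propagate earliest start times in topological order.
    start = {op: 0 for op in durations}
    ready = [op for op in durations if indeg[op] == 0]
    best = 0
    while ready:
        u = ready.pop()
        fin = start[u] + durations[u]
        best = max(best, fin)
        for v in succ[u]:
            start[v] = max(start[v], fin)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return best
```

Neighbourhood search methods exploit this structure by re-orienting machine edges on a longest path and re-evaluating, which is far cheaper than rescheduling from scratch.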
76

Sparsely Faceted Arrays: A Mechanism Supporting Parallel Allocation, Communication, and Garbage Collection

Brown, Jeremy Hanford 01 June 2002 (has links)
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
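The central mechanism above, lazy per-node allocation of facets, can be sketched at a very high level. This is a hypothetical software analogy, not the thesis's hardware design: an SFA names a facet on every node, but storage for a node's facet materializes only when that node first touches it.

```python
class SparselyFacetedArray:
    """Names a virtual globally-distributed array; facets allocate lazily."""

    def __init__(self, facet_size):
        self.facet_size = facet_size
        self.facets = {}  # node id -> locally allocated facet storage

    def facet(self, node):
        # Lazy allocation: nodes that never touch the SFA pay no memory.
        if node not in self.facets:
            self.facets[node] = [None] * self.facet_size
        return self.facets[node]

    def allocated_nodes(self):
        # The set of nodes a garbage collector must visit to reclaim
        # this SFA -- the bookkeeping problem the thesis's GC addresses.
        return set(self.facets)
```

The sketch also makes the GC motivation concrete: reclaiming an SFA means finding exactly the nodes that allocated facets, which is why the thesis argues for reference-counting over global tracing.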
77

Conflict detection and resolution during restructuring of XML data

Teterovskaya, Anna. January 2000 (has links) (PDF)
Thesis (M.S.)--University of Florida, 2000. / Title from first page of PDF file. Document formatted into pages; contains v, 111 p.; also contains graphics. Vita. Includes bibliographical references (p. 106-110).
78

Cache Oblivious Data Structures

Ohashi, Darin January 2001 (has links)
This thesis discusses cache oblivious data structures. These are structures which have good caching characteristics without knowing Z, the size of the cache, or L, the length of a cache line. Since the structures do not require these details for good performance, they are portable across caching systems. Another advantage of such structures is that the caching results hold for every level of cache within a multilevel cache. Two simple data structures are studied: the array used for binary search, and the linear list. As well as being cache oblivious, the structures presented in this thesis are space efficient, requiring little additional storage. We begin the discussion with a layout for a search tree within an array. This layout allows Searches to be performed in O(log n) time and in O(log n/log L) (the optimal number) cache misses. An algorithm for building this layout from a sorted array in linear time is given. One use for this layout is a heap-like implementation of the priority queue. This structure allows Inserts, Heapifies and ExtractMaxes in O(log n) time and O(log n/log L) cache misses. A priority queue using this layout can be built from an unsorted array in linear time. Besides the n spaces required to hold the data, this structure uses a constant amount of additional storage. The cache oblivious linear list allows scans of the list taking Theta(n) time and incurring Theta(n/L) (the optimal number) cache misses. The running time of insertions and deletions is not constant, but it is sub-polynomial. This structure requires εn additional storage, where ε is any constant greater than zero.
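To give the "search tree within an array" idea some flavour, here is one simple implicit layout, the breadth-first (Eytzinger) arrangement, where the children of slot i sit at 2i+1 and 2i+2. This is illustrative only: the thesis's own layout differs and is the one that achieves the optimal O(log n/log L) miss bound.

```python
def eytzinger(sorted_vals):
    """Lay a sorted array out in BFS (heap) order via an in-order fill."""
    out = [None] * len(sorted_vals)
    it = iter(sorted_vals)

    def fill(i):
        if i < len(out):
            fill(2 * i + 1)     # left subtree gets the smaller keys
            out[i] = next(it)
            fill(2 * i + 2)     # right subtree gets the larger keys
    fill(0)
    return out

def search(layout, key):
    """Descend the implicit tree; nearby levels share cache lines."""
    i = 0
    while i < len(layout):
        if layout[i] == key:
            return True
        i = 2 * i + 1 if key < layout[i] else 2 * i + 2
    return False
```

The cache-friendliness comes from locality: the top few levels of the implicit tree occupy a handful of contiguous cache lines, so the early comparisons of every search hit the same lines regardless of L.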
80

Multi-writer consistency conditions for shared memory objects

Shao, Cheng 15 May 2009 (has links)
Regularity is a shared memory consistency condition that has received considerable attention, notably in connection with quorum-based shared memory. Lamport's original definition of regularity assumed a single-writer model, however, and is not well defined when each shared variable may have multiple writers. In this thesis, we address this gap by formally extending the notion of regularity to a multi-writer model. We show that the extension is not trivial: while there are various ways to extend the single-writer definition, the resulting definitions differ in strength. Specifically, we give several possible definitions of regularity in the presence of multiple writers. We then present a quorum-based algorithm to implement each of the proposed definitions and prove them correct. We study the relationships between these definitions and a number of other well-known consistency conditions, and give a partial order describing the relative strengths of these consistency conditions. Finally, we provide a practical context for our results by studying the correctness of two well-known algorithms for mutual exclusion under each of our proposed consistency conditions.
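A minimal sketch of the quorum machinery such definitions are tested against, with invented names: values carry a (timestamp, writer-id) tag, writes install the tagged value at a majority of replicas, and reads return the highest-tagged value found in some majority. Which multi-writer consistency condition a scheme like this satisfies is precisely the kind of question the thesis formalizes; the sketch makes no such claim itself.

```python
import random

class Replica:
    """One storage replica holding a tagged value."""
    def __init__(self):
        self.tag, self.value = (0, 0), None   # (timestamp, writer_id)

class Register:
    """Multi-writer register over n replicas with majority quorums."""

    def __init__(self, n):
        self.replicas = [Replica() for _ in range(n)]
        self.quorum = n // 2 + 1

    def write(self, writer_id, value):
        quorum = random.sample(self.replicas, self.quorum)
        # Pick a timestamp larger than any seen in the quorum; the
        # writer id breaks ties between concurrent writers.
        ts = max(r.tag[0] for r in quorum) + 1
        for r in quorum:
            if (ts, writer_id) > r.tag:
                r.tag, r.value = (ts, writer_id), value

    def read(self):
        quorum = random.sample(self.replicas, self.quorum)
        return max(quorum, key=lambda r: r.tag).value
```

Correctness of a sequential history rests on quorum intersection: any two majorities share a replica, so a read's quorum always contains at least one replica carrying the latest completed write's tag.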
