1

Cache Oblivious Data Structures

Ohashi, Darin. January 2001.
This thesis discusses cache oblivious data structures: structures that have good caching characteristics without knowing Z, the size of the cache, or L, the length of a cache line. Because the structures do not require these parameters for good performance, they are portable across caching systems. Another advantage of such structures is that the caching results hold for every level of a multilevel cache. Two simple data structures are studied: the array used for binary search and the linear list. As well as being cache oblivious, the structures presented in this thesis are space efficient, requiring little additional storage. We begin the discussion with a layout for a search tree within an array. This layout allows searches to be performed in O(log n) time and O(log n / log L) cache misses (the optimal number). An algorithm for building this layout from a sorted array in linear time is given. One use for this layout is a heap-like implementation of a priority queue, which supports Insert, Heapify and ExtractMax in O(log n) time and O(log n / log L) cache misses. A priority queue using this layout can be built from an unsorted array in linear time; besides the n cells required to hold the data, it uses only a constant amount of additional storage. The cache oblivious linear list allows scans in Θ(n) time and Θ(n/L) cache misses (the optimal number). Insertions and deletions do not run in constant time, but their running time is sub-polynomial. This structure requires εn additional storage, where ε is any constant greater than zero.
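
The abstract states the bounds but does not reproduce the construction. As a rough illustration, the following Python sketch builds the standard recursive van Emde Boas style array layout for a complete binary search tree, which is the usual way to achieve the O(log n / log L) cache-miss bound per search; the function names, the floor split of the levels, and the demo values are assumptions of this sketch, not details taken from the thesis.

    def veb_order(height):
        """Heap-numbered nodes of a complete binary tree with `height` levels,
        listed in van Emde Boas (recursive) memory order."""
        def recurse(root, h):
            if h == 1:
                return [root]
            top_h = h // 2              # levels in the top recursive subtree
            bot_h = h - top_h           # levels in each bottom recursive subtree
            order = recurse(root, top_h)
            # Leaves of the top subtree; their children root the bottom subtrees.
            frontier = [root]
            for _ in range(top_h - 1):
                frontier = [c for n in frontier for c in (2 * n, 2 * n + 1)]
            for leaf in frontier:
                order += recurse(2 * leaf, bot_h)
                order += recurse(2 * leaf + 1, bot_h)
            return order
        return recurse(1, height)

    def heap_inorder(n_nodes):
        """Heap indices 1..n_nodes in the order an in-order traversal visits them,
        i.e. the heap position of each key of a sorted array."""
        out = []
        def walk(i):
            if i <= n_nodes:
                walk(2 * i)
                out.append(i)
                walk(2 * i + 1)
        walk(1)
        return out

    # Demo: place 15 sorted keys into the layout (15 keys -> 4 complete levels).
    keys = list(range(1, 16))
    by_heap = {p: k for p, k in zip(heap_inorder(len(keys)), keys)}
    layout = [by_heap[i] for i in veb_order(4)]
    print(layout)   # the keys in cache oblivious memory order

Within this order, any subtree spanning roughly log L consecutive levels occupies a contiguous block of about L array cells, which is why a root-to-leaf search of length log n touches only O(log n / log L) cache lines.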
2

Samoupravující seznamy / Self-organizing linear lists

Kulman, Igor. January 2011.
Self-organizing linear lists are data structures for fast searching when some of the stored elements are accessed more frequently than others, while the access probabilities of the individual elements are generally not known in advance. Efficient search is achieved using various permutation rules that keep rearranging the list so that the more frequently searched elements stay close to the front. This thesis gives an overview of the known algorithms for this problem (together with theoretical results about their complexity, where these are known) and an experimental study of their behaviour, using the author's own or freely available implementations together with software for generating input data, running the algorithms, and processing the results of the experiments.
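
The abstract refers to permutation rules in general without fixing a particular one; as a rough illustration, the Python sketch below implements two classic rules from this literature, move-to-front and transpose. The class name, the comparison-counting convention, and the example sequence are assumptions of this sketch, not details taken from the thesis.

    class SelfOrganizingList:
        """Unsorted linear list with a permutation rule applied on every hit."""

        def __init__(self, items, rule="mtf"):
            self.items = list(items)
            self.rule = rule             # "mtf" (move-to-front) or "transpose"

        def access(self, key):
            """Linear search for key; on a hit, reorganize the list according
            to the rule and return the number of comparisons performed."""
            for i, item in enumerate(self.items):
                if item == key:
                    if self.rule == "mtf":
                        # Move the accessed element to the front of the list.
                        self.items.insert(0, self.items.pop(i))
                    elif self.rule == "transpose" and i > 0:
                        # Swap the accessed element with its predecessor.
                        self.items[i - 1], self.items[i] = self.items[i], self.items[i - 1]
                    return i + 1
            raise KeyError(key)

    # Example: frequently accessed keys migrate toward the front under MTF.
    lst = SelfOrganizingList("abcdef", rule="mtf")
    for k in "ffefee":
        lst.access(k)
    print(lst.items)   # ['e', 'f', 'a', 'b', 'c', 'd']

Under move-to-front, a burst of accesses to a small set of keys pulls those keys to the head of the list, so subsequent searches scan only a short prefix; transpose is more conservative, advancing an element a single position per access, which makes it slower to adapt but more stable against one-off accesses.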
