About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
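
The listing below is built from metadata harvested from member archives. As a minimal sketch of how such records could be pulled from one archive, the following Python snippet fetches a single page of Dublin Core records over OAI-PMH and prints the titles; the protocol choice and the endpoint URL are illustrative assumptions, not details taken from the text above.

# Illustrative sketch only: the endpoint is a placeholder, and OAI-PMH is an
# assumed harvesting protocol, not one named in the About text above.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "https://repository.example.edu/oai"  # hypothetical archive endpoint

# Dublin Core namespace used inside oai_dc metadata records
DC = "{http://purl.org/dc/elements/1.1/}"

def list_titles(endpoint):
    """Fetch one ListRecords page and return the dc:title values it contains."""
    query = urllib.parse.urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urllib.request.urlopen(endpoint + "?" + query) as response:
        tree = ET.parse(response)  # parse the OAI-PMH XML response
    return [el.text for el in tree.iter(DC + "title") if el.text]

if __name__ == "__main__":
    for title in list_titles(ENDPOINT):
        print(title)

Run against a real repository endpoint, this would print the titles from the first ListRecords response; paging via resumptionToken is omitted to keep the sketch short.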
11

A cache-based prefetching memory system for mediaprocessors

Berg, Stefan Georg. January 2002
Thesis (Ph. D.)--University of Washington, 2002. Vita. Includes bibliographical references (p. 126-131).
12

Cache characterization and performance studies using locality surfaces

Sorenson, Elizabeth S. January 2005 (PDF)
Thesis (Ph. D.)--Brigham Young University. Dept. of Computer Science, 2005. Includes bibliographical references (p. 363-374).
13

Asymmetric clustering using a register cache

Morrison, Roger Allen. January 1900
Thesis (M.S.)--Oregon State University, 2006. Printout. Includes bibliographical references (leaves 19-20). Also available on the World Wide Web.
14

Hardware techniques to reduce communication costs in multiprocessors

Huh, Jaehyuk. January 1900 (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. Vita. Includes bibliographical references.
15

Visible synchronization based cache coherence

Kumar, Krishna. January 1997
Thesis (M.Comp. Sc.)--Dept. of Computer Science, Concordia University, 1997. "April 1997." Includes bibliographical references (leaves 74-77). Available also on the Internet.
16

A weblet environment to facilitate proxy caching of web processing components

Hao, Wei. January 2007
Thesis (Ph.D.)--University of Texas at Dallas, 2007. Includes vita. Includes bibliographical references (leaves 199-209).
17

Design of disk cache for high performance computing.

January 1995
by Vincent, Kwan Chi Wai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 123-127).
Contents:
  Abstract (p. i)
  Acknowledgement (p. ii)
  List of Tables (p. vii)
  List of Figures (p. viii)
  Chapter 1: Introduction (p. 1)
    1.1 I/O System (p. 2)
    1.2 Disk Cache (p. 4)
    1.3 Dissertation Outline (p. 5)
  Chapter 2: Related Work (p. 7)
    2.1 Prefetching (p. 7)
    2.2 Cache Partitioning (p. 9)
      2.2.1 Hardware Assisted Mechanism (p. 9)
      2.2.2 Software Assisted Mechanism (p. 10)
    2.3 Replacement Policy (p. 12)
    2.4 Caching Write Operation (p. 13)
    2.5 Others (p. 14)
    2.6 Summary (p. 15)
  Chapter 3: Methodology and Models (p. 17)
    3.1 Performance Measurement (p. 17)
      3.1.1 Partial Hit (p. 17)
      3.1.2 Time Model (p. 17)
    3.2 Terminology (p. 19)
      3.2.1 Transfer Block (p. 19)
      3.2.2 Multiple-sector Request (p. 19)
      3.2.3 Dynamic Block, Heading Sectors and Content Sectors (p. 20)
      3.2.4 Heading Reuse and Non-heading Reuse (p. 22)
    3.3 New Models (p. 23)
      3.3.1 Unified Cache with Always Prefetch (p. 24)
      3.3.2 Partitioned Cache: Branch Target Cache and Prefetch Buffer (p. 25)
      3.3.3 BTC + PB with Alternative Storing Sector Technique (p. 29)
      3.3.4 BTC + PB with ASST Applying to Dynamic Block (p. 34)
      3.3.5 BTC + PB with Storing Enough Head Technique (p. 35)
    3.4 Impact of Block Size (p. 38)
  Chapter 4: Trace Driven Simulation (p. 41)
    4.1 Simulation Environment (p. 41)
    4.2 Two Kinds Of Disk (p. 43)
    4.3 Control Models (p. 43)
      4.3.1 Model 1: No Cache (p. 43)
      4.3.2 Model 2: Unified Cache without Prefetch (p. 44)
      4.3.3 Model 3: Unified Cache with Prefetch on Miss (p. 44)
    4.4 Two Comparison Standards (p. 45)
    4.5 Trace Properties (p. 46)
  Chapter 5: Performance Evaluation of Common Disk (p. 54)
    5.1 The Effect Of Cache Size (p. 54)
      5.1.1 Trends of Absolute Reduction in Time (p. 55)
      5.1.2 Trends of Relative Reduction in Time (p. 55)
    5.2 The Effect Of Block Size (p. 68)
      5.2.1 Trends of Absolute Reduction in Time (p. 68)
      5.2.2 Trends of Relative Reduction in Time (p. 73)
    5.3 The Effect Of Set Associativity (p. 77)
      5.3.1 Trends of Absolute Reduction in Time (p. 77)
    5.4 The Effect Of Start-up Time C1 (p. 79)
      5.4.1 Trends of Absolute Reduction in Time (p. 80)
      5.4.2 Trends of Relative Reduction in Time (p. 80)
    5.5 The Effect Of Transfer Time C2 (p. 83)
      5.5.1 Trends of Absolute Reduction in Time (p. 83)
      5.5.2 Trends of Relative Reduction in Time (p. 83)
      5.5.3 Impact of C2=0.5 on Cache Size (p. 86)
      5.5.4 Impact of C2=0.5 on Block Size (p. 87)
    5.6 The Effect Of Prefetch Buffer Size (p. 90)
    5.7 Others (p. 93)
      5.7.1 In The Case of Very Small Cache with Large Block Size (p. 93)
      5.7.2 Comparing Performance of Model 6 and Model 7 (p. 94)
    5.8 Conclusion (p. 95)
      5.8.1 The Number of Actual Sectors Transferred between Disk and Cache (p. 95)
      5.8.2 The Efficiency of Our Models on Common Disk (p. 96)
  Chapter 6: Performance Evaluation of High Performance Disk (p. 98)
    6.1 Difference Between Common Disk And High Performance Disk (p. 98)
    6.2 The Effect Of Cache Size (p. 99)
      6.2.1 Trends of Absolute Reduction in Time (p. 99)
      6.2.2 Trends of Relative Reduction in Time (p. 99)
    6.3 The Effect Of Block Size (p. 103)
      6.3.1 Trends of Absolute Reduction in Time (p. 105)
      6.3.2 Trends of Relative Reduction in Time (p. 105)
    6.4 The Effect Of Start-up Time C1 (p. 110)
      6.4.1 Trends of Relative Reduction in Time (p. 110)
    6.5 The Effect Of Transfer Time C2 (p. 110)
      6.5.1 Trends of Relative Reduction in Time (p. 112)
      6.5.2 Impact of C2=0.5 on Cache Size (p. 112)
      6.5.3 Impact of C2=0.5 on Block Size (p. 116)
    6.6 Conclusion (p. 117)
  Chapter 7: Conclusions and Future Work (p. 119)
    7.1 Conclusions (p. 119)
    7.2 Future Work (p. 122)
  Bibliography (p. 123)
18

IPU/LTB: a method for reducing effective memory latency

Harmon, C. Reid, Jr. 01 December 2003
No description available.
19

IPU/LTB: a method for reducing effective memory latency

Harmon, C. Reid. January 2003 (PDF)
Thesis (Ph. D.)--College of Computing, Georgia Institute of Technology, 2004. Directed by Ken MacKenzie. Vita. Includes bibliographical references (leaves 135-146).
20

Hardware techniques to reduce communication costs in multiprocessors

Huh, Jaehyuk. 28 August 2008
Not available.
