1 |
High performance parallel Java with Javaparty. Nassar, Samuel. January 2008 (has links) (PDF)
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2008. / Thesis Advisor(s): Su, Weilian. "June 2008." Description based on title screen as viewed on August 26, 2008. Includes bibliographical references (p. 59-60). Also available in print.
|
2 |
The performance evaluation of workstation clusters. Melas, Panagiotis. January 2000 (has links)
No description available.
|
3 |
Design of disk cache for high performance computing. January 1995 (has links)
by Vincent, Kwan Chi Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 123-127).
Abstract --- p.i / Acknowledgement --- p.ii / List of Tables --- p.vii / List of Figures --- p.viii
Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- I/O System --- p.2 / Chapter 1.2 --- Disk Cache --- p.4 / Chapter 1.3 --- Dissertation Outline --- p.5
Chapter 2 --- Related Work --- p.7 / Chapter 2.1 --- Prefetching --- p.7 / Chapter 2.2 --- Cache Partitioning --- p.9 / Chapter 2.2.1 --- Hardware Assisted Mechanism --- p.9 / Chapter 2.2.2 --- Software Assisted Mechanism --- p.10 / Chapter 2.3 --- Replacement Policy --- p.12 / Chapter 2.4 --- Caching Write Operation --- p.13 / Chapter 2.5 --- Others --- p.14 / Chapter 2.6 --- Summary --- p.15
Chapter 3 --- Methodology and Models --- p.17 / Chapter 3.1 --- Performance Measurement --- p.17 / Chapter 3.1.1 --- Partial Hit --- p.17 / Chapter 3.1.2 --- Time Model --- p.17 / Chapter 3.2 --- Terminology --- p.19 / Chapter 3.2.1 --- Transfer Block --- p.19 / Chapter 3.2.2 --- Multiple-sector Request --- p.19 / Chapter 3.2.3 --- "Dynamic Block, Heading Sectors and Content Sectors" --- p.20 / Chapter 3.2.4 --- Heading Reuse and Non-heading Reuse --- p.22 / Chapter 3.3 --- New Models --- p.23 / Chapter 3.3.1 --- Unified Cache with Always Prefetch --- p.24 / Chapter 3.3.2 --- Partitioned Cache: Branch Target Cache and Prefetch Buffer --- p.25 / Chapter 3.3.3 --- BTC + PB with Alternative Storing Sector Technique --- p.29 / Chapter 3.3.4 --- BTC + PB with ASST Applying to Dynamic Block --- p.34 / Chapter 3.3.5 --- BTC + PB with Storing Enough Head Technique --- p.35 / Chapter 3.4 --- Impact of Block Size --- p.38
Chapter 4 --- Trace Driven Simulation --- p.41 / Chapter 4.1 --- Simulation Environment --- p.41 / Chapter 4.2 --- Two Kinds Of Disk --- p.43 / Chapter 4.3 --- Control Models --- p.43 / Chapter 4.3.1 --- Model 1: No Cache --- p.43 / Chapter 4.3.2 --- Model 2: Unified Cache without Prefetch --- p.44 / Chapter 4.3.3 --- Model 3: Unified Cache with Prefetch on Miss --- p.44 / Chapter 4.4 --- Two Comparison Standards --- p.45 / Chapter 4.5 --- Trace Properties --- p.46
Chapter 5 --- Performance Evaluation of Common Disk --- p.54 / Chapter 5.1 --- The Effect Of Cache Size --- p.54 / Chapter 5.1.1 --- Trends of Absolute Reduction in Time --- p.55 / Chapter 5.1.2 --- Trends of Relative Reduction in Time --- p.55 / Chapter 5.2 --- The Effect Of Block Size --- p.68 / Chapter 5.2.1 --- Trends of Absolute Reduction in Time --- p.68 / Chapter 5.2.2 --- Trends of Relative Reduction in Time --- p.73 / Chapter 5.3 --- The Effect Of Set Associativity --- p.77 / Chapter 5.3.1 --- Trends of Absolute Reduction in Time --- p.77 / Chapter 5.4 --- The Effect Of Start-up Time C1 --- p.79 / Chapter 5.4.1 --- Trends of Absolute Reduction in Time --- p.80 / Chapter 5.4.2 --- Trends of Relative Reduction in Time --- p.80 / Chapter 5.5 --- The Effect Of Transfer Time C2 --- p.83 / Chapter 5.5.1 --- Trends of Absolute Reduction in Time --- p.83 / Chapter 5.5.2 --- Trends of Relative Reduction in Time --- p.83 / Chapter 5.5.3 --- Impact of C2=0.5 on Cache Size --- p.86 / Chapter 5.5.4 --- Impact of C2=0.5 on Block Size --- p.87 / Chapter 5.6 --- The Effect Of Prefetch Buffer Size --- p.90 / Chapter 5.7 --- Others --- p.93 / Chapter 5.7.1 --- In The Case of Very Small Cache with Large Block Size --- p.93 / Chapter 5.7.2 --- Comparing Performance of Model 6 and Model 7 --- p.94 / Chapter 5.8 --- Conclusion --- p.95 / Chapter 5.8.1 --- The Number of Actual Sectors Transferred between Disk and Cache --- p.95 / Chapter 5.8.2 --- The Efficiency of Our Models on Common Disk --- p.96
Chapter 6 --- Performance Evaluation of High Performance Disk --- p.98 / Chapter 6.1 --- Difference Between Common Disk And High Performance Disk --- p.98 / Chapter 6.2 --- The Effect Of Cache Size --- p.99 / Chapter 6.2.1 --- Trends of Absolute Reduction in Time --- p.99 / Chapter 6.2.2 --- Trends of Relative Reduction in Time --- p.99 / Chapter 6.3 --- The Effect Of Block Size --- p.103 / Chapter 6.3.1 --- Trends of Absolute Reduction in Time --- p.105 / Chapter 6.3.2 --- Trends of Relative Reduction in Time --- p.105 / Chapter 6.4 --- The Effect Of Start-up Time C1 --- p.110 / Chapter 6.4.1 --- Trends of Relative Reduction in Time --- p.110 / Chapter 6.5 --- The Effect Of Transfer Time C2 --- p.110 / Chapter 6.5.1 --- Trends of Relative Reduction in Time --- p.112 / Chapter 6.5.2 --- Impact of C2=0.5 on Cache Size --- p.112 / Chapter 6.5.3 --- Impact of C2=0.5 on Block Size --- p.116 / Chapter 6.6 --- Conclusion --- p.117
Chapter 7 --- Conclusions and Future Work --- p.119 / Chapter 7.1 --- Conclusions --- p.119 / Chapter 7.2 --- Future Work --- p.122
Bibliography --- p.123
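
The table of contents above names the thesis's control models, including a baseline "Unified Cache with Prefetch on Miss" (Model 3) against which the partitioned branch-target-cache and prefetch-buffer designs are compared. As a rough, hedged illustration of what a prefetch-on-miss block cache does (this is not the thesis's trace-driven simulator; the class name, method names, LRU policy, and parameters below are assumptions introduced purely for illustration, written in Java), a minimal sketch might look like this:

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch only: a toy block cache with LRU replacement and
 * prefetch-on-miss, loosely in the spirit of the "Unified Cache with
 * Prefetch on Miss" control model named above. All names here are
 * hypothetical, not taken from the thesis.
 */
public class PrefetchOnMissCache {
    private final LinkedHashMap<Long, Boolean> blocks;
    private long hits, misses;

    public PrefetchOnMissCache(int capacityInBlocks) {
        // An access-ordered LinkedHashMap gives LRU eviction almost for free.
        this.blocks = new LinkedHashMap<Long, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Boolean> eldest) {
                return size() > capacityInBlocks;
            }
        };
    }

    /** Simulate a read of one disk block; on a miss, also prefetch the next block. */
    public void read(long blockNumber) {
        if (blocks.get(blockNumber) != null) {   // get() refreshes the LRU order on a hit
            hits++;
        } else {
            misses++;
            blocks.put(blockNumber, Boolean.TRUE);             // fetch the missed block
            blocks.putIfAbsent(blockNumber + 1, Boolean.TRUE); // prefetch its successor
        }
    }

    public double hitRatio() {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }

    public static void main(String[] args) {
        PrefetchOnMissCache cache = new PrefetchOnMissCache(64);
        for (long b = 0; b < 1_000; b++) cache.read(b); // a purely sequential toy trace
        System.out.printf("hit ratio = %.2f%n", cache.hitRatio());
    }
}

On the purely sequential toy trace in main, about half the reads hit because each miss drags the following block into the cache; isolating that kind of effect on real traces is what the comparison chapters listed above set out to do.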
|
4 |
Energy-efficient resource management for high-performance computing platforms. Zong, Ziliang. Qin, Xiao, January 2008 (has links) (PDF)
Thesis (Ph. D.)--Auburn University, 2008. / Abstract. Includes bibliographical references (p. 127-134).
|
5 |
The Lagniappe programming environment. Riché, Taylor Louis, 1978- 31 August 2012
Multicore, multithreaded processors are rapidly becoming the platform of choice for designing high-throughput request-processing applications. We refer to this class of modern parallel architectures as multi-* systems. In this dissertation, we describe the design and implementation of Lagniappe, a programming environment that simplifies the development of portable, high-throughput request-processing applications on multi-* systems. Lagniappe makes the following four key contributions: First, Lagniappe defines and uses a unique hybrid programming model for this domain that separates the concerns of writing applications for uni-processor, single-threaded execution platforms (single-* systems) from the concerns necessary to execute efficiently on a multi-* system. We provide separate tools to the programmer to address each set of concerns. Second, we present meta-models of applications and multi-* systems that identify the necessary entities for reasoning about the application domain and multi-* platforms. Third, we design and implement a platform-independent mechanism called the load-distributing channel that factors out the key functionality required for moving an application from a single-* architecture to a multi-* one. Finally, we implement a platform-independent adaptation framework that derives custom adaptation policies from application and system characteristics and adjusts resource allocations as the workload changes. Furthermore, applications written in the Lagniappe programming environment are portable; the programming model separates the concerns of application programming from those of system programming. We implement Lagniappe on a cluster of servers, each with multiple multicore processors. We demonstrate the effectiveness of Lagniappe by implementing several stateful request-processing applications and showing their performance on our multi-* system. / text
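
The load-distributing channel described in this abstract is, at its core, a mechanism that lets a handler written for a single-* system run on a multi-* one by replicating it and spreading incoming requests across the replicas. The sketch below is only a hedged illustration of that idea (the dissertation does not publish this API, and Java is used here purely for illustration); the class name, the flow-key hashing policy, and the one-single-threaded-executor-per-replica design are all assumptions introduced here so that requests belonging to the same flow reach the same replica and per-flow state stays single-threaded.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

/**
 * Hypothetical sketch of a load-distributing channel: the application-side
 * handler is written as if it ran on one core; the channel replicates it
 * across worker threads and routes each request by a flow key so that all
 * requests of a flow reach the same replica. Not Lagniappe's actual API.
 */
public final class LoadDistributingChannel<R> implements AutoCloseable {
    private final List<ExecutorService> workers = new ArrayList<>();
    private final Consumer<R> handler;

    public LoadDistributingChannel(int replicas, Consumer<R> singleThreadedHandler) {
        this.handler = singleThreadedHandler;
        for (int i = 0; i < replicas; i++) {
            workers.add(Executors.newSingleThreadExecutor());
        }
    }

    /** Route a request to one replica, chosen by the flow key's hash. */
    public void submit(Object flowKey, R request) {
        int idx = Math.floorMod(flowKey.hashCode(), workers.size());
        workers.get(idx).submit(() -> handler.accept(request));
    }

    @Override
    public void close() {
        workers.forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) throws Exception {
        // Toy usage: each replica handles its share of flows on its own thread.
        try (LoadDistributingChannel<String> channel =
                 new LoadDistributingChannel<>(4, req -> System.out.println(
                     Thread.currentThread().getName() + " handled " + req))) {
            for (int i = 0; i < 8; i++) {
                channel.submit("flow-" + (i % 3), "request-" + i);
            }
            Thread.sleep(200); // crude wait so the demo output appears before shutdown
        }
    }
}

Routing by flow key rather than round-robin is one simple way to keep the stateful request-processing applications the abstract mentions correct without adding locks inside the handler; the actual channel's policies and adaptation hooks are, per the abstract, derived from application and system characteristics.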
|
6 |
Distributed selective re-execution for EDGE architectures. Desikan, Rajagopalan. 28 August 2008
Not available / text
|
7 |
The Lagniappe programming environment. Riché, Taylor Louis, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
|
8 |
Practical analysis of framework-intensive applications. Dufour, Bruno, January 2010 (has links)
Thesis (Ph. D.)--Rutgers University, 2010. / "Graduate Program in Computer Science." Includes bibliographical references (p. 93-97).
|
9 |
Distributed selective re-execution for EDGE architectures. Desikan, Rajagopalan. January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2005. / Vita. Includes bibliographical references. Also available from UMI.
|
10 |
The implementation of a hardware accelerator for the full-wave analysis of electronic circuits. Bodnar, Michael Richard. January 2007 (has links)
Thesis (M.S.E.C.E.)--University of Delaware, 2007. / Principal faculty advisor: Dennis W. Prather, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
|