81 |
Hardware/software deadlock avoidance for multiprocessor multiresource system-on-a-chip /Lee, Jaehwan. January 2004 (has links) (PDF)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2005. / Panagiotis Manolios, Committee Member ; Douglas M. Blough, Committee Member ; Vincent John Mooney III, Committee Chair ; William D. Hunt, Committee Member ; Sung Kyu Lim, Committee Member. Vita. Includes bibliographical references.
|
82 |
The doubly-linked list protocol family for distributed shared memory multiprocessor systems /Lau, Chung-kwok, Albert. January 1996 (has links)
Thesis (M. Phil.)--University of Hong Kong, 1996. / Includes bibliographical references (leaves 112-114) and index.
|
83 |
PROPEL: power & area-efficient, scalable opto-electronic network-on-chip /Morris, Randy W. January 2009 (has links)
Thesis (M.S.)--Ohio University, June, 2009. / Title from PDF t.p. Includes bibliographical references.
|
84 |
NPSNET: integration of distributed interactive simulation (DIS) protocol for communication architecture and information interchange /Zeswitz, Steven Randall. January 1993 (has links) (PDF)
Thesis (Master of Computer Science)--Naval Postgraduate School, September 1993. / Thesis advisor(s): David R. Pratt. "September 1993." Bibliography: p. 67-69. Also available online.
|
85 |
Hardware/software deadlock avoidance for multiprocessor multiresource system-on-a-chip /Lee, Jaehwan. January 2004 (has links) (PDF)
Thesis (Ph. D.)--Georgia Institute of Technology, 2004. / Vita. Department of Electrical and Computer Engineering, Georgia Institute of Technology. Includes bibliographical references (p. 142-146).
|
86 |
Eclipse: flexible media processing in a heterogeneous multiprocessor template /Rutten, Martijn Johan. January 1900 (has links)
Academic dissertation--Universiteit van Amsterdam, 2007. / Curriculum vitae. Description based on print version record. Includes bibliographical references (p. [191]-198).
|
87 |
Directory-based Cache Coherence in SMTp Machines without Memory Overhead using Sparse Directories /Kiriwas, Anton 01 January 2004 (has links)
As computing power has increased over the past few decades, science and engineering have found more and more uses for this newfound computing power. With the advent of multiprocessor machines, we are achieving MIPS and FLOPS ratings that were previously unattainable. Distributed shared-memory machines (DSM) are quickly becoming a powerful tool for computing, and the ability to build them from commodity off-the-shelf parts would be a great benefit to computing in general. In the paper "SMTp: An Architecture for Next-generation Scalable Multi-threading," Heinrich et al. present an architecture for a scalable DSM built from slightly modified machines capable of simultaneous multi-threading (SMT). In this architecture, SMT-based machines are connected via a high-speed network as DSMs with a directory-based cache coherence protocol. What is unique in SMTp is that the cache coherence protocol runs on the second thread of the SMT processors instead of on an expensive, specialized memory controller. The results of this work show that SMTp can sometimes be even faster than dedicated hardware. In this thesis I present the work on SMTp and extend its capabilities by leveraging Wolf-Dietrich Weber's work on sparse directories to remove the need for memory-based directory backing. Removing the directory backing store frees a large percentage of main memory for use by the system while having only a minor impact on application cache miss rates and overall system throughput.
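The abstract above names two mechanisms: coherence handled by a spare SMT thread, and a sparse directory that tracks only blocks currently cached somewhere instead of keeping a full directory entry per memory block. A minimal C sketch of the sparse-directory idea follows; the set/way sizes, field names, and placeholder eviction policy are illustrative assumptions, not details taken from the thesis or from Weber's design.

```c
/*
 * Minimal sketch of a sparse directory for directory-based cache
 * coherence. Sizes, names, and the eviction policy are illustrative
 * assumptions only.
 */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define DIR_SETS 1024            /* number of directory sets (assumed) */
#define DIR_WAYS 4               /* associativity (assumed)            */

typedef struct {
    bool     valid;
    uint64_t block_addr;         /* memory block this entry tracks     */
    uint64_t presence;           /* bitmask of sharer nodes (<= 64)    */
    bool     dirty;              /* one owner holds the line modified  */
} dir_entry_t;

static dir_entry_t directory[DIR_SETS][DIR_WAYS];

/* Find or allocate the entry tracking block_addr; a miss may force an
 * eviction, which in a real protocol would trigger invalidations to
 * every node whose presence bit is set in the victim entry. */
static dir_entry_t *dir_lookup(uint64_t block_addr)
{
    unsigned set = (unsigned)(block_addr % DIR_SETS);

    for (int w = 0; w < DIR_WAYS; w++)
        if (directory[set][w].valid &&
            directory[set][w].block_addr == block_addr)
            return &directory[set][w];

    /* Miss: reuse an invalid way, or evict way 0 (placeholder policy). */
    dir_entry_t *victim = &directory[set][0];
    for (int w = 0; w < DIR_WAYS; w++)
        if (!directory[set][w].valid) { victim = &directory[set][w]; break; }

    /* Eviction of a valid victim would send invalidations here. */
    memset(victim, 0, sizeof(*victim));
    victim->valid = true;
    victim->block_addr = block_addr;
    return victim;
}

/* Record that node_id now caches block_addr (a read miss reaching home). */
void dir_add_sharer(uint64_t block_addr, unsigned node_id)
{
    dir_entry_t *e = dir_lookup(block_addr);
    e->presence |= 1ULL << node_id;
}
```

Because the directory holds far fewer entries than there are memory blocks, it no longer needs a backing store in main memory; the trade-off is occasional evictions of still-shared lines, which is the "minor impact on cache miss rate" the abstract refers to.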
|
88 |
A modular approach to fault-tolerant binary tree architectures /Hassan, Abu S.M. (Abu Saleem Mahmudul). January 1984 (has links)
No description available.
|
89 |
Resource Banking: An Energy-efficient, Run-time Adaptive Processor Design Technique /Staples, Jacob 01 January 2011 (has links)
From the earliest and simplest scalar computation engines to modern superscalar out-of-order processors, the evolution of computational machinery during the past century has largely been driven by a single goal: performance. In today's world of cheap, billion-plus-transistor processors and an exploding market in mobile computing, a design landscape has emerged where energy efficiency, arguably more than any other single metric, determines the viability of a processor for a given application. The historical emphasis on performance has left modern processors bloated and over-provisioned for everyday tasks in the hope that some performance improvement will be observed during computationally intensive periods. This work explores an energy-efficient processor design technique that ensures even a highly over-provisioned out-of-order processor has only as many of its computational resources active as it requires for efficient computation at any given time. Specifically, this paper examines the feasibility of a dynamically banked register file and reorder buffer with variable banking policies that enable unused rename registers or reorder buffer entries to be voltage gated (turned off) during execution to save power. The impact of bank placement, turn-off and turn-on policies, and rail stabilization latencies is explored for high-performance desktop and server designs as well as low-power mobile processors.
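Since the abstract describes a concrete mechanism, a banked register file whose idle banks are voltage gated off and re-enabled after a rail-stabilization delay, a small C sketch of such a run-time banking policy is given below. The bank counts, threshold, and latency value are illustrative assumptions, not figures from the thesis.

```c
/*
 * Toy model of run-time resource banking: a physical register file
 * split into banks, where a bank is voltage gated (turned off) when
 * demand is low and re-enabled, after a rail-stabilization delay,
 * when pressure rises. All constants are assumed for illustration.
 */
#define NUM_BANKS        8
#define REGS_PER_BANK   16
#define TURN_ON_LATENCY  3      /* cycles for rail stabilization (assumed) */

typedef struct {
    int powered;                /* 1 = on, 0 = gated off      */
    int wake_countdown;         /* cycles until usable again  */
    int regs_in_use;
} bank_t;

static bank_t banks[NUM_BANKS];

/* Simple policy: keep just enough banks powered to cover current demand
 * plus one spare bank of headroom. */
void rebalance_banks(int live_registers)
{
    int needed = (live_registers + REGS_PER_BANK - 1) / REGS_PER_BANK + 1;
    if (needed > NUM_BANKS) needed = NUM_BANKS;

    for (int b = 0; b < NUM_BANKS; b++) {
        if (b < needed && !banks[b].powered) {
            banks[b].powered = 1;
            banks[b].wake_countdown = TURN_ON_LATENCY;  /* rail settles */
        } else if (b >= needed && banks[b].powered && banks[b].regs_in_use == 0) {
            banks[b].powered = 0;                       /* gate it off  */
        }
    }
}

/* Advance one cycle: a bank that was just powered on becomes usable only
 * after its stabilization latency has elapsed. */
void tick(void)
{
    for (int b = 0; b < NUM_BANKS; b++)
        if (banks[b].powered && banks[b].wake_countdown > 0)
            banks[b].wake_countdown--;
}
```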
|
90 |
Implementation of a Parallel Ynet Architecture /LeBlanc, Julie Nadeau 01 January 1987 (has links) (PDF)
A simulation of an alternate implementation of a redundant busing network based on the Teradata Ynet architecture is presented. An overview of the Teradata DBC/1012 database parallel processing computer, including the Ynet, an active logic busing network, is given. Other multiprocessor busing networks are examined and compared to the standard Ynet and the alternate Ynet.
In the standard Ynet system, two networks, called Ynets, process message packets concurrently. When one of the Ynet paths fails, the system is reset. The remaining Ynet path restarts using the previously interrupted packets and processing continues without the aid of the failed Ynet. In the implementation presented here, the two busing networks process the message packets in parallel. Now, when one of the Ynet paths fails, the other continues processing the packets without interruption. This implementation can be referred to as a parallel Ynet.
The advantages and disadvantages of the parallel Ynet are discussed and suggestions for further research are given. Listings and sample outputs are included in the appendices.
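Because the abstract contrasts the standard failover scheme (reset and restart the interrupted packets on a path failure) with the parallel scheme (both paths carry every packet, so a failure is absorbed without interruption), a toy C sketch of the parallel behavior is included below. The structure and function names are illustrative assumptions, not taken from the simulation described in the thesis.

```c
/*
 * Toy sketch of the parallel-Ynet idea: every packet is sent on both
 * redundant paths, so losing one path never interrupts delivery and
 * never requires a reset or restart. Names are assumed for illustration.
 */
#include <stdbool.h>

typedef struct { int id; /* payload omitted */ } packet_t;

typedef struct {
    bool alive;
    int  delivered;
} ynet_path_t;

static ynet_path_t path[2] = { { true, 0 }, { true, 0 } };

/* Parallel Ynet: send the packet on both paths; delivery succeeds as
 * long as at least one path is still alive. */
bool parallel_ynet_send(const packet_t *pkt)
{
    (void)pkt;                       /* payload handling omitted */
    bool delivered = false;
    for (int p = 0; p < 2; p++) {
        if (path[p].alive) {
            path[p].delivered++;     /* model "packet traversed path p" */
            delivered = true;
        }
    }
    return delivered;                /* false only if both paths failed */
}

/* Fault injection helper: mark one path as failed; subsequent sends
 * continue uninterrupted on the remaining path. */
void fail_path(int p)
{
    if (p == 0 || p == 1)
        path[p].alive = false;
}
```

In the standard scheme modeled against this, a call to fail_path would instead trigger a system reset and a replay of the interrupted packets on the surviving path.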
|