41 |
A Task Manager for a Multiprocessor Computer System / Ingraham, Peter J. 01 January 1986 (has links) (PDF)
The advent of the 32-bit microprocessor has sparked the manufacture of a number of 32-bit single-board computers (SBCs). These SBCs rival the performance of minicomputers at a fraction of the cost, and a number of them conform to standard 32-bit bus structures. A 32-bit multiprocessor computer system can be created by connecting two or more SBCs on a 32-bit bus. Such a computer system has the potential of providing many times the power of a single-processor computer system at a significantly lower cost.
The Multiprocessor Task Manager is designed to efficiently distribute software tasks, schedule their execution, and manage data flow between them, such that an effective and reliable multiprocessor is realized.
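The task-distribution role described in this abstract can be sketched in Python. This is an illustrative assumption, not Ingraham's actual algorithm: a greedy least-loaded policy that assigns each task to whichever SBC currently has the smallest total estimated load.

```python
import heapq

def assign_tasks(tasks, num_processors):
    """Greedily assign each task (given as a cost estimate) to the
    least-loaded processor; returns a map of processor id -> task ids."""
    # Min-heap of (current_load, processor_id) pairs
    heap = [(0, p) for p in range(num_processors)]
    heapq.heapify(heap)
    assignment = {p: [] for p in range(num_processors)}
    for task_id, cost in enumerate(tasks):
        load, p = heapq.heappop(heap)       # least-loaded processor
        assignment[p].append(task_id)
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Example: six tasks of varying cost spread over three SBCs
print(assign_tasks([5, 3, 8, 2, 4, 1], 3))
```

A real task manager would also handle inter-task data flow and rescheduling, which this sketch omits.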
|
42 |
An associative backend machine for data base management / Hurson, Alireza 01 January 1980 (has links) (PDF)
It has long been recognized that computer systems containing large data bases expend an inordinate amount of time managing resources (viz., central processing time, memory, etc.) rather than performing useful computation in response to a user's query. This is due to the adaptation of the classical machine architecture, the so-called von Neumann architecture, to a problem domain that needs a radically different machine architecture for an efficient solution. The characteristics that distinguish the computation for data base management systems are: a massive amount of data, simple repetitive non-numeric operations, and the association of a name space with the information space at a high level. Current systems meet these requirements with memory management techniques, specially designed application programs, and sophisticated address-mapping methods. This accounts for a large software overhead and the resulting semantic gap between the high-level language and the underlying machine architecture. To overcome the difficulties of von Neumann machines, Slotnick suggested the idea of hardware backend processing: distributing processing capabilities outside of the CPU and among the read/write cells. These cells act as filters which improve system performance by reducing the processing load on the CPU as well as the amount of data transported back and forth between secondary and main storage. The major contribution of this dissertation is the definition of a backend machine architecture, ASLM (Associative Search Language Machine), and the development of a query language, ASL (Associative Search Language), which is directly executed by the backend machine using built-in hardware algorithms for query processing and associative hardware for name-space resolution. The language ASL is a high-level data base language using associative principles for basic operations. The language has been defined based on the relational data model.
ASL is relationally complete and provides complete data independence. ASL provides facilities for query, insertion, deletion, and update operations on tuples of variable sizes. Moreover, the statements in ASL are represented as arithmetic-expression-like entities called set expressions. ASLM is designed based on a cellular organization, a design similar to Slotnick's idea with an important exception: in the design of ASLM, the processing units (cells) are moved into the backend machine. The general strategy in ASLM is based on a pre-search through the data file followed by execution of the operations on the explicit subfiles, which are stored in the associative memory. Generating the subrelations explicitly eliminates the so-called mark bits used in some previously designed data base machines. Moreover, it provides fast algorithms for interrelational operations such as join. ASLM is also microprogrammable, which gives more flexibility to the system. The design of ASLM differs from the majority of the data base machines based on Slotnick's idea: first, the separation of the cells from the secondary storage results in a cost-effective system in comparison to the other machines, and also eliminates any restriction on the secondary devices. Second, since the cells are independent of each other, there is no need for an interconnection network between the cells. Third, ASLM is implemented with associative memory, and the closeness between associative operations and data base operations reduces the semantic gap found in conventional systems. Fourth, ASLM is expandable to the MIMD class of machines.
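The pre-search-then-join strategy the abstract describes can be illustrated in software. The sketch below is a hedged stand-in, not ASLM's hardware: `associative_select` mimics the content-addressed pre-search that produces an explicit subrelation, and `join` then operates on the subrelation directly, with no mark bits needed. The relation names and fields are invented for the example.

```python
def associative_select(relation, **predicate):
    """Content-addressed selection: keep tuples whose named fields match
    the predicate values (a software stand-in for the associative-memory
    pre-search that ASLM performs in hardware)."""
    return [t for t in relation
            if all(t.get(k) == v for k, v in predicate.items())]

def join(r, s, key):
    """Join two explicit subrelations on a common attribute."""
    index = {}
    for t in s:
        index.setdefault(t[key], []).append(t)
    return [{**a, **b} for a in r for b in index.get(a[key], [])]

# Hypothetical relations for illustration
employees = [{"dept": 1, "name": "Ada"}, {"dept": 2, "name": "Ben"}]
depts     = [{"dept": 1, "floor": 3},   {"dept": 2, "floor": 5}]

sub = associative_select(employees, dept=1)   # explicit subrelation
print(join(sub, depts, "dept"))
```

The point of the pre-search is that the join touches only the (typically much smaller) subrelation rather than the whole data file.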
|
43 |
Design and evaluation of multimicroprocessor systems / Gupta, Amar. January 1980 (has links)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980 / Includes bibliographical references. / by Amar Gupta.
|
44 |
Process migration on multiprocessor systems / 佘啓明, Shea, Kai-ming. January 1997 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
|
45 |
Architectural support for multithreading on a 4-way multiprocessor / Kim, Gwang-Myung 10 December 1999 (has links)
Microprocessors will have more than a billion logic transistors on a single chip in the near future. Several alternatives have been suggested for obtaining the highest performance with billion-transistor chips. To achieve the highest performance possible, an on-chip multiprocessor is one promising alternative to the current superscalar microprocessor. It may execute multiple threads effectively on multiple processors in parallel if the application program is parallelized properly. This increases the utilization of the processor and provides tolerance for the latency caused by data dependencies and cache misses.
The Electronics and Telecommunications Research Institute (ETRI) in South Korea developed "RapSim", a simulator for the on-chip multiprocessor RAPTOR, which contains four SPARC microprocessor cores. To support this 4-way multiprocessor simulator, the Multithreaded Mini Operating System (MMOS) was developed by the OSU MMOS group. RapSim runs multiple threads on multiple processor cores concurrently. POSIX threads were used to build a Symmetric Multiprocessor (SMP) safe Pthreads package, called MMOS. Benchmarks must be properly parallelized by the programmer to run multiple threads across the multiple processors simultaneously. Performance simulation results show that RAPTOR can exploit thread-level parallelism effectively and offers a promising architecture for future on-chip multiprocessor designs. / Graduation date: 2000
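The programmer-driven parallelization the abstract calls for can be sketched with standard threads. This is a minimal illustration (using Python's `threading` module rather than the C Pthreads package the thesis describes): a workload is split into one chunk per simulated core and the partial results are reduced.

```python
import threading

def parallel_sum(data, num_threads=4):
    """Split a benchmark workload into one chunk per (simulated) core,
    run the chunks on separate threads, and reduce the partial sums."""
    chunk = (len(data) + num_threads - 1) // num_threads
    partial = [0] * num_threads

    def worker(i):
        # Each thread sums its own slice; slices do not overlap
        partial[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial)

print(parallel_sum(list(range(1000))))  # same result as a serial sum
```

With four SPARC cores as in RAPTOR, the four workers would run truly concurrently; the decomposition pattern is the same either way.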
|
46 |
Interface design and system impact analysis of a message-handling processor for fine-grain multithreading / Metz, David 28 April 1995 (has links)
There appears to be broad agreement that high-performance computers of the future will be Massively Parallel Architectures (MPAs), where all processors are interconnected by a high-speed network. One of the major problems with MPAs is the latency observed for remote operations. One technique to hide this latency is multithreading: whenever an instruction accesses a remote location, the processor switches to the next available thread waiting for execution. A number of architectures have been proposed to implement multithreading. One such architecture is the Threaded Abstract Machine (TAM). It supports fine-grain multithreading by an appropriate compilation strategy rather than through elaborate hardware. Experiments on TAM have already shown that fine-grain multithreading on conventional architectures can achieve reasonable performance.
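The switch-on-remote-access idea can be made concrete with a toy cycle-counting model (an illustrative assumption, not the thesis's simulator). Each thread is a string of ops where 'R' is a remote access; a blocking core stalls for the full remote latency, while a multithreading core parks the thread and runs another, overlapping the latency with useful work.

```python
from collections import deque

REMOTE = "R"  # remote access; any other character is a 1-cycle local op

def blocked_cycles(threads, latency):
    """Baseline: run threads back to back, stalling on every 'R'."""
    return sum(latency if op == REMOTE else 1
               for t in threads for op in t)

def multithreaded_cycles(threads, latency):
    """Switch-on-remote model: on 'R' the core parks the thread until
    its reply arrives and runs another ready thread instead."""
    progs = [deque(t) for t in threads]
    ready_at = [0] * len(threads)    # cycle when each thread may run
    cycle = 0
    while any(progs):
        runnable = [i for i, p in enumerate(progs)
                    if p and ready_at[i] <= cycle]
        if not runnable:             # nothing to hide behind: stall
            cycle = min(ready_at[i] for i, p in enumerate(progs) if p)
            continue
        i = runnable[0]
        if progs[i].popleft() == REMOTE:
            ready_at[i] = cycle + latency
        cycle += 1
    return cycle

work = ["LLRLL", "LLRLL"]            # two threads, one remote op each
print(blocked_cycles(work, 10), multithreaded_cycles(work, 10))
```

With enough ready threads, the remote latency is almost entirely hidden, which is exactly the effect fine-grain multithreading is after.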
However, a significant deficiency of the conventional design in the context of fine-grain program execution is that message handling is viewed as an appendix rather than as an integral, essential part of the architecture. Considering that message handling in TAM can constitute as much as one fifth to one half of total instructions executed, special effort must be made to support it in the underlying hardware. This thesis presents the design modifications required to efficiently support message handling for fine-grain parallelism on stock processors. The idea of having a separate processor is proposed and extended to reduce the overhead due to messages. A detailed hardware design is developed to establish the interface between the conventional processor and the message-handling processor. At the same time, the cycle cost required to guarantee atomicity between the two processors is minimized. However, the hardware modifications are kept to a minimum so as not to disturb the original functionality of a conventional RISC processor. Finally, the effectiveness of the proposed architecture is analyzed in terms of its impact on the system. The distribution of the workload between both processors is estimated to indicate the potential speed-up that can be achieved with a separate processor to handle messages. / Graduation date: 1995
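The potential speed-up from offloading message handling can be bounded with a simple overlap model (a toy estimate, not the thesis's analysis): if a fraction f of dynamic instructions is message handling and a separate processor runs that work fully overlapped with computation, the critical path shrinks to max(1 - f, f) of the original.

```python
def offload_speedup(msg_fraction):
    """Ideal speed-up from running the message-handling fraction of the
    instruction stream on a separate, fully overlapped processor.
    Toy model: ignores synchronization cost between the two processors."""
    f = msg_fraction
    return 1.0 / max(1.0 - f, f)

# The abstract's range: one fifth to one half of instructions
for f in (0.2, 0.5):
    print(f, offload_speedup(f))
```

At one fifth the ideal gain is modest (1.25x); at one half, where the two processors are perfectly balanced, it reaches 2x, which is why the fraction quoted in the abstract makes a dedicated message processor attractive.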
|
47 |
Accelerating Communication in On-Chip Interconnection Networks / Ahn, Minseon. May 2012
Due to the ever-shrinking feature size in CMOS process technology, it is expected that future chip multiprocessors (CMPs) will have hundreds or thousands of processing cores. To support such a massively large number of cores, packet-switched on-chip interconnection networks have become the de facto communication paradigm in CMPs. However, on-chip networks have several drawbacks, such as limited on-chip resources, increasing communication latency, and insufficient communication bandwidth.
In this dissertation, several schemes are proposed to overcome these problems by accelerating communication in on-chip interconnection networks within area and cost budgets. First, an early transition scheme for fully adaptive routing algorithms is proposed to improve network throughput. With a limited number of resources, previously proposed fully adaptive routing algorithms make low utilization of escape channels; to increase their utilization, the scheme transfers packets earlier, before the normal channels are full. Second, a pseudo-circuit scheme is proposed to reduce network latency by exploiting communication temporal locality. Reducing per-hop router delay becomes more important for communication latency reduction in larger on-chip interconnection networks. To improve communication latency, the previous arbitration information is reused to bypass switch arbitration. For further acceleration, we also propose two aggressive schemes, pseudo-circuit speculation and buffer bypassing. Third, two handshake schemes are proposed to improve network throughput for nanophotonic interconnects. Nanophotonic interconnects have been proposed to replace metal wires with optical links in on-chip interconnection networks for low latency and power consumption as well as high bandwidth. To minimize the average token waiting time of the nanophotonic interconnects, the traditional credit-based flow control is removed. The handshake schemes thus increase link utilization and enhance network throughput.
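The pseudo-circuit idea, reusing the last arbitration decision to skip switch arbitration when traffic shows temporal locality, can be sketched as a toy router model. The cycle costs below are illustrative assumptions, not the dissertation's measured numbers.

```python
class PseudoCircuitSwitch:
    """Toy model of pseudo-circuit reuse at a router crossbar: if a flit
    requests the same input->output connection that won arbitration last
    time on that input port, the connection is reused and the switch
    arbitration stage is bypassed; otherwise full arbitration is paid
    and the new winner is remembered."""
    ARBITRATE = 2   # router cycles with full switch arbitration
    BYPASS = 1      # router cycles when the pseudo-circuit is reused

    def __init__(self):
        self.last = {}   # input port -> output port of the last grant

    def route(self, in_port, out_port):
        if self.last.get(in_port) == out_port:
            return self.BYPASS           # temporal-locality hit
        self.last[in_port] = out_port    # set up a new pseudo-circuit
        return self.ARBITRATE

sw = PseudoCircuitSwitch()
# A flit stream with repeated (input, output) pairs, i.e. locality
stream = [(0, 3), (0, 3), (0, 3), (1, 2), (0, 3)]
print(sum(sw.route(i, o) for i, o in stream))
```

The more consecutive flits follow the same input-to-output path, the more per-hop delay the bypass saves, which is why the scheme targets communication temporal locality.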
|
48 |
Design and analysis of robust algorithms for fault tolerant computing / Jang, Jai Eun 04 April 1990
Graduation date: 1990
|
49 |
Hardware techniques to reduce communication costs in multiprocessors / Huh, Jaehyuk 28 August 2008
Not available / text
|
50 |
Synchronous multiprocessor realizations of shift-invariant flow graphs / Schwartz, David Aaron 08 1900
No description available.
|