11.
The super-actor machine : a hybrid dataflow/von Neumann architecture / Hum, Herbert Hing-Jing. January 1992
Emerging VLSI/ULSI technologies have created new opportunities in designing computer architectures capable of hiding the latencies and synchronization overheads associated with von Neumann-style multiprocessing. Pure dataflow architectures have been suggested as solutions, but they do not adequately address the issues of local memory latencies and fine-grain synchronization costs. In this thesis, we propose a novel hybrid dataflow/von Neumann architecture, called the Super-Actor Machine, to address the problems facing von Neumann and pure dataflow machines. This architecture uses a novel high-speed memory organization known as a register-cache to tolerate local memory latencies and decrease local memory bandwidth requirements. The register-cache is unique in that it appears as a register file to the execution unit, while from the perspective of main memory its contents are tagged as in conventional caches. Fine-grain synchronization costs are alleviated by the hybrid execution model and a loosely coupled scheduling mechanism.

A major goal of this dissertation is to characterize the performance of the Super-Actor Machine and compare it with other architectures for a class of programs typical of scientific computations. The thesis includes a review of its precursor, the McGill Dataflow Architecture, a description of the Super-Actor Execution Model, a design for the Super-Actor Machine, a description of the register-cache mechanism, compilation techniques for the Super-Actor Machine, and results from a detailed simulator. Results show that the Super-Actor Machine can tolerate local memory latencies and fine-grain synchronization overheads--the execution unit can sustain 99% throughput--if a program has adequate exposed parallelism.
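The register-cache idea in the abstract can be pictured with a small toy model. The C sketch below is only an illustration of the dual view described above, not the design from the thesis: the entry count, field names, and write-back behaviour are assumptions. The execution unit indexes entries directly, like a register file, with no tag check on that path, while the memory side sees the same entries as tagged, write-back cache lines.

/*
 * Minimal sketch of the register-cache idea described above.  All names,
 * sizes, and policies are illustrative assumptions, not the thesis design.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define RC_ENTRIES 64            /* assumed register-cache size */

typedef struct {
    uint32_t tag;                /* backing memory address (memory-side view) */
    uint32_t value;              /* operand word (register-side view) */
    bool     valid;
    bool     dirty;
} rc_entry_t;

static rc_entry_t rc[RC_ENTRIES];
static uint32_t   memory[1024];  /* toy backing store */

/* Execution-unit path: plain register-file access, no tag comparison. */
static uint32_t rc_read_reg(unsigned r)              { return rc[r].value; }
static void     rc_write_reg(unsigned r, uint32_t v) { rc[r].value = v; rc[r].dirty = true; }

/* Memory-side path: fill an entry from memory and record its tag,
 * writing back the previous occupant if it was modified. */
static void rc_fill(unsigned r, uint32_t addr)
{
    if (rc[r].valid && rc[r].dirty)
        memory[rc[r].tag] = rc[r].value;   /* write-back, as in a cache */
    rc[r].tag   = addr;
    rc[r].value = memory[addr];
    rc[r].valid = true;
    rc[r].dirty = false;
}

int main(void)
{
    memory[42] = 7;
    rc_fill(3, 42);                        /* bind memory word 42 to "register" 3 */
    rc_write_reg(3, rc_read_reg(3) + 1);   /* update through the register path */
    rc_fill(3, 100);                       /* eviction writes the updated value back */
    printf("memory[42] = %u\n", (unsigned)memory[42]);   /* prints 8 */
    return 0;
}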
12.
A computer architecture for implementation within autonomous machines / Collins, Thomas Riley. 05 1900
No description available.
13.
Design of energy-efficient application-specific instruction set processors / Glökler, Tilman. Meyr, Heinrich. January 2004
Dissertation (Technische Hochschule, Aachen, 2003), under the title: Glökler, Tilman: Design of energy-efficient application-specific instruction set processors (ASIPs).
14.
Design of North Texas PC Users Group ecommerce interface and online membership system : professional project / Steele, Jeri J. January 2006
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2006. / Title from PDF title page (viewed on Apr. 7, 2006). Includes bibliographical references.
15.
A dual-ported real memory architecture for the G-machine / Rankin, Linda J. January 1986
Thesis (M.S.)--Oregon Graduate Center, 1986.
16.
Floating-point fused multiply-add architectures / Quinnell, Eric Charles. January 1900
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
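The record carries no abstract, but the operation named in the title is standard: a fused multiply-add evaluates a*b + c with a single rounding, whereas a separate multiply followed by an add rounds twice. The sketch below uses only the C99 fma() routine from <math.h> (link with -lm on some systems) and arbitrary example operands; it illustrates the operation in general, not any particular architecture from the dissertation.

/*
 * Fused vs. separate multiply-add.  Operand values are arbitrary examples
 * chosen so that the two results differ.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0 + 0x1p-52;   /* 1 + 2^-52, the smallest double above 1 */
    double b = 1.0 - 0x1p-52;
    double c = -1.0;

    double separate = a * b + c;    /* product rounds to 1.0, so this is 0 */
    double fused    = fma(a, b, c); /* keeps the exact product a*b = 1 - 2^-104 */

    printf("separate: %.17g\n", separate);  /* 0 */
    printf("fused:    %.17g\n", fused);     /* -2^-104, about -4.93e-32 */
    return 0;
}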
17.
A comprehensive evaluation framework for system modernization : a case study using data services / Barnes, Meredith Anne. January 2011
Modernization is a solution for migrating cumbersome existing systems to a new architecture in order to improve the longevity of business processes. Three modernization approaches exist: white-box and black-box modernization are distinct from one another, while grey-box modernization is a hybrid of the two. Modernization can be utilised to create data services for a Service Oriented Architecture. Since it is unclear which modernization approach is better suited to the development of data services, a comprehensive evaluation framework is proposed to compare the white-box and black-box approaches. The framework consists of three evaluation components. Firstly, the developer effort to modernize existing code is measured with acknowledged software metrics. Secondly, the quality of the data services is measured against Quality of Service criteria identified specifically for data services. Thirdly, the effectiveness of the modernized data services is measured through usability evaluations. By combining the results of the three evaluation components, a recommended approach for the modernization of data services is identified. The comprehensive framework was successfully employed to compare the white-box and black-box modernization approaches applied to a case study. Results indicated that, had only a single evaluation component been used, the comparison might have been inconclusive. The findings of this research contribute a comprehensive evaluation framework that can be applied to compare modernization approaches and measure modernization success.
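To make the structure of the framework concrete, the sketch below scores two approaches against the three components named in the abstract. It is a minimal illustration under invented assumptions: the equal weighting, the single combined figure of merit, and every number are placeholders, not results or methodology from the case study.

/*
 * Hedged sketch of combining the three evaluation components described
 * above.  Component names follow the abstract; scale, weights, and sample
 * numbers are invented for illustration only.
 */
#include <stdio.h>

typedef struct {
    const char *name;
    double effort;     /* developer effort from software metrics (lower is better) */
    double qos;        /* Quality of Service score for the data services (higher is better) */
    double usability;  /* usability-evaluation score (higher is better) */
} approach_t;

/* Combine the components into one figure of merit; equal weights are an
 * assumption, and effort is inverted so that higher always means better. */
static double combined_score(const approach_t *a)
{
    return ((1.0 - a->effort) + a->qos + a->usability) / 3.0;
}

int main(void)
{
    approach_t approaches[] = {
        { "white-box", 0.50, 0.80, 0.75 },   /* illustrative numbers only */
        { "black-box", 0.40, 0.70, 0.70 },
    };

    for (int i = 0; i < 2; i++)
        printf("%-9s -> combined %.2f\n",
               approaches[i].name, combined_score(&approaches[i]));
    return 0;
}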
18.
The super-actor machine : a hybrid dataflow/von Neumann architecture / Hum, Herbert Hing-Jing. January 1992
No description available.
19.
Methodical Evaluation of Processing-in-Memory Alternatives / Scrbak, Marko. 05 1900
In this work, I characterized a series of potential application kernels using a set of architectural and non-architectural metrics, and performed a comparison of four alternatives for processing-in-memory (PIM) cores: ARM cores, GPGPUs, a coarse-grained reconfigurable dataflow architecture (DF-PIM), and a domain-specific architecture using a SIMD PIM engine consisting of a series of multiply-accumulate circuits (MACs). For each PIM alternative, I investigated how performance and energy efficiency change with respect to a series of system parameters, such as memory bandwidth and latency, the number of PIM cores, DVFS states, and cache architecture. In addition, I compared the PIM core choices for a subset of applications and discussed how the application characteristics correlate with the achieved performance and energy efficiency. Furthermore, I compared the PIM alternatives to a host-centric solution that uses a traditional server-class CPU core, or PIM-like cores acting as host-side accelerators instead of being part of 3D-stacked memories. Such insights expose the achievable performance limits and shortcomings of certain PIM designs and show their sensitivity to a series of system parameters (available memory bandwidth, application latency and bandwidth sensitivity, etc.). In addition, identifying the common application characteristics of PIM kernels provides an opportunity to recognize similar computation patterns in other applications and allows us to create a set of applications which can then be used as benchmarks for evaluating future PIM design alternatives.
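The sensitivity studies described above can be pictured with a simple roofline-style estimate, in which attainable throughput is the lesser of a core's peak compute rate and memory bandwidth times the kernel's arithmetic intensity. The sketch below is only such an illustration: the four core types are labelled after the alternatives in the abstract, but all peak rates, power figures, the arithmetic intensity, and the bandwidth range are invented placeholders rather than data from the dissertation.

/*
 * Roofline-style sweep over memory bandwidth for four PIM alternatives.
 * Every numeric figure is a placeholder, not data from the dissertation.
 */
#include <stdio.h>

typedef struct {
    const char *name;
    double peak_gflops;   /* assumed peak compute rate */
    double watts;         /* assumed power draw, for a rough GFLOPS/W figure */
} pim_core_t;

static double attainable(double peak_gflops, double bw_gbs, double intensity)
{
    double bw_bound = bw_gbs * intensity;          /* memory-bound ceiling */
    return bw_bound < peak_gflops ? bw_bound : peak_gflops;
}

int main(void)
{
    const pim_core_t cores[] = {                   /* placeholder figures */
        { "ARM cores", 50.0,  5.0 },
        { "GPGPU",     400.0, 40.0 },
        { "DF-PIM",    200.0, 10.0 },
        { "MAC SIMD",  300.0, 8.0 },
    };
    const double intensity = 0.25;                 /* flops per byte, assumed kernel */

    for (double bw = 80.0; bw <= 640.0; bw *= 2.0) {   /* sweep memory bandwidth in GB/s */
        printf("bandwidth %6.1f GB/s:\n", bw);
        for (int i = 0; i < 4; i++) {
            double perf = attainable(cores[i].peak_gflops, bw, intensity);
            printf("  %-9s %7.1f GFLOPS, %5.1f GFLOPS/W\n",
                   cores[i].name, perf, perf / cores[i].watts);
        }
    }
    return 0;
}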
20.
The design proposal of a 16-bit microprogrammed stack machine / Hush, Don Rhea. January 2011
Typescript (photocopy). / Digitized by Kansas Correctional Industries.