1

The super-actor machine : a hybrid dataflow/von Neumann architecture

Hum, Herbert Hing-Jing January 1992 (has links)
Emerging VLSI/ULSI technologies have created new opportunities in designing computer architectures capable of hiding the latencies and synchronization overheads associated with von Neumann-style multiprocessing. Pure dataflow architectures have been suggested as solutions, but they do not adequately address the issues of local memory latencies and fine-grain synchronization costs. In this thesis, we propose a novel hybrid dataflow/von Neumann architecture, called the Super-Actor Machine, to address the problems facing von Neumann and pure dataflow machines. This architecture uses a novel high-speed memory organization known as a register-cache to tolerate local memory latencies and decrease local memory bandwidth requirements. The register-cache is unique in that it appears as a register file to the execution unit, while from the perspective of main memory, its contents are tagged as in conventional caches. Fine-grain synchronization costs are alleviated by the hybrid execution model and a loosely-coupled scheduling mechanism.

A major goal of this dissertation is to characterize the performance of the Super-Actor Machine and compare it with other architectures for a class of programs typical of scientific computations. The thesis includes a review of its precursor, the McGill Dataflow Architecture; a description of the Super-Actor Execution Model; a design for a Super-Actor Machine; a description of the register-cache mechanism; compilation techniques for the Super-Actor Machine; and results from a detailed simulator. Results show that the Super-Actor Machine can tolerate local memory latencies and fine-grain synchronization overheads--the execution unit can sustain 99% throughput--if a program has adequate exposed parallelism.
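The register-cache idea in the abstract above can be sketched in a few lines: the execution unit sees a plain register file, while each entry also carries a main-memory tag, as in a conventional cache. This is a minimal illustrative model, not the thesis's design; the class and method names are hypothetical.

```python
# Hypothetical sketch of a register-cache: register-indexed for the
# execution unit, address-tagged toward main memory. Sizes and names
# are illustrative, not taken from the Super-Actor Machine design.

class RegisterCache:
    def __init__(self, num_regs, memory):
        self.memory = memory              # backing main memory (dict: addr -> value)
        self.entries = [None] * num_regs  # each entry: [tag (addr), value, dirty]

    def bind(self, reg, addr):
        """Fill a register slot from main memory, tagging it with its address."""
        self.entries[reg] = [addr, self.memory.get(addr, 0), False]

    def read(self, reg):
        """Execution-unit view: an ordinary register read."""
        return self.entries[reg][1]

    def write(self, reg, value):
        """Ordinary register write; marked dirty for later write-back."""
        self.entries[reg][1] = value
        self.entries[reg][2] = True

    def writeback(self, reg):
        """Main-memory view: the tag tells the cache where the value lives."""
        addr, value, dirty = self.entries[reg]
        if dirty:
            self.memory[addr] = value
            self.entries[reg][2] = False
```

The point of the model is the dual interface: `read`/`write` never mention addresses, while `bind`/`writeback` behave like cache fill and write-back.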
2

Floating-point fused multiply-add architectures

Quinnell, Eric Charles 28 August 2008 (has links)
Abstract not available.
3

Floating-point fused multiply-add architectures

Quinnell, Eric Charles, 1982- 22 August 2011 (has links)
Abstract not available.
4

The design and programming of a powerful short-wordlength processor using context-dependent machine instructions

Hor, Tze-man, 賀子文 January 1985 (has links)
Published or final version. Computer Science, Master of Philosophy.
5

A computer architecture for implementation within autonomous machines

Collins, Thomas Riley 05 1900 (has links)
No description available.
6

Simulation of a modular hierarchical adaptive computer architecture with communication delay

Wang, I-Yang, 1958- January 1986 (has links)
No description available.
7

A comprehensive evaluation framework for system modernization : a case study using data services

Barnes, Meredith Anne January 2011 (has links)
Modernization migrates cumbersome existing systems to a new architecture to extend the longevity of the business processes they support. Three modernization approaches exist: white-box, black-box, and grey-box, the last being a hybrid of the first two. Modernization can be utilised to create data services for a Service Oriented Architecture. Since it is unclear which approach is better suited to the development of data services, a comprehensive evaluation framework is proposed to compare the white-box and black-box approaches. The framework consists of three evaluation components. Firstly, the developer effort to modernize existing code is measured by acknowledged software metrics. Secondly, the quality of the data services is measured against Quality of Service criteria identified specifically for data services. Thirdly, the effectiveness of the modernized data services is measured through usability evaluations. By combining the results of the three evaluation components, a recommended approach is identified for the modernization of data services. The framework was successfully employed to compare the white-box and black-box approaches applied to a case study. The results indicated that, had only a single evaluation component been used, the comparison of the two approaches may have been inconclusive. The findings of this research contribute a comprehensive evaluation framework which can be applied to compare modernization approaches and measure modernization success.
8

Assigning cost to branches for speculation control in superscalar processors

Khosrow-Khavar, Farzad 10 April 2008 (has links)
No description available.
9

Using lazy instruction prediction to reduce processor wakeup power dissipation

Homayoun, Houman 10 April 2008 (has links)
No description available.
10

Performance and energy efficiency of clustered processors

Zarrabi, Sepehr 10 April 2008 (has links)
Modern processors aim to exploit instruction-level parallelism (ILP) by utilizing numerous functional units, large on-chip structures and wide issue windows. This leads to extremely complex designs, which in turn adversely affect clock rate and energy efficiency. Hence, clustered processors have been introduced as an alternative, allowing high levels of ILP while maintaining a desirable clock rate and manageable power consumption. Nonetheless, clustering has its drawbacks. In this work we discuss the two types of clustering-induced delays, caused by limited intra-cluster issue bandwidth and by inter-cluster communication latencies. We use simulation results to show that the stalls caused by inter-cluster communication delays are the dominant factor impeding the performance of clustered processors. We also illustrate that microarchitectures become more energy efficient as the number of clusters grows. Finally, we study branch misprediction as a source of energy loss and examine how pipeline gating can alleviate this problem in centralized and distributed processors.
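The two clustering-induced delays named in the abstract above can be illustrated with a toy scheduling model: operands produced on another cluster arrive after a fixed communication latency, and each cluster can issue only a limited number of instructions per cycle. This is a hypothetical sketch, not the thesis's simulator; all parameter names are invented for illustration.

```python
# Toy model of clustering-induced delays: limited intra-cluster issue
# bandwidth plus a fixed inter-cluster communication latency.
# In-order, one-cycle-execute; purely illustrative.

def schedule(insts, issue_width, comm_latency):
    """insts: list of (cluster, deps), deps being indices of producer
    instructions. Returns each instruction's completion cycle."""
    finish = []
    issued = {}  # (cluster, cycle) -> instructions issued, models bandwidth
    for cluster, deps in insts:
        # Operands crossing clusters arrive comm_latency cycles late.
        ready = 0
        for d in deps:
            arrival = finish[d]
            if insts[d][0] != cluster:
                arrival += comm_latency
            ready = max(ready, arrival)
        # Find a cycle with spare issue bandwidth on this cluster.
        cycle = ready
        while issued.get((cluster, cycle), 0) >= issue_width:
            cycle += 1
        issued[(cluster, cycle)] = issued.get((cluster, cycle), 0) + 1
        finish.append(cycle + 1)  # one-cycle execution
    return finish
```

Running a three-instruction dependence chain through this model shows the effect the abstract describes: the same chain finishes later when its producers and consumers sit on different clusters, while placing independent instructions on one narrow cluster exposes the issue-bandwidth stall instead.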
