  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Modeling and implementation of an integrated pixel processing tile for focal plane systems

Robinson, William Hugh 01 December 2003 (has links)
No description available.
252

Characterization and Avoidance of Critical Pipeline Structures in Aggressive Superscalar Processors

Sassone, Peter G. 20 July 2005 (has links)
In recent years, with only small fractions of modern processors now accessible in a single cycle, computer architects constantly fight against propagation issues across the die. Unfortunately, this trend continues to shift inward, and now even the most internal features of the pipeline are designed around communication, not computation. To address the inward creep of this constraint, this work focuses on the characterization of communication within the pipeline itself, architectural techniques to avoid it when possible, and layout co-design for early detection of problems. I present work in creating a novel detection tool for common-case operand movement which can rapidly characterize an application's dataflow patterns. The results produced are suitable for exploitation, as a small number of patterns can describe a significant portion of modern applications. Work on dynamic dependence collapsing takes the observations from the pattern results and shows how certain groups of operations can be dynamically grouped, avoiding unnecessary communication between individual instructions. This technique also amplifies the efficiency of pipeline data structures such as the reorder buffer, increasing both IPC and frequency. I also identify the same sets of collapsible instructions at compile time, producing the same benefits with minimal hardware complexity. This technique is also done in a backward-compatible manner, as the groups are exposed by simple reordering of the binary's instructions. I present aggressive pipelining approaches for these resources which avoid the critical timing often presumed necessary in aggressive superscalar processors. As these structures are designed for the worst case, pipelining them can produce greater frequency benefit than IPC loss. I also use the observation that the dynamic issue order for instructions in aggressive superscalar processors is predictable. Thus, a hardware mechanism is introduced for caching the wakeup order for groups of instructions efficiently. These wakeup vectors are then used to speculatively schedule instructions, avoiding dynamic scheduling when it is not necessary. Finally, I present a novel approach to fast and high-quality chip layout. By allowing architects to quickly evaluate what-if scenarios during early high-level design, chip designs are less likely to encounter implementation problems later in the process.
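As an illustrative aside, not drawn from the dissertation's own hardware design, the core idea of dependence collapsing can be sketched in a few lines: fuse a simple ALU producer with its sole simple-ALU consumer so that the pair moves through the pipeline as one unit and the intermediate value never travels between separate instructions. The instruction representation and fusion rule below are simplified assumptions.

```python
# Minimal sketch of identifying collapsible instruction pairs in a decoded
# window. The Instr representation and the fusion rule are hypothetical and
# far simpler than the hardware mechanism described in the dissertation.
from dataclasses import dataclass
from typing import List, Tuple

SIMPLE_ALU = {"add", "sub", "and", "or", "xor", "shl", "shr"}

@dataclass
class Instr:
    idx: int                 # position in the decoded window
    op: str                  # mnemonic
    dest: str                # destination register
    srcs: Tuple[str, ...]    # source registers

def find_collapsible_pairs(window: List[Instr]) -> List[Tuple[int, int]]:
    """Return (producer, consumer) index pairs that could be fused."""
    pairs = []
    used = set()
    for prod in window:
        if prod.op not in SIMPLE_ALU or prod.idx in used:
            continue
        consumers = [i for i in window
                     if i.idx > prod.idx and prod.dest in i.srcs]
        # Fuse only when there is exactly one consumer and it is also simple ALU.
        if (len(consumers) == 1 and consumers[0].op in SIMPLE_ALU
                and consumers[0].idx not in used):
            pairs.append((prod.idx, consumers[0].idx))
            used.update({prod.idx, consumers[0].idx})
    return pairs

window = [
    Instr(0, "add", "r1", ("r2", "r3")),
    Instr(1, "xor", "r4", ("r1", "r5")),   # sole consumer of r1: collapsible
    Instr(2, "mul", "r6", ("r4", "r7")),   # mul is not simple ALU, so no fusion
]
print(find_collapsible_pairs(window))      # [(0, 1)]
```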
253

Interoperable components across multiple component architectures

Banda, Ravi S., January 1998 (has links)
Thesis (M.S.)--West Virginia University, 1998. / Title from document title page. Document formatted into pages; contains vi, 53 p. : ill. Vita. Includes abstract. Includes bibliographical references (p. 50-51).
254

Managing lifetime reliability, performance, and power tradeoffs in multicore microarchitectures

Song, William J. 07 January 2016 (has links)
The objective of this research is to characterize and manage lifetime reliability, microarchitectural performance, and power tradeoffs in multicore processors. This dissertation comprises three research themes: 1) a modeling and simulation method for interacting multicore processor physics, 2) characterization and management of the performance and lifetime reliability tradeoff, and 3) extending Amdahl’s Law to understand the lifetime reliability, performance, and energy efficiency of heterogeneous processors. With continued technology scaling, processor operations are increasingly dominated by multiple distinct physical phenomena and their coupled interactions. Understanding these behaviors requires the modeling of complex physical interactions. This dissertation first presents a novel simulation framework that orchestrates interactions between multiple physical models and microarchitecture simulators to enable research explorations at the intersection of application, microarchitecture, energy, power, thermal, and reliability. Using this framework, workload-induced variation of device degradation is characterized, and its impacts on processor lifetime and performance are analyzed. This research introduces a new metric to quantify the performance-reliability tradeoff. Lastly, theoretical models of heterogeneous multicore processors are proposed for understanding the performance, energy efficiency, and lifetime reliability consequences. It is shown that these system metrics are governed by Amdahl’s Law and correlated as a function of processor composition, scheduling method, and Amdahl’s scaling factor. This dissertation highlights the importance of multidimensional analysis and extends the scope of microarchitectural studies by incorporating the physical aspects of processor operations and designs.
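For reference, the classical form of Amdahl’s Law that the dissertation takes as its starting point is reproduced below; the dissertation’s reliability and energy extensions are not reproduced here.

```latex
% Classical Amdahl's Law (the starting point of the extension).
% p = fraction of the workload that benefits from parallel execution
% n = number of cores (or speedup of the accelerated fraction)
\[
  S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}}
\]
```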
255

A fundamental study on prototyping flexible computing systems

邢山震, Xing, Shanzhen. January 1999 (has links)
Published or final version / Industrial and Manufacturing Systems Engineering / Doctoral / Doctor of Philosophy
256

Increasing the efficacy of automated instruction set extension

Bennett, Richard Vincent January 2011 (has links)
The use of Instruction Set Extension (ISE) in customising embedded processors for a specific application has been studied extensively in recent years. The addition of a set of complex arithmetic instructions to a baseline core has proven to be a cost-effective means of meeting design performance requirements. This thesis proposes and evaluates a reconfigurable ISE implementation called “Configurable Flow Accelerators” (CFAs), a number of refinements to an existing Automated ISE (AISE) algorithm called “ISEGEN”, and the effects of source form on AISE. The CFA is demonstrated repeatedly to be a cost-effective design for ISE implementation. A temporal partitioning algorithm called “staggering” is proposed and demonstrated on average to reduce the area of a CFA implementation by 37% for only an 8% reduction in acceleration. This thesis then turns to concerns within the ISEGEN AISE algorithm. A methodology for finding a good static heuristic weighting vector for ISEGEN is proposed and demonstrated. Up to 100% of merit is shown to be lost or gained through the choice of vector. ISEGEN early-termination is introduced and shown to improve the runtime of the algorithm by up to 7.26x, and by 5.82x on average. An extension to the ISEGEN heuristic to account for pipelining is proposed and evaluated, increasing acceleration by up to an additional 1.5x. An energy-aware heuristic is added to ISEGEN, which reduces the energy used by a CFA implementation of a set of ISEs by an average of 1.6x, and up to 3.6x. This result directly contradicts the frequently espoused notion that “bigger is better” in ISE. The last stretch of work in this thesis is concerned with source-level transformation: the effect of changing the representation of the application on the quality of the combined hardware-software solution. A methodology for combined exploration of source transformation and ISE is presented, and demonstrated to improve the acceleration of the result by an average of 35% versus ISE alone. Floating point is demonstrated to perform worse than fixed point, for all design concerns and applications studied here, regardless of the ISEs employed.
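As an illustrative aside, a static heuristic weighting vector of the kind the thesis tunes can be pictured as a weighted merit score over candidate ISE subgraphs. The feature names, weights, and candidates below are hypothetical and are not ISEGEN’s actual heuristic; the point is only that different vectors can rank the same candidates differently.

```python
# Hedged sketch: rank hypothetical ISE candidate subgraphs with two different
# static weighting vectors and observe that the preferred candidate changes.
from typing import Dict, List

FEATURES = ["est_speedup", "node_count", "input_ports", "output_ports"]

def merit(candidate: Dict[str, float], weights: List[float]) -> float:
    """Weighted sum of candidate features; higher is considered better."""
    return sum(w * candidate[f] for w, f in zip(weights, FEATURES))

# Three hypothetical candidate subgraphs described by a few features.
candidates = [
    {"est_speedup": 3.2, "node_count": 9,  "input_ports": 4, "output_ports": 1},
    {"est_speedup": 2.1, "node_count": 4,  "input_ports": 2, "output_ports": 1},
    {"est_speedup": 4.0, "node_count": 18, "input_ports": 7, "output_ports": 3},
]

# The first vector mostly rewards raw speedup; the second penalizes area
# (node count) and I/O ports more heavily, as an area- or energy-aware
# vector might. They pick different winners.
for weights in ([1.0, -0.05, -0.1, -0.1], [1.0, -0.3, -0.5, -0.5]):
    best = max(candidates, key=lambda c: merit(c, weights))
    print(weights, "->", best)
```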
257

A functional architecture for a logistics expert system in a sea based environment

Hicks, David M. 12 1900 (has links)
The Armed Forces of the United States are becoming more expeditionary in nature, in that more forces will be home-ported or home-stationed in the Continental U.S. One of the major characteristics associated with future military concepts is that they employ Joint and Coalition Forces from a sea base conducting a full range of operations in the littoral regions of the world. A key aspect of conducting operations is the sustainment of forces in a sea based environment. Future logistical architectures associated with providing that sustainment will be joint and integrated to provide seamless support to all forces operating in and around the sea base. The ordering system associated with that future logistical architecture must be robust and redundant, with no single point of failure. The ordering and tracking of all sustainment supplies through the supply chain distribution system will be important in ensuring that supplies are delivered to the right place at the right time to guarantee success. This thesis proposes a functional architecture for an Expert Ordering System in a sea based environment that will reduce the overall logistical manpower requirements of the Joint/Combined Force. Use cases covering different realistic scenarios are developed to justify the system.
258

Architectures for device aware network

Chung, Wai Kong. 03 1900 (has links)
In today's heterogeneous computing environment, a wide variety of computing devices with varying capabilities need to access information in the network. Existing networks are not able to differentiate among device capabilities and indiscriminately send information to the end-devices, without regard to the ability of those devices to use it. The goal of a device-aware network (DAN) is to match the capability of the end-devices to the information delivered, thereby optimizing network resource usage. In the battlefield, all resources, including time, network bandwidth, and battery capacity, are very limited. A device-aware network avoids the waste that occurs in current, device-ignorant networks. By eliminating unusable traffic, a device-aware network reduces the time the end-devices spend receiving extraneous information, and thus saves time and conserves battery life. In this thesis, we evaluated two potential DAN architectures, proxy-based and router-based approaches, based on the key requirements we identified. To demonstrate the viability of DAN, we built a prototype using a hybrid of the two architectures. The key elements of our prototype include a DAN browser, a DAN Lookup Server, and a DAN Processing Unit (DPU). We have demonstrated how our architecture can enhance overall network utility by ensuring that only appropriate content is delivered to the end-devices.
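As an illustrative aside, the filtering step at the heart of a device-aware network can be sketched as a proxy that checks each content item against a device capability profile before forwarding it downstream. The capability fields, content kinds, and formats below are hypothetical and are not taken from the thesis.

```python
# Illustrative-only sketch of device-aware filtering: drop content the
# end-device cannot use so it never consumes bandwidth or battery.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class DeviceProfile:
    supports_video: bool
    max_image_width: int          # pixels
    supported_formats: Set[str]

@dataclass
class ContentItem:
    kind: str      # "text", "image", or "video"
    fmt: str       # e.g. "txt", "jpeg", "mp4"
    width: int     # pixels; 0 for text

def filter_for_device(items: List[ContentItem],
                      dev: DeviceProfile) -> List[ContentItem]:
    """Return only the items the end-device can actually use."""
    usable = []
    for item in items:
        if item.kind == "video" and not dev.supports_video:
            continue          # unusable: skip to save bandwidth and battery
        if item.kind == "image" and item.width > dev.max_image_width:
            continue          # too large for the device's display
        if item.kind != "text" and item.fmt not in dev.supported_formats:
            continue          # format the device cannot decode
        usable.append(item)
    return usable

handheld = DeviceProfile(supports_video=False, max_image_width=320,
                         supported_formats={"jpeg"})
payload = [ContentItem("text", "txt", 0),
           ContentItem("image", "jpeg", 1024),
           ContentItem("video", "mp4", 640)]
print(filter_for_device(payload, handheld))   # only the text item survives
```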
259

Software Communications Architecture (SCA) compliant software defined radio design for IEEE 802.16 WirelessMAN-OFDM™ transceiver

Low, Kian Wai 12 1900 (has links)
Demands for seamless mobile communications are driving the research and development of software defined radio (SDR), which enables a single terminal to transmit and receive in distinct wireless systems through a simple change in software to reconfigure the terminal's functions. Its application areas include military use, home networks, intelligent transport systems and cellular communications. Several SDR software architectures have been developed during the last few years. One implementation of the Software Communications Architecture is the Open Source SCA Implementation
260

Improving Energy-Efficiency of Multicores using First-Order Modeling

Spiliopoulos, Vasileios January 2016 (has links)
In recent decades, power consumption has evolved into one of the most critical resources in a computer system. In the form of the electricity bill in data centers, battery life in mobile devices, or thermal constraints in desktops and laptops, power consumption imposes several limitations on today’s processors, and improving power and energy efficiency is one of the most urgent research topics of Computer Architecture. Dynamic Voltage and Frequency Scaling (DVFS) and Cache Resizing are among the most popular energy saving techniques. Previous work, however, has focused on developing heuristics and trial-and-error methods that yield acceptable savings but fail to provide insight into and understanding of how these techniques affect the power and performance of a computer system. In contrast, this thesis proposes the use of first-order modeling to improve the energy efficiency of computer systems. A first-order model needs to be (i) accurate enough to efficiently drive DVFS and Cache Resizing decisions, and (ii) simple enough to eliminate the overhead of collecting the required inputs to the model. We show that such models can be constructed and successfully applied in modern systems. For DVFS, we propose to scale frequency down to exploit applications’ memory slack, i.e., periods that the processor spends waiting for data to be fetched from main memory. In such cases, the processor frequency can be scaled down to save energy without inordinate performance penalty. Our DVFS models can detect slack and predict the impact of DVFS on both power and performance with great accuracy. Cache Resizing, on the other hand, relies on the fact that many applications do not benefit from the vast amount of cache that modern processors are equipped with. In such cases, the cache can be resized to save static energy consumption at limited performance cost. Since both techniques are related to the memory behavior of applications, we propose a unified model to manage the two techniques in tandem and maximize energy efficiency through synergistic DVFS and Cache Resizing. Finally, our experience with DVFS in real systems motivated us to contribute to the integration of DVFS into the gem5 simulator. Unlike other simulators that ignore the role of the OS in DVFS, we extend the gem5 simulator by developing the hardware and software components that allow the existing Linux DVFS infrastructure to be seamlessly integrated into the simulator.
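As an illustrative aside, a first-order DVFS model in this spirit can be sketched by splitting execution time into a core-bound part that scales with clock frequency and a memory-bound part (the slack) that does not, and by approximating dynamic energy with the usual capacitance-times-voltage-squared term per cycle. The split, constants, and numbers below are made up for illustration and are not the thesis’s actual models.

```python
# Hedged first-order sketch: time = core cycles / f + memory stall time;
# dynamic energy ~ C_eff * V^2 per elapsed cycle. Inputs would normally come
# from performance counters; here they are invented for illustration.
def predict_time(core_cycles: float, mem_stall_s: float, freq_hz: float) -> float:
    """Predicted execution time (s): core-bound part scales with f, stalls do not."""
    return core_cycles / freq_hz + mem_stall_s

def predict_dynamic_energy(core_cycles: float, mem_stall_s: float,
                           freq_hz: float, volt: float, cap_eff: float) -> float:
    """First-order dynamic energy over all elapsed cycles."""
    elapsed_cycles = core_cycles + mem_stall_s * freq_hz
    return cap_eff * volt ** 2 * elapsed_cycles

# Hypothetical memory-bound phase: most of the time is stall, so halving the
# frequency (and lowering the voltage) costs little time but saves much energy.
core_cycles, mem_stall = 2e9, 1.5      # 2G core-bound cycles, 1.5 s of stalls
for freq, volt in [(3.0e9, 1.10), (1.5e9, 0.85)]:
    t = predict_time(core_cycles, mem_stall, freq)
    e = predict_dynamic_energy(core_cycles, mem_stall, freq, volt, 1e-9)
    print(f"{freq / 1e9:.1f} GHz: time {t:.2f} s, dynamic energy {e:.2f} J")
```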
