1 |
Buffer insertion in large circuits using look-ahead and back-off techniques / Waghmode, Mandar, 25 April 2007 (has links)
Buffer insertion is an essential technique for reducing interconnect delay in submicron
circuits. Though it is a well-researched area, there is still a need for robust and
effective algorithms to perform buffer insertion at the circuit level. This thesis proposes
a new buffer insertion algorithm for large circuits. The algorithm finds a buffering
solution for the entire circuit such that buffer cost is minimized and the timing
requirements of the circuit are satisfied. The algorithm iteratively inserts buffers in
the circuit, improving the circuit delay step by step. At its core are simple but
extremely effective techniques that constructively guide the search for a good
buffering solution. Flexibility to adapt to the user's requirements and the
ability to reduce the number of buffers are the strengths of this algorithm. Experimental
results on ISCAS85 benchmark circuits show that the proposed algorithm, on
average, yields a 36% reduction in the number of buffers and runs three times faster
than one of the best known previously researched algorithms.
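As a rough illustration of the iterative, step-by-step flavor described above, the following sketch inserts buffers one at a time on a two-pin net under the Elmore delay model and backs off as soon as timing is met or an extra buffer stops helping. All electrical parameters, the equal-spacing simplification, and the function names are illustrative assumptions, not the thesis's actual look-ahead and back-off formulation.

```python
# Illustrative constants -- not from the thesis.
R_UNIT, C_UNIT = 0.1, 0.2               # wire resistance/capacitance per um
R_DRV, C_SINK = 100.0, 10.0             # driver resistance, sink load
R_BUF, C_BUF, D_BUF = 120.0, 5.0, 30.0  # buffer drive R, input cap, delay

def stage_delay(r_drive, length, c_load):
    """Elmore delay of one stage: a driver, a uniform wire, and its load."""
    rw, cw = R_UNIT * length, C_UNIT * length
    return r_drive * (cw + c_load) + rw * (cw / 2.0 + c_load)

def net_delay(total_len, k):
    """Delay of a two-pin net of length total_len with k equally spaced buffers."""
    seg = total_len / (k + 1)
    delay = 0.0
    for i in range(k + 1):
        r = R_DRV if i == 0 else R_BUF
        c = C_SINK if i == k else C_BUF
        delay += stage_delay(r, seg, c) + (D_BUF if i > 0 else 0.0)
    return delay

def insert_buffers(total_len, t_req):
    """Insert buffers step by step; back off once timing is met or adding
    another buffer no longer improves the delay."""
    k, best = 0, net_delay(total_len, 0)
    while best > t_req:
        cand = net_delay(total_len, k + 1)
        if cand >= best:        # no improvement left: stop
            break
        k, best = k + 1, cand
    return k, best

print(insert_buffers(2000.0, 500.0))    # (buffers used, resulting delay)
```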
|
2 |
Algorithmic techniques for nanometer VLSI design and manufacturing closure / Hu, Shiyan, 10 October 2008 (has links)
As Very Large Scale Integration (VLSI) technology moves to the nanoscale
regime, design and manufacturing closure becomes very difficult to achieve due to
increasing chip and power density. Imperfections due to process, voltage, and temperature variations aggravate the problem. Uncertainty in the electrical characteristics of
individual devices and wires may cause significant performance deviations or even functional failures. These effects pose tremendous challenges to the continuation of Moore's
law as well as to the growth of the semiconductor industry.
Efforts are needed in both the deterministic and the variation-aware design stages. This
research proposes various innovative algorithms addressing both stages to obtain
designs with high frequency, low power, and high robustness. For deterministic
optimizations, new buffer insertion and gate sizing techniques are proposed. For
variation-aware optimizations, new lithography-driven and post-silicon tuning-driven
design techniques are proposed.
For buffer insertion, a new slew buffering formulation is presented and is proved
to be NP-hard. Despite this, a highly efficient algorithm is proposed that runs more
than 90x faster than the best alternatives. The algorithm is also extended to handle
continuous buffer locations and blockages.
For gate sizing, a new algorithm is proposed to handle a discrete gate library, in
contrast to the unrealistic continuous gate library assumed by most existing algorithms. Our approach is a continuous-solution-guided dynamic programming approach, which
integrates the high solution quality of dynamic programming with the short runtime
of rounding a continuous solution.
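A minimal sketch of the guiding idea, on a toy gate chain under the logical-effort delay model: solve the continuous sizing problem in closed form, then search only the discrete library sizes that bracket each continuous size. The library, the delay model, and the brute-force search over bracketed candidates are illustrative assumptions; the actual method in this work is a dynamic program over general circuits.

```python
import bisect, math

LIB = [1, 2, 4, 8, 16, 32]   # discrete gate library (illustrative sizes)
G, P = 1.0, 1.0              # logical effort and parasitic delay per stage

def chain_delay(sizes, c_in, c_load):
    """Logical-effort delay of a gate chain; each stage drives the next."""
    caps = [c_in] + list(sizes) + [c_load]
    return sum(G * caps[i + 1] / caps[i] + P for i in range(len(caps) - 1))

def continuous_opt(n, c_in, c_load):
    """Closed-form continuous optimum: equal effort per stage (geometric sizes)."""
    f = (c_load / c_in) ** (1.0 / (n + 1))
    return [c_in * f ** (i + 1) for i in range(n)]

def guided_discrete(n, c_in, c_load):
    """Round via search: try only the library sizes bracketing each
    continuous size, rather than the whole library per gate."""
    cont = continuous_opt(n, c_in, c_load)
    cands = []
    for s in cont:
        j = bisect.bisect_left(LIB, s)
        cands.append(LIB[max(j - 1, 0):j + 1])
    best, best_sizes = math.inf, None
    def rec(i, chosen):
        nonlocal best, best_sizes
        if i == n:
            d = chain_delay(chosen, c_in, c_load)
            if d < best:
                best, best_sizes = d, list(chosen)
            return
        for s in cands[i]:
            rec(i + 1, chosen + [s])
    rec(0, [])
    return best_sizes, best

print(guided_discrete(3, 1.0, 64.0))   # sizes chosen from LIB, and their delay
```

Even in this toy, the bracketing cuts the search from 6^n library combinations to at most 2^n, which is the essence of letting the continuous solution guide the discrete one.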
For lithography-driven optimization, the problem of cell placement considering
manufacturability is studied. Three algorithms are proposed to handle cell flipping
and relocation. They are based on dynamic programming and graph-theoretic approaches, and can provide different tradeoffs between variation reduction and wirelength increase.
For post-silicon tuning-driven optimization, the problem of unified adaptivity
optimization on logical and clock signal tuning is studied, which enables significant resource savings. The new algorithm is based on a novel linear programming
formulation which is solved by an advanced robust linear programming technique.
The continuous solution is then discretized using binary-search-accelerated dynamic
programming, batch-based optimization, and Latin Hypercube sampling-based fast
simulation.
|
3 |
Fast interconnect optimization / Li, Zhuo, 12 April 2006 (has links)
As Very Large Scale Integration (VLSI) technology continues to scale and operating
frequencies increase, delay optimization techniques for interconnect
are increasingly important for achieving timing closure of high-performance designs.
For gigahertz microprocessors and multi-million-gate ASIC designs, it is crucial to
have fast algorithms in design automation tools for many classical problems in
the field, in order to shorten the time to market of the VLSI chip. This research presents algorithmic
techniques and constructive models for two such problems: (1) Fast buffer
insertion for delay optimization, (2) Wire sizing for delay optimization and variation
minimization on non-tree networks.
For the buffer insertion problem, this dissertation proposes several innovative
speedup techniques for different problem formulations and realistic requirements.
For the basic buffer insertion problem, an O(n log² n) optimal algorithm is proposed
that runs much faster than van Ginneken's classical O(n²) algorithm,
where n is the number of buffer positions. For modern design libraries that contain
hundreds of buffers, this research also proposes an optimal O(bn²)-time algorithm
for b buffer types, a significant improvement over the previous O(b²n²) algorithm
by Lillis, Cheng and Lin. For nets with a small number of sinks and a large number
of buffer positions, a simple O(mn) optimal algorithm is proposed, where m is the
number of sinks. The minimum-cost buffer insertion problem is first proved to be NP-complete, and several optimal and approximation techniques
are then proposed to further speed up buffer insertion with resource control
for large industrial designs.
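Since van Ginneken-style candidate propagation underlies all of these results, a minimal sketch of the basic recurrence may help: (downstream capacitance, required arrival time) candidates are swept from sink to source along a single path, pruning dominated candidates at each step. The electrical parameters below are illustrative, and the O(n log² n), O(bn²), and O(mn) results replace this quadratic-time toy's data structures with much more careful ones.

```python
R_W, C_W = 0.1, 0.2               # wire R/C per unit length (illustrative)
R_B, C_B, D_B = 100.0, 5.0, 20.0  # buffer output R, input cap, intrinsic delay
R_D = 150.0                        # source driver resistance (illustrative)

def add_wire(cands, length):
    """Propagate (cap, required time) candidates across one wire segment."""
    rw, cw = R_W * length, C_W * length
    return [(c + cw, q - rw * (cw / 2.0 + c)) for c, q in cands]

def add_buffer_option(cands):
    """At a legal buffer position, keep every candidate plus the best
    buffered alternative (the buffer hides downstream cap behind C_B)."""
    best_q = max(q - R_B * c - D_B for c, q in cands)
    return prune(cands + [(C_B, best_q)])

def prune(cands):
    """Keep only the Pareto front: low capacitance and high required time."""
    cands.sort()
    front, hi = [], float("-inf")
    for c, q in reversed(cands):
        if q > hi:
            front.append((c, q))
            hi = q
    front.reverse()
    return front

def van_ginneken(seg_lengths, sink_cap, sink_rat):
    """Sink-to-source sweep; seg_lengths[i] is the wire between positions."""
    cands = [(sink_cap, sink_rat)]
    for length in seg_lengths:
        cands = add_buffer_option(add_wire(cands, length))
    return max(q - R_D * c for c, q in cands)   # best slack at the driver

print(van_ginneken([200.0] * 5, 10.0, 1000.0))
```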
For the wire sizing problem, we propose a systematic method to size the wires of
general non-tree RC networks. The new method can be used for delay optimization
and variation reduction.
|
4 |
Exploiting instruction-level parallelism: a constructive approach / Santos, Luiz Cláudio Villar dos, January 1998 (has links)
Thesis (doctoral)--Technische Universiteit Eindhoven, 1998. / Vita. Includes bibliographical references (p. 137-142).
|
5 |
Architectures and Algorithms for Mitigation of Soft Errors in Nanoscale VLSI Circuits / Bhattacharya, Koustav, 22 October 2009 (has links)
The occurrence of transient faults like soft errors in computer circuits poses a significant challenge to the reliability of computer systems. Soft errors, which occur when energetic neutrons coming from space or alpha particles arising out of packaging materials hit the transistors, may manifest themselves as a bit flip in a memory element or as a transient glitch generated at an internal node of combinational logic, which may subsequently propagate to and be captured in a latch. Although the problem of soft errors was earlier only a concern for space applications, aggressive technology scaling trends have extended the problem to modern VLSI systems, even for terrestrial applications.
In this dissertation, we explore techniques at all levels of the design flow to reduce the vulnerability of VLSI systems against soft errors without compromising other design metrics like delay, area, and power. We propose new models for estimating soft errors in storage structures and combinational logic. While soft errors in caches are estimated using the vulnerability metric, soft errors in logic circuits are estimated using two new metrics called the glitch enabling probability (GEP) and the cumulative probability of observability (CPO). These metrics, based on the signal probabilities of nets, accurately model soft errors in radiation-aware synthesis algorithms and help in efficient exploration of the design solution space during optimization. At the physical design level, we leverage larger netlengths to provide larger RC ladders that effectively filter out transient glitches. Towards this, a new heuristic has been developed to selectively assign larger wirelengths to certain critical nets, reducing the delay and area overhead while improving immunity to soft errors. Based on this, we propose two placement algorithms, based on simulated annealing and quadratic programming, that significantly reduce the soft error rates of circuits.
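As a rough illustration of the signal-probability machinery such metrics build on, the sketch below propagates P(net = 1) through a tiny netlist under the standard independence assumption (ignoring reconvergent fanout). The netlist, input probabilities, and gate set are made up for illustration; the dissertation's exact GEP and CPO definitions are not reproduced here.

```python
def signal_prob(gate, in_probs):
    """P(output = 1) for basic gates, assuming independent inputs."""
    if gate == "AND":
        p = 1.0
        for x in in_probs:
            p *= x
        return p
    if gate == "OR":
        p = 1.0
        for x in in_probs:
            p *= 1.0 - x
        return 1.0 - p
    if gate == "NOT":
        return 1.0 - in_probs[0]
    raise ValueError("unknown gate: " + gate)

# Toy netlist in topological order: net -> (gate type, input nets).
netlist = {"n1": ("AND", ["a", "b"]),
           "n2": ("OR",  ["n1", "c"]),
           "n3": ("NOT", ["n2"])}
probs = {"a": 0.5, "b": 0.5, "c": 0.5}   # primary-input probabilities
for net, (gate, ins) in netlist.items():
    probs[net] = signal_prob(gate, [probs[i] for i in ins])
print(probs)   # n1: 0.25, n2: 0.625, n3: 0.375
```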
At the circuit level, we develop techniques for hardening circuit nodes using a novel radiation jammer technique. The proposed technique is based on the principles of an RC differentiator and is used to isolate the driven cell from the driving cell that is being hit by a radiation strike. Since the blind insertion of radiation blocker cells on all circuit nodes is expensive, candidate nodes are selected for insertion of these cells using a new metric called the probability of radiation blocker circuit insertion (PRI). At the logic level, we investigate a gate sizing algorithm that simultaneously optimizes the soft error rate (SER) and crosstalk noise, in addition to the power and performance of circuits, while considering the effect of process variations. This reliability-centric gate sizing technique has been formulated as a mathematical program and is solved efficiently. At the architectural level, we develop solutions for the correction of multi-bit errors in large L2 caches by controlling or mining the redundancy in the memory hierarchy, as well as methods to increase the amount of redundancy in the memory hierarchy by employing a redundancy-based replacement policy, in which the amount of redundancy is controlled using a user-defined redundancy threshold.
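Both the longer-wirelength filtering and the RC-based blocking above rest on the same first-order fact: a node with time constant tau responds to a rectangular glitch of width w with peak Vdd*(1 - e^(-w/tau)), so increasing tau attenuates narrow transients. A back-of-the-envelope check with illustrative numbers (not taken from the dissertation):

```python
import math

def glitch_peak(vdd, width, tau):
    """Peak response of a first-order RC node to a rectangular glitch."""
    return vdd * (1.0 - math.exp(-width / tau))

VDD, VTH = 1.0, 0.5          # supply and a nominal latching threshold
WIDTH = 100e-12              # a 100 ps radiation-induced glitch (illustrative)
for tau in (50e-12, 150e-12, 600e-12):   # increasing net RC time constant
    vp = glitch_peak(VDD, WIDTH, tau)
    print(f"tau={tau:.0e}s  peak={vp:.2f}V ",
          "filtered" if vp < VTH else "propagates")
```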
The novel architectures and the new reliability-centric synthesis algorithms proposed for the various design abstraction levels have been shown to achieve significant reductions in soft error rates in current nanometer circuits. The design techniques, algorithms, and architectures can be integrated into existing design flows. A VLSI system implementation can leverage the architectural solutions for the reliability of its caches, while the custom hardware synthesized for the system can be protected against radiation strikes by utilizing the circuit-level, logic-level, and layout-level optimization algorithms that have been developed.
|
6 |
Network Coding in Distributed, Dynamic, and Wireless Environments: Algorithms and Applications / Chaudhry, Mohammad, December 2011 (has links)
Network coding is a new paradigm that has been shown to improve throughput, fault tolerance, and other quality-of-service parameters in communication networks. The basic idea of network coding techniques is to exploit the "mixing" nature of information flows, i.e., many algebraic operations (e.g., addition, subtraction) can be performed over the data packets, whereas traditionally information flows are treated as physical commodities (e.g., cars) over which algebraic operations cannot be performed. In this dissertation we answer some of the important open questions related to network coding. Our work can be divided into four major parts.
Firstly, we focus on network code design for dynamic networks, i.e., networks with frequently changing topologies and frequently changing sets of users. Examples of such dynamic networks are content distribution networks, peer-to-peer networks, and mobile wireless networks. A change in the network might make the previously assigned network code infeasible, i.e., not all users may be able to receive their demands. The central problem in the design of a feasible network code is to assign local encoding coefficients for each pair of links in a way that allows every user to decode the required packets. We analyze the problem of maintaining the feasibility of a network code and provide bounds on the number of modifications required under dynamic settings. We also present distributed algorithms for network code design, and propose a new path-based assignment of encoding coefficients to construct a feasible network code.
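To make the notion of encoding coefficients concrete, the sketch below encodes packets as GF(2) linear combinations (plain XORs) and decodes by Gaussian elimination; a code is feasible for a user exactly when the coefficient vectors that user receives are full rank. The packets and coefficient choices are illustrative, and practical codes usually work over larger fields such as GF(2^8).

```python
def combine(packets, coeffs):
    """One coded packet: the GF(2) linear combination (XOR) of the sources."""
    out = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            out = bytes(x ^ y for x, y in zip(out, p))
    return out

def decode(coded):
    """Recover sources by Gaussian elimination over GF(2).
    coded: list of (coefficient vector, payload) pairs, full rank."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    n = len(rows[0][0])
    for col in range(n):
        piv = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = (
                    [a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                    bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])),
                )
    return [bytes(p) for _, p in rows[:n]]

src = [b"aa", b"bb", b"cc"]
coeffs = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]   # linearly independent over GF(2)
coded = [(c, combine(src, c)) for c in coeffs]
assert decode(coded) == src                   # every source is recovered
```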
Secondly, we investigate network coding problems in wireless networks. It has been shown that network coding techniques can significantly increase the overall
throughput of wireless networks by taking advantage of their broadcast nature: each packet transmitted by a device is broadcast within a certain
area and can be overheard by the neighboring devices. When a device needs to transmit packets, it employs Index Coding, which uses the knowledge of what the device's neighbors have overheard in order to reduce the number of transmissions. With Index Coding, each transmitted packet can be a linear combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. We propose an efficient exact solution and several heuristic solutions for the Index Coding problem. Since the Index Coding problem is NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions saved by employing coding, compared to the solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We analyze the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.
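A small worked instance of the Index Coding setting may help. In the toy sketch below, three uncoded transmissions shrink to two XOR-coded broadcasts because each group member already holds, as side information, every other packet mixed into its broadcast. The greedy grouping is a naive illustration, not one of the dissertation's algorithms.

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# client -> (wanted packet, set of packets it has already overheard)
clients = {"c1": ("p1", {"p2", "p3"}),
           "c2": ("p2", {"p1"}),
           "c3": ("p3", {"p1", "p2"})}
packets = {"p1": b"\x01", "p2": b"\x02", "p3": b"\x03"}

def greedy_index_code(clients, packets):
    """Greedily group clients so one XOR broadcast serves the whole group:
    every member must hold, as side info, all other wanted packets."""
    remaining, transmissions = dict(clients), []
    while remaining:
        group = []                       # (client, wanted, side info)
        for c, (want, has) in sorted(remaining.items()):
            if all(w in has and want in h for _, w, h in group):
                group.append((c, want, has))
        coded = reduce(xor, (packets[w] for _, w, _ in group))
        transmissions.append(([w for _, w, _ in group], coded))
        for c, _, _ in group:
            del remaining[c]
    return transmissions

for ids, coded in greedy_index_code(clients, packets):
    print(ids, coded)    # ['p1', 'p2'] then ['p3']: 2 sends instead of 3
```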
Thirdly, we consider the problem of accessing large data files stored at multiple locations across a content distribution, peer-to-peer, or massive storage network. Parts of the data can be stored in either original or encoded form at multiple network locations. Clients access parts of the data through simultaneous downloads from several servers across the network. For each link used, the client has to pay some cost. A client might not be able to access a subset of servers simultaneously due to network restrictions such as congestion. Furthermore, a subset of the servers might contain correlated data, and accessing such a subset might not increase the amount of information available at the client. We present a novel, efficient polynomial-time solution for this problem that leverages matroid theory.
Fourthly, we explore applications of network coding for congestion mitigation and overflow avoidance in the global routing stage of Very Large Scale Integration
(VLSI) physical design. Smaller and smarter devices have resulted in a significant increase in the density of on-chip components, which has made congestion
and overflow critical issues in on-chip networks. We present novel techniques and algorithms for reducing congestion and minimizing overflows.
|