11 |
Layout optimization in ultra deep submicron VLSI design
Wu, Di, 16 August 2006 (has links)
As fabrication technology keeps advancing, many deep submicron (DSM) effects have become
increasingly evident and can no longer be ignored in Very Large Scale Integration
(VLSI) design. In this dissertation, we study several deep submicron problems (e.g., coupling
capacitance, antenna effect, and delay variation) and propose optimization techniques
to mitigate these DSM effects in the place-and-route stage of VLSI physical design.
The place-and-route stage of physical design can be further divided into several steps:
(1) Placement, (2) Global routing, (3) Layer assignment, (4) Track assignment, and (5) Detailed
routing. Among them, layer/track assignment assigns major trunks of wire segments
to specific layers/tracks in order to guide the underlying detailed router. In this dissertation,
we have proposed techniques to handle coupling capacitance at the layer/track assignment
stages, antenna effects at layer assignment, and delay variation at the ECO (Engineering
Change Order) placement stage. More specifically, at layer assignment, we
have proposed an improved probabilistic model to quickly estimate the amount of coupling
capacitance for timing optimization. Antenna effects are also handled at layer assignment
through a linear-time tree partitioning algorithm. At the track assignment stage, timing is
further optimized using a graph-based technique. In addition, we have proposed a novel
gate splitting methodology to reduce delay variation in the ECO placement considering
spatial correlations. Experimental results on benchmark circuits showed the effectiveness
of our approaches.
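The abstract does not spell out the improved probabilistic coupling model, so the sketch below only illustrates the general idea of estimating expected coupling before tracks are fixed; the uniform track assignment, the adjacent-track-only coupling, and the unit capacitance value are assumptions made for illustration, not the dissertation's model.

```python
def span_overlap(a, b):
    """Overlap length of two wire spans a = (x1, x2), b = (x1, x2) along the routing direction."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def expected_coupling(span_i, span_j, num_tracks, c_unit_fF_per_um):
    """Expected coupling capacitance (fF) between two nets in the same panel,
    before their tracks are fixed. Assumes each net is equally likely to land
    on any track and that coupling only matters between adjacent tracks."""
    overlap_um = span_overlap(span_i, span_j)
    p_adjacent = 2.0 / num_tracks          # chance two uniformly assigned tracks are neighbours
    return c_unit_fF_per_um * overlap_um * p_adjacent

# Two trunks overlapping for 120 um in a 10-track panel, 0.05 fF/um sidewall coupling (assumed).
print(expected_coupling((0, 200), (80, 300), num_tracks=10, c_unit_fF_per_um=0.05))
```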
|
12 |
Robust, Low Power, Discrete Gate Sizing
Casagrande, Anthony Joseph, 01 January 2015 (has links)
Ultra-deep submicron circuits require accurate modeling of gate delay in order to meet aggressive timing constraints. With the lack of statistical data, variability due to the mechanical manufacturing process and its chemical properties poses a challenging problem. Discrete gate sizing requires (i) accurate models that take into account random parametric variation and (ii) a fair allocation of resources to optimize the solution. The proposed GTFUZZ gate sizing algorithm handles both tasks. Gate sizing is modeled as a resource allocation problem using fuzzy game theory. Delay is modeled as a constraint and power is optimized in this algorithm. In GTFUZZ, delay is modeled as a fuzzy goal with fuzzy parameters to
capture the imprecision of gate delay early in the design phase when extensive empirical data is absent. Dynamic power is modeled as a fuzzy goal without varying coefficients. The fuzzy goals provide a flexible platform for multimetric optimization. The robust GTFUZZ algorithm is compared against fuzzy linear programming (FLP) and deterministic worst-case FLP (DWCFLP) algorithms. The benchmark circuits are first synthesized, placed, routed, and optimized for performance using the Synopsys University 32/28nm standard cell library and technology files. Operating at the optimized clock frequency, results show an average
power reduction of about 20% versus DWCFLP and 9% against variation-aware gate sizing with FLP. Timing and timing yield are verified by both Synopsys PrimeTime and Monte Carlo simulations of the critical paths using HSPICE.
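As a rough illustration of optimizing a gate against fuzzy delay and power goals, the sketch below uses linear membership functions and a max-min selection rule for a single gate; the membership shapes, aspiration/tolerance levels, and candidate sizes are assumed for illustration and do not reproduce the game-theoretic GTFUZZ formulation.

```python
# Sketch of fuzzy-goal gate sizing with a max-min decision rule (illustrative only).
def membership(value, aspiration, tolerance):
    """Degree of satisfaction in [0, 1]: 1 at/below the aspiration level,
    0 at/above the tolerance level, linear in between."""
    if value <= aspiration:
        return 1.0
    if value >= tolerance:
        return 0.0
    return (tolerance - value) / (tolerance - aspiration)

# Hypothetical discrete sizes for one gate: (size, delay in ps, dynamic power in uW).
candidates = [("x1", 95.0, 1.0), ("x2", 70.0, 1.9), ("x4", 55.0, 3.6)]

def pick_size(cands, delay_goal=(60.0, 90.0), power_goal=(1.5, 4.0)):
    """Choose the size whose worst (minimum) goal satisfaction is largest."""
    best, best_sat = None, -1.0
    for name, delay, power in cands:
        sat = min(membership(delay, *delay_goal), membership(power, *power_goal))
        if sat > best_sat:
            best, best_sat = name, sat
    return best, best_sat

print(pick_size(candidates))   # the x2 size gives the best worst-case satisfaction (~0.67)
```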
|
13 |
ROBUST DEVICE MODELING WITH PROCESS VARIATION CONSIDERATION AND DIMENSION REDUCTION TECHNIQUES
Mitev, Alexander, January 2009 (has links)
Today's high levels of device integration affect the design process in several ways. Process variations (PV) significantly impact circuit performance, so a major consideration is relating production yield to the technology-driven manufacturing variations. Traditional Monte Carlo sampling analysis has become computationally ineffective because it relies on complex device models with large parameter sets; the higher integration level forces designers to deal with numerous local and global parameters and can bottleneck efforts to achieve fast design cycles.
Statistical analysis can be facilitated by directly estimating the relation of circuit metrics to the set of PV parameters. Traditional transistor models use a large number of parameters and equations, yet many performance factors can be related to a small parameter set. A new macro model is proposed for CMOS complementary gates, in which all static and dynamic characteristics are related to a set of Finite Points (FP) of the device I-V curves. All timing- and power-related quantities can be predicted by evaluating the model equations, and the dynamic characterization relies on the charge distribution at each node. The effect of all process variations is captured by characterizing the FP sensitivities. Overall, the new gate model uses the same computational structure for different devices in a far simpler computational form.
Large-scale circuit analysis based on the FP models can be used to estimate various global performance parameters. Timing performance (STA) is calculated node to node, where at each step a new set of parameters (including PV) is introduced. Motivated by the limitations of traditional PCA, we reduce the overall computational cost with a new, efficient reduction technique. The input-output correlation of the performance-parameter model turns out to be the essential information for reduction. If the model is unknown, the Sliced Inverse Regression (SIR) technique can be used to determine the effective dimension-reduction (e.d.r.) space; alternatively, if an empirical analytic expression for the performance is known, the e.d.r. space is found by the Principal Hessian Directions method. In a theoretical sense, the inverse reduction technique reduces parameters according to their statistical significance.
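Since the abstract names Sliced Inverse Regression as the tool for finding the e.d.r. space when no analytic performance model is available, a minimal SIR sketch is given below; the synthetic parameter set and performance function are placeholders, not the thesis's device data.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced Inverse Regression: estimate e.d.r. directions from samples of
    process parameters X (n x p) and a scalar performance metric y (n,)."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Whiten the parameters: Z = (X - mu) @ cov^(-1/2).
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice the sorted response and average the whitened parameters per slice.
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original parameter space.
    w, v = np.linalg.eigh(M)
    dirs = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

# Toy example: 8 "process parameters", but the performance metric depends on a
# single linear combination of them (a one-dimensional e.d.r. space).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))
beta = np.array([1.0, -0.5, 0, 0, 0, 0, 0, 0.25])
y = np.tanh(X @ beta) + 0.05 * rng.normal(size=5000)
print(sir_directions(X, y, n_dirs=1).ravel())   # approximately proportional to beta
```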
|
14 |
A design research study of the effects of process variation on the performance and functionality of a multi-input neural sensor (MINS) IC for neural signal recording
Wang, Zeqi, January 2014 (has links)
(Thesis: M.Sc.Eng.) / In recent years, the effects of process variation have become increasingly severe as technology has scaled down in lithographic dimension. This problem has also affected the operation of the MINS IC (Multi-Input Neural Sensor) designed at Boston University by Dr. Lu Wang. The MINS chip was designed for both in-vitro and in-vivo applications, measuring and recording neural action potentials and local field potentials in the brains of animals. The MINS chip has been tested and is fully functional; however, it suffers from a serious problem of output level shifting from input to input due to process variation. This thesis focuses on the study of the effect of process variation on the MINS chip and on a proposed method for process variation correction.
The effect of process variation on the MINS chip is an extremely serious issue, given the large amount of gain required to sense and record neural signals, especially local field potentials, which have input voltages on the order of 10-100 μV. The previous version of the printed circuit board designed to correct for process variation can center the output between the upper and lower rails by measuring the required column bias current independently for each of the 256 inputs and storing these values on an FPGA. However, this process variation correction procedure has jeopardized the ability to scan at the rate required to record action potentials (spikes).
This thesis has two parts. The first part is a study of the effects of process variation on the functionality of the 8HP MINS chip through Gaussian distribution analysis. The second part is the design of a new printed circuit board to increase the speed of the process variation correction procedure in scan mode and, as a further goal, to center the output level in both stop mode and scan mode.
The study of the effects of process variation on MINS utilizes circuit simulations with the IBM 8HP device models and design kit, using extracted models based on the MINS chip layout. According to the Monte Carlo sampling analysis, only 12 out of 200 samples show an output level near the center, with 65% of the samples having output voltages at the upper or lower rail. Moreover, a study of 1000 cases shows that a column bias current of about 105 μA and/or a bias voltage of 1.212 V, with 3σ values of 3.798 μA and 0.131 V respectively, is needed to center the output.
A newly developed version of the variation correction PCB has been designed and fabricated, utilizing a charge-pump methodology to quickly charge up (or discharge) the large stabilization capacitor (4.7 μF) placed on the Ibias0 node of the existing MINS PCB for stability. Given that the Ibias0 current on MINS is only around 100 μA, a large current on the order of 250-500 mA is used in order to achieve the desired scan rate on the chip. A ping-pong approach is used, with two 4.7 μF capacitors, so that one can be readied while the other is in use for testing. The PCB design also includes the controls, comparators, and logic needed to terminate the charging/discharging operation at exactly the correct voltage on the Ibias0 node for each of the 256 inputs. On this new board, the required voltage at the Ibias0 node (Vbias0) to center the output, rather than Ibias0 itself, will be measured and stored for each of the 256 inputs in both stop mode and scan mode.
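A minimal sketch of the kind of Gaussian/Monte Carlo analysis reported above, reusing the stated Vbias0 statistics; the ±10 mV centering window is a hypothetical tolerance chosen only for illustration, not a specification from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_vbias = 1.212            # V, required bias voltage to center the output (from the abstract)
sigma_vbias = 0.131 / 3.0     # convert the reported 3-sigma spread into a standard deviation

samples = rng.normal(mean_vbias, sigma_vbias, size=200)   # one hypothetical 200-sample run
tolerance = 0.010             # hypothetical +/-10 mV acceptance window around the nominal bias
centered = np.abs(samples - mean_vbias) < tolerance
print(f"sigma = {sigma_vbias * 1e3:.1f} mV; "
      f"{centered.sum()} of {samples.size} samples fall within +/-10 mV of the nominal bias")
```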
|
15 |
Fiabilisation de convertisseurs analogique-numérique à modulation Sigma-Delta / Reliability of analog-to-digital Sigma-Delta converters
Cai, Hao, 09 September 2013 (links)
This thesis addresses the reliability of integrated circuits in 65 nm CMOS technology, in particular design for reliability, reliability simulation, and reliability improvement. The dominant ageing mechanisms, HCI and NBTI, as well as process variation are studied and evaluated quantitatively at the circuit and system levels, and these methods are applied to Sigma-Delta modulators in order to determine the reliability of this widely used type of component. / This thesis concentrates on reliability-aware methodology development, reliability analysis based on simulation, and failure prediction of CMOS 65 nm analog and mixed-signal (AMS) ICs. Sigma-Delta modulators are taken as the object of the reliability study at the system level. A hierarchical statistical approach is proposed to analyze the performance of Sigma-Delta modulators under ageing effects and process variations, with statistical methods combined into this analysis flow.
|
16 |
Global Interconnects in the Presence of Uncertainty
Benito, Ibis D, 01 January 2008 (has links) (PDF)
Global interconnect reliability is becoming a bigger issue as we scale down further into the submicron regime. As transistor dimensions shrink, variations in the manufacturing process and in temperature may cause undesired behavior and, as a result, compromise performance. This work characterizes the effects of such variations to provide designers with a guideline for making designs tolerant to these variations while benefiting from tighter design margins.
Since interconnects contribute most of the delay and power on a chip, interconnect performance becomes a primary design issue. One of the main concerns when considering physical transistor dimension variations is the effect on delay. Due to smaller transistor dimensions, the photolithographic process may produce transistors with significant deviations from the ideal physical dimensions. Such variations cause delay uncertainty, which can lead to over- or underestimation in the design phase. This work examines interconnects to establish a guideline for the effect that process variations have on delay. A repeated interconnect is analyzed and the effects of physical device variations on delay are observed. Given the delay distribution in the presence of Leff variation, a supply voltage assignment technique is proposed to correct the observed deviation from the nominal delay on a long, repeated interconnect. This technique significantly tightens the delay distribution with negligible power overhead.
After looking at static variation effects on interconnect performance, this thesis addresses thermal variations on global signals, which cause delay degradation and may lead to timing failures. Given the presence of a large thermal gradient along a clock signal in a data path clocked by two leaves of an H-tree, several thermal scenarios which can compromise timing are discussed. A buffer-based skew compensation technique is proposed to correct the effect of thermal and manufacturing variations on this system.
Having characterized repeated interconnect performance under process variations, the bandwidth of the line can be more effectively utilized by using a technique called phase coding. Phase coded interconnects are introduced in the context of using them once an interconnect has been adequately modeled in the presence of variations.
With guidelines quantifying the effects of process variations on interconnect techniques and careful characterization, designers can factor these considerations into their design process, reducing the variation from the nominal expected behavior and allowing for smaller design margins. This will lead to more reliable products as we advance into future technologies and transistor dimensions get smaller.
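A first-order sketch of the supply voltage assignment idea described above: Leff variation perturbs repeater drive strength and slows a long repeated line, and a small discrete Vdd reassignment pulls its delay back toward nominal. The alpha-power drive model and every device and wire number below are assumptions for illustration, not values from the thesis.

```python
import numpy as np

ALPHA, VTH = 1.3, 0.3                    # alpha-power-law exponent and threshold voltage (V), assumed
R0, LEFF0, VDD0 = 2.0e3, 50e-9, 1.0      # nominal driver resistance (ohm), Leff (m), Vdd (V), assumed
CWIRE_SEG, RWIRE_SEG, CIN = 20e-15, 100.0, 5e-15   # per-segment wire C/R and repeater input C, assumed
N_STAGES = 20                            # number of repeater stages on the line

def rdrv(leff, vdd):
    """Effective driver resistance: weaker for longer Leff, stronger at higher Vdd."""
    return R0 * (leff / LEFF0) * (vdd / VDD0) * ((VDD0 - VTH) / (vdd - VTH)) ** ALPHA

def line_delay(leff, vdd):
    """Elmore-style delay of the repeated interconnect, in seconds."""
    stage = 0.69 * rdrv(leff, vdd) * (CWIRE_SEG + CIN) + 0.38 * RWIRE_SEG * CWIRE_SEG
    return N_STAGES * stage

nominal = line_delay(LEFF0, VDD0)
slow = line_delay(LEFF0 * 1.1, VDD0)     # a die with 10% longer Leff is slower than nominal
# Pick the discrete supply assignment that brings the slow die closest to the nominal delay.
best_vdd = min(np.arange(1.0, 1.21, 0.05),
               key=lambda v: abs(line_delay(LEFF0 * 1.1, v) - nominal))
print(f"nominal {nominal * 1e12:.0f} ps, slow corner {slow * 1e12:.0f} ps, "
      f"reassigned Vdd = {best_vdd:.2f} V -> {line_delay(LEFF0 * 1.1, best_vdd) * 1e12:.0f} ps")
```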
|
17 |
On-Chip True Random Number Generation in Nanometer CMOS
Suresh, Vikram Belur, 01 January 2012 (has links) (PDF)
On-chip True Random Number Generators (TRNGs) form an integral part of a number of cryptographic systems in multi-core processors, communication networks, and RFID. A TRNG provides random keys, device IDs, and seeds for Pseudo Random Number Generators (PRNGs). These circuits, which harness physical random variations like thermal noise or stray electromagnetic waves, are ideally expected to generate random bits with very high entropy and zero correlation. However, the progression to advanced semiconductor manufacturing processes has brought various challenges to TRNG design. Increasing variations in the fabrication process and the sensitivity of transistors to operating conditions such as temperature and supply voltage have a significant effect on the efficiency of TRNGs designed in sub-micron technologies. Poorly designed random number generators also provide an avenue for attackers to break the security of a cryptographic system; process variation and operating conditions may be used as effective tools of attack against a TRNG. This work makes a comprehensive study of the effect of process variation on a metastability-based TRNG designed in a deep sub-micron technology. Furthermore, the effects of operating temperature and supply voltage on TRNG performance are also analyzed. To mitigate these issues we study entropy extraction mechanisms based both on an algorithmic approach and on circuit tuning, and compare these techniques by their tolerance to process variation and the energy overhead of correction. We combine the two approaches to efficiently perform self-calibration, using a hybrid of algorithmic correction and circuit tuning to compensate for the effect of variations. The proposed technique provides a fair trade-off between the degree of entropy extraction and the overhead in terms of area and energy, introducing minimal correlation in the output of the TRNG. Besides studying the effect of process variation and operating conditions on the TRNG, we also propose to study possible attack models on a TRNG. Finally, we propose a probabilistic approach to the design and analysis of TRNGs, using a stochastic model of the circuit operation and incorporating the random source in thermal noise. All analysis is done for a 45 nm technology using the NCSU PDK transistor models; the simulation platform is developed using HSPICE and a Perl-based automation flow.
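The abstract contrasts algorithmic entropy extraction with circuit tuning but does not name a specific corrector; the classic Von Neumann extractor below is shown only as a generic example of the algorithmic approach and the throughput cost it incurs.

```python
import random

def von_neumann(bits):
    """Debias a bit stream: for each non-overlapping pair, emit the first bit
    if the pair is 01 or 10, and discard 00 and 11 pairs."""
    return [b0 for b0, b1 in zip(bits[0::2], bits[1::2]) if b0 != b1]

# A biased raw source, e.g. a metastability-based TRNG skewed toward 1 by process variation.
raw = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
corrected = von_neumann(raw)
print(f"raw bias {sum(raw) / len(raw):.3f}, corrected bias {sum(corrected) / len(corrected):.3f}, "
      f"throughput {len(corrected) / len(raw):.2f} output bits per raw bit")
```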
|
18 |
In-process monitoring of micromoulding - assessment of process variation
Whiteside, Benjamin R., Coates, Philip D., Martyn, Michael T., January 2005 (links)
No / Advances in micromoulding technology are leading to complex, net-shape products having sub-milligramme masses with micro-scale surface features, in a range of polymer and nano-composite materials. For such small components subjected to the extreme stress, strain-rate, and temperature gradients encountered in the micromoulding process, detailed process monitoring is desirable to highlight variations in moulding conditions and assist in creating a viable manufacturing process with acceptable quality products. This paper covers the implementation of a suite of sensors on a commercial micromoulding machine and detailed computer monitoring during processing of a polyacetal component over a range of processing conditions. The results determined that cavity pressure curve integral data provides the most sensitive factor for characterisation of a moulding process of a 0.34 mm³ (0.49 mg) product. The repeatability of the process is directly compared with that of a 15.6 mm³ (22.2 mg) product and shown to be inferior. DSC measurements of the whole products indicated little variation in average crystallinity of the products manufactured over a mould temperature range of 30 to 130 °C.
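A small sketch of the cavity-pressure-integral metric highlighted above: integrate each shot's pressure trace over the sampling window and track the shot-to-shot spread as a repeatability measure. The synthetic traces, noise level, and shot-to-shot scale variation are placeholders for real sensor data.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 0.5, 500)           # s, assumed sampling window for one moulding cycle
dt = t[1] - t[0]

def synthetic_trace(scale):
    """Placeholder cavity pressure curve (bar): fast rise, exponential decay, sensor noise."""
    return scale * 400.0 * (t / 0.05) * np.exp(1.0 - t / 0.05) + rng.normal(0.0, 2.0, t.size)

# Integral of each shot's pressure curve (rectangle rule), over 50 simulated shots.
integrals = [np.sum(synthetic_trace(rng.normal(1.0, 0.03))) * dt for _ in range(50)]
print(f"mean integral {np.mean(integrals):.1f} bar*s, "
      f"coefficient of variation {100 * np.std(integrals) / np.mean(integrals):.1f}% "
      f"(shot-to-shot repeatability metric)")
```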
|
19 |
DEVELOPMENT OF PROCESS VARIATION TOLERANT STANDARD CELLS
THAKORE, PRIYANKA, 03 July 2007 (links)
No description available.
|
20 |
Low-Power Clocking and Circuit Techniques for Leakage and Process Variation Compensation
Hansson, Martin, January 2008 (links)
Over the last four decades the integrated circuit industry has evolved at a tremendous pace. This success has been driven by the scaling of device sizes, leading to ever higher integration capability, which has enabled more functionality and higher performance. The impressive evolution of modern high-performance microprocessors has resulted in chips with over a billion transistors as well as multi-GHz clock frequencies. As the silicon integrated circuit industry moves further into the nanometer regime, scaling of device sizes is still predicted to continue, at least into the near future. However, there are a number of challenges to overcome in order to continue increasing integration at the same pace. Three of the major challenges are increasing power dissipation due to the clocking of synchronous circuits, increasing leakage currents causing growing static power dissipation and reduced circuit robustness, and increasing spread in circuit parameters due to physical limitations in the manufacturing process. This thesis presents a number of circuit techniques that aim to help with all three of these challenges.
Power dissipation related to clock generation and distribution is identified as the dominating contributor to the total active power dissipation of multi-GHz systems. As the complexity and size of synchronous systems continue to increase, clock power will also increase, making novel power reduction techniques absolutely crucial in future VLSI design. In this thesis an energy-recovering clocking technique aimed at reducing the total chip clock power is presented. Theoretical analysis shows that the technique enables considerable clock power savings. Moreover, the impact of the proposed technique on conventional flip-flop topologies is studied. Measurements on an experimental chip design prove the technique, showing more than 56% lower clock power compared to conventional clock distribution techniques at clock frequencies up to 1.76 GHz.
Static leakage power dissipation is a considerable contributor to the total power dissipation. This power is dissipated even by circuits that are idle and not contributing to the operation. Hence, with the increasing number of transistors on each chip, circuit techniques that reduce static leakage currents are necessary. In this thesis a technique is discussed that reduces the static leakage current in a microcode ROM, resulting in a 30% reduction of the leakage power with no area or performance penalty.
Apart from increasing static power dissipation, the increasing leakage currents also affect the robustness constraints of circuits. This is important for regenerative circuits like flip-flops and latches, where a state changed by leakage leads to loss of functionality. It is a serious issue especially for high-performance dynamic circuits, which are attractive for limiting the clock load in a design; with increasing leakage, however, the robustness of dynamic circuits degrades dramatically. To improve the leakage robustness of sub-90 nm low-clock-load dynamic flip-flops, a novel keeper technique is proposed. The proposed keeper utilizes a scalable and simple leakage compensation technique, implemented on a reconfigurable flip-flop. At normal clock frequencies the flip-flop is configured in dynamic mode and reduces the clock power by 25% due to the lower clock load. During any low-frequency operation, the flip-flop is configured as a static flip-flop, retaining full functional robustness.
As scaling continues further towards fundamental atomistic limits, several challenges arise for continued industrial device integration. Large inaccuracies in the lithography process, impurities in manufacturing, and reduced control of dopant levels during implantation all cause increasing statistical spread in the performance, power, and robustness of devices. In order to compensate for the impact of these increasingly large process variations on latches and flip-flops, a reconfigurable keeper technique is presented in this thesis. In contrast to traditional design for worst-case process corners, a variable keeper circuit is utilized. The proposed reconfigurable keeper preserves the robustness of storage nodes across the process corners without degrading overall chip performance.
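A back-of-the-envelope sketch of why energy-recovering clocking saves power: driving the clock net with a slow ramp dissipates roughly (RC/T)·CV² per edge instead of the conventional ½CV². The capacitance, resistance, and supply values below are illustrative assumptions; only the 1.76 GHz operating frequency comes from the abstract.

```python
# First-order comparison of conventional vs. energy-recovering (adiabatic) clock drive.
C = 50e-12    # total clock capacitance (F), assumed
V = 1.1       # clock swing / supply voltage (V), assumed
R = 1.0       # effective series resistance of the clock driver network (ohm), assumed
f = 1.76e9    # clock frequency reported for the test chip (Hz)
T = 0.5 / f   # ramp time of roughly half a clock period

e_conv = 0.5 * C * V * V                 # energy dissipated per edge, conventional square-wave drive
e_adia = (R * C / T) * C * V * V         # per edge, first-order adiabatic estimate (valid for T >> R*C)
print(f"conventional clock power: {2 * e_conv * f * 1e3:.0f} mW, "
      f"ideal energy-recovering drive: {2 * e_adia * f * 1e3:.0f} mW, "
      f"saving {100 * (1 - e_adia / e_conv):.0f}%")
```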
|