31. GPU Based Lithography Simulation and OPC. Subramany, Lokesh. 01 January 2011.
Optical Proximity Correction (OPC) belongs to a family of techniques called Resolution Enhancement Techniques (RET). These techniques are employed to increase the resolution of a lithography system and improve the quality of the printed pattern. Pattern fidelity is degraded by the disparity between the wavelength of light used in optical lithography and the required size of the printed features. To improve the aerial image, the mask is modified; this process is called OPC. OPC is an iterative process in which a mask shape is modified to decrease the disparity between the required and printed shapes. After each modification the chip is simulated again to quantify the effect of the change in the mask. Lithography simulation is therefore an integral part of OPC, and a fast lithography simulator directly decreases the time required to perform OPC on an entire chip.
A lithography simulator that uses wavelets to compute the aerial image has previously been developed. In this thesis I extensively modify this simulator so that it executes on a Graphics Processing Unit (GPU). The result is a lithography simulator that is considerably faster than other lithography simulators and, when used in OPC, leads to drastically reduced runtimes. The other work presented in this thesis is a fast OPC tool that performs OPC on circuits faster than other tools. We further focus our attention on metrics such as runtime, edge placement error, and shot size, and present schemes to improve these metrics.
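To make the iterative loop described above concrete, the following is a minimal per-edge OPC sketch in Python. The simulate_edge_position model, step size, and tolerance are invented placeholders; a real flow would call a full lithography simulator (such as the wavelet-based one described here) instead.

```python
# Illustrative OPC loop: nudge a mask edge until the simulated contour
# matches the target within tolerance. The lithography model here is a
# stand-in; a real simulator would replace `simulate_edge_position`.

def simulate_edge_position(edge_bias):
    # Placeholder optical/resist model: printed edge lags the mask edge.
    return 0.8 * edge_bias - 2.0   # nm

def opc_edge(target_position=0.0, step=0.5, tol=0.1, max_iters=100):
    bias = 0.0
    for _ in range(max_iters):
        printed = simulate_edge_position(bias)   # lithography simulation
        epe = printed - target_position          # edge placement error
        if abs(epe) < tol:
            break
        bias -= step * epe                       # move mask edge against the error
    return bias, epe

if __name__ == "__main__":
    bias, epe = opc_edge()
    print(f"final mask edge bias = {bias:.2f} nm, residual EPE = {epe:.2f} nm")
```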
32. Process Variation-Aware Timing Optimization with Load Balance of Multiple Paths in Dynamic and Mixed-Static-Dynamic CMOS Logic. Yelamarthi, Kumar. 23 June 2008.
No description available.
33. Study of Physical Unclonable Functions at Low Voltage on FPGA. Priya, Kanu. 15 September 2011.
Physical Unclonable Functions (PUFs) provide a secure, power-efficient, and non-volatile means of chip identification. They are analogous to one-way functions that are easy to create but practically impossible to duplicate. PUFs offer solutions to many FPGA (Field Programmable Gate Array) security issues such as intellectual property protection, chip authentication, cryptographic key generation, and trusted computing. Moreover, the FPGA, having evolved into an important platform for flexible logic circuits, presents an attractive medium for PUF implementation to ensure its security.
In this thesis, we explore the behavior of the RO-PUF (Ring Oscillator Physical Unclonable Function) on an FPGA when subjected to low supply voltages. We investigate its stability under environmental variations, such as temperature changes, to characterize its effectiveness. Experimental results show that the spread of RO frequencies widens as the voltage is lowered, so improved stability would be expected. However, due to inherent circuit challenges of the FPGA at low voltage, the RO-PUF fails to generate a stable response. We observe that a larger number of RO frequency crossovers and counter-value fluctuations at low voltage lead to instability in the PUF. We also examine different architectural components of the FPGA to explain the unstable behavior of the RO-PUF. We reason that the FPGA does not sustain data reliably at low voltage and therefore produces unreliable outputs; thus, a low-voltage FPGA is required to verify the stability of the RO-PUF. To support our case, we review research on low-power FPGA applications. We conclude that the FPGA, though flexible, is power inefficient and requires optimization at the architectural and circuit levels to generate stable responses at low voltages. / Master of Science
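For readers unfamiliar with RO-PUFs, the sketch below illustrates, under assumed numbers, how response bits are typically derived by pairwise comparison of ring-oscillator counts, and how growing measurement noise at low voltage can cause frequency crossovers that flip bits. The frequencies and jitter values are illustrative, not measured data from this thesis.

```python
import random

def ro_count(nominal_freq_mhz, jitter_mhz):
    # Counter value over a fixed window; jitter models measurement noise,
    # which grows as the supply voltage drops.
    return nominal_freq_mhz + random.gauss(0.0, jitter_mhz)

def puf_response(ro_freqs, jitter_mhz):
    # Compare adjacent ROs: one response bit per pair.
    bits = []
    for f_a, f_b in zip(ro_freqs[0::2], ro_freqs[1::2]):
        bits.append(1 if ro_count(f_a, jitter_mhz) > ro_count(f_b, jitter_mhz) else 0)
    return bits

# Hypothetical RO frequencies (MHz) on one FPGA instance.
freqs = [201.3, 200.9, 198.7, 199.5, 202.1, 201.8, 200.2, 200.4]

nominal = puf_response(freqs, jitter_mhz=0.05)   # nominal supply: stable bits
low_v   = puf_response(freqs, jitter_mhz=1.5)    # low voltage: crossovers flip bits
print("nominal-voltage response:", nominal)
print("low-voltage response:   ", low_v)
```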
34. Intelligent real-time environment and process adaptive radio frequency front-ends for ultra low power applications. Banerjee, Debashis. 21 September 2015.
This thesis explores the design of process-tolerant, use-aware radio-frequency front-ends. First, fuzzy-logic and equation-based controllers that can adapt to multi-dimensional channel conditions are proposed. Second, the thesis demonstrates that adaptive systems can have multiple modes of operation depending on the throughput requirements of the system. Two such modes are demonstrated: one optimizing the energy per bit (energy-priority mode) and another achieving the lowest power consumption at the highest throughput (data-priority mode). Finally, to achieve process-tolerant, channel-adaptive operation, a self-learning methodology is proposed that learns the optimal reconfiguration setting for the system on the fly. Implications of the research are discussed and avenues for future research are proposed.
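A small sketch of the two operating modes, using hypothetical configuration numbers rather than anything from the thesis: one selection rule minimizes energy per bit, the other maximizes throughput within a power budget.

```python
# Hypothetical reconfiguration settings: (name, power in mW, throughput in Mb/s).
configs = [
    ("low_bias",  4.0,  10.0),
    ("mid_bias",  9.0,  40.0),
    ("high_bias", 20.0, 60.0),
]

def energy_per_bit(power_mw, throughput_mbps):
    # mW / (Mb/s) = nJ per bit
    return power_mw / throughput_mbps

# Energy-priority mode: minimize energy per bit.
energy_mode = min(configs, key=lambda c: energy_per_bit(c[1], c[2]))

# Data-priority mode: highest throughput that still fits the power budget.
power_budget_mw = 25.0
data_mode = max((c for c in configs if c[1] <= power_budget_mw), key=lambda c: c[2])

print("energy-priority choice:", energy_mode[0])
print("data-priority choice:  ", data_mode[0])
```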
35. Developing a basis for characterizing precision of estimates produced from non-probability samples on continuous domains. Cooper, Cynthia. 20 February 2006.
Graduation date: 2006 / This research addresses sample-process variance estimation on continuous domains, and for non-probability samples in particular. The motivation is a scenario in which a program has collected non-probability samples and there is interest in characterizing how much an extrapolation to the domain would vary given similarly arranged collections of observations. This research does not address the risk of bias, and a key assumption is that the observations could represent the response on the domain of interest; this excludes any hot-spot monitoring programs. The research is presented as a collection of three manuscripts. The first (to be published in Environmetrics (2006)) reviews and compares model- and design-based approaches for sampling and estimation in the context of continuous domains and promotes a model-assisted sample-process variance estimator. The next two manuscripts are written as companion papers. With the objective of quantifying the uncertainty of an estimator based on a non-probability sample, the proposed approach is to first characterize a class of sets of locations that are similarly arranged to the collection of locations in the non-probability sample, and then to predict the variability of an estimate over that class of sets using the covariance structure indicated by the non-probability sample (assuming that covariance structure is indicative of the covariance structure on the study region). The first of the companion papers discusses characterizing classes of similarly arranged sets through the specification of a metric density. Goodness-of-fit tests are demonstrated on several types of patterns (dispersed, random, and clustered) and on a non-probability collection of locations surveyed by the Oregon Department of Fish & Wildlife in the Alsea River basin in Oregon. The second paper addresses predicting the variability of an estimate over sets in a class of sets, using a Monte Carlo process on a simulated response with an appropriate covariance structure.
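The following is a hedged numerical sketch of the companion-paper idea: simulate responses with an assumed covariance model over many similarly arranged sets of locations and summarize how much the extrapolation estimate varies. The exponential covariance, the clustered location generator, and the use of the sample mean as the estimator are all assumptions for illustration, not the dissertation's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_cov(locs, sill=1.0, range_=0.3):
    # Simple isotropic exponential covariance between 2-D locations.
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    return sill * np.exp(-d / range_)

def similarly_arranged_set(n=30):
    # Stand-in for drawing a set of locations from the characterized class;
    # here, clustered points on the unit square.
    centers = rng.uniform(0, 1, size=(5, 2))
    pts = centers[rng.integers(0, 5, n)] + rng.normal(0, 0.05, size=(n, 2))
    return np.clip(pts, 0, 1)

estimates = []
for _ in range(500):                       # Monte Carlo over the class of sets
    locs = similarly_arranged_set()
    cov = exponential_cov(locs)
    y = rng.multivariate_normal(np.zeros(len(locs)), cov)   # simulated response
    estimates.append(y.mean())             # extrapolation estimator (sample mean)

print("predicted sample-process variance of the mean:", np.var(estimates))
```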
36. Development of Robust Analog and Mixed-Signal Circuits in the Presence of Process-Voltage-Temperature Variations. Onabajo, Marvin Olufemi. May 2011.
Continued improvements of transceiver systems-on-a-chip play a key role in the advancement of mobile telecommunication products as well as wireless systems in biomedical and remote-sensing applications. This dissertation addresses the problems of escalating CMOS process variability and system complexity that diminish the reliability and testability of integrated systems, especially in the analog and mixed-signal blocks. The proposed design techniques and circuit-level attributes are aligned with current built-in testing and self-calibration trends for integrated transceivers. In this work, the main focus is on enhancing the performance of analog and mixed-signal blocks with digitally adjustable elements as well as with automatic analog tuning circuits, which are experimentally applied to conventional blocks in the receiver path in order to demonstrate the concepts.
The use of digitally controllable elements to compensate for variations is exemplified with two circuits. First, a distortion cancellation method for baseband operational transconductance amplifiers is proposed that enables a third-order intermodulation (IM3) improvement of up to 22dB. Fabricated in a 0.13µm CMOS process with a 1.2V supply, a transconductance-capacitor lowpass filter with the linearized amplifiers has a measured IM3 below -70dB (with a 0.2V peak-to-peak input signal) and 54.5dB dynamic range over its 195MHz bandwidth. The second circuit is a 3-bit two-step quantizer with adjustable reference levels, designed and fabricated in 0.18µm CMOS technology as part of a continuous-time Sigma-Delta analog-to-digital converter system. With 5mV resolution at a 400MHz sampling frequency, the quantizer's static power dissipation is 24mW and its die area is 0.4mm^2.
An alternative to electrical power detectors is introduced by outlining a strategy for built-in testing of analog circuits with on-chip temperature sensors. Comparisons of an amplifier's measurement results at 1GHz with the measured DC voltage output of an on-chip temperature sensor show that the amplifier's power dissipation can be monitored and its 1-dB compression point can be estimated with less than 1dB error. The sensor has a tunable sensitivity up to 200mV/mW, a power detection range measured up to 16mW, and it occupies a die area of 0.012mm^2 in standard 0.18µm CMOS technology.
Finally, an analog calibration technique is discussed to lessen the mismatch between transistors in the differential high-frequency signal path of analog CMOS circuits. The proposed methodology involves auxiliary transistors that sense the existing mismatch as part of a feedback loop for error minimization. It was assessed by performing statistical Monte Carlo simulations of a differential amplifier and a double-balanced mixer designed in CMOS technologies.
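As a rough illustration of this kind of Monte Carlo mismatch assessment, the behavioral sketch below draws a differential pair's input-referred offset from a Gaussian mismatch model and abstracts the feedback calibration as a residual factor. The sigma and residual values are invented, not results from the dissertation.

```python
import random
import statistics

def offset_samples(n=10_000, sigma_vth_mv=5.0, calibrated=False, residual=0.1):
    # Input-referred offset of a differential pair dominated by Vth mismatch;
    # the feedback calibration is abstracted as leaving a small residual fraction.
    samples = []
    for _ in range(n):
        offset = random.gauss(0.0, sigma_vth_mv)
        if calibrated:
            offset *= residual
        samples.append(offset)
    return samples

raw = offset_samples()
cal = offset_samples(calibrated=True)
print(f"sigma(offset) uncalibrated: {statistics.stdev(raw):.2f} mV")
print(f"sigma(offset) calibrated:   {statistics.stdev(cal):.2f} mV")
```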
37. Refactoring-based statistical timing analysis and its applications to robust design and test synthesis. Chung, Jae Yong. 11 July 2012.
Technology scaling in the nanometer era comes with a significant amount of process variation, leading to lower yield and new types of defective parts. These challenges necessitate robust design to ensure adequate yield, and smarter testing to screen out bad chips. Statistical static timing analysis (SSTA) enables this but suffers from crude approximation algorithms.
This dissertation first studies the underlying theories of timing graphs and proposes two fundamental techniques that enhance the core statistical timing algorithms. We first propose the refactoring technique to capture topological correlation. Static timing analysis is based on levelized breadth-first traversal, a fundamental graph traversal technique that has been used for static timing analysis over the past decades. We show that there are numerous alternatives to this traversal because of an algebraic property, the distributivity of addition over maximum. This new interpretation extends the degrees of freedom of static timing analysis, which we exploit to improve the accuracy of SSTA. We also propose a novel operator for computing joint probabilities in SSTA. Computing joint probabilities is common in SSTA applications but is usually done with the max operator, which introduces substantial error due to its linear approximation. The new operator provides significantly higher accuracy at a small cost in run time.
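The algebraic point about traversal order can be illustrated directly: for any sample of delays, max(A, B) + C equals max(A + C, B + C), which is the distributivity that refactoring exploits. The Monte Carlo sketch below (with made-up Gaussian delays) shows the two factored forms agree sample by sample; analytic SSTA approximations, by contrast, can differ in accuracy between the two forms.

```python
import random

def gauss_samples(mu, sigma, n):
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 100_000
a = gauss_samples(10.0, 1.0, n)   # arrival time through path A (ps)
b = gauss_samples(9.5, 1.5, n)    # arrival time through path B (ps)
c = gauss_samples(3.0, 0.5, n)    # shared downstream gate delay (ps)

# Two equivalent traversal orders: max first vs. add first.
max_then_add = [max(ai, bi) + ci for ai, bi, ci in zip(a, b, c)]
add_then_max = [max(ai + ci, bi + ci) for ai, bi, ci in zip(a, b, c)]

mean = lambda xs: sum(xs) / len(xs)
print("E[max(A,B)+C]    =", round(mean(max_then_add), 3))
print("E[max(A+C, B+C)] =", round(mean(add_then_max), 3))
# Per sample the two forms are identical; analytic SSTA approximations,
# however, can give different accuracy depending on the factored form used.
```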
Second, building on these two fundamental studies, this dissertation develops three applications. We propose a criticality computation method that is essential to robust design and test synthesis; combined with the two fundamental techniques, the proposed method achieves a drastic accuracy improvement over the state-of-the-art method, demonstrating its benefits in practical applications. We formulate the statistical path selection problem for at-speed test as a gambling problem and present an elegant solution based on the Kelly criterion. To circumvent the coverage loss issue in statistical path selection, we propose a testability-driven approach, making it a practical solution for coping with parametric defects.
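For reference, the gambling analogy rests on the textbook Kelly criterion, which stakes a fraction f* = p - (1 - p)/b of the budget on a bet with win probability p and net odds b. How the dissertation maps candidate paths onto bets is not spelled out in this abstract, so the snippet below only shows the standard formula on hypothetical paths.

```python
def kelly_fraction(p_win, net_odds):
    # Textbook Kelly criterion: fraction of the budget to stake on one bet.
    return p_win - (1.0 - p_win) / net_odds

# Hypothetical candidate paths: (name, probability of exposing a delay defect, payoff odds).
paths = [("path_A", 0.30, 2.0), ("path_B", 0.55, 1.0), ("path_C", 0.10, 8.0)]
for name, p, b in paths:
    f = max(0.0, kelly_fraction(p, b))   # never stake a negative fraction
    print(f"{name}: stake fraction of test budget = {f:.2f}")
```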
38. A hierarchical optimization engine for nanoelectronic systems using emerging device and interconnect technologies. Pan, Chenyun. 21 September 2015.
A fast and efficient hierarchical optimization engine was developed to benchmark and optimize various emerging device and interconnect technologies and system-level innovations at the early design stage. As the semiconductor industry approaches sub-20nm technology nodes, both devices and interconnects face severe physical challenges. Many novel device and interconnect concepts and system integration techniques have been proposed in the past decade to reinforce or even replace conventional Si CMOS technology and Cu interconnects. To efficiently benchmark and optimize these emerging technologies, a validated system-level design methodology is developed based on compact models from all hierarchies, starting from the bottom material level, through the device and interconnect levels, up to the top system-level models. Multiple design parameters across all hierarchies are co-optimized simultaneously to maximize the overall chip throughput rather than just the intrinsic delay or energy dissipation of the device or interconnect itself. This optimization is performed under various constraints such as power dissipation, maximum temperature, die area, power delivery noise, and yield. For the device benchmarking, novel graphene p-n junction devices and InAs nanowire FETs are investigated for both high-performance and low-power applications. For the interconnect benchmarking, a novel local interconnect structure and a hybrid Al-Cu interconnect architecture are proposed, and emerging multi-layer graphene interconnects are investigated and compared with conventional Cu interconnects. For the system-level analyses, the benefits of systems implemented with 3D integration and heterogeneous integration are analyzed. In addition, the impact of power delivery noise and of process variation in both devices and interconnects on the overall chip throughput is quantified.
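A toy sketch of cross-hierarchy co-optimization in the spirit described above: sweep two knobs (supply voltage and core count) and keep the highest-throughput point that satisfies a power constraint. The scaling models and numbers are illustrative stand-ins, not the engine's validated compact models.

```python
# Toy co-optimization: sweep two cross-hierarchy knobs (supply voltage and
# core count) and keep the highest-throughput point that meets a power cap.

def throughput(vdd, cores):
    freq = 2.0 * (vdd - 0.3)                    # GHz, rough frequency vs. overdrive
    return cores * freq / (1 + 0.05 * cores)    # diminishing parallel returns

def power(vdd, cores):
    return cores * (1.5 * vdd**2 + 0.2)         # W, dynamic + leakage per core

best = None
for vdd in [0.6, 0.7, 0.8, 0.9, 1.0]:
    for cores in range(2, 33, 2):
        if power(vdd, cores) > 80.0:            # power constraint (W)
            continue
        tp = throughput(vdd, cores)
        if best is None or tp > best[0]:
            best = (tp, vdd, cores)

tp, vdd, cores = best
print(f"best design point: Vdd={vdd} V, cores={cores}, throughput={tp:.1f} (arb. units)")
```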
39. Design of digitally assisted adaptive analog and RF circuits and systems. Banerjee, Aritra. 12 January 2015.
With more and more integration of analog and RF circuits in scaled CMOS technologies, process variation plays a critical role and makes it difficult to achieve all performance specifications across all process corners. Moreover, at scaled technology nodes, devices have lower voltage- and current-handling capability and suffer from reliability issues that reduce the overall lifetime of the system. Finally, the traditional static style of designing analog and RF circuits does not result in optimal system performance. A new design paradigm is emerging toward digitally assisted analog and RF circuits and systems, which leverages digital correction and calibration techniques to detect and compensate for manufacturing imperfections and to improve analog and RF performance while offering a high level of integration. The objective of the proposed research is to design digital-friendly, performance-tunable adaptive analog/RF circuits and systems with digital enhancement techniques for higher performance, better process-variation tolerance, and more reliable operation, and to develop a strategy for testing the proposed adaptive systems. An adaptation framework is developed for process-variation-tolerant RF systems; it has two parts: optimized test-stimulus-driven diagnosis of individual modules and power-optimal system-level tuning. Another direct-tuning approach is developed and demonstrated on a carbon-nanotube-based analog circuit. An adaptive switched-mode power amplifier is designed that is more digital-intensive in nature and offers higher efficiency, improved reliability, and better process resiliency. Finally, a testing strategy for adaptive RF systems is presented that reduces test time and test cost compared to traditional testing.
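As an illustration of power-optimal system-level tuning, the sketch below picks the cheapest combination of hypothetical LNA and mixer settings that still meets a cascaded gain specification; the settings, gains, and powers are invented for the example.

```python
from itertools import product

# Hypothetical knob settings after diagnosis: (gain in dB, power in mW) per module.
lna_settings   = [(12, 3.0), (15, 5.0), (18, 8.0)]
mixer_settings = [(6, 2.0), (9, 4.0)]

required_gain_db = 24

# Power-optimal system-level tuning: cheapest combination meeting the cascaded spec.
feasible = [
    (lna[1] + mix[1], lna, mix)
    for lna, mix in product(lna_settings, mixer_settings)
    if lna[0] + mix[0] >= required_gain_db
]
power_mw, lna, mix = min(feasible)
print(f"selected LNA gain {lna[0]} dB, mixer gain {mix[0]} dB, total power {power_mw} mW")
```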
40. Predicting Critical Warps in Near-Threshold GPGPU Applications Using a Dynamic Choke Point Analysis. Sanyal, Sourav. 01 August 2019.
General-purpose graphics processing units (GP-GPUs), owing to their enormous thread-level parallelism, can significantly reduce power consumption when operated in the near-threshold computing (NTC) region, while offering close to super-threshold performance. However, process variation (PV) can drastically reduce GPU performance at NTC. In this work, choke points, a device-level characteristic unique to PV at NTC, are explored as a factor that can exacerbate the warp criticality problem in GPUs. It is shown that modern warp schedulers cannot handle the choke-point-induced critical warps in an NTC GPU. Additionally, the Choke Point Aware Warp Speculator, a circuit-architectural solution, is proposed to dynamically predict the critical warps in GPUs and accelerate them in their respective execution units. The best scheme achieves an average improvement of ∼39% in performance and ∼31% in energy efficiency over a state-of-the-art warp scheduler across 15 GPGPU applications, while incurring marginal hardware overheads.
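A behavioral sketch of choke-point-aware criticality prediction: each warp's latency is bounded by its slowest active lane, so warps mapped onto more choke-point-affected lanes are flagged as critical and prioritized. The lane-delay model, warp count, and top-2 threshold are invented for illustration and do not reflect the proposed microarchitecture's actual implementation.

```python
import random

random.seed(1)
NUM_LANES = 32
# Per-lane delay factors at NTC; a few lanes are choke points (much slower).
lane_delay = [1.0 + (2.5 if random.random() < 0.1 else random.uniform(0.0, 0.2))
              for _ in range(NUM_LANES)]

def predicted_warp_latency(active_mask):
    # A warp finishes when its slowest active lane finishes (lock-step execution).
    return max(lane_delay[i] for i in range(NUM_LANES) if active_mask[i])

# Eight warps with random active-thread masks.
warps = [[random.random() < 0.9 for _ in range(NUM_LANES)] for _ in range(8)]
scores = sorted(((predicted_warp_latency(m), wid) for wid, m in enumerate(warps)),
                reverse=True)

critical = [wid for _, wid in scores[:2]]   # speculate: the two slowest are critical
print("warps predicted critical (scheduled first / accelerated):", critical)
```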