21 |
Development of a test methodology for FinFET-Based SRAMs / Medeiros, Guilherme Cardoso, 17 August 2017
Miniaturization has been the industry's main goal over the last few years,
as it brings benefits such as high performance and on-chip integration as well as
power consumption reduction. Alongside the constant scale-down of Integrated Circuits
(ICs) technology, the increasing need to store more and more information has
resulted in the fact that Static Random Access Memories (SRAMs) occupy great
part of Systems-on-Chip (SoCs).
The constant evolution of nanotechnology has brought many revolutions to the semiconductor industry, making it necessary to improve the integrated circuit manufacturing process through the use of new, complex processing steps, materials, and technologies.
The technology-shrinking objective adopted by the semiconductor industry has driven research into technologies to replace planar CMOS transistors. FinFET transistors, due to their superior electrical properties, have emerged as the technology most likely to be adopted by the industry.
However, one of the most critical downsides of technology scaling is related
to the non-determinism of devices' electrical parameters due to process variation.
Miniaturization has led to the development of new types of manufacturing defects
that may affect IC reliability and cause yield loss.
With the production of FinFET-based memories, there is a concern regarding
embedded memory test and repair, because fault models and test algorithms
used for memories based on conventional planar technology may not be sufficient
to cover all possible defects in multi-gate memories. New faults that are specific to FinFETs may exist; therefore, current test solutions, which rely on executing operations with specific patterns and other stress conditions, may no longer be reliable tools for detecting those faults.
In this context, this work proposes a hardware-based methodology for testing
memories implemented using FinFET technology that monitors aspects of the
memory array and creates output signals derived from the behavior of these characteristics. Sensors monitor the circuit's parameters and, upon changes from their idle values, create pulses that represent such variations. These pulses are modulated using pulse-width modulation. As resistive defects alter current
consumption and bit line voltages, cells affected by resistive defects present altered
modulated signals, validating the proposed methodology and allowing the detection
of these defects. This, in turn, increases manufacturing yield and circuit reliability during the circuit's lifetime. Considering how FinFET technology has
evolved and the likelihood that ordinary applications will employ FinFET-based
circuits in the future, the development of techniques to ensure circuit reliability has
become a major concern.
The presented hardware-based methodology, implemented using on-chip sensors, has been divided into two approaches: monitoring current consumption and monitoring the voltage level of the bit lines. Each approach has been validated by injecting a total of 12 resistive defects of different natures and locations, and evaluated considering different operating temperatures and the impact of process variation.
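To make the detection principle concrete, the following Python sketch models, at a purely behavioral level, a sensor that converts a monitored parameter (e.g., a cell's read current) into a pulse width and flags a cell whose modulated pulse deviates from the fault-free reference beyond a tolerance. This is a hypothetical illustration of the idea described above, not the thesis's sensor circuits; the conversion gain, reference current, and tolerance are assumed values.

```python
# Behavioral illustration of pulse-width-based defect detection.
# The conversion gain, reference current, and tolerance are assumed
# placeholder values, not parameters from the thesis.

def pulse_width_us(monitored_current_ua, gain_us_per_ua=0.5):
    """Map a monitored current onto the width of the pulse a sensor would emit."""
    return monitored_current_ua * gain_us_per_ua

def is_defective(cell_current_ua, reference_current_ua=40.0, tolerance_us=2.0):
    """Compare the cell's modulated pulse width against the fault-free reference."""
    width = pulse_width_us(cell_current_ua)
    reference = pulse_width_us(reference_current_ua)
    return abs(width - reference) > tolerance_us

# A resistive defect that throttles the cell's read current shifts its pulse
# width away from the reference, so the cell is flagged.
print(is_defective(40.5))  # False: within tolerance, considered fault-free
print(is_defective(25.0))  # True: pulse width deviates, defect detected
```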
|
22 |
Low-Power Multi-GHz Circuit Techniques for On-chip Clocking / Hansson, Martin, January 2006
The impressive evolution of modern high-performance microprocessors has resulted in chips with over one billion transistors as well as multi-GHz clock frequencies. As the silicon integrated circuit industry moves further into the nanometer regime, three of the main challenges that must be overcome for CMOS technology scaling to continue are growing standby power dissipation, increasing variations in process parameters, and increasing power dissipation due to growing clock load and circuit complexity. This thesis addresses all three of these future scaling challenges, with the overall focus on reducing the total clock power for low-power, multi-GHz VLSI circuits.
Power dissipation related to clock generation and distribution is identified as the dominant contributor to the total active power dissipation. This makes novel power-reduction techniques crucial in future VLSI design. This thesis describes a new energy-recovering clocking technique aimed at reducing the total chip clock power. The proposed technique consumes 2.3x less clock power than conventional clocking at a clock frequency of 1.56 GHz.
Apart from increasing power dissipation, growing leakage also impacts circuit robustness constraints. To improve the leakage robustness of sub-90 nm, low-clock-load dynamic flip-flops, a novel keeper technique is proposed. The proposed keeper utilizes a scalable and simple leakage-compensation technique. During any low-frequency operation, the flip-flop is configured as a static flip-flop with increased functional robustness.
In order to compensate for the impact of the increasingly large process variations on latches and flip-flops, a reconfigurable keeper technique is presented in this thesis. In contrast to the traditional design for worst-case process corners, a variable keeper circuit is utilized. The proposed reconfigurable keeper preserves the robustness of storage nodes across the process corners without degrading the overall chip performance. / Report code: LiU-TEK-LIC-2006:21.
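To put the reported 2.3x figure in perspective, clock power can be roughly estimated with the standard switching-power relation P = a·C·V²·f. The sketch below works through that arithmetic; the capacitance, supply voltage, and activity values are assumed for illustration and are not taken from the thesis.

```python
# Illustrative dynamic-power estimate for a clock network: P = a * C * V^2 * f.
# All parameter values below are assumed placeholders, not figures from the thesis.

def dynamic_power_w(activity, capacitance_f, supply_v, frequency_hz):
    """Average switching power of a capacitive load toggling at the given rate."""
    return activity * capacitance_f * supply_v**2 * frequency_hz

clock_load_f = 1e-9      # assumed 1 nF total distributed clock capacitance
supply_v = 1.1           # assumed supply voltage
frequency_hz = 1.56e9    # clock frequency reported in the abstract

conventional_w = dynamic_power_w(1.0, clock_load_f, supply_v, frequency_hz)
energy_recovering_w = conventional_w / 2.3   # the 2.3x reduction reported above

print(f"conventional clocking:      {conventional_w:.2f} W")
print(f"energy-recovering clocking: {energy_recovering_w:.2f} W")
```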
|
23 |
Voltage and Timing Adaptation for Variation and Aging Tolerance in Nanometer VLSI Circuits / Shim, Kyu-Nam, 14 March 2013
Process variations and circuit aging continue to be main challenges to the power efficiency of VLSI circuits, as considerable power budget must be allocated at design time to mitigate timing variations. Modern designs incorporate adaptive techniques for variation compensation to reduce the extra power consumption. The efficiency of existing adaptive approaches, however, is often significantly attenuated by the fine-grained nature of variations in nanometer technology, such as random dopant fluctuation, lithography variation, and different rates of transistor degradation due to non-uniform activity factors. This dissertation addresses the limitations of existing adaptation techniques and proposes new adaptive approaches to effectively compensate for fine-grained variations.
Adaptive supply voltage (ASV) is one of the effective adaptation approaches for power-performance tuning. ASV has advantages in controlling dynamic and leakage power, but the voltage generation and delivery overheads of conventional ASV systems make it demanding to apply them to fine-grained variations. This dissertation presents a dual-level ASV system which provides ASV at both the coarse-grained and fine-grained levels with limited power-routing overhead. Significant power reduction from our dual-ASV system demonstrates its superiority over existing approaches.
Another novel supply-voltage adaptation technique for variation resilience in VLSI interconnects is proposed. A programmable boostable repeater design boosts switching speed by raising its internal voltage rail transiently and autonomously, and achieves fine-grained voltage adaptation without stand-alone voltage regulators or an additional power grid. Since interconnect is a widely recognized bottleneck to chip performance and a tremendous number of repeaters are employed in chip designs, the boostable repeater has ample opportunity to improve system robustness.
A low-cost scheme for delay-variation detection is essential to compose an efficient adaptation system. This dissertation presents an area-efficient built-in delay-testing scheme which exploits the BIST scan architecture and dynamic clock-skew control. Using this built-in delay-testing scheme, a fine-grained adaptation system composed of the proposed boostable repeater design and adaptive clock-skew control is proposed and demonstrated to mitigate process-variation- and aging-induced timing degradation in a power- as well as area-efficient manner.
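A toy software model of the built-in delay-testing idea is sketched below: a transition is launched and the capture timing is tightened or relaxed until the pass/fail boundary reveals the path delay, so aging- or variation-induced slowdown becomes visible. The path delays, sweep range, and step size are invented for illustration; this is not the BIST scan hardware proposed in the dissertation.

```python
# Toy model of delay measurement by sweeping the capture timing of a path.
# All delay values and the sweep step are assumed illustration numbers.

def passes_at_capture(true_path_delay_ps, capture_time_ps):
    """The launched transition is captured correctly only if the path settles in time."""
    return true_path_delay_ps <= capture_time_ps

def measure_path_delay(true_path_delay_ps, max_capture_ps=1000, step_ps=10):
    """Sweep the capture edge from tight to relaxed; the first passing point
    bounds the path delay to within one step."""
    for capture in range(0, max_capture_ps + 1, step_ps):
        if passes_at_capture(true_path_delay_ps, capture):
            return capture
    return None  # slower than the sweep range: gross delay fault

fresh = measure_path_delay(true_path_delay_ps=620)
aged = measure_path_delay(true_path_delay_ps=700)  # e.g., after aging-induced slowdown
print(fresh, aged)  # 620 700 -> the 10 ps sweep exposes the ~80 ps degradation
```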
|
24 |
High Performance Digital Circuit Techniques / Sadrossadat, Sayed Alireza, January 2009
Achieving high performance is one of the most difficult challenges in designing digital circuits. Flip-flops and adders are key blocks in most digital systems and must therefore be designed to yield the highest performance. In this thesis, a new high-performance serial adder is developed while low power consumption is attained. Also, a statistical framework for the design of flip-flops is introduced that ensures that such sequential circuits meet timing yield under performance criteria.
Firstly, a high-performance serial adder is developed. The new adder is based on the idea of having a constant delay for the addition of two operands. While conventional adders exhibit logarithmic delay, the proposed adder operates with constant-order delay. In addition, the new adder's hardware complexity grows linearly with the word length, so it exhibits less area and power consumption than conventional high-performance adders. The thesis presents the underlying algorithm used for the new adder, followed by simulation results.
Secondly, this thesis presents a statistical framework for the design of flip-flops under process variations in order to maximize their timing yield. In nanometer CMOS technologies, process variations significantly impact the timing performance of sequential circuits, which may eventually cause their malfunction. Therefore, developing a framework for designing such circuits is inevitable. Our framework generates the values of the nominal design parameters, i.e., the sizes of the gates and transmission gates of the flip-flop, such that maximum timing yield is achieved. While previous works focused on improving the yield of flip-flops, less research has been done to improve the timing yield in the presence of process variations.
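As a minimal sketch of the timing-yield objective, the code below estimates, by Monte Carlo sampling, the fraction of flip-flop instances whose clock-to-Q delay meets a timing budget when process variation spreads the delay. The Gaussian delay model, its parameters, and the budget are assumptions made for illustration, not the thesis's statistical framework.

```python
# Hypothetical Monte Carlo estimate of flip-flop timing yield under variation.
# The delay distribution and timing budget are assumed illustration values.
import random

def timing_yield(mean_ps, sigma_ps, budget_ps, samples=100_000, seed=1):
    """Fraction of sampled process instances whose clock-to-Q delay meets the budget."""
    rng = random.Random(seed)
    passing = sum(rng.gauss(mean_ps, sigma_ps) <= budget_ps for _ in range(samples))
    return passing / samples

# Resizing the transistors (here modeled simply as a smaller mean and sigma)
# trades area for yield; a sizing framework searches for parameters that
# maximize this figure.
print(timing_yield(mean_ps=55.0, sigma_ps=6.0, budget_ps=65.0))  # roughly 0.95
print(timing_yield(mean_ps=48.0, sigma_ps=5.0, budget_ps=65.0))  # close to 1.00
```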
|
26 |
Algorithms and Methodology for Post-Manufacture Adaptation to Process Variations and Induced Noise in Deeply Scaled CMOS Technologies / Ashouei, Maryam, 27 September 2007
In the last two decades, VLSI technology scaling has spurred rapid growth in the semiconductor industry. With CMOS device dimensions falling below 100 nm, achieving higher performance and packing more complex functionalities into digital integrated circuits have become easier. However, the scaling trend poses new challenges to design and process engineers. First, larger process-parameter variations in current technologies cause a larger spread in the delay and power distributions of circuits and result in parametric yield loss. In addition, ensuring the reliability of deep sub-micron (DSM) technologies under soft/transient errors is a significant challenge. These errors occur because of the combined effects of atmospheric radiation and the significantly reduced noise margins of scaled technologies.
This thesis focuses on addressing the issues related to the process variations and reliability in deeply scaled CMOS technologies. The objective of this research has been to develop circuit-level techniques to address process variations, transient errors, and the reliability concern. The proposed techniques can be divided into two parts. The first part addresses the process variation concern and proposes techniques to reduce the variation effects on power and performance distribution. The second part deals with the transient errors and techniques to reduce the effect of transient errors with minimum hardware or computational overhead.
|
27 |
Reliability- and Variation-Aware Placement for Field-Programmable Gate Arrays / Bsoul, Assem, 26 September 2009
Field-programmable gate arrays (FPGAs) have the potential to address scaling challenges in CMOS technology because of their regular structures and the flexibility they possess by being re-configurable after fabrication. One potential approach to attacking scaling challenges, such as negative-bias temperature instability (NBTI) and process variation (PV), is to use placement techniques that are aware of these problems. Such techniques aim at placing a circuit in an FPGA such that the critical path delay is improved compared to the expected worst case. This can be achieved by placing NBTI-critical blocks of a circuit in areas with fast transistors in an FPGA chip.
In this thesis, we present a detailed research effort that addresses the joint effect of NBTI and PV in FPGAs. We follow an experimental methodology in that we use actual PV data that we measure from 15 FPGA chips. The measured data is used to study the joint effect of NBTI and PV on the timing performance of circuits that are placed and routed in FPGAs. Enhancements are made to a well-known FPGA placement algorithm, T-VPlace, in order to make the placement process aware of the joint effect of NBTI and PV. Results are given for the placement and routing of Microelectronics Center of North Carolina (MCNC) benchmark circuits to show the effectiveness of the proposed techniques in addressing scaling challenges in FPGAs. / Thesis (Master, Electrical & Computer Engineering), Queen's University, 2009.
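As a loose illustration of steering aging-critical blocks toward fast regions, the sketch below scores candidate FPGA regions with measured per-region speed and a block's NBTI criticality, then assigns blocks greedily. The region delays, criticalities, aging penalty, and the greedy assignment are invented for illustration; this is not the modified T-VPlace cost function from the thesis.

```python
# Hypothetical variation/NBTI-aware assignment of blocks to FPGA regions.
# Region delays, criticalities, and the aging penalty are assumed values.

# Normalized delay measured per chip region (higher = slower transistors).
region_delay = {"NW": 0.96, "NE": 1.00, "SW": 1.05, "SE": 1.02}

# NBTI criticality per circuit block (1.0 = on the most aging-sensitive path).
block_criticality = {"alu": 1.0, "decode": 0.7, "dma": 0.3, "uart": 0.1}

AGING_PENALTY = 0.08  # assumed fractional slowdown of a fully critical block

def placed_delay(block, region):
    """Delay contribution of a block placed in a region, after aging."""
    return region_delay[region] * (1.0 + AGING_PENALTY * block_criticality[block])

# Greedy illustration: hand the fastest remaining region to the most critical block.
free_regions = sorted(region_delay, key=region_delay.get)
placement = {}
for block in sorted(block_criticality, key=block_criticality.get, reverse=True):
    placement[block] = free_regions.pop(0)

print(placement)
print({b: round(placed_delay(b, r), 3) for b, r in placement.items()})
```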
|
28 |
Variability-aware low-power techniques for nanoscale mixed-signal circuits / Ghai, Dhruva V.
New circuit design techniques that accommodate the lower supply voltages necessary for portable systems need to be integrated into semiconductor intellectual property (IP) cores. Systems that once worked at 3.3 V or 2.5 V now need to work at 1.8 V or lower without causing any performance degradation. Also, the fluctuation of device characteristics caused by process variation in nanometer technologies translates into design yield loss. The numerous parasitic effects induced by layouts, especially for high-performance and high-speed circuits, pose a problem for IC design. Lack of exact layout information during circuit sizing leads to long design iterations involving time-consuming runs of complex tools. There is a strong need for low-power, high-performance, parasitic-aware, and process-variation-tolerant circuit design. This dissertation proposes methodologies and techniques to achieve variability-, power-, performance-, and parasitic-aware circuit designs. Three approaches are proposed: the single-iteration automatic approach, the hybrid Monte Carlo and design of experiments (DOE) approach, and the corner-based approach. Widely used mixed-signal circuits such as the analog-to-digital converter (ADC), voltage-controlled oscillator (VCO), voltage level converter, and active pixel sensor (APS) have been designed in nanoscale complementary metal oxide semiconductor (CMOS) technology and subjected to the proposed methodologies. The effectiveness of the proposed methodologies has been demonstrated through exhaustive simulations. Apart from these methodologies, the application of dual-oxide and dual-threshold techniques at the circuit level to minimize power and leakage is also explored.
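To give a flavor of the corner-based approach mentioned above, the sketch below checks a circuit metric at a handful of process corners and accepts a candidate sizing only if the specification holds at every corner. The frequency model, corner multipliers, and specification window are assumptions made for illustration, not values or models from the dissertation.

```python
# Hypothetical corner-based check of a VCO sizing candidate.
# The frequency model, corner multipliers, and spec window are assumed values.

CORNERS = {"TT": 1.00, "FF": 1.08, "SS": 0.93, "FS": 1.03, "SF": 0.97}
SPEC_GHZ = (2.2, 2.8)  # assumed acceptable oscillation-frequency window

def vco_frequency_ghz(bias_ua, corner_gain):
    """Toy model: oscillation frequency scales with bias current and process corner."""
    return 0.05 * bias_ua * corner_gain

def passes_all_corners(bias_ua):
    lo, hi = SPEC_GHZ
    return all(lo <= vco_frequency_ghz(bias_ua, g) <= hi for g in CORNERS.values())

# Sweep a design knob and keep only sizings that survive every corner,
# mirroring the worst-case flavor of corner-based variability analysis.
feasible = [b for b in range(40, 61) if passes_all_corners(b)]
print(feasible)  # bias settings (in uA) that meet the spec at all corners
```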
|
29 |
Split Latency Allocator: Process Variation-Aware Register Access Latency Boost in a Near-Threshold Graphics Processing Unit / Pal, Asmita, 01 August 2018
Over the last decade, Graphics Processing Units (GPUs) have been used extensively in gaming consoles, mobile phones, workstations, and data centers, as they have exhibited immense performance improvement over CPUs in graphics-intensive applications. Due to their highly parallel architecture, general-purpose GPUs (GPGPUs) have gained the foreground in applications where large data blocks can be processed in parallel. However, the performance improvement is constrained by large power consumption. Meanwhile, Near-Threshold Computing (NTC) has emerged as an energy-efficient design paradigm. Hence, operating GPUs at NTC seems like a plausible solution to counteract the high energy consumption. This work investigates the challenges associated with NTC operation of GPUs and proposes a low-power GPU design, Split Latency Allocator, to sustain the performance of GPGPU applications.
|
30 |
Exploiting Application Behaviors for Resilient Static Random Access Memory Arrays in the Near-Threshold Computing Regime / Mugisha, Dieudonne Manzi, 01 May 2015
Near-Threshold Computing represents an intriguing choice for mobile processors due to the promise of superior energy efficiency, extending the battery life of these devices while reducing the peak power draw. However, process, voltage, and temperature variations cause a significantly high failure rate of Level One cache cells in the near-threshold regime, a stark contrast to designs in the super-threshold regime, where fault sites are rare.
This thesis work shows that faulty cells in the near-threshold regime are highly clustered in certain regions of the cache. In addition, popular mobile benchmarks are studied to investigate the impact of run-time workloads on the manifestation of timing faults. A technique to mitigate the run-time faults is proposed. This scheme maps frequently used data to healthy cache regions by exploiting application cache behavior. The results show a performance gain of up to 78% over two other state-of-the-art techniques.
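The remapping idea can be sketched in a few lines: rank cache regions by how many faulty cells they contain, rank data blocks by access frequency, and steer the hottest data away from the faultiest regions. The code below is a hypothetical illustration with made-up fault counts and access counts; it is not the allocator proposed in the thesis.

```python
# Hypothetical mapping of frequently used data to healthy cache regions.
# Fault counts and access counts are invented for illustration.

# Faulty-cell count observed per cache region (clustered faults, as noted above).
region_faults = {0: 0, 1: 2, 2: 14, 3: 1}

# How often each data block is accessed by the running application.
block_accesses = {"A": 900, "B": 40, "C": 550, "D": 120}

# Healthiest regions first; hottest blocks first.
regions_by_health = sorted(region_faults, key=region_faults.get)
blocks_by_heat = sorted(block_accesses, key=block_accesses.get, reverse=True)

# Greedy assignment: the hottest block gets the healthiest remaining region,
# so run-time faults are concentrated where they hurt performance the least.
mapping = dict(zip(blocks_by_heat, regions_by_health))
print(mapping)  # {'A': 0, 'C': 3, 'D': 1, 'B': 2}
```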
|