  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Impact of Technology Scaling on Leakage Reduction Techniques

Ghafari, Payam January 2007 (has links)
CMOS technology is scaling down to meet the performance, production cost, and power requirements of the microelectronics industry. The increase in transistor leakage current is one of the most important negative side effects of technology scaling. Leakage affects not only the standby and active power consumption, but also circuit reliability, since it is strongly correlated with process variations. Leakage current influences circuit performance differently depending on operating conditions (e.g., standby, active, burn-in test), circuit family (e.g., logic or memory), and environmental conditions (e.g., temperature, supply voltage). Until the introduction of high-K gate dielectrics in the lower nanometer technology nodes, gate leakage will remain the dominant leakage component after subthreshold leakage. Since the way designers control subthreshold and gate leakage can change from one technology to another, it is crucial for them to be aware of the impact of total leakage on the operation of circuits and of the techniques that mitigate it. Consequently, techniques that reduce total leakage in circuits operating in the active mode at different temperature conditions are examined. The implications of technology scaling on the choice of techniques to mitigate total leakage are also investigated. This work resulted in guidelines for the design of low-leakage circuits in nanometer technologies. Logic gates in the 65nm, 45nm, and 32nm nodes are simulated and analyzed. The techniques adopted for comparison in this work affect both gate and subthreshold leakage, namely stack forcing, pin reordering, reverse body biasing, and high-threshold-voltage transistors. Aside from leakage, our analysis also highlights the impact of these techniques on the circuit's performance and noise margins. The reverse body biasing scheme tends to become less effective as the technology scales, since it increases the band-to-band tunneling current. Employing high-threshold-voltage transistors appears to be one of the most effective techniques for reducing leakage with minor performance degradation. Pin reordering and natural stacks do not affect the performance of the device, yet they reduce leakage; however, it is demonstrated that they are not equally effective in all types of logic, since the input values might switch only between the highly leaky states. Therefore, depending on the design requirements of the circuit, a combination, or hybrid, of techniques that yields better performance and leakage savings is chosen. Power-sensitive technology mapping tools can use the resulting guidelines in the low-power design flow to meet the required maximum leakage current in a circuit. These guidelines are presented in general terms so that they can be adopted for any application and process technology.
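
To make the stacking effect referred to in this abstract concrete, the sketch below uses the standard exponential subthreshold-current model to compare a single off transistor against two stacked off transistors. The parameter values (I0, Vth, the slope factor n, Vdd) are illustrative placeholders, not values from the thesis, and second-order effects such as DIBL and body effect, which make the real savings larger, are omitted.

# Minimal sketch of the stacking (self-reverse-biasing) effect on subthreshold leakage.
# Model: I_sub = I0 * exp((Vgs - Vth) / (n * vT)) * (1 - exp(-Vds / vT))
# All parameter values are illustrative, not taken from the dissertation.
import math

I0, VTH, N, VT, VDD = 1e-7, 0.30, 1.5, 0.026, 1.0  # A, V, -, V (thermal voltage), V

def i_sub(vgs, vds):
    """Subthreshold drain current of an NMOS device (simple exponential model)."""
    return I0 * math.exp((vgs - VTH) / (N * VT)) * (1.0 - math.exp(-vds / VT))

# Single off transistor: gate at 0 V, full supply across drain-source.
i_single = i_sub(0.0, VDD)

# Two stacked off transistors: find the intermediate node voltage Vx at which the
# top device's current (Vgs = -Vx, Vds = VDD - Vx) equals the bottom device's
# current (Vgs = 0, Vds = Vx). Bisection suffices for this monotonic problem.
lo, hi = 0.0, VDD
for _ in range(60):
    vx = 0.5 * (lo + hi)
    if i_sub(-vx, VDD - vx) > i_sub(0.0, vx):
        lo = vx   # top device still leaks more, so the node keeps charging up
    else:
        hi = vx
i_stack = i_sub(0.0, 0.5 * (lo + hi))

print(f"single off device : {i_single:.3e} A")
print(f"two-device stack  : {i_stack:.3e} A")
print(f"leakage reduction : {i_single / i_stack:.1f}x")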
2

Variability-Aware Design of Static Random Access Memory Bit-Cell

Gupta, Vasudha January 2008 (has links)
The increasing integration of functional blocks in today's integrated circuit designs necessitates a large embedded memory for data manipulation and storage. The most often used embedded memory is the Static Random Access Memory (SRAM), with a six-transistor memory bit-cell. Currently, memories occupy more than 50% of the chip area, and this percentage is only expected to increase in the future. Therefore, for silicon vendors, it is critical that the memory units yield well, to enable an overall high yield of the chip. The increasing memory density is accompanied by aggressive scaling of the transistor dimensions in the SRAM. Together, these two developments make SRAMs increasingly susceptible to process-parameter variations. As a result, in the current nanometer regime, statistical methods for the design of the SRAM array are pivotal to achieve satisfactory levels of silicon predictability. In this work, a method for the statistical design of the SRAM bit-cell is proposed. Not only does it provide a high yield, but it also meets the specifications for the design constraints of stability, successful write, performance, leakage, and area. The method consists of an optimization framework that derives the optimal design parameters, i.e., the widths and lengths of the bit-cell transistors, which provide maximum immunity to variations in the transistors' geometry and intrinsic threshold voltage fluctuations. The method is employed to obtain optimal designs in the 65nm, 45nm, and 32nm technologies for different sets of specifications, and the optimality of the resultant designs is verified. The resultant optimal bit-cell designs in the 65nm, 45nm, and 32nm technologies are analyzed to study the SRAM area and yield trade-offs associated with technology scaling, and two approaches are proposed to achieve 50% scaling of the bit-cell area at every technology node. The resultant designs are further investigated to understand which mode of failure in the bit-cell becomes more dominant with technology scaling. In addition, the impact of voltage scaling on the bit-cell designs is also studied.
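
As a rough illustration of the statistical machinery such a framework rests on, the sketch below estimates bit-cell yield by Monte Carlo sampling of per-transistor threshold-voltage shifts whose standard deviation follows Pelgrom's area scaling (sigma proportional to 1/sqrt(W*L)). The Pelgrom coefficient, device sizes, and the pass/fail check are illustrative placeholders; the thesis's framework instead evaluates stability, write, performance, leakage, and area constraints and searches for the optimal sizing.

# Sketch: Monte Carlo yield estimate for a 6-T SRAM bit-cell under random Vth variation.
# sigma_Vth follows Pelgrom's law (larger devices vary less); the failure check is a simple
# stand-in for a real stability / write-margin evaluation (e.g., a SPICE-derived noise margin).
import numpy as np

A_VT = 3.5e-9           # Pelgrom coefficient in V*m (illustrative, not from the thesis)
VTH_SHIFT_LIMIT = 0.10  # flag a failure if any device shifts by more than 100 mV (placeholder)

def sigma_vth(w, l):
    """Per-device Vth standard deviation: A_VT / sqrt(W * L)."""
    return A_VT / np.sqrt(w * l)

def yield_estimate(widths, lengths, n_mc=100_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = sigma_vth(np.asarray(widths), np.asarray(lengths))  # one sigma per transistor
    dvth = rng.normal(0.0, sigma, size=(n_mc, len(widths)))     # sampled Vth shifts
    fails = np.any(np.abs(dvth) > VTH_SHIFT_LIMIT, axis=1)      # placeholder cell-failure check
    return 1.0 - fails.mean()

# Hypothetical 65nm-era sizing: pull-down, pass-gate, and pull-up pairs (metres).
widths  = [200e-9, 200e-9, 140e-9, 140e-9, 100e-9, 100e-9]
lengths = [65e-9] * 6
print(f"estimated bit-cell yield: {yield_estimate(widths, lengths):.4f}")

An optimizer in the spirit of the abstract would wrap a search loop around a call like yield_estimate(), varying the six widths and lengths to maximize yield subject to an area budget.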
3

Analytical and Experimental Study of Wide Tuning Range Low Phase Noise mm-Wave LC-VCOs

Elabd, Salma 11 August 2016 (has links)
No description available.
4

High Performance RF and Baseband Analog-to-Digital Interface for Multi-standard/Wideband Applications

Zhang, Heng December 2010 (has links)
The prevalence of wireless standards and the introduction of dynamic standards/applications, such as software-defined radio, necessitate next-generation wireless devices that integrate multiple standards in a single chip-set to support a variety of services. To reduce the cost and area of such multi-standard handheld devices, reconfigurability is desirable, and the hardware should be shared/reused as much as possible. This research proposes several novel circuit topologies that can meet various specifications with minimum cost and are suited for multi-standard applications. This doctoral study makes two separate contributions: 1. the low noise amplifier (LNA) for the RF front-end; and 2. the analog-to-digital converter (ADC). The first part of this dissertation focuses on LNA noise reduction and linearization techniques, where two novel LNAs are designed, taped out, and measured. The first LNA, implemented in a TSMC (Taiwan Semiconductor Manufacturing Company) 0.35µm CMOS (complementary metal-oxide-semiconductor) process, strategically combines an inductor connected at the gate of the cascode transistor with capacitive cross-coupling to reduce the noise and nonlinearity contributions of the cascode transistors. The proposed technique reduces the LNA NF by 0.35 dB at 2.2 GHz and increases its IIP3 and voltage gain by 2.35 dBm and 2 dB, respectively, without a compromise on power consumption. The second LNA, implemented in a UMC (United Microelectronics Corporation) 0.13µm CMOS process, features a practical linearization technique for high-frequency wideband applications using an active nonlinear resistor, which achieves a robust linearity improvement over process and temperature variations. The proposed linearization method is experimentally demonstrated to improve the IIP3 by 3.5 to 9 dB over a 2.5–10 GHz frequency range. A comparison of measurement results with prior published state-of-the-art Ultra-Wideband (UWB) LNAs shows that the proposed linearized UWB LNA achieves excellent linearity with much less power than previously published works. The second part of this dissertation develops a reconfigurable ADC for multi-standard receivers and video processors. Typical ADCs are power-optimized for only one operating speed, whereas a reconfigurable ADC can scale its power at different speeds, enabling minimal power consumption over a broad range of sampling rates. A novel ADC architecture is proposed for programming the sampling rate with a constant biasing current and a single clock. The ADC was designed and fabricated in a UMC 90nm CMOS process and features good power scalability and a simplified system design. The programmable speed range covers all the video formats and most of the wireless communication standards, while achieving a Figure-of-Merit comparable with customized ADCs at each performance node. Since the bias current is kept constant, the reconfigurable ADC is more robust and reliable than previously published works.
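
As a rough illustration of the Figure-of-Merit comparison mentioned above, the sketch below uses the common Walden metric, FoM = P / (2^ENOB · fs), to contrast a fixed-bias ADC that burns its full-speed power at every rate with a power-scalable one whose power tracks the sampling rate. The power, SNDR, and sampling-rate numbers are hypothetical and are not taken from the dissertation.

# Sketch: why power scalability keeps an ADC's Walden figure of merit flat across sampling rates.
# FoM = P / (2**ENOB * fs), with ENOB = (SNDR - 1.76) / 6.02. All numbers are hypothetical.

def walden_fom(power_w, sndr_db, fs_hz):
    """Walden figure of merit in joules per conversion step."""
    enob = (sndr_db - 1.76) / 6.02
    return power_w / (2 ** enob * fs_hz)

P_FULL_SPEED = 20e-3   # power at the maximum rate (hypothetical)
FS_MAX = 200e6         # maximum sampling rate (hypothetical)
SNDR_DB = 60.0         # assumed constant resolution across rates

print(f"{'fs (MS/s)':>10} {'fixed-power FoM (pJ/step)':>27} {'scaled-power FoM (pJ/step)':>28}")
for fs in (10e6, 50e6, 200e6):
    fom_fixed  = walden_fom(P_FULL_SPEED, SNDR_DB, fs)                # power never scales down
    fom_scaled = walden_fom(P_FULL_SPEED * fs / FS_MAX, SNDR_DB, fs)  # power tracks the rate
    print(f"{fs/1e6:>10.0f} {fom_fixed*1e12:>27.2f} {fom_scaled*1e12:>28.2f}")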
