171

Nonlinear Analog Networks for Image Smoothing and Segmentation

Lumsdaine, A., Wyatt, J.L., Jr., Elfadel, I.M. 01 January 1991 (has links)
Image smoothing and segmentation algorithms are frequently formulated as optimization problems. Linear and nonlinear (reciprocal) resistive networks have solutions characterized by an extremum principle. Thus, appropriately designed networks can automatically solve certain smoothing and segmentation problems in robot vision. This paper considers switched linear resistive networks and nonlinear resistive networks for such tasks. The latter network type is derived from the former via an intermediate stochastic formulation, and a new result relating the solution sets of the two is given for the "zero temperature" limit. We then present simulation studies of several continuation methods that can be gracefully implemented in analog VLSI and that seem to give "good" results for these non-convex optimization problems.
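
The continuation idea in this abstract can be illustrated numerically. The following sketch (an illustration only, not the paper's resistive-network formulation; the energy form and parameters are assumptions) smooths a noisy 1-D step by gradient descent on a non-convex energy with a saturating gradient penalty, gradually tightening the saturation level so that early passes see a nearly convex problem:

```python
import numpy as np

# Hedged sketch: 1-D edge-preserving smoothing by continuation on a
# non-convex energy  E(u) = sum_i (u_i - d_i)^2 + lam * sum_i g(u_{i+1} - u_i),
# where g saturates for large gradients (an idealized "line process").
# This only illustrates the continuation idea, not the paper's circuits.

def smooth(d, lam=2.0, c_final=0.05, levels=8, steps=400, lr=0.05):
    u = d.copy()
    # Continuation: start with a large saturation level c (penalty is
    # quadratic almost everywhere, nearly convex) and shrink it toward
    # the non-convex target value c_final.
    for c in np.geomspace(10.0, c_final, levels):
        for _ in range(steps):
            du = np.diff(u)
            active = (du**2 < c).astype(float)     # where penalty is not saturated
            grad_pen = 2.0 * du * active           # derivative of min(t^2, c)
            grad = 2.0 * (u - d)
            grad[1:]  += lam * grad_pen
            grad[:-1] -= lam * grad_pen
            u -= lr * grad
    return u

# Noisy step edge: smoothing should suppress the noise while roughly
# preserving the jump, since the penalty saturates across the edge.
x = np.linspace(0, 1, 200)
d = (x > 0.5).astype(float) + 0.05 * np.random.randn(200)
u = smooth(d)
```

The saturating penalty plays the role of a line process: where the local gradient exceeds the threshold, smoothing is switched off and the step edge survives.
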
172

Continuous Stochastic Cellular Automata that Have a Stationary Distribution and No Detailed Balance

Poggio, Tomaso, Girosi, Federico 01 December 1990 (has links)
Marroquin and Ramirez (1990) have recently discovered a class of discrete stochastic cellular automata with Gibbsian invariant measures that have a non-reversible dynamic behavior. Practical applications include algorithms more powerful than the Metropolis algorithm for computing MRF models. In this paper we describe a large class of stochastic dynamical systems that have a Gibbs asymptotic distribution but do not satisfy reversibility. We characterize sufficient properties of a sub-class of stochastic differential equations, in terms of the associated Fokker-Planck equation, for the existence of an asymptotic probability distribution in the given coordinate system. Practical implications include VLSI analog circuits to compute coupled MRF models.
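
As a minimal numerical companion (the potential, temperature and drift below are illustrative assumptions, not taken from the paper), the sketch simulates a Langevin-type SDE whose Fokker-Planck equation has the Gibbs density exp(-U/T) as its stationary solution, and adds a divergence-free drift that breaks detailed balance without changing that stationary density:

```python
import numpy as np

# Sketch of a stochastic dynamical system with Gibbs stationary density
# p(x) ~ exp(-U(x)/T).  Plain Langevin dynamics
#     dx = -grad U(x) dt + sqrt(2 T) dW
# is the reversible base case; adding a divergence-free "rotational"
# drift keeps the same Gibbs measure while breaking detailed balance.

T = 0.5
gradU = lambda x, y: (x, y)          # U(x, y) = (x^2 + y^2) / 2

def simulate(steps=200_000, dt=1e-2, skew=2.0):
    x, y = 0.0, 0.0
    samples = []
    for _ in range(steps):
        gx, gy = gradU(x, y)
        # Extra drift skew * (-y, x): divergence-free and tangent to the
        # level sets of U, so exp(-U/T) stays invariant, but the process
        # is no longer reversible.
        x += (-gx - skew * gy) * dt + np.sqrt(2 * T * dt) * np.random.randn()
        y += (-gy + skew * gx) * dt + np.sqrt(2 * T * dt) * np.random.randn()
        samples.append((x, y))
    return np.array(samples)

s = simulate()
# Each coordinate's empirical variance should approach T (= 0.5 here),
# matching the Gaussian Gibbs density exp(-(x^2 + y^2) / (2T)).
print(s[50_000:].var(axis=0))
```

With skew = 0 the dynamics are reversible; any nonzero skew gives a non-reversible system with the same Gibbs measure, which is the qualitative situation the paper studies.
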
173

Developing Multi-Criteria Performance Estimation Tools for Systems-on-Chip

Vander Biest, Alexis GJE 23 March 2009 (has links)
The work presented in this thesis targets the analysis and implementation of multi-criteria performance prediction methods for Systems-on-Chip (SoCs). These new SoC architectures offer the opportunity to integrate complete heterogeneous systems into a single chip and can be used to design battery-powered handhelds, security-critical systems, consumer electronics devices, etc. However, this variety of applications usually comes with many different performance objectives such as power consumption, yield, design cost, production cost, silicon area and many others. These performance requirements are often very difficult to meet together, so SoC design usually relies on making the right design choices and finding the best performance compromises. In parallel with this architectural paradigm shift, new Very Deep Submicron (VDSM) silicon processes have an ever-greater impact on performance and deeply modify the way a VLSI system is designed, even at the first stages of a design flow. In such a context, where many new technological and system-related variables come into play, early exploration of the impact of design choices becomes crucial to estimate the performance of the system to be designed and to reduce its time-to-market. In this context, this thesis presents:
- A study of state-of-the-art tools and methods used to estimate the performance of VLSI systems, and an original classification based on several features and concepts that they use. Based on this comparison, we highlight their weaknesses and shortcomings and identify new opportunities in performance prediction.
- The definition of new concepts to enable the automatic exploration of large design spaces based on flexible performance criteria and degrees of freedom representing design choices.
- The implementation of two new tools of our own:
  - Nessie, a tool that provides a hierarchical representation of an application together with its platform and automatically performs the mapping and the estimation of their performance.
  - Yeti, a C++ library enabling the definition and value estimation of closed-form expressions and table-based relations. It provides the user with input and model sensitivity analysis capabilities, simulation scripting, run-time model building and automatic plotting of the results. Additionally, Yeti can work in standalone mode to provide the user with an independent framework for model estimation and analysis.
To demonstrate the use and interest of these tools, we provide in this thesis several case studies whose results are discussed and compared with the literature. Using Yeti, we successfully reproduced the results of a model estimating multi-core computation power and extended them thanks to the representation flexibility of our tool. We also built several models from the ground up to help the dimensioning of interconnect links and clock frequency optimization. Thanks to Nessie, we were able to reproduce the NoC power consumption results of an H.264/AVC decoding application running on a multicore platform. These results were then extended to the case of a 3D die-stacked architecture, and the performance benefits are discussed. We end by highlighting the advantages of our technique and discussing future opportunities for performance prediction tools to explore.
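
As an illustration of the kind of closed-form model such a tool evaluates (the specific expression and parameter values are assumptions for this sketch; this is neither Yeti's API nor necessarily the exact model reproduced in the thesis), the following snippet defines an Amdahl-style symmetric-multicore speedup formula and performs a simple one-at-a-time sensitivity analysis:

```python
import numpy as np

# Generic Amdahl-style symmetric-multicore speedup model, used only to
# illustrate closed-form performance estimation and input sensitivity
# analysis of the sort a library like Yeti automates.

def speedup(f, n, r):
    """Chip of n base-core-equivalents (BCEs) built as n/r cores of r BCEs
    each; single-core performance scales as sqrt(r); f = parallel fraction."""
    perf = np.sqrt(r)
    return 1.0 / ((1.0 - f) / perf + f * r / (perf * n))

nominal = dict(f=0.9, n=256, r=4)
base = speedup(**nominal)

# One-at-a-time sensitivity: relative change in speedup for a +10%
# change in each input parameter around the nominal point.
for name in nominal:
    perturbed = dict(nominal, **{name: nominal[name] * 1.1})
    print(name, (speedup(**perturbed) - base) / base)
```
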
174

Scaling SAT-based Automated Design Debugging with Formal Methods

Keng, Brian 12 February 2010 (has links)
The size and complexity of modern VLSI computer chips are growing at a rapid pace. Functional debugging is increasingly becoming a bottleneck in the design flow where it can take up to 60% of the total verification time. Scaling existing automated debugging tools is necessary in order to continue along this path of rapid growth and innovation in the semiconductor industry. This thesis aims to scale automated debugging techniques with two contributions. The first contribution introduces a succinct memory model for automated design debugging that dramatically lowers the memory requirements for the debugging problem. The second contribution presents a scalable SAT-based design debugging algorithm that uses a mathematical technique called interpolation to divide the debugging problem into multiple parts across time which greatly reduces the peak memory requirements of the debugging problem. Extensive experiments on real designs demonstrate the benefit of this work.
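
The question that SAT-based debugging answers can be shown on a miniature example. The sketch below is a brute-force toy, not the thesis's SAT- or interpolation-based algorithm, and the circuit and traces are made up for illustration: it unrolls a two-gate, one-flip-flop design over a few time frames and reports a gate as a suspect if freeing its value in every cycle could reproduce the expected output trace.

```python
from itertools import product

# Toy illustration of the core question behind automated design debugging:
# "if gate g were free to take any value in every cycle, could the circuit
# reproduce the expected output trace?"  Real tools encode this question
# as SAT over the time-frame-unrolled circuit; the thesis additionally
# splits that unrolling across time with interpolants to bound peak memory.

def step(i, s, free=None, bug=True):
    """One clock cycle.  Gate g2 is buggy (OR instead of AND).
    'free' optionally overrides one gate's value: ('g1', v) or ('g2', v)."""
    g1 = i ^ s
    g2 = (i | s) if bug else (i & s)
    if free and free[0] == 'g1':
        g1 = free[1]
    if free and free[0] == 'g2':
        g2 = free[1]
    return g1, g2                      # (primary output, next state)

inputs = [1, 0, 1, 1, 0]
s, expected = 0, []                    # golden trace from the bug-free circuit
for i in inputs:
    o, s = step(i, s, bug=False)
    expected.append(o)

def is_suspect(gate):
    # Does some per-cycle value sequence for 'gate' make the buggy
    # circuit match the expected outputs over all time frames?
    for vals in product([0, 1], repeat=len(inputs)):
        s, ok = 0, True
        for i, e, v in zip(inputs, expected, vals):
            o, s = step(i, s, free=(gate, v))
            if o != e:
                ok = False
                break
        if ok:
            return True
    return False

print({g: is_suspect(g) for g in ('g1', 'g2')})
```

Both gates are reported because either one can mask the error at the output; real debuggers likewise return a candidate set, and the contribution here is answering this question at scale while keeping the memory of the time-frame unrolling under control.
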
176

Convex Optimization and Utility Theory: New Trends in VLSI Circuit Layout

Etawil, Hussein January 1999 (has links)
The design of modern integrated circuits is overwhelmingly complicated due to the enormous number of cells in a typical modern circuit. To deal with this difficulty, the design procedure is broken down into a set of disjoint tasks. Circuit layout is the task that refers to the physical realization of a circuit from its functional description. In circuit layout, a connection list of cells and nets, called a netlist, is given. Placement and routing are subtasks associated with circuit layout and involve determining the geometric locations of the cells within the placement area and connecting cells sharing common nets. In performing the placement and the routing of the cells, minimum placement area, minimum delay and other performance constraints need to be observed. In this work, we propose and investigate new approaches to the placement and routing problems. Specifically, for the placement subtask, we propose new convex programming formulations to estimate wirelength and force cells to spread within the placement area. As opposed to previous approaches, our approach is partitioning-free and requires no hard constraints to achieve cell spreading within the placement area. The result of the global optimization of the new convex models is a global placement which is further improved using a Tabu-search-based iterative technique. The effectiveness, robustness and superiority of the approach are demonstrated on a set of nine benchmark industrial circuits. With regard to the routing subtask, we propose a hybrid methodology that combines Tabu search and Stochastic Evolution as the search engine in a new channel router. We also propose a new scheme based on Utility Theory for selecting and assigning nets to tracks in the channel. In this scheme, problem-domain information expressed in the form of utility functions is used to guide the search engine to explore the search space effectively. The effectiveness and robustness of the approach are demonstrated on five industrial benchmarks.
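
For context, the conventional convex-placement starting point can be sketched in a few lines. This is a generic quadratic-wirelength baseline, not the thesis's specific convex models or spreading formulation: minimizing the sum of squared two-pin wirelengths with fixed I/O pads reduces to a single linear solve, and the resulting bunching of cells is exactly why additional spreading mechanisms are needed.

```python
import numpy as np

# Minimal quadratic (convex) global-placement sketch in one dimension.
# Movable cells minimize the sum of squared wirelengths to their
# neighbours and to fixed pads, which gives a Laplacian-like linear
# system Q x = b.  Purely a generic baseline for illustration.

pads = {-1: 0.0, -2: 10.0}                  # fixed pad positions
nets = [(-1, 0), (0, 1), (1, 2), (2, -2), (0, 2)]   # two-pin nets
n = 3                                        # movable cells 0..2

Q = np.zeros((n, n))
b = np.zeros(n)
for a, c in nets:
    for u, v in ((a, c), (c, a)):
        if u >= 0:                           # row only for movable cells
            Q[u, u] += 1.0
            if v >= 0:
                Q[u, v] -= 1.0               # movable neighbour
            else:
                b[u] += pads[v]              # fixed pad pulls the cell

x = np.linalg.solve(Q, b)
print(x)   # cells sit between the pads; with many cells per pad they
           # overlap, so a spreading force (or constraint) must be added
```
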
177

Modeling and Mitigation of Soft Errors in Nanoscale SRAMs

Jahinuzzaman, Shah M. January 2008 (has links)
Energetic particle (alpha particle, cosmic neutron, etc.) induced single-event data upset, or soft error, has emerged as a key reliability concern in SRAMs in sub-100 nanometre technologies. Low operating voltage, small node capacitance, high packing density, and lack of error-masking mechanisms are primarily responsible for the soft error susceptibility of SRAMs. In addition, since SRAM occupies the majority of die area in systems-on-chip (SoCs) and microprocessors, different leakage reduction techniques, such as supply voltage reduction and gated grounding, are applied to SRAMs in order to limit the overall chip leakage. These leakage reduction techniques exponentially increase the soft error rate in SRAMs. The soft error rate is further accentuated by process variations, which are prominent in scaled-down technologies. In this research, we address these concerns and propose techniques to characterize and mitigate soft errors in nanoscale SRAMs. We develop a comprehensive analytical model of the critical charge, which is key to assessing the soft error susceptibility of SRAMs. The model is based on the dynamic behaviour of the cell and a simple decoupling technique for the non-linearly coupled storage nodes. The model describes the critical charge in terms of NMOS and PMOS transistor parameters, cell supply voltage, and noise current parameters. Consequently, it enables characterizing the spread of critical charge due to process-induced variations in these parameters and to manufacturing defects, such as resistive contacts or vias. In addition, the model can estimate the improvement in critical charge when MIM capacitors are added to the cell in order to improve the soft error robustness. The model is validated by SPICE simulations (90nm CMOS) and a radiation test. The critical charge calculated by the model is in good agreement with SPICE simulations, with a maximum discrepancy of less than 5%. The soft error rate estimated by the model for low-voltage (sub-0.8 V) operation is within 10% of the soft error rate measured in the radiation test. Therefore, the model can serve as a reliable alternative to time-consuming SPICE simulations for optimizing the critical charge, and hence the soft error rate, at the design stage. In order to limit the soft error rate further, we propose an area-efficient multiword-based error correction code (MECC) scheme. The MECC scheme combines four 32-bit data words to form a composite 128-bit ECC word and uses an optimized 4-input transmission-gate XOR logic. Thus MECC significantly reduces the area overhead for check-bit storage and the delay penalty for error correction. In addition, MECC interleaves two composite words in a row to limit cosmic-neutron-induced multi-bit errors. The ground potentials of the composite words are controlled to minimize leakage power without compromising the read data stability. However, the use of composite words involves a unique write operation in which one data word is written while the other three data words are read to update the check-bits. A power-efficient word line signaling technique is developed to facilitate the write operation. A 64 kb SRAM macro with MECC is designed and fabricated in a commercial 90nm CMOS technology. Measurement results show that the SRAM consumes 534 μW at 100 MHz with a data latency of 3.3 ns for single-bit error correction. This translates into an 82% per-bit energy saving and an 8x speed improvement over recently reported multiword ECC schemes. An accelerated neutron radiation test carried out at TRIUMF in Vancouver confirms that the proposed MECC scheme can correct up to 85% of soft errors.
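
The area argument for the composite ECC word can be checked against the single-error-correcting Hamming bound. This is only a back-of-the-envelope sketch; the thesis's actual encoder/decoder circuit, interleaving and any double-error-detection extension are not modelled here.

```python
# Check-bit count for a single-error-correcting (SEC) Hamming code:
# the smallest r with 2**r >= k + r + 1 for k data bits.

def sec_check_bits(k):
    r = 1
    while 2 ** r < k + r + 1:
        r += 1
    return r

per_word = 4 * sec_check_bits(32)    # four separately protected 32-bit words
composite = sec_check_bits(128)      # one 128-bit composite ECC word
print(per_word, composite)           # 24 vs 8 check bits
```

Protecting one 128-bit composite word needs 8 check bits, versus 24 for four independently protected 32-bit words, which is the source of the check-bit storage saving the abstract describes.
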
178

On the Use of Directed Moves for Placement in VLSI CAD

Vorwerk, Kristofer January 2009 (has links)
Search-based placement methods have long been used for placing integrated circuits targeting the field programmable gate array (FPGA) and standard cell design styles. Such methods offer the potential for high-quality solutions but often come at the cost of long run-times compared to alternative methods. This dissertation examines strategies for enhancing local search heuristics---and in particular, simulated annealing---through the application of directed moves. These moves help to guide a search-based optimizer by focusing efforts on states which are most likely to yield productive improvement, effectively pruning the size of the search space. The engineering theory and implementation details of directed moves are discussed in the context of both field programmable gate array and standard cell designs. This work explores the ways in which such moves can be used to improve the quality of FPGA placements, improve the robustness of floorplan repair and legalization methods for mixed-size standard cell designs, and enhance the quality of detailed placement for standard cell circuits. The analysis presented herein confirms the validity and efficacy of directed moves, and supports the use of such heuristics within various optimization frameworks.
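
A directed move can be contrasted with a purely random one in a toy annealing placer. Everything below (the 1-D row model, net sizes, cost function and cooling schedule) is an assumption for illustration, not the dissertation's actual move set or cost model.

```python
import math
import random

# Toy simulated-annealing placer on a 1-D row of cells, comparing a purely
# random swap with a "directed" swap that relocates a cell toward the
# median position of the cells it shares nets with.

random.seed(0)
N = 30
nets = [tuple(random.sample(range(N), 3)) for _ in range(40)]

def hpwl(pos):
    # Half-perimeter wirelength in 1-D: the span of each net.
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net)
               for net in nets)

def neighbours(c):
    return [d for net in nets if c in net for d in net if d != c]

def directed_partner(pos, c):
    nb = neighbours(c)
    if not nb:
        return random.randrange(N)
    target = sorted(pos[d] for d in nb)[len(nb) // 2]       # median slot
    return min(range(N), key=lambda d: abs(pos[d] - target)) # cell nearest it

def anneal(directed, temp=20.0, cooling=0.995, iters=20_000):
    pos = list(range(N))               # pos[c] = slot occupied by cell c
    cost = hpwl(pos)
    for _ in range(iters):
        a = random.randrange(N)
        b = directed_partner(pos, a) if directed else random.randrange(N)
        pos[a], pos[b] = pos[b], pos[a]
        new = hpwl(pos)
        if new <= cost or random.random() < math.exp((cost - new) / temp):
            cost = new                 # accept (Metropolis criterion)
        else:
            pos[a], pos[b] = pos[b], pos[a]   # reject: undo the swap
        temp *= cooling
    return cost

print("random:  ", anneal(directed=False))
print("directed:", anneal(directed=True))
```

The directed variant swaps a cell toward the median position of its net neighbours, so most proposed moves already point in a wirelength-reducing direction; this is the sense in which directed moves prune the effective search space for the optimizer.
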
