141

Signal integrity in deep submicron CMOS chip design

Sonchhatra, Jignesh Suresh 01 January 2000
Advances in CMOS technology have become a driving force in today's IC design arena. In the past few years, considerable research has been devoted to CMOS devices and circuits, with constant effort toward smaller devices achieved by reducing transistor channel length and scaling down other device parameters. Consequently, problems such as interconnect delay, signal integrity, and signal coupling have arisen. The purpose of this thesis is to review these problems in current IC design and to propose solutions. An interconnect model is proposed that demonstrates the effects of parasitic components on the chip, and signal coupling effects are demonstrated by simulating RC and RLC interconnect models. The impact of parasitic inductance on chip performance is examined through simulation results. The design of an eight-bit shifter is realized using both clockless asynchronous and clocked Boolean design techniques. Both chips are placed and routed using Silicon Ensemble, a CAD tool from Cadence Design Systems. Optimization techniques are applied to both prototypes, and a detailed comparison is made on factors such as chip area, total interconnect length, row utilization, and chip congestion. Based on these results, the clocked Boolean shifter was found to be more compact than its asynchronous counterpart. However, asynchronous clockless architectures are recommended where complex chip functionality must be integrated without the timing problems of clocked design.
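The RC interconnect effects described above can be illustrated with a first-order Elmore delay estimate. The sketch below is illustrative only and is not taken from the thesis; the wire resistance, capacitance, and segment count are assumed values. For a uniformly distributed line, the estimate approaches RC/2 as the number of segments grows.

    # Illustrative sketch (not from the thesis): Elmore delay of a uniform RC
    # interconnect modeled as an N-segment RC ladder driven by a step input.
    # All parameter values below are assumed for demonstration only.

    def elmore_delay(r_total, c_total, n_segments):
        """Estimate the Elmore delay of a wire split into n equal RC segments.

        Each segment has resistance r_total/n and capacitance c_total/n.
        The delay at the far end is the sum, over all capacitors, of the
        capacitance times the resistance shared with the source path.
        """
        r_seg = r_total / n_segments
        c_seg = c_total / n_segments
        delay = 0.0
        for k in range(1, n_segments + 1):
            # Capacitor k sees the resistance of the first k segments.
            delay += (k * r_seg) * c_seg
        return delay

    if __name__ == "__main__":
        # Assumed values: 100 ohm total resistance, 200 fF total capacitance,
        # modeled with 10 segments.
        print(f"Elmore delay ~ {elmore_delay(100.0, 200e-15, 10) * 1e12:.1f} ps")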
142

A cost quality model for CMOS IC design

Deshpande, Sandeep 04 December 2009
With a decreasing minimum feature size in very large scale integration (VLSI) complementary metal oxide semiconductor (CMOS) technology, the number of transistors that can be integrated on a single chip is increasing rapidly. Ensuring that these extremely dense chips are almost free of defects and, at the same time, cost-effective requires planning from the initial stage of design. This research proposes a concurrent engineering-based design methodology for layout optimization. The proposed method is iterative, and layout changes in each design iteration are made based on the principles of physical design for testability (P-DFT). P-DFT modifies a design so that the circuit has fewer potential faults, hard-to-detect faults become easier to detect, and hard-to-detect faults become less likely to occur. To implement this design methodology, a mathematical model is required to evaluate alternative designs. This research proposes such an evaluation measure: the cost quality model. The cost quality model extends known test quality and testability estimation measures for gate-level circuits to switch-level circuits. To provide high fidelity in testability estimation with reasonable CPU time overhead, the cost quality model uses inductive fault analysis techniques to extract a realistic circuit fault list, IDDQ test generation techniques to generate tests for these faults, statistical models to reduce the computational overhead of test generation and fault simulation, yield simulation tools, and mathematical models to estimate test quality and costs. To demonstrate the effectiveness of this model, results are presented for CMOS layouts of benchmark circuits and modifications of these layouts. / Master of Science
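As a rough illustration of how test quality relates to yield and fault coverage (this is not the thesis's cost quality model), the widely used Williams-Brown relation estimates the shipped defect level as DL = 1 - Y^(1-T). The yield and coverage values below are assumptions.

    # Illustrative sketch (not the thesis's cost quality model): the classic
    # Williams-Brown relation between process yield Y, fault coverage T, and
    # shipped defect level DL = 1 - Y**(1 - T). Values below are assumptions.

    def defect_level(yield_y, fault_coverage):
        """Estimated fraction of shipped parts that are defective."""
        return 1.0 - yield_y ** (1.0 - fault_coverage)

    if __name__ == "__main__":
        for coverage in (0.90, 0.99, 0.999):
            dl = defect_level(0.6, coverage)
            print(f"coverage={coverage:.3f}  defect level={dl * 1e6:.0f} DPM")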
143

Selection of flip-flops for partial scan paths by use of a statistical testability measure

Jett, David B. 30 December 2008
Partial scan paths improve the testability of digital circuits while incurring minimal costs in area overhead and test application time. Design constraints may require that a partial scan path include only those flip-flops that provide the greatest improvement in circuit testability. STAFFS, a tool that identifies such flip-flops, has been developed. It uses a statistical testability measure to obtain quantitative data on the controllabilities and observabilities of the nodes of a circuit. It predicts the changes that would occur in these data if specific flip-flops were scanned, and uses those predictions to select flip-flops. STAFFS weights the observability data against the controllability data when selecting flip-flops, and it can efficiently select alternative scan designs for different weights. Experimental results for thirteen sequential benchmark circuits show that STAFFS consistently selects scan designs with fault coverages significantly higher than those of arbitrarily selected scan designs. / Master of Science
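A minimal sketch of the general idea, assuming made-up flip-flop names and cost values: candidate flip-flops can be ranked by a weighted mix of controllability and observability costs. This is only in the spirit of STAFFS; the thesis's statistical testability measure is not reproduced here.

    # Illustrative sketch only: ranking candidate flip-flops for partial scan by
    # a weighted combination of controllability and observability scores. The
    # flip-flop names and scores below are made-up inputs.

    def rank_flip_flops(testability, weight_obs=0.5):
        """Return flip-flop names sorted by expected testability benefit.

        `testability` maps a flip-flop name to (controllability_cost,
        observability_cost); higher cost means harder to control/observe, so
        scanning that flip-flop is expected to help more.
        """
        def benefit(item):
            _, (ctrl_cost, obs_cost) = item
            return (1.0 - weight_obs) * ctrl_cost + weight_obs * obs_cost

        return [name for name, _ in sorted(testability.items(),
                                           key=benefit, reverse=True)]

    if __name__ == "__main__":
        scores = {"ff_state0": (12.0, 30.0), "ff_state1": (4.0, 6.0),
                  "ff_cnt2": (25.0, 18.0)}
        # Emphasize observability, since the tool allows different weightings.
        print(rank_flip_flops(scores, weight_obs=0.7))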
144

An interactive design rule checker for integrated circuit layout

Kim, Kwanghyun January 1985
An implementation of an interactive design rule checker is described in this thesis. A corner-based design rule checking algorithm is used for the implementation. Because the corner-based algorithm checks locally, it is well suited to hierarchical and interactive local design rule checking, and it allows various design rules to be specified very easily. Interactive operations are provided so that the design rule checker can be invoked from inside the layout editor. All information about a violation, such as its position, the type of violation, and the symbol definition name, is presented interactively. To give the user full freedom in choosing the scope of checking, three options, "Flattening", "Unflattening", and "User-defined window", are implemented when creating the database to be checked. The "User-defined window" option allows hierarchical design rule checking on a design that contains global rectangles. Using these three options, very efficient hierarchical checking can be performed. / Master of Science
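For illustration only, a brute-force minimum-spacing check over axis-aligned rectangles shows the kind of violation report such a checker produces. It is not the corner-based algorithm the thesis implements, and the rectangle coordinates and spacing rule are assumed.

    # Minimal sketch, not the thesis's corner-based algorithm: a brute-force
    # minimum-spacing check between axis-aligned rectangles on one layer.

    from itertools import combinations

    def spacing(rect_a, rect_b):
        """Edge-to-edge separation of two rectangles (xmin, ymin, xmax, ymax)."""
        ax0, ay0, ax1, ay1 = rect_a
        bx0, by0, bx1, by1 = rect_b
        dx = max(bx0 - ax1, ax0 - bx1, 0)
        dy = max(by0 - ay1, ay0 - by1, 0)
        return (dx * dx + dy * dy) ** 0.5

    def check_min_spacing(rects, min_space):
        """Report pairs of rectangles that are closer than the spacing rule."""
        violations = []
        for (name_a, ra), (name_b, rb) in combinations(rects.items(), 2):
            s = spacing(ra, rb)
            if 0 < s < min_space:   # touching/overlapping shapes merge, so 0 is ignored
                violations.append((name_a, name_b, s))
        return violations

    if __name__ == "__main__":
        layout = {"metal1_a": (0, 0, 10, 2), "metal1_b": (0, 4, 10, 6),
                  "metal1_c": (12, 0, 20, 2)}
        for a, b, s in check_min_spacing(layout, min_space=3.0):
            print(f"spacing violation: {a} vs {b}, separation {s:.2f} < 3.0")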
145

Completion and validation of the design of a reconfigurable image processing board

Deo, Nitin January 1985
The Telesign project, begun in September 1984, is an extensive and complex project proposed and undertaken by Dr. Nadler at Virginia Tech. Its aim is to enable members of the deaf community to communicate visually, using sign language or lip reading, over the telephone network. The Image Processing Board (IPB) is the 'brain' of the whole system: it processes a given image frame so that only selected data are transmitted. It uses the pseudo-Laplacian operator, invented by Dr. Nadler, for edge detection; according to a survey of edge detection algorithms by D. E. Pearson [1], the pseudo-Laplacian operator is the most efficient and produces the most natural pictures. The IPB hosts about one hundred LSI/VLSI chips in the present hardware description. For a system of this size, hardware simulation is mandatory to ensure the reliability of the design and to anticipate logic or timing errors. This thesis describes the modifications to the original design that make it reconfigurable with proper initialization, and the hardware simulation of the IPB using the General Simulation Program (GSP), including comments on the simulators available at Virginia Tech and, in particular, a critique of the simulator used here. Many improvements to the simulator are suggested. The thesis also outlines precautions to be taken when preparing the layout and wiring of the IPB, suggestions for simplifying the design at some points at the cost of a few more chips, and instructions for running the models to obtain the required results. / Master of Science
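As a stand-in illustration only (Nadler's pseudo-Laplacian operator is not specified in the abstract, so it is not implemented here), a conventional 4-neighbour discrete Laplacian edge detector on a toy image conveys the flavour of the IPB's edge-detection step.

    # Illustrative sketch: edge detection with a standard 4-neighbour discrete
    # Laplacian. This is a stand-in, not the thesis's pseudo-laplacian operator.

    def laplacian_edges(image, threshold):
        """Return a binary edge map: 1 where |4*p - sum(neighbours)| > threshold."""
        rows, cols = len(image), len(image[0])
        edges = [[0] * cols for _ in range(rows)]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                lap = (4 * image[r][c] - image[r - 1][c] - image[r + 1][c]
                       - image[r][c - 1] - image[r][c + 1])
                edges[r][c] = 1 if abs(lap) > threshold else 0
        return edges

    if __name__ == "__main__":
        # Assumed toy image: a bright square on a dark background.
        img = [[0] * 6 for _ in range(6)]
        for r in range(2, 4):
            for c in range(2, 4):
                img[r][c] = 200
        for row in laplacian_edges(img, threshold=100):
            print(row)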
146

Energy efficient design of the delay-insensitive asynchronous circuits

Weng, Ning 01 October 2000
No description available.
147

An Algorithm for the PLA Equivalence Problem

Moon, Gyo Sik 12 1900
The Programmable Logic Array (PLA) has been widely used in the design of VLSI circuits and systems because of its regularity, flexibility, and simplicity. The equivalence problem is typically to verify that the final description of a circuit is functionally equivalent to its initial description. Verifying the functional equivalence of two descriptions amounts to proving their logical equivalence, a problem of pure logic that is essential to circuit design. The most widely used technique for solving the problem is based on Binary Decision Diagrams (BDDs), proposed by Bryant in 1986. Unfortunately, BDDs require too much time and space to represent moderately large circuits for equivalence testing. We design and implement a new algorithm, the Cover-Merge Algorithm, for the equivalence problem, based on a divide-and-conquer strategy using the concept of a cover and a derivational method, and we prove that the algorithm is sound and complete. Because the problem is NP-complete, we emphasize simplifications that reduce the search space or avoid redundant computation; simplification techniques are incorporated into the algorithm as an essential part of speeding up the derivation process. Two different sets of heuristics are developed for two opposite goals: one for proving equivalence and the other for disproving it. Experiments on a large body of data show that substantial speed-ups can be achieved by prioritizing the heuristics and choosing the most favorable one at each iteration of the algorithm. Results are compared with those for BDDs on standard benchmark problems as well as on random PLAs, to provide an unbiased test of the algorithms. The Cover-Merge Algorithm outperforms BDDs in nearly all problem instances in terms of time and space, and it shows stable and practical performance, especially for large PLAs under a wide range of conditions, while BDDs perform poorly because of their memory-greedy representation scheme without adequate simplification.
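For very small PLAs, functional equivalence can be checked by brute force over all minterms, which is enough to illustrate the problem that the Cover-Merge Algorithm addresses at scale. The sketch below is not the Cover-Merge Algorithm; the cube notation and example covers are assumed.

    # Minimal sketch, not the Cover-Merge Algorithm: brute-force equivalence
    # check of two small single-output PLAs given as cube lists. Each cube is a
    # string over {'0','1','-'} ('-' = don't care). Exponential in the number of
    # inputs, so only suitable for tiny examples.

    from itertools import product

    def cube_covers(cube, assignment):
        """True if the minterm `assignment` (tuple of 0/1) lies in `cube`."""
        return all(lit == '-' or int(lit) == bit
                   for lit, bit in zip(cube, assignment))

    def evaluate(cover, assignment):
        """A single-output PLA is the OR of its cubes."""
        return any(cube_covers(cube, assignment) for cube in cover)

    def equivalent(cover_a, cover_b, num_inputs):
        """Exhaustively compare the two covers on all 2**num_inputs minterms."""
        for assignment in product((0, 1), repeat=num_inputs):
            if evaluate(cover_a, assignment) != evaluate(cover_b, assignment):
                return False, assignment   # counterexample
        return True, None

    if __name__ == "__main__":
        # Assumed example: the same 3-input function written as two covers.
        f = ["10-", "01-"]                  # a XOR b, third input is don't-care
        g = ["100", "101", "010", "011"]    # the same function as explicit minterms
        print(equivalent(f, g, 3))          # expected: (True, None)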
148

An asynchronous Forth microprocessor.

January 2000
Ping-Ki Tsang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 87-95). Abstracts in English and Chinese.

Contents:
Abstract --- p.i
Acknowledgments --- p.iii
Chapter 1 --- Introduction --- p.1
  1.1 --- Motivation and Aims --- p.1
  1.2 --- Contributions --- p.3
  1.3 --- Overview of the Thesis --- p.4
Chapter 2 --- Asynchronous Logic --- p.6
  2.1 --- Motivation --- p.6
  2.2 --- Timing Models --- p.9
    2.2.1 --- Fundamental-Mode Model --- p.9
    2.2.2 --- Delay-Insensitive Model --- p.10
    2.2.3 --- QDI and Speed-Independent Models --- p.11
  2.3 --- Asynchronous Signalling Protocols --- p.12
    2.3.1 --- 2-phase Handshaking Protocol --- p.12
    2.3.2 --- 4-phase Handshaking Protocol --- p.13
  2.4 --- Data Representations --- p.14
    2.4.1 --- Dual Rail Coded Data --- p.15
    2.4.2 --- Bundled Data --- p.15
  2.5 --- Previous Asynchronous Processors --- p.16
  2.6 --- Summary --- p.20
Chapter 3 --- The MSL16 Architecture --- p.21
  3.1 --- RISC Machines --- p.21
  3.2 --- Stack Machines --- p.23
  3.3 --- Forth and its Applications --- p.24
  3.4 --- MSL16 --- p.26
    3.4.1 --- Architecture --- p.28
    3.4.2 --- Instruction Set --- p.30
    3.4.3 --- The Datapath --- p.32
    3.4.4 --- Interrupts and Exceptions --- p.33
    3.4.5 --- Implementing Forth primitives --- p.34
    3.4.6 --- Code Density Estimation --- p.34
  3.5 --- Summary --- p.35
Chapter 4 --- Design Methodology --- p.37
  4.1 --- Basic Notation --- p.38
  4.2 --- Specification of MSL16A --- p.39
  4.3 --- Decomposition into Concurrent Processes --- p.41
  4.4 --- Separation of Control and Datapath --- p.45
  4.5 --- Handshaking Expansion --- p.45
    4.5.1 --- 4-Phase Handshaking Protocol --- p.46
  4.6 --- Production-rule Expansion --- p.47
  4.7 --- Summary --- p.48
Chapter 5 --- Implementation --- p.49
  5.1 --- C-element --- p.49
  5.2 --- Mutual Exclusion Elements --- p.51
  5.3 --- Caltech Asynchronous Synthesis Tools --- p.53
  5.4 --- Stack Design --- p.54
    5.4.1 --- Eager Stack Control --- p.55
    5.4.2 --- Lazy Stack Control --- p.56
    5.4.3 --- Eager/Lazy Stack Datapath --- p.53
    5.4.4 --- Pointer Stack Control --- p.61
    5.4.5 --- Pointer Stack Datapath --- p.62
  5.5 --- ALU Design --- p.62
    5.5.1 --- The Addition Operation --- p.63
    5.5.2 --- Zero-Checker --- p.64
  5.6 --- Memory Interface and Tri-state Buffers --- p.64
  5.7 --- MSL16A --- p.65
  5.8 --- Summary --- p.66
Chapter 6 --- Results --- p.67
  6.1 --- FPGA based implementation of MSL16 --- p.67
  6.2 --- MSL16A --- p.69
    6.2.1 --- A Comparison of 3 Stack Designs --- p.69
    6.2.2 --- Evaluation of the ALU --- p.73
    6.2.3 --- Evaluation of MSL16A --- p.74
  6.3 --- Summary --- p.81
Chapter 7 --- Conclusions --- p.83
  7.1 --- Future Work --- p.85
Bibliography --- p.87
Publications --- p.95
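One recurring building block in the implementation chapter above is the Muller C-element (section 5.1). A behavioural sketch, written here purely for illustration and not taken from the thesis, captures its defining property: the output follows the inputs only when they agree and otherwise holds its previous value.

    # Illustrative sketch only: a behavioural model of the Muller C-element,
    # the basic state-holding gate used throughout delay-insensitive designs.

    class CElement:
        def __init__(self, initial=0):
            self.state = initial

        def update(self, a, b):
            if a == b:            # both inputs agree: output follows them
                self.state = a
            return self.state     # otherwise: hold the previous value

    if __name__ == "__main__":
        c = CElement()
        # A 4-phase-handshake-style trace on a pair of request/acknowledge inputs.
        for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
            print(f"a={a} b={b} -> c={c.update(a, b)}")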
149

A Study of Microwave Curing of Underfill Using Open and Closed Microwave Ovens

Thakare, Aditya 14 April 2015
As demand for microprocessors grows, with more and more consumers using integrated circuits in their daily lives, industry is under increasing pressure to ramp up production. To speed up manufacturing, new approaches seek to change particular process steps. Microwaves have been tried as an alternative to conventional ovens for curing the polymers used as underfills and encapsulants in integrated circuit packages. However, because microwaves are electromagnetic waves, their energy distribution is non-uniform in different settings, which can cause burning or incomplete cure of the polymers. In this study, we compare the two main types of microwave oven proposed for curing these polymers. To limit the study and obtain comparable results, both ovens were restricted to a single propagating mode, TE10. The first is a closed microwave cavity using air as the propagation medium; the second is an open microwave oven with a PTFE cavity that uses an evanescent field to deliver energy. The air cavity was studied with different orientations of a substrate placed inside it to find the best-case curing scenario, which was then compared with the best case found for a sample cured in the evanescent field. The comparison showed an advantage for the open microwave in the maximum field present, leading to higher localized energy absorption and higher temperatures in the substrate; however, this case also led to a higher temperature gradient. The substrate cured in the closed microwave has a lower temperature gradient but also a lower maximum field, which leads to a slower cure. In the TE10 mode, therefore, the closed microwave has an overall advantage: its heating is only slightly slower than that of the open cavity, while its temperature gradient is significantly lower.
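For context on single-mode TE10 operation, the cutoff frequency of an air-filled rectangular waveguide is f_c = c/(2a) for the TE10 mode, where a is the broad-wall width. The sketch below uses WR-284 dimensions purely as an assumed example; the thesis's actual cavity dimensions are not given in the abstract.

    # Worked example under assumed dimensions (not from the thesis): TE10 cutoff
    # of an air-filled rectangular waveguide, f_c = c / (2a).

    C0 = 299_792_458.0          # speed of light in vacuum, m/s

    def te10_cutoff(a_metres):
        """TE10 cutoff frequency of an air-filled rectangular waveguide."""
        return C0 / (2.0 * a_metres)

    if __name__ == "__main__":
        a = 72.14e-3             # WR-284 broad-wall width in metres (assumed choice)
        fc10 = te10_cutoff(a)
        fc20 = C0 / a            # next-mode (TE20) cutoff for this width
        print(f"TE10 cutoff ~ {fc10 / 1e9:.2f} GHz;"
              f" single-mode operation up to ~{fc20 / 1e9:.2f} GHz")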
150

Synthesis of Linear Reversible Circuits and EXOR-AND-based Circuits for Incompletely Specified Multi-Output Functions

Schaeffer, Ben 21 July 2017
At this time, the synthesis of reversible circuits for quantum computing is an active area of research. In the most restrictive quantum computing models there are no ancilla lines, and the quantum cost, or latency, of performing a reversible form of the AND gate (the Toffoli gate) increases exponentially with the number of input variables. In contrast, any combination of reversible EXOR gates (CNOT gates) on n input variables can be performed with at most O(n²/log₂ n) gates. It was under these conditions that EXOR-AND-EXOR, or EPOE, synthesis was developed. In this work, the GF(2) logic theory used in EPOE is expanded and the concept of an EXOR-AND product transform is introduced. Because of the generality of this logic theory, it is adapted to EXOR-AND-OR, or SPOE, synthesis. Three heuristic spectral logic synthesis algorithms are introduced, implemented in a program called XAX, and compared with previous work on classical logic circuits of up to 26 inputs. Three linear reversible circuit methods are also introduced and compared with previous work on linear reversible logic circuits of up to 100 inputs.
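As background for linear reversible (CNOT-only) circuits, a simple baseline synthesis method is Gaussian elimination over GF(2), which uses O(n²) gates; the O(n²/log₂ n) bound cited above comes from more sophisticated constructions. The sketch below is this baseline, not one of the thesis's three methods, and the example matrix is made up.

    # Illustrative sketch only: synthesizing a CNOT-only circuit for an
    # invertible GF(2) matrix by Gaussian elimination (the simple O(n^2)-gate
    # baseline, not the thesis's methods).

    def synthesize_cnots(matrix):
        """Return a CNOT list (control, target) realizing x -> A x over GF(2).

        Row-reduce the (assumed invertible) matrix to the identity; each row
        operation row[t] ^= row[c] corresponds to a CNOT with control c and
        target t, and the circuit is the recorded operations in reverse order.
        """
        m = [row[:] for row in matrix]          # work on a copy
        n = len(m)
        ops = []
        for col in range(n):
            # Find a row with a 1 in this column, at or below the diagonal.
            pivot = next(r for r in range(col, n) if m[r][col])
            if pivot != col:                    # bring a 1 onto the diagonal
                m[col] = [a ^ b for a, b in zip(m[col], m[pivot])]
                ops.append((pivot, col))
            # Clear every other 1 in this column.
            for r in range(n):
                if r != col and m[r][col]:
                    m[r] = [a ^ b for a, b in zip(m[r], m[col])]
                    ops.append((col, r))
        return ops[::-1]                        # reversed row ops = the circuit

    if __name__ == "__main__":
        # Assumed 3x3 invertible matrix over GF(2).
        a = [[1, 1, 0],
             [0, 1, 1],
             [1, 1, 1]]
        print(synthesize_cnots(a))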
