  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

An infrastructure for RTL validation and verification

Kalla, Priyank 01 January 2002 (has links)
With the increase in size and complexity of digital designs, it has become imperative to address critical validation and verification issues at early stages of the design cycle. This requires robust, automated verification tools at higher (behavioural or register-transfer) levels of abstraction. This dissertation describes tools and techniques to assist validation and symbolic verification of high-level or RTL descriptions of digital designs. In particular, a comprehensive infrastructure has been developed that assists in: (i) validation of the descriptions via simulation, and (ii) their functional equivalence verification. A prototype system has been developed around a hardware description language compiler in order to automate the process of validation and verification of RTL descriptions. The validation part of the infrastructure consists of Satisfiability (SAT) solvers based on Binary Decision Diagrams (BDD) that have been developed to automatically generate functional vectors to simulate the design. BDD-based SAT solvers suffer from the memory explosion problem. To overcome this limitation, two SAT solvers have been developed that employ elements of the unate recursive paradigm to control the growth of BDD size while quickly searching for solutions. Experiments carried out over a wide range of designs, ranging from random Boolean logic to regular array structures such as multipliers and shifters, demonstrate the robustness of these techniques. The verification part of the framework consists of equivalence checking tools that can verify the equivalence of RTL descriptions of digital designs. RTL descriptions represent high-level computations in abstract, symbolic forms from which low-level (binary) details are difficult to extract; the implementation details of logic blocks are not always available. Contemporary canonic representations do not have the scalability or the versatility to represent RTL descriptions efficiently in compact form. For this reason, a new representation called Taylor Expansion Diagrams (TED) has been developed to assist in functional equivalence verification of high-level descriptions of digital designs. TEDs are a compact, canonical, graph-based representation based upon a general non-binary decomposition principle using the Taylor series expansion. RTL computations are viewed as polynomials of a finite degree and TEDs are constructed for them. A set of reduction rules is applied to the diagram to make it canonical. TEDs also have the power to represent word-level algebraic computations in abstract symbolic form, which allows the equivalence checking problem for digital designs to be solved efficiently. The theoretical fundamentals behind TEDs are discussed and their efficient implementation is described. The robustness of the TED representation is analyzed by carrying out equivalence verification experiments over both equivalent and non-equivalent designs. It is shown that TEDs are exceptionally suitable for verifying large designs that contain not only algebraic (arithmetic) datapaths, but also model their interaction with Boolean variables.
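For reference, the non-binary decomposition principle the abstract refers to is the Taylor-series expansion of the RTL computation, viewed as a polynomial, with respect to one variable at a time (the variable names here are illustrative):

```latex
f(x, y, \ldots) \;=\; f(0, y, \ldots) \;+\; x\,\frac{\partial f}{\partial x}(0, y, \ldots)
\;+\; \frac{x^{2}}{2!}\,\frac{\partial^{2} f}{\partial x^{2}}(0, y, \ldots) \;+\; \cdots
```

Each term of the expansion no longer depends on x and is decomposed recursively with respect to the remaining variables; because RTL computations are polynomials of finite degree, the expansion terminates, and the reduction rules then yield a canonical diagram.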
42

Layout and logic techniques for yield and reliability enhancement

Chen, Zhan 01 January 1998 (has links)
Several yield and reliability enhancement techniques have been proposed for the compaction, routing and technology mapping stages of VLSI design. For yield, we modify existing layouts to reduce the sensitivity of the design to random point defects, which are the main yield detractors in today's IC technology. For reliability, we deal with several important failure mechanisms including electromigration, the antenna effect, crosstalk noise and the hot-carrier effect. At the layout compaction stage, new techniques for yield enhancement are presented and the yield improvement results on some industrial examples are shown. These new techniques take 2D connections into consideration when performing 1D compaction, and they consider the problem of data volume control when dealing with hierarchical designs. For this stage of VLSI layout design, we also propose a minimum layout perturbation compaction algorithm for electromigration reliability enhancement. This algorithm can increase the width of wires that have electromigration reliability problems and resolve the design rule violations introduced by the wire widening process with minimum changes to the layout, so that previously achieved layout optimization goals such as area, performance and yield can be preserved as much as possible. At the routing stage, a layer reassignment algorithm is presented for yield enhancement in 2-layer channel routing. This layer reassignment approach is then extended to antenna effect minimization during the 3-layer routing process. We also develop an algorithm which combines layer reassignment, track reassignment and dogleg insertion to reduce crosstalk noise in routing. For the technology mapping stage, a logic-level hot-carrier effect model is presented. Based on this model, a mapping algorithm which targets the hot-carrier effect is proposed, and it is shown that a design with the lowest power measure, which has long been regarded as a rough measure of reliability, is not always the best design for reliability. Experimental results have shown that by applying the proposed techniques it is possible to achieve significant yield and reliability improvements at the layout and logic levels of VLSI design.
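The rationale behind the wire-widening step can be seen from Black's equation, a standard first-order model of electromigration lifetime (a general reference model, not necessarily the exact one used in this dissertation):

```latex
\mathrm{MTTF} \;=\; \frac{A}{J^{\,n}}\,\exp\!\left(\frac{E_a}{k\,T}\right)
```

Here J is the current density in the wire, E_a the activation energy, k Boltzmann's constant, T the temperature, A a material- and geometry-dependent constant, and n an empirical exponent (commonly around 2). Widening a wire lowers J for the same current and so increases the median time to failure; the minimum-perturbation compaction then limits how much the rest of the layout must move to accommodate the wider wires.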
43

Global rational approximation for computer systems and communication networks

Yang, Hong 01 January 1996 (has links)
In the analysis and design of computer systems and communication networks, difficulties often arise due to the so-called "Curse of Dimensionality": prohibitive computational requirements for large systems. When the system is small, we can often get accurate results. As the system size grows, the computational burden becomes overwhelming. In this dissertation, we propose a unified approach, Global Rational Approximation (GRA), to tackle this "Curse of Dimensionality" problem from the standpoint of systems analysis. We observe that quite often the accurate evaluation of such systems is feasible when the system size is small. On the other hand, an impressive amount of knowledge has been accumulated in the past decade regarding qualitative behavior, such as monotonicity, convexity, boundedness and the asymptotic properties of the performance functions, for very general computer/communication systems. The central idea of our approach is to take advantage of the knowledge of the down-sized systems, as well as the asymptotic properties, to finally yield good estimates of performance for large systems. The performance values of down-sized systems can be obtained either from analytic methods or from simulations. This work establishes the theoretical foundation for the GRA approach. Two types of rational approximants are proposed. An approximation scheme for generating a sequence of approximants is also proposed. As an important application of the GRA approach, the dissertation discusses the issue of supporting quality-of-service (QoS) guarantees in ATM networks, specifically the calculation of cell loss probabilities. The dissertation also discusses applications of the GRA approach in many other computer/communication systems. In an example of multiprocessor systems, we consider a model of a particular architecture with a single bus and distributed common memory. For the queue inference engine problem, the mean queue length area is obtained using GRAs. For large Markov chains with certain regularity (a stochastically monotone transition matrix), we also apply the GRA approach to obtain the stationary probability distribution. The theoretical and numerical results indicate that the GRA approach could be a promising tool for many applications in computer systems and communication networks.
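As an illustration of the extrapolation idea only: the sketch below fits a rational function to performance values of small systems and evaluates it for a large one. It is a minimal sketch; the dissertation's approximants also incorporate asymptotic and monotonicity constraints, and the sizes and values below are hypothetical placeholders.

```python
import numpy as np

def fit_rational(sizes, values, p_deg=1, q_deg=1):
    """Fit f(n) ~ P(n)/Q(n) by linearized least squares, with Q normalized so q_0 = 1."""
    # Linearize f(n) * Q(n) = P(n):  P(n) - f(n) * (q_1 n + ... + q_qdeg n^qdeg) = f(n)
    A = []
    for n, f in zip(sizes, values):
        row = [n ** k for k in range(p_deg + 1)]            # numerator coefficients p_0..p_pdeg
        row += [-f * n ** k for k in range(1, q_deg + 1)]   # denominator coefficients q_1..q_qdeg
        A.append(row)
    coeffs, *_ = np.linalg.lstsq(np.array(A, dtype=float),
                                 np.array(values, dtype=float), rcond=None)
    p = coeffs[:p_deg + 1]
    q = np.concatenate(([1.0], coeffs[p_deg + 1:]))
    # Return the approximant as a callable (np.polyval expects highest-degree-first coefficients).
    return lambda n: np.polyval(p[::-1], n) / np.polyval(q[::-1], n)

# Exact (or simulated) performance values for small system sizes -- placeholder numbers.
small_sizes = [2, 4, 6, 8, 10]
small_values = [0.9, 1.7, 2.4, 3.0, 3.5]
approx = fit_rational(small_sizes, small_values)
print(approx(100))   # estimated performance of a much larger system
```

The choice of numerator and denominator degrees is what lets a rational approximant reproduce the known asymptotic behavior (bounded, linear, and so on) as the system size grows.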
44

Computer-aided design-for-reliability of deep sub-micron integrated circuits

Dasgupta, Aurobindo 01 January 1996 (has links)
The last few years have witnessed a revolution in integrated circuit (IC) fabrication technology, leading to high packing densities of ICs. Although this has paved the way for increased portability and faster clock rates, it has also given rise to new problems. One such problem is the increased unreliability of deep sub-micron ICs. In this dissertation, we address reliability concerns that are caused by soft errors, electromigration (EM) and hot-carrier effects within a top-down design methodology. Traditionally, reliability has been addressed late in the design process (usually at the device and circuit levels). At this stage, improvements in reliability are obtained at the cost of degrading performance gains already incorporated earlier in the IC design process. This results in wasted design effort and increased time-to-market. In this dissertation, we propose novel techniques that enhance IC reliability. (1) Soft errors are minimized at the system level by suitably synthesizing a multiprocessor for an application task graph. (2) Electromigration-induced failures are reduced at the RT level by suitably sequencing the data transfers onto buses. (3) Hot-carrier-induced damage is minimized at the switch level by suitably reordering the inputs to the logic gates and by resizing the hot-carrier-affected MOSFETs. The proposed techniques for reliability enhancement were evaluated on benchmark examples. High-reliability designs are synthesized subject to constraints on delay, energy dissipation and area. Finally, the results are validated by simulating the synthesized reliability-optimized circuits.
45

Adaptive wavelet packets for image and video compression

Hsu, Wei-Lien 01 January 1996 (has links)
In this dissertation, we investigate the design of image and video compression algorithms using adaptive wavelet packet decomposition. For image compression, we present a spatial and frequency decomposition algorithm that extends and improves the existing "double tree" algorithm (2) by using a more flexible merging scheme, a more efficient quantizer, and an improved initial slope estimator. The new spatial merging scheme allows mergers, and hence region shapes, that are not possible with a quad-tree structure; this generalization yields a noticeable improvement in the overall R-D performance. The proposed Scalar/Pyramidal Lattice Vector quantizer improves the coding efficiency of the wavelet packet coefficients. The initial slope estimator drastically reduces the computations needed to obtain the optimum decomposition. For video compression, we first develop an efficient temporal, spatial and frequency decomposition method for video coding. In this method, the given video sequence is decomposed into small volumes which adapt to the nonstationarity and the motion of the video signals. A three-dimensional best wavelet packet is generated for each volume based on a rate and distortion criterion, such that the subbands of this 3D wavelet packet can be coded more efficiently than the original signals. This method does not require motion estimation/compensation; the motion information is contained in the high temporal subbands. It allows us to implicitly represent and code the motion information based on the given bit rate constraint. To alleviate the computational complexity of the 3D wavelet packet decomposition, in our second video compression algorithm a large amount of temporal redundancy is removed through the use of low-high temporal subbanding and a DPCM (Differential Pulse Code Modulation) procedure. The coefficients of the high temporal subbands, and of the original lowpass subbands or the difference of the low subbands from the DPCM loop, are decomposed by adaptive wavelet packets based on the rate and distortion criterion. Since the adaptive wavelet packet representation is capable of achieving better R-D performance than subband coding or wavelet decompositions, the video coding algorithm proposed in this dissertation can be expected to yield an improvement over traditional 3D subband coding techniques. For possible use in various applications, rate-constrained and quality (distortion) constrained versions of this video coding algorithm are developed and analyzed.
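The rate and distortion criterion used to select a best wavelet packet is, in its standard Lagrangian form (the exact cost in the dissertation may differ in detail), a bottom-up pruning of the decomposition tree: a node n is kept as a leaf whenever its own cost does not exceed the best achievable cost of its children,

```latex
J(n) = D(n) + \lambda R(n), \qquad
C(n) = \min\Bigl( J(n), \sum_{c \,\in\, \mathrm{children}(n)} C(c) \Bigr),
```

where D(n) is the quantization distortion of the subband at node n, R(n) its coding rate, and \lambda the Lagrange multiplier, i.e. the R-D "slope" that fixes the target operating point on the rate-distortion curve.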
46

Algorithms and protocols towards energy-efficiency in wireless networks

Namboodiri, Vinod 01 January 2008 (has links)
A plethora of wireless technologies promise ubiquitous communication in the future. A major impediment to this vision is the limited energy supply from batteries in these devices. Energy supply has far-reaching implications on user experience as well as the practicality of many envisaged applications. The energy consumed for radio communication is a major factor in the operating lifetimes of these devices. This calls for the design of network protocols and algorithms that reduce energy consumption due to the wireless interface. In this dissertation we look at three emerging application scenarios of wireless networks and present our solutions to make radio communication energy-efficient in these scenarios. We first look at the scenario of VoIP calls over wireless LANs, where we propose an algorithm to reduce energy wasted in the idle mode of the wireless interface. We show that, in spite of the interactive, real-time nature of voice, energy consumption during calls can be reduced by close to 80% in most instances. Next, we consider the scenario of tag anti-collision protocols for radio frequency identification (RFID) systems and propose three protocols that reduce energy consumption due to packet collisions. We demonstrate that all three protocols provide 40-70% energy savings both at the reader and the tags (if they are active tags). Finally, we consider topology control algorithms for wireless mesh networks that derive transmit power levels that prove to be energy-efficient. We propose a non-uniform model of gain for nodes using switched-beam directional antennas and develop algorithms for this model, introducing antenna orientation as a parameter in topology construction. Through our evaluations, we demonstrate that 20-40% power level reductions are possible with our approach.
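As a hedged illustration of the kind of link constraint such topology-control algorithms reason about (the notation and path-loss model here are illustrative assumptions, not the dissertation's): with a path-loss exponent \alpha and a direction-dependent transmit gain G_t(\theta) for a switched-beam antenna, the minimum power needed to sustain the link from node u to node v is

```latex
P_t(u, v) \;\ge\; \frac{\beta \, d(u, v)^{\alpha}}{G_t(\theta_{uv}) \, G_r},
```

where \beta is the receiver sensitivity threshold and G_r the receive gain. Because G_t varies with the beam orientation \theta_{uv}, the orientation itself becomes a decision variable in topology construction, which is the role antenna orientation plays in the algorithms above.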
47

Software-based permanent fault recovery techniques using inherent hardware redundancy

Xu, Weifeng 01 January 2007 (has links)
Recent advances in deep submicron (DSM) technology have had an adverse impact on the long-term lifetime reliability of semiconductor devices. According to the reliability report from the International Technology Roadmap for Semiconductors (ITRS), smaller feature sizes and higher power densities make DSM devices more susceptible to wear-out failures. As a consequence, permanent faults are more likely to occur in DSM devices at runtime. To ensure system reliability and availability, fault-tolerant techniques must be applied to overcome these runtime permanent faults. For systems requiring non-stop computation, a full duplication of system hardware components is usually required, which incurs a high overhead in hardware cost. For systems that allow a short period of downtime, however, low-cost software techniques that take advantage of the inherent hardware redundancy of computing devices, such as Field Programmable Gate Arrays (FPGAs) and Very Long Instruction Word (VLIW) processors, can potentially be applied as an intermediate fault recovery step. These techniques can reconfigure the computation of a faulty device to maintain system operation until the faulty device can be replaced. To maintain correct computation on a faulty device, operations originally assigned to faulty resources must be moved to fault-free device resources. This process requires two phases: a testing phase to locate faults and a recovery phase to eliminate the usage of faulty resources in the computation. In this dissertation, we present software techniques that address specific testing and recovery challenges for FPGAs and VLIW processors. For FPGAs, we focus on testing and recovering from path delay faults. Path delay faults occur when the maximum delay of at least one critical path exceeds the maximum allowable system delay due to a permanent fault. To locate paths with delay faults, a built-in self-test (BIST) approach is presented to evaluate all combinations of signal transitions along critical paths. To recover from path delay faults, a timing-driven incremental router is used to reroute paths affected by the faults. To facilitate fast fault recovery, information from the initial design route is used to guide the reroute process. Since many embedded systems have a limited amount of local computational resources, a network-based recovery system has been developed. A computationally superior server performs the FPGA fault recovery and sends the results back to the affected client, completing the recovery process. Experiments on the recovery system have shown that the incremental router provides a speedup of up to 12x compared with a commercial incremental flow. For VLIW processors, we focus on recovering from permanent faults in registers. To maintain VLIW functionality after detecting faulty registers, programs must be recompiled to assign variables to fault-free registers. One issue with recompilation is possible performance loss due to increased register requirements. To address this problem, a register pressure control technique is presented to reduce register requirements. To demonstrate its advantages, the technique has been integrated into an academic VLIW compiler. Experimental results have shown that the technique improves performance by 14% compared with an academic VLIW flow.
48

Schemes to reduce test application time in digital circuit testing

Saxena, Jayashree 01 January 1993 (has links)
Test application time contributes significantly to the cost of VLSI testing. While it is essential to be able to perform test generation efficiently, it is also necessary to minimize the time spent in applying the tests to individual units. In this dissertation, we study the use of a hybrid scheme that combines scan testing and sequential testing in sequential circuits with full scan. The scheme exploits the finite state machine connectivity of the sequential machine in the test mode, thus avoiding time-consuming scan-in and scan-out. However, the scan facility is used both for faults that cannot be detected sequentially and for faults for which efficient sequential test sequences cannot be found. Two algorithms for test generation in the hybrid scheme are described. Experimental results are reported for the stuck-at fault and the transition fault models. For the stuck-at fault model, the percentage reduction in test application time compared to full scan is up to 87% in the ISCAS89 benchmark circuits. For the transition fault model, the percentage reductions fall between 21% and 82%. The hybrid test vectors generated under a stuck-at fault model also detect certain transition faults when applied at-speed. For the ISCAS89 circuits, the hybrid stuck-at test set is seen to achieve a transition fault coverage of between 52% and 91%. The hybrid test generation algorithm for sequential circuits with full scan can be applied to any fault model. However, for complex fault models such as the path delay fault model, the problem of reducing test application time is important for combinational circuits as well. In this dissertation, a method to obtain compact test sets for path delay faults in combinational circuits is presented. Test set size is seen to decrease significantly using this approach, thus reducing test application time. In summary, this dissertation examines the use of an alternative test application strategy for sequential circuits with full scan in order to reduce test application time. For combinational circuits, a method to derive smaller and more compact test sets has been presented for the path delay fault model.
49

Wavelength division multiplexed optical LANs/MANs: Architectures, protocols and implementations

Li, Bo 01 January 1994 (has links)
For multiuser local and metropolitan area networks, lightwave technology clearly provides a great opportunity to share an enormous network capacity, potentially tens of terabits per second on a single optical fiber, among all network users. Future high-speed broadband optical networks can be expected to support hundreds or even thousands of users, each requiring gigabits per second of throughput. The major limitation in implementing such networks is the lack of correspondingly fast optical switching: although transmission is optical, the maximum rate at which each end user can access the network is limited by the much slower electronic devices. Therefore, the key to designing lightwave networks that exploit this huge bandwidth is to introduce concurrency among multiple user transmissions into the network architectures and protocols. Wavelength Division Multiplexing (WDM) provides such concurrency. The main objective of the thesis is to conduct extensive studies of WDM networks toward a technically feasible and practical system. The work in this thesis can be divided into three parts. The first part deals primarily with the virtual topology design problem for multihop lightwave networks; its main focus is to design a virtual topology that takes advantage of the unique features of WDM networks, such as their broadcast nature, tunability and system reconfigurability, while also respecting their limitations, such as the limited number of transmitters and receivers per node and the limited number of available wavelengths. In the second part of the thesis, we restrict our attention to the design of a feasible architecture that can be implemented with current technology, such as fast, limited-range components. In the third part we aim to design a more efficient channel access protocol, which must be scalable and able to support integrated services for multiple classes of traffic.
50

Sample path analysis and control of finite capacity queueing systems

Sparaggis, Panayotis Dionysios 01 January 1994 (has links)
Sample path techniques are commonly used to explore control aspects of queueing systems. Using these techniques, one is primarily interested in comparing the performance of different systems by constructing some stochastic processes of interest on a common probability space. In this way one may then show that, in fact, sample path trajectories in one system always dominate trajectories in another system under a proper coupling. This dissertation studies queueing systems with finite capacities from a sample path perspective. We identify structural properties and determine optimal control policies for routing, scheduling and some more general systems. As will be observed in the analysis, some classical orderings such as majorization may not be directly applicable to state vectors in the presence of finite buffers. Nevertheless, certain output counting processes (e.g., the departure or loss counting processes) may in fact interact with state processes in a way that supports monotonicity results on the former. In this dissertation we propose a set of analytical tools that can be used to compare sample paths in systems with finite buffers. These include a new componentwise ordering and a new majorization ordering, which can be used to accommodate an array of problems under a comparison framework. Furthermore, we identify and clearly state the limitations and statistical conditions under which fairly general results hold. We show, for instance, that the assumption of exponential service time distributions is crucial in proving certain structural and optimality results. Moreover, results that apply to systems where state information is available are stronger than those applying to systems with no or limited state information. From a practical viewpoint, we obtain structural properties and determine optimal control policies for a variety of systems with finite buffers. We treat both symmetric and asymmetric systems. For the latter, we propose procedures that considerably improve the performance of probabilistic algorithms which are commonly used in the absence of state information. Simulation results demonstrate this performance improvement.
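For reference, the classical majorization ordering mentioned above compares two state vectors x, y \in \mathbb{R}^m after sorting their components in decreasing order (x_{[1]} \ge x_{[2]} \ge \cdots):

```latex
x \prec y \;\iff\; \sum_{i=1}^{k} x_{[i]} \le \sum_{i=1}^{k} y_{[i]} \;\; (k = 1, \ldots, m-1)
\quad\text{and}\quad \sum_{i=1}^{m} x_{[i]} = \sum_{i=1}^{m} y_{[i]}.
```

With finite buffers the total content of the two systems need not remain equal, since customers can be lost, which is why the dissertation introduces modified componentwise and majorization orderings that are compared jointly with the loss and departure counting processes.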
