51

Low-cost schemes for fault tolerance

Vaidya, Nitin Hemant 01 January 1993
Two aspects of fault tolerance are fault diagnosis and fault recovery. This dissertation studies both aspects and presents low-cost schemes for achieving diagnosis and recovery. Two models for fault tolerance are studied, namely, modular redundancy and system-level diagnosis. Modular redundant systems achieve fault detection and recovery by employing multiple replicas of each module. Such systems try to mask failures whenever possible. When high reliability is to be achieved with low redundancy, it is not always possible to mask failures without retrying the computation. Checkpointing and rollback recovery is a technique that tries to minimize the expense of retrying. Multiprocessor fault tolerance schemes using modular redundancy are proposed here to minimize this expense further by exploiting the inherent redundancy offered by modular redundant systems. The proposed schemes are shown to improve the performance of modular redundant systems in the presence of faults, as compared to rollback schemes. A trade-off exists between the cost and performance of any fault-tolerant system. For modular redundant systems, this trade-off can be exploited to achieve high reliability at low cost by trading off performance. The cost-performance trade-off is governed by the reliability-safety trade-off of the modular redundant system. This trade-off is studied, and the effect of increasing the level of redundancy on the reliability and safety of a modular redundant system is analyzed. System-level diagnosis is a graph-theoretic approach for diagnosing the status of the modules in a system. A method for minimizing the cost of diagnosis, named safe diagnosis, is proposed. It is shown that a high level of diagnostic safety, in addition to existing diagnostic reliability, can be achieved with low overhead. Additionally, it is shown that achieving high safety does not increase the complexity of fault diagnosis algorithms.
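
The checkpoint-and-rollback baseline that the abstract compares against can be illustrated with a minimal sketch (hypothetical names and fault model, not code from the dissertation): state is saved periodically, and a detected fault forces a retry from the last saved state.

```python
import copy
import random

random.seed(1)  # deterministic demo

def run_with_rollback(steps, compute_step, fault_rate=0.1, checkpoint_every=5):
    """Toy checkpoint/rollback loop: save state periodically; on a
    detected fault, restore the last checkpoint and retry the lost work."""
    state = {"step": 0, "value": 0}
    checkpoint = copy.deepcopy(state)
    rollbacks = 0
    while state["step"] < steps:
        if state["step"] % checkpoint_every == 0:
            checkpoint = copy.deepcopy(state)  # save a recovery point
        if random.random() < fault_rate:       # a fault is detected here
            state = copy.deepcopy(checkpoint)  # roll back and retry lost work
            rollbacks += 1
            continue
        state["value"] = compute_step(state["value"])
        state["step"] += 1
    return state["value"], rollbacks

value, rollbacks = run_with_rollback(20, lambda v: v + 1)
print(f"result={value}, rollbacks={rollbacks}")
```

The expense the dissertation targets is exactly the retried work between the checkpoint and the fault, which this sketch pays in full on every rollback.
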
52

Unsupervised segmentation of noisy and textured images modelled with Gibbs random fields

Won, Chee Sun 01 January 1990
We view a given image as a realization of a doubly stochastic image model, made up of one or more observable noise (or texture) processes and a hidden region process. Specifically, a Gaussian-Markov random field model is used for the noise (or texture) processes and a Gibbs random field model is used for the region process. Adopting these stochastic models for representing images, our objective is to use an estimation-theoretic method for segmenting images into regions with similar features. We assume no prior knowledge about the model parameter values or the number of regions in the image. To achieve this objective, it is necessary to estimate the model parameters from the given noisy (or textured) image. Thus, we study the existence and uniqueness of the maximum likelihood (ML) and maximum pseudo-likelihood (MPL) estimates for a class of Gibbsian/Exponential distributions. This study allows us to devise new implementations of some known parameter estimation techniques. These new implementations are then used to devise an unsupervised image segmentation algorithm. We adopt the maximum a posteriori (MAP) estimation criterion for the simultaneous parameter estimation and segmentation problem. Since direct maximization of the MAP criterion is infeasible, we modify the criterion to make it implementable. Specifically, some of the model parameters are eliminated from the maximization by substituting their ML estimates into the probability distributions. The remaining parameters are estimated iteratively along with the segmentation, which is implemented through a relaxation procedure. Due to the deviation from the optimal maximization, the resulting criterion is a modified MAP, and the resulting segmentation is a partial optimal solution (POS) of the overall maximization. Obtaining POSs under different assumptions for the number of regions in the image, we choose the optimal number of regions by maximizing a new model-fitting criterion. This general unsupervised segmentation is then adapted to two classes of images, namely, noisy images and textured images, with a version of the algorithm developed for each class. The performance of the algorithm is tested on a wide range of noisy (and textured) images. Despite the difficulty of the problem, the algorithm yields good segmentations, accurate parameter estimates, and the correct number of regions.
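
The flavor of the relaxation step can be sketched with a classic iterated-conditional-modes (ICM) update for a Gaussian noise model with a Potts-style Gibbs prior. This is a simplified stand-in with assumed parameters (`means`, `sigma`, `beta`), not the dissertation's algorithm, which also estimates the parameters and the number of regions:

```python
import numpy as np

def icm_segment(image, means, sigma=1.0, beta=1.5, sweeps=5):
    """Relaxation labeling by iterated conditional modes: each pixel takes
    the label minimizing a Gaussian data term plus beta times the number
    of 4-neighbors that disagree (a Potts-style Gibbs region prior)."""
    labels = np.abs(image[..., None] - np.array(means)).argmin(-1)
    H, W = image.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                best, best_cost = labels[i, j], np.inf
                for k in range(len(means)):
                    data = (image[i, j] - means[k]) ** 2 / (2 * sigma**2)
                    disagree = sum(
                        labels[i + di, j + dj] != k
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < H and 0 <= j + dj < W
                    )
                    if data + beta * disagree < best_cost:
                        best, best_cost = k, data + beta * disagree
                labels[i, j] = best
    return labels

rng = np.random.default_rng(0)
truth = np.zeros((8, 8)); truth[:, 4:] = 1          # two-region ground truth
noisy = truth + rng.normal(0.0, 0.4, truth.shape)   # additive Gaussian noise
print(icm_segment(noisy, means=[0.0, 1.0], sigma=0.4))
```

Like the modified MAP procedure in the abstract, ICM converges only to a partial (local) optimum of the posterior, not the global maximum.
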
53

Analytical approach to VLSI logic synthesis

Yang, Saeyang 01 January 1990
First, an analytical method for the minimization of multiple-valued input Boolean functions is investigated. The method is based on the reduction of the logic minimization problem to graph coloring. Implicants of a special type, called Minimally Split Product Implicants (MSI), are generated from a set of input cubes, and a graph which represents incompatibility relations between implicants is constructed from this set. Grouping of the MSI implicants into a minimum cardinality cover is then obtained by coloring the incompatibility graph of implicants. It can be shown that optimum results can be obtained with this set of implicants, provided that an optimum graph coloring is found. Second, a new theoretical formulation of the input encoding problem is presented, based on the concept of compatibility of dichotomies. The input encoding problem is shown to be equivalent to two-level logic minimization. Three possible techniques to solve the encoding problem are discussed, based on: (1) techniques borrowed from classical logic minimization (generation of prime dichotomies and solving the covering problem), (2) graph coloring applied to the incompatibility graph of dichotomies, and (3) extraction of essential prime dichotomies followed by graph coloring. For near-optimum results a powerful heuristic, based on an iterative improvement technique, has been developed. The new method can be applied to the input encoding of combinational logic as well as the state assignment of Finite State Machines (FSM) in both two-level and multi-level implementations. Third, a method for four-level optimization by Programmable Logic Array (PLA) decomposition is presented. A single two-level Boolean function is decomposed into two stages of cascaded PLA's, such that the total area of all PLA's is smaller than that of the original PLA. Finally, the concluding chapter suggests a number of important directions for research in sequential logic synthesis. The integration of logical and physical design steps in VLSI is emphasized. (Abstract shortened with permission of author.)
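
The graph-coloring step at the heart of both the minimization and encoding methods can be sketched as follows, over a hypothetical incompatibility graph. Note that the dissertation's optimality claim requires an optimum coloring; the greedy largest-degree-first heuristic shown here does not guarantee one:

```python
def greedy_color(n, incompatible):
    """Color vertices 0..n-1 so that no incompatible pair shares a color.
    Each color class then forms one group of the cover. Greedy heuristic
    only; an optimum coloring is needed for the optimality guarantee."""
    adj = {v: set() for v in range(n)}
    for u, v in incompatible:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):  # largest degree first
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return color

# Five hypothetical MSI implicants; edges mark pairs that cannot be grouped.
print(greedy_color(5, [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]))
```

The same skeleton applies to the encoding formulation, with dichotomies in place of implicants and incompatibility of dichotomies defining the edges.
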
54

Lightpath communications: A novel approach to optical WANs

Karmi, Gadi 01 January 1990
This dissertation presents a new approach, the Lightnet architecture, addressing the substantial mismatch between optical transmission bandwidth and electronic switching speeds. The Lightnet is a wide area network that takes advantage of the convenient topologies available in the local and metropolitan area domains. By establishing lightpaths (direct optical communication paths), it provides the virtual links required to realize regular virtual topologies. The fixed allocation of transmission bandwidth to certain paths (establishing the lightpaths) creates a tradeoff between transmission and processing bandwidth. By sacrificing some of the ample transmission bandwidth, both the number of switching instances a packet incurs in an end-to-end transmission and the amount of processing required at each switching instance are decreased. Thus, less processing is required per packet delivery, or alternatively, more packets may be delivered using the same limited processing resources, resulting in increased user-available throughput. In terms of performance, the Lightnet is shown to increase user-available throughput by up to almost an order of magnitude for the sample networks and virtual topologies studied. In terms of buffering requirements at intermediate nodes, it is shown that the Lightnet can carry the same load as a conventional store-and-forward network using less than half the number of buffers, or alternatively can carry a substantially higher load using approximately the same number of buffers. In terms of hardware requirements, by using the virtual topology embedding, lightpath routing, and wavelength assignment algorithms presented in the dissertation, the Lightnet is shown to be implementable with a moderate number of wavelengths (e.g., 14 wavelengths were required to establish a 32-node hypercube virtual topology over a sample Arpanet-like physical network) and moderate photonic switch sizes. (This work was funded under DARPA grant number NAG-2-578.)
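
A minimal sketch of the wavelength-assignment subproblem (hypothetical data; the dissertation's actual embedding and routing algorithms are not reproduced): lightpaths that share a fiber link must use different wavelengths, and first-fit greedily assigns the lowest free one.

```python
def first_fit_wavelengths(lightpaths):
    """First-fit wavelength assignment: each lightpath is a set of fiber
    links, and paths sharing a link must carry different wavelengths.
    Greedy sketch, not the dissertation's assignment algorithm."""
    assigned = []  # list of (links, wavelength) pairs
    for links in lightpaths:
        used = {w for (other, w) in assigned if other & links}
        w = next(i for i in range(len(lightpaths)) if i not in used)
        assigned.append((links, w))
    return [w for _, w in assigned]

# Hypothetical lightpaths over fiber links named by their endpoints.
paths = [{"AB", "BC"}, {"BC", "CD"}, {"AB"}, {"CD", "DE"}]
print(first_fit_wavelengths(paths))  # -> [0, 1, 1, 0]
```

The number of distinct wavelengths this produces is what the dissertation's 14-wavelength hypercube result bounds for a much larger instance.
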
55

High-level synthesis of data-driven ASICs

Patel, Baiju V 01 January 1991
A novel approach to high-level synthesis of ASICs based on a data-driven execution model is presented. The synthesis procedure is directed at producing highly parallel ASICs that provide high throughput using pipelining. The major benefits of our approach are its potential for higher speed, ease of design, and ease of verification and testing. The application is specified in the functional language SISAL, which is translated to a data flow graph (DFG). This DFG is then directly mapped onto silicon, resulting in a circuit which resembles the DFG itself. Next, area minimization and buffer allocation steps are carried out to meet specified area and performance requirements. Design-for-testability features based on the functional description are used to enhance controllability and observability. A hierarchical test generation procedure based on a functional fault model is developed. Synthesis tools incorporating all these features have been implemented.
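
The data-driven execution model that the synthesized circuits mirror can be sketched with a toy interpreter (hypothetical node and edge names, not the dissertation's mapping): a node fires as soon as tokens are present on all of its inputs.

```python
from collections import deque

def run_dataflow(nodes, edges, inputs):
    """Toy data-driven execution: a node fires (applies its operation)
    whenever tokens are present on all of its input edges, consuming
    one token per input and producing one token per output."""
    tokens = {e: deque() for e in edges}
    for e, vals in inputs.items():
        tokens[e].extend(vals)
    fired = True
    while fired:
        fired = False
        for name, (op, ins, outs) in nodes.items():
            if all(tokens[e] for e in ins):
                result = op(*(tokens[e].popleft() for e in ins))
                for e in outs:
                    tokens[e].append(result)
                fired = True
    return tokens

# Hypothetical DFG computing (a + b) * c.
nodes = {
    "add": (lambda x, y: x + y, ["a", "b"], ["s"]),
    "mul": (lambda x, y: x * y, ["s", "c"], ["out"]),
}
edges = ["a", "b", "c", "s", "out"]
result = run_dataflow(nodes, edges, {"a": [2], "b": [3], "c": [4]})
print(result["out"])  # deque([20])
```

In the hardware realization described by the abstract, each such node becomes a circuit block and each edge a buffered connection, which is why the circuit resembles the DFG itself.
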
56

Clock period minimization with wave pipelining

Joy, Donald Arthur 01 January 1991
In conventional pipelined designs, one set of signals is allowed to propagate between sets of flip-flops at any instant; the flip-flops provide the intermediate memory for the pipeline. This dissertation explores the minimization of the clock period through wave pipelining, in which more than one set of signals is allowed to propagate on the logic paths simultaneously. A linear program that minimizes the clock period is explored and used to find the points in the circuit where logic-signal interference prevents further reduction of the clock period. Using CMOS standard cells, the wave pipelining characteristics of a layout are determined and iteratively improved to allow more complete wave pipelining of logic signals. Since wave pipelining depends on the circuit path delays, the improvement of the circuit's wave pipelining characteristics is interwoven with a standard cell placement procedure. Using this technique, the circuit delays can be estimated, and the resulting circuit therefore approximates the wave pipelining characteristics given by the algorithm. The placement algorithms minimize wire length, giving preference to maximum-delay paths. The circuit's wave pipelining characteristics are determined, and the gates that critically constrain the wave pipelining process are identified. The critical points are then improved by delay addition and deletion as the algorithm progresses. Thus, the algorithm presents a method of iteratively improving the clock period through more complete wave pipelining of signals. Results are dependent upon the circuit being optimized. Implications of the use of this linear program and of circuit feedback are discussed, as are methods of redefining the linear program constraints.
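
To make the linear-programming idea concrete, here is a toy formulation with assumed path delays, safety margin, and pad limits; the dissertation's LP models gate-level signal interference in far more detail. Consecutive waves on a path must not interfere, which roughly requires the clock period to exceed the path's longest-minus-shortest delay after any inserted padding:

```python
from scipy.optimize import linprog

# Hypothetical per-path data: (longest delay, shortest delay, max insertable pad).
paths = [(10.0, 4.0, 3.0), (8.0, 6.0, 1.0)]
margin = 1.0  # assumed setup/hold safety margin

# Variables x = [T, pad_0, pad_1]: minimize clock period T subject to
#   T >= (Dmax_i - (Dmin_i + pad_i)) + margin   for every path i,
# rewritten as  -T - pad_i <= -(Dmax_i - Dmin_i + margin).
c = [1.0] + [0.0] * len(paths)
A_ub, b_ub = [], []
for i, (dmax, dmin, _) in enumerate(paths):
    row = [-1.0] + [0.0] * len(paths)
    row[1 + i] = -1.0
    A_ub.append(row)
    b_ub.append(-(dmax - dmin + margin))
bounds = [(0, None)] + [(0, cap) for (_, _, cap) in paths]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"minimum clock period T = {res.x[0]:.2f}, pads = {res.x[1:]}")
```

In this tiny instance the binding constraint identifies the critical path, which corresponds to the "points in the circuit where logic signal interference prevents further minimization" that the abstract mentions; delay addition (the pads) relaxes it.
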
57

Trace-based fault simulation methods

Song, Ohyoung 01 January 1992
Trace-based methods have been shown to be more effective than traditional fault simulation methods. The goal of this dissertation is to further accelerate trace-based fault simulation for combinational and synchronous sequential circuits. The use of general-purpose shared-memory multiprocessors for effective trace-based fault simulation is also investigated. Significant improvements in the speed of fault simulation of combinational circuits have been achieved by combining parallel-pattern simulation of the fault-free circuit with tracing-based methods for identifying detected faults. We present methods of achieving further speed improvements by reducing both the amount of backtracing within fanout-free regions and the explicit fault simulation of stem faults. Results of simulating a set of benchmark combinational circuits with the proposed methods indicate that they are faster than other published methods, both with and without fault dropping. We improve the speed of fault simulation of synchronous sequential circuits by using a linear iterative array model for such a circuit and combining parallel fault simulation with surrogate fault simulation. The propagation of faults whose effects have not propagated from state variables in the previous time frame can be determined by backtracing from their surrogate lines, using the concept of surrogate faults. The remaining faults, along with the surrogate faults, need explicit forward propagation, for which parallel fault simulation is used. Also, backtracing is extended to handle 0, 1, and X (unknown) signal values to represent unknown initial states of the sequential circuit. The results of simulating a set of benchmark sequential circuits show that execution time is reduced by 7% to 54% compared to a method reported to be one of the fastest. The trace-based method is parallelized on a general-purpose multiprocessor with shared memory, and the effect of the number of processors on simulation speed-up and processor utilization is studied. The algorithm is based on a synchronous simulation method using a global simulation clock and task partitioning.
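
The parallel-pattern idea can be sketched as follows: pack one bit per test pattern into an integer per signal, so that each bitwise gate evaluation simulates all patterns at once (hypothetical netlist format; the backtracing machinery for identifying detected faults is omitted):

```python
def simulate_parallel(netlist, patterns):
    """Parallel-pattern fault-free simulation: bit p of values[sig] is
    signal sig's value under test pattern p, so one bitwise operation
    per gate evaluates every pattern simultaneously."""
    width = len(patterns)
    mask = (1 << width) - 1
    values = {}
    for sig in patterns[0]:  # pack primary-input bits into integers
        values[sig] = sum(patterns[p][sig] << p for p in range(width))
    for out, op, ins in netlist:  # netlist assumed in topological order
        a = values[ins[0]]
        b = values[ins[1]] if len(ins) > 1 else 0
        if op == "AND":
            values[out] = a & b
        elif op == "OR":
            values[out] = a | b
        elif op == "NOT":
            values[out] = ~a & mask
    return values

# Hypothetical circuit y = NOT(a AND b), four patterns simulated at once.
netlist = [("n1", "AND", ["a", "b"]), ("y", "NOT", ["n1"])]
pats = [{"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 0}, {"a": 1, "b": 1}]
vals = simulate_parallel(netlist, pats)
print(format(vals["y"], "04b"))  # "0111": per-pattern outputs of the NAND
```

Tracing-based detection then works backward from these packed fault-free values rather than re-simulating each fault explicitly.
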
58

Testable designs for CMOS VLSI circuits

Park, Bong-Hee 01 January 1992
Testing of Complementary Metal Oxide Semiconductor (CMOS) circuits has become extremely important due to the emergence of CMOS as a dominant technology for Very Large Scale Integrated (VLSI) circuits. The classical line stuck-at fault model is not adequate for modeling all physical faults in CMOS circuits. One such fault is the Field Effect Transistor (FET) stuck-open fault. Much work has been done on testing and testable designs for FET stuck-open faults in CMOS combinational circuits, but very little has been reported on CMOS sequential circuits. The objective of this research is to investigate testable design techniques for CMOS sequential circuits in which all stuck-open faults are detectable. Problems in detecting stuck-open faults in CMOS sequential circuits are addressed, and in-depth research on testable realizations for stuck-open faults in CMOS sequential circuits using new testing methodologies is presented. A testable design method for CMOS sequential circuits realizing a given state table, using standard scan paths and the One-Scan/Two-Clock (1S/2C) testing methodology, has been investigated; the state of the second vector is generated by shifting in the state of the first vector and applying primary input values using the circuit under test itself. Necessary conditions for a stuck-open fault in a CMOS sequential circuit to be robustly testable under the 1S/2C and One-Scan/One-Clock (1S/1C) testing methodologies are shown. A 1S/1C testable stuck-open fault can be detected by shifting in one state and applying two primary input values without clocking. Problems and testable designs for the 1S/1C testing methodology in CMOS complex gates, two-level NAND-NAND realizations, and NAND-XOR realizations are presented. Testable designs for CMOS sequential circuits without scan paths have also been investigated, and two methods to obtain non-scan testable CMOS sequential circuits are introduced. One adopts dynamic CMOS logic circuits, such as domino logic and differential cascode voltage switch (DCVS) logic circuits, that are known to be easily testable for stuck-open faults using the techniques presented for 1S/2C testable design. The other uses testable sequential machines synthesized for gate-level stuck-at faults together with the techniques developed for 1S/1C testable design.
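
Why stuck-open faults need two-pattern tests can be shown with a toy switch-level model of a CMOS NAND gate (a sketch with assumed transistor names, not from the dissertation): a stuck-open transistor leaves the output floating, so the output retains its stored charge, and the fault is visible only if an initialization vector first drives the output to the opposite value.

```python
def cmos_nand(a, b, stuck_open_fet=None, prev_out=1):
    """Toy switch-level model of a CMOS NAND gate. A stuck-open FET never
    conducts; when neither network conducts, the output node floats and
    retains its previously stored value, so one vector cannot detect it."""
    pull_up = (not a and stuck_open_fet != "pA") or (not b and stuck_open_fet != "pB")
    pull_down = (a and stuck_open_fet != "nA") and (b and stuck_open_fet != "nB")
    if pull_up:
        return 1
    if pull_down:
        return 0
    return prev_out  # floating output keeps its stored charge

# Two-pattern test for (hypothetical) pFET "pA" stuck-open:
# initialize with (1,1), which drives the output to 0, then apply (0,1).
init_good = cmos_nand(1, 1)                                    # -> 0
init_bad = cmos_nand(1, 1, stuck_open_fet="pA")                # -> 0
good = cmos_nand(0, 1, prev_out=init_good)                     # -> 1
bad = cmos_nand(0, 1, stuck_open_fet="pA", prev_out=init_bad)  # -> 0, retained
print(f"fault-free={good}, faulty={bad}")  # the mismatch detects the fault
```

The 1S/2C and 1S/1C methodologies in the abstract are precisely mechanisms for delivering such ordered vector pairs to gates buried inside a sequential circuit.
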
59

Methods for improving the efficiency of high-speed communication networks

Zhang, Tao 01 January 1993
This dissertation presents new solutions, several of them the first of their kind, to five crucial problems in high-speed Asynchronous Transfer Mode (ATM) networks and in fiber optic networks. First, a new congestion control strategy targeted at integrated services in high-speed ATM networks is presented to provide congestion-free network control. The strategy is designed to take full advantage of the potential efficiency and flexibility provided by ATM. The resulting control strategy supports different service rates and provides bounded end-to-end queueing delay for each real-time service class according to its individual requirements, while providing a best-effort service to loss-sensitive and delay-tolerant data streams. Analytical and simulation results further show that the proposed strategy has better average packet delay performance, with lower computational and implementation complexities, than other known congestion-free strategies for ATM networks. Second, the first heuristic solution to the global optimization of virtual path systems in ATM networks is presented; it solves the problem in three stages with mathematically provable quality of results in each stage. Third, all-optical throughput optimization with respect to arbitrarily given traffic demands in wide area WDM networks is studied for the first time under a realistic constraint, namely, the availability of a limited number of optical transmitters and receivers for each fiber link, which can be smaller than the number of available wavelengths. A heuristic solution based on static establishment of lightpaths is presented, which provides provable performance with reasonable polynomial time complexity. Fourth, a new solution to efficient circuit switching in wide area WDM networks, based on dynamic establishment of semi-lightpaths, is introduced. The optimal semi-lightpath is defined, and the first algorithm for dynamic establishment of optimal semi-lightpaths is presented and analyzed, incorporating a further realistic constraint, namely, the limited tuning ranges of optical transmitters and receivers. Fifth, the first solutions to network throughput optimization in WDM controllable directional stars based on lightpath wavelength routing are presented; they are shown to provide close-to-optimum network throughput with provable worst-case bounds, and they have simple structures and reasonable polynomial time complexities.
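
The semi-lightpath idea, an optical path that may change wavelength at intermediate nodes for a cost, can be sketched as a shortest-path search over (node, wavelength) states (hypothetical network and costs; the dissertation's algorithm additionally models limited transmitter and receiver tuning ranges):

```python
import heapq

def shortest_semi_lightpath(links, src, dst, wavelengths, conv_cost=1.0):
    """Dijkstra over (node, wavelength) states: traversing a link costs 1,
    and switching to a different wavelength at a node adds conv_cost.
    links maps (u, v) to the set of wavelengths free on that fiber link."""
    dist = {}
    pq = [(0.0, src, w) for w in range(wavelengths)]
    heapq.heapify(pq)
    while pq:
        d, u, w = heapq.heappop(pq)
        if (u, w) in dist:
            continue
        dist[(u, w)] = d
        if u == dst:
            return d
        for (a, b), free in links.items():
            if a == u:
                for w2 in free:  # hop to b, converting wavelength if needed
                    cost = d + 1.0 + (conv_cost if w2 != w else 0.0)
                    if (b, w2) not in dist:
                        heapq.heappush(pq, (cost, b, w2))
    return None  # no semi-lightpath exists

# Hypothetical 3-node network where the free wavelengths do not match,
# so a pure lightpath is impossible but a semi-lightpath is not.
links = {("A", "B"): {0}, ("B", "C"): {1}}
print(shortest_semi_lightpath(links, "A", "C", wavelengths=2))  # 3.0
```

A pure lightpath would require one wavelength free end to end; the conversion at B is what the "semi" buys, at the modeled extra cost.
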
60

Quality-of-service issues in high-speed networks

Nagarajan, Ramesh 01 January 1993
We are currently witnessing an increasing demand for new services such as video conferencing and broadcast television. The development and deployment of new technologies such as fiber optics and intelligent high-speed digital switches have made it feasible to provide these services in future high-speed networks (HSNs). These new services are, however, characterized by rather stringent quality-of-service (QOS) criteria, such as bounds on end-to-end packet delay and loss. Providing QOS guarantees for these new services in future HSNs poses a number of interesting and challenging problems. In this thesis, we address the problem of providing statistical QOS guarantees to applications in HSNs. QOS criteria for applications are typically specified on an end-to-end basis in the network. The first part of this thesis examines policies for mapping (allocating) the end-to-end requirement to nodal requirements. This mapping simplifies both the provision of QOS guarantees and the connection admission algorithms. An allocation policy's performance is gauged by the maximum network load supportable under that policy. We develop good insight into allocation policy performance with the aid of a novel nodal sensitivity measure. It is shown that for one QOS metric, the loss probability, and for network operating regimes of practical interest, rather naive policies exhibit near-optimal performance. However, for certain other QOS metrics and operating regimes, it is shown that the optimal policy significantly outperforms simple QOS allocation policies. In the second part of this thesis, we restrict our attention to an isolated node and focus on appropriate QOS criteria and on mechanisms to guarantee these criteria at the nodal level. Two new QOS criteria, the customer-duration-based and interval-based criteria, are proposed for HSNs and their applications. Further, a new cumulative transient metric, the customer average, is proposed for queueing models of communication systems. The computation of the new metric is detailed for a class of simple queueing models, and it is shown how this new metric enables one to compute and guarantee the newly proposed QOS criteria.
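
As a concrete instance of mapping an end-to-end requirement to nodal requirements, the simplest (naive) policy splits a loss budget equally across hops. A packet must survive every hop independently, so (1 - L) = (1 - l)^n for an n-hop path; the sketch below (a hypothetical helper, not the thesis's policy) solves for the per-node target l:

```python
def equal_loss_allocation(end_to_end_loss, hops):
    """Naive equal allocation of an end-to-end loss budget L over a path:
    survival requires (1 - L) = (1 - l)**hops at every node, so the
    per-node loss target is l = 1 - (1 - L)**(1/hops)."""
    return 1 - (1 - end_to_end_loss) ** (1 / hops)

# A 1e-6 end-to-end loss requirement spread over a 5-hop path:
l = equal_loss_allocation(1e-6, 5)
print(f"per-node loss target: {l:.3e}")  # ~2.0e-07
```

The thesis's finding is that, for loss probability in practical operating regimes, such naive allocations come close to the best achievable network load, while for other metrics an optimized split matters.
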
