301

TouchSPICE vs. ReActive-SPICE: A Human-Computer Interaction Perspective

O'Hara, Joshua Martin 01 August 2012 (has links)
Traditional SPICE simulation tools and applications of circuit theory lack real-time interaction and feedback. The goal of this thesis was to create an interactive physical environment that allows discrete electrical components to be manipulated and simulated in near-real-time, while optimizing and streamlining the human-computer interaction (HCI) elements to make the user experience as positive and transparent as possible. This type of HCI and near-real-time simulation feedback allows the user to see instantly how the parameters of each discrete component or hardware module affect the overall simulation and response of the circuit. The scope of this thesis is to research, design, and develop two real-time interactive SPICE simulation tools and to analyze the real-time benefits and HCI elements of both simulators, principally the user interface design itself. The first real-time interactive simulator (TouchSPICE) uses multiple embedded processors (touchscreen hardware blocks) and a host computer to build and simulate a circuit. The second (ReActive-SPICE) uses a single host computer with integrated software to build and simulate a circuit, much like LTspice™ and PSpice™, which lack the real-time aspects. As part of the study, 20 students were asked to create circuits used in undergraduate-level labs with TouchSPICE and ReActive-SPICE for the sole purpose of providing feedback on the two user interfaces. Students completed a survey before, during, and after circuit creation to provide a basis for judging the intuitiveness, efficiency, and overall effectiveness of the HCIs. Conclusions based on the surveys support the hypothesis that both TouchSPICE and ReActive-SPICE are more intuitive and overall simpler than traditional SPICE simulation tools. The feedback collected showed TouchSPICE to have the more intuitive user interface, while ReActive-SPICE proved more efficient.
ReActive-SPICE was further developed and enhanced to improve its user interface as well as the overall circuit creation and real-time simulation processes.
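The near-real-time feedback loop described above can be sketched in a few lines. This is a minimal illustration, not code from the thesis: it assumes a first-order RC model with invented component values, and simply re-simulates the step response each time the user changes a resistor value.

```python
import math

def rc_step_response(v_in, r_ohms, c_farads, t_seconds):
    """First-order RC charging curve: v(t) = Vin * (1 - e^(-t/RC))."""
    tau = r_ohms * c_farads
    return v_in * (1.0 - math.exp(-t_seconds / tau))

def on_parameter_change(r_ohms):
    """Re-simulate immediately when the user adjusts a component value."""
    c = 1e-6  # 1 uF capacitor, fixed in this sketch
    # Sample the response at 0..4 ms for instant plotting/feedback.
    return [rc_step_response(5.0, r_ohms, c, t * 1e-3) for t in range(5)]

# Turning the resistor "knob" from 1 kOhm to 2 kOhm triggers a fresh
# simulation each time, so the user sees the slower charging at once.
fast = on_parameter_change(1e3)   # tau = 1 ms
slow = on_parameter_change(2e3)   # tau = 2 ms
```

In a real tool the resimulation would be triggered by touchscreen events and would run a full SPICE engine rather than a closed-form formula.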
302

Quadded GasP: a Fault Tolerant Asynchronous Design

Scheiblauer, Kristopher S. 27 February 2017 (has links)
As device scaling continues, process variability and defect densities are becoming increasingly challenging for circuit designers to contend with. Variability reduces timing margins, making it difficult and time consuming to meet design specifications. Defects can cause degraded performance or incorrect operation, resulting in circuit failure. Consequently, test times are lengthened and production yields are reduced. This work assesses the combination of two concepts, self-timed asynchronous design and fault tolerance, as a possible solution to both variability and defects. Asynchronous design is not as sensitive to variability as synchronous design, while fault tolerance allows continued functional operation in the presence of defects. GasP is a self-timed asynchronous design that provides high performance with a simple circuit. Quadded Logic is a gate-level fault-tolerant methodology. This study presents Quadded GasP, a fault-tolerant asynchronous design, and demonstrates that Quadded GasP circuits continue to function within performance expectations when faults are present. The increased area and reduced performance costs of Quadded GasP are also evaluated. These results show that Quadded GasP circuits are a viable option for managing process variation and defects. Application of these circuits will provide decreased development and test times, as well as increased yield.
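Quadded Logic quadruplicates each gate with an interwoven wiring pattern so that single faults are corrected by later stages. As a simpler stand-in for the general idea of gate-level fault masking, the hypothetical sketch below uses triple modular redundancy with an explicit majority voter — not the quadded scheme itself:

```python
def majority(a, b, c):
    """Majority vote: output matches at least two of the three inputs."""
    return (a and b) or (a and c) or (b and c)

def nand(a, b):
    return not (a and b)

def tmr_nand(a, b, fault=None):
    """Three redundant NAND copies; `fault` forces one copy stuck at a value."""
    outs = [nand(a, b) for _ in range(3)]
    if fault is not None:
        copy, value = fault
        outs[copy] = value        # inject a stuck-at fault into one copy
    return majority(*outs)

# A single faulty copy is masked by the vote.
assert tmr_nand(1, 1) == tmr_nand(1, 1, fault=(0, True))
```

Quadded Logic avoids the explicit voter (its correction is implicit in the interconnect pattern), which is one reason it composes naturally with gate-level designs such as GasP control circuits.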
303

Toward Biologically-Inspired Self-Healing, Resilient Architectures for Digital Instrumentation and Control Systems and Embedded Devices

Khairullah, Shawkat Sabah 01 January 2018 (has links)
Digital Instrumentation and Control (I&C) systems in safety-related applications of next-generation industrial automation systems require high levels of resilience against different fault classes. One of the more essential concepts for achieving this goal is the notion of resilient and survivable digital I&C systems. In recent years, self-healing concepts based on biological physiology have received attention for the design of robust digital systems. However, many of these approaches have not been architected from the outset with safety in mind, nor have they been targeted at the automation community, where a significant need exists. This dissertation presents a new self-healing digital I&C architecture called BioSymPLe, inspired by the way nature responds, defends, and heals: the stem cells in the immune system of living organisms, the life cycle of the living cell, and the pathway from Deoxyribonucleic acid (DNA) to protein. The BioSymPLe architecture integrates biological concepts, fault tolerance techniques, and operational schematics of the international standard IEC 61131-3 to facilitate adoption in the automation industry. BioSymPLe is organized into three hierarchical levels: the local function migration layer at the top, the critical service layer in the middle, and the global function migration layer at the bottom. The local layer monitors the correct execution of functions at the cellular level and activates healing mechanisms at the critical service level. The critical layer allocates a group of functional B cells, the building block that executes the intended functionality of the critical application based on the expression of DNA genetic codes stored inside each cell. The global layer uses a concept of embryonic stem cells, differentiating these cells to repair faulty T cells and supervising all repair mechanisms.
Finally, two industrial applications have been mapped onto the proposed architecture; they are capable of tolerating a significant number of faults (transient, permanent, and hardware common cause failures, CCFs) that can stem from environmental disturbances, and we believe the nexus of these concepts can positively impact the next generation of critical systems in the automation industry.
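The global layer's stem-cell-style repair can be illustrated with a toy model. All names, the cell roles, and the migration policy here are invented for illustration; this is not the dissertation's implementation:

```python
class Cell:
    def __init__(self, name, function=None):
        self.name = name
        self.function = function      # None => undifferentiated spare cell
        self.healthy = True

def heal(functional_cells, stem_cells):
    """Migrate the function of each faulty cell to a spare, stem-like cell."""
    repairs = []
    for cell in functional_cells:
        if not cell.healthy and stem_cells:
            spare = stem_cells.pop()
            spare.function = cell.function   # "differentiate" the spare
            repairs.append((cell.name, spare.name))
    return repairs

cells = [Cell("T1", "control-loop"), Cell("T2", "alarm-logic")]
spares = [Cell("S1"), Cell("S2")]
cells[1].healthy = False                     # inject a permanent fault
done = heal(cells, spares)
```

The real architecture does this supervision in hardware, with the local layer detecting the faulty execution and the global layer choosing and differentiating the spare.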
304

Fault diagnosis of VLSI designs: cell internal faults and volume diagnosis throughput

Fan, Xiaoxin 01 December 2012 (has links)
Modern VLSI circuit designs manufactured at advanced technology nodes of 65 nm or below exhibit an increasing sensitivity to variations in the manufacturing process. New design-specific and feature-sensitive failure mechanisms are on the rise. Systematic yield issues can be severe due to the complex variability involved in process and layout features. Without improved yield analysis methods, time-to-market is delayed, mature yield is suboptimal, and product quality may suffer, thereby undermining the profitability of the semiconductor company. Diagnosis-driven yield improvement is a methodology that leverages production test results, diagnosis results, and statistical analysis to identify the root cause of yield loss and fix the yield limiters. To fully leverage fault diagnosis, diagnosis-driven yield analysis requires that the diagnosis tool provide high-quality results in terms of accuracy and resolution; in other words, the tool should report the real defect location without too much ambiguity. A second requirement for fast diagnosis-driven yield improvement is that the diagnosis tool be capable of processing a volume of failing dies within a reasonable time, so that the statistical analysis has enough information to identify systematic yield issues. In this dissertation, we first propose a method to accurately diagnose defects inside library cells when multi-cycle test patterns are used. Methods to diagnose interconnect defects have been well studied for many years and are successfully practiced in industry. However, for process technologies at 90 nm, 65 nm, or below, a significant number of manufacturing defects and systematic yield limiters lie inside library cells. Existing cell-internal diagnosis methods work well when only combinational test patterns are used, while their accuracy drops dramatically with multi-cycle test patterns.
A method to accurately identify the defective cell as well as the failing conditions is presented; its accuracy reaches up to 94%, compared with about 75% for previously proposed cell-internal diagnosis methods. The next part of this dissertation addresses the throughput problem of diagnosing a volume of failing chips with high transistor counts. We first propose a static design partitioning method to reduce the memory footprint of volume diagnosis: a design is statically partitioned into several smaller sub-circuits, and diagnosis is performed only on those sub-circuits, reducing the memory needed to process each one and improving throughput. We then present a dynamic design partitioning method that improves throughput while minimizing the impact on diagnosis accuracy and resolution. The dynamic method is failure-dependent; in other words, each failure file gets its own design partition. Extensive experiments demonstrate the efficiency of the proposed dynamic partitioning method.
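The intuition behind failure-dependent (dynamic) partitioning can be sketched as restricting diagnosis to the fan-in cone of the outputs that actually failed: only gates in that cone can explain the observed failure. The netlist below is invented, and real tools operate on much richer structural models:

```python
from collections import deque

# Gate-level netlist as a map: gate -> gates driving its inputs
# (a hypothetical five-gate example).
netlist = {
    "out1": ["g3", "g4"],
    "out2": ["g5"],
    "g3": ["g1", "g2"],
    "g4": ["g2"],
    "g5": ["g4"],
    "g1": [], "g2": [],
}

def fanin_cone(netlist, failing_outputs):
    """Backward traversal: collect every gate that can reach a failing output."""
    cone, frontier = set(), deque(failing_outputs)
    while frontier:
        gate = frontier.popleft()
        if gate not in cone:
            cone.add(gate)
            frontier.extend(netlist.get(gate, []))
    return cone

# Diagnosing a die that fails only on out2 needs just this sub-circuit,
# so the memory footprint shrinks accordingly.
partition = fanin_cone(netlist, ["out2"])
```

Because each failing die observes different outputs, each failure file yields its own (usually small) partition, which is what makes the approach failure-dependent.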
305

Compaction mechanism to reduce test pattern counts and segmented delay fault testing for path delay faults

Jha, Sharada 01 May 2013 (has links)
With rapid advances in science and technology and the decreasing feature size of transistors, the complexity of VLSI designs is constantly increasing, and with it the probability of defects. Testing therefore becomes essential to guarantee fault-free operation of devices. Testing of VLSI designs involves test pattern generation, test pattern application, and identification of defects in the design. For scan-based designs, the test set size directly impacts test application time, which is determined by the number of memory elements in the design, as well as the test storage requirements. Various methods in the literature address the issue of large test set size; they are classified as static or dynamic compaction methods depending on whether the compaction algorithm runs as a post-processing step after test generation or is integrated within it. In general, there is a trade-off between the achievable compaction and the run time: computationally intensive methods may provide better compaction but have longer run times owing to their complexity. In the first part of the thesis we address the problem of large test set size in partially scanned designs by proposing an incremental dynamic compaction method. Typically, the fault coverage curve of a design ramps up quickly at first, then slows, and ultimately flattens toward its tail. In the initial phase of test generation, a greedy compaction method is used, because the easy-to-detect faults targeted early offer more scope for compaction. For the later portion of the curve, where hard-to-detect faults hinder compaction, we propose a dynamic compaction approach.
We also propose a novel mechanism to identify redundant faults during dynamic compaction so they need not be targeted later. The effectiveness of the method is demonstrated on industrial designs, achieving a test size reduction of 30%. As device complexity increases, delay defects are also increasing, and speed path debug is necessary to meet performance requirements. Speed paths are the frequency-limiting paths in a design, identified during debug. They can be tested using functional patterns, transition n-detect patterns, or path delay patterns. However, functional patterns are expensive for speed path debug: their generation is costly, and their application cost is high because the pattern count is large and requires functional testers. In the second part of the dissertation we propose a simple path sensitization approach that generates pseudo-robust tests, which are near-robust tests, and can be used for designs with multiple clock domains. The fault coverage of path delay fault ATPG can be further improved by dividing paths that are not testable under pseudo-robust conditions into shorter sub-paths. The effectiveness of the method is demonstrated on industrial designs.
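Greedy compaction of test cubes (patterns with don't-care bits) can be sketched as merging any pair of cubes whose specified bits do not conflict. This is the textbook merge-compatible-cubes idea with made-up cubes, not the incremental method proposed in the thesis:

```python
def compatible(cube_a, cube_b):
    """Two test cubes merge if no bit position conflicts ('X' = don't-care)."""
    return all(a == b or a == "X" or b == "X" for a, b in zip(cube_a, cube_b))

def merge(cube_a, cube_b):
    """Combine two compatible cubes, keeping every specified bit."""
    return "".join(b if a == "X" else a for a, b in zip(cube_a, cube_b))

def greedy_compact(cubes):
    """Greedy static compaction: fold each cube into the first compatible one."""
    compacted = []
    for cube in cubes:
        for i, existing in enumerate(compacted):
            if compatible(existing, cube):
                compacted[i] = merge(existing, cube)
                break
        else:
            compacted.append(cube)
    return compacted

# Four cubes targeting different faults collapse into two patterns.
patterns = greedy_compact(["1XX0", "X1X0", "0XX1", "XX11"])
```

As the abstract notes, this greedy style works well early in test generation, when many easy-to-detect faults leave plenty of don't-care bits to exploit.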
306

SSAGA: Streaming Multiprocessors (SMs) Sculpted for Asymmetric General Purpose Graphics Processing Unit (GPGPU) Applications

Saha, Shamik 01 May 2016 (has links)
The evolution of Graphics Processing Units (GPUs) over the last decade has reinforced general-purpose computing while sustaining steady performance growth in graphics-intensive applications. However, the immense performance improvement is generally associated with a steep rise in GPU power consumption; consequently, GPUs are already close to the abominable power wall. With the massive popularity of mobile devices running general-purpose GPU (GPGPU) applications, it is of utmost importance to ensure high energy efficiency while meeting strict performance requirements. In this work, we demonstrate that customizing a Streaming Multiprocessor (SM) of a GPU for a lower frequency is significantly more energy efficient than employing Dynamic Voltage and Frequency Scaling (DVFS) on an SM designed for high-frequency operation. Using a system-level Computer Aided Design (CAD) technique, we propose SSAGA - Streaming Multiprocessors Sculpted for Asymmetric GPGPU Applications, an energy-efficient GPU design paradigm. SSAGA creates architecturally identical SM cores customized for different voltage-frequency domains.
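The energy argument can be made concrete with the first-order dynamic-energy relation E = C·Vdd² per operation: a core designed from the start for a lower clock can close timing at a lower supply voltage than a high-frequency core that is merely DVFS-scaled down. The operating points below are assumed for illustration and are not SSAGA measurements:

```python
def dynamic_energy_per_op(c_eff, vdd):
    """Dynamic switching energy scales as E = C_eff * Vdd^2 (per operation)."""
    return c_eff * vdd ** 2

C_EFF = 1e-9  # effective switched capacitance in farads (invented value)

# An SM designed for high frequency, then DVFS-scaled to a lower clock,
# still needs a higher Vdd at that clock than an SM custom-designed for it.
e_dvfs = dynamic_energy_per_op(C_EFF, 0.9)    # volts, assumed
e_custom = dynamic_energy_per_op(C_EFF, 0.7)  # volts, assumed

savings = 1 - e_custom / e_dvfs  # ~0.40, i.e. lower dynamic energy per op
```

The quadratic dependence on Vdd is why even a modest supply reduction from design-time customization dominates what DVFS alone can recover.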
307

Revamping Timing Error Resilience to Tackle Choke Points at NTC

Bal, Aatreyi 01 May 2019 (has links)
The growing market of portable devices and smart wearables has driven innovation and development of systems with longer battery life. While Near Threshold Computing (NTC) systems address the need for longer battery life, they have certain limitations: they are prone to be significantly affected by variations in the fabrication process, commonly called process variation (PV). This dissertation explores an intriguing effect of PV, called choke points. Choke points are especially important due to their multifarious influence on the functional correctness of an NTC system. This work shows why novel research is required in this direction and proposes two techniques to resolve the problems created by choke points while maintaining the reduced power needs.
308

Design of an Analog VLSI Cochlea

Shiraishi, Hisako January 2003 (has links)
The cochlea is an organ which extracts frequency information from the input sound wave. It also produces nerve signals, which are further analysed by the brain and ultimately lead to perception of the sound. An existing model of the cochlea by Fragnière is first analysed by simulation. This passive model is found to share the frequency-response properties of the living cochlea. An analog VLSI circuit implementation of this cochlear model in CMOS weak inversion is proposed, using current-domain log-domain filters. It is fabricated on a chip, and a measurement of a basilar membrane section shows reasonable agreement with the model. However, the circuit is found to suffer from transistor mismatch, causing different behaviour in identical circuit blocks. An active cochlear model is proposed to overcome this problem. The model incorporates the effect of the outer hair cells in the living cochlea, which control the quality factor of the basilar membrane filters. The outer hair cells are incorporated as an extra voltage source in series with the basilar membrane resonator. Its value saturates as the input signal becomes larger, bringing the behaviour closer to that of the passive model. The simulation results show this nonlinear phenomenon, which is also seen in the living cochlea. The contribution of this thesis is summarised as follows: a) the first CMOS weak inversion current-domain basilar membrane resonator is designed and fabricated, and b) the first active two-dimensional cochlear model for analog VLSI implementation is developed.
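The active-Q mechanism can be sketched with a second-order resonator whose peak gain at the centre frequency equals Q, plus an outer-hair-cell-style boost that saturates as the input level grows. The formulas and numbers here are a simplified illustration, not the thesis's log-domain circuit:

```python
import math

def resonator_gain(f, f0, q):
    """Magnitude of the second-order resonator 1/(s^2 + s/Q + 1) at f;
    the peak gain at f = f0 equals Q."""
    x = f / f0
    return 1.0 / math.sqrt((1 - x * x) ** 2 + (x / q) ** 2)

def active_q(q_passive, q_boost, input_level, sat_level):
    """Outer-hair-cell-style control: the active Q boost saturates with level."""
    return q_passive + q_boost / (1.0 + input_level / sat_level)

# Quiet sounds get the full active boost; loud sounds see a nearly
# passive (low-Q) response -- the nonlinear compression described above.
quiet = resonator_gain(1000.0, 1000.0, active_q(2.0, 8.0, 0.01, 1.0))
loud = resonator_gain(1000.0, 1000.0, active_q(2.0, 8.0, 10.0, 1.0))
```

All parameter values (passive Q of 2, boost of 8, saturation level) are invented solely to show the level-dependent gain.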
309

Compilation of VHDL programs for evaluating the testability of digital equipment

Wodey, Pierre 03 November 1993 (has links) (PDF)
The complexity and limited accessibility of digital equipment make the tasks of verifying and troubleshooting that equipment increasingly difficult. To address these problems, tools have been defined that work at high levels of description, thereby sidestepping the intrinsic complexity of low-level descriptions. In this thesis we focus on defining a testability analysis tool that can handle circuits, boards, or systems described in the VHDL language. The objective is to handle asynchronous equipment described by its behaviour as well as by its hierarchy. The testability analysis is based on a representation of information transfers and makes it possible, on the one hand, to determine a functional specification of the test program and, on the other hand, to compute testability measures expressed as a controllability measure and an observability measure. In this thesis we first present the compilation of behavioural VHDL programs into information-transfer models. We define the notion of dynamic information capacity, which allows meaningful testability measures to be computed even within a certain class of sequential cycles. We then address the problems of simplifying and optimizing the graphs derived from a behavioural description. Through the definition of a library, we provide a solution to the problem of concatenating information-transfer graphs in order to compile hierarchical descriptions. Experiments on real circuit examples showed that the optimizations speed up the testability analysis, and confirmed the relevance of this type of modelling for identifying test problems a priori.
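Controllability measures of the kind mentioned above can be illustrated with a simplified SCOAP-style calculation on a tiny combinational netlist. Note this is a textbook stand-in with invented gate names, not the information-transfer measures developed in the thesis:

```python
# Each signal carries a pair (C0, C1): the effort to set it to 0 and to 1.

def and_ctrl(in_ctrls):
    """AND gate: output 1 needs all inputs at 1; output 0 needs the cheapest 0."""
    c0 = min(c[0] for c in in_ctrls) + 1
    c1 = sum(c[1] for c in in_ctrls) + 1
    return (c0, c1)

def not_ctrl(in_ctrl):
    """Inverter: 0/1 controllabilities swap, plus one gate of effort."""
    return (in_ctrl[1] + 1, in_ctrl[0] + 1)

# Primary inputs are directly controllable: (1, 1).
a = b = c = (1, 1)
g1 = and_ctrl([a, b])   # deeper logic costs more effort
g2 = not_ctrl(g1)
out = and_ctrl([g2, c])  # setting `out` to 1 is the hardest goal here
```

Observability is computed analogously by a forward pass from the outputs; together the two measures flag hard-to-test regions before any test generation runs.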
310

Contribution to the definition and implementation of NAUTILE

Hornik, Armand 06 June 1989 (has links) (PDF)
This thesis contributes to the development of a new integrated-circuit design system, NAUTILE. It includes a study of the various existing systems and, from their synthesis, establishes the definition of a new one. That system must provide a complete VLSI design environment that can easily interface with various existing systems, is technology-independent, and manages different representations (mask layout, electrical schematic, logic schematic) of the same circuit while ensuring consistency between them. Finally, the thesis describes the prototype of the NAUTILE system that was built, consisting of an object-oriented data structure, primitives for managing that structure, and a number of tools (routers, various generators) that were implemented.
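The multi-representation consistency requirement can be illustrated with a toy object-oriented structure that tracks which views of a circuit fall out of sync after an edit. This is a hypothetical sketch, not the NAUTILE data structure itself:

```python
class CircuitCell:
    """One circuit with several coupled views, kept consistent on edit."""

    VIEWS = ("layout", "electrical", "logic")

    def __init__(self, name):
        self.name = name
        self.views = {v: {} for v in self.VIEWS}
        self.dirty = set()          # views out of sync with the last edit

    def edit(self, view, key, value):
        self.views[view][key] = value
        # Every other view must now be re-derived to stay consistent.
        self.dirty = set(self.VIEWS) - {view}

    def synchronize(self):
        # Placeholder propagation: a real system would re-extract or
        # regenerate each representation from the edited one.
        for v in self.dirty:
            self.views[v]["synced_from_edit"] = True
        self.dirty.clear()

cell = CircuitCell("alu")
cell.edit("logic", "gate_count", 120)
assert cell.dirty == {"layout", "electrical"}
cell.synchronize()
```

The design choice sketched here, one shared object per circuit with per-view state, is one simple way to satisfy the consistency requirement the abstract describes.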
