1.
A steady-state response test generation technique for mixed-signal integrated circuits. Alani, Alaa Fadhil. January 1993.
No description available.
2.
Image-based rendering for visualisation of 3D scenes in near real-time. Tang, Bo. January 2008.
In this research work, a software and hardware prototype for real-time 3D visualisation is developed. The proposed system takes two input videos and interpolates virtual in-between views, which are then combined into 3D videos after processing for viewing on a 3D monitor. The core of this research work is based on view morphing, a type of image-based rendering. Image-based rendering is a technique used to render a scene from a number of source images. According to the amount of geometric information known about the captured scene, image-based rendering techniques can be classified into three categories: rendering without geometry, rendering with implicit geometry and rendering with explicit geometry. The view morphing technique, a subset of the second category, requires little geometric information and only a few source images of the captured scene. This reduces the complexity of both the computation and the hardware configuration of the proposed system; moreover, the quality of the virtual in-between views interpolated by view morphing is good enough for visualisation applications. In this thesis, the research work is presented from two aspects: the algorithmic and the system points of view. The algorithmic development and optimisation covers the procedure for automatically interpolating virtual in-between views from two source images. The work began with the calibration of the two cameras, with the objective of finding the geometric relationship between them in 3D space. Image rectification follows, projecting the two source images onto two parallel planes. This makes it possible to obtain physically valid virtual in-between views and also reduces the computational cost of correspondence estimation. Subsequently, stereo matching is applied to establish feature correspondences between the two rectified source images. A novel feature-based correspondence estimation algorithm is proposed to raise both computational efficiency and reliability. After that, interval interpolation is used to synthesise the virtual in-between views. Finally, image derectification is applied to obtain the final interpolated virtual in-between views. A novel pseudo real-time 3D visualisation system is proposed in the system development and optimisation. The proposed system has been developed using the TI (Texas Instruments) DM642 EVM board, which is a standalone digital media processing board. The system also includes a stereo video capture module consisting of two PAL cameras and the X3D-19 DISPLAY AD 3D display unit for visualisation of the 3D video output. The core algorithm uses the images captured from the cameras and generates six virtual in-between views using interpolation techniques. The combined views (eight views of 2D images) are displayed on the 3D monitor using a proprietary method developed specifically for X3D monitors. The advantage of the proposed system is that real 3D impressions can be visualised in front of the 3D monitor in near real-time without any special glasses. The proposed system has been evaluated on a number of real scenes. The experimental results indicate that the performance of the proposed 3D visualisation system is about 4.7 FPS.
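As a rough illustration of the interpolation step described above, the following sketch forward-maps a virtual in-between view from two rectified images and a dense disparity map. It is a minimal outline under stated assumptions (parallel rectified cameras, a precomputed disparity map, occlusion holes left unfilled), not the thesis's DM642 implementation.

```python
import numpy as np

def interpolate_view(left, right, disparity, alpha):
    """Forward-map a virtual in-between view from two rectified images.

    left, right : (H, W, 3) float arrays, the rectified source images.
    disparity   : (H, W) array; a left pixel at column x matches column x - d
                  in the right image (parallel camera assumption).
    alpha       : 0.0 gives the left view, 1.0 the right view.
    """
    h, w = disparity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    xr = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)   # right correspondences
    xv = np.round(xs - alpha * disparity).astype(int)              # columns in the virtual view
    valid = (xv >= 0) & (xv < w)
    blended = (1.0 - alpha) * left + alpha * right[ys, xr]         # linear colour blend
    virtual = np.zeros_like(left)
    virtual[ys[valid], xv[valid]] = blended[valid]                 # occlusion holes stay black
    return virtual
```

In practice the disparity map would come from the stereo matching step, and the holes would be filled by interpolation or by drawing pixels in back-to-front order.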
3.
Stochastic fault simulation of triple-modular redundant asynchronous pipeline circuits. Lynch, John Daniel. 10 1900.
Ph.D. / Electrical Engineering / The expected unreliability of nano-scale electronic components has renewed interest in the decades-old field of fault-tolerant logic design. Fault-tolerant design makes it possible to build reliable systems from unreliable components, and this has spurred recent research into the application of classical fault-tolerance (FT) techniques to nanoelectronics. Meanwhile, the growing gap between logic gate and wire delays, and the growing power consumption of clock generation and distribution circuits in nanometer-scale silicon integrated circuits, have renewed research in asynchronous, or clockless, logic design. This dissertation examines the application of triple modular redundancy (TMR), one of several FT circuit design techniques, to improve the reliability of a variety of clockless circuits and systems. A new fault model, appropriate for clockless circuits, is derived and applied to measure the reliability of nonredundant and triplex micropipelines. A new circuit element that combines the functionality of a Muller C-element and a majority gate is introduced to solve special problems at the simplex-triplex interface. The effectiveness of asynchronous FT circuit design strategies is assessed based on the results of Monte Carlo simulation experiments with representative circuits modeled in the Verilog hardware description language (HDL).
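As a simple, generic illustration of Monte Carlo reliability estimation for TMR (not the dissertation's Verilog-based fault simulator or its clockless fault model), the sketch below checks a sampling estimate against the textbook closed form R_TMR = 3R^2 - 2R^3, which assumes independent module failures and a perfect voter.

```python
import random

def tmr_reliability_mc(module_reliability, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a TMR triplet delivers the
    correct output, assuming independent module failures and a perfect voter:
    the stage works whenever at least two of the three copies work."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        working = sum(rng.random() < module_reliability for _ in range(3))
        successes += working >= 2
    return successes / trials

if __name__ == "__main__":
    r = 0.9
    print("Monte Carlo :", round(tmr_reliability_mc(r), 4))
    print("closed form :", round(3 * r**2 - 2 * r**3, 4))   # 0.972 for r = 0.9
```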
4.
RSFQ digital circuit design automation and optimisation. Muller, Louis C. 03 1900.
Thesis (PhD)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: In order to facilitate the creation of complex and robust RSFQ digital logic circuits, an extensive library of electronic design automation (EDA) tools is a necessity. It is the aim of this work to introduce various methods to improve the current state of EDA in RSFQ circuit design.

Firstly, Monte Carlo methods such as Latin Hypercube sampling and Sobol sequences are applied for their variance reduction abilities in approximating circuit yield. In addition, artificial neural networks are investigated for their applicability in modelling the parameter-yield space.

Secondly, a novel technique for circuit functional testing using automated state machine extraction is presented, which greatly simplifies the logical verification of a circuit. This method is also used, along with critical timing extraction, to automatically generate Hardware Description Language (HDL) models which can be used for high-level circuit design.

Lastly, the Greedy Local Search, Simulated Annealing and Genetic Algorithm meta-heuristics were statistically compared in a novel manner using a yield model provided by artificial neural networks, in order to ascertain their performance in optimising RSFQ circuits with respect to yield.

The variance reduction techniques of Latin Hypercube sampling and Sobol sequences were shown to be beneficial for use with RSFQ circuits. For optimisation purposes, Simulated Annealing and Genetic Algorithms were shown to improve circuit optimisation for possibly multi-modal search spaces. An HDL model, including critical timing and propagation latency, was also successfully generated from a complex RSFQ circuit for use in high-level circuit design.

All the techniques presented in this study form part of a software library that can be further refined and extended in future work.
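To make the variance reduction idea concrete, here is a minimal Latin Hypercube sampling sketch for yield estimation. The `circuit_passes` callable, the uniform plus/minus tolerance parameter model, and all names are illustrative stand-ins for a real RSFQ circuit simulation; this is not the thesis's software library.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Latin Hypercube sample in the unit hypercube: each axis is split into
    n_samples equal strata and exactly one point is drawn per stratum, which
    reduces the variance of a yield estimate compared to plain Monte Carlo."""
    rng = rng or np.random.default_rng(0)
    strata = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        strata[:, j] = rng.permutation(strata[:, j])   # decorrelate the axes
    return strata

def estimate_yield(circuit_passes, nominal, tolerance, n_samples=1000, rng=None):
    """Fraction of sampled parameter sets for which the circuit still works.

    circuit_passes : callable taking a parameter vector and returning True/False
                     (a stand-in for a full circuit simulation and spec check).
    nominal, tolerance : nominal parameter values and their +/- spreads.
    """
    unit = latin_hypercube(n_samples, len(nominal), rng)
    params = np.asarray(nominal) + (2.0 * unit - 1.0) * np.asarray(tolerance)
    return float(np.mean([circuit_passes(p) for p in params]))

# Toy usage: a "circuit" that passes if no parameter deviates by more than 20%.
nominal = np.array([1.0, 2.5, 0.8])
passes = lambda p: bool(np.all(np.abs(p / nominal - 1.0) < 0.2))
print(estimate_yield(passes, nominal, tolerance=0.3 * nominal))
```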
5.
A Logic Test Chip for Optimal Test and Diagnosis. Niewenhuis, Benjamin T. 01 May 2018.
The benefits of the continued progress in integrated circuit manufacturing have been numerous, most notably in the explosion of computing power in devices ranging from cell phones to cars. Key to this success have been strategies to identify, manage, and mitigate yield loss. One such strategy is the use of test structures to identify sources of yield loss early in the development of a new manufacturing process. However, the aggressive scaling of feature dimensions, the integration of new materials, and the increase in structural complexity in modern technologies have challenged the capabilities of conventional test structures. To help address these challenges, a new logic test chip, called the Carnegie Mellon Logic Characterization Vehicle (CM-LCV), has been developed. The CM-LCV utilizes a two-dimensional array of functional unit blocks (FUBs) that each implement an innovative functionality. Properties including fault coverage, logical and physical design features, and fault distinguishability are shown to be composable within the FUB array; that is, they exist regardless of the size and composition of the FUB array. A synthesis flow that leverages this composability to adapt the FUB array to a wide range of test chip design requirements is presented. The connection between the innovative FUB functionality and orthogonal Latin squares is identified and used to analyze the universe of possible FUB functions. Two additional variants to the FUB array are also developed: heterogeneous FUB arrays utilize multiple FUB functions to improve the synthesis flow performance, while pipelined FUB arrays incorporate sequential circuit elements (e.g., flip-flops and latches) that are absent from the original combinational FUB array. In addition to the design of the CM-LCV, methods for testing it are presented. Techniques to create minimal sets of test patterns that exhaustively exercise each FUB within the FUB array are developed. Additional constraints are described for the heterogeneous and pipelined FUB arrays that allow these techniques to be applied to both variant FUB arrays. Furthermore, a simple built-in self-test (BIST) scheme is described and applied to a reference design, resulting in an 88.0% reduction in the number of test cycles required without loss in fault coverage. A hierarchical FUB array diagnosis methodology (HFAD) is also presented for the CM-LCV that leverages its unique properties to improve performance for multiple defects. Experiments demonstrate that this HFAD methodology is capable of perfect accuracy in 93.1% of simulations with two injected faults, an improvement on state-of-the-art commercial diagnosis. Additionally, silicon fail data was collected from a CM-LCV manufactured using a 14nm process by an industry partner. A comparison of the diagnosis results for the 1,375 fail logs examined shows that the HFAD methodology discovers additional defects during multiple-defect diagnosis that the commercial tool misses for 40 of the diagnosed fail logs. Examination of these cases shows that the additional defects found by the HFAD methodology can result in improved diagnosis confidence and more precise descriptions of the defect behavior(s). The contributions of this dissertation can thus be summarized as the description of the design, test, and diagnosis of a new logic test chip for use in yield learning during process development.
This CM-LCV can be adapted to meet a wide range of test chip requirements, can be efficiently and rigorously tested, and exhibits properties that can be used to improve diagnosis outcomes. All of these claims are validated through both simulated experiments and silicon data.
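The orthogonal Latin squares mentioned above can be illustrated with the classical prime-order construction L_k(i, j) = (k*i + j) mod p, which yields p - 1 mutually orthogonal squares. This is standard combinatorics shown only as background; it is not the CM-LCV's specific FUB function.

```python
def orthogonal_latin_squares(p):
    """Classical construction for prime p: square k (k = 1 .. p-1) has entries
    L_k[i][j] = (k*i + j) mod p, giving p - 1 mutually orthogonal Latin squares."""
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def are_orthogonal(a, b):
    """Two squares of order n are orthogonal if superimposing them produces
    every ordered pair of symbols exactly once (n*n distinct pairs)."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

if __name__ == "__main__":
    squares = orthogonal_latin_squares(5)
    print(all(are_orthogonal(squares[x], squares[y])
              for x in range(len(squares)) for y in range(x + 1, len(squares))))
```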
6.
Physical design of cryptographic applications: constrained environments and power analysis resistance. Macé, François. 24 April 2008.
Modern cryptography responds to the need for security that has arisen with the emergence of communication appliances. However, its adapted integration in the wide variety of existing communication systems has opened new design challenges. Amongst them, this thesis addresses two in particular, related to hardware integration of cryptographic algorithms: constrained environments and side-channel security.
In the context of constrained environments, we propose to study the interest of the Scalable Encryption Algorithm SEA for constrained hardware applications. We investigate both the FPGA and ASIC contexts and illustrate, using practical implementation results, the interest of this algorithm. Indeed, we demonstrate how hardware implementations can keep its high scalability properties while achieving interesting implementation figures in comparison to conventional algorithms such as the AES.
Next, we deal with three complementary aspects related to side-channel resistance.
We first propose a new class of dynamic and differential logic families achieving low-power performance with information leakage matched to that of state-of-the-art countermeasures.
We then discuss a power consumption model for these logic styles and apply it to DyCML implementations. It is based on the use of the isomorphism existing between the gate structures of the implemented functions and the binary decision diagrams describing them. Using this model, we are not only able to predict the power consumption, and therefore attack such implementations, but also to efficiently choose the gate structures achieving the best resistance against this model.
We finally study a methodology for the security evaluation of cryptographic applications all along their design and test phases. We illustrate the interest of such a methodology at different design steps and with different circuit complexity, using either simulations or power consumption measurements.
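As background for the power-analysis discussion, the snippet below implements a generic first-order Hamming-distance leakage model of the kind commonly used in side-channel work. It is only an illustrative toy; the thesis's model is specific to DyCML gates and is built on binary decision diagrams, which is not reproduced here.

```python
import random

def hamming_distance(a, b):
    """Number of bit positions that toggle between two register states."""
    return bin(a ^ b).count("1")

def trace_leakage(states, noise=0.0, seed=0):
    """Toy first-order power model: the simulated power sample for each clock
    cycle is the number of register bits that toggled, plus Gaussian noise."""
    rng = random.Random(seed)
    return [hamming_distance(prev, cur) + rng.gauss(0.0, noise)
            for prev, cur in zip(states, states[1:])]

# Example: leakage of an 8-bit register stepping through a few states.
print(trace_leakage([0x00, 0xFF, 0x0F, 0x3C], noise=0.1))
```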
7.
Circuit Performance Verification and Optimization in the Presence of Variability. Onaissi, Sari. 11 January 2012.
The continued scaling of digital integrated circuits has led to an increasingly large impact of process, supply voltage, and temperature (PVT) variations. The effect of these variations on logic cell and interconnect delays has introduced challenges to both circuit performance (timing) verification and optimization. In order to fully take advantage of the benefits of technology scaling, it is essential that "variation-aware" techniques for performance verification and optimization be developed and used in modern design flows.

In this thesis, such techniques for both performance verification and optimization are presented. First, we present a fast method for finding the worst-case slacks over all process and environmental corners. This method uses the standard set of PVT corners available in industry, and provides large runtime gains while maintaining a high degree of accuracy. After that, we propose an efficient block-based parameterized timing analysis technique that can accurately capture circuit delays at every point in the parameter space by reporting all paths that can become critical. This method employs parameterized static timing analysis (PSTA) variability models, and allows one to easily examine local robustness to parameters in different regions of the parameter space. Next, we introduce an optimization method that alters clock network lines so that a circuit meets its timing constraints at all PVT settings under PSTA variability models. This is formulated as a Linear Program (LP) based on a clock skew optimization formulation, and as a result it can be solved efficiently. Finally, we present a method that uses characterized, pre-silicon, PSTA variational timing models to identify speedpaths that can best explain the observed delay measurements during silicon debug. This is a crucial step, required both for "fixing" failing paths and for accurate learning from silicon data.
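To make the corner-based analysis concrete, the sketch below propagates arrival times through a tiny gate-level netlist at each corner and records the worst slack per node. The netlist, the per-corner delay tables and the single required time are all invented for illustration; the thesis's methods (parameterized PSTA models, path reporting, LP-based clock tuning) go well beyond this.

```python
from collections import defaultdict

def worst_slacks(gates, fanins, delays_by_corner, required_time):
    """Block-based STA sketch: propagate arrival times in topological order and
    keep, for every node, the worst (smallest) slack seen over all corners.

    gates            : gate names in topological order (primary inputs first)
    fanins           : dict mapping a gate to the gates that drive it
    delays_by_corner : dict corner -> {gate: delay at that corner}
    required_time    : required arrival time (e.g. the clock period)
    """
    worst = defaultdict(lambda: float("inf"))
    for corner, delay in delays_by_corner.items():
        arrival = {}
        for g in gates:
            drivers = fanins.get(g, [])
            arrival[g] = max((arrival[d] for d in drivers), default=0.0) + delay[g]
            worst[g] = min(worst[g], required_time - arrival[g])  # slack at this corner
    return dict(worst)

# Tiny invented example: a two-gate chain evaluated at two hypothetical corners.
corners = {"slow": {"in": 0.0, "a": 2.0, "b": 3.0},
           "fast": {"in": 0.0, "a": 1.0, "b": 1.5}}
print(worst_slacks(["in", "a", "b"], {"a": ["in"], "b": ["a"]}, corners, required_time=6.0))
```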
8.
A Hybrid Fault-Tolerant Architecture for Robustness Improvement of Digital Integrated Circuits and Systems. Tran, Duc Anh. 21 December 2012.
Evolution of CMOS technology consists in the continuous downscaling of transistor feature sizes, which allows the production of smaller and cheaper integrated circuits with higher performance and lower power consumption. However, each new CMOS technology node faces reliability problems due to increasing fault and error rates. Consequently, fault-tolerance techniques, which employ redundant resources to guarantee correct operation of digital circuits and systems despite the presence of faults, have become essential in digital design. This thesis studies a novel hybrid fault-tolerant architecture for robustness improvement of digital circuits and systems. It targets all kinds of error in the combinational part of logic circuits, i.e. hard errors, single-event transients (SETs) and timing errors. Combining information redundancy for error detection, timing redundancy for transient error correction and hardware redundancy for permanent error correction, the proposed architecture allows significant power consumption savings while having a silicon area similar to existing solutions. Furthermore, it can also be used in other applications, such as dealing with aging phenomena, tolerating faults in pipeline architectures, and being combined with advanced protection schemes against single-event upsets (SEUs) in the sequential parts of logic circuits.
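The interplay of the three redundancy types can be sketched as a control flow: detect with an information-redundancy check, retry to absorb transients, and fall back to spare hardware for permanent faults. This is only a conceptual illustration of the combination described in the abstract, with invented function names; it is not the proposed circuit architecture itself.

```python
def hybrid_execute(compute, spare_compute, check, retries=1):
    """Conceptual control flow of a hybrid fault-tolerance scheme.

    check(result) stands in for the information-redundancy check (e.g. a parity
    or code check) used to detect errors; re-running compute() is the timing
    redundancy that absorbs transient errors; falling back to spare_compute()
    is the hardware redundancy that masks permanent errors.
    """
    result = compute()
    for _ in range(retries):
        if check(result):
            return result
        result = compute()          # transient faults should not recur on retry
    if check(result):
        return result
    return spare_compute()          # error persists: assume a permanent fault
```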
9.
Static noise margin analysis for CMOS logic cells in near-threshold. Bortolon, Felipe Todeschini. January 2018.
The advancement of semiconductor technology has enabled the fabrication of devices with faster switching activity and chips with higher integration density. However, these advances face new impediments related to energy and power dissipation. Besides, the increasing demand for portable devices leads the circuit design paradigm to prioritize energy efficiency over performance. Altogether, this scenario motivates engineers to reduce the supply voltage to the near- and subthreshold regime in order to increase the lifespan of battery-powered devices. Even though operating in this regime offers interesting energy-frequency trade-offs, it brings challenges concerning noise tolerance. As the supply voltage is reduced, the available noise margins decrease and circuits become more prone to functional failures. In addition, near- and subthreshold circuits are more susceptible to manufacturing variability, which further aggravates noise issues. Other issues, such as wire minimization and gate fan-out, also contribute to the relevance of evaluating the noise margin of circuits early in the design. Accordingly, this work investigates how to improve the static noise margin of digital synchronous circuits that will operate in the near/subthreshold regime. This investigation produces a set of three original contributions. The first is an automated tool to estimate the static noise margin of CMOS combinational cells. The second is a realistic static noise margin estimation methodology that considers process-voltage-temperature variations; results show that the proposed methodology reduces the static noise margin pessimism by up to 70%. The third is a noise-aware cell design methodology together with a noise evaluation of complex circuits during logic synthesis; the resulting library achieved a higher static noise margin (up to 24%) and less spread among different cells (up to 62%).
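A minimal sketch of one common way to read noise margins off a cell's voltage transfer curve is given below, using the textbook unity-gain-point definitions (NML = V_IL - V_OL, NMH = V_OH - V_IH). The synthetic S-shaped curve and the 0.5 V supply are illustrative only; the thesis's tool and its PVT-aware methodology are considerably more involved.

```python
import numpy as np

def noise_margins(vin, vout):
    """Estimate (NML, NMH) of a logic gate from a sampled voltage transfer curve.

    Uses the unity-gain-point definition: V_IL and V_IH are the input voltages
    where the VTC slope equals -1; NML = V_IL - V_OL and NMH = V_OH - V_IH.
    vin, vout : 1-D arrays sampling the VTC with vin increasing.
    """
    gain = np.gradient(vout, vin)
    unity = np.nonzero(np.diff(np.sign(gain + 1.0)))[0]   # indices where the gain crosses -1
    v_il, v_ih = vin[unity[0]], vin[unity[-1]]
    v_oh, v_ol = vout[0], vout[-1]                        # output levels at the VTC extremes
    return v_il - v_ol, v_oh - v_ih

# Toy inverter-like VTC at a low supply voltage (illustrative numbers only).
vdd = 0.5
vin = np.linspace(0.0, vdd, 501)
vout = vdd / (1.0 + np.exp(40.0 * (vin - vdd / 2)))       # smooth S-shaped transfer curve
print(noise_margins(vin, vout))
```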