61 |
Toward Generative Artificial Intelligence in Circuit Design. Hagy, Kyle C, 01 January 2024 (has links) (PDF)
In recent years, there has been an explosion of advancements in artificial intelligence, especially in language models. These models have become essential in aiding and providing information for various tasks. This study explores five proprietary and open-source large language models (LLMs) and examines their reliability and accuracy in selecting parts and constructing connections for ten circuit design tasks from our benchmark. During our investigations, we found that the default textual outputs from these LLMs can be ambiguous, either too general or open to multiple interpretations. To enhance clarity, we developed an artificial intelligence (AI)-based pipeline that translates responses from LLMs into netlists, eliminating the need for further training or fine-tuning. Our study aims to highlight the reliability and accuracy of the default responses, develop a solution that provides a more explicit netlist description, and compare default and netlist outputs.
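As a rough illustration of what such a translation step can look like (this is not the pipeline from the study; the "PART name type nodes value" response format and the function name are assumptions), a post-processor can keep only structured component lines from a model's answer and emit netlist entries:

# Hypothetical sketch: turning a structured LLM answer into netlist lines.
# The "PART <name> <type> <nodes...> <value>" convention is an assumption.
def response_to_netlist(llm_response: str) -> str:
    netlist = []
    for line in llm_response.strip().splitlines():
        tokens = line.split()
        if len(tokens) < 4 or tokens[0] != "PART":
            continue  # skip free-form prose the model may interleave
        _, name, _, *rest = tokens  # drop the component-type word, keep nodes and value
        netlist.append(name + " " + " ".join(rest))
    return "\n".join(netlist)

example = "PART R1 resistor in out 1k\nPART C1 capacitor out 0 100n\nSome prose."
print(response_to_netlist(example))  # -> "R1 in out 1k" and "C1 out 0 100n"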
|
62 |
Online Techniques for Enhancing the Diagnosis of Digital Circuits. Tanwir, Sarmad, 05 April 2018 (has links)
The test process for semiconductor devices involves generation and application of test patterns, failure logging and diagnosis. Traditionally, most of these activities cater for all possible faults without making any assumptions about the actual defects present in the circuit. As the size of circuits continues to increase (following Moore's Law), the size of the test sets is also increasing exponentially. As a result, the cost of testing has already surpassed that of design and fabrication.
The central idea of our work in this dissertation is that we can have substantial savings in the test cost if we bring the actual hardware under test inside the test process's various loops -- in particular: failure logging, diagnostic pattern generation and diagnosis.
Our first work, which we describe in Chapter 3, applies this idea to failure logging. We modify the existing failure logging process, which logs only the first few failure observations, into an intelligent one that logs failures on the basis of their usefulness for diagnosis. To enable the intelligent logging, we propose some lightweight metrics that can be computed in real time to grade the diagnosability of the observed failures. On the basis of this grading, we select the failures to be logged dynamically according to the actual defects in the circuit under test. This means that the failures may be logged in a different manner for devices having different defects. This is in contrast with the existing method, which uses the same logging scheme for all failing devices. With the failing devices in the loop, we are able to optimize the failure log for every particular failing device, thereby improving the quality of the subsequent diagnosis. In Chapter 4, we investigate the most lightweight of these metrics for failure log optimization for the diagnosis of multiple simultaneous faults and provide the results of our experiments.
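To make the idea concrete, one plausible lightweight grading (an assumption for illustration, not the dissertation's metric) is the amount by which an observed failure shrinks the set of candidate faults that could explain the log so far:

# Assumed illustration of diagnosability-driven logging: a failure is kept only
# if it narrows the candidate-fault set. The metric and data layout are assumptions.
def select_failures(observations, fault_dictionary, budget):
    # observations: observed failing-pattern signatures, in order of occurrence
    # fault_dictionary: maps each modeled fault to the set of signatures it can explain
    candidates = set(fault_dictionary)
    logged = []
    for sig in observations:
        consistent = {f for f in candidates if sig in fault_dictionary[f]}
        if len(consistent) < len(candidates) and len(logged) < budget:
            logged.append(sig)       # this observation improves diagnostic resolution
            candidates = consistent
    return logged, candidates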
Often, in spite of exploiting the entire potential of a test set, we might not be able to meet our diagnosis goals. This is because the manufacturing tests are generated to meet the fault coverage goals using as few tests as possible. In other words, they are optimized for 'detection count' and 'test time', not for 'diagnosis'. In our second work, we leverage real-time measures of diagnosability, similar to the ones used for failure log optimization, to generate additional diagnostic patterns. These additional patterns help diagnose the existing failures beyond the power of the existing tests. Again, since the failing device is inside the test generation loop, we obtain highly specific tests for each failing device, optimized for its diagnosis. Using our proposed framework, we are able to diagnose devices better and faster than state-of-the-art industrial tools. Chapter 5 provides a detailed description of this method.
Our third work extends the hardware-in-the-loop framework to the diagnosis of scan chains. In this method, we define a different metric that is applicable to scan chain diagnosis. Again, this method provides additional tests that are specific to the diagnosis of the particular scan chain defects in individual devices. We achieve two further advantages in this approach compared to the online diagnostic pattern generator for logic diagnosis. Firstly, we do not need a known good device for generating or knowing the good response; secondly, besides generating additional tests, we also perform the final diagnosis online, i.e., on the tester during test application. We explain this in detail in Chapter 6.
In our research, we observe that feedback from a device is very useful for enhancing the quality of root-cause investigations of the failures in its logic and test circuitry, i.e., the scan chains. This leads to the question of whether some primitive signals from the devices can be indicative of the fault coverage of the applied tests. In other words, can we estimate the fault coverage without the costly activities of fault modeling and simulation? By conducting further research into this problem, we found that entropy measurements at the circuit outputs do indeed have a high correlation with the fault coverage and can be used to estimate it with good accuracy. We find that these predictions are accurate not only for random tests but also for high-coverage ATPG-generated tests. We present the details of our fourth contribution in Chapter 7. This work is of significant importance because it suggests that high-coverage tests can be learned by continuously applying random test patterns to the hardware and using the measured entropy as a reward function. We believe that this lays a foundation for further research into gate-level sequential test generation, which is currently intractable for industrial-scale circuits with existing techniques. / Ph. D. / When a new microchip fabrication technology is introduced, the manufacturing is far from perfect. A lot of work goes into updating the fabrication rules and microchip designs before we get a higher proportion of good or defect-free chips. With continued advancements in fabrication technology, this enhancement work has become increasingly difficult. This is primarily because of the sheer number of transistors that can be fabricated on a single chip, which has practically doubled every two years for the last four decades. The microchip testing process involves applying stimuli and checking the responses. These stimuli cater for a huge number of possible defects inside the chips. With the increase in the number of transistors, covering all possible defects is becoming practically impossible within the business constraints.
This research proposes a solution to this problem: make the various activities in this process adaptive to the actual defects in the chips. The stimuli mentioned above now depend upon feedback from the chip. By utilizing this feedback, we have demonstrated significant improvements over state-of-the-art industrial tools in three primary activities, namely failure logging, scan testing and scan chain diagnosis. These activities are essential steps in improving the proportion of good chips in the manufactured lot.
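A minimal sketch of the entropy measurement that the abstract relates to fault coverage is given below; it computes the per-output Shannon entropy over the applied patterns, while the capture format and the mapping from entropy to a coverage estimate are assumptions:

# Sketch: average Shannon entropy of the circuit outputs across applied patterns.
# responses: one bit-string per applied test pattern (an assumed capture format).
import math

def output_entropy(responses):
    n_bits = len(responses[0])
    total = 0.0
    for i in range(n_bits):
        p1 = sum(r[i] == "1" for r in responses) / len(responses)
        for p in (p1, 1.0 - p1):
            if p > 0.0:
                total -= p * math.log2(p)
    return total / n_bits  # higher average entropy indicates more activity observed at the outputs

print(output_entropy(["0101", "1101", "0011", "1110"]))  # about 0.91 bits per output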
|
63 |
Efficient FPGA Architectures for Separable Filters and Logarithmic Multipliers and Automation of Fish Feature Extraction Using Gabor Filters. Joginipelly, Arjun Kumar, 13 August 2014 (has links)
Convolution and multiplication operations in the filtering process can be optimized by minimizing resource utilization using Field Programmable Gate Arrays (FPGAs) and separable filter kernels. An FPGA architecture for separable convolution is proposed to reduce on-chip resource utilization and external memory bandwidth for a given processing rate of the convolution unit.
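As a software analogue of the separability idea (not the proposed FPGA architecture; the kernel and image here are illustrative), a 2D convolution with a separable kernel, i.e. an outer product of a column and a row vector, can be computed as two 1D passes, cutting the multiplies per pixel from rows*cols to rows+cols:

# Separable 2D convolution as two 1D passes (illustrative sketch, not the FPGA design).
import numpy as np

def separable_convolve(image, col_kernel, row_kernel):
    tmp = np.apply_along_axis(lambda c: np.convolve(c, col_kernel, mode="same"), 0, image)
    return np.apply_along_axis(lambda r: np.convolve(r, row_kernel, mode="same"), 1, tmp)

image = np.random.rand(64, 64)
gauss_1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 1D factor of a separable 5x5 binomial kernel
smoothed = separable_convolve(image, gauss_1d, gauss_1d)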
Multiplication in the integer number system can be optimized in terms of resources, operation time and power consumption by converting to the logarithmic domain. To achieve this, a method that alters the filter weights is proposed and implemented for error reduction. The results show significant error reduction compared to existing methods, thereby optimizing the multiplication in terms of the above-mentioned metrics.
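For context, the classic baseline for log-domain multiplication is Mitchell's approximation, in which the fractional part of the base-2 logarithm is approximated linearly; the thesis's weight-adjustment scheme for error reduction is not reproduced in this sketch:

# Mitchell-style approximate multiplication of positive integers (baseline sketch only).
def mitchell_log2(x: int) -> float:
    k = x.bit_length() - 1          # integer part of log2(x)
    return k + x / (1 << k) - 1.0   # mantissa approximated linearly

def mitchell_multiply(a: int, b: int) -> int:
    s = mitchell_log2(a) + mitchell_log2(b)
    k = int(s)
    return round((1 << k) * (1.0 + (s - k)))  # approximate antilogarithm

print(mitchell_multiply(100, 200), 100 * 200)  # 18432 vs 20000 (Mitchell error stays under roughly 11%)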
Underwater video and still images are used by many programs within National Oceanic and Atmospheric Administration (NOAA) Fisheries with the objective of identifying, classifying and quantifying living marine resources. They use underwater cameras to record video data for manual analysis. This manual analysis is labour-intensive, time-consuming and error-prone. An efficient solution to this problem is proposed, using Gabor filters for feature extraction. The proposed method is implemented to identify two species of fish, namely Epinephelus morio and Ocyurus chrysurus. The results show a high detection rate with a minimal rate of false alarms.
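A minimal construction of a Gabor kernel bank is sketched below; the parameter values and the use of mean absolute filter responses as features are illustrative assumptions, and the classification stage that separates the two species is not shown:

# Illustrative Gabor kernel bank (parameters are assumptions, not the thesis settings).
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0):
    half = size // 2
    y, x = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1), indexing="ij")
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_t / wavelength)

bank = [gabor_kernel(theta=t) for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
# A feature vector per image patch could be the mean absolute response to each kernel.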
|
64 |
Uma ferramenta alternativa para síntese de circuitos lógicos usando a técnica de circuito evolutivo / An alternative tool for the synthesis of logic circuits using the evolvable circuit technique. Goulart Sobrinho, Edilton Furquim. January 2007 (has links)
Advisor: Suely Cunha Amaro Mantovani / Committee member: José Raimundo de Oliveira / Committee member: Nobuo Oki / Abstract: This work describes a methodology for the synthesis and optimization of digital circuits that evolves circuits through evolutionary algorithms, using reconfigurable devices as the platform, an approach known as Evolvable Hardware (EHW). EHW became viable with the large-scale development of reconfigurable devices, Programmable Logic Devices (PLDs), whose architecture and function can be determined by programming. Each circuit can be represented as an individual within an evolutionary process, evolving through genetic operations towards a desired result. The Genetic Algorithm (GA), an evolutionary computation technique based on the concepts of genetics and natural selection, was applied as the evolutionary algorithm. The synthesis process applied in this work starts from a description of the circuit's behaviour: a truth table for combinational circuits and a state-transition table for sequential circuits. The technique searches for a correct, minimized circuit arrangement that performs the proposed function. Based on this methodology, several examples are implemented in two different representations (fuse maps and logic gate matrices). / Master's
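A stripped-down evolutionary loop in this spirit is sketched below, evolving a tiny gate network until it matches a target truth table (XOR here); the gate-list encoding and the mutation-only search are simplifying assumptions rather than the fuse-map or gate-matrix representations used in the work:

# Minimal evolvable-circuit sketch: mutate a list of two-input gates until the
# final gate reproduces the target truth table. Encoding and search are assumptions.
import random

OPS = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
       "NAND": lambda a, b: 1 - (a & b), "NOR": lambda a, b: 1 - (a | b)}
TRUTH = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # target function: XOR
N_GATES, N_INPUTS = 4, 2

def evaluate(genome, inputs):
    signals = list(inputs)
    for op, i, j in genome:                  # each gate reads two earlier signals
        signals.append(OPS[op](signals[i], signals[j]))
    return signals[-1]                       # the last gate is the circuit output

def random_gate(position):
    limit = N_INPUTS + position              # a gate may only reference earlier signals
    return (random.choice(list(OPS)), random.randrange(limit), random.randrange(limit))

def fitness(genome):
    return sum(evaluate(genome, k) == v for k, v in TRUTH.items())

genome = [random_gate(p) for p in range(N_GATES)]
for _ in range(200000):
    if fitness(genome) == len(TRUTH):
        break
    pos = random.randrange(N_GATES)
    child = list(genome)
    child[pos] = random_gate(pos)
    if fitness(child) >= fitness(genome):    # accept neutral or improving mutations
        genome = child
print(fitness(genome), genome)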
|
65 |
Técnicas de reconfigurabilidade dos FPGAs da família APEX 20K - Altera. / Reconfigurability techniques for the FPGAs of the APEX 20K family - Altera. Teixeira, Marco Antonio, 26 August 2002 (has links)
Programmable logic devices of the APEX 20K family are configured at system power-up with data stored in devices developed specifically for that purpose. This family of FPGAs has an optimized interface that also allows microprocessors to configure them serially or in parallel, synchronously or asynchronously. Once configured, these FPGAs can be reconfigured in-circuit with new configuration data. Real-time reconfiguration leads to innovative reconfigurable computing applications. Commercially available configuration devices are limited to configuring the FPGAs only at system power-up, and always with the same configuration file. This work presents the implementation of a configuration controller capable of managing the configuration and reconfiguration of multiple FPGAs from several distinct configuration files. The entire design is developed, tested and validated with the EDA tool Quartus II, which provides an integrated environment for design entry, compilation and logic synthesis, simulation and timing analysis.
|
66 |
Process Variability-Aware Performance Modeling In 65 nm CMOS. Harish, B P, 12 1900 (has links)
With the continued and successful scaling of CMOS, process, voltage, and temperature (PVT) variations are increasing with each technology generation. Process variability significantly impacts all design goals, such as performance, power budget and circuit reliability, resulting in yield loss. Hence, variability needs to be modeled and cancelled out by design techniques during the design phase itself. This thesis addresses the variability issues in 65 nm CMOS, across the domains of process technology, device physics and circuit design, with the eventual goal of accurate modeling and prediction of propagation delay and power dissipation.
We have designed and optimized 65 nm gate length NMOS/PMOS devices to meet the specifications of the International Technology Roadmap for Semiconductors (ITRS), through design based on two-dimensional process and device simulation. Current design sign-off practices, which rely on corner-case analysis to model process variations, are pessimistic and are becoming impractical for nanoscale technologies. To avoid substantial overdesign, we have proposed a generalized statistical framework for variability-aware circuit design, for timing sign-off and power budget analysis, based on standard cell characterization through mixed-mode simulations. A two-input NAND gate has been used as the library element. Second-order statistical hybrid models have been proposed to relate gate delay, static leakage power and dynamic power directly to the underlying process parameters, using the statistical techniques of Design of Experiments - Response Surface Methodology (DOE-RSM) and the Least Squares Method (LSM).
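In the same spirit as the DOE-RSM/LSM models described above, a second-order response surface can be fitted by ordinary least squares; the sketch below uses synthetic data and assumed normalized parameters (e.g. gate length, threshold voltage and oxide thickness deviations) standing in for mixed-mode simulation runs:

# Illustrative second-order response-surface fit (synthetic data, assumed parameters).
import numpy as np

def quadratic_design_matrix(X):
    # Columns: 1, x_i, x_i^2, and pairwise x_i*x_j terms for each sample row.
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 3))    # 50 simulated corners, 3 normalized process parameters
delay = 10 + 2*X[:, 0] - 1.5*X[:, 1] + 0.8*X[:, 0]**2 + rng.normal(0.0, 0.05, 50)
coefficients, *_ = np.linalg.lstsq(quadratic_design_matrix(X), delay, rcond=None)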
To extend this methodology to a generic technology library and for computational efficiency, analytical models have been proposed that relate gate delays to the device saturation current, static leakage power to the device drain/gate resistance characterization, and dynamic power to the device C-V characterization. The hybrid models are derived from mixed-mode simulated data for accuracy, and from the analytical device characterization for computational efficiency. It has been demonstrated that statistical design based on hybrid models results in robust and reliable circuit design. This methodology is scalable to a large library of cells for statistical static timing analysis (SSTA) and statistical circuit simulation at the gate level, for estimating delay, leakage power and dynamic power in the presence of process variations. It is useful in bridging the gap between Technology CAD and Design CAD, through standard cell library characterization for delay, static leakage power and dynamic power, in the face of ever-decreasing timing windows and power budgets.
Finally, we have explored the gate-to-source/drain overlap length as a device design parameter for a robust, variability-aware device structure and demonstrated a trade-off between performance and variability at both the device level and the circuit level.
|
67 |
Elektrostatische Aufladung organischer Feldeffekttransistoren zur Verbesserung von gedruckten Schaltungen / Electrostatic charging of organic field-effect transistors for the improvement of printed circuits. Reuter, Kay, 15 November 2012 (has links) (PDF)
The topic of this thesis is the production of unipolar digital circuits by means of mass-printing technologies, combining accumulation-mode and depletion-mode organic field-effect transistors. To realize depletion-mode field-effect transistors, charges are injected and stored in the gate dielectric.
Consequently, the charge transport at the semiconductor-dielectric interface is influenced and the threshold voltage can be controlled. Different charging technologies are used to inject charges into the dielectric and are discussed in terms of their process parameters. Finally, fully printed digital circuits with improved signal-transfer characteristics are presented.
|
68 |
Analyse de robustesse de systèmes intégrés numériques / Robustness analysis of digital integrated systems. Chibani, Kais, 10 November 2016 (has links)
Integrated circuits are not immune to natural or malicious interference that may cause transient faults, leading to errors (soft errors) and potentially to erroneous behavior. This must be controlled, particularly in critical systems subject to safety and/or security constraints. To optimize the protection strategies of such systems, it is essential to identify the most critical elements: assessing the criticality of each block makes it possible to limit protection to the most sensitive blocks. This thesis proposes approaches for analyzing the robustness of a digital system early in the design flow. The key criterion used is the lifetime of the data stored in the registers for a given application. For microprocessor-based systems, an analytical approach has been developed and validated on a SparcV8 microprocessor (LEON3); it is based on a new methodology for refining assessments of register criticality. A more generic, complementary approach was then implemented to compute the criticality of all flip-flops from a synthesizable description. The tool implementing this approach was tested on significant systems such as hardware cryptographic accelerators and a hardware/software system based on the LEON3 processor. Fault injection campaigns validated the two approaches proposed in this thesis. In addition, these approaches are characterized by their generality, their efficiency in terms of accuracy and speed, their low implementation cost, and their ability to reuse functional verification environments.
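The register-lifetime criterion can be illustrated with a small sketch that estimates, from a write/read trace, the fraction of cycles during which each register holds live data; the trace format and the use of this fraction as a criticality proxy are assumptions, not the thesis's methodology:

# Assumed illustration: criticality of a register as the fraction of cycles its
# content is live (between a write and the last read of that value).
def register_criticality(trace, total_cycles):
    # trace: cycle-ordered list of (cycle, register, "W" or "R") events
    live, last_write, last_read = {}, {}, {}
    for cycle, reg, kind in trace:
        if kind == "W":
            if reg in last_write and last_read.get(reg, -1) >= last_write[reg]:
                live[reg] = live.get(reg, 0) + (last_read[reg] - last_write[reg])
            last_write[reg] = cycle
            last_read.pop(reg, None)
        else:
            last_read[reg] = cycle
    for reg, w in last_write.items():        # close out the final live interval per register
        if last_read.get(reg, -1) >= w:
            live[reg] = live.get(reg, 0) + (last_read[reg] - w)
    return {reg: cycles / total_cycles for reg, cycles in live.items()}

trace = [(0, "r1", "W"), (3, "r1", "R"), (5, "r1", "W"), (9, "r1", "R")]
print(register_criticality(trace, 10))   # r1 is live for (3-0)+(9-5) = 7 of 10 cycles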
|
69 |
Uma ferramenta alternativa para síntese de circuitos lógicos usando a técnica de circuito evolutivo / An alternative tool for the synthesis of logic circuits using the evolvable circuit technique. Goulart Sobrinho, Edilton Furquim [UNESP], 25 May 2007 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This work describes a methodology for the synthesis and optimization of digital circuits that evolves circuits through evolutionary algorithms, using reconfigurable devices as the platform, an approach known as Evolvable Hardware (EHW). EHW became viable with the large-scale development of reconfigurable devices, Programmable Logic Devices (PLDs), whose architecture and function can be determined by programming. Each circuit can be represented as an individual within an evolutionary process, evolving through genetic operations towards a desired result. The Genetic Algorithm (GA), an evolutionary computation technique based on the concepts of genetics and natural selection, was applied as the evolutionary algorithm. The synthesis process applied in this work starts from a description of the circuit's behaviour: a truth table for combinational circuits and a state-transition table for sequential circuits. The technique searches for a correct, minimized circuit arrangement that performs the proposed function. Based on this methodology, several examples are implemented in two different representations (fuse maps and logic gate matrices).
|
70 |
Técnicas de reconfigurabilidade dos FPGAs da família APEX 20K - Altera. / Reconfigurability techniques for the FPGAs of the APEX 20K family - Altera. Marco Antonio Teixeira, 26 August 2002 (has links)
Programmable logic devices of the APEX 20K family are configured at system power-up with data stored in devices developed specifically for that purpose. This family of FPGAs has an optimized interface that also allows microprocessors to configure them serially or in parallel, synchronously or asynchronously. Once configured, these FPGAs can be reconfigured in-circuit with new configuration data. Real-time reconfiguration leads to innovative reconfigurable computing applications. Commercially available configuration devices are limited to configuring the FPGAs only at system power-up, and always with the same configuration file. This work presents the implementation of a configuration controller capable of managing the configuration and reconfiguration of multiple FPGAs from several distinct configuration files. The entire design is developed, tested and validated with the EDA tool Quartus II, which provides an integrated environment for design entry, compilation and logic synthesis, simulation and timing analysis.
|