About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

On Detection, Analysis and Characterization of Transient and Parametric Failures in Nano-scale CMOS VLSI

Sanyal, Alodeep 01 May 2010 (has links)
As we move deep into the nanometer regime of CMOS VLSI (45nm node and below), the device noise margin gets sharply eroded because of the continuous lowering of device threshold voltage together with the ever-increasing rate of signal transitions driven by the consistent demand for higher performance. Sharp erosion of the device noise margin vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. Therefore, it is of paramount importance to come up with efficient test generation and test application methods to accurately detect and characterize these classes of failures.

Soft error rate (SER) is an important design metric used in the semiconductor industry, expressed as the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing the device in a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the fact that measurement of SER incurs high time and cost overhead, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design; followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by 1-2 orders of magnitude compared to the current state of the art.

During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse in nature, we present a novel design-for-testability (DFT) architecture which facilitates application of the same test vector twice in a row. The underlying assumption is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures for two consecutive applications of the same test vector accurately distinguishes between a soft fail and a hard fail. We show application of this DFT technique in measuring soft error rate as well as other circuit-marginality-related parametric failures, such as thermal hot-spot induced delay failures.

A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together in exacerbating the noise effect even further. The existing literature on signal integrity verification and test falls short of taking these combined noise effects into account. We particularly focus on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates. Gate leakage current originating in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution which uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. Pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens up a new direction for studying nanometer noise effects and motivates us to extend the study to other noise sources in tandem, including voltage drop and temperature effects.
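To make the 0-1 ILP formulation concrete, here is a minimal sketch in Python using the PuLP solver library: binary variables select which aggressors switch so that the cumulative coupling noise on a victim net is maximized under logic-compatibility constraints. The net names, coupling weights, and exclusion pairs are invented for illustration, and the rich logic constraints of a real ATPG flow are abstracted to pairwise exclusions; this is a sketch of the idea, not the thesis's actual formulation.

```python
# Sketch: choose which aggressors switch (binary variables) so that the
# cumulative coupling noise injected on a victim net is maximized, subject
# to greatly simplified logic-compatibility constraints.  All numbers and
# names are hypothetical; a real flow would derive coupling weights from
# extracted parasitics and encode the circuit's logic implications.
import pulp

# Hypothetical per-aggressor noise contributions (mV) on the victim.
coupling_mv = {"agg0": 42.0, "agg1": 35.5, "agg2": 18.0, "agg3": 27.5}
# Hypothetical aggressor pairs whose required transitions are logically
# incompatible (no input pattern pair can justify both).
exclusive = [("agg0", "agg2"), ("agg1", "agg3")]

prob = pulp.LpProblem("max_victim_noise", pulp.LpMaximize)
switch = {a: pulp.LpVariable(f"switch_{a}", cat="Binary") for a in coupling_mv}

# Objective: cumulative crosstalk noise at the victim.
prob += pulp.lpSum(coupling_mv[a] * switch[a] for a in coupling_mv)

# Logic compatibility, abstracted as pairwise exclusions.
for a, b in exclusive:
    prob += switch[a] + switch[b] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [a for a in coupling_mv if switch[a].value() == 1]
print(f"aligned aggressors: {chosen}, noise ~ {pulp.value(prob.objective):.1f} mV")
```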
12

FORMAL: A SEQUENTIAL ATPG-BASED BOUNDED MODEL CHECKING SYSTEM FOR VLSI CIRCUITS

Qiang, Qiang 10 April 2006 (has links)
No description available.
13

Online Techniques for Enhancing the Diagnosis of Digital Circuits

Tanwir, Sarmad 05 April 2018 (has links)
The test process for semiconductor devices involves generation and application of test patterns, failure logging, and diagnosis. Traditionally, most of these activities cater for all possible faults without making any assumptions about the actual defects present in the circuit. As the size of circuits continues to increase (following Moore's Law), the size of the test sets is also increasing exponentially. It follows that the cost of testing has already surpassed that of design and fabrication. The central idea of our work in this dissertation is that we can have substantial savings in test cost if we bring the actual hardware under test inside the test process's various loops -- in particular: failure logging, diagnostic pattern generation, and diagnosis.

Our first work, which we describe in Chapter 3, applies this idea to failure logging. We modify the existing failure logging process, which logs only the first few failure observations, into an intelligent one that logs failures on the basis of their usefulness for diagnosis. To enable the intelligent logging, we propose some lightweight metrics that can be computed in real time to grade the diagnosability of the observed failures. On the basis of this grading, we select the failures to be logged dynamically according to the actual defects in the circuit under test. This means that failures may be logged in a different manner for devices having different defects, in contrast with the existing method, which has the same logging scheme for all failing devices. With the failing devices in the loop, we are able to optimize the failure log for every particular failing device, thereby improving the quality of the subsequent diagnosis. In Chapter 4, we investigate the most lightweight of these metrics for failure log optimization for the diagnosis of multiple simultaneous faults and provide the results of our experiments.

Often, in spite of exploiting the entire potential of a test set, we might not be able to meet our diagnosis goals. This is because manufacturing tests are generated to meet fault coverage goals using as few tests as possible. In other words, they are optimized for detection count and test time, not for diagnosis. In our second work, we leverage real-time measures of diagnosability, similar to the ones used for failure log optimization, to generate additional diagnostic patterns. These additional patterns help diagnose the existing failures beyond the power of the existing tests. Again, since the failing device is inside the test generation loop, we obtain highly specific tests for each failing device, optimized for its diagnosis. Using our proposed framework, we are able to diagnose devices better and faster than state-of-the-art industrial tools. Chapter 5 provides a detailed description of this method.

Our third work extends the hardware-in-the-loop framework to the diagnosis of scan chains. In this method, we define a different metric that is applicable to scan chain diagnosis. Again, this method provides additional tests that are specific to the diagnosis of the particular scan chain defects in individual devices. We achieve two further advantages in this approach compared to the online diagnostic pattern generator for logic diagnosis. First, we do not need a known-good device for generating or knowing the good response; second, besides the generation of additional tests, we also perform the final diagnosis online, i.e., on the tester during test application. We explain this in detail in Chapter 6.

In our research, we observe that feedback from a device is very useful for enhancing the quality of root-cause investigations of failures in its logic and test circuitry, i.e., the scan chains. This leads to the question of whether some primitive signals from the devices can be indicative of the fault coverage of the applied tests. In other words, can we estimate the fault coverage without the costly activities of fault modeling and simulation? By conducting further research into this problem, we found that entropy measurements at the circuit outputs do indeed have a high correlation with the fault coverage and can be used to estimate it with good accuracy. We find that these predictions are accurate not only for random tests but also for high-coverage ATPG-generated tests. We present the details of our fourth contribution in Chapter 7. This work is of significant importance because it suggests that high-coverage tests can be learned by continuously applying random test patterns to the hardware and using the measured entropy as a reward function. We believe that this lays down a foundation for further research into gate-level sequential test generation, which is currently intractable for industrial-scale circuits with existing techniques. / Ph. D.
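The entropy idea of Chapter 7 is easy to illustrate. The sketch below computes the Shannon entropy of each circuit output over a stream of applied patterns and averages it as a coverage proxy; the response matrix is random placeholder data, and the actual mapping from entropy to fault coverage would be fitted empirically, as the dissertation's correlation study suggests.

```python
# Sketch: estimate how "busy" each circuit output is across a stream of
# applied test patterns via its Shannon entropy, then average the
# per-output entropies as a proxy signal for fault coverage.
import math
import random

def bit_entropy(bits):
    """Shannon entropy (in bits) of a 0/1 sequence."""
    if not bits:
        return 0.0
    p1 = sum(bits) / len(bits)
    h = 0.0
    for p in (p1, 1.0 - p1):
        if p > 0.0:
            h -= p * math.log2(p)
    return h

# Placeholder data: responses[i][j] = value of output j under pattern i.
random.seed(1)
n_patterns, n_outputs = 1000, 16
responses = [[random.randint(0, 1) for _ in range(n_outputs)]
             for _ in range(n_patterns)]

per_output = [bit_entropy([row[j] for row in responses])
              for j in range(n_outputs)]
avg_entropy = sum(per_output) / n_outputs
print(f"average output entropy: {avg_entropy:.3f} bits")
```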
14

Modeling defective part level due to static and dynamic defects based upon site observation and excitation balance

Dworak, Jennifer Lynn 30 September 2004 (has links)
Manufacturing testing of digital integrated circuits is essential for high quality. However, exhaustive testing is impractical, and only a small subset of all possible test patterns (or test pattern pairs) may be applied. Thus, it is crucial to choose a subset that detects a high percentage of the defective parts and produces a low defective part level. Historically, test pattern generation has often been seen as a deterministic endeavor. Test sets are generated to deterministically ensure that a large percentage of the targeted faults are detected. However, many real defects do not behave like these faults, and a test set that detects them all may still miss many defects. Unfortunately, modeling all possible defects as faults is impractical. Thus, it is important to fortuitously detect unmodeled defects using high-quality test sets. To maximize fortuitous detection, we do not assume a high correlation between faults and actual defects. Instead, we look at the common requirements for all defect detection. We deterministically maximize the observations of the least-observed sites while randomly exciting the defects that may be present. The resulting decrease in defective part level is estimated using the MPGD model.

This dissertation describes the MPGD defective part level model and shows how it can be used to predict defective part levels resulting from static defect detection. Unlike many other predictors, its predictions are a function of site observations, not fault coverage, and thus it is generally more accurate at high fault coverages. Furthermore, its components model the physical realities of site observation and defect excitation, and thus it can be used to give insight into better test generation strategies.

Next, we investigate the effect of additional constraints on the fortuitous detection of defects; specifically, we focus on detecting dynamic defects instead of static ones. We show that the quality of the randomness of excitation becomes increasingly important as defect complexity increases. We introduce a new metric, called excitation balance, to estimate the quality of the excitation, and we show how excitation balance relates to the constant τ in the MPGD model.
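As a rough illustration of the site-observation bookkeeping this strategy relies on (not the MPGD model itself), the following sketch tallies how often each site's value reaches an observable output across a test set and surfaces the least-observed sites as targets for the next deterministic patterns. The site names and per-test observation sets are hypothetical placeholders; a real flow would obtain them from fault-free simulation.

```python
# Sketch: count per-site observations across a test set and pick the
# least-observed sites as targets for further deterministic patterns.
from collections import Counter

observed_per_test = [
    {"n1", "n2", "n5"},   # sites observed by test 0 (hypothetical)
    {"n2", "n3"},         # test 1
    {"n1", "n2", "n4"},   # test 2
]
all_sites = {"n1", "n2", "n3", "n4", "n5", "n6"}

counts = Counter()
for sites in observed_per_test:
    counts.update(sites)
# Sites never observed get an explicit zero count.
for site in all_sites:
    counts.setdefault(site, 0)

least_observed = sorted(all_sites, key=lambda s: counts[s])[:3]
print("target next:", least_observed)  # e.g. ['n6', 'n3', 'n4']
```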
16

Functional timing analysis of VLSI circuits containing complex gates / Análise de timing funcional de circuitos VLSI contendo portas complexas

Guntzel, Jose Luis Almada January 2000 (has links)
The recent advances in CMOS technology have allowed for the fabrication of transistors with submicronic dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. Such an increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models.

Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli-dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns, which is obviously not feasible given the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, such a method may yield overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of conditions necessary to declare a path sensitizable (the so-called path sensitization criterion), the number of paths simultaneously handled, and the method used to determine whether the sensitization conditions are satisfiable or not. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques, while the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates, which is the basic concern of this thesis. In addition, and as a necessary step to set the stage, a detailed and systematic study on functional timing analysis is also presented.
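The contrast the abstract draws can be made concrete with a small sketch of the purely topological computation: model the combinational block as a directed acyclic graph with per-gate delays and take the longest path. The toy netlist and delay values below are invented; a functional analyzer would additionally check, via ATPG- or SAT-based sensitization, that the reported path can actually propagate a transition.

```python
# Sketch: topological (worst-case structural) delay of a combinational
# block, computed as the longest path in its DAG of gates.
from functools import lru_cache

# gate -> (delay, fanin gates); "in*" are primary inputs with zero delay.
netlist = {
    "g1": (2.0, ["in1", "in2"]),
    "g2": (1.5, ["in2", "in3"]),
    "g3": (3.0, ["g1", "g2"]),
    "out": (0.5, ["g3", "g2"]),
}

@lru_cache(maxsize=None)
def arrival(node):
    """Latest arrival time at `node` (topological longest path)."""
    if node not in netlist:          # primary input
        return 0.0
    delay, fanins = netlist[node]
    return delay + max(arrival(f) for f in fanins)

print(f"topological critical delay: {arrival('out'):.1f}")  # 2.0+3.0+0.5 = 5.5
```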
19

Metody pro testování analogových obvodů / Methods for testing of analog circuits

Kincl, Zdeněk January 2013 (has links)
The thesis deals with methods for testing linear analog circuits in the frequency domain. The goal is to propose efficient methods for automatic generation of a test plan. By reducing the number of measurements and the computational effort, the cost of testing can be significantly reduced. The thesis focuses on multi-frequency parametric fault analysis, which was fully implemented in Matlab. A suitable choice of test frequencies can suppress measurement errors as well as errors caused by the manufacturing tolerances of circuit elements. The proposed methods for the optimal choice of test frequencies were statistically verified using the Monte Carlo method. To increase the accuracy and reduce the computational effort of the fault analysis, procedures based on the least-squares method and on approximate symbolic analysis were developed.
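A toy sketch of the kind of Monte Carlo check described: for a first-order RC low-pass, sample component values within assumed manufacturing tolerances and measure how much the gain spreads at candidate test frequencies; frequencies where the fault-free spread stays small are more robust test points. The circuit, nominal values, tolerances, and frequencies are all illustrative, not taken from the thesis.

```python
# Sketch: Monte Carlo spread of the gain of an RC low-pass at candidate
# test frequencies, under component tolerances.
import math
import random

def gain_db(f_hz, r_ohm, c_farad):
    """Magnitude of H(jw) = 1 / (1 + jwRC) in dB."""
    w = 2.0 * math.pi * f_hz
    return -10.0 * math.log10(1.0 + (w * r_ohm * c_farad) ** 2)

random.seed(7)
R, C, tol = 10e3, 10e-9, 0.05          # nominal 10 kOhm / 10 nF, 5 % (1-sigma)
test_freqs = [100.0, 1.59e3, 10e3]     # candidate test frequencies (Hz)

for f in test_freqs:
    samples = [gain_db(f,
                       random.gauss(R, tol * R),
                       random.gauss(C, tol * C))
               for _ in range(10_000)]
    mean = sum(samples) / len(samples)
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
    print(f"f = {f:8.1f} Hz: gain {mean:7.2f} dB, spread (sigma) {std:.3f} dB")
```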
