61 |
Teste de dispositivos analógicos programáveis (FPAAs) / Test of programmable analog devices (FPAAs). Balen, Tiago Roberto, January 2006
This work addresses the test of programmable analog devices. Several analog test methodologies are studied and some of them are applied in the developed strategies. In order to validate these strategies, two commercial FPAAs (Field Programmable Analog Arrays), of different vendors and distinct models, are considered as devices under test. The first device studied is a continuous-time FPAA from Lattice Semiconductor. An important characteristic of this device is its structural programmability. For this reason, the test strategy applied to this FPAA is based on a structural method known as OBT (Oscillation-Based Test). In this method, blocks of the circuit under test are individually converted into oscillators. The parameters of the generated signal, such as its frequency and amplitude, can be expressed as functions of the components used in the oscillator implementation. In this way, it is possible to detect faults in the FPAA simply by observing these parameters. The method is first studied considering an external analysis of the signal parameters. Subsequently, an internal response analyzer, based on a double integrator, is built with the available programmable resources of the FPAA itself; the overall test cost is thus reduced while the fault coverage is increased with no area overhead. The results obtained with the external analysis and with the built-in response evaluation are then compared. The second FPAA considered, from Anadigm, is a switched-capacitor device whose programmability is strictly functional. Thus, a structural test method cannot easily be developed and applied without prior knowledge of the device's architectural details. For this reason, a functional test method known as TRAM (Transient Response Analysis Method) is adopted. In this method the Circuit Under Test (CUT) is programmed to implement first- and second-order blocks, and the transient response of these blocks to a given input stimulus is analyzed. Taking advantage of the inherent programmability of FPAAs, a redundancy-based BIST scheme that duplicates the block under test is used to obtain an error signal representing the difference between the fault-free and faulty Configurable Analog Blocks (CABs). To increase observability, the error signal is integrated over time, enhancing the fault detection capability of the method. In both strategies the main objective is to test the CABs of the FPAAs, exploiting the device programmability as much as possible and using existing resources to aid the test. The results obtained show that the developed strategies are good alternatives for the built-in self-test of this type of device.
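As a rough illustration of the two decision steps described in this abstract — the oscillation-based pass/fail check and the TRAM error-signal integration — the following Python sketch shows how such checks might look. The nominal frequency, amplitude and tolerance windows are assumed values for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: OBT-style pass/fail decision and TRAM-style
# error-signal integration. All numeric values are assumptions.

def obt_check(freq_hz, amp_v,
              nominal_freq_hz=10_000.0, nominal_amp_v=1.0,
              freq_tol=0.05, amp_tol=0.10):
    """Flag a block as faulty if its oscillation frequency or amplitude
    falls outside the fault-free tolerance window."""
    freq_ok = abs(freq_hz - nominal_freq_hz) <= freq_tol * nominal_freq_hz
    amp_ok = abs(amp_v - nominal_amp_v) <= amp_tol * nominal_amp_v
    return freq_ok and amp_ok          # True -> considered fault-free

def integrated_error(error_samples, dt):
    """TRAM-style observability aid: accumulate the absolute error between
    the duplicated blocks over time so small deviations become visible."""
    return sum(abs(e) * dt for e in error_samples)

print(obt_check(10_200.0, 0.95))       # within tolerance -> True
print(obt_check(7_000.0, 0.95))        # frequency shift  -> False
print(integrated_error([0.01, 0.03, 0.02], dt=1e-6))
```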
|
63 |
Samočinné testování mikrokontrolerů / Self-Testing of Microcontrollers. Denk, Filip, January 2019
This Master's thesis deals with the functional safety of electronic systems. Specifically, it focuses on self-testing of a microprocessor and its peripherals at the software level. The main aim of the thesis is to design and implement a set of functions, written in the C programming language or assembly language, that automatically test selected areas of the microcontroller. The resources and methods used in the implemented solution also aim to meet the requirements of the safety standard IEC 60730-1, Annex H, Software Class B. The NXP LPC55S69 microcontroller, which contains two ARM Cortex-M33 cores, was chosen as the hardware platform. As a result, an example application is provided that uses the implemented test functions at run time. The example application also contains a graphical user interface with fault-injection capability.
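One building block typically found in Class B software self-test libraries is a march test for the on-chip RAM. The sketch below models the March C- algorithm in Python over a plain list standing in for SRAM; it is only an algorithmic illustration, not code from this thesis, which targets the LPC55S69 in C and assembly.

```python
# Illustrative sketch: March C- RAM test (often used in IEC 60730-1 Class B
# self-test libraries). A Python list stands in for the SRAM under test.

def march_c_minus(mem):
    n = len(mem)

    def write(i, v):
        mem[i] = v

    def read_expect(i, v):
        if mem[i] != v:
            raise RuntimeError(f"RAM fault detected at address {i}")

    for i in range(n):                 # M0: up   (w0)
        write(i, 0)
    for i in range(n):                 # M1: up   (r0, w1)
        read_expect(i, 0); write(i, 1)
    for i in range(n):                 # M2: up   (r1, w0)
        read_expect(i, 1); write(i, 0)
    for i in reversed(range(n)):       # M3: down (r0, w1)
        read_expect(i, 0); write(i, 1)
    for i in reversed(range(n)):       # M4: down (r1, w0)
        read_expect(i, 1); write(i, 0)
    for i in range(n):                 # M5: up   (r0)
        read_expect(i, 0)
    return True

print(march_c_minus([0] * 256))        # True when no fault is found
```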
|
64 |
Time-based All-Digital Technique for Analog Built-in Self-Test. Vasudevamurthy, Rajath, January 2013
This thesis presents a scheme for Built-In Self-Test (BIST) of analog signals with minimal area overhead, in which on-chip voltages are measured in an all-digital manner. With technology scaling, inverter switching times are becoming shorter, leading to better resolution of edges in time. This time resolution is observed to be superior to voltage resolution in the face of reducing supply voltages and increasing variations as physical dimensions shrink. In this thesis, a new method for the observability of analog signals is proposed that is digital-friendly and scalable to future deep sub-micron (DSM) processes. The low-bandwidth analog test voltage is captured as the delay between a pair of clock signals, and this delay is measured digitally to the desired resolution.
Such an approach lends itself easily to a distributed implementation, in which the routing of analog signals over long paths is minimized. A small piece of circuitry, called the sampling head (SpH), placed near each test voltage acts as a transducer, converting the test voltage into a delay between a pair of low-frequency clocks. A probe clock and a sampling clock are routed serially to the sampling heads placed at the nodes of the analog test voltages. The sampling head present at each test node consists of a pair of delay cells and a pair of flip-flops, giving rise to as many sub-sampled signal pairs as there are nodes. To measure a given analog voltage, the corresponding sub-sampled signal pair is fed to a Delay Measurement Unit (DMU), which measures the skew between the pair. The concept is validated by designing a test chip in a UMC 130 nm CMOS process. Sub-mV accuracy is demonstrated for static signals with a measurement time of a few milliseconds, and an ENOB of 5.29 is demonstrated for low-bandwidth signals in the absence of sample-and-hold circuitry.
The sampling clock is derived from the probe clock using a PLL, and the design equations are worked out for optimal performance. To validate the concept, the duty cycle of the probe clock, whose ON-time is modulated by a sine wave, is measured by the same DMU. Measurement results from an FPGA implementation confirm 9 bits of resolution.
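The following behavioral Python sketch illustrates the measurement principle described above: the test voltage is turned into a skew between two clocks, and the skew is digitized by letting a slightly slower sampling clock sweep across the probe period. The transducer gain, clock period and step size are assumed values, not the parameters of the test chip.

```python
# Behavioral sketch only; all parameters are assumed for illustration.

T_PROBE = 1.0e-6            # probe clock period (assumed 1 MHz)
STEP = T_PROBE / 1024       # per-cycle slide of the sampling edge (assumed)
K_V_TO_T = 100e-9           # assumed sampling-head gain: 100 ns of skew per volt

def voltage_to_skew(v_test):
    """Sampling-head model: the test voltage modulates a delay cell, turning
    a voltage into a skew between the probe and sampling clock edges."""
    return K_V_TO_T * v_test

def measure_skew(skew):
    """Delay-measurement model: count sampling-clock cycles until the sliding
    sampling edge has moved past the delayed probe edge."""
    for n in range(1024):
        if n * STEP >= skew:
            return n * STEP
    return None

v_in = 0.75                                   # volts
est = measure_skew(voltage_to_skew(v_in))
print(f"reconstructed voltage ~ {est / K_V_TO_T:.3f} V")   # ~0.752 V
```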
|
65 |
Methodologies for low-cost testing and self-healing of RF systems. Goyal, Abhilash, 21 April 2011
This thesis proposes a multifaceted production test and post-manufacture yield enhancement framework for RF systems. The framework uses low-cost test and post-manufacture calibration/tuning techniques. Since the test cost and the yield of RF circuits and sub-systems contribute directly to the manufacturing cost of RF systems, the proposed framework minimizes the overall manufacturing cost through two approaches. In the first approach, low-cost testing methodologies are proposed for RF amplifiers and for integrated RF substrates with an embedded RF passive filter and interconnect. Techniques are developed to test RF circuits by analyzing a low-frequency signal on the order of a few MHz, without any external RF test stimulus; oscillation principles are used to enable testing of RF circuits without an external test stimulus. In the second approach, to increase the yield of RF circuits affected by parametric defects, the circuits are tuned during production test to compensate for performance loss, using on-board or on-chip resources. This approach includes a diagnosis algorithm that identifies faulty circuits within the system and a compensation process that adjusts tunable components to enhance the performance of the RF circuits. In the proposed yield improvement methodologies, no external test stimulus is required, because the stimulus is generated by the RF circuit itself with the help of additional circuitry, and faulty circuits are detected using the low-cost test methods developed in this research. As a result, the proposed research enables low-cost testing and self-healing of RF systems.
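A minimal sketch of the diagnose-and-tune loop implied by the second approach might look as follows. The specification window, bias-code range and toy device model are hypothetical; in the thesis the signature would come from a low-frequency (few-MHz) measurement obtained without an external RF stimulus.

```python
# Illustrative sketch: diagnose-and-tune loop of a self-healing flow.
# Measurement function, spec window and tuning knob are hypothetical.

def self_heal(measure_gain_db, set_bias_code, spec=(14.0, 16.0), codes=range(32)):
    """Sweep a tunable bias code until the measured low-cost signature
    (here, a gain estimate in dB) falls inside the specification window."""
    lo, hi = spec
    for code in codes:
        set_bias_code(code)
        g = measure_gain_db()
        if lo <= g <= hi:
            return code            # healed: performance recovered
    return None                    # cannot be tuned back into spec -> reject

# Example with a toy device model whose gain increases with the bias code:
state = {"code": 0}
device_gain = lambda: 10.0 + 0.25 * state["code"]
result = self_heal(device_gain, lambda c: state.update(code=c))
print("selected bias code:", result)   # 16 -> gain 14.0 dB
```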
|
66 |
Méthodologie d'estimation des métriques de test appliquée à une nouvelle technique de BIST de convertisseur sigma-delta / Methodology for test metrics estimation built into the design flow of hard-to-simulate analog/mixed-signal circuits. Dubois, Matthieu, 23 June 2011
The pervasiveness of the semiconductor industry in an increasing range of applications spanning human activity stems from our ability to integrate more and more functionality onto a small silicon area. Competitiveness in this industry, apart from product originality, is mainly defined by the manufacturing cost as well as the product reliability. Therefore, finding a trade-off between these two often contradictory objectives is a major concern and calls for efficient test solutions. The focus nowadays is mainly on Analog and Mixed-Signal (AMS) circuits, since the associated testing cost can amount to up to 70% of the overall manufacturing cost even though AMS circuits typically occupy no more than 20% of the die area. To this end, there are intensified efforts by industry to develop more economical test solutions without sacrificing product quality. Design-for-Test (DfT) is a promising alternative to standard test techniques: it consists of integrating, during the development phase of the chip, extra on-chip circuitry aiming to facilitate testing or even enable a built-in self-test (BIST). However, the adoption of a DfT technique requires a prior evaluation of its capability to distinguish functional circuits from defective ones. In this thesis, we present a novel methodology for estimating the quality of a DfT technique that is readily incorporated in the design flow of AMS circuits. Based on the generation of a large synthetic sample of circuits that takes into account the impact of process variations on the performances and test measurements, this methodology computes test metrics that determine whether the DfT technique is capable of rejecting defective devices while passing functional devices. In addition, the thesis proposes a novel, purely digital BIST technique for sigma-delta analog-to-digital converters, together with a new method for generating and injecting the test stimulus. The efficiency of the test metrics evaluation methodology is demonstrated on this novel BIST technique. Finally, a hardware prototype developed on an FPGA shows the possibility of using the BIST technique within a built-in calibration system.
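The sketch below gives a toy Monte Carlo version of the test-metrics idea described in this abstract: classify each synthetic circuit instance as functional or defective against its performance specification, classify it as pass or fail against the test limit, and compare the two classifications. The distributions, specification and test limit are assumed values, not those of the thesis.

```python
# Illustrative sketch (assumed distributions and limits): estimating test
# escape and yield loss from a large synthetic sample of circuit instances.

import random

random.seed(0)
N = 100_000
SPEC_MIN = 0.9            # assumed performance specification
TEST_LIMIT = 0.88         # assumed limit on the BIST/DfT measurement

escapes = yield_loss = functional = passed = 0
for _ in range(N):
    process_shift = random.gauss(0.0, 0.05)                 # process variation
    performance = 1.0 + process_shift                       # true performance
    test_measure = performance + random.gauss(0.0, 0.02)    # imperfect test
    is_functional = performance >= SPEC_MIN
    is_pass = test_measure >= TEST_LIMIT
    functional += is_functional
    passed += is_pass
    escapes += (not is_functional) and is_pass
    yield_loss += is_functional and (not is_pass)

print(f"test escape : {escapes / passed:.4%} of shipped parts")
print(f"yield loss  : {yield_loss / functional:.4%} of good parts")
```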
|
67 |
Spelansvarsverktygens effekt: En kvalitativ kartläggning av faktorer som påverkar spelansvarsverktygens effekt / Effect of responsible gambling tools: A qualitative survey of factors that affect responsible gambling tools. Broström, Leonard; Thorslund, Kasper, January 2022
With the advent of the internet, the opportunity to gamble for money is no longer limited by opening hours or by gambling being available only in specific places, which has meant that more and more people have developed an unhealthy relationship with gambling. Having an unhealthy relationship with gambling can have several negative consequences. To address this problem, relatively new legislation has been introduced in Sweden: all companies with a Swedish gambling license must implement responsible gambling tools, whose purpose is to encourage users to gamble in a healthy manner. The results in existing research and literature on responsible gambling tools are contradictory, but in general it has emerged that responsible gambling tools have a low impact, especially among the players who have the greatest need for them, as these players do not use the tools properly. This study aims to investigate why responsible gambling tools have a low impact among players with gambling problems. The study has a qualitative research approach with semi-structured interviews as the method for data collection. On this basis, eight semi-structured interviews were conducted in which the informants were asked to explain their opinions and perceptions of responsible gambling tools. The interviews were analyzed through a thematic analysis, which resulted in seven themes that individually and together address the study's purpose: interpretable legislation, unlicensed gambling, lack of standardization, lack of centralization, shortcomings in user knowledge, shortcomings in the information provided, and shortcomings in self-exclusion tools. The study also contributes general knowledge of how the various actors view responsible gambling tools. The themes presented are thus a contribution that can be used in future research and in the continued development of responsible gambling tools.
|
68 |
Conception en vue de test de convertisseurs de signal analogique-numérique de type pipeline / Design for test of pipelined analog-to-digital converters. Laraba, Asma, 20 September 2013
Differential Non-Linearity (DNL) and Integral Non-Linearity (INL) are the two main static performances of Analog-to-Digital Converters (ADCs) typically measured during production testing. These two performances reflect the deviation of the transfer curve of the ADC from its ideal form. In a classic testing scheme, a saturated sine wave or ramp is applied to the ADC and the number of occurrences of each code is obtained in order to construct the histogram from which DNL and INL can be readily calculated. This standard approach requires the collection of a large volume of data, because each code needs to be traversed many times to average out noise; furthermore, the volume of data increases exponentially with the resolution of the ADC under test. According to recently published data, testing the mixed-signal functions (e.g. data converters and phase-locked loops) of a System-on-Chip (SoC) contributes more than 30% of the total test time, although mixed-signal circuits occupy a small fraction of the SoC area that typically does not exceed 5%. Thus, reducing test time for ADCs is an area of industry focus and innovation. Pipeline ADCs offer a good compromise between speed, resolution, and power consumption. They are well suited for a variety of applications and are typically present in SoCs intended for video applications. By virtue of their operation, pipeline ADCs have groups of output codes that have the same width. Thus, instead of considering all the codes in the testing procedure, only one code out of each group can be measured, reducing the static test time significantly. In this work, a technique for efficiently applying reduced-code testing to pipeline ADCs is proposed. It exploits two main properties of the pipeline ADC architecture and allows an accurate estimation of the static performances to be obtained. The technique is validated on an experimental 11-bit, 55 nm pipeline ADC from STMicroelectronics, resulting in estimated DNL and INL that are practically indistinguishable from the DNL and INL obtained with the standard histogram technique, while measuring only 6% of the codes.
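As a reference point for the histogram method mentioned above, the sketch below computes DNL and INL from a code histogram and shows the reduced-code idea of replicating one measured value per group of equally wide codes. The group size and toy histogram are assumptions; the thesis derives the code groups from the pipeline architecture itself.

```python
# Illustrative sketch: histogram-based DNL/INL and the reduced-code expansion.

def dnl_inl(histogram):
    """Standard histogram method: DNL[k] = h[k]/mean(h) - 1, INL = cumsum(DNL).
    The first and last (saturated) codes are excluded."""
    core = histogram[1:-1]
    mean = sum(core) / len(core)
    dnl = [h / mean - 1.0 for h in core]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl

def expand_reduced_codes(measured_dnl, group_size):
    """Reduced-code testing: assume codes within a group share the same width,
    so one measured DNL value per group is replicated to its whole group."""
    return [d for d in measured_dnl for _ in range(group_size)]

# Toy example: an ideal converter gives a flat histogram, hence DNL and INL ~ 0.
hist = [500] + [100] * 14 + [500]
dnl, inl = dnl_inl(hist)
print(max(abs(d) for d in dnl), max(abs(i) for i in inl))   # 0.0 0.0
```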
|
69 |
Τεχνικές ελέγχου ορθής λειτουργίας με έμφαση στη χαμηλή κατανάλωση ισχύος / VLSI testing techniques focused on low power dissipation. Μπέλλος, Μάτσιεϊ, 25 June 2007
This dissertation focuses on VLSI testing in which power dissipation is also taken into account. The proposed techniques address: a) test data compression in an embedded test environment that uses external testers, b) test set embedding in a built-in self-test environment, and c) reduction of power and energy consumption in an external testing environment. The test data compression is based on the observation that a test vector can be produced from the previous one by replacing some parts of the previous vector with new parts of the current vector. The compression is even higher when the test vectors are ordered and scan cell reordering is also performed; if the scan cell reordering is based on the transition frequency of neighbouring scan cells, a reduction in power dissipation is also achieved. In the case of built-in self-test, the problem of test set embedding was studied, and efficient circuits based on linear feedback shift registers combined with XOR trees, or on shift registers combined with OR trees, were proposed. If the circuits under test have a regular structure, such as residue number system adders, a circuit that takes advantage of the regular form of the test set can be derived. Finally, for external testing, test vector ordering methods with vector repetition are proposed, which reduce power consumption. These methods are based on the selection of appropriate minimum spanning trees and, through the modification of the repeated vectors, achieve considerable savings in energy and in average and peak power dissipation.
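The sketch below illustrates the power-aware ordering idea in the external-testing part of this work: consecutive test vectors should differ in as few bits as possible so that scan shifting causes fewer transitions. The greedy nearest-neighbour ordering shown here is only a stand-in for the minimum-spanning-tree-based selection used in the dissertation, and the toy vectors are assumed values.

```python
# Illustrative sketch: ordering test vectors to lower switching activity.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def order_vectors(vectors):
    """Greedy nearest-neighbour ordering of test vectors by Hamming distance."""
    remaining = list(vectors)
    ordered = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(ordered[-1], v))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

def total_transitions(sequence):
    return sum(hamming(a, b) for a, b in zip(sequence, sequence[1:]))

tests = ["0000", "1111", "0001", "1110", "0011"]
print(total_transitions(tests))                 # original order: 14
print(total_transitions(order_vectors(tests)))  # reordered: 5
```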
|
70 |
Ανάπτυξη εξομοιωτή σφαλμάτων για σφάλματα μετάβασης σε ψηφιακά ολοκληρωμένα κυκλώματα / Development of a fault simulator for transition faults in digital integrated circuits. Κασερίδης, Δημήτριος, 26 September 2007
This Master's thesis consists of two parts, both in the field of VLSI testing of integrated circuits. The first part covers work on testing digital circuits using the transition fault model and, more specifically, the study of the model and the design and implementation of a transition fault simulator. The transition fault model moves beyond the scope of the simple stuck-at fault model that dominates the literature by introducing the notion of time, and therefore enables testing techniques that are more precise and closer to reality. Furthermore, a fault simulator is probably the most important part of the tool chain required for the design, implementation, and study of VLSI testing techniques, so having such a tool available enables the study of new testing techniques based on the transition fault model.
The second part of the thesis summarizes the study of a new technique that reduces the test sequences of reseeding-based schemes in the case of test set embedding testing techniques. The proposed algorithm achieves significant reductions both in the volume of test data that must be stored for the precise regeneration of the test sequences and in the length of the test vector sequences applied to the circuit under test, in comparison to the classical test techniques available in the literature. In addition to the algorithm, a low-hardware-overhead architecture for implementing it in a Built-In Self-Test environment is presented, in which the imposed hardware overhead is confined to just one extra bit per seed plus one very small extra counter in the scheme's control logic. Finally, the proposed architecture is compared with the best architecture proposed so far in the literature (see Appendix A).
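A minimal sketch of the expansion side of a reseeding-based test set embedding scheme is shown below: each stored seed is expanded by an LFSR for a fixed window of cycles, and a deterministic test vector is considered embedded if it appears in some window. The 4-bit register, taps, seeds and window length are assumed toy values; the seed-storage optimizations of the thesis (one extra bit per seed, extra counter) are not modelled.

```python
# Illustrative sketch only: LFSR seed expansion for test set embedding.
# The 4-bit LFSR (taps chosen to give a maximal-length sequence), the seeds
# and the window length are toy values for demonstration.

def lfsr_states(seed, cycles, width=4, taps=(3, 2)):
    """Left-shifting Fibonacci LFSR: the new LSB is the XOR of the tap bits."""
    state, out = seed, []
    for _ in range(cycles):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def embeds(seeds, targets, window):
    """Check whether every deterministic target vector appears in the union
    of the seeds' expansion windows."""
    produced = set()
    for s in seeds:
        produced.update(lfsr_states(s, window))
    return all(t in produced for t in targets)

seeds = [0b1000, 0b0110]          # stored seeds
targets = [0b1000, 0b0001]        # deterministic vectors to reproduce
print(embeds(seeds, targets, window=8))   # True
```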
|