41. Designing and implementing an architecture for single-page applications in Javascript and HTML5. Petersson, Jesper. January 2012.
A single-page application is a website that retrieves all needed components in a single page load. The intention is a user experience more reminiscent of a native application than of a traditional website. Single-page applications written in Javascript are becoming more and more popular, but as applications grow in size, their complexity increases as well; a good architecture or a suitable framework is therefore needed. The thesis begins by analyzing a number of design patterns suitable for applications containing a graphical user interface. Based on a composition of these design patterns, an architecture targeting single-page applications was designed. The architecture was designed to make applications easy to develop, test and maintain. Initial loading time, data synchronization and search engine optimization were also important aspects that were considered. A framework based on the architecture was implemented, tested and compared against other frameworks available on the market. The implemented framework is modular and supports routing, templates and a number of different drivers for communicating with a server-side database. The modules were designed with a variant of the Model-View-Controller (MVC) pattern, in which a presentation model is introduced between the controller and the view. This allows unit tests to bypass the user interface and instead communicate directly with the core of the application. After minification and compression, the size of the framework is only 14.7 kB including all its dependencies, which results in a low initial loading time. Finally, a solution that allows a Javascript application to be indexed by a search engine is presented. It is based on PhantomJS, which produces a static snapshot that can be served to the search engines. The solution is fast, scalable and easy to maintain.
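The indexing solution sketched in this abstract can be illustrated in a few lines. The code below is an illustration rather than the thesis's implementation: it assumes a Flask server, the historical _escaped_fragment_ crawler convention for hash-bang URLs, and a hypothetical PhantomJS script, snapshot.js, that prints the rendered HTML of a URL to stdout.

```python
# Minimal sketch of serving static snapshots to crawlers. Assumed setup:
# Flask installed, and a hypothetical PhantomJS script "snapshot.js" that
# prints the fully rendered HTML of the given URL to stdout.
import subprocess
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/")
def index():
    fragment = request.args.get("_escaped_fragment_")
    if fragment is not None:
        # Crawler request: render the corresponding hash-bang URL headlessly
        # and return the static HTML snapshot.
        url = "http://localhost:5000/#!" + fragment
        result = subprocess.run(
            ["phantomjs", "snapshot.js", url],
            capture_output=True, text=True, timeout=30,
        )
        return result.stdout
    # Normal browser request: serve the single-page application shell.
    return send_file("index.html")
```

Serving the snapshot from a separate process keeps the Javascript application itself unchanged, which is one reason a PhantomJS-based approach is easy to maintain.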
42. Automating IEEE 1500 wrapper insertion. Huss, Niklas. January 2009.
Integrated circuits (ICs) are becoming increasingly complex, which leads to long design and development times. Designing ICs in a modular fashion is an efficient way to shorten these times. Due to imperfections in IC manufacturing, all ICs are tested, and an IC designed in a modular fashion can be tested in a modular manner. To enable modular test, the IEEE 1500 standard has been developed to provide isolation and access for modules. While the IEEE 1500 standard is adopted, there is as yet no commercial tool available. In this thesis we have (1) developed an IEEE 1500 wrapper and (2) included it in a design flow based on a commercial tool, with scripts to automate the process. Given a module in VHDL, our design automation automatically performs synthesis, scan insertion, test generation (ATPG), and wrapper insertion. We have applied the design flow to several benchmarks and verified its correctness through simulation.
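The automated flow can be pictured as a small driver script that chains the four steps. The sketch below is a hypothetical illustration: the tool binaries (edatool, atpgtool) and TCL script names are invented placeholders, not the commercial tool invocations used in this thesis.

```python
# Sketch of a wrapper-insertion design flow driver. The tool binaries and
# the TCL script names below are hypothetical placeholders.
import subprocess
import sys

FLOW_STEPS = [
    ("synthesis",         ["edatool", "-f", "run_synthesis.tcl"]),
    ("scan insertion",    ["edatool", "-f", "run_scan_insertion.tcl"]),
    ("test generation",   ["atpgtool", "-f", "run_atpg.tcl"]),
    ("wrapper insertion", ["edatool", "-f", "run_wrapper_insertion.tcl"]),
]

def run_flow(module_vhdl: str) -> None:
    """Run synthesis, scan insertion, ATPG and IEEE 1500 wrapper insertion."""
    for name, cmd in FLOW_STEPS:
        print(f"[flow] {name}: {' '.join(cmd)} (module: {module_vhdl})")
        result = subprocess.run(cmd + [module_vhdl], capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"[flow] step '{name}' failed:\n{result.stderr}")

if __name__ == "__main__":
    run_flow("module_under_test.vhd")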
43. Testabilité des services Web / Web services testability. Rabhi, Issam. 09 January 2012.
This PhD thesis addresses several forms of automated Web services testing. The first part is devoted to functional testing through robustness testing; the second extends this work to non-functional properties, such as testability and security. We explore these issues from both a theoretical and a practical perspective. We propose a robustness testing method that generates and executes test cases automatically from WSDL descriptions: each declared operation is invoked with hazards and the responses are analyzed, taking the SOAP environment into account. We show that few hazards can actually be handled, and we improve robustness-issue detection by separating the SOAP processor behavior from that of the Web service itself. Stateful Web services are modeled with symbolic transition systems (STS). A second method, dedicated to stateful Web services, completes the Web service specification so that it describes both correct and incorrect behaviors; using this completed specification, the services are tested with relevant hazards and a verdict is returned. We also study the testability of Web services composed with the BPEL language, focusing on a well-known testability criterion, observability, whose degradation reduces the feasibility of testing. To evaluate it, we transform ABPEL specifications into STS by recursively converting each structured activity into a graph of sub-activities, and we propose enhancement algorithms that reduce these testability problems. Finally, we present a security testing method for stateful Web services: security properties such as authentication, authorization and availability are expressed as rules in the formal language Nomad; the rules are turned into test purposes based on the WSDL description, the stateful specification is completed in parallel, and a synchronized product generates the test cases. To validate our proposal, we have applied the testing approach to real-size case studies.
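A minimal sketch of the hazard-based robustness loop described above is given below, using the Python SOAP library zeep. The WSDL URL, the operation list and the hazard values are placeholders, and the real method's hazard selection and verdict computation are considerably more elaborate.

```python
# Minimal sketch of WSDL-driven robustness testing: invoke each declared
# operation with "hazard" values and record how the service reacts.
# WSDL URL and operation names are placeholders.
from zeep import Client
from zeep.exceptions import Fault

HAZARDS = [None, "", "a" * 10_000, -1, 2**31, "<script>alert(1)</script>"]

def robustness_test(wsdl_url: str, operations: list[str]) -> dict:
    client = Client(wsdl_url)
    verdicts = {}
    for op_name in operations:
        operation = getattr(client.service, op_name)
        for hazard in HAZARDS:
            try:
                response = operation(hazard)
                # The operation accepted the hazard; the response itself
                # would then be checked for sanity.
                verdicts[(op_name, repr(hazard))] = ("pass", response)
            except Fault as fault:
                # A well-formed SOAP fault is a correct, robust reaction.
                verdicts[(op_name, repr(hazard))] = ("soap-fault", fault.message)
            except Exception as exc:
                # Transport errors or crashes suggest a robustness issue.
                verdicts[(op_name, repr(hazard))] = ("robustness-issue", str(exc))
    return verdicts

# Example (placeholder WSDL and operation name):
# print(robustness_test("http://example.org/service?wsdl", ["GetQuote"]))
```

Separating SOAP faults from transport-level failures mirrors the thesis's point about distinguishing the SOAP processor's behavior from the Web service's own.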
44. Zkoumání souvislostí mezi pokrytím poruch a testovatelností elektronických systémů / Investigating of Relations between Fault-Coverage and Testability of Electronic Systems. Rumplík, Michal. January 2010.
This work deals with testability analysis of digital circuits and fault coverage. It describes digital systems and their diagnosis, tools for generating and applying tests, and sets of benchmark circuits. It then covers circuit testing and experimentation with the TASTE tool for testability analysis and a commercial tool for test generation and application. The experiments focus on increasing the testability of circuits.
45. A Design Methodology for Physical Design for Testability. Almajdoub, Salahuddin A. 01 July 1996.
Physical design for testability (PDFT) is a strategy to design circuits in a way to avoid or reduce realistic physical faults. The goal of this work is to define and establish a specific methodology for PDFT. The proposed design methodology includes techniques to reduce potential bridging faults in complementary metal-oxide-semiconductor (CMOS) circuits. To compare faults, the design process utilizes a new parameter called the fault index. The fault index for a particular fault is the probability of occurrence of the fault divided by the testability of the fault. Faults with the highest fault indices are considered the worst faults and are targeted by the PDFT design process to eliminate them or reduce their probability of occurrence.
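The fault index can be made concrete with a few lines of code; the sketch below uses invented probabilities and testabilities, not data from this dissertation.

```python
# Illustration of the fault-index ranking (made-up numbers, not thesis data):
# fault_index = probability_of_occurrence / testability, so likely but
# hard-to-test faults rank highest and are targeted first by PDFT.
faults = {
    # fault id: (probability of occurrence, testability in (0, 1])
    "bridge_n1_n2": (0.020, 0.80),
    "bridge_n3_n7": (0.015, 0.05),   # hard to test -> high index
    "bridge_n4_n9": (0.001, 0.90),
}

fault_index = {f: p / t for f, (p, t) in faults.items()}
worst_first = sorted(fault_index, key=fault_index.get, reverse=True)
for f in worst_first:
    print(f"{f}: index = {fault_index[f]:.3f}")
# bridge_n3_n7 dominates (0.300): the layout enhancers would target it first.
```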
An implementation of the PDFT design process is constructed using several new tools in addition to other "off-the-shelf" tools. The first tool developed in this work is a testability measure tool for bridging faults. Two other tools are developed to eliminate or reduce the probability of occurrence of bridging faults with high fault indices. The row enhancer targets faults inside the logic elements of the circuit, while the channel enhancer targets faults inside the routing part of the circuit.
To demonstrate the capabilities and test the effectiveness of the PDFT design process, this work conducts an experiment which includes designing three CMOS circuits from the ISCAS 1985 benchmark circuits. Several layouts are generated for every circuit. Every layout, except the first one, utilizes information from the previous layout to minimize the probability of occurrence for faults with high fault indices. Experimental results show that the PDFT design process successfully achieves two goals of PDFT, providing layouts with fewer faults and minimizing the probability of occurrence of hard-to-test faults. Improvement in the total fault index was about 40 percent in some cases, while improvement in total critical area was about 30 percent in some cases. However, virtually all the improvements came from using the row enhancer; the channel enhancer provided only marginal improvements. / Ph.D.
46. On Detection, Analysis and Characterization of Transient and Parametric Failures in Nano-scale CMOS VLSI. Sanyal, Alodeep. 01 May 2010.
As we move deep into the nanometer regime of CMOS VLSI (45nm node and below), the device noise margin is sharply eroded because of the continuous lowering of device threshold voltage together with the ever increasing rate of signal transitions driven by the consistent demand for higher performance. Sharp erosion of the device noise margin vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. It is therefore of paramount importance to come up with efficient test generation and test application methods to accurately detect and characterize these classes of failures.

Soft error rate (SER) is an important design metric used in the semiconductor industry, represented by the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing the device in a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the fact that measurement of SER incurs high time and cost overhead, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design; followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by 1-2 orders of magnitude compared to the current state of the art.

During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse in nature, we present a novel design-for-testability (DFT) architecture which facilitates application of the same test vector twice in a row. The underlying assumption is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures for two consecutive applications of the same test vector accurately distinguishes a soft fail from a hard fail. We show the application of this DFT technique in measuring soft error rate as well as other circuit-marginality-related parametric failures, such as thermal hot-spot induced delay failures.

A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together, exacerbating the noise effect even further. The existing literature on signal integrity verification and test falls short of taking these combined noise effects into account. We particularly focus on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates.
Gate leakage current that originates in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution which uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. The pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens up a new direction for studying nanometer noise effects and motivates extending the study to other noise sources in tandem, including voltage drop and temperature effects.
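The double-application DFT idea from this abstract reduces to a small piece of classification logic, sketched below under the stated assumption that a fault-free reference signature is known. This is an illustration of the decision rule, not the proposed hardware.

```python
# Sketch of the classification rule behind the double-application DFT
# scheme: each vector is applied twice, and the two output signatures are
# compared with each other and with the fault-free reference. Signatures
# are modeled as plain integers here.
def classify(sig_first: int, sig_second: int, sig_reference: int) -> str:
    if sig_first == sig_second == sig_reference:
        return "pass"
    if sig_first == sig_second:
        # Identically corrupt in both cycles -> permanent (hard) defect.
        return "hard fail"
    # Differing signatures across back-to-back applications -> the error
    # did not repeat, pointing to an intermittent (soft) failure.
    return "soft fail"

print(classify(0x3A, 0x3A, 0x3A))  # pass
print(classify(0x2F, 0x2F, 0x3A))  # hard fail
print(classify(0x2F, 0x3A, 0x3A))  # soft fail
```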
47. An 8 bit Serial Communication module Chip Design Using Synopsys tools and ASIC Design Flow Methodology. Munugala, Anvesh. 23 May 2018.
No description available.
48. Integrated Enhancement of Testability and Diagnosability for Digital Circuits. Rahagude, Nikhil Prakash. 29 November 2010.
While conventional test point insertion, commonly used in design for testability, can improve fault coverage, the test points selected may not necessarily be the best candidates to aid silicon diagnosis. In this thesis, test point insertions are conducted with the aim of detecting more faults and also synergistically distinguishing currently indistinguishable fault-pairs. We achieve this by identifying those points in the circuit which are not only hard to test but also lie on distinguishable frontiers, as Testability-Diagnosability (TD) points. To this end, we propose a novel low-cost metric to identify such TD points. Further, we propose a new DFT + DFD architecture, which adds just one pin (to identify test/functional mode) and a small amount of additional combinational logic to the circuit under test. Our experiments indicate that the proposed architecture can distinguish 4x more previously indistinguishable fault-pairs than existing DFT architectures while maintaining similar fault coverage. Further, the experiments illustrate that quality results can be achieved with an area overhead of around 5%. Additional experiments conducted on hard-to-test circuits show an increase in fault coverage of 48% while maintaining similar diagnostic resolution.
Built-in Self Test (BIST) is a technique that adds additional hardware blocks to circuits to allow them to perform self-testing. This enables the circuits to test themselves, thereby reducing the dependency on expensive external automated test equipment (ATE). At the end of a test session, BIST generates a signature, which is a compaction of the output responses obtained from the circuit during that session. Comparing this signature with a reference signature categorizes the circuit as error-free or buggy. While BIST provides a quick and low-cost alternative for checking a circuit's correctness, diagnosis in a BIST environment remains poor because of the limited information present in the lossily compacted final signature: the signature gives no information about the possible defect location in the circuit. To facilitate diagnosis, researchers have proposed the use of two additional on-chip embedded memories, a response memory to store reference responses and a fail memory to store failing responses. We propose a novel architecture in which only one additional memory is required. Experimental results on benchmark circuits substantiate that the same fault coverage can be maintained using just 5% of the available test vectors. This reduces the size of the memory required to store responses, which in turn reduces area overhead. Further, by adding test points to the circuit using our proposed architecture, we can improve the diagnostic resolution by 60% with respect to external testing. / Master of Science
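For readers unfamiliar with signature-based BIST, the sketch below shows the compaction-and-compare idea in its simplest serial form, an LFSR-based signature register. The register width and feedback taps are arbitrary illustrative choices; this is not the single-memory diagnosis architecture proposed in the thesis.

```python
# Toy sketch of BIST response compaction: a single-input signature register
# built from an LFSR. Width and feedback taps are illustrative choices.
def compact(response_bits, width=16, taps=0x2D):
    sig = 0
    mask = (1 << width) - 1
    for bit in response_bits:
        feedback = (sig >> (width - 1)) & 1      # bit shifted out
        sig = ((sig << 1) & mask) ^ bit          # shift in the response bit
        if feedback:
            sig ^= taps                          # apply feedback polynomial
    return sig

good = [1, 0, 1, 1, 0, 0, 1, 0] * 8              # fault-free responses
bad = good.copy()
bad[13] ^= 1                                     # single flipped response bit

reference = compact(good)
print(hex(reference), hex(compact(good)))        # signatures match: pass
print(hex(reference), hex(compact(bad)))         # signatures differ: fail
```

Because the compaction is lossy, a matching signature proves little about where a defect lies, which is exactly the diagnosis gap the thesis addresses.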
49. Análise empírica sobre a influência das métricas CK na testabilidade de software orientado a objetos / Empirical analysis on the influence of CK metrics on object-oriented software testability. Cruz, Robinson Crusoé da. 11 December 2017.
Software testing executes a program under test with the goal of revealing its failures, making it one of the most important phases of the software development life cycle. Testability is a key quality attribute for the success of the testing activity: it can be understood as the effort required to create, execute and evaluate test cases for a piece of software. This attribute is not an intrinsic quality of the software, so it cannot be measured directly, like the number of lines of code, for example; it can, however, be inferred from the internal and external metrics of a software system. Among the features commonly used in testability analysis are the CK metrics, proposed by Chidamber and Kemerer to analyze object-oriented software. Most work in this line relates the size and number of test cases to software testability, but it is critical to analyze the quality of the tests to see whether they achieve the objectives for which they were proposed, independent of quantity and size. This Master's thesis therefore presents an empirical study of the relationship between CK metrics and software testability, based on the adequacy of unit test cases with respect to structural and mutation testing criteria. Initially, a systematic review was carried out to assess the state of the art of testability and the CK metrics. The results showed that although there is considerable research on the subject, gaps remain that motivate new work on analyzing test quality and on identifying which metric features can be used to measure and analyze testability. Two empirical analyses were then performed. In the first, the CK metrics were correlated with line coverage, branch coverage and mutation score; the results showed the importance of each metric in the context of testability. In the second, a clustering of the metrics was proposed to identify groups of classes with similar testability-related features. In addition to the empirical analyses, a tool for collecting and analyzing CK metrics was developed and presented, with the aim of supporting new research related to this project. Despite the limitations of the analyses, the results show the importance of each CK metric in the context of testability and provide developers and designers with a support tool and empirical data to better develop and design their systems, so as to facilitate software testing.
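As a concrete illustration of the first analysis, the sketch below correlates one CK metric with per-class mutation scores. The numbers are invented placeholders, not data from this study, and the real analysis covers all the CK metrics plus line and branch coverage.

```python
# Sketch of correlating a CK metric with mutation score per class.
# Values are invented placeholders, not data from the study.
from scipy.stats import spearmanr

# One entry per class: WMC (Weighted Methods per Class) and mutation score.
wmc            = [3, 12, 7, 25, 5, 18, 9, 31]
mutation_score = [0.92, 0.61, 0.78, 0.40, 0.85, 0.55, 0.70, 0.33]

rho, p_value = spearmanr(wmc, mutation_score)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strong negative rho would suggest that classes with high WMC are
# harder to test adequately -- the kind of relation the study examines.
```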
50. Test de mémoires SRAM à faible consommation / Test of Low-Power SRAM Memories. Bonet Zordan, Leonardo Henrique. 06 December 2013.
Nowadays, embedded memories are the densest components within Systems-on-Chip (SOCs), accounting for more than 90% of the overall SOC area. Among the different types of memories, SRAMs are still widely used for realizing complex SOCs, especially because they allow high access performance, high density and fast integration in CMOS designs. On the other hand, high-density SRAMs designed in deep-submicrometer technologies have become the main contributor to overall SOC power consumption, hence the increasing need to design low-power SRAMs that embed mechanisms to reduce their power consumption. Moreover, due to their dense structure, SRAMs are more and more prone to defects compared to other circuit blocks, especially in recent technologies. SRAMs are therefore arising as the main SOC yield detractor, which raises the need for efficient test solutions targeting such devices. In this thesis, failure analysis based on electrical simulations is exploited to predict the faulty behaviors caused by realistic defects affecting circuit blocks that are specific to low-power SRAMs, such as power-gating mechanisms and voltage regulation systems. Based on the identified faulty behaviors, efficient March tests and low-area-overhead design-for-testability schemes are proposed to detect the studied defects. Moreover, the reuse of the read and write assist circuits commonly embedded in low-power SRAMs is evaluated as a way to increase stress in the SRAM during the test phase and thus improve defect coverage.
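As background for the March tests mentioned above, here is a minimal sketch of a classic March C- run against a toy SRAM model with one injected stuck-at-0 cell. The memory model and fault are deliberately simplistic illustrations; the thesis targets low-power-specific defects (power gating, voltage regulation) that such a plain model does not capture.

```python
# Sketch of a March C- test against a tiny SRAM model with an injected
# stuck-at-0 cell. Deliberately simplistic: real March tests for low-power
# SRAMs must also exercise power-gating and assist circuitry.
class FaultySRAM:
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.sa0 = stuck_at_zero          # address of a stuck-at-0 cell

    def write(self, addr, value):
        self.cells[addr] = 0 if addr == self.sa0 else value

    def read(self, addr):
        return self.cells[addr]

def march_c_minus(mem, size):
    """March C-: up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)."""
    fails = []
    def check(addr, expected):
        if mem.read(addr) != expected:
            fails.append((addr, expected))
    for a in range(size):           mem.write(a, 0)                  # M0: up(w0)
    for a in range(size):           check(a, 0); mem.write(a, 1)     # M1: up(r0,w1)
    for a in range(size):           check(a, 1); mem.write(a, 0)     # M2: up(r1,w0)
    for a in reversed(range(size)): check(a, 0); mem.write(a, 1)     # M3: down(r0,w1)
    for a in reversed(range(size)): check(a, 1); mem.write(a, 0)     # M4: down(r1,w0)
    for a in range(size):           check(a, 0)                      # M5: up(r0)
    return fails

mem = FaultySRAM(8, stuck_at_zero=5)
print(march_c_minus(mem, 8))        # cell 5 fails every read expecting 1
```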