1 |
Utilização de modelos de falhas na metodologia dos observadores de estado para detecção de trincas em sistemas contínuos [Use of fault models in the state-observer methodology for crack detection in continuous systems] / Araujo, Marco Anderson da Cruz [UNESP] 18 March 2005 (has links) (PDF)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Nowadays, one of the main reasons for industry's interest in developing new techniques for fault detection and localization is concern for the safety of its systems: supervision and monitoring are needed so that a fault can be detected and corrected as quickly as possible. In practice, certain system parameters can vary during operation, owing to specific characteristics or to the natural wear of components. It is also known that, even in well-designed systems, the occurrence of cracks in some components can cause economic losses or lead to dangerous situations. State observers can reconstruct the unmeasured states of a system, provided it is observable, making it possible to estimate quantities at points of difficult access. The state-observer technique consists of developing a model of the system under analysis and comparing the estimated output with the measured output; the difference between the two signals yields a residual that is used for analysis. In this work, a bank of observers associated with a crack model was assembled in order to track the progress of the crack. The results obtained through computational simulations of a cantilever beam discretized by the finite element technique and carried through experimental... (Complete abstract: click electronic address below).
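The residual-based scheme described in this abstract (compare the observer's estimated output with the measured output; a growing residual signals a fault) can be illustrated with a minimal sketch. The scalar plant, observer gain, and injected parameter fault below are illustrative assumptions, not values taken from the thesis:

```python
# Minimal residual-based fault detection sketch: a scalar discrete-time
# plant x[k+1] = a*x[k] + b*u[k] with output y = x, and a Luenberger-style
# observer driven by the same input. The residual r = y - x_hat stays near
# zero while the model matches the plant, and grows after a fault.

def simulate(steps=100, fault_at=50):
    a_model, b, gain = 0.9, 1.0, 0.5   # observer uses the nominal model
    x, x_hat, u = 0.0, 0.5, 1.0        # initial estimate is deliberately off
    residuals = []
    for k in range(steps):
        a_plant = 0.9 if k < fault_at else 0.7  # injected parameter fault
        y = x                                   # measured output
        r = y - x_hat                           # residual used for analysis
        residuals.append(r)
        # plant and observer updates
        x = a_plant * x + b * u
        x_hat = a_model * x_hat + b * u + gain * r
    return residuals
```

The initial estimation error decays (observer error dynamics are contractive here), so the residual is near zero just before the fault; once the plant parameter changes, the model mismatch keeps the residual away from zero, which a simple threshold can flag.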
|
2 |
Distributed fault detection and diagnostics using artificial intelligence techniques / A. Lucouw / Lucouw, Alexander January 2009 (has links)
With the advancement of automated control systems in the past few years, the focus
has also shifted toward safer, more reliable systems with less harmful effects on the
environment. With increased job mobility, less experienced operators could cause more
damage by incorrect identification and handling of plant faults, often causing faults to
progress to failures. The development of an automated fault detection and diagnostic
system can reduce the number of failures by assisting the operator in making correct
decisions. By providing information such as fault type, fault severity, fault location
and cause of the fault, it is possible to do scheduled maintenance of small faults rather
than unscheduled maintenance of large faults.
Different fault detection and diagnostic systems have been researched and the best
system chosen for implementation as a distributed fault detection and diagnostic
architecture. The aim of the research is to develop a distributed fault detection and
diagnostic system. Smaller building blocks are used instead of a single system that
attempts to detect and diagnose all the faults in the plant.
The phases of the research include an in-depth literature study followed by
the creation of a simplified fault detection and diagnostic system. When all the aspects
concerning the simple model are identified and addressed, an advanced fault detection
and diagnostic system is created followed by an implementation of the fault detection
and diagnostic system on a physical system. / Thesis (M.Ing. (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2009.
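The "smaller building blocks" idea above can be sketched as local detectors that each monitor one plant section and a supervisor that aggregates their verdicts. The section names, thresholds, and severity labels below are illustrative assumptions, not the architecture developed in the thesis:

```python
# Sketch of a distributed fault detection and diagnostic architecture:
# each building block watches one plant section and classifies its own
# residual; a supervisor merges the local verdicts into a plant report,
# so small faults can be caught before they progress to failures.

def local_detector(name, residual, warn=1.0, alarm=3.0):
    """Classify one section's residual into severity levels."""
    if abs(residual) >= alarm:
        return {"location": name, "severity": "failure"}
    if abs(residual) >= warn:
        return {"location": name, "severity": "fault"}
    return None  # healthy

def supervisor(readings):
    """Aggregate local verdicts from every building block."""
    verdicts = [v for name, r in readings.items()
                if (v := local_detector(name, r)) is not None]
    return sorted(verdicts, key=lambda v: v["location"])

report = supervisor({"boiler": 0.2, "pump": 1.7, "turbine": 0.4})
# -> [{'location': 'pump', 'severity': 'fault'}]
```

Each block only needs a model of its own section, which is the point of the distributed design: no single component has to detect and diagnose every fault in the plant.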
|
4 |
Utilização de modelos de falhas na metodologia dos observadores de estado para detecção de trincas em sistemas contínuos [Use of fault models in the state-observer methodology for crack detection in continuous systems] / Araujo, Marco Anderson da Cruz. January 2005 (has links)
Advisor: Gilberto Pechoto de Melo / Committee: Aparecido Carlos Gonçalves / Committee: Jorge Nei Brito / Abstract: Nowadays, one of the main reasons for industry's interest in developing new techniques for fault detection and localization is concern for the safety of its systems: supervision and monitoring are needed so that a fault can be detected and corrected as quickly as possible. In practice, certain system parameters can vary during operation, owing to specific characteristics or to the natural wear of components. It is also known that, even in well-designed systems, the occurrence of cracks in some components can cause economic losses or lead to dangerous situations. State observers can reconstruct the unmeasured states of a system, provided it is observable, making it possible to estimate quantities at points of difficult access. The state-observer technique consists of developing a model of the system under analysis and comparing the estimated output with the measured output; the difference between the two signals yields a residual that is used for analysis. In this work, a bank of observers associated with a crack model was assembled in order to track the progress of the crack. The results obtained through computational simulations of a cantilever beam discretized by the finite element technique and carried through experimental... (Complete abstract: click electronic address below). / Master's
|
5 |
Fault Modeling and Analysis for Multiple-Voltage Power Supplies in Low-Power Design / Velaga, Srikirti 14 October 2013
No description available.
|
6 |
Fault model-based variability testingMachado, Ivan do Carmo 21 July 2014 (has links)
Software Product Line (SPL) engineering has emerged as an important strategy to cope with the increasing demand of large-scale product customization. Owing to its variability management capabilities, SPL has provided companies with an efficient and effective means of delivering a set of products with higher quality at a lower cost, when compared to traditional software engineering strategies. However, such a benefit does not come for free. SPL demands cost-effective quality assurance techniques that attempt to minimize the overall effort, while improving, or at least not hurting, fault detection rates. Software testing, the most widely used approach for improving software quality in practice, has been largely explored to address this particular topic.
State of the art SPL testing techniques are mainly focused on handling variability testing from a high level perspective, namely through the analysis of feature models, rather than concerning issues from a source code perspective. However, we believe that improvements in the quality of variable assets entail addressing testing issues both from high and low-level perspectives.
By carrying out a series of empirical studies, gathering evidence from both the literature and the analysis of defects reported in three open source software systems, we identified and analyzed commonly reported defects from Java-based variability implementation mechanisms. Based on such evidence, we designed a method for building fault models for variability testing, from two main perspectives: test assessment, which focuses on the evaluation of the effectiveness of existing test suites; and test design, which aims to aid the construction of test sets, by focusing on fault-prone elements.
The task of modeling typical or important faults provides a means of coming up with test inputs that can expose faults in the program unit. Hence, we hypothesize that understanding the nature of typical or important faults prior to developing the test sets would enhance their capability to find a particular set of errors.
We performed a controlled experiment to assess the effectiveness of using fault models to support the design of test inputs for SPL testing. We observed promising results that confirm the hypothesis that incorporating fault models into an SPL testing process significantly improves the quality of test inputs.
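One common fault-prone class in variability implementations is the interaction between features, so a fault-model-driven test design can target configurations that exercise feature pairs. The sketch below uses pairwise interaction coverage as a stand-in fault class; it is a generic technique chosen for illustration, not necessarily the fault model derived in the thesis:

```python
# Sketch of fault-model-driven test design for variability: assume the
# fault model says defects hide in pairwise feature interactions, so we
# greedily pick product configurations until every on/off combination of
# every two distinct features appears in at least one chosen configuration.
from itertools import combinations, product

def pairs_covered(config, features):
    """All (feature, value) pairs exercised together by one configuration."""
    cfg = dict(zip(features, config))
    return {((f, cfg[f]), (g, cfg[g])) for f, g in combinations(features, 2)}

def pairwise_configs(features):
    """Greedily select configurations covering all pairwise interactions."""
    todo = {((f, a), (g, b))
            for f, g in combinations(features, 2)
            for a, b in product([False, True], repeat=2)}
    chosen = []
    for config in product([False, True], repeat=len(features)):
        new = pairs_covered(config, features) & todo
        if new:
            chosen.append(dict(zip(features, config)))
            todo -= new
        if not todo:
            break
    return chosen

configs = pairwise_configs(["logging", "cache", "ssl"])
```

The greedy pass is not minimal, but every pairwise interaction of the hypothetical features ends up covered by some selected configuration, which is the property a fault model of this kind asks test design to guarantee.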
|
7 |
Méthode pour la spécification de responsabilité pour les logiciels : Modelisation, Tracabilité et Analyse de dysfonctionnements / Method for software liability specifications : Modelisation, Traceability and Incident Analysis / Sampaio Elesbao Mazza, Eduardo 26 June 2012 (has links)
Despite the effort made to define methods for the design of high quality software, experience shows that failures of IT systems due to software errors remain very common, and one must admit that even critical systems are not immune from that type of error. One reason for this situation is that software requirements are generally hard to elicit precisely, and it is often impossible to predict all the contexts in which software products will actually be used. Considering the interests at stake, it is therefore of prime importance to be able to establish liabilities when damages are caused by software errors. Essential requirements to define these liabilities are (1) the availability of reliable evidence, (2) a clear definition of the expected behaviors of the components of the system, and (3) agreement between the parties with respect to liabilities. In this thesis, we address these problems and propose a formal framework to precisely specify and establish liabilities in a software contract. This framework can be used to assist the parties both in the drafting phase of the contract and in the definition of the architecture to collect evidence. Our first contribution is a method for the integration of a formal definition of digital evidence and liabilities in a legal contract. Digital evidence is based on distributed execution logs produced by "acceptable log architectures". The notion of acceptability relies on a formal threat model based on the set of potential claims. Another main contribution is the definition of an incremental procedure, which is implemented in the LAPRO tool, for the analysis of distributed logs.
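The "reliable evidence" requirement above can be illustrated with a minimal tamper-evident log sketch. Hash chaining is a generic integrity technique used here for illustration; it is not necessarily the log architecture formalized in the thesis or in the LAPRO tool:

```python
# Minimal tamper-evident execution log: each entry stores the hash of the
# previous entry, so any later modification of a logged event breaks the
# chain and is detected when the log is examined as evidence.
import hashlib
import json

def append(log, event):
    """Append an event, chaining it to the previous entry's digest."""
    prev = log[-1]["digest"] if log else "genesis"
    digest = hashlib.sha256(
        json.dumps({"event": event, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"event": event, "prev": prev, "digest": digest})

def verify(log):
    """Recompute the chain; any edited or reordered entry fails the check."""
    prev = "genesis"
    for entry in log:
        digest = hashlib.sha256(
            json.dumps({"event": entry["event"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if digest != entry["digest"] or entry["prev"] != prev:
            return False
        prev = entry["digest"]
    return True

trail = []
append(trail, "component A: request sent")
append(trail, "component B: request rejected")
assert verify(trail)
trail[0]["event"] = "component A: no request sent"  # tampering attempt
assert not verify(trail)
```

A log of this kind gives the parties a shared, checkable record of component behavior, which is the precondition for attributing an incident to one side or the other.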
|
8 |
Investigation of Methods for Testing Aspect Oriented SoftwareBanik, Kallol January 2014 (has links)
Aspect-oriented programming is a comparatively new programming paradigm that intends to overcome some limitations of approaches such as procedural and object-oriented programming, which are unable to properly capture certain design decisions. Aspect-oriented programming introduces new properties not found in structural or object-oriented programming, and the new design patterns of aspect-oriented software introduce new fault types and new challenges for testing. Testing is an important part of software development for producing quality software. Research on testing aspect-oriented software has been going on for several years, but testing approaches that cover all features of aspect-oriented software have yet to be devised. This dissertation surveys test methods for aspect-oriented software and presents a comparison that reveals the strengths and weaknesses of current methods for testing aspect-oriented software. This comparative overview can be helpful for testers who intend to test aspect-oriented software. The conclusion presents the research contribution of this dissertation and proposes future work.
|
9 |
Search State Extensibility based Learning Framework for Model Checking and Test GenerationChandrasekar, Maheshwar 20 September 2010 (has links)
The increasing design complexity and shrinking feature size of hardware designs have created resource intensive design verification and manufacturing test phases in the product life-cycle of a digital system. On the contrary, time-to-market constraints require faster verification and test phases; otherwise it may result in a buggy design or a defective product. This trend in the semiconductor industry has considerably increased the complexity and importance of Design Verification, Manufacturing Test and Silicon Diagnosis phases of a digital system production life-cycle. In this dissertation, we present a generalized learning framework, which can be customized to the common solving technique for problems in these three phases.
During Design Verification, the conformance of the final design to its specifications is verified. Simulation-based and Formal verification are the two widely known techniques for design verification. Although the former technique can increase confidence in the design, only the latter can ensure the correctness of a design with respect to a given specification. Originally, Design Verification techniques were based on Binary Decision Diagram (BDD) but now such techniques are based on branch-and-bound procedures to avoid space explosion. However, branch-and-bound procedures may explode in time; thus efficient heuristics and intelligent learning techniques are essential. In this dissertation, we propose a novel extensibility relation between search states and a learning framework that aids in identifying non-trivial redundant search states during the branch-and-bound search procedure. Further, we also propose a probability based heuristic to guide our learning technique. First, we utilize this framework in a branch-and-bound based preimage computation engine. Next, we show that it can be used to perform an upper-approximation based state space traversal, which is essential to handle industrial-scale hardware designs. Finally, we propose a simple but elegant image extraction technique that utilizes our learning framework to compute over-approximate image space. This image computation is later leveraged to create an abstraction-refinement based model checking framework.
During Manufacturing Test, test patterns are applied to the fabricated system, in a test environment, to check for the existence of fabrication defects. Such patterns are usually generated by Automatic Test Pattern Generation (ATPG) techniques, which assume certain fault types to model arbitrary defects. The size of the fault list and test set has a major impact on the economics of manufacturing test. Towards this end, we propose a fault collapsing approach to compact the size of the target fault list for ATPG techniques. Further, from the very beginning, ATPG techniques were based on branch-and-bound procedures that model the problem in a Boolean domain. However, ATPG is a problem in the multi-valued domain; thus we propose a multi-valued ATPG framework to utilize this underlying nature. We also employ our learning technique for branch-and-bound procedures in this multi-valued framework.
To improve the yield for high-volume manufacturing, silicon diagnosis identifies a set of candidate defect locations in a faulty chip. Subsequently physical failure analysis - an extremely time consuming step - utilizes these candidates as an aid to locate the defects. To reduce the number of candidates returned to the physical failure analysis step, efficient diagnostic patterns are essential. Towards this objective, we propose an incremental framework that utilizes our learning technique for a branch-and-bound procedure. Further, it learns from the ATPG phase where detection-patterns are generated and utilizes this information during diagnostic-pattern generation. Finally, we present a probability based heuristic for X-filling of detection-patterns with the objective of enhancing the diagnostic resolution of such patterns. We unify these techniques into a framework for test pattern generation with good detection and diagnostic ability. Overall, we propose a learning framework that can speed up design verification, test and diagnosis steps in the life cycle of a hardware system. / Ph. D.
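The core ATPG task described above (find an input pattern on which the faulty circuit's output differs from the fault-free one) can be sketched on a toy circuit. Brute-force enumeration stands in for the branch-and-bound engines with learning that the dissertation develops; the circuit and the stuck-at fault are illustrative assumptions:

```python
# Toy ATPG sketch: find a test pattern that distinguishes a fault-free
# circuit from a copy with a stuck-at-0 fault on an internal net.
# Real ATPG engines use branch-and-bound with learned pruning instead of
# the exhaustive enumeration used here.
from itertools import product

def circuit(a, b, c, stuck_net=None):
    """y = (a AND b) OR c, with an optional stuck-at-0 fault on net n1."""
    n1 = a & b
    if stuck_net == "n1":
        n1 = 0  # stuck-at-0 fault on internal net n1
    return n1 | c

def generate_test(fault):
    """Search for an input vector that propagates the fault to the output."""
    for a, b, c in product([0, 1], repeat=3):
        if circuit(a, b, c) != circuit(a, b, c, stuck_net=fault):
            return (a, b, c)  # detecting pattern found
    return None  # fault is undetectable (redundant)

pattern = generate_test("n1")
# (1, 1, 0): only with a = b = 1 and c = 0 does the stuck net change the output
```

The `None` branch also illustrates why fault lists matter economically: undetectable faults waste search effort, which is what the proposed fault collapsing and learning techniques aim to reduce.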
|
10 |
A new fault model and its application in synthesizing Toffoli networksZhong, Jing 29 October 2008 (has links)
Reversible logic computing is a rapidly developing research area. Both reversible logic synthesis and testing reversible logic circuits are very important issues in this area. In this thesis, we present our work in these two aspects.
We consider a new fault model, namely the crosspoint fault, for reversible circuits. The effects of this kind of fault on the behaviour of the circuits are studied. A randomized test pattern generation algorithm targeting this kind of fault is introduced and analyzed. The relationship between the crosspoint faults and stuck-at faults is also investigated.
The crosspoint fault model is then studied for possible applications in reversible logic synthesis. One type of redundancy exists in Toffoli networks in the form of undetectable multiple crosspoint faults, so redundant circuits can be simplified by deleting those undetectable faults. The testability of multiple crosspoint faults is analyzed in detail. Several important properties are proved and integrated into the simplification algorithm so as to speed up the process.
We also provide an optimized implementation of a Reed-Muller spectra based reversible logic synthesis algorithm. This new implementation uses a compact form of the Reed-Muller spectra table of the specified reversible function to save memory during execution. Experimental results are presented to illustrate the significant improvement of this new implementation.
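The crosspoint fault model can be made concrete with a small sketch: a Toffoli gate is a (controls, target) pair, and a missing-control crosspoint fault is modeled by deleting one control from a gate. The 3-line circuit below is a hypothetical example, not one from the thesis:

```python
# Sketch of the crosspoint fault model on a Toffoli network: a test
# vector for a fault is any input on which the faulty circuit's output
# differs from the fault-free one; if none exists, the crosspoint is
# redundant and the circuit can be simplified by removing it.
from itertools import product

def run(circuit, bits):
    """Simulate a reversible circuit of (controls, target) gates."""
    state = list(bits)
    for controls, target in circuit:
        if all(state[c] for c in controls):
            state[target] ^= 1  # Toffoli gate: flip target if all controls set
    return tuple(state)

def drop_control(circuit, gate, ctrl):
    """Inject a missing-control crosspoint fault into one gate."""
    faulty = [(list(cs), t) for cs, t in circuit]
    faulty[gate][0].remove(ctrl)  # the crosspoint fault
    return [(tuple(cs), t) for cs, t in faulty]

def find_test(circuit, faulty, width):
    """Exhaustively search for a vector that detects the fault."""
    for bits in product([0, 1], repeat=width):
        if run(circuit, bits) != run(faulty, bits):
            return bits
    return None  # undetectable: the crosspoint is redundant

good = [((0, 1), 2), ((0,), 1)]        # Toffoli(0,1 -> 2), then CNOT(0 -> 1)
faulty = drop_control(good, gate=0, ctrl=1)
test = find_test(good, faulty, width=3)
```

Here dropping control 1 from the Toffoli gate is detectable, so that crosspoint is not redundant; a `None` result would mark a candidate for the simplification step the abstract describes.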
|