  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Formal specification and verification of a JVM and its bytecode verifier

Liu, Hanbing 28 August 2008 (has links)
Not available / text
52

Automatic Extraction of Program Models for Formal Software Verification

de Carvalho Gomes, Pedro January 2015 (has links)
In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages, for use in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties.

The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, properties established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as the tool ConFlEx, and evaluate it on a number of test cases.

The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs.

The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization". To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem to a reachability problem on Coloured Petri Nets: we define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented as the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe, then verifying the extracted nets with a standard Coloured Petri Net analysis tool.
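The synchronization property studied in the third part can be illustrated with a minimal sketch, not taken from the thesis: a small Java program in which every thread synchronizes on the same monitor under the standard guarded-wait idiom (`wait` in a loop, `notifyAll` on state change). All names and the turn-taking scheme are invented for illustration.

```java
// Illustrative sketch (not the thesis's SyncTask/STaVe machinery): threads
// synchronize on a shared monitor with a condition ("turn" counter). If
// every thread eventually enters its synchronized block, the notifyAll()
// calls ensure every thread eventually exits its synchronization.
public class CondSync {
    private static final Object lock = new Object();
    static int turn = 0;

    static void await(int myTurn) throws InterruptedException {
        synchronized (lock) {            // enter synchronization block
            while (turn != myTurn) {     // guarded-wait idiom
                lock.wait();
            }
            turn++;                      // let the next thread proceed
            lock.notifyAll();            // wake all waiting threads
        }                                // exit synchronization block
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { try { await(1); } catch (InterruptedException e) { } });
        Thread t2 = new Thread(() -> { try { await(2); } catch (InterruptedException e) { } });
        t1.start();
        t2.start();
        await(0);                        // main thread takes turn 0
        t1.join();
        t2.join();
        System.out.println("all threads exited synchronization");
    }
}
```

In a terminating run every thread passes through its block exactly once; the thesis's technique proves this kind of eventual-exit property by reducing the termination of the extracted SyncTask program to Petri-net reachability.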
53

The application of machine learning methods in software verification and validation

Phuc, Nguyen Vinh, 1955- 04 January 2011 (has links)
Machine learning methods have been employed in data mining to discover useful, valid, and beneficial patterns for applications in domains such as business, medicine, agriculture, census data, and software engineering. Focusing on software engineering, this report presents an investigation of machine learning techniques that have been used to predict programming faults during the verification and validation of software. Artifacts such as program execution traces, test case coverage information, and data pertaining to execution failures are of special interest for addressing the following concerns: completeness of test suite coverage; automation of test oracles to reduce human intervention in software testing; detection of faults causing program failures; and defect prediction in software. A survey of the literature on software verification and validation also revealed a novel concept designed to improve black-box testing using Category-Partition for test specifications and test suites. The report includes two experiments, using data extracted from source code available from the website (15), that demonstrate the application of a decision tree (C4.5) and the multilayer perceptron to fault prediction, and an example that shows a potential candidate for the Category-Partition scheme. The results of several research projects show that the application of machine learning in software testing has achieved varying degrees of success in effectively assisting software developers to improve their test strategy in the verification and validation of software systems. / text
54

Software testing tools and productivity

Moschoglou, Georgios Moschos January 1996 (has links)
Testing statistics suggest that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer like test design, and only 5% of their education is devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software with no tool, and testing software with a command-line based testing tool - with respect to the length of time and number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - no tool, the command-line based testing tool, and a GUI interactive tool with added functionality - with respect to the length of time and number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department. / Department of Computer Science
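The statement-coverage targets in both experiments (80% and 95%) can be made concrete with a minimal sketch, unrelated to the specific tools in the study: instrumented code records which statements executed, and coverage is the fraction of statements hit.

```java
import java.util.BitSet;

// Minimal sketch (not the tools from the study): a statement-coverage
// tracker. Instrumented code calls hit(i) for statement i; coverage()
// reports the fraction of statements the test suite executed.
public class Coverage {
    private final BitSet hits;
    private final int statements;

    public Coverage(int statements) {
        this.statements = statements;
        this.hits = new BitSet(statements);
    }

    public void hit(int stmt) {
        hits.set(stmt);                              // record execution once
    }

    public double coverage() {                       // covered / total
        return (double) hits.cardinality() / statements;
    }

    public static void main(String[] args) {
        Coverage c = new Coverage(10);
        for (int i = 0; i < 8; i++) c.hit(i);        // tests cover 8 of 10
        System.out.println(c.coverage() >= 0.80);    // prints "true"
    }
}
```

Duplicate hits of the same statement do not inflate the ratio, which is why a `BitSet` rather than a counter is the natural representation.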
55

A kernel to support computer-aided verification of embedded software

Grobler, Leon D 03 1900 (has links)
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2006. / Formal methods, such as model checking, have the potential to improve the reliability of software. Abstract models of systems are subjected to formal analysis, often showing subtle defects not discovered by traditional testing.
56

Proposta de metodologia para verificação e validação software de equipamentos eletromédicos / Proposed methodology for verification and validation of medical electrical equipment

Viviani, Carlos Alessandro Bassi 19 August 2018 (has links)
Orientador: Vera Lúcia da Silveira Nantes Button / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Today a great part of electromedical equipment (EME) has some kind of control performed by software; this control can be restricted to one or more subsystems of the equipment, or be total. Since software became a key factor in EME control, it represents an intrinsic risk and must be analysed with the same rigour and criteria as the equipment's hardware. The rigorous analysis of such equipment is concentrated on the functioning of the hardware itself and is not associated with the software control systems. A significant amount of critical software is developed by small enterprises, mainly in the medical device industry. The primary goal of this study was to propose a methodology to organize the EME control software test process, and to define all the documentation necessary to manage this test process, based on the IEEE 829:2008 standard. As a secondary goal, the processes and agents involved in EME certification in Brazil and worldwide were reported. Several EME malfunction problems, mainly due to errors found in their control software, were found in the literature. The Brazilian EME regulators, and how certification, commercialization and post-market surveillance of these products are carried out, were also reported; to document the problems found regarding EME, the concept of recall was presented, along with how this process occurs in Brazil and worldwide. The proposed methodology, which prioritizes systematic testing, can be used for the verification and validation of the control software of any kind of EME, and is divided into two fundamental parts: the test process and the generation of documents. The methodology was applied to a commercial hospital heart monitor in order to validate it, and thereby to guarantee that the equipment complied with the manufacturer's requirements and with the standard it is subject to; the equipment could thus be considered safe for clinical use from the software-safety point of view. The data and technical specifications necessary for the test process were obtained from the EME user manual, from the manufacturer's technical specifications, and from the specific EME standard, subject to the compulsory certification established by ANVISA Resolution no. 32. As a result of this research, a set of documents was produced, based on IEEE 829:2008, and used from test planning through to the recording of results: 1) Test Plan - a detailed model of the workflow during the test process; 2) Test Design Specification - refines the approach presented in the Test Plan and identifies the features and characteristics to be tested by the design and its associated tests; 3) Test Case Specification - defines the test cases, including input data, expected results, actions and general conditions for test execution; 4) Test Procedure Specification - specifies the steps for executing a set of test cases; 5) Test Log - the chronological record of relevant details of test execution; 6) Test Incident Report - documents events that occurred during testing and required later analysis; and 7) Test Summary Report - summarizes the results of the test activities associated with one or more test design specifications and presents evaluations based on those results. This documentation constitutes a macro model that can be applied to any EME, and it was used to verify a hospital heart monitor (case study). The functional tests applied to the heart monitor's embedded systems were considered effective under various simulated conditions of use, both normal and critical, including conditions that could present some risk to users of the equipment. This study resulted in an important contribution to the organization of the process of verification and validation of EME control software. The application of the proposed methodology to the heart monitor under test performed its verification and validation from the point of view of control-software quality: since no defects and only one light kind of failure were observed, the monitor was qualified as fit for safe use. / Mestrado / Engenharia Biomedica / Mestre em Engenharia Elétrica e de Computação
57

Performance evaluation based on data from code reviews

Andrej, Sekáč January 2016 (has links)
Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control, but the stored data can also be used to evaluate the software development process. Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to limitations in the size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on performance. In this research we propose features and also analyse their relevance to development performance evaluation. Methods. This thesis uses Radial Basis Function networks as the regression algorithm for the performance evaluation approximation, and Metric Based Regularisation as the semi-supervised learning method. For the analysis of the feature set and goodness of fit we use statistical tools together with manual analysis. Results. The semi-supervised learning method achieved an accuracy similar to supervised versions of the algorithm. The feature analysis showed that there is a significant negative correlation between the performance evaluation and three other features. A manual verification of learned models on unlabelled data achieved 73.68% accuracy. Conclusions. We have not managed to prove that the semi-supervised learning method used performs better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to the change size, and the amount of code lines modified in later parts of development are relevant to the performance evaluation task with high probability. The achieved model accuracy of close to 75% leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the performance evaluation approximation.
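The regression machinery named in the abstract, a Radial Basis Function network, can be sketched in a few lines; the weights, centers and width below are invented example values, not the thesis's learned model.

```java
// Illustrative sketch (not the thesis implementation): prediction of a
// tiny RBF network, y = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma^2)).
// The weights, centers and sigma are made-up example values.
public class RbfNet {
    static double basis(double[] x, double[] c, double sigma) {
        double d2 = 0.0;
        for (int i = 0; i < x.length; i++) {
            double diff = x[i] - c[i];
            d2 += diff * diff;                          // squared distance to center
        }
        return Math.exp(-d2 / (2 * sigma * sigma));     // Gaussian activation
    }

    static double predict(double[] x, double[][] centers, double[] w, double sigma) {
        double y = 0.0;
        for (int i = 0; i < centers.length; i++) {
            y += w[i] * basis(x, centers[i], sigma);    // weighted sum of bases
        }
        return y;
    }

    public static void main(String[] args) {
        double[][] centers = { {0.0, 0.0}, {1.0, 1.0} };
        double[] w = { 0.5, 0.5 };
        // A point lying exactly on a center fully activates that basis.
        System.out.println(predict(new double[]{0.0, 0.0}, centers, w, 1.0));
    }
}
```

Training fixes the weights (and possibly centers) from labelled review data; in the semi-supervised setting, the unlabelled reviews additionally constrain the function through the regularisation term.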
58

System for firmware verification

Nilsson, Daniel January 2009 (has links)
Software verification is an important part of software development and the most practical way to do this today is through dynamic testing. This report explains concepts connected to verification and testing and also presents the testing-framework Trassel developed during the writing of this report. Constructing domain specific languages and tools by using an existing language as a starting ground can be a good strategy for solving certain problems; this was tried with Trassel, where the description-language for writing test-cases was written as a DSL using Python as the host-language.
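The embedded-DSL strategy described above (Trassel hosts its test-case language in Python) can be sketched in any host language; the following is a hedged illustration in Java with invented names, showing how a fluent builder turns host-language method calls into a small test-description language with no custom parser.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of the embedded-DSL idea (Trassel itself hosts its
// DSL in Python; the names here are invented): test cases are declared
// with a fluent builder in the host language instead of a custom parser.
public class TestDsl {
    static final class Case {
        final String name;
        final Supplier<Boolean> check;
        Case(String name, Supplier<Boolean> check) {
            this.name = name;
            this.check = check;
        }
    }

    private final List<Case> cases = new ArrayList<>();

    TestDsl test(String name, Supplier<Boolean> check) {   // one DSL "statement"
        cases.add(new Case(name, check));
        return this;                                        // enables chaining
    }

    int runAll() {                                          // returns #failures
        int failures = 0;
        for (Case c : cases) {
            if (!c.check()) failures++;
        }
        return failures;
    }

    public static void main(String[] args) {
        int failed = new TestDsl()
            .test("addition", () -> 1 + 1 == 2)
            .test("string length", () -> "abc".length() == 3)
            .runAll();
        System.out.println(failed + " failing test(s)");    // prints "0 failing test(s)"
    }
}
```

The pay-off of hosting the DSL in an existing language, as the report argues, is that the full host language (here, arbitrary Java expressions inside each `Supplier`) remains available inside test descriptions.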
59

Vérification d’analyses statiques pour langages de bas niveau / Verified static analyzes for low-level languages

Laporte, Vincent 25 November 2015 (has links)
Static analysis makes it possible to study the possible behaviours of programs without running them. Static analysers may be used, for instance, to guarantee that the execution of a program cannot result in a run-time error. Such analysis tools are themselves programs: they may have bugs. To increase the confidence in the results of an analysis, we study in this thesis how the implementation of static analysers can be formally proved correct. In particular, we build abstract interpreters within the Coq proof assistant and prove them correct: we formally establish that analysis results characterize all possible executions of the analysed program. Where relevant, these abstract interpreters are integrated into the formally verified CompCert compiler; this guarantees that safety properties proved on source code also hold for the corresponding compiled code. We focus on the analysis of programs written in low-level languages, i.e., languages which feature little or no abstractions (variables, functions, objects, types…), or abstractions that the programmer is allowed to break. This hampers the task of a static analyser, which cannot rely on these abstractions to yield precise results. We discuss in particular how to automatically recover the control-flow graph of binary self-modifying programs, and how to automatically prove that a program written in C (in which pointer arithmetic is pervasive) cannot produce a run-time error.
60

Testing and maintenance of graphical user interfaces / Test et maintenance des interfaces graphiques

Lelli leitao, Valeria 19 November 2015 (has links)
The software engineering community pays special attention to the quality and the reliability of software systems. Software testing techniques have been developed to find errors in code. Software quality criteria and measurement techniques have also been developed to detect error-prone code. In this thesis, we argue that the same attention has to be paid to the quality and reliability of GUIs, from a software engineering point of view. We make two contributions on this topic. First, GUIs can be affected by errors stemming from development mistakes. The first contribution of this thesis is a fault model that identifies and classifies GUI faults. We show that GUI faults are diverse and require different testing techniques to be detected. Second, like any code artifact, GUI code should be analysed statically to detect implementation defects and design smells. As the second contribution, we focus on design smells that can affect GUIs specifically. We identify and characterize a new type of design smell, called Blob listener, in reference to the Blob anti-pattern. It occurs when a GUI listener, which gathers events to treat and transform into commands, can produce more than one command. We propose a systematic static code analysis procedure that searches for Blob listeners, which we implement in a tool called InspectorGuidget. The experiments we conducted exhibit positive results regarding the ability of InspectorGuidget to detect Blob listeners. To counteract the use of Blob listeners, we propose good coding practices regarding the development of GUI listeners.