441 |
Hardware/Software Interface Assurance with Conformance Checking. Lei, Li. 02 June 2015.
Hardware/Software (HW/SW) interfaces are pervasive in modern computer systems. Most HW/SW interfaces are implemented by devices and their device drivers. Unfortunately, HW/SW interfaces are unreliable and insecure due to their intrinsic complexity and error-prone nature. Moreover, assuring HW/SW interface reliability and security is challenging. First, at the post-silicon validation stage, HW/SW integration validation is largely an ad-hoc and time-consuming process. Second, at the system deployment stage, transient hardware failures and malicious attacks make HW/SW interfaces vulnerable even after intensive testing and validation. In this dissertation, we present a comprehensive solution for HW/SW interface assurance over the system life cycle. This solution is composed of two major parts. First, our solution provides a systematic HW/SW co-validation framework which validates hardware and software together. Second, based on the co-validation framework, we design two schemes for assuring HW/SW interfaces over the system life cycle: (1) post-silicon HW/SW co-validation at the post-silicon validation stage; (2) HW/SW co-monitoring at the system deployment stage. Our HW/SW co-validation framework employs a key technique, conformance checking, which checks the interface conformance between the device and its reference model. Furthermore, property checking is carried out to verify system properties over the interactions between the reference model and the driver. Based on the conformance between the reference model and the device, properties that hold on the reference model/driver interface also hold on the device/driver interface. Conformance checking discovers inconsistencies between the device and its reference model, thereby validating the device interface implementations on both sides. Property checking detects both device and driver violations of HW/SW interface protocols. By detecting device and driver errors, our co-validation approach provides a systematic and efficient way to validate HW/SW interfaces. We developed two software tools which implement the two assurance schemes: DCC (Device Conformance Checker), a co-validation framework for post-silicon HW/SW integration validation; and CoMon (HW/SW Co-monitoring), a runtime verification framework for detecting bugs and malicious attacks across HW/SW interfaces. The two tools led to the discovery of 42 bugs in four industry hardware devices, their device drivers, and their reference models. These results demonstrate the significance of our approach for HW/SW interface assurance in industry applications.
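A minimal sketch of the conformance-checking loop the abstract describes, assuming a hypothetical common interface for the device and its reference model (the dissertation's DCC tool is far richer; everything named here is illustrative):

```java
import java.util.List;

// Minimal conformance-checking loop: drive the silicon device and its
// reference model with the same interface commands and flag divergences.
interface InterfaceModel {
    int step(int command);          // returns the observable response
}

final class ConformanceChecker {
    private final InterfaceModel device;
    private final InterfaceModel referenceModel;

    ConformanceChecker(InterfaceModel device, InterfaceModel referenceModel) {
        this.device = device;
        this.referenceModel = referenceModel;
    }

    /** Replays a command trace on both sides; returns index of first divergence, or -1. */
    int check(List<Integer> trace) {
        for (int i = 0; i < trace.size(); i++) {
            int cmd = trace.get(i);
            int fromDevice = device.step(cmd);
            int fromModel = referenceModel.step(cmd);
            if (fromDevice != fromModel) {
                System.out.printf("divergence at step %d: cmd=%d device=%d model=%d%n",
                        i, cmd, fromDevice, fromModel);
                return i;
            }
        }
        return -1; // conformant on this trace
    }
}
```

A divergence implicates either side: the device may be buggy or the reference model incomplete, which is why the approach validates both implementations at once.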
|
442 |
Design Of Incentive Compatible Mechanisms For Ticket Allocation In Software Maintenance Services. Subbian, Karthik. 12 1900.
Software maintenance is becoming more and more challenging due to rapidly changing customer needs and technologies and the need for highly skilled labor. Many problems that existed a decade ago persist or have even grown. In this context, organizations find it difficult to match an engineer's interests and skills to a particular customer problem, which makes it hard to keep selfish, rational engineers motivated and productive. In this thesis we use game theory and mechanism design to model the interactions among such selfish engineers and to motivate truth revelation through incentive-based allocation schemes for software maintenance problems, especially the ticket allocation problem.
Ticket allocation, or problem allocation, is a key problem in the software maintenance process. Tickets are usually allocated by the manager or the technical lead, who is normally guided by the complexity assessment of the ticket as provided by the maintenance engineers entrusted with fixing the problem. The rationality of the maintenance engineers could induce them to report the complexity untruthfully so as to increase their payoffs. This leads to non-optimal ticket allocation.
In this thesis we first address the problem of eliciting ticket complexities truthfully from each individual maintenance engineer, using a mechanism design approach. In particular, we model the problem as one of designing an incentive compatible mechanism and offer two possible solutions. The first, TA-DSIC, is a Dominant Strategy Incentive Compatible (DSIC) solution; the second, TA-BIC, is a Bayesian Incentive Compatible mechanism. We show that the proposed mechanisms outperform conventional allocation protocols in the context of a representative software maintenance organization.
Next, we address the incentive compatibility issue for the group ticket allocation problem. A ticket is often allocated to more than one engineer, for example due to a tight customer delivery (time) deadline. The decision on such an allocation is generally taken by the lead, based on customer deadlines and a guided complexity assessment from each maintenance engineer. The allocation should ensure that every individual reveals the truth in the proposed group (or coalition) and has an incentive to participate in the game both individually and in the coalition. We formulate this problem as a normal-form game and propose three mechanisms: (1) Division of Labor, (2) Extended Second Price, and (3) Greedy Division of Labor. We show that the proposed mechanisms are DSIC and discuss their rationality properties.
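The abstract does not spell out the TA-DSIC mechanism itself; the toy below only illustrates the second-price idea that its Extended Second Price mechanism builds on. Paying the winner the second-lowest reported cost makes truthful reporting a dominant strategy, which is the essence of DSIC. All names and values are invented:

```java
import java.util.Map;

// Toy second-price allocation: each engineer bids the cost (complexity) of
// fixing a ticket; the lowest bidder wins but is paid the second-lowest bid.
// Under this payment rule, bidding one's true cost is a dominant strategy.
final class SecondPriceTicketAuction {
    record Result(String winner, int payment) {}

    static Result allocate(Map<String, Integer> bids) {
        String winner = null;
        int lowest = Integer.MAX_VALUE, secondLowest = Integer.MAX_VALUE;
        for (var e : bids.entrySet()) {
            int bid = e.getValue();
            if (bid < lowest) {
                secondLowest = lowest;
                lowest = bid;
                winner = e.getKey();
            } else if (bid < secondLowest) {
                secondLowest = bid;
            }
        }
        return new Result(winner, secondLowest); // winner's payoff: payment - true cost
    }

    public static void main(String[] args) {
        var result = allocate(Map.of("alice", 3, "bob", 5, "carol", 8));
        System.out.println(result); // Result[winner=alice, payment=5]
    }
}
```

Underbidding risks winning at a payment below one's true cost; overbidding risks losing a profitable ticket. Truth-telling is therefore optimal regardless of what others report.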
|
443 |
Efficient Online Path Profiling. Vaswani, Kapil. 10 1900.
Most dynamic program analysis techniques, such as profile-driven compiler optimizations, software testing, and runtime property checking, infer program properties by profiling one or more executions of a program. Unfortunately, program profiling does not come for free. For example, even the most efficient techniques for profiling acyclic, intra-procedural paths can slow down program execution by a factor of 2. In this thesis, we propose techniques that significantly lower the overheads of profiling paths, enabling the use of path-based dynamic analyses in cost-sensitive environments.
Preferential path profiling (PPP) is a novel software-only path profiling scheme that efficiently profiles given subsets of paths, which we refer to as interesting paths. The algorithm is based on the observation that most consumers of path profiles are only interested in profiling a small set of paths known a priori. Our algorithm can be viewed as a generalization of the Ball-Larus path profiling algorithm. Whereas the Ball-Larus algorithm assigns weights to the edges of a given CFG such that the sum of the weights of the edges along each path through the CFG is unique, our algorithm assigns weights to the edges such that the sum of the weights along the edges of interesting paths is unique. Furthermore, our algorithm attempts to achieve a minimal and compact encoding of the interesting paths; such an encoding significantly reduces the overheads of path profiling by eliminating expensive hash operations during profiling. Interestingly, we find that both the Ball-Larus algorithm and PPP are essentially a form of arithmetic coding. We use this connection to prove that the numbering produced by PPP is optimal.
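A compact sketch of the Ball-Larus edge numbering that PPP generalizes: weights are assigned so that summing them along any entry-to-exit path of a DAG yields a unique integer in [0, numPaths). The CFG encoding below is illustrative (vertex ids assumed topologically sorted):

```java
import java.util.*;

// Ball-Larus edge numbering on a DAG: assign each edge a weight so that the
// sum of weights along every entry-to-exit path is a unique path identifier.
// PPP generalizes this numbering to a chosen subset of "interesting" paths.
final class BallLarus {
    static Map<String, Integer> numberEdges(Map<Integer, List<Integer>> succs, int exit) {
        Map<Integer, Long> numPaths = new HashMap<>();
        Map<String, Integer> weight = new LinkedHashMap<>();
        // process vertices in reverse topological order (ids assumed topo-sorted)
        List<Integer> vertices = new ArrayList<>(succs.keySet());
        vertices.sort(Comparator.reverseOrder());
        numPaths.put(exit, 1L);
        for (int v : vertices) {
            if (v == exit) continue;
            long total = 0;
            for (int w : succs.get(v)) {
                weight.put(v + "->" + w, (int) total); // offset = paths already numbered
                total += numPaths.get(w);
            }
            numPaths.put(v, total);
        }
        return weight;
    }

    public static void main(String[] args) {
        // diamond CFG: 0 -> {1,2}, 1 -> {3}, 2 -> {3}, 3 = exit
        Map<Integer, List<Integer>> cfg = Map.of(
                0, List.of(1, 2), 1, List.of(3), 2, List.of(3), 3, List.of());
        System.out.println(numberEdges(cfg, 3)); // {2->3=0, 1->3=0, 0->1=0, 0->2=1}
    }
}
```

Here path 0-1-3 sums to 0 and path 0-2-3 sums to 1, so a single counter increment per edge suffices to identify the taken path at exit.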
We also propose a programmable, non-intrusive hardware path profiler (HPP). The hardware profiler consists of a path detector that detects paths by monitoring the stream of retiring branch instructions emanating from the processor pipeline. The path detector can be programmed to detect various types of paths and to track architectural events that occur along paths. The second component of the hardware profiling infrastructure is a Hot Path Table (HPT), which collects accurate hot path profiles.
Our experimental evaluation shows that PPP reduces the overhead of profiling paths to 15% on average (with a maximum of 26%). The algorithm can easily be extended to profile inter-procedural paths at minimal additional overhead (26% on average). We modeled HPP using a cycle-accurate superscalar processor simulator and found that HPP generates accurate path profiles at extremely low overhead (0.6% on average) with a moderate hardware budget. We also evaluated the use of PPP and HPP in realistic profiling scenarios. We find that the profiles generated by HPP can effectively replace expensive profiles used in profile-driven optimizations, and that even well-tested programs tend to exercise a large number of untested paths in the field, emphasizing the need for efficient profiling schemes that can be deployed in production environments.
|
444 |
Improving systematic constraint-driven analysis using incremental and parallel techniques. Siddiqui, Junaid Haroon. 25 February 2013.
This dissertation introduces Pikse, a novel methodology for more effective and efficient checking of code conformance to specifications using parallel and incremental techniques, describes a prototype implementation that embodies the methodology, and presents experiments that demonstrate its efficacy. Pikse has at its foundation a well-studied approach -- systematic constraint-driven analysis -- that has two common forms: (1) constraint-based testing -- where logical constraints that define desired inputs and expected program behavior are used for test input generation and correctness checking, say to perform black-box testing; and (2) symbolic execution -- where a systematic exploration of (bounded) program paths using symbolic input values is used to check properties of program behavior, say to perform white-box testing.
Our insight at the heart of Pikse is that for certain path-based analyses, (1) the state of a run of the analysis can be encoded compactly, which provides a basis for parallel techniques that have low communication overhead; and (2) iterations performed by the analysis have commonalities, which provides the basis for incremental techniques that re-use results of computations common to successive iterations.
We embody our insight in a suite of parallel and incremental techniques that enable more effective and efficient constraint-driven analysis. Moreover, our techniques work in tandem, for example combining black-box constraint-based input generation with white-box symbolic execution. We present a series of experiments to evaluate our techniques. Experimental results show that Pikse enables significant speedups over the previous state of the art.
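Pikse's own algorithms are not given in the abstract; as a toy stand-in for systematic path-based exploration, the sketch below enumerates the distinct branch-decision paths of a small function over a bounded input domain and records the compact decision string per path, the kind of small, encodable analysis state the abstract says enables low-overhead parallelization. Everything here is invented for illustration:

```java
import java.util.*;

// Toy systematic path exploration: enumerate the distinct branch-decision
// paths a small function takes over a bounded input domain. Real tools
// explore paths symbolically; this brute-force version only illustrates
// how a path can be captured as a compact decision string.
final class PathExplorer {
    // subject under analysis: records each branch decision as '1'/'0'
    static String classify(int x, int y, StringBuilder path) {
        if (x > 0) { path.append('1'); } else { path.append('0'); }
        if (x > 0 && y > x) { path.append('1'); return "ascending"; }
        path.append('0');
        return "other";
    }

    public static void main(String[] args) {
        Map<String, int[]> firstWitness = new TreeMap<>();
        for (int x = -2; x <= 2; x++)
            for (int y = -2; y <= 2; y++) {
                StringBuilder path = new StringBuilder();
                classify(x, y, path);
                firstWitness.putIfAbsent(path.toString(), new int[]{x, y});
            }
        firstWitness.forEach((p, w) ->
                System.out.println("path " + p + " first hit by x=" + w[0] + ", y=" + w[1]));
    }
}
```

Because each explored path reduces to a short string plus a witness input, partitioning the input domain across workers requires exchanging almost no state, which is the property the abstract's parallel techniques exploit.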
|
445 |
Test Modeling of Dynamic Variable Systems using Feature Petri Nets. Püschel, Georg; Seidl, Christoph; Neufert, Mathias; Gorzel, André; Aßmann, Uwe. 08 November 2013.
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions, resulting in static variability. Furthermore, due to their dependence on sensors and connectivity, mobile applications have to adapt their behavior at runtime, resulting in dynamic variability. Software engineers nonetheless need to assure the quality of a mobile application despite this large amount of variability; in our approach, this is done by means of model-based testing (i.e., the generation of test cases from models). Recent test metamodel concepts cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach with an example translator application.
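A minimal sketch of the feature-guarded Petri-net idea, not the paper's metamodel: a transition fires only when its input places hold tokens and its guarding feature is enabled in the current configuration, so runtime reconfiguration changes reachable behavior. Names and structure are illustrative:

```java
import java.util.*;

// Minimal feature-guarded Petri net: a transition may fire only when its
// input places hold tokens AND its guarding feature is enabled in the
// current configuration. Enabling features at runtime changes which
// transitions are fireable, mimicking dynamic variability.
final class FeaturePetriNet {
    record Transition(String name, String guardFeature, List<String> in, List<String> out) {}

    final Map<String, Integer> marking = new HashMap<>();
    final Set<String> enabledFeatures = new HashSet<>();

    boolean canFire(Transition t) {
        return enabledFeatures.contains(t.guardFeature())
                && t.in().stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
    }

    void fire(Transition t) {
        if (!canFire(t)) throw new IllegalStateException("not enabled: " + t.name());
        t.in().forEach(p -> marking.merge(p, -1, Integer::sum));
        t.out().forEach(p -> marking.merge(p, 1, Integer::sum));
    }

    public static void main(String[] args) {
        FeaturePetriNet net = new FeaturePetriNet();
        net.marking.put("offline", 1);
        Transition sync = new Transition("sync", "Connectivity",
                List.of("offline"), List.of("online"));
        System.out.println(net.canFire(sync));   // false: feature not yet enabled
        net.enabledFeatures.add("Connectivity"); // runtime reconfiguration
        net.fire(sync);
        System.out.println(net.marking);         // e.g. {offline=0, online=1}
    }
}
```

A test generator walking such a net can derive test cases per configuration, covering both configuration-dependent behavior and the reconfiguration steps themselves.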
|
446 |
Duomenų bazių našumo tyrimo įrankis / Database performance audit tool. Greibus, Justinas. 13 August 2010.
The analysis of database performance is a common challenge in present-day software testing. Several methodologies for analyzing database performance exist, but tools based on them are usually available only to closed communities. This master's thesis investigates a database performance audit methodology based on the principles of software load and stress testing. To identify database performance issues, the values of performance parameters are recorded during the execution of automated test scenarios. The resulting tool gives the user the ability to run database performance scenarios, re-execute historical scenarios, and compare the results of separate executions. Generated reports with detailed data facilitate the analysis of database performance.
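The thesis's tool is not described at the code level; the sketch below shows the underlying load-testing idea in miniature: run a query workload concurrently, record per-call latencies, and report statistics that two executions of the same scenario can be compared on. The query is a stub standing in for a real database call so the example stays self-contained:

```java
import java.util.*;
import java.util.concurrent.*;

// Miniature load-test harness: execute a "query" concurrently, record
// per-call latencies, and report simple statistics for comparing runs.
final class LoadTester {
    static long[] run(Runnable query, int threads, int callsPerThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<long[]>> futures = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            futures.add(pool.submit(() -> {
                long[] latencies = new long[callsPerThread];
                for (int i = 0; i < callsPerThread; i++) {
                    long start = System.nanoTime();
                    query.run();
                    latencies[i] = System.nanoTime() - start;
                }
                return latencies;
            }));
        }
        long[] all = new long[threads * callsPerThread];
        int k = 0;
        for (Future<long[]> f : futures)
            for (long l : f.get()) all[k++] = l;
        pool.shutdown();
        Arrays.sort(all);
        return all;
    }

    public static void main(String[] args) throws Exception {
        Runnable stubQuery = () -> { // stand-in for a real JDBC call
            try { Thread.sleep(2); } catch (InterruptedException ignored) {}
        };
        long[] lat = run(stubQuery, 4, 50);
        System.out.printf("median=%.2fms p95=%.2fms%n",
                lat[lat.length / 2] / 1e6, lat[(int) (lat.length * 0.95)] / 1e6);
    }
}
```

Persisting the sorted latency array per run is enough to support the thesis's re-execution-and-compare workflow: two historical runs of a scenario can be diffed percentile by percentile.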
|
447 |
Dervish: a new GUI for grammar-based test generation. Ly-Gagnon, David. 20 April 2010.
Because software testing is a repetitive and time-intensive task, a practical solution is to turn to automation. Test automation, however, requires programming skills. Testers, who typically know a lot about the application under test, often do not have the programming skills to automate the testing effort. Grammar-based test generation (GBTG) uses context-free grammars to generate strings in the language described by the grammar. Given a grammar, a GBTG algorithm can produce test cases for the application under test. Since testers typically have limited programming skills and are unlikely to develop the grammars needed for practical testing, the power of GBTG is unavailable to many testing practitioners.
To help address this problem, we have developed Dervish, a graphical user interface which allows testers to use the power of GBTG. Our new tool allows testers to modify parts of a grammar, generate test cases, and visualize generation trees. To demonstrate the benefits of Dervish, we present the results of three case studies.
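A tiny illustration of the GBTG core that a tool like Dervish wraps in a GUI: random expansion of a context-free grammar with a depth cutoff so generation terminates. The grammar and expansion policy are invented for the example:

```java
import java.util.*;

// Core of grammar-based test generation: expand a start symbol by picking
// random productions until only terminals remain. A depth cutoff forces
// the first (shortest) production so generation terminates.
final class GrammarGen {
    static final Map<String, List<List<String>>> GRAMMAR = Map.of(
            "<expr>", List.of(
                    List.of("<num>"),
                    List.of("<expr>", "+", "<expr>"),
                    List.of("(", "<expr>", ")")),
            "<num>", List.of(List.of("1"), List.of("2"), List.of("42")));

    static final Random RNG = new Random(7);

    static String generate(String symbol, int depth) {
        List<List<String>> productions = GRAMMAR.get(symbol);
        if (productions == null) return symbol;             // terminal
        List<String> chosen = depth <= 0 ? productions.get(0)
                : productions.get(RNG.nextInt(productions.size()));
        StringBuilder out = new StringBuilder();
        for (String s : chosen) out.append(generate(s, depth - 1));
        return out.toString();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++)
            System.out.println(generate("<expr>", 4)); // e.g. (42+1), 2, 1+2
    }
}
```

Each generated string is a candidate test input; a GUI over this loop lets a tester tweak productions and depth limits, and inspect the generation tree, without writing the generator themselves.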
|
448 |
Combining over- and under-approximating program analyses for automatic software testing. Csallner, Christoph. 07 July 2008.
This dissertation attacks the well-known problem of path-imprecision in static program analysis. Our starting point is an existing static program analysis that over-approximates the execution paths of the analyzed program. We then make this over-approximating program analysis more precise for automatic testing in an object-oriented programming language. We achieve this by combining the over-approximating program analysis with usage-observing and under-approximating analyses. More specifically, we make the following contributions.
We present a technique to eliminate language-level unsound bug warnings produced by an execution-path-over-approximating analysis for object-oriented programs that is based on the weakest precondition calculus. Our technique post-processes the results of the over-approximating analysis by solving the produced constraint systems and generating and executing concrete test-cases that satisfy the given constraint systems. Only test-cases that confirm the results of the over-approximating static analysis are presented to the user. This technique has the important side-benefit of making the results of a weakest-precondition based static analysis easier to understand for human consumers. We show examples from our experiments that visually demonstrate the difference between hundreds of complicated constraints and a simple corresponding JUnit test-case.
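The abstract contrasts hundreds of complicated constraints with a simple JUnit test-case; the block below is an invented example of what such a generated confirmation test might look like (JUnit 4 style, hypothetical testee), not actual output of Check 'n' Crash:

```java
import static org.junit.Assert.fail;
import org.junit.Test;

// Invented example of a generated confirmation test: instead of showing the
// user a large weakest-precondition constraint system, the tool emits a
// concrete input that satisfies the constraints and demonstrably triggers
// the warned-about exception.
public class GeneratedCrashTest {

    static class Calculator {                       // hypothetical testee
        static int divide(int a, int b) { return a / b; }
    }

    @Test
    public void confirmsDivisionByZeroWarning() {
        try {
            Calculator.divide(10, 0);               // concrete witness: b == 0
            fail("no exception thrown; the static warning was spurious");
        } catch (ArithmeticException expected) {
            // warning confirmed: this path really fails at runtime
        }
    }
}
```

Only warnings for which such a concrete crashing witness can be generated and executed survive, which is how the technique filters out language-level unsound reports.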
Besides eliminating language-level unsound bug warnings, we present an additional technique that also addresses user-level unsound bug warnings. This technique pre-processes the testee with a dynamic analysis that takes advantage of actual user data. It annotates the testee with the knowledge obtained from this pre-processing step and thereby provides guidance for the over-approximating analysis.
We also present an improvement to dynamic invariant detection for object-oriented programming languages. Previous approaches do not take behavioral subtyping into account and therefore may produce inconsistent results, which can throw off automated analyses such as the ones we are performing for bug-finding.
Finally, we address the problem of unwanted dependencies between test-cases caused by global state. We present two techniques for efficiently re-initializing global state between test-case executions and discuss their trade-offs.
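The dissertation's two re-initialization techniques are not detailed in the abstract; one common tactic in this space is to snapshot and restore static fields via reflection between test executions, sketched here with illustrative names (note the restore is shallow):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

// Snapshot the non-final static fields of a class once, then restore them
// between test-case executions so later tests see pristine global state.
// This is a shallow restore; mutable objects reachable from the fields
// would also need copying.
final class StaticStateResetter {
    static class Globals { static int counter = 0; }  // illustrative global state

    private final Map<Field, Object> snapshot = new HashMap<>();

    StaticStateResetter(Class<?> target) throws IllegalAccessException {
        for (Field f : target.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()) && !Modifier.isFinal(f.getModifiers())) {
                f.setAccessible(true);
                snapshot.put(f, f.get(null));          // record pristine value
            }
        }
    }

    void restore() throws IllegalAccessException {
        for (Map.Entry<Field, Object> e : snapshot.entrySet())
            e.getKey().set(null, e.getValue());
    }

    public static void main(String[] args) throws Exception {
        StaticStateResetter resetter = new StaticStateResetter(Globals.class);
        Globals.counter = 99;                          // a test mutates the global
        resetter.restore();                            // re-initialize between tests
        System.out.println(Globals.counter);           // prints 0 again
    }
}
```

The trade-off the abstract alludes to is typical of such schemes: reflection-based restoring is cheap but shallow, while re-loading classes in a fresh class loader is thorough but slow.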
We have implemented the above techniques in the JCrasher, Check 'n' Crash, and DSD-Crasher tools and present initial experience in using them for automated bug finding in real-world Java programs.
|
449 |
Uma abordagem para a verificação do comportamento excepcional a partir de regras de design e testes / An approach to verifying exceptional behavior based on design rules and tests. Sales Junior, Ricardo José. 01 February 2013.
Checking the conformity between implementation and design rules in a system is an important activity for ensuring that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system by defining which elements are responsible for catching exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms for defining and verifying design rules related to the exception handling policy of applications. This work proposes a practical approach to preserving the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of design rules for exception handling in systems developed in Java or AspectJ. To support this approach, we developed a tool called VITTAE (Verification and Information Tool to Analyze Exceptions), which extends the JUnit framework and automates the testing of exceptional design rules. We conducted a case study whose primary objective was to evaluate the effectiveness of the proposed approach on a software product line. In addition, we conducted an experiment comparing the proposed approach with an approach based on a tool called JUnitE, which also tests exception handling code using JUnit tests. The results show how exception handling design rules evolve across different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
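VITTAE's actual rule syntax is not given in the abstract; the sketch below only shows the kind of JUnit-based check implied: asserting that a lower layer's exception is caught and translated by the designated handler element. All class names are invented:

```java
import static org.junit.Assert.fail;
import org.junit.Test;

// An exceptional design rule expressed as a JUnit test: the rule "DAO-layer
// SQLExceptions must be translated into RepositoryException by the
// repository layer" is checked by exercising the repository and asserting
// on the exception type that escapes. All names are invented.
public class ExceptionFlowRuleTest {

    static class RepositoryException extends RuntimeException {
        RepositoryException(Throwable cause) { super(cause); }
    }

    static class UserRepository {            // the element designated to catch
        Object findUser(String id) {
            try {
                throw new java.sql.SQLException("connection refused"); // simulated DAO failure
            } catch (java.sql.SQLException e) {
                throw new RepositoryException(e); // rule-conformant translation
            }
        }
    }

    @Test
    public void sqlExceptionsMustNotEscapeTheRepositoryLayer() {
        try {
            new UserRepository().findUser("42");
            fail("expected a translated exception");
        } catch (RepositoryException expected) {
            // conforms to the design rule
        } catch (RuntimeException other) {
            fail("design rule violated: untranslated exception " + other);
        }
    }
}
```

Running such rule tests against each system version is what lets the approach track how the exception handling policy evolves, and drifts, over time.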
|
450 |
POPT: uma abordagem de ensino de programação orientada a problema e testes / POPT: a problem- and test-oriented approach to teaching programming. Lustosa Neto, Vicente Pires. 05 August 2013.
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications, guided by the definition of test cases (in a table-like manner). This work presents POPT and TestBoot, a tool developed to support the methodology. To evaluate the approach, a case study and a controlled experiment (which adopted a Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The results show that, compared to a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice the number of (professor-defined) test cases compared to non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
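The abstract describes students pinning down ill-defined specifications as tables of test cases before coding; a minimal table-driven JUnit sketch of that practice follows, with an invented toy problem (not material from the thesis):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// POPT-style practice in miniature: before implementing, the student
// resolves an ill-defined spec ("classify an exam score") as a table of
// input/expected-output rows, then codes against that table. The problem
// and rows here are invented for illustration.
public class GradeReportTest {

    static String classify(int score) {        // the solution under development
        if (score < 0 || score > 100) return "invalid";
        return score >= 60 ? "pass" : "fail";
    }

    @Test
    public void tableOfCases() {
        Object[][] table = {
                // score, expected   (each row resolves one ambiguity in the spec)
                {   0,    "fail"    },
                {  59,    "fail"    },
                {  60,    "pass"    },   // boundary: is 60 passing? decided: yes
                { 100,    "pass"    },
                { 101,    "invalid" },   // out-of-range input: decided: reject
        };
        for (Object[] row : table)
            assertEquals("score=" + row[0], row[1], classify((int) row[0]));
    }
}
```

Writing the table first forces the boundary and error-input decisions that ill-defined problem statements leave open, which is what the study credits for the higher external quality of POPT students' first submissions.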
|