71

DoSSO - Automatic Detector of Shared Objects in Multithreaded Java Programs

Tolubaeva, Munara 01 March 2009 (has links) (PDF)
In this thesis, we present a simple and efficient automated analysis tool called DoSSO that detects shared objects in multithreaded Java programs. DoSSO reports only the shared objects that are modified by at least one thread. Based on this tool, we propose a new approach to developing concurrent software in which programmers first implement the system without considering synchronization issues and then use an appropriate locking mechanism only for the objects reported by DoSSO. To evaluate the applicability of DoSSO, we have conducted a case study on a distributed and concurrent system with graphical user interfaces. The case study results show that DoSSO is able to identify objects that become shared among explicitly defined threads and event threads, as well as objects that become shared through RMI.
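The workflow described in this abstract can be pictured with a minimal, hypothetical Java sketch (class and field names are illustrative, not taken from the thesis): a field written by one thread and read by another is the kind of object a DoSSO-style report would flag, and the lock is introduced only after the report.

    public class Counter {
        private int hits = 0;                      // written by a worker thread, read by main -> shared
        private final Object lock = new Object();  // added only because the field was reported as shared

        void recordHit() {                         // invoked from the worker thread
            synchronized (lock) { hits++; }
        }

        int currentHits() {                        // invoked from the main (reporting) thread
            synchronized (lock) { return hits; }
        }

        public static void main(String[] args) throws InterruptedException {
            Counter c = new Counter();
            Thread worker = new Thread(() -> { for (int i = 0; i < 1000; i++) c.recordHit(); });
            worker.start();
            worker.join();
            System.out.println(c.currentHits());   // prints 1000 once the worker has finished
        }
    }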
72

Towards a Gold Standard for Points-to Analysis

Gutzmann, Tobias January 2010 (has links)
<p>Points-to analysis is a static program analysis that computes reference informationfor a given input program. It serves as input to many client applicationsin optimizing compilers and software engineering tools. Unfortunately, the Gold Standard – i.e., the exact reference information for a given program– is impossible to compute automatically for all but trivial cases, and thus, little can been said about the accuracy of points-to analysis.</p><p>This thesis aims at paving the way towards a Gold Standard for points-to analysis. For this, we discuss theoretical implications and practical challenges that occur when comparing results obtained by different points-to analyses. We also show ways to improve points-to analysis by different means, e.g., combining different analysis implementations, and a novel approach to path sensitivity.</p><p>We support our theories with a number of experiments.</p>
73

Systematic techniques for efficiently checking Software Product Lines

Kim, Chang Hwan Peter 25 February 2014 (has links)
A Software Product Line (SPL) is a family of related programs, each of which is defined by a combination of features. By developing related programs together, an SPL simultaneously reduces programming effort and satisfies multiple sets of requirements. Testing an SPL efficiently is challenging because a property must be checked for all the programs in the SPL, the number of which can be exponential in the number of features. In this dissertation, we present a suite of complementary static and dynamic techniques for efficient testing and runtime monitoring of SPLs, which can be divided into two categories. The first prunes programs, termed configurations, that are irrelevant to the property being tested. More specifically, for a given test, a static analysis identifies features that can influence the test outcome, so that the test needs to be run only on programs that include these features. A dynamic analysis counterpart also eliminates configurations that do not have to be tested, but does so by checking a simpler property and can be faster and more scalable. In addition, for runtime monitoring, a static analysis identifies configurations that can violate a safety property, and only these configurations need to be monitored. When no configurations can be pruned, either by design of the test or due to ineffectiveness of program analyses, runtime similarity between configurations, arising from design similarity between configurations of a product line, is exploited. In particular, shared execution runs all the configurations together, executing bytecode instructions common to the configurations just once. Deferred execution improves on shared execution by allowing multiple memory locations to be treated as a single memory location, which can increase the amount of sharing for object-oriented programs and for programs using arrays. The techniques have been evaluated, and the results demonstrate that they can be effective and can advance the idea that, despite the feature combinatorics of an SPL, its structure can be exploited by automated analyses to make testing more efficient.
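A toy Java sketch (feature names are hypothetical) shows both the combinatorics and the pruning opportunity the abstract refers to: three boolean features give 2^3 configurations, yet a test whose outcome depends on only one feature needs far fewer runs.

    public class SplExample {
        // One "program" of the product line is a choice of values for all feature flags.
        static String greet(boolean logging, boolean caching, boolean encryption) {
            String msg = "hello";
            if (logging) msg = "[log] " + msg;   // the only feature this output depends on
            return msg;
        }

        public static void main(String[] args) {
            int configurations = 0;
            for (boolean log : new boolean[]{false, true})
                for (boolean cache : new boolean[]{false, true})
                    for (boolean enc : new boolean[]{false, true}) {
                        configurations++;
                        String expected = log ? "[log] hello" : "hello";
                        if (!greet(log, cache, enc).equals(expected))
                            throw new AssertionError("unexpected output");
                    }
            // 8 configurations exist, but running the test with logging on and off would suffice.
            System.out.println(configurations + " configurations checked");
        }
    }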
74

Toward better server-side Web security

Son, Sooel 25 June 2014 (has links)
Server-side Web applications are constantly exposed to new threats as new technologies emerge. For instance, forced browsing attacks exploit incomplete access-control enforcement to perform security-sensitive operations (such as database writes without proper permission) by invoking unintended program entry points. SQL command injection attacks (SQLCIA) have evolved into NoSQL command injection attacks targeting the increasingly popular NoSQL databases. They may expose internal data, bypass authentication, or violate security and privacy properties. Preventing such Web attacks demands defensive programming techniques that require repetitive and error-prone manual coding and auditing. This dissertation presents three methods for improving the security of server-side Web applications against forced browsing and SQL/NoSQL command injection attacks. The first method finds incomplete access-control enforcement. It statically identifies access-control logic that mediates security-sensitive operations and finds missing access-control checks without an a priori specification of an access-control policy. Second, we design, implement and evaluate a static analysis and program transformation tool that finds access-control errors of omission and produces candidate repairs. Our third method dynamically identifies SQL/NoSQL command injection attacks. It computes shadow values to track user-injected data and then parses the original database query in tandem with its shadow value to identify whether user-injected parts serve as code. Remediating Web vulnerabilities and blocking Web attacks are essential for improving Web application security. Automated security tools help developers remediate Web vulnerabilities and block Web attacks while minimizing error-prone human factors. This dissertation describes automated tools implementing the proposed ideas and explores their applications to real-world server-side Web applications. Automated security tools are effective for identifying server-side Web application security holes and a promising direction toward better server-side Web security.
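The injection class targeted here can be illustrated with a short Java sketch (table and column names are made up; this shows the vulnerability and the standard defensive fix, not the dissertation's shadow-value machinery):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class QueryExamples {
        // Vulnerable: with name = "' OR '1'='1", the injected text is parsed as SQL code.
        static void vulnerable(Connection conn, String name) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
            }
        }

        // Defensive: the user-supplied value is bound as a parameter and can only act as data.
        static void parameterized(Connection conn, String name) throws SQLException {
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT * FROM users WHERE name = ?")) {
                ps.setString(1, name);
                ps.executeQuery();
            }
        }
    }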
75

Automatic Extraction of Program Models for Formal Software Verification

de Carvalho Gomes, Pedro January 2015 (has links)
In this thesis we present a study of the generation of abstract program models from programs written in real-world programming languages, for use in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties.

The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as ConFlEx and evaluate the tool on a number of test cases.

The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs.

The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization." To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem to a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented as the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe and then verifying the extracted nets with a standard Coloured Petri Net analysis tool.
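The synchronization property verified in the third part can be made concrete with a small Java example (the class is hypothetical): if every thread calling awaitReady eventually enters the synchronized block, the property requires that every such thread also eventually leaves it, i.e., no thread waits forever.

    public class Gate {
        private boolean ready = false;

        synchronized void awaitReady() throws InterruptedException {
            while (!ready) {
                wait();                  // waits on the condition variable of this monitor
            }
            // reaching this point is the "eventually exits the synchronization" part of the property
        }

        synchronized void open() {
            ready = true;
            notifyAll();                 // without this call, waiting threads would never exit
        }

        public static void main(String[] args) throws InterruptedException {
            Gate gate = new Gate();
            Thread t = new Thread(() -> {
                try { gate.awaitReady(); } catch (InterruptedException ignored) { }
                System.out.println("worker released");
            });
            t.start();
            gate.open();                 // the property holds here: the worker terminates
            t.join();
        }
    }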
76

A Concurrent IFDS Dataflow Analysis Algorithm Using Actors

Rodriguez, Jonathan David January 2010 (has links)
There has recently been a resurgence in interest in techniques for effective programming of multi-core computers. Most programmers find general-purpose concurrent programming to be extremely difficult. This difficulty severely limits the number of applications that currently benefit from multi-core computers. There already exist many concurrent solutions for the class of regular applications, which include various algorithms for linear algebra. For the class of irregular applications, which operate on dynamic and pointer- and graph-based structures, efficient concurrent solutions have so far remained elusive. Dataflow analysis applications, which are often found in compilers and program analysis tools, have received particularly little attention with regard to execution on multi-core machines. Operating on the theory that the Actor model, which structures computations as systems of asynchronously-communicating entities, is a more appropriate method for representing irregular algorithms than the shared-memory model, this work presents a concurrent Actor-based formulation of the IFDS, or Interprocedural Finite Distributive Subset, dataflow analysis algorithm. The implementation of this algorithm is done using the Scala language and its Actors library. This algorithm achieves significant speedup on multi-core machines without using any optimistic execution. This work contributes to Actor research by showing how the Actor model can be practically applied to a dataflow analysis problem. This work contributes to static analysis research by showing how a dataflow analysis algorithm can effectively make use of multi-core machines, allowing the possibility of faster and more precise analyses.
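A minimal Java sketch (rather than the Scala Actors library used in the thesis) of the actor idea the abstract builds on: each actor owns its state and receives asynchronous messages through a mailbox, so no locks are needed on that state. The class and message strings are illustrative only.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class TinyActor {
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        private final Thread worker = new Thread(this::run);
        private int processed = 0;                 // state owned exclusively by this actor

        void start() { worker.start(); }
        void send(String msg) { mailbox.add(msg); }   // asynchronous: the sender never waits for processing

        private void run() {
            try {
                while (true) {
                    String msg = mailbox.take();
                    if (msg.equals("stop")) break;
                    processed++;                   // no locks needed: only this thread touches the state
                }
                System.out.println("processed " + processed + " messages");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) {
            TinyActor a = new TinyActor();
            a.start();
            a.send("fact: x may be uninitialized");
            a.send("fact: y may be uninitialized");
            a.send("stop");
        }
    }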
77

Research and development of static code analysis tools

Zonys, Andrius 26 August 2013 (has links)
This thesis discusses static and dynamic code analysis, their advantages and disadvantages, and the types of analysis tools and their purpose. It analyzes the "Gendarme", "Cppcheck" and "FindBugs" static code analysis tools, with a more detailed analysis of the freely distributed static code analysis tool "Gendarme". Gaps found in the existing "Gendarme" rules are presented, along with proposed improvements that should help detect more errors and improve program performance, stability, security and code readability. New rules are also introduced that should help detect more redundant, illogical code that the compiler does not report; these rules should help simplify program code, thereby improving its readability, and uncover more weaknesses that could be exploited to affect the behaviour of applications. An experimental study was carried out in which programs were analyzed with both the improved and the original "Gendarme" static code analysis tool. The experimental results and conclusions are presented.
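To make the kind of finding concrete, here is a small sketch (in Java rather than the .NET languages Gendarme targets) of redundant, compiler-accepted code of the sort such rules aim to flag; the class is purely illustrative.

    public class RedundantCode {
        private int value;

        void update(int value) {
            value = value;                       // self-assignment: the field is never actually updated
            if (this.value != this.value) {      // always false for an int field: dead, illogical branch
                System.out.println("never printed");
            }
        }
    }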
78

Certification of static analysis in many-sorted first-order logic

Cornilleau, Pierre-Emmanuel 25 March 2013 (has links) (PDF)
Static program analysis is a core technology for both verifying and finding errors in programs, but most static analysers are complex pieces of software that are not themselves free of errors. A static analysis formalised as an abstract interpreter can be proved sound; however, such proofs are significantly harder to carry out on the actual implementation of an analyser. To alleviate this problem we propose to generate Verification Conditions (VCs, formulae valid only if the results of the analyser are correct) and to discharge them using an Automated Theorem Prover (ATP). We generate formulae in Many-Sorted First-Order Logic (MSFOL), a logic that has been successfully used in deductive program verification. MSFOL is expressive enough to describe the results of complex analyses and to formalise the operational semantics of object-oriented languages. Using the same logic for both tasks allows us to prove the soundness of the VC generator using deductive verification tools. To ensure that VCs can be automatically discharged for complex analyses of the heap, we introduce a VC calculus that produces formulae belonging to a decidable fragment of MSFOL. Furthermore, to be able to certify different analyses with the same calculus, we describe a family of analyses with a parametric concretisation function and instrumentation of the semantics. To improve the reliability of ATPs, we also studied the result certification of Satisfiability Modulo Theories solvers, a family of ATPs dedicated to MSFOL. We propose a modular proof system and a modular proof verifier, programmed and proved correct in Coq, that rely on exchangeable verifiers for each of the underlying theories.
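As an illustrative, made-up example of such a verification condition (not one from the thesis): if an analyser claims that after the assignment y := x + 1 the value of y lies in [1, 10] whenever x lies in [0, 9], the corresponding MSFOL formula to discharge could read

    \forall x \colon \mathsf{Int}.\; (0 \le x \wedge x \le 9) \rightarrow (1 \le x + 1 \wedge x + 1 \le 10)

which is valid, so an SMT solver accepts it; an unsound analyser result would instead yield an invalid VC that the solver rejects.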
79

FAULT LINKS: IDENTIFYING MODULE AND FAULT TYPES AND THEIR RELATIONSHIP

Michael, Inies Raphael Chemmannoor 01 January 2004 (has links)
The presented research resulted in a generic component taxonomy, a generic code-fault taxonomy, and an approach to tailoring the generic taxonomies into domain-specific as well as project-specific taxonomies. Also, a means to identify fault links was developed. Fault links represent relationships between the types of code faults and the types of components being developed or modified. For example, a fault link has been found to exist between Controller modules (which form the backbone of any software through their decision-making characteristics) and Control/Logic faults (such as unreachable code). The existence of such fault links can be used to guide code reviews, walkthroughs, and testing of new code development, as well as code maintenance. It can also be used to direct fault seeding. The results of these methods have been validated. Finally, we also verified the usefulness of the obtained fault links through an experiment conducted with graduate students. The results were encouraging.
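As an illustration (the class is hypothetical), a Control/Logic fault of the unreachable-code kind in a Controller-type module, which a review guided by the fault link above would specifically look for:

    public class OrderController {
        String route(int status) {
            if (status >= 0) {
                return "process";
            } else if (status == 0) {    // unreachable: status == 0 is already covered by the branch above
                return "retry";
            }
            return "reject";
        }
    }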
80

Simulation driven design : An iterative approach for mechanical engineers with focus on modal analysis

Erlandsson, Andreas January 2015 (has links)
This 15 hp thesis was carried out at Halmstad University in collaboration with Saab Dynamics in Linköping. Saab Dynamics is a company operating in the defence industry, where competition is tough. This necessitates new ways to increase efficiency in the company, which is the basis for this thesis. Saab Dynamics wants to introduce simulation driven design. Since the engineers at Saab Dynamics have little experience of simulation, a user methodology with clear guidelines was required. Due to lack of time, the company chose to assign the task to students, which resulted in this thesis. The aim of the thesis is to develop a methodology for mechanical design in which the designer uses FE analysis early in the design process to develop the structures' mechanical properties. The methodology should be seen as a guide and a source of information that enables an iterative approach with FE analysis, which is the basis of simulation driven design. This iterative process can lead to reduced lead times and cost savings in the design process. Because of the scale of the project, the work was carried out by three students from the mechanical engineering programme, each with an individual focus area, between December 2014 and May 2015. The work followed a self-developed method: the project began with theoretical studies of the topic to gain an understanding of what has been done and what research exists in simulation driven design. An empirical study was then conducted at Saab Dynamics in Linköping in order to increase understanding of what the design process looks like. Throughout, sustainable development and ethical aspects have been taken into account. Much time has been devoted to investigating the possibilities and limitations of 3D Experience, Dassault Systèmes' latest platform for 3D modelling and simulation software; 3D Experience is the software the methodology is based on. This thesis has resulted in a methodology for simulation at the designer level, which the project team, in consultation with the supervisor at Saab Dynamics, adapted to the company's requirements.
