41

A Method for Recommending Computer-Security Training for Software Developers

Nadeem, Muhammad 12 August 2016 (has links)
Vulnerable code may cause security breaches in software systems, resulting in financial and reputation losses for organizations in addition to the loss of their customers' confidential data. Delivering proper software security training to software developers is key to preventing such breaches. Conventional training methods do not take into account the code written by the developers over time, which makes the training sessions less effective. We propose a method for recommending computer-security training that helps identify focused and narrow areas in which developers need training. The proposed method leverages static analysis techniques, using the vulnerabilities flagged in the source code as a basis to suggest the most appropriate training topics to different software developers. Moreover, it uses public vulnerability repositories as its knowledge base to suggest community-accepted solutions to different security problems. Such mitigation strategies are platform independent, giving further strength to the utility of the system. This research discusses the proposed architecture of the recommender system, case studies to validate the system architecture, tailored algorithms to improve the performance of the system, and a human-subject evaluation conducted to determine the usefulness of the system. Our evaluation suggests that the proposed system successfully retrieves relevant training articles from the public vulnerability repository. The human subjects found these articles suitable for training, and found the proposed recommender system as effective as a commercial tool.
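To make the pipeline concrete, the sketch below shows one way the recommendation step could work, assuming the static analyzer reports findings as CWE identifiers and the knowledge base is an index of community articles keyed by CWE. All names and data are illustrative, not the thesis's actual implementation.

```java
import java.util.*;

// Minimal sketch: count flagged CWE categories per developer, then attach
// training articles for that developer's most frequent weakness categories.
public class TrainingRecommender {

    record Finding(String developer, String cweId) {}

    private final Map<String, List<String>> articlesByCwe;

    TrainingRecommender(Map<String, List<String>> articlesByCwe) {
        this.articlesByCwe = articlesByCwe;
    }

    /** Rank CWE categories per developer by flag frequency, then look up
     *  matching training articles in the (hypothetical) repository index. */
    Map<String, List<String>> recommend(List<Finding> findings, int topN) {
        Map<String, Map<String, Integer>> counts = new HashMap<>();
        for (Finding f : findings) {
            counts.computeIfAbsent(f.developer(), d -> new HashMap<>())
                  .merge(f.cweId(), 1, Integer::sum);
        }
        Map<String, List<String>> result = new HashMap<>();
        counts.forEach((dev, byCwe) -> result.put(dev,
            byCwe.entrySet().stream()
                 .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                 .limit(topN)
                 .flatMap(e -> articlesByCwe
                     .getOrDefault(e.getKey(), List.of()).stream())
                 .toList()));
        return result;
    }

    public static void main(String[] args) {
        var index = Map.of(
            "CWE-89", List.of("Parameterized queries tutorial"),
            "CWE-79", List.of("Output encoding basics"));
        var findings = List.of(
            new Finding("alice", "CWE-89"),
            new Finding("alice", "CWE-89"),
            new Finding("alice", "CWE-79"));
        // Prints: {alice=[Parameterized queries tutorial]}
        System.out.println(new TrainingRecommender(index).recommend(findings, 1));
    }
}
```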
42

KtSpoon: Modelling Kotlin by extending Spoon’s Java Metamodel

Lundholm, Jesper January 2021 (has links)
Kotlin is a relatively new language that has received much attention since its first stable release in February 2016. Despite the fast growth of the language, it lacks libraries that provide an intuitive, typed abstract syntax tree (AST). Recognizing the utility of user-friendly ASTs with support for various analysis and transformation tasks, we make a first contribution towards bringing one to Kotlin with KtSpoon. Kotlin's interoperability with Java enables exploitation of Java's mature ecosystem, and we propose using the Spoon library, with its Java metamodel, as the basis for a model of Kotlin. We show the feasibility of this approach with KtSpoon, which is implemented through small additions to the Spoon metamodel. It consists of a tree builder that produces a Spoon AST from Kotlin source code and a pretty-printer that prints it back to source code. Through an empirical study, we find that KtSpoon can accurately represent the full Kotlin language. We conclude that while it is possible to model the Kotlin language with small modifications to the Spoon metamodel, a partial reimplementation will likely be required for it to be an intuitive model for developers.
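For readers unfamiliar with Spoon, the sketch below shows the workflow KtSpoon builds on: Spoon parses sources into a typed AST (a CtModel) that can be queried, transformed and pretty-printed back to source. The example uses Spoon's standard Java front end; KtSpoon's tree builder produces the same kind of model from Kotlin source. The input path is illustrative.

```java
import spoon.Launcher;
import spoon.reflect.CtModel;
import spoon.reflect.declaration.CtMethod;
import spoon.reflect.visitor.filter.TypeFilter;

public class SpoonDemo {
    public static void main(String[] args) {
        Launcher launcher = new Launcher();
        launcher.addInputResource("src/main/java"); // illustrative input path
        launcher.getEnvironment().setAutoImports(true);
        CtModel model = launcher.buildModel();

        // Query the typed AST: list every method signature in the model.
        for (CtMethod<?> m : model.getElements(new TypeFilter<>(CtMethod.class))) {
            System.out.println(m.getSignature());
        }

        // Print the (possibly transformed) model back out as source code.
        launcher.prettyprint();
    }
}
```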
43

Mining Software Repositories to Support Software Evolution

Kagdi, Huzefa H. 15 July 2008 (has links)
No description available.
44

Detecting Information Leakage in Android Malware Using Static Taint Analysis

Kelkar, Soham P. January 2017 (has links)
No description available.
45

Creating a Unit Testing Application Prototype for JavaScript

Björkman, Mårten, Bergqvist, Jonathan January 2022 (has links)
Testing is an integral part of software development, with the goal of verifying a system's requirements. One of the most commonly used methods for verifying code is unit testing. If done properly, unit testing can guarantee the intended functionality of a code unit. Furthermore, sufficiently tested code gives its dependents a level of trust in the code's abilities. The unit testing process has several flaws: it is difficult to identify which code must be tested, and it is challenging to maintain tests for systems that keep changing. Automatic test case generation tools are a common alternative to writing tests manually; however, these tools often focus on high code coverage while producing flaky and unreliable tests. The goal of this thesis is to develop a prototype of a unit testing application that facilitates unit test creation and maintenance for JavaScript. Using code analysis metrics, the application aims to help developers decide which code units to test. To construct the prototype, a literature study was conducted, focused on unit testing and code metrics. After the prototype was built, a user study, in the form of an interview, was conducted to evaluate its usefulness. Results from the user study show that the application could be beneficial for people with limited experience in unit testing. However, a more thorough study would be needed to determine the effectiveness of the various code analysis metrics.
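As one example of the kind of code analysis metric such a prototype could rely on, the sketch below ranks functions by a crude, token-based approximation of cyclomatic complexity, so that the most branch-heavy units are suggested as test targets first. It is written in Java for illustration; the thesis targets JavaScript, and this heuristic is an assumption, not the thesis's actual metric.

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ComplexityRanker {

    /** Crude cyclomatic-complexity estimate: 1 + number of branch points. */
    static int complexity(String source) {
        int score = 1; // a straight-line function has exactly one path
        for (String keyword : List.of("if", "for", "while", "case", "catch")) {
            Matcher m = Pattern.compile("\\b" + keyword + "\\b").matcher(source);
            while (m.find()) score++;
        }
        for (String op : List.of("&&", "||")) { // short-circuit operators branch too
            int i = 0;
            while ((i = source.indexOf(op, i)) >= 0) { score++; i += op.length(); }
        }
        return score;
    }

    public static void main(String[] args) {
        // Hypothetical function bodies; higher scores get tested first.
        Map<String, String> functions = Map.of(
            "parseDate", "if (s == null || s.isEmpty()) return null; "
                       + "for (var p : parts) { if (p.isBlank()) skip(); }",
            "getName",   "return this.name;");
        functions.forEach((name, body) ->
            System.out.println(name + ": complexity " + complexity(body)));
    }
}
```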
46

Efficient Generation of Complete Dynamic Call Graphs

Ikhlef, Hajar 11 1900 (has links)
Code analysis is used to verify a program's functionality, detect bugs, or improve its performance. Code analysis can be static or dynamic. Approaches combining both are most appropriate for industrial-scale applications, where neither technique alone can provide the desired results. Blended analysis, for example, first applies dynamic analysis to identify problematic code regions and then performs a focused static analysis on those regions. However, existing dynamic analysis tools generate inaccurate or incomplete data, or result in unacceptably slow execution. In this work, we focus on the generation of complete dynamic call graphs, along with the additional information required for blended analysis. We use dynamic instrumentation of Java bytecode to extract information about call sites and object creation sites, and to build the program's dynamic call graph. We demonstrate that it is possible to profile a complete execution of an application with non-trivial running time and to extract all of this information at a reasonable cost. Performance measurements of our profiler on three benchmark suites with diverse workloads place its average overhead between 2.01 and 6.42. Our tool for generating complete dynamic call graphs, named dyko, is also an extensible platform for evaluating new instrumentation approaches. We tested a new adaptive instrumentation technique for object creation sites that adapts the instrumentation to the bytecode of each method, and we measured the impact of call-site resolution on the overall performance of the profiler.
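The sketch below illustrates the runtime half of such a profiler: a bytecode instrumentation pass (e.g., with a library like ASM) would rewrite each call site to invoke a recording probe before the real call, and the recorder accumulates the dynamic call graph as edge counts. The instrumentation pass itself is omitted, and all names are illustrative rather than dyko's actual API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

public final class CallGraphRecorder {
    // caller -> callee edges with invocation counts; concurrent because
    // probes fire from every application thread.
    private static final ConcurrentMap<String, LongAdder> EDGES =
        new ConcurrentHashMap<>();

    /** Probe the instrumenter would insert before each call site. */
    public static void record(String caller, String callee) {
        EDGES.computeIfAbsent(caller + " -> " + callee, k -> new LongAdder())
             .increment();
    }

    /** Dump the dynamic call graph, e.g., from a JVM shutdown hook. */
    public static void dump() {
        EDGES.forEach((edge, n) -> System.out.println(edge + " x" + n.longValue()));
    }

    public static void main(String[] args) {
        // Simulate what instrumented call sites would report at run time.
        record("Main.main", "Service.start");
        record("Service.start", "Db.connect");
        record("Main.main", "Service.start");
        dump();
    }
}
```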
47

Certification of a Tool Chain for Deductive Program Verification

Herms, Paolo 14 January 2013 (has links)
This thesis belongs to the domain of software verification. The goal of verifying software is to ensure that an implementation, a program, satisfies its requirements, its specification. This is especially important for critical computer programs, such as control systems for airplanes, trains and power plants, where a malfunction during operation would have catastrophic consequences. Software requirements can concern safety or functionality. Safety requirements, such as not accessing memory locations outside valid bounds, are often implicit, in the sense that any implementation is expected to be safe. Functional requirements, on the other hand, specify what the program is supposed to do. The specification of a program is often expressed informally, by describing in English or some other natural language the mission of a part of the program code. Program verification is then usually done by manual code review, simulation and extensive testing, but this does not guarantee that all possible execution cases are captured. Deductive program proving is a complete way to ensure the soundness of a program. Here a program, along with its specification, is a mathematical object, and its desired properties are logical theorems to be formally proved. This way, if the underlying logic system is consistent, we can be absolutely sure that the proven property holds for the program in any case. Generation of verification conditions is a technique that helps the programmer prove the properties he wants about his programs: a VCG tool analyses a program and its formal specification and produces a mathematical formula whose validity implies the soundness of the program with respect to its specification. This is particularly interesting when the generated formulas can be proved automatically by external SMT solvers. This approach is based on the works of Hoare and Dijkstra and is well understood and shown correct in theory. Deductive verification tools have nowadays reached a maturity that allows them to be used in industrial contexts where a very high level of assurance is required. But implementations of this approach must deal with all kinds of language features and can therefore become quite complex and contain errors themselves, in the worst case stating that a program is correct even when it is not. This raises the question of the level of confidence granted to these tools themselves. The aim of this thesis is to address this question. We develop, in the Coq system, a certified verification-condition generator (VCG) for ACSL-annotated C programs. Our first contribution is the formalisation of an executable VCG for the Whycert intermediate language, an imperative language with loops, exceptions and recursive functions, together with its soundness proof with respect to the blocking big-step operational semantics of the language. A second contribution is the formalisation of the ACSL logical language and the semantics of ACSL annotations of Compcert's Clight. From the compilation of ACSL-annotated Clight programs to Whycert programs and its semantics-preservation proof, combined with a Whycert axiomatisation of the Compcert memory model, results our main contribution: an integrated certified tool chain for the verification of C programs on top of Compcert. By combining our soundness result with the soundness of the Compcert compiler, we obtain a Coq theorem relating the validity of the generated proof obligations with the safety of the compiled assembly code.
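The following toy example illustrates what verification-condition generation means in the Hoare/Dijkstra style the abstract refers to: for an assignment x := e, the weakest precondition wp(x := e, Q) is Q with x replaced by e, and the VC for the triple {P} x := e {Q} is the implication P ==> wp(x := e, Q). A real VCG like the one certified in this thesis handles a full language and a formal logic; this string-based sketch only shows the shape.

```java
public class ToyVcg {
    /** wp(x := e, Q) = Q[x := e], via naive textual substitution. */
    static String wpAssign(String x, String e, String postcondition) {
        return postcondition.replaceAll("\\b" + x + "\\b", "(" + e + ")");
    }

    public static void main(String[] args) {
        // VC for {n >= 0} n := n + 1 {n > 0}:
        String vc = "n >= 0 ==> " + wpAssign("n", "n + 1", "n > 0");
        // Prints: n >= 0 ==> (n + 1) > 0  -- a valid formula, so the triple holds.
        System.out.println(vc);
    }
}
```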
48

USING MACHINE LEARNING TECHNIQUES TO IMPROVE STATIC CODE ANALYSIS TOOLS USEFULNESS

Enas Ahmad Alikhashashneh (7013450) 16 October 2019 (has links)
This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as much as possible, the cost of manually inspecting the large number of false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes a particular SCA tool nor depends on the specific programming language of the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a well-known synthetic code base, the Juliet test suite. From this evaluation, we concluded that SCA tools report many false positive warnings that require manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate true positive, false positive, or false negative warnings. The datasets were used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest. Lastly, using this classifier and an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed a random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.
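As an illustration of the classification step, the sketch below uses the Weka library's RandomForest as a stand-in for the RF classifier described above, assuming warnings have been encoded as feature vectors in ARFF files whose class attribute distinguishes true from false positives. The file names, feature encoding and class-index convention are all assumptions, not the dissertation's actual setup.

```java
import weka.classifiers.trees.RandomForest;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WarningTriage {
    public static void main(String[] args) throws Exception {
        // Labeled warnings (hypothetical file): features + {TP, FP} class.
        Instances train = DataSource.read("warnings-train.arff");
        train.setClassIndex(train.numAttributes() - 1);

        RandomForest rf = new RandomForest();
        rf.buildClassifier(train);

        // New, unlabeled warnings to rank for manual inspection.
        Instances unseen = DataSource.read("warnings-new.arff");
        unseen.setClassIndex(unseen.numAttributes() - 1);

        for (int i = 0; i < unseen.numInstances(); i++) {
            Instance w = unseen.instance(i);
            // Assumes class index 0 is the true-positive label; inspection
            // would start with the warnings scored highest here.
            double pTrue = rf.distributionForInstance(w)[0];
            System.out.printf("warning #%d: p(true-positive)=%.2f%n", i, pTrue);
        }
    }
}
```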
49

Enabling Timing Analysis of Complex Embedded Software Systems

Kraft, Johan January 2010 (has links)
Cars, trains, trucks, telecom networks and industrial robots are examples of products relying on complex embedded software systems running on embedded computers. Such systems may consist of millions of lines of program code developed by hundreds of engineers over many years, often decades. Over the long life-cycle of such systems, the main part of the product development cost is typically not the initial development but software maintenance, i.e., improvements and corrections of defects, over the years. A major share of the maintenance cost is the verification of the system after changes have been applied, which often requires a huge amount of testing. Today's techniques are not sufficient, however, as defects are often found post-release, by the customers. This area is therefore of high relevance for industry. Complex embedded systems often control machinery where timing is crucial for accuracy and safety. Such systems therefore have important timing requirements, such as maximum response times. However, when maintaining complex embedded software systems, it is difficult to predict how changes may impact the system's run-time behavior and timing, e.g., response times. Analytical and formal methods for timing analysis exist but are often hard to apply in practice to complex embedded systems, for several reasons. As a result, the industrial practice for deciding the suitability of a proposed change, with respect to its run-time impact, is to rely on the subjective judgment of experienced developers and architects. This is a risky and inefficient trial-and-error approach, which may waste large amounts of person-hours on implementing unsuitable software designs with potential timing or performance problems. Such problems generally cannot be detected until late stages of testing, when the updated software system can be tested at system level under realistic conditions, and even then they are easy to miss. If products are released containing software with latent timing errors, the result may be huge costs, such as car recalls, or even accidents. Even when such problems are found through testing, they necessitate design changes late in the development project, which cause delays and increase costs. This thesis presents an approach for impact analysis with respect to run-time behavior, such as timing and performance, for complex embedded systems. The impact analysis is performed through optimizing simulation, where the simulation models are automatically generated from the system implementation. This approach allows the consequences of proposed designs, for new or modified features, to be predicted by prototyping the change in the simulation model at a high level of abstraction, e.g., by increasing the execution time of a particular task. Thereby, designs leading to timing, performance, or resource-usage problems can be identified early, before implementation, and late redesigns are avoided, which improves development efficiency and predictability as well as software quality. The contributions presented in this thesis are within four areas related to simulation-based analysis of complex embedded systems: (1) simulation and simulation optimization techniques, (2) automated extraction of simulation models from source code, (3) methods for validation of such simulation models, and (4) run-time recording techniques for model extraction, impact analysis and model validation purposes. Several tools have been developed during this work, of which two are being commercialized in the spin-off company Percepio AB. Note that the Katana approach, in area (2), is the subject of a recent patent application (patent pending).
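The following toy simulation conveys the impact-analysis idea on a much smaller scale: model the system as periodic tasks, simulate fixed-priority preemptive scheduling in unit time steps, and compare the worst observed response times before and after a prototyped change (here, task B's execution time grows from 2 to 4 time units). The thesis's models are extracted from real source code and are far richer; all numbers and scheduling details here are illustrative.

```java
import java.util.*;

public class ImpactSim {
    record Task(String name, int period, int exec, int prio) {}

    /** Simulate fixed-priority preemptive scheduling; return the worst
     *  observed response time per task over the given horizon. */
    static Map<String, Integer> simulate(List<Task> tasks, int horizon) {
        int[] remaining = new int[tasks.size()];
        int[] released = new int[tasks.size()];
        Map<String, Integer> worst = new HashMap<>();
        for (int t = 0; t < horizon; t++) {
            for (int i = 0; i < tasks.size(); i++)
                if (t % tasks.get(i).period() == 0) { // new job released
                    remaining[i] = tasks.get(i).exec();
                    released[i] = t;
                }
            int run = -1; // pick the highest-priority ready task
            for (int i = 0; i < tasks.size(); i++)
                if (remaining[i] > 0 && (run < 0
                        || tasks.get(i).prio() > tasks.get(run).prio()))
                    run = i;
            if (run >= 0) {
                remaining[run]--;
                if (remaining[run] == 0) // job finished this step
                    worst.merge(tasks.get(run).name(),
                                t + 1 - released[run], Integer::max);
            }
        }
        return worst;
    }

    public static void main(String[] args) {
        var before = List.of(new Task("A", 10, 3, 2), new Task("B", 20, 2, 1));
        // Prototyped change: task B's execution time grows from 2 to 4.
        var after  = List.of(new Task("A", 10, 3, 2), new Task("B", 20, 4, 1));
        System.out.println("before: " + simulate(before, 200)); // B worst = 5
        System.out.println("after:  " + simulate(after, 200));  // B worst = 7
    }
}
```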
50

Design and Testing of a Call Reporting System

Prelgauskas, Justinas 10 July 2008 (has links)
This work consists of three major parts. The first, the engineering part, covers the analysis and design of a call reporting system (codename: "PharmaCODE"). Here we present the essentials of the business environment, requirements and competitor analysis, as well as the design details and the main architectural decisions. The second part concerns testing and ensuring system quality, mainly by means of static source code analysis tools and methods; we describe the tools used and present the main results of the source code analysis. Finally, in the third part, we go deeper into static source code analysis and improve one of the analysis rules. With the abundance of evolving web-based applications today, security is gaining ever more importance. Most of those systems have, and depend on, back-end databases, yet web-based applications are vulnerable to SQL injection attacks. We present a technique for addressing this problem using secure-coding guidelines and the .NET Framework's static code analysis methods for enforcing those guidelines. This approach lets developers discover vulnerabilities in their code early in the development process. We provide the research and realization of an improved code analysis rule, which can automatically discover SQL injection vulnerabilities in MSIL code. Our rule detected more potential SQL injection security flaws than its predecessor, a code analysis rule designed by Microsoft.
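The secure-coding guideline that such a rule enforces is illustrated below with JDBC; the thesis itself targets the .NET Framework and MSIL, but the vulnerable and safe patterns are the same. Table and column names are illustrative.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {
    // VULNERABLE (CWE-89): user input becomes part of the SQL text, so input
    // like  ' OR '1'='1  changes the query's meaning. String concatenation
    // into SQL is exactly the pattern a static analysis rule flags.
    static ResultSet findUserUnsafe(Connection c, String name) throws SQLException {
        String sql = "SELECT * FROM users WHERE name = '" + name + "'";
        return c.createStatement().executeQuery(sql);
    }

    // SAFE: a parameterized query keeps the SQL structure fixed; the driver
    // transmits the input strictly as data. (Real code would also manage the
    // statement and result set with try-with-resources.)
    static ResultSet findUser(Connection c, String name) throws SQLException {
        PreparedStatement ps = c.prepareStatement(
            "SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```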
