31

Généralisation de l’analyse de performance décrémentale vers l’analyse différentielle / Generalization of the decremental performance analysis to differential analysis

Bendifallah, Zakaria 17 September 2015 (has links)
A crucial step in the process of application performance analysis is the accurate detection of program bottlenecks. A bottleneck is any event that contributes to extending the execution time. Determining its cause is important for application developers, as it enables them to detect code design and generation flaws. Bottleneck detection is becoming a difficult art. Techniques such as event counts, which easily found bottlenecks in the past, have become less effective because of the increasing complexity of modern microprocessors and the introduction of parallelism at several levels. Consequently, there is a real need for new analysis approaches to face these challenges. Our work focuses on performance analysis and bottleneck detection of compute-intensive loops in scientific applications. We work on Decan, a performance analysis and bottleneck detection tool, which offers an interesting and promising approach called Decremental Analysis. The tool, which operates at the binary level, is based on the idea of performing controlled modifications on the instructions of a loop and comparing the new version (called a variant) to the original one. The goal is to assess the cost of specific events, and thus the existence or not of bottlenecks. Our first contribution consists of extending Decan with new variants that we designed, tested, and validated. Based on these variants, we developed analysis methods which we used to characterize hot loops and find their bottlenecks. We later integrated the tool into a performance analysis methodology (Pamda) which coordinates several analysis tools in order to achieve more efficient application performance analysis. Second, we introduce several improvements to the Decan tool. Techniques developed to preserve the control flow of the modified programs allowed the tool to be used on real applications instead of extracted kernels. Support for parallel programs (thread- and process-based) was also added. Finally, since our tool primarily relies on execution time as the main metric for its analysis, we study the opportunity of also using other hardware-generated events, through a study of their stability, precision, and overhead.
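
For intuition only, a minimal Python sketch of the decremental idea: time an original loop against a variant with a suspected bottleneck removed, and attribute the difference to that event. The loop and the transformation here are invented for illustration; Decan itself rewrites binary instructions, not Python.

```python
import random
import time

def timed(fn, *args, repeats=5):
    """Return the best-of-N wall-clock time for fn(*args)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def original(values, indices):
    # Original loop: accumulation through an indirect (gather) access.
    total = 0.0
    for i in indices:
        total += values[i] * 1.0000001
    return total

def variant_no_gather(values, indices):
    # "Decremental" variant: the indirect access is replaced by a
    # sequential one, removing the suspected memory bottleneck while
    # keeping the arithmetic work identical.
    total = 0.0
    for i in range(len(indices)):
        total += values[i] * 1.0000001
    return total

if __name__ == "__main__":
    n = 1_000_000
    values = [random.random() for _ in range(n)]
    indices = list(range(n))
    random.shuffle(indices)  # random gather pattern

    t_orig = timed(original, values, indices)
    t_var = timed(variant_no_gather, values, indices)
    # If the variant is much faster, the removed accesses were a bottleneck.
    print(f"original {t_orig:.3f}s, variant {t_var:.3f}s, "
          f"memory-access share ~{100 * (t_orig - t_var) / t_orig:.0f}%")
```

The effect is muted in an interpreted language; on compiled loop nests, which are Decan's target, the gap between variants is far sharper.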
32

Using goal-driven assistants for software visualization

Ndiaye, Alassane 11 1900 (has links)
Using software visualization to accomplish certain tasks, such as design defect detection, can prove tedious. Users first need to find and configure a visualization tool that is adequate for representing the data they want to examine. Then, all too often, they are forced to manually navigate the software system in order to complete their task. We propose a simpler and more efficient approach that moves the emphasis from configuring a tool and manually navigating the system to writing a definition of the work to accomplish. Our goal-driven assistant then generates the best visualization tool and guides users through the navigation of the task. Our approach consists of three main components. The first is a domain-specific language (DSL) to describe analysis tasks. The second is a language to define visualizations as customized implementations of the model-view-controller (MVC) pattern. The last is a generation process used to go from the analysis task to the visualization. By removing the need to configure a visualization tool and by guiding the navigation of the system, we believe we have made a tool that is simpler and faster to use than its conventional counterparts.
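
The thesis's DSL is not quoted in this abstract, so the following Python sketch is purely hypothetical: it only illustrates the general shape of a goal-driven generation step, mapping a task description to a visualization by rules. All names (`AnalysisTask`, `pick_view`) are invented.

```python
from dataclasses import dataclass

@dataclass
class AnalysisTask:
    # Hypothetical task description, loosely mirroring a DSL statement
    # such as "find classes with more than 50 methods".
    goal: str          # e.g. "design-defect-detection"
    entity: str        # e.g. "class"
    metric: str        # e.g. "method_count"
    threshold: float   # flag entities above this value

def pick_view(task: AnalysisTask) -> str:
    """Rule-based 'generation process': map a task to a view kind."""
    if task.goal == "design-defect-detection":
        # Outliers on a single metric read well on a sorted bar chart.
        return f"bar chart of each {task.entity} sorted by {task.metric}"
    return "generic node-link view of the system"

task = AnalysisTask("design-defect-detection", "class", "method_count", 50)
print(pick_view(task))  # the assistant would then guide navigation from here
```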
33

Certification of a Tool Chain for Deductive Program Verification

Herms, Paolo 14 January 2013 (has links) (PDF)
This thesis belongs to the domain of software verification. The goal of verifying software is to ensure that an implementation, a program, satisfies the requirements, the specification. This is especially important for critical computer programs, such as control systems for airplanes, trains, and power plants, where a malfunction occurring during operation would have catastrophic consequences. Software requirements can concern safety or functionality. Safety requirements, such as not accessing memory locations outside valid bounds, are often implicit, in the sense that any implementation is expected to be safe. Functional requirements, on the other hand, specify what the program is supposed to do. The specification of a program is often expressed informally by describing in English or some other natural language the mission of a part of the program code. Program verification is then usually done by manual code review, simulation, and extensive testing, but this does not guarantee that all possible execution cases are captured. Deductive program proving is a complete way to ensure the soundness of the program. Here a program along with its specification is a mathematical object, and its desired properties are logical theorems to be formally proved. This way, if the underlying logic system is consistent, we can be absolutely sure that the proven property holds for the program in any case. Generation of verification conditions is a technique that helps the programmer prove the properties he wants about his programs. Here a VCG tool analyses a program and its formal specification and produces a mathematical formula whose validity implies the soundness of the program with respect to its specification. This is particularly interesting when the generated formulas can be proved automatically by external SMT solvers. This approach is based on the works of Hoare and Dijkstra and is well understood and shown correct in theory. Deductive verification tools have nowadays reached a maturity allowing them to be used in industrial contexts where a very high level of assurance is required. But implementations of this approach must deal with all kinds of language features and can therefore become quite complex and contain errors -- in the worst case stating that a program is correct even when it is not. This raises the question of the level of confidence granted to these tools themselves. The aim of this thesis is to address this question. We develop, in the Coq system, a certified verification-condition generator (VCG) for ACSL-annotated C programs. Our first contribution is the formalisation of an executable VCG for the Whycert intermediate language, an imperative language with loops, exceptions, and recursive functions, and its soundness proof with respect to the blocking big-step operational semantics of the language. A second contribution is the formalisation of the ACSL logical language and the semantics of ACSL annotations of Compcert's Clight. From the compilation of ACSL-annotated Clight programs to Whycert programs and its semantics preservation proof, combined with a Whycert axiomatisation of the Compcert memory model, results our main contribution: an integrated certified tool chain for verification of C programs on top of Compcert. By combining our soundness result with the soundness of the Compcert compiler, we obtain a Coq theorem relating the validity of the generated proof obligations with the safety of the compiled assembly code.
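
For readers unfamiliar with verification-condition generation, the standard weakest-precondition rules (after Dijkstra; not Whycert's exact calculus) give the flavour of what such a tool computes:

```latex
\begin{align*}
wp(x := e,\; Q) &= Q[x \mapsto e]\\
wp(S_1;\, S_2,\; Q) &= wp(S_1,\; wp(S_2,\; Q))\\
wp(\mathbf{if}\ b\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\; Q)
  &= (b \Rightarrow wp(S_1, Q)) \land (\lnot b \Rightarrow wp(S_2, Q))
\end{align*}
% Example: for the triple {P} x := x + 1 {x > 0}, the generated
% verification condition is P => x + 1 > 0, which an SMT solver can
% discharge whenever P implies x >= 0.
```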
34

Prediction of Code Lifetime

Nordfors, Per January 2017 (has links)
There are several previous studies in which machine learning algorithms are used to predict how fault-prone a piece of code is. This thesis takes a slightly different approach by attempting to predict how long a piece of code will remain unmodified after being written (its “lifetime”). This is based on the hypothesis that frequently modified code is more likely to contain weaknesses, which may make lifetime predictions useful for code evaluation purposes. In this thesis, the predictions are made with machine learning algorithms which are trained on open source code examples from GitHub. Two different machine learning algorithms are used: the multilayer perceptron and the support vector machine. A piece of code is described by three groups of features: code contents, code properties obtained from static code analysis, and metadata from the version control system Git. In a series of experiments it is shown that the support vector machine is the best performing algorithm and that all three feature groups are useful for predicting lifetime. Both the multilayer perceptron and the support vector machine outperform a baseline prediction which always outputs the mean lifetime of the training set. This indicates that lifetime can to some extent be predicted based on information extracted from the code. However, lifetime prediction performance is shown to be highly dataset-dependent, with large error magnitudes.
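
A minimal scikit-learn sketch of the setup described above — support vector regression against an always-predict-the-mean baseline — with synthetic data standing in for the GitHub corpus (the thesis's actual features and hyperparameters are not given in this abstract):

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for the three feature groups (code contents,
# static-analysis properties, Git metadata) and lifetime labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = np.abs(X[:, 0] * 30 + X[:, 3] * 10 + rng.normal(scale=15, size=1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)
svm = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)

# The interesting comparison: does the SVM beat "always predict the mean"?
print("baseline MAE:", mean_absolute_error(y_te, baseline.predict(X_te)))
print("SVM MAE:     ", mean_absolute_error(y_te, svm.predict(X_te)))
```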
35

Ranking of Android Apps based on Security Evidences

Ayush Maharjan (9728690) 07 January 2021 (has links)
With the large number of Android apps available in app stores such as Google Play, it has become increasingly challenging to choose among them. Users generally select apps based on the ratings and reviews of other users, or on recommendations from the app store. But with the growing security and privacy concerns around mobile apps, it is very important to take security into consideration when choosing an app. This thesis proposes different ranking schemes for Android apps based on security evidences obtained from available static code analysis tools. It proposes ranking schemes based on the categories of evidences reported by the tools, on the frequency of each category, and on the severity of each evidence. The evidences are gathered, and rankings are generated, based on the theory of Subjective Logic. In addition to these ranking schemes, the tools themselves are evaluated against the Ghera benchmark. Finally, this work proposes two additional schemes to combine the evidences from different tools into a combined ranking.
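
Subjective Logic's mapping from evidence counts to a binomial opinion is standard, so the heart of such a ranking can be sketched; the example evidence counts and the benign/security split below are this sketch's assumptions, not the thesis's data.

```python
def opinion(positive, negative, prior=0.5, W=2.0):
    """Binomial opinion (belief, disbelief, uncertainty) from evidence
    counts, following Josang's Subjective Logic; W is the
    non-informative prior weight, conventionally 2."""
    total = positive + negative + W
    b = positive / total
    d = negative / total
    u = W / total
    return b, d, u, b + prior * u  # last value: expected probability

# Hypothetical evidence per app: (benign findings, security findings),
# e.g. counts of issues reported by static-analysis tools.
apps = {"app_a": (40, 2), "app_b": (10, 10), "app_c": (3, 0)}
ranking = sorted(apps, key=lambda a: opinion(*apps[a])[3], reverse=True)
print(ranking)  # higher expected probability of being benign ranks first
```

Note how the uncertainty term rewards apps with more total evidence: `app_c` has no negative findings but so little evidence that its ranking stays cautious.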
36

A comparison of latency for MongoDB and PostgreSQL with a focus on analysis of source code

Lindvall, Josefin, Sturesson, Adam January 2021 (has links)
The purpose of this paper is to clarify the differences in latency between PostgreSQL and MongoDB as a consequence of their differences in software architecture. This has been achieved through benchmarking of Insert, Read and Update operations with the tool “Yahoo! Cloud Serving Benchmark”, and through source code analysis of both database management systems (DBMSs). The overall structure of the architecture has been researched with Big O notation as a tool to examine the complexity of the source code. The results from the benchmarking show that the latency for Insert and Update operations was lower for MongoDB, while the latency for Read was lower for PostgreSQL. The results from the source code analysis show that both DBMSs have a complexity of O(n), but that there are multiple differences in their software architecture affecting latency. The most important difference was the length of the parsing process, which was longer for PostgreSQL. The conclusion is that there are significant differences in latency and source code and that room exists for further research in the field. The biggest limitation of the experiment consists of factors such as background processes, which affected latency and could not be eliminated, resulting in low validity.
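
As a much smaller hand-rolled counterpart to YCSB, a probe of the same three operations could look as follows, assuming default-configured local servers and the pymongo and psycopg2 client libraries; absolute numbers from such a probe are only indicative.

```python
import time
import psycopg2
from pymongo import MongoClient

def lat(op, repeats=1000):
    """Mean latency of op(i) in milliseconds over `repeats` calls."""
    start = time.perf_counter()
    for i in range(repeats):
        op(i)
    return (time.perf_counter() - start) / repeats * 1e3

# --- MongoDB (assumes a server on localhost:27017) ---
col = MongoClient().bench.kv
col.drop()
print("mongo insert", lat(lambda i: col.insert_one({"_id": i, "v": "x"})))
print("mongo read  ", lat(lambda i: col.find_one({"_id": i})))
print("mongo update", lat(lambda i: col.update_one({"_id": i},
                                                   {"$set": {"v": "y"}})))

# --- PostgreSQL (assumes a database named 'bench' on localhost) ---
conn = psycopg2.connect(dbname="bench")
conn.autocommit = True
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS kv; "
            "CREATE TABLE kv (id int PRIMARY KEY, v text)")
print("pg insert", lat(lambda i: cur.execute(
    "INSERT INTO kv VALUES (%s, 'x')", (i,))))
print("pg read  ", lat(lambda i: cur.execute(
    "SELECT v FROM kv WHERE id = %s", (i,))))
print("pg update", lat(lambda i: cur.execute(
    "UPDATE kv SET v = 'y' WHERE id = %s", (i,))))
```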
37

Game-Agnostic Asset Loading Order Using Static Code Analysis

Åsbrink, Anton, Andersson, Jacob January 2022 (has links)
Background. User retention is important in the online sphere, especially within gaming. Hosting games on browser gaming websites helps smaller studios and solo developers reach a larger audience. However, displaying a game on a website does not guarantee that the user will try it out, and if the load time is long, the player could move on. Using game-agnostic static code analysis, a potential load order can be created that prioritises downloading the assets required to start the game first, resulting in shorter wait times before the player can start playing. Objectives. The aim of the thesis is to develop a game-agnostic parser able to list all the assets within a given Godot-engine-based game and sort them according to importance: the assets required for the game to be playable are placed first, followed by the assets of each subsequent scene. Methods. Static code analysis is done in this project by parsing all the files and code of a given game. Numerous regular expressions then extract relevant data, such as references to assets and scene changes. The assets are associated with different scenes, which are ordered and distinguished by scene changes. Results. The results vary from making no difference to the prioritised order taking as little as 31% of the original loading time. Graphs generated for every game show the scenes and their ordering through the parsing process, giving insight into the structure of each game as well as the reasons for the potential speedup or the lack of it. Conclusions. The project shows promising results for games that can be successfully parsed and have a scene structure that benefits from reordering. Further work is required for a more comprehensive solution covering a wider variety of games. As these results are largely theoretical, a practical study would be needed to apply them in a realistic setting.
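
A minimal Python sketch of the described technique — regex extraction of res:// asset references from a Godot project's scene and script files. The patterns are simplified assumptions, not the thesis's actual expressions.

```python
import re
from pathlib import Path

# Two main ways Godot text files reference assets: ext_resource entries
# in .tscn scene files, and (pre)load() calls in GDScript. Real-world
# parsing needs more cases; this is a sketch.
TSCN_REF = re.compile(r'\[ext_resource\b[^\]]*?path="(res://[^"]+)"')
GD_REF = re.compile(r'(?:preload|load)\(\s*"(res://[^"]+)"\s*\)')

def assets_per_file(project_root):
    """Map each scene/script file to the res:// assets it references."""
    refs = {}
    for path in Path(project_root).rglob("*"):
        if path.suffix not in (".tscn", ".gd"):
            continue
        text = path.read_text(errors="ignore")
        pattern = TSCN_REF if path.suffix == ".tscn" else GD_REF
        refs[str(path)] = pattern.findall(text)
    return refs

# A load order would put the entry scene's assets first, then the
# assets of each scene reachable from it, in order of first use.
for scene, assets in assets_per_file(".").items():
    print(scene, "->", assets)
```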
38

Repository Mining : Användbarheten av Repository Mining för effektivisering av mjukvaruutveckling / Repository Mining: The usefulness of repository mining for more efficient software development

Engblom Sandin, John January 2022 (has links)
Software companies are always looking for new methods to make their development more efficient and to improve their product. This study examines the usefulness of one such method, called repository mining. Within software development, repository mining is a method of code analysis performed to extract metadata from a version control system. The process is carried out with a code analysis tool, which in this study is CodeScene. The goal of this case study is to investigate the use cases for repository mining for development purposes. The purpose of the study is to understand which types of metadata are relevant and which factors can influence the results. The purpose is also to study how repository mining can help companies in their work to increase or maintain the quality of their systems. The study is carried out in collaboration with the company Sandvik Coromant and their department Machining Foresight, whose code base is analysed. The code base is analysed with the code analysis tool CodeScene to extract metadata, which is then presented to developers within Machining Foresight. A qualitative study consisting of interviews and group discussions is then conducted to gather the developers' reflections and thoughts on the usefulness of repository mining. The results show that repository mining has use cases, but that these require certain factors to be in place. The first use case is an analysis of changes in code complexity, which helps predict future refactorings. The second use case is an analysis of authorship within the system to find places susceptible to knowledge loss, thereby helping in the planning of knowledge sharing. This is, however, a case study, and the results should not be used to draw general conclusions about repository mining as a whole; they should only be taken as direction and indication for future studies.
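
CodeScene is a commercial tool, but the two kinds of metadata the study found useful — change complexity (churn) as a refactoring predictor, and authorship as a knowledge-loss signal — can be mined straight from Git; a small sketch:

```python
import subprocess
from collections import defaultdict

def mine(repo="."):
    """Per-file churn and author set, parsed from `git log --numstat`."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--numstat", "--pretty=format:@%an"],
        capture_output=True, text=True, check=True).stdout
    churn = defaultdict(int)
    authors = defaultdict(set)
    author = None
    for line in out.splitlines():
        if line.startswith("@"):           # commit header: author name
            author = line[1:]
        elif "\t" in line:                 # numstat: added, deleted, path
            added, deleted, path = line.split("\t", 2)
            if added != "-":               # "-" marks binary files
                churn[path] += int(added) + int(deleted)
                authors[path].add(author)
    return churn, authors

churn, authors = mine()
# High churn -> refactoring candidate; a single author -> knowledge risk.
for path in sorted(churn, key=churn.get, reverse=True)[:10]:
    print(f"{churn[path]:6d} lines changed, "
          f"{len(authors[path])} author(s): {path}")
```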
39

A Method for Recommending Computer-Security Training for Software Developers

Nadeem, Muhammad 12 August 2016 (has links)
Vulnerable code may cause security breaches in software systems, resulting in financial and reputation losses for organizations in addition to the loss of their customers' confidential data. Delivering proper software security training to software developers is key to preventing such breaches. Conventional training methods do not take into account the code written by the developers over time, which makes training sessions less effective. We propose a method for recommending computer-security training that helps identify focused and narrow areas in which developers need training. The proposed method leverages the power of static analysis techniques, using the vulnerabilities flagged in the source code as a basis, to suggest the most appropriate training topics to different software developers. Moreover, it utilizes public vulnerability repositories as its knowledge base to suggest community-accepted solutions to different security problems. Such mitigation strategies are platform independent, giving further strength to the utility of the system. This research discusses the proposed architecture of the recommender system, case studies to validate the system architecture, tailored algorithms to improve the performance of the system, and a human-subject evaluation conducted to determine the usefulness of the system. Our evaluation suggests that the proposed system successfully retrieves relevant training articles from the public vulnerability repository. The human subjects found these articles suitable for training, and found the proposed recommender system as effective as a commercial tool.
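
A toy sketch of the core idea — mapping static-analysis findings to training topics through a vulnerability taxonomy — under the assumption that findings carry CWE identifiers. The mapping table here is hypothetical; the thesis instead queries a public vulnerability repository.

```python
from collections import Counter

# Hypothetical mapping from CWE identifiers (as emitted by many static
# analyzers) to training topics; a real system would retrieve articles
# from a public vulnerability repository instead of a hard-coded table.
CWE_TO_TOPIC = {
    "CWE-89":  "SQL injection and parameterized queries",
    "CWE-79":  "Cross-site scripting and output encoding",
    "CWE-120": "Buffer overflows and bounds checking",
}

def recommend(findings, top_n=2):
    """Rank training topics by how often a developer triggers them."""
    counts = Counter(CWE_TO_TOPIC.get(f, "General secure coding")
                     for f in findings)
    return [topic for topic, _ in counts.most_common(top_n)]

# Findings flagged in one developer's recent commits (hypothetical).
print(recommend(["CWE-89", "CWE-89", "CWE-79", "CWE-120"]))
```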
40

KtSpoon: Modelling Kotlin by extending Spoon’s Java Metamodel / KtSpoon: Modellering av Kotlin genom att utöka Spoons metamodell av Java

Lundholm, Jesper January 2021 (has links)
Kotlin is a relatively new language that has received much attention since its first stable release in February 2016. Despite the fast growth of the language, there is a lack of libraries that provide an intuitive, typed abstract syntax tree (AST). Recognizing the utility of user-friendly ASTs with support for various analysis and transformation tasks, we make a first contribution towards bringing one to Kotlin with KtSpoon. Kotlin's interoperability with Java enables exploitation of Java's mature ecosystem, and we propose the use of the Spoon library, with its Java metamodel, as the base for a model of Kotlin. We show the feasibility of this approach with KtSpoon, which is implemented through small additions to the Spoon metamodel. It consists of a tree builder that produces a Spoon AST from Kotlin source code and a pretty-printer that prints it back to source code. Through an empirical study, we find that KtSpoon can accurately represent the full Kotlin language. We conclude that while it is possible to model the Kotlin language with small modifications to the Spoon metamodel, a partial reimplementation will likely be required for it to be an intuitive model for developers.
