61

Analyse de la complexité des programmes par interprétation sémantique / Program complexity analysis by semantics interpretation

Péchoux, Romain 14 November 2007 (has links)
There are several approaches developed by the Implicit Computational Complexity (ICC) community for analyzing and controlling the resources needed to execute programs. In this document, we focus on resource control by means of semantics interpretations. After introducing the notion of quasi-interpretation together with its properties and the characterizations it yields, we present the results obtained in the study of this tool: we study the synthesis problem, which consists in finding a quasi-interpretation for a given program, and we tackle the issue of quasi-interpretation modularity. Modularity decreases the complexity of the synthesis procedure and allows more algorithms to be captured. We present several extensions of quasi-interpretations to reactive, bytecode, and higher-order programming languages. Afterwards, we introduce the notion of sup-interpretation. This notion strictly generalizes quasi-interpretation and is used in resource-control criteria in order to analyze the complexity of more algorithms, including algorithms over infinite data and divide-and-conquer algorithms. We combine sup-interpretations with several termination criteria, such as RPO orderings, dependency pairs, and the size-change principle, and we compare them to the notion of quasi-interpretation. After characterizing small parallel complexity classes, we give heuristics for synthesizing sup-interpretations without the subterm property, that is, sup-interpretations which are not quasi-interpretations. Finally, we extend sup-interpretations to object-oriented languages, obtaining several criteria for controlling the resources of an object-oriented program and its methods.
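
For readers unfamiliar with the notion, here is a minimal textbook-style illustration (not taken from the thesis) of a quasi-interpretation for addition over unary numbers, assuming interpretations ranging over the non-negative reals:

\[
\mathtt{add}(\mathtt{0}, y) \to y, \qquad
\mathtt{add}(\mathtt{s}(x), y) \to \mathtt{s}(\mathtt{add}(x, y))
\]
\[
[\mathtt{0}] = 0, \qquad [\mathtt{s}](X) = X + 1, \qquad [\mathtt{add}](X, Y) = X + Y
\]

Each assignment is monotonic and satisfies the subterm property ([s](X) >= X, and [add](X, Y) >= X and >= Y), and each rewrite rule is compatible with the interpretation: for the second rule, [add]([s](X), Y) = X + Y + 1 >= [s]([add](X, Y)) = X + Y + 1. The interpretation of add therefore bounds the size of every computed value, which is the kind of resource bound these interpretations provide.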
62

Extendable and Adaptable Framework for Input Language Independent Static Analysis / Proširiv i prilagodljiv okvir za statičku analizu nezavisnu od ulaznog jezika

Rakić Gordana 16 September 2015 (has links)
In the modern approach to software development, great importance is given to monitoring software quality in early development phases, which makes static analysis increasingly important. Furthermore, software projects are becoming more complex and heterogeneous. These characteristics are reflected in a diversity of functionalities and in the variety of computer languages and technologies used to develop them. Because of this, consistency in static analysis matters more than it did before.

This dissertation describes SSQSA, a Set of Software Quality Static Analyzers. The aim of the SSQSA framework is consistent static analysis. This goal is reached by introducing a new intermediate source-code representation called eCST, the enriched Concrete Syntax Tree. The dissertation focuses mostly on eCST, the intermediate representations derived from it, and their generation, with a description of the tools involved.

The main characteristic of eCST is language independence, which gives the SSQSA framework two-level extensibility: support for a new language and support for a new analysis. This leads to efficiency in adding support at both levels and to consistency of the added functionalities.

As a proof of concept, support for more than 10 characteristic languages was introduced. Furthermore, characteristic static analysis techniques (software metrics calculation, code-clone detection, etc.) were implemented and integrated into the framework.

The established SSQSA framework provides the infrastructure for further development of a complete platform for software quality control.
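
As a rough illustration of how a language-independent tree enables language-independent analyses, the hedged Python sketch below computes a McCabe-style complexity purely from universal node kinds; the node and kind names are hypothetical and do not reflect the actual eCST vocabulary or the SSQSA implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical universal kinds; the real eCST defines its own vocabulary.
BRANCH_KINDS = {"BRANCH_STATEMENT", "LOOP_STATEMENT"}

@dataclass
class Node:
    kind: str                      # universal, language-independent node type
    token: str = ""                # original source token kept for traceability
    children: List["Node"] = field(default_factory=list)

def cyclomatic_complexity(unit: Node) -> int:
    """Metric computed from universal kinds only, so it works for any front end."""
    count, stack = 1, [unit]
    while stack:
        n = stack.pop()
        count += n.kind in BRANCH_KINDS
        stack.extend(n.children)
    return count

# A unit with one branch and one loop, whatever the source language was:
unit = Node("UNIT", "f", [Node("BRANCH_STATEMENT", "if"), Node("LOOP_STATEMENT", "while")])
print(cyclomatic_complexity(unit))   # 3
```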
63

Ranking source code static analysis warnings for continuous monitoring of free/libre/open source software repositories / Ranqueamento de avisos de análise estática de código fonte para monitoramento de repositórios de software livre

Ribeiro, Athos Coimbra 22 June 2018 (has links)
While there is a wide variety of both open source and proprietary source code static analyzers available in the market, each of them usually performs better on a small set of problems, making it hard to choose one single tool to rely on when examining a program. Combining the analyses of different tools may reduce the number of false negatives, but yields a corresponding increase in the number of false positives (which is already high for many tools). An interesting solution, then, is to filter these results to identify the issues least likely to be false positives. This work presents kiskadee, a system to support the use of static analysis during software development by providing carefully ranked static analysis reports. First, it runs multiple static analyzers on the source code. Then, using a classification model, the potential bugs detected by the static analyzers are ranked by their importance, with critical flaws ranked first and potential false positives ranked last. To train kiskadee's classification model, we post-analyze the reports generated by three tools on synthetic test cases provided by the US National Institute of Standards and Technology. To make our technique as general as possible, we limit our data to the reports themselves, excluding other information such as change histories or code metrics. The features extracted from these reports are used to train a set of decision trees using AdaBoost to create a stronger classifier, achieving 0.8 classification accuracy (the combined false positive rate of the tools used was 0.61). Finally, we use this classifier to rank static analyzer alarms by the probability that a given alarm is an actual bug. Our experimental results show that, on average, when inspecting warnings ranked by kiskadee, one hits 5.2 times fewer false positives before each bug than when using a randomly sorted warning list.
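
The ranking step described above can be sketched with scikit-learn's AdaBoost over decision stumps; the toy feature vectors below are placeholders and do not reflect kiskadee's actual feature set or tooling.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Toy stand-ins for features extracted from static-analysis reports
# (e.g. reporting tool, warning category, severity); label 1 = confirmed bug.
X_train = np.array([[0, 3, 2], [1, 1, 0], [2, 3, 1], [0, 1, 0]])
y_train = np.array([1, 0, 1, 0])

clf = AdaBoostClassifier(n_estimators=100)   # boosted decision stumps by default
clf.fit(X_train, y_train)

# Rank unseen warnings: likely real bugs first, likely false positives last.
X_new = np.array([[0, 3, 1], [1, 1, 0]])
alarms = ["possible buffer overflow, foo.c:42", "dead store, bar.c:7"]
p_bug = clf.predict_proba(X_new)[:, 1]
for p, alarm in sorted(zip(p_bug, alarms), reverse=True):
    print(f"{p:.2f}  {alarm}")
```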
64

An Analyzer for Message Passing Programs

Huang, Yu 01 May 2016 (has links)
Asynchronous message passing systems are fast becoming a common means of communication between devices. Two problems that arise in message passing programs are difficult to solve. The first, intended or otherwise, is message race, where a receive may match more than one send in the runtime system. This non-determinism often leads to intermittent and unexpected behavior depending on how the race is resolved. The second problem is deadlock, a situation in which each member process of a group is waiting for some other member to communicate with it, but no member is attempting to do so. Detecting whether message races or deadlocks exist in a message passing program is NP-complete in both cases. The difficulty of solving these two problems also comes from three factors that complicate the semantics: asynchronous communication, synchronous barriers, and buffering settings, including infinite buffering (the system can buffer messages) and zero buffering (the system has no internal buffering). To solve these problems with their complicating factors, this research provides a novel predictive analysis that starts from a concrete execution and then predicts the behavior of other executions arising from it. The analysis starts with Satisfiability Modulo Theories (SMT) based model checking, which provides a precise analysis of program behavior. Unfortunately, a precise analysis using SMT does not scale to large programs. As such, SMT based model checking is combined with a heuristic search for witnessing program properties. The heuristic search efficiently identifies how sends may match with receives at runtime: it initially looks for match relations between sends and receives in a small search space, and the space is enlarged only if the program property is not witnessed, until all possible match relations reflected in message non-determinism are found. This research also gives a static analysis approach that is scalable because it does not need to analyze the full set of program behaviors; rather, it uses only polynomial-time algorithms to identify all potential deadlocks in send-receive templates, given a set of pre-defined deadlock patterns. With the predictive analysis, consisting of SMT based model checking with heuristic search, and the static analysis, this research is able to solve the two problems above. The work in this dissertation also demonstrates that the predictive analysis is more efficient than existing tools for verifying message passing programs.
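
To give a flavour of SMT-based reasoning about send-receive matching (heavily simplified relative to the dissertation's encoding), the following Z3 sketch asks whether an alternative match, i.e. a message race, is feasible for two wildcard receives; all variable names are invented for the example.

```python
from z3 import Int, Solver, Distinct, Or, sat

# Two wildcard receives r0, r1 posted in order by one process; sends 0 and 1
# issued by two other processes. m_i = index of the send matched by receive i.
m0, m1 = Int("m0"), Int("m1")

s = Solver()
s.add(Or(m0 == 0, m0 == 1), Or(m1 == 0, m1 == 1))  # each receive matches some send
s.add(Distinct(m0, m1))                            # a send is consumed only once
s.add(m0 == 1)                                     # property: r0 receives send 1

if s.check() == sat:                               # sat => this match is feasible,
    print("alternative match is feasible:", s.model())   # i.e. a message race exists
else:
    print("no such match")
```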
65

Certification of static analysis in many-sorted first-order logic / Analyse statique certifiée en logique du premier ordre multi-sortée

Cornilleau, Pierre-Emmanuel 25 March 2013 (has links)
Static program analysis is a core technology for both verifying and finding errors in programs, but most static analysers are complex pieces of software that are not without error. A static analysis formalised as an abstract interpreter can be proved sound; however, such proofs are significantly harder to carry out on the actual implementation of an analyser. To alleviate this problem, we propose to generate Verification Conditions (VCs, formulae valid only if the results of the analyser are correct) and to discharge them using an Automated Theorem Prover (ATP). We generate formulae in Many-Sorted First-Order Logic (MSFOL), a logic that has been successfully used in deductive program verification. MSFOL is expressive enough to describe the results of complex analyses and to formalise the operational semantics of object-oriented languages. Using the same logic for both tasks allows us to prove the soundness of the VC generator using deductive verification tools. To ensure that VCs can be automatically discharged for complex analyses of the heap, we introduce a VC calculus that produces formulae belonging to a decidable fragment of MSFOL. Furthermore, to be able to certify different analyses with the same calculus, we describe a family of analyses with a parametric concretisation function and instrumentation of the semantics. To improve the reliability of ATPs, we also studied the result certification of Satisfiability Modulo Theories solvers, a family of ATPs dedicated to MSFOL. We propose a modular proof system and a modular proof verifier, programmed and proved correct in Coq, which rely on exchangeable verifiers for each of the underlying theories.
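
As a toy illustration of discharging a VC with an SMT solver (the thesis's VCs, generated from heap analyses and an object-oriented semantics, are far richer), the sketch below checks that an assumed analysis result entails a required property by showing that its negation is unsatisfiable; the sorts and function names are invented for the example.

```python
from z3 import DeclareSort, Function, IntSort, Const, Solver, Not, unsat

Ref = DeclareSort("Ref")                      # sort of heap references
len_c = Function("len_c", Ref, IntSort())     # concrete list length
len_a = Function("len_a", Ref, IntSort())     # length bound computed by the analyser
r = Const("r", Ref)                           # an arbitrary reference

analysis_result = len_c(r) <= len_a(r)        # holds only if the analyser is correct
property_to_check = len_c(r) <= len_a(r) + 1  # the VC we want to certify

s = Solver()
s.add(analysis_result, Not(property_to_check))   # VC is valid iff this is unsatisfiable
print("VC discharged" if s.check() == unsat else "cannot certify")
```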
66

A Security Evaluation Methodology for Container Images

Abbott, Brendan Michael 01 March 2017 (has links)
The goal of this research is to create a methodology that evaluates the security posture of container images and helps improve container security. This was done by first searching for any guidelines or standards that focus on container images and security. After finding none, I decided to create an evaluative methodology. The methodology is composed of actions that users should take to evaluate the security of a container image. The methodology was created through in-depth research on container images and the build instructions used to create them and is referred to as the Security Evaluation Methodology for Container Images. The entire Methodology was reviewed by experts in containers, information technology, and security; updated based on their feedback; and then reviewed again for further feedback. Four of the most popular container images—nginx, redis, mbabineau/cfn-bootstrap, and google/cadvisor—were evaluated using the Methodology. The evaluation revealed security issues in each image and provided direction on how to resolve each issue. Based on the positive feedback of experts and the performance of the Methodology, I propose that the Methodology be used to evaluate all container images, as it provides valuable security insights about, and suggestions for, an image.
67

Modeling and Pattern Matching Security Properties with Dependence Graphs

Fåk, Pia January 2005 (has links)
With an increasing number of computers connected to the Internet, the number of malicious attacks on computer systems also rises. The key to all successful attacks on information systems is finding a weak spot in the victim system. Some types of bugs in software can constitute such weak spots. This thesis presents and evaluates a technique for statically detecting such security related bugs. It models the analyzed program, as well as different types of security bugs, with dependence graphs. Errors are detected by searching the program graph model for subgraphs matching security bug models.

The technique has been implemented in a prototype tool called GraphMatch. Its accuracy and performance have been measured by analyzing open source application code for missing input validation vulnerabilities. The test results show that the accuracy obtained so far is low and that the complexity of the algorithms currently used causes analysis times of several hours even for fairly small projects. Further research is needed to determine whether the performance and accuracy can be improved.
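
The matching idea can be sketched with an off-the-shelf subgraph-isomorphism search; the node kinds and the "tainted input reaches an exec sink" pattern below are invented for illustration and are not GraphMatch's actual graph models.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy program dependence graph: socket input is concatenated into a command.
pdg = nx.DiGraph()
pdg.add_node("n1", kind="input")
pdg.add_node("n2", kind="concat")
pdg.add_node("n3", kind="exec")
pdg.add_edges_from([("n1", "n2"), ("n2", "n3")])

# Security-bug model: input flowing through concatenation into an exec sink.
pattern = nx.DiGraph()
pattern.add_node("p1", kind="input")
pattern.add_node("p2", kind="concat")
pattern.add_node("p3", kind="exec")
pattern.add_edges_from([("p1", "p2"), ("p2", "p3")])

match_kind = isomorphism.categorical_node_match("kind", None)
gm = isomorphism.DiGraphMatcher(pdg, pattern, node_match=match_kind)
print(gm.subgraph_is_isomorphic())   # True: the bug model occurs in the program graph
```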
68

Benchmarking Points-to Analysis

Gutzmann, Tobias January 2013 (has links)
Points-to analysis is a static program analysis that, simply put, computes which objects created at certain points of a given program might show up at which other points of the same program. In particular, it computes possible targets of a call and possible objects referenced by a field. Such information is essential input to many client applications in optimizing compilers and software engineering tools. Comparing experimental results with respect to accuracy and performance is required in order to distinguish the promising from the less promising approaches to points-to analysis. Unfortunately, comparing the accuracy of two different points-to analysis implementations is difficult, as there are many pitfalls in the details. In particular, there are no standardized means to perform such a comparison, i.e., no benchmark suite (a set of programs with well-defined rules for comparing different points-to analysis results) exists. Therefore, different researchers use their own means to evaluate their approaches to points-to analysis. To complicate matters, even the same researchers do not stick to the same evaluation methods, which often makes it impossible to take two research publications and reliably tell which one describes the more accurate points-to analysis. In this thesis, we define a methodology for benchmarking points-to analysis. We create a benchmark suite, compare three different points-to analysis implementations with each other based on this methodology, and explain differences in analysis accuracy. We also argue for the need for a Gold Standard, i.e., a set of benchmark programs with exact analysis results. Such a Gold Standard is often required to compare points-to analysis results, and it also makes it possible to assess the exact accuracy of points-to analysis results. Since such a Gold Standard cannot be computed automatically, it needs to be created semi-automatically by the research community. We propose a process for creating a Gold Standard based on under-approximating it through optimistic (dynamic) analysis and over-approximating it through conservative (static) analysis. With the help of improved static and dynamic points-to analysis and expert knowledge about benchmark programs, we present a first attempt towards a Gold Standard. We also provide a Web-based benchmarking platform, through which researchers can compare their own experimental results with those of other researchers, and can contribute towards the creation of a Gold Standard.
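
The role of the two approximations in bounding a Gold Standard can be shown with plain set operations; the call-graph edges below are made up for illustration.

```python
# Call-graph edges (call site -> possible target method), invented for the example.
dynamic = {("A.main:5", "B.run"), ("B.run:12", "C.f")}            # observed at run time
static = dynamic | {("B.run:12", "D.f"), ("A.main:5", "C.run")}   # conservative analysis
gold_lower, gold_upper = dynamic, static                          # bounds on the Gold Standard

candidate = {("A.main:5", "B.run"), ("B.run:12", "C.f"), ("B.run:12", "D.f")}

missed = gold_lower - candidate      # non-empty => the candidate analysis is unsound
spurious = candidate - gold_upper    # edges even the over-approximation rules out
print("missed:", missed, "spurious:", spurious)
```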
69

A Concurrent IFDS Dataflow Analysis Algorithm Using Actors

Rodriguez, Jonathan David January 2010 (has links)
There has recently been a resurgence in interest in techniques for effective programming of multi-core computers. Most programmers find general-purpose concurrent programming to be extremely difficult. This difficulty severely limits the number of applications that currently benefit from multi-core computers. There already exist many concurrent solutions for the class of regular applications, which include various algorithms for linear algebra. For the class of irregular applications, which operate on dynamic and pointer- and graph-based structures, efficient concurrent solutions have so far remained elusive. Dataflow analysis applications, which are often found in compilers and program analysis tools, have received particularly little attention with regard to execution on multi-core machines. Operating on the theory that the Actor model, which structures computations as systems of asynchronously-communicating entities, is a more appropriate method for representing irregular algorithms than the shared-memory model, this work presents a concurrent Actor-based formulation of the IFDS, or Interprocedural Finite Distributive Subset, dataflow analysis algorithm. The implementation of this algorithm is done using the Scala language and its Actors library. This algorithm achieves significant speedup on multi-core machines without using any optimistic execution. This work contributes to Actor research by showing how the Actor model can be practically applied to a dataflow analysis problem. This work contributes to static analysis research by showing how a dataflow analysis algorithm can effectively make use of multi-core machines, allowing the possibility of faster and more precise analyses.
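
At its core, IFDS reduces dataflow analysis to reachability over an exploded super-graph, and that reachability is what the actors propagate concurrently. Below is a minimal sequential sketch of the propagation step (omitting procedure calls, returns, and the actor machinery); the graph and fact names are invented for the example.

```python
from collections import deque

# Exploded super-graph: (program point, dataflow fact) -> successor (point, fact) pairs.
flow = {
    ("entry", "0"): {("s1", "0"), ("s1", "x")},   # "0" is the special tautology fact
    ("s1", "x"): {("s2", "x")},
    ("s1", "0"): {("s2", "0")},
    ("s2", "x"): {("exit", "x")},
    ("s2", "0"): {("exit", "0")},
}

reached = {("entry", "0")}
work = deque(reached)
while work:
    cur = work.popleft()
    for nxt in flow.get(cur, ()):                 # in the actor version, each node's
        if nxt not in reached:                    # edges live in a separate actor and
            reached.add(nxt)                      # the worklist becomes asynchronous messages
            work.append(nxt)

print(sorted(reached))                            # facts reachable from the entry seed
```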
70

Evaluation Of Seismic Response Modification Factors For Steel Frames By Non-linear Analysis

Bakir, Serhan 01 November 2006 (has links) (PDF)
In this study, steel framing systems are investigated with regard to their lateral load carrying capacity, and in this context the seismic response modification factors of individual systems are analyzed. Numerous load resisting layouts, such as different bracing systems and un-braced moment resisting frames with various bay and story configurations, are designed and evaluated in a parametric fashion. Three types of beam to column connection conditions are incorporated in the evaluation process. Frames designed according to the Turkish seismic code are investigated by nonlinear static analysis with the guidance of previous studies and recent FEMA provisions. The method of analysis, design, and evaluation data are presented in detail. Previous studies in the literature and the history and theory of the response modification phenomenon are also presented. Results are summarized, and the main weaknesses and ambiguities introduced into design by the use of "R" factors are stated based on the observed behavior.
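
For context, the response modification factor is commonly decomposed (in ATC-19-style formulations; the thesis may use different notation or additional components) as the product of a ductility reduction factor and an overstrength factor obtained from the pushover curve:

\[
R = R_{\mu}\,\Omega, \qquad
R_{\mu} = \frac{V_e}{V_y}, \qquad
\Omega = \frac{V_y}{V_d},
\]

where \(V_e\) is the elastic strength demand, \(V_y\) the yield strength obtained from nonlinear static (pushover) analysis, and \(V_d\) the design strength.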
