About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Compile-Time Characterization of Recurrent Patterns in Irregular Computations

Singri, Arjun Jagadeesh 03 September 2010 (has links)
No description available.
12

A Cost Effective Methodology for Quantitative Evaluation of Software Reliability using Static Analysis

Schilling, Walter William, Jr. January 2007 (has links)
No description available.
13

Similarity hash based scoring of portable executable files for efficient malware detection in IoT

Namanya, Anitta P., Awan, Irfan U., Disso, J.P., Younas, M. 09 July 2019 (has links)
The current rise in malicious attacks shows that existing security systems are bypassed by malicious files. Similarity hashing has been adopted for sample triaging in malware analysis and detection. File similarity is used to cluster malware into families so that a common signature can be designed for each. This paper explores four hash types currently used in malware analysis of portable executable (PE) files. Although each hashing technique produces interesting results when applied independently, each has a high false detection rate. This paper investigates the central issue of how different hashing techniques can be combined to yield a quantitative malware score and achieve better detection rates. We design and develop a novel approach for malware scoring based on the hash results. The proposed approach is evaluated through a number of experiments; the evaluation demonstrates a significant improvement (> 90%) in the true detection rate for malware.
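
To make the combined-scoring idea concrete, here is a minimal Python sketch. The hash names, weights, threshold, and similarity values are illustrative assumptions, not the authors' published parameters.

```python
# Hypothetical sketch of combining per-hash-type similarity scores (each in
# [0, 100], as ssdeep-style comparisons report) into a single malware score.

def combined_malware_score(similarities, weights):
    """Weighted average of per-hash similarity scores."""
    total = sum(weights[name] for name in similarities)
    return sum(similarities[name] * weights[name] for name in similarities) / total

# Similarity of an unknown PE file to its nearest known-malware neighbour
# under four hash types (values made up for illustration).
sims = {"ssdeep": 88.0, "sdhash": 74.0, "imphash": 100.0, "pehash": 65.0}
wts  = {"ssdeep": 0.30, "sdhash": 0.30, "imphash": 0.25, "pehash": 0.15}

score = combined_malware_score(sims, wts)
print(f"malware score: {score:.1f}")
if score > 70.0:                     # assumed decision threshold
    print("flagged as likely malware")
```
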
14

A development environment and static analyses for GUARDOL - a language for the specification of high assurance guards

Dodds, Josiah January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John M. Hatcliff / There are a number of network situations where different networks have different security policies yet still need to share information. While it is important to allow some data to flow between two such networks, it is just as important that they not share any data that violates their respective security policies. Constraints on data sharing are often phrased in terms of classification levels of data (e.g., top secret, secret, public). They might also be stated in terms of the contents of the data (e.g., whether military base names appear, whether a location is accurate). The software and hardware that solve these problems are called Cross Domain Solutions (CDS). A variety of hardware platforms are capable of implementing CDS; these platforms are all configured in different ways, and they are often proprietary. Not only are there a number of platforms on the market, many are difficult to understand, verify, or even specify. The Guardol project provides an open, non-proprietary, domain-specific language for specifying CDS security policies and implementing CDS. Guardol is designed to be easy to understand and verify. This thesis describes the design and implementation of the primary Guardol components. It includes a description of the Eclipse GUI plug-ins developed for the project as well as of the new formal analyses and translations developed for the language. The translation is used to plug into external tools for model checking, and the analyses help make the translation clean and efficient. The analyses are also useful tools for making Guardol easier for developers to use.
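
As a rough illustration of the kind of rule a CDS enforces, consider the toy guard below. This is plain Python, not Guardol syntax; the classification levels come from the abstract, while the filter term and message format are invented for the example.

```python
# Toy cross-domain guard: a message may flow to a destination network only if
# its classification level is low enough and its content passes a filter.

CLASSIFICATION = {"public": 0, "secret": 1, "top secret": 2}
BLOCKED_TERMS = {"base alpha"}          # hypothetical content filter

def guard_allows(message, destination_max_level):
    level_ok = CLASSIFICATION[message["level"]] <= CLASSIFICATION[destination_max_level]
    content_ok = not any(t in message["body"].lower() for t in BLOCKED_TERMS)
    return level_ok and content_ok

msg = {"level": "secret", "body": "Routine status report."}
print(guard_allows(msg, "public"))      # False: secret data cannot reach a public network
print(guard_allows(msg, "secret"))      # True
```
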
15

Distributed parallel symbolic execution

King, Andrew January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / Software defects cost our economy a significant amount of money. Techniques that can detect software defects before the software begins its operational life-cycle are therefore highly valuable. Unfortunately, as software becomes more ubiquitous, it is also becoming more complex. Static analysis of software can be computationally intensive, and as software grows more complex, the computational demands of any analysis applied to it increase as well. While increasingly complex software entails more computationally demanding analysis, the computational capability of computers has increased exponentially over the last half century of computing. Historically, this increase came from raising the clock speed of the computer's central processing unit (CPU). In the last several years, engineering limitations have made it increasingly difficult to build CPUs with progressively higher clock speeds. Instead, processor manufacturers now provide increased capability in the form of 'multi-core' CPUs, where each processor package contains two or more processing units, enabling the processor to execute more than one task concurrently. This thesis describes the design and implementation of a parallel version of symbolic execution that can take advantage of modern multi-core and multi-processor systems to complete the analysis of software units in a reduced amount of time.
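
A minimal sketch of the underlying idea, under the assumption that the symbolic execution tree is partitioned by branch-decision prefixes; the simulated workload stands in for real state forking and SMT solving.

```python
# Partition a depth-4 binary symbolic execution tree by depth-2 path prefixes
# and explore the four disjoint subtrees in parallel worker processes.

from multiprocessing import Pool

DEPTH = 4

def explore(prefix):
    """Stand-in for symbolically executing every path under `prefix`.
    Returns the prefix and the number of paths in its subtree."""
    return prefix, 2 ** (DEPTH - len(prefix))

if __name__ == "__main__":
    prefixes = [(a, b) for a in (0, 1) for b in (0, 1)]
    with Pool(processes=4) as pool:
        results = pool.map(explore, prefixes)
    print(results)   # each worker covered one disjoint subtree of paths
```
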
16

Dioïdes et idéaux de polynômes en analyse statique / Static analysis with dioids and polynomial ideals

Jobin, Arnaud 16 January 2012 (has links)
Static analysis aims to verify that programs behave correctly, i.e., satisfy safety properties. However, inferring the properties satisfied by a program is a difficult problem: Rice's theorem states that any non-trivial property of a Turing-complete programming language is undecidable. To work around this difficulty, static analyses approximate the possible behaviours of the program. Abstract interpretation theory gives these approximations a formal framework. This theory, introduced by Cousot & Cousot, is based on the mathematical structure of lattices, Galois connections, and iterative fixpoint calculation. The framework defines the notion of a correct approximation and allows approximations to be compared qualitatively, including a notion of best approximation. Quantitative notions, by contrast, do not arise naturally in this framework. We therefore address the question of inferring, by static analysis, properties that are expressed quantitatively (such as memory usage or execution time).
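
For reference, the Galois connection at the heart of Cousot & Cousot's framework relates the concrete domain C and the abstract domain A through an abstraction function α and a concretization function γ; this is the standard definition, not a result specific to this thesis.

```latex
% alpha : C -> A abstracts, gamma : A -> C concretizes; the pair forms a
% Galois connection when, for all c in C and a in A:
\forall c \in C,\; \forall a \in A:\quad
  \alpha(c) \sqsubseteq a \;\Longleftrightarrow\; c \leq \gamma(a)
```
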
17

First-Order Models for Configuration Analysis

Nelson, Tim 25 April 2013 (has links)
Our world teems with networked devices. Their configuration exerts an ever-expanding influence on our daily lives. Yet correctly configuring systems, networks, and access-control policies is notoriously difficult, even for trained professionals. Automated static analysis techniques provide a way both to verify a configuration's correctness and to explore its implications. One such approach is scenario-finding: showing concrete scenarios that illustrate potential (mis-)behavior. Scenarios benefit even users without technical expertise, as concrete examples can both trigger and improve users' intuition about their system. This thesis describes a concerted research effort toward improving scenario-finding tools for configuration analysis. We developed Margrave, a scenario-finding tool with special features designed for security policies and configurations. Margrave is not tied to any one specific policy language; rather, it provides an intermediate input language as expressive as first-order logic. This flexibility allows Margrave to reason about many different types of policy. We show Margrave in action on Cisco IOS, a common language for configuring firewalls, demonstrating that scenario-finding with Margrave is useful for debugging and validating real-world configurations. This thesis also presents a theorem showing that, for a restricted subclass of first-order logic, if a sentence is satisfiable then there must exist a satisfying scenario no larger than a computable bound. For such sentences scenario-finding is complete: one can be certain that no scenarios are missed by the analysis, provided that one checks up to the computed bound. We demonstrate that many common configurations fall into this subclass and give algorithmic tests for both sentence membership and counting; we have implemented both in Margrave. Aluminum is a tool that eliminates superfluous information in scenarios and allows users' goals to guide which scenarios are displayed. We quantitatively show that our methods of scenario reduction and exploration are effective and quite efficient in practice. Our work on Aluminum is making its way into other scenario-finding tools. Finally, we describe FlowLog, a language for network programming that we created with analysis in mind. We show that FlowLog can express many common network programs, and demonstrate that automated analysis and bug-finding for FlowLog are both feasible and complete.
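
A hedged illustration of scenario-finding as model-finding, here via the Z3 solver's Python bindings rather than Margrave's own engine: we ask for a concrete scenario in which an invented over-permissive rule grants access to a non-admin.

```python
# pip install z3-solver. The access rule below is made up for illustration.
from z3 import Bools, Solver, Implies, Not, sat

admin, on_vpn, access = Bools("admin on_vpn access")

buggy_rule = Implies(on_vpn, access)     # grants access to anyone on the VPN

s = Solver()
s.add(buggy_rule)
# Scenario query: can a non-admin on the VPN end up with access?
s.add(Not(admin), on_vpn, access)
if s.check() == sat:
    print(s.model())   # a concrete witness: admin=False, on_vpn=True, access=True
```
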
18

Precise, General, and Efficient Data-flow Analysis for Security Vetting of Android Apps

Wei, Fengguo 18 June 2018 (has links)
This dissertation presents a new approach to static analysis for security vetting of Android apps and a general framework called Argus-SAF. Argus-SAF determines points-to information for all objects in an Android app component in a flow- and context-sensitive (user-configurable) way and performs data-flow and data-dependence analysis for the component. Argus-SAF also tracks inter-component communication activities. It can stitch component-level information into app-level information to perform intra-app or inter-app analysis. Moreover, Argus-SAF is NDK/JNI-aware and can efficiently track precise data flow across the language boundary. This dissertation shows that (a) this type of comprehensive app analysis is entirely feasible in terms of computing resources with modern hardware; (b) one can easily leverage the results of this general analysis to build various types of specialized security analyses – in many cases the amount of additional coding needed is around 100 lines of code; and (c) the results of those specialized analyses leveraging Argus-SAF are at least on par with, and often exceed, those of prior works designed for the specific problems, which this dissertation demonstrates by comparing Argus-SAF's results with those of prior works whenever the tool could be obtained. Since Argus-SAF's analysis directly handles inter-component and inter-language control and data flows, it can be used to address security problems that result from interactions among multiple components, from either the same or different apps, and between Java code and native code. Argus-SAF's analysis is sound in that it can assure the absence of the specified security problems in an app, under well-specified and reasonable assumptions about the Android runtime system and its libraries.
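
The toy sketch below illustrates the flavor of data-flow question such an analysis answers. Argus-SAF's real analysis works over Android IR with points-to information and context sensitivity; this toy only propagates taint through straight-line assignments, with variable names invented for the example.

```python
def tainted_sinks(assignments, sources, sinks):
    """assignments: (lhs, rhs) pairs in program order; returns tainted sinks."""
    tainted = set(sources)
    for lhs, rhs in assignments:
        if rhs in tainted:
            tainted.add(lhs)          # taint flows through the assignment
        else:
            tainted.discard(lhs)      # strong update: lhs overwritten with clean data
    return tainted & set(sinks)

# Device ID flows to an outgoing message via two copies.
prog = [("x", "imei"), ("y", "x"), ("msg", "y")]
print(tainted_sinks(prog, sources={"imei"}, sinks={"msg"}))   # {'msg'}
```
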
19

Tools for static code analysis: A survey

Hellström, Patrik January 2009 (has links)
This thesis has investigated which tools for static code analysis exist, with an emphasis on security, and which of these could be used in a project at Ericsson AB in Linköping in which a HIGA (Home IMS Gateway) is constructed. The HIGA is a residential gateway that opens up the possibility of extending an operator's IP Multimedia Subsystem (IMS) all the way to the user's home, thereby letting the end user connect his/her non-compliant IMS devices, such as a media server, to an IMS network.

Static analysis is the process of examining the source code of a program and in that way testing it for various weaknesses without actually executing it (as opposed to dynamic analysis, such as testing).

As a complement to the regular testing performed today in the HIGA project, four different static analysis tools were evaluated to find out which was best suited for use in the project. Two of them were open-source tools and two were commercial.

All of the tools were evaluated in five areas: documentation, installation and integration procedure, usability, performance, and types of bugs found. Furthermore, all of the tools were later used to test two modules of the HIGA.

The evaluation showed many differences between the tools in all areas, and, not surprisingly, the two open-source tools turned out to be far less mature than the commercial ones. The tools best suited for use in the HIGA project were Fortify SCA and Flawfinder.

As for the evaluation of the HIGA code itself, several bugs that could have jeopardized the security and availability of its services were found.
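
As a toy illustration of the lexical style of checking that a tool like Flawfinder applies (real tools also rank hits and model some context; the two-entry rule set here is an assumption):

```python
import re

RISKY = {
    "gets":   "reads without a length limit; removed in C11",
    "strcpy": "unbounded copy; prefer a bounded alternative",
}

def scan(c_source):
    """Yield (line number, function, reason) for risky calls, without running the code."""
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for func, why in RISKY.items():
            if re.search(rf"\b{func}\s*\(", line):
                yield lineno, func, why

code = 'int main(void) {\n  char b[8];\n  gets(b);\n  strcpy(b, "hi");\n}'
for lineno, func, why in scan(code):
    print(f"line {lineno}: {func}: {why}")
```
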
20

Sound Extraction of Control-Flow Graphs from open Java Bytecode Systems

de Carvalho Gomes, Pedro, Picoco, Attilio January 2012 (has links)
Formal verification techniques have been widely deployed as a means to ensure the quality of software products. Unfortunately, they suffer from the combinatorial explosion of the state space: programs have a large, sometimes infinite, number of states. A common approach to alleviating the problem is to perform the verification over abstract models of the program. Control-flow graphs (CFGs) are one of the most common such models and have been widely studied in past decades. Unfortunately, previous work on modern programming languages such as Java has either neglected features that influence control flow or failed to provide a correctness argument for the CFG construction. This is a serious issue for formal verification, where soundness of the CFGs is a mandatory condition for verifying safety-critical properties. Moreover, one may want to extract CFGs from the available components of an open system, i.e., a system in which at least one component is missing. Soundness is even harder to achieve in this scenario because of the unknown inter-dependences between software components. In the current work we present a framework to extract control-flow graphs from open Java bytecode systems in a modular fashion. Our strategy requires the user to provide interfaces for the missing components. First, we present a formal definition of open Java bytecode systems. Next, we generalize a previous algorithm that extracts CFGs from closed programs to a modular set-up; the algorithm uses the user-provided interfaces to resolve inter-dependences involving missing components. Eventually the missing components arrive, the open system becomes closed, and it can execute. However, the arrival of a component may affect the soundness of previously extracted CFGs. We therefore define a refinement relation, a set of constraints on the arrival of components, and prove that this relation guarantees the soundness of CFGs extracted with the modular algorithm. Hence the control-flow safety properties verified over the original CFGs still hold in the refined model. We implemented the modular extraction framework in the ConFlEx tool, together with the reuse of results from previous extractions to enable the incremental extraction of a newly arrived component. Our technique performs substantial over-approximations to achieve soundness. Despite this, our test cases show that ConFlEx is efficient, and that extraction of the CFGs gains considerable speed-up from reusing the results of previous analyses. / Verification of Control-Flow Properties of Programs with Procedures (CVPP)
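
A minimal sketch of CFG construction over an invented bytecode-like instruction list; this is not ConFlEx's algorithm, which must also handle exceptions, virtual dispatch, and missing components.

```python
def build_cfg(instructions):
    """instructions: list of (opcode, arg); arg is a jump target index.
    Returns the set of control-flow edges between instruction indices."""
    edges = set()
    for i, (op, arg) in enumerate(instructions):
        if op == "goto":
            edges.add((i, arg))
        elif op == "if":                      # conditional: fall through or jump
            edges.add((i, i + 1))
            edges.add((i, arg))
        elif op != "return" and i + 1 < len(instructions):
            edges.add((i, i + 1))             # plain fall-through
    return edges

prog = [("load", None), ("if", 3), ("goto", 4), ("store", None), ("return", None)]
print(sorted(build_cfg(prog)))   # [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]
```
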
