1. Next Generation Black-Box Web Application Vulnerability Analysis Framework. January 2017.
Web applications are an incredibly important aspect of our modern lives. Organizations
and developers use automated vulnerability analysis tools, also known as
scanners, to automatically find vulnerabilities in their web applications during development.
Scanners have traditionally fallen into two types of approaches: black-box
and white-box. In the black-box approaches, the scanner does not have access to the
source code of the web application whereas a white-box approach has access to the
source code. Today’s state-of-the-art black-box vulnerability scanners employ various
methods to fuzz and detect vulnerabilities in a web application. However, these
scanners attempt to fuzz the web application with a number of known payloads, trying
to trigger a vulnerability. This technique is simple, but it has no understanding of
the web application under test. This thesis presents a new approach to vulnerability
analysis. The vulnerability analysis module presented uses a novel approach
of Inductive Reverse Engineering (IRE) to understand and model the web application.
IRE first attempts to understand the behavior of the web application by supplying
a number of inputs to it and observing the resulting outputs. Then, the IRE module
hypothesizes a set of programs (in a limited language specific to web applications,
called AWL) that satisfy the input/output pairs. These hypotheses take the form of
a directed acyclic graph (DAG). The AWL vulnerability analysis module can then attempt
to detect vulnerabilities in this DAG. Further, it generates a payload based on the
DAG, so the payload is precisely targeted at the potential vulnerability
(based on our understanding of the program). It then tests the potential
vulnerability by sending the generated payload to the actual web application, and runs
a verification procedure that checks, based on the web application's response, whether
the potential vulnerability is real. / Dissertation/Thesis / Master's Thesis Computer Science 2017
2. Static Program Analysis. Shrestha, Jayesh. January 2013.
No description available.
3. Data Lineage Analysis of Frameworks with Complex Interaction Patterns. Hýbl, Oskar. January 2020.
Manta Flow is a tool for analyzing data flow in enterprise environments. It features the Java scanner, a module that uses static analysis to determine the flows through Java applications. To analyze an application built on a particular framework, the scanner requires a dedicated plugin. Although the Java scanner provides plugins for several frameworks, to be usable for real applications it is essential that the scanner support as many frameworks as possible, which requires the implementation of new plugins. Applications using Apache Spark, a framework for cluster computing, are increasingly popular. We therefore designed and implemented a Java scanner plugin that allows the scanner to analyze Spark applications. As Spark focuses on data processing, this presented several challenges that were not encountered in other frameworks. In particular, it was necessary to resolve the data schema in various scenarios and track the schema changes throughout any operations invoked on the data. Of the multiple APIs Spark provides for data processing, we focused on the Spark SQL module, notably on Dataset, omitting the legacy RDD. We also implemented support for data access, covering JDBC and chosen file formats. The implementation has been thoroughly tested and has been shown to work correctly as a part of Manta Flow, which features the plugin in...
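
A core difficulty the abstract mentions is resolving the data schema and tracking its changes through operations on a Dataset. A minimal sketch of that idea, using a toy schema model and a transfer function per operation (assumed, simplified semantics; not Manta Flow's or Spark's actual API):

    from dataclasses import dataclass

    @dataclass
    class Schema:
        """Toy model of a Dataset schema: an ordered list of column names."""
        columns: list

    def propagate(schema, op, *args):
        """Hypothetical transfer function: given the schema before an
        operation, compute the schema after it. A static analyzer applies
        such a function for every Dataset operation it encounters."""
        if op == "select":        # keep only the named columns
            return Schema([c for c in schema.columns if c in args])
        if op == "withColumn":    # add (or replace) one column
            name = args[0]
            return Schema([c for c in schema.columns if c != name] + [name])
        if op == "drop":          # remove the named columns
            return Schema([c for c in schema.columns if c not in args])
        if op == "join":          # columns of both sides, left side first
            other = args[0]
            extra = [c for c in other.columns if c not in schema.columns]
            return Schema(schema.columns + extra)
        raise ValueError(f"no transfer function for {op!r}")

    # Schema flow for: df.withColumn("total", ...).select("id", "total")
    s = Schema(["id", "price", "qty"])
    s = propagate(s, "withColumn", "total")
    s = propagate(s, "select", "id", "total")
    print(s.columns)  # ['id', 'total'] -- the lineage-relevant columns
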
4. From Theory to Practice: Deployment-grade Tools and Methodologies for Software Security. Rahaman, Sazzadur. 25 August 2020.
Following proper guidelines and recommendations is crucial in software security, and is often undermined by accidental human error. Automatic screening tools have great potential to reduce the gap between theory and practice. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. The main technical enabler for CryptoGuard is a set of detection algorithms that refine program slices by leveraging language-specific insights, while TaintCrypt relies on symbolic execution-based path-sensitive analysis to reduce false positives. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, demonstrating their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous vulnerabilities that were previously unknown. I also designed a specification language named SpanL to easily express rules for automated code screening. SpanL enables domain experts to create domain-specific security checks. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking has induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, issuing certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant. In light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the dedicated companies that host, manage, and maintain the PCI certification testbeds, so that the testbeds can be improved. / Doctor of Philosophy
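
A minimal sketch of the kind of check a CryptoGuard-style tool performs, assuming a toy three-address IR: slice backwards from a security-sensitive call and report when its argument bottoms out in a constant, i.e. a hard-coded secret. This deliberately omits the slice-refinement algorithms and path sensitivity the abstract describes.

    # Toy three-address IR: (target, operation, operand).
    # A constant flowing into a key parameter is a vulnerability.
    PROGRAM = [
        ("k0", "const", "s3cr3t-key"),    # hard-coded string literal
        ("k1", "getBytes", "k0"),
        ("iv", "random", None),           # freshly generated: fine
        ("spec", "SecretKeySpec", "k1"),  # security-sensitive sink
    ]
    SENSITIVE = {"SecretKeySpec"}

    def definitions(program):
        """Map each variable to the operation that defines it."""
        return {target: (op, arg) for target, op, arg in program}

    def derives_from_const(var, defs):
        """Backward slice from var through assignments; True if the
        slice bottoms out in a constant (a hard-coded secret)."""
        while var in defs:
            op, arg = defs[var]
            if op == "const":
                return True
            if op == "random":
                return False
            var = arg  # follow the single operand upstream
        return False

    defs = definitions(PROGRAM)
    for target, op, arg in PROGRAM:
        if op in SENSITIVE and derives_from_const(arg, defs):
            print(f"ALERT: {op} called with a hard-coded key via {arg}")
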
5. Analyzing Data Lineage in Database Frameworks. Eliáš, Richard. January 2019.
Large information systems are typically implemented using frameworks and libraries. An important property of such systems is data lineage: the flow of data loaded from one system (e.g. a database), through the program code, and back to another system. We implemented the Java Resolver tool for data lineage analysis of Java programs, based on the Symbolic analysis library for computing data lineage of simple Java applications. The library supports only JDBC and I/O APIs to identify the sources and sinks of data flow. We proposed some architecture changes to the library to make it easily extensible by plugins that can add support for new data processing frameworks. We implemented such plugins for a few frameworks with different approaches to accessing data, including Spring JDBC, MyBatis and Kafka. Our tests show that this approach works and is usable in practice.
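
The plugin-based extensibility described here can be sketched as a registry that a framework-agnostic core consults to classify calls as data sources or sinks. The interface and method names below are hypothetical, not the Java Resolver's actual API:

    class LineagePlugin:
        """One plugin per framework: it tells the core analysis which
        of the framework's calls act as data sources or sinks."""
        def source_methods(self):
            raise NotImplementedError
        def sink_methods(self):
            raise NotImplementedError

    class JdbcPlugin(LineagePlugin):
        def source_methods(self):
            return {"ResultSet.getString", "ResultSet.getInt"}
        def sink_methods(self):
            return {"PreparedStatement.setString", "Statement.executeUpdate"}

    class KafkaPlugin(LineagePlugin):
        def source_methods(self):
            return {"KafkaConsumer.poll"}
        def sink_methods(self):
            return {"KafkaProducer.send"}

    class LineageAnalyzer:
        """The core analysis is framework-agnostic; registered plugins
        decide which calls start or end a data flow."""
        def __init__(self):
            self.plugins = []
        def register(self, plugin):
            self.plugins.append(plugin)
        def classify(self, call):
            for p in self.plugins:
                if call in p.source_methods():
                    return "source"
                if call in p.sink_methods():
                    return "sink"
            return "transform"

    analyzer = LineageAnalyzer()
    analyzer.register(JdbcPlugin())
    analyzer.register(KafkaPlugin())
    print(analyzer.classify("KafkaConsumer.poll"))           # source
    print(analyzer.classify("PreparedStatement.setString"))  # sink
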
6. Precision Improvement and Cost Reduction for Defect Mining and Testing. Sun, Boya. 31 January 2012.
No description available.
7. On-the-Fly Dynamic Dead Variable Analysis. Self, Joel P. 22 March 2007.
State explosion continues to be the primary obstacle to the widespread use of software model checking. The large input ranges of variables used in software are the main cause of state explosion, and as software grows in size and complexity the problem only becomes worse. As such, model checking research into data abstraction as a way of mitigating state explosion has become more and more important. Data abstractions aim to reduce the effect of large input ranges. This work focuses on a static program analysis technique called dead variable analysis. The goal of dead variable analysis is to discover variable assignments that are never used. When applied to model checking, this allows us to ignore the entire input range of dead variables and thus reduce the size of the explored state space. Prior research into dead variable analysis for model checking does not make full use of the dynamic run-time information that is present during model checking. We present an algorithm for intraprocedural dead variable analysis that uses dynamic run-time information to find more dead variables on-the-fly and further reduce the size of the explored state space. We introduce a definition of the maximal state space reduction possible through an on-the-fly dead variable analysis and then show that our algorithm produces a maximal reduction in the absence of non-determinism.
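
Classic dead variable analysis is a backward liveness computation: walking a block from last statement to first, an assignment to a variable that is not live afterwards is dead. The sketch below shows the purely static flavor on a straight-line block; the thesis's contribution, folding in dynamic run-time information during model checking, is not modeled in this toy.

    # A straight-line block as (assigned_var, used_vars) per statement.
    BLOCK = [
        ("x", {"a"}),       # x = a + 1
        ("y", {"x"}),       # y = x * 2
        ("x", {"b"}),       # x = b      (redefines x)
        ("z", {"c"}),       # z = c      (z is never read below: dead)
        ("r", {"y", "x"}),  # r = y + x
    ]
    LIVE_OUT = {"r"}        # what the rest of the program reads

    def dead_assignments(block, live_out):
        """Walk the block backwards, tracking live variables. Each
        assignment to a non-live variable is dead, so a model checker
        may ignore that variable's entire input range at that point."""
        live = set(live_out)
        dead = []
        for idx in range(len(block) - 1, -1, -1):
            var, uses = block[idx]
            if var not in live:
                dead.append((idx, var))
            live.discard(var)  # the assignment kills var
            live |= uses       # its operands become live
        return dead

    print(dead_assignments(BLOCK, LIVE_OUT))  # [(3, 'z')]
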