61

Instrumentace a vyhodnocení pro dynamickou analýzu aplikací. / Instrumentation and Evaluation for Dynamic Program Analysis

Marek, Lukáš January 2014 (has links)
Dynamic program analysis provides essential information during the later phases of application development: it helps with debugging, profiling, performance optimization, and vulnerability detection. Despite that, support for creating custom dynamic analysis tools, especially in the domain of managed languages, is rather limited. In this thesis, we present two systems that help improve application observability on the Java platform. DiSL is a language, accompanied by a framework, allowing simple and flexible instrumentation for dynamic program analysis. DiSL provides high-level abstractions that enable quick prototyping even for programmers without knowledge of Java internals, while a skilled analysis developer gains full control over the instrumentation process and thus does not have to worry about unwanted allocations or hidden execution overhead. ShadowVM is a platform that isolates the observed application from the analysis environment. To reduce the number of possible interactions between the analysis and the application, ShadowVM offloads analysis events out of the context of the application. Even though isolation is the primary focus of the platform, ShadowVM introduces a number of techniques to remain comparable in performance and provide a programming model similar to existing...
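As a concrete illustration of the programming model: in the published DiSL examples, an instrumentation is written as a class of annotated static methods ("snippets") that the framework inlines at the program locations selected by a marker. A minimal method-tracing sketch in that style follows; the package names, annotations, and the MethodStaticContext API are taken from the DiSL papers and may differ across DiSL versions.

```java
import ch.usi.dag.disl.annotation.After;
import ch.usi.dag.disl.annotation.Before;
import ch.usi.dag.disl.marker.BodyMarker;
import ch.usi.dag.disl.staticcontext.MethodStaticContext;

// Minimal DiSL-style instrumentation: print a message on method entry
// and exit. BodyMarker selects whole method bodies; the static context
// provides compile-time information about the instrumented method.
public class MethodTracer {

    @Before(marker = BodyMarker.class)
    static void onEntry(final MethodStaticContext msc) {
        System.out.println("entering " + msc.thisMethodFullName());
    }

    @After(marker = BodyMarker.class)
    static void onExit(final MethodStaticContext msc) {
        System.out.println("leaving " + msc.thisMethodFullName());
    }
}
```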
62

Raisonnement automatisé pour la logique de séparation avec des définitions inductives / Automated reasoning in separation logic with inductive definitions

Serban, Cristina 31 May 2018 (has links)
The main contribution of this thesis is a sound and complete proof system for entailments between inductive predicates, which are frequently encountered when verifying programs that work with dynamically allocated recursive data structures. We introduce a generalized proof system for first-order logic and then adapt it to separation logic, a framework that addresses many of the difficulties posed by reasoning about dynamically allocated heaps. Soundness and completeness are ensured through four semantic restrictions, and we also propose a proof-search semi-algorithm that becomes a decision procedure for the entailment problem when the semantic restrictions hold. This higher-order reasoning about entailments requires first-order decision procedures for the underlying logic when applying inference rules and during proof search. Thus, we provide two decision procedures for separation logic, covering the quantifier-free and the Exists*Forall*-quantified fragments, which were integrated into the open-source, DPLL(T)-based SMT solver CVC4. Finally, we give an implementation of our proof system for separation logic that uses these decision procedures. Given inductive predicate definitions and an entailment query as input, it issues a warning when one or more semantic restrictions are violated; if the entailment is found to be valid, the output is a proof, and otherwise one or more counterexamples are provided.
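For a flavor of the entailment problems involved, here is the standard inductively defined list-segment predicate together with a classic valid entailment query — a textbook example, not one taken from the thesis:

```latex
% Inductive list-segment predicate: either the segment is empty,
% or x points to some z from which a segment to y continues.
\[
\mathit{ls}(x, y) ::= (x = y \wedge \mathsf{emp})
  \vee \exists z.\; x \mapsto z \ast \mathit{ls}(z, y)
\]
% A valid entailment: two consecutive list segments ending in nil
% form a single null-terminated list.
\[
\mathit{ls}(x, y) \ast \mathit{ls}(y, \mathsf{nil}) \vdash \mathit{ls}(x, \mathsf{nil})
\]
```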
63

Analýza datových toků ve databázových systémech / Analyzing Data Lineage in Database Frameworks

Eliáš, Richard January 2019 (has links)
Large information systems are typically implemented using frameworks and libraries. An important property of such systems is data lineage: the flow of data loaded from one system (e.g., a database), through the program code, and back to another system. We implemented the Java Resolver tool for data lineage analysis of Java programs, based on the Symbolic analysis library for computing data lineage of simple Java applications. The library supports only the JDBC and I/O APIs for identifying the sources and sinks of data flow. We proposed architecture changes to the library to make it easily extensible by plugins that add support for new data processing frameworks, and we implemented such plugins for several frameworks with different approaches to accessing data, including Spring JDBC, MyBatis, and Kafka. Our tests show that this approach works and is usable in practice.
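To make the plugin architecture concrete, a hypothetical sketch of what such a plugin contract could look like is shown below; all names here are illustrative assumptions, not the actual API of the Java Resolver tool or the Symbolic analysis library.

```java
// Hypothetical plugin contract for teaching the lineage analysis about a
// new data processing framework. MethodCall and LineageNode stand in for
// whatever call-site and graph abstractions the host library provides.
interface MethodCall {
    String ownerClass();   // e.g. "org.springframework.jdbc.core.JdbcTemplate"
    String methodName();   // e.g. "query"
}

interface LineageNode { }

interface LineagePlugin {
    /** Identifier of the framework this plugin understands. */
    String frameworkId();  // e.g. "spring-jdbc", "mybatis", "kafka"

    /** Does this call site read data from an external system (a source)? */
    boolean isSource(MethodCall call);

    /** Does this call site write data to an external system (a sink)? */
    boolean isSink(MethodCall call);

    /** Translate a recognized call into a node of the lineage graph. */
    LineageNode describe(MethodCall call);
}
```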
64

FUZZING HARD-TO-COVER CODE

Hui Peng (10746420) 06 May 2021 (has links)
Fuzzing is a simple yet effective approach to discovering bugs by repeatedly testing the target system with randomly generated inputs. In this thesis, we identify several limitations in state-of-the-art fuzzing techniques: (1) the coverage wall issue — fuzzer-generated inputs cannot bypass complex sanity checks in the target programs and are unable to cover code paths protected by such checks; (2) the inability to adapt to the interfaces through which fuzzer-generated inputs are injected — one important example is the software/hardware interface between drivers and their devices; (3) the dependency on code coverage feedback, which makes it hard to apply fuzzing to targets where code coverage collection is challenging (due to proprietary components or special software design).

To address the coverage wall issue, we propose T-Fuzz, a novel approach that overcomes the issue from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the coverage wall is reached, a lightweight, dynamic-tracing-based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs to be discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic-execution-based approach to filter out false positives and reproduce true bugs in the original program.

By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge (CGC) dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, we found 4 new bugs in previously fuzzed programs and libraries.

To address the inability to adapt to interfaces, we propose USBFuzz. We target the USB interface, fuzzing across the software/hardware barrier. USBFuzz uses device emulation to inject fuzzer-generated input into drivers under test, and applies coverage-guided fuzzing to device drivers when code coverage collection is supported by the kernel. At its core, USBFuzz emulates a special USB device that provides data to the device driver (when it performs I/O operations). This allows us to fuzz the input space of drivers from the device's perspective, an angle that is difficult to achieve with real hardware. USBFuzz discovered 53 bugs in Linux (of which 37 are new, and 36 are memory bugs of high security impact, potentially allowing arbitrary reads or writes in the kernel address space), one bug in FreeBSD, four bugs (resulting in Blue Screens of Death) in Windows, and three bugs (two causing an unplanned restart, one freezing the system) in macOS.

To break the dependency on code coverage feedback, we propose WebGLFuzzer, which fuzzes the WebGL interface (a set of JavaScript APIs in browsers enabling high-performance graphics rendering with GPU acceleration on the device), where code coverage collection is challenging. WebGLFuzzer internally uses a log-guided fuzzing technique: rather than depending on code coverage feedback, it uses the log messages emitted by browsers to guide its input mutation. Compared with coverage-guided fuzzing, our log-guided technique performs more meaningful mutation under the guidance of the log messages. To this end, WebGLFuzzer uses static analysis to identify, given a log message emitted by the browser, which argument to mutate or which API call to insert into the current program to fix the internal WebGL program state. WebGLFuzzer is under evaluation and has so far found 6 bugs, one of which is able to freeze the X Server.
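The transform-and-fuzz workflow of T-Fuzz described above can be summarized in a short sketch. This is a hedged illustration, not T-Fuzz's actual code; the fuzzing, tracing, and symbolic-execution steps are abstracted behind function parameters with invented names.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of the T-Fuzz outer loop: fuzz, and whenever coverage stalls,
// remove the sanity checks the inputs fail, then keep fuzzing the
// transformed program. Candidate crashes are re-validated against the
// original program (the symbolic-execution post-processing step).
final class CheckRemovalLoop {

    record Round(Set<String> crashes, Set<String> failedChecks, boolean stalled) {}

    static Set<String> run(final Function<Set<String>, Round> fuzzWithoutChecks,
                           final Predicate<String> reproducesOnOriginal,
                           final int maxRounds) {
        final Set<String> removedChecks = new HashSet<>();
        final Set<String> candidates = new HashSet<>();
        for (int i = 0; i < maxRounds; i++) {
            final Round r = fuzzWithoutChecks.apply(removedChecks); // coverage-guided fuzzing
            candidates.addAll(r.crashes());
            if (!r.stalled()) continue;             // still gaining coverage
            if (r.failedChecks().isEmpty()) break;  // wall reached, nothing removable
            removedChecks.addAll(r.failedChecks()); // transform: drop the failed checks
        }
        // filter out false positives introduced by the transformation
        candidates.removeIf(c -> !reproducesOnOriginal.test(c));
        return candidates;
    }
}
```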
65

Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers

Taegyu Kim (10716420) 06 May 2021 (has links)
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAV security and safety issues can arise not only from the “cyber” domain (e.g., control software) or the “physical” domain (e.g., the vehicle control model) individually, but also from their interplay. Unfortunately, existing work has focused mainly on either “cyber-centric” or “control-centric” approaches, and such a single-domain focus can overlook the security threats caused by the interplay between the cyber and physical domains.

In this thesis, we present cyber-physical analysis and hardening techniques to secure RAV controllers. Through a combination of program analysis and vehicle control modeling, we developed novel techniques to (1) connect the cyber and physical domains and then (2) analyze the individual domains and their interplay. Specifically, we describe how to detect bugs after RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing (RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As a result, we have found 91 new bugs in modern RAV control programs; their developers have confirmed 32 cases and patched 11.
66

Verifying Data-Oriented Gadgets in Binary Programs to Build Data-Only Exploits

Sisco, Zachary David 08 August 2018 (has links)
No description available.
67

PRECISION IMPROVEMENT AND COST REDUCTION FOR DEFECT MINING AND TESTING

Sun, Boya 31 January 2012 (has links)
No description available.
68

Practical Support for Strong, Serializability-Based Memory Consistency

Biswas, Swarnendu 30 December 2016 (has links)
No description available.
69

Runtime specialization for heterogeneous CPU-GPU platforms

Farooqui, Naila 27 May 2016 (has links)
Heterogeneous parallel architectures like those composed of CPUs and GPUs are a tantalizing compute fabric for performance-hungry developers. While these platforms enable order-of-magnitude performance increases for many data-parallel application domains, several challenges remain open: (i) the distinct execution models inherent in the heterogeneous devices on such platforms drive the need to dynamically match workload characteristics to the underlying resources; (ii) the complex architecture and programming models of such systems require substantial application knowledge and effort-intensive program tuning to achieve high performance; and (iii) as such platforms become prevalent, there is a need to extend their utility from running known, regular data-parallel applications to the broader set of input-dependent, irregular applications common in enterprise settings. The key contribution of our research is to enable runtime specialization on such hybrid CPU-GPU platforms by matching application characteristics to the underlying heterogeneous resources for both regular and irregular workloads. Our approach enables profile-driven resource management and optimizations for such platforms, providing high application performance and system throughput. Towards this end, this research: (a) enables dynamic instrumentation for GPU-based parallel architectures, specifically targeting the complex Single-Instruction Multiple-Data (SIMD) execution model, to gain real-time introspection into application behavior; (b) leverages such dynamic performance data to support novel online resource management methods that improve application performance and system throughput, particularly for irregular, input-dependent applications; (c) automates some of the programmer effort required to exercise specialized architectural features of such platforms via instrumentation-driven dynamic code optimizations; and (d) proposes a specialized, affinity-aware work-stealing scheduling runtime for integrated CPU-GPU processors that efficiently distributes work across all CPU and GPU cores for improved load balance, taking into account both application characteristics and architectural differences of the underlying devices.
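As one way to picture contribution (d), the sketch below shows an affinity-aware variant of work stealing between one CPU worker and one GPU worker; the two-queue layout, names, and the affinity threshold are assumptions made for illustration, not the design from the dissertation.

```java
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;

// Illustrative affinity-aware work stealing: each device has its own
// deque; a worker drains its own queue first and steals from the other
// device's tail only when the stolen task's affinity is a decent match.
final class AffinityStealingSketch {

    record Task(Runnable body, double gpuAffinity) {} // 0 = CPU-friendly, 1 = GPU-friendly

    static final Deque<Task> cpuQueue = new ConcurrentLinkedDeque<>();
    static final Deque<Task> gpuQueue = new ConcurrentLinkedDeque<>();

    static Task next(final boolean gpuWorker) {
        final Deque<Task> own = gpuWorker ? gpuQueue : cpuQueue;
        final Deque<Task> victim = gpuWorker ? cpuQueue : gpuQueue;
        final Task local = own.pollFirst();
        if (local != null) return local;
        final Task stolen = victim.pollLast();    // steal from the cold end
        if (stolen == null) return null;
        final boolean goodMatch = gpuWorker ? stolen.gpuAffinity() >= 0.5
                                            : stolen.gpuAffinity() < 0.5;
        if (goodMatch) return stolen;
        victim.addLast(stolen);                   // poor match: leave it for its owner
        return null;                              // caller should back off and retry
    }
}
```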
70

Space cost analysis using sized types

Vasconcelos, Pedro B. January 2008 (has links)
Programming resource-sensitive systems, such as real-time embedded systems, requires guaranteeing both the functional correctness of computations and that time and space usage fit within constraints imposed by hardware limits or the environment. Functional programming languages have proved very good at meeting the former, logical kind of guarantee, but not the latter, resource guarantees. This thesis contributes to demonstrating the applicability of functional programming to resource-sensitive systems with an automatic program analysis for obtaining guaranteed upper bounds on the dynamic space usage of functional programs. Our analysis is developed for a core subset of Hume, a domain-specific functional language targeting resource-sensitive systems (Hammond et al. 2007), and is presented as a type and effect system that builds on previous sized type systems (Hughes et al. 1996, Chin and Khoo 2001) and effect systems for costs (Dornic et al. 1992, Reistad and Gifford 1994, Hughes and Pareto 1999). It extends previous approaches by using abstract interpretation techniques to automatically infer linear approximations of the sizes of recursive data types and the stack and heap costs of recursive functions. The correctness of the analysis is formally proved with respect to an operational semantics for the language, and an inference algorithm that automatically reconstructs size and cost bounds is presented. A prototype implementation of the analysis and the operational semantics has been constructed and used to experimentally assess the quality of the cost bounds on several examples, including implementations of textbook functional programming algorithms and simplified embedded systems.
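To illustrate the kind of result such an analysis produces, here is a sized-type signature for list append in the style of the cited sized type systems; the concrete notation and cost constants are illustrative, not taken from the thesis.

```latex
% Appending two lists yields a list whose size bound is the sum of the
% input size bounds; the heap cost is linear in the length of the first
% list (one cons cell per element), for some constants c and c0.
\[
\mathit{append} : \forall n\,m.\;
  \mathrm{List}^{n}(\alpha) \times \mathrm{List}^{m}(\alpha)
  \xrightarrow{\ \mathrm{heap}\,\le\, c \cdot n + c_0\ }
  \mathrm{List}^{n+m}(\alpha)
\]
```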
