1

Secure Computation in Heterogeneous Environments: How to Bring Multiparty Computation Closer to Practice?

Raykova, Mariana Petrova, January 2012
Many services that people use daily require computation that depends on the private data of multiple parties. While the utility of the final result of such interactions outweighs the privacy concerns related to output release, the inputs for such computations are much more sensitive and need to be protected. Secure multiparty computation (MPC) considers the question of constructing computation protocols that reveal nothing more about their inputs than what is inherently leaked by the output. There have been strong theoretical results demonstrating that every functionality can be computed securely. However, these protocols remain unused in practical solutions since they introduce efficiency overhead prohibitive for most applications. Generic multiparty computation techniques address setups that are homogeneous with respect to the resources available to the participants and the adversarial model. Realistic scenarios, on the other hand, present a wide diversity of heterogeneous environments where different participants have different available resources and different incentives to misbehave and collude. In this thesis we introduce techniques for multiparty computation that focus on heterogeneous settings. We present solutions tailored to different types of asymmetric constraints and improve the efficiency of existing approaches in these scenarios. We tackle the question from three main directions.

New Computational Models for MPC - We explore computational models that enable us to overcome the inherent inefficiencies of generic MPC solutions based on circuit representations of the evaluated functionality. First, we show how to use random access machines to construct MPC protocols that add only polylogarithmic overhead to the running time of the insecure version of the underlying functionality. This allows us to achieve MPC constructions with computational complexity sublinear in the size of their inputs, which is very important for computations that use large databases. We also consider multivariate polynomials, which yield more succinct representations than circuits for the functionalities they implement, while a large collection of problems are naturally and efficiently expressed as multivariate polynomials. We construct an MPC protocol for multivariate polynomials that improves the communication complexity of corresponding circuit solutions and provides what is currently the most efficient solution for multiparty set intersection in the fully malicious case.

Outsourcing Computation - The goal in this setting is to utilize the resources of a single powerful service provider for the work that computationally weak clients need to perform on their data. We present a new paradigm for constructing verifiable computation (VC) schemes, which enables a computationally limited client to efficiently verify the result of a large computation. Our construction is based on attribute-based encryption and avoids expensive primitives such as fully homomorphic encryption and probabilistically checkable proofs, which underlie existing VC schemes. Additionally, our solution enjoys two new useful properties: public delegation and public verification. We further introduce the model of server-aided computation, where we utilize the computational power of an outsourcing party to assist the execution and improve the efficiency of MPC protocols. For this purpose we define a new adversarial model of non-collusion, which leaves room for more efficient constructions that rely almost entirely on symmetric-key operations, while still capturing realistic adversarial behavior. In this model we propose protocols for generic secure computation that offload the work of most of the parties to the computation server. We also construct a specialized server-aided two-party set intersection protocol that achieves better efficiency for the two participants than existing solutions. In many cases outsourcing concerns only data storage, and while outsourcing the data of a single party is useful, providing a way for data sharing among different clients of the service is the more interesting and useful setup. However, this scenario brings new challenges for access control, since the access control rules and data accesses become private data for the clients with respect to the service provider. We propose an approach that offers trade-offs between the privacy provided for the clients and the communication overhead incurred for each data access.

Efficient Private Search in Practice - We consider the question of private search from a different perspective compared to traditional settings for MPC. We start with strict efficiency requirements motivated by the speeds of available hardware and what is considered acceptable overhead from a practical point of view. We then adopt relaxed definitions of privacy, which still provide meaningful security guarantees while allowing us to meet the efficiency requirements. In this setting we design a security architecture and implement a system for data sharing based on encrypted search, which achieves only 30% overhead compared to non-secure solutions on realistic workloads.
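To make the multivariate-polynomial direction concrete, here is a minimal plaintext sketch of the classic idea behind polynomial-based set intersection (the insecure core only, not Raykova's protocol): one party encodes its set as the roots of a polynomial p, and for each element y of the other party's set, the masked value r*p(y) + y equals y exactly when y lies in both sets. In a real protocol these evaluations would happen under homomorphic encryption over a finite field.

```python
import random

def set_to_poly(s):
    """Encode set s as p(x) = prod_{a in s} (x - a); coefficients low-to-high."""
    coeffs = [1]
    for a in s:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c       # x * p(x)
            new[i] -= a * c       # -a * p(x)
        coeffs = new
    return coeffs

def eval_poly(coeffs, x):
    """Horner evaluation from the highest-degree coefficient down."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def psi_core(set1, set2):
    """y is in the intersection iff r*p(y) + y == y, i.e. p(y) == 0."""
    p = set_to_poly(set1)
    out = set()
    for y in set2:
        r = random.randrange(1, 2**32)     # the mask hides p(y) when nonzero
        if r * eval_poly(p, y) + y == y:
            out.add(y)
    return out

print(psi_core({1, 2, 3}, {2, 3, 5}))  # {2, 3}
```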
2

Combining Programs to Enhance Security Software

Kang, Yuan Jochen, January 2018
Automatic threats require automatic solutions, which become automatic threats themselves. When software grows in functionality, it grows in complexity, and in the number of bugs. To keep track of and counter all the possible ways that a malicious party can exploit these bugs, we need security software. Such software helps human developers identify and remove bugs, or system administrators detect attempted attacks. But like any other software, and likely more so, security software itself can have blind spots or flaws. In the best case, it stops working and becomes ineffective. In the worst case, the security software has privileged access to the system it is supposed to protect, and the attacker can hijack those privileges for its own purposes. So we need external programs to compensate for these weaknesses. At the same time, we need to minimize the additional attack surface and development time that come with creating new solutions.

To address both points, this thesis explores how to combine multiple programs to overcome a number of weaknesses in individual security software: (1) when login authentication and physical protections of a smartphone fail, fake, decoy applications detect unauthorized usage and draw the attacker away from truly sensitive applications; (2) when a fuzzer, an automatic software testing tool, requires a diverse set of initial test inputs, manipulating the tools that a human uses to generate these inputs multiplies the generated inputs; (3) when the software responsible for detecting attacks, known as an intrusion detection system, itself needs protection against attacks, a simplified state machine tracks the software's interaction with the underlying platform, without the complexity and risks of a fully functional intrusion detection system (see the sketch below); (4) when intrusion detection systems run on multiple, independent machines, a graph-theoretic framework drives the design of how the machines cooperatively monitor each other, forcing the attacker not only to perform more work, but also to do so faster. Instead of introducing new, stand-alone security software, the above solutions require only a fixed number of new tools that rely on a diverse selection of programs that already exist. Nor do any of the programs, old or new, require privileges that the old programs did not already have. In other words, we multiply the power of security software without multiplying its risks.
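A minimal sketch of the state-machine idea in point (3), with hypothetical states and event names: a small whitelist of expected platform interactions is enough to flag when the monitored security software is driven off its normal behavior, without itself being a full intrusion detection system.

```python
# Allowed (state, event) -> next-state transitions for a hypothetical
# monitored program; anything outside this whitelist raises an alert.
TRANSITIONS = {
    ("init", "open_config"): "configured",
    ("configured", "open_socket"): "listening",
    ("listening", "read_packet"): "listening",
    ("listening", "write_alert"): "listening",
}

def monitor(events):
    state = "init"
    for event in events:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return f"ALERT: unexpected {event!r} in state {state!r}"
        state = nxt
    return f"ok (final state {state!r})"

print(monitor(["open_config", "open_socket", "read_packet", "write_alert"]))
print(monitor(["open_config", "open_socket", "exec_shell"]))  # flagged
```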
3

Security, Privacy, and Transparency Guarantees for Machine Learning Systems

Lecuyer, Mathias, January 2019
Machine learning (ML) is transforming a wide range of applications, promising to bring immense economic and social benefits. However, it also raises substantial security, privacy, and transparency challenges. ML workloads push companies toward aggressive data collection and loose data access policies, placing troves of sensitive user information at risk if the company is hacked. ML also introduces new attack vectors, such as adversarial example attacks, which can completely nullify models' accuracy under attack. Finally, ML models make complex data-driven decisions that are opaque to end-users and difficult for programmers to inspect. In this dissertation we describe three systems we developed, each addressing one of these challenges by combining new practical systems techniques with rigorous theory to achieve a guaranteed level of protection and make systems easier to understand. First, we present Sage, a differentially private ML platform that enforces a meaningful protection semantic for the troves of personal information amassed by today's companies. Second, we describe PixelDP, a defense against adversarial examples that leverages differential privacy theory to provide a guaranteed level of accuracy under attack. Third, we introduce Sunlight, a tool that enhances the transparency of opaque targeting services, using rigorous causal inference theory to explain targeting decisions to end-users.
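Both Sage and PixelDP build on differential privacy, whose basic building block fits in a few lines: perturb a query answer with Laplace noise scaled to the query's sensitivity divided by the privacy parameter epsilon. This is the textbook Laplace mechanism, shown here for orientation, not the systems machinery of either platform.

```python
import random

def laplace_noise(scale):
    # A Laplace(0, scale) sample is the difference of two Exp(1) samples,
    # multiplied by the scale.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def dp_count(values, predicate, epsilon):
    """Counting queries have L1 sensitivity 1 (one person changes the count
    by at most 1), so Laplace(1/epsilon) noise yields epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 18]
print(dp_count(ages, lambda a: a >= 30, epsilon=0.5))  # noisy count near 3
```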
4

Ethernet sniffing: a big threat to network security

Mukantabana, Beatrice, January 1994
Networks play an important role in today's information age. The need to share information and resources makes networks a necessity in almost any computing environment. In many cases, the network can be thought of as a large, distributed computer, with disks and other resources on big systems being shared by smaller workstations on people's desks.

Security has long been an object of concern and study for both data processing systems and communications facilities. With computer networks, these concerns are combined, and for local networks, the problems may be more acute. Consider a full-capacity local network, with direct terminal access to the network, data files, and applications distributed among a variety of processors. This network may also provide access to and from long-haul communications and be part of an internet. Clearly, the task of providing security in such a complex environment is quite involved.

The subject of security is a broad one and encompasses physical and administrative controls. The aim of this research is to explore the security problems pertaining to Ethernet networks. Different approaches to obtaining a secure Ethernet environment are also discussed. / Department of Computer Science
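The attack this thesis analyzes is still easy to demonstrate (a hedged, Linux-only sketch in modern Python, not code from the 1994 thesis): a raw packet socket sees every frame the interface accepts, and on a shared Ethernet segment, the norm at the time, that includes other hosts' unencrypted traffic. Requires root; run only on networks you are authorized to monitor.

```python
import socket
import struct

ETH_P_ALL = 0x0003  # capture frames of every protocol
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                        socket.htons(ETH_P_ALL))

for _ in range(5):  # print the Ethernet header of five frames, then stop
    frame, _ = sniffer.recvfrom(65535)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    print(src.hex(":"), "->", dst.hex(":"), f"ethertype=0x{ethertype:04x}")
```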
5

Easy Encryption for Email, Photo, and Other Cloud Services

Koh, John Seunghyun, January 2021
Modern users carry mobile devices with them at nearly all times, and this has likely contributed to the rapid growth of private user data—such as emails, photos, and more—stored online in the cloud. Unfortunately, the security of many cloud services for user data is lacking, and the vast amount of user data stored in the cloud is an attractive target for adversaries. Even a single compromise of a user's account yields all its data to attackers. A breach of an unencrypted email account gives the attacker full access to years, even decades, of emails. Ideally, users would encrypt their data to prevent this. However, encrypting data at rest has long been considered too difficult for users, even technical ones, mainly due to the confusing nature of managing cryptographic keys. My thesis is that strong security can be made easy to use through client-side encryption using self-generated per-device cryptographic keys, such that user data in cloud services is well protected, encryption is transparent and largely unnoticeable to users even on multiple devices, and encryption can be used with existing services without any server-side modifications.

This dissertation introduces a new paradigm for usable cryptographic key management, Per-Device Keys (PDK), and explores how self-generated keys unique to every device can enable new client-side encryption schemes that are compatible with existing online services yet transparent to users. PDK's design based on self-generated keys allows them to stay on each device and never leave it. Management of these keys can be presented to users as a device management abstraction that looks like pairing devices with each other, not as any form of cryptographic key management. I design, implement, and evaluate three client-side encryption schemes supported by PDK, designed around usability to bring transparent encryption to users.

First, I introduce Easy Email Encryption (E3), a secure email solution that is easy to use. Users struggle with end-to-end encrypted email, such as PGP and S/MIME, because it requires them to understand cryptographic key exchanges to send encrypted emails. E3 eliminates this key exchange by encrypting stored emails rather than sent ones. E3 transparently encrypts emails on receipt, ensuring that all emails received before a compromise are protected from attack, and relies on widely used TLS connections to protect in-flight emails. Emails are encrypted using self-generated keys, which are completely hidden from the user and never need to be exchanged with other users, alleviating the burden of knowing how to use and manage them. E3 encrypts on the client, making it easy to deploy because it requires no server or protocol changes and is compatible with any existing email service. Experimental results show that E3 is compatible with existing IMAP email services, including Gmail and Yahoo!, and has good performance for common email operations. Results of a user study show that E3 provides much stronger security guarantees than current practice yet is much easier to use than end-to-end encrypted email such as PGP.

Second, I introduce Easy Secure Photos (ESP), an easy-to-use system that enables photos to be encrypted and stored using existing cloud photo services. Users cannot store encrypted photos in services like Google Photos because these services only accept valid images such as JPEG files, while typical encryption methods do not retain the image file format and are incompatible with image processing such as compression. ESP introduces a new image encryption technique that outputs valid encrypted JPEG files, which are accepted by cloud photo services and are robust against compression. Photos are encrypted with self-generated keys before being uploaded to cloud photo services, and are decrypted when downloaded to users' devices. Like E3, ESP hides all details of encryption/decryption and key management from the user. Since all cryptographic operations happen in the user's photo app, ESP requires no changes to existing cloud photo services, making it easy to deploy. Experimental results and user studies show that ESP encryption is robust against attack techniques, exhibits acceptable performance overheads, and is simple for users to set up and use.

Third, I introduce Easy Device-based Passwords (EDP), a password manager with improved security guarantees over existing ones that maintains their familiar usage models. To encrypt and decrypt user passwords, existing password managers rely on weak, human-generated master passwords, which are easy to use but easily broken. EDP introduces a new approach using self-generated keys to encrypt passwords, and an easy-to-use pairing mechanism to let users access passwords across multiple devices. Keys are not exposed to users, and users do not need to know anything about key management. EDP is the first password manager that secures passwords even with untrusted servers, protecting against server break-ins and password database leaks. Experimental results and a user study show that EDP ensures password security with untrusted servers and infrastructure, has comparable performance to existing password managers, and is considered usable by users.
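A minimal single-device sketch of the client-side pattern PDK enables: a self-generated key created on the device, never uploaded, with all cryptography hidden behind ordinary store/read calls. It borrows Fernet from the third-party cryptography package for brevity; the actual E3/ESP/EDP schemes (per-device key pairs, device pairing, format-preserving JPEG encryption) are more involved, and the key file location here is hypothetical.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

KEY_FILE = Path.home() / ".pdk_device_key"  # hypothetical per-device location

def device_key():
    """Self-generated per-device key: created once, never leaves the device."""
    if not KEY_FILE.exists():
        KEY_FILE.write_bytes(Fernet.generate_key())
    return Fernet(KEY_FILE.read_bytes())

def encrypt_on_receipt(message: bytes) -> bytes:
    # Encrypt incoming data before it is stored back to the cloud, so the
    # provider only ever holds ciphertext at rest.
    return device_key().encrypt(message)

def read_message(token: bytes) -> bytes:
    return device_key().decrypt(token)

token = encrypt_on_receipt(b"meeting notes: q3 roadmap")
print(read_message(token))
```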
6

Hardware-Software Co-design for Practical Memory Safety

Hassan, Mohamed, January 2022
A vast amount of software, from low-level systems code to high-performance applications, is written in memory-unsafe languages such as C and C++. The lack of memory safety in C/C++ can lead to severe consequences; a simple buffer overflow can result in code or data corruption anywhere in the program memory. The problem is even worse in systems that constantly operate on inputs of unknown trustworthiness. For example, in 2021 a memory safety vulnerability was discovered in sudo, a near-ubiquitous utility available on major Unix-like operating systems. The vulnerability, which remained silent for over 10 years, allows any unprivileged user to gain root privileges on a victim machine using a default sudo configuration. As memory-safe languages are unlikely to displace C/C++ in the near future, efficient memory safety mechanisms for both existing and future C/C++ code are needed. Both industry and academia have proposed various techniques to address the C/C++ memory safety problem over the last three decades, through either software-only or hardware-assisted solutions. Software-only techniques such as Google's AddressSanitizer are used to detect memory errors during the testing phase before products are shipped. While sanitizers have been shown to be effective at detecting memory errors with little effort, they typically suffer from high runtime overheads and increased memory footprint. Hardware-assisted solutions such as Oracle's Application Data Integrity (ADI) and ARM's Memory Tagging Extension (MTE) have much lower performance overheads, but they do not offer complete protection. Academic proposals manage to minimize the performance costs of memory safety defenses while maintaining fine-grained security protection. Unfortunately, state-of-the-art solutions require complex metadata that increases the program memory footprint, complicates the hardware design, and breaks compatibility with the rest of the system (e.g., unprotected libraries). To address these problems, the research in this thesis innovates in the realm of compiler transformations and hardware extensions to improve the state of the art in memory safety solutions. Specifically, this thesis shows that leveraging common software trends and rethinking computer microarchitectures can efficiently circumvent the problems of traditional memory safety solutions for C and C++.

First, I present a novel cache line formatting technique, dubbed Califorms. Califorms builds on a concept called memory blocklisting, which prohibits a program from accessing certain memory regions based on program semantics. State-of-the-art hardware-assisted memory blocklisting, while much faster than software blocklisting, creates memory fragmentation for each use of a blocklisted location. To prevent this issue, Califorms encodes the metadata that identifies the blocklisted locations in the blocklisted (i.e., dead) locations themselves. This inlined metadata can then be integrated into the microarchitecture by changing the cache line format. As a result, metadata and data are fetched together, eliminating the need for extra memory accesses. Hence, Califorms reduces the performance overheads of memory safety while providing byte-granular protection and maintaining very low hardware overheads.

Second, I explore how leveraging common software trends can reduce the performance and memory costs of memory permitlisting (also known as base & bounds), and present No-FAT, a novel technique for enforcing spatial and temporal memory safety. The key observation that enables No-FAT is the increasing adoption of binning allocators. When used with a binning allocator, No-FAT implicitly derives an allocation's bounds information (i.e., the base address and size) from the pointer itself, without relying on expensive metadata. Moreover, as No-FAT's memory instructions are aware of allocation bounds, No-FAT effectively mitigates certain speculative attacks (e.g., Spectre-V1, also known as bounds-check bypass) at no additional cost. While No-FAT successfully detects memory safety violations, it falls short against physical attacks. Hence, I propose C-5, an architecture that complements No-FAT with strong data encryption. C-5 enforces access control strictly in the L1 cache and encrypts program data at the L1-L2 cache interface. As a result, C-5 mitigates both in-process and physical attacks without burdening system performance.

In addition to memory blocklisting and permitlisting, a cost-effective way to alleviate memory safety threats is to deploy exploit mitigation techniques (e.g., Intel's CET and ARM's PAC). Unfortunately, current exploit mitigations offer incomplete security protection in order to save on performance. This thesis investigates opportunities to boost the security guarantees of exploit mitigations while maintaining their low overheads. Thus, I present ZeRØ, a hardware primitive that preserves pointer integrity at no performance cost, effectively mitigating pointer manipulation attacks such as ROP, COP, JOP, COOP, and DOP. ZeRØ proposes unique memory instructions and a novel metadata encoding scheme to protect code and data pointers from memory safety violations. The combination of instructions and metadata allows ZeRØ to avoid explicitly tagging every word in memory. On 64-bit systems, ZeRØ encodes the pointer type and location in the currently unused upper pointer bits. This way ZeRØ reduces the performance overhead of enforcing pointer integrity to zero while requiring only simple hardware modifications.

Finally, although current mitigation techniques excel at providing efficient protection for high-end devices, they typically suffer from significant performance and energy overheads when ported to the embedded domain. There is thus a need for new defenses that (1) have low overheads, (2) provide high security coverage, and (3) are designed specifically for embedded devices. To achieve these goals I present EPI, an efficient pointer integrity mechanism tailored to microcontrollers and embedded devices. Like ZeRØ, EPI assigns unique tags to different program assets and uses unique memory instructions for accessing them. However, EPI uses a 32-bit-friendly encoding scheme to inline the tags within the program data. EPI introduces runtime overheads of less than 1%, making it viable for embedded and low-resource systems.
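ZeRØ's use of the unused upper pointer bits can be illustrated with a small simulation, with Python integers standing in for 64-bit pointers; the tag values and field widths below are invented for illustration, not ZeRØ's actual encoding. A type tag travels in the high bits, and an access whose tag does not match the expected pointer type would trap in hardware.

```python
TAG_SHIFT = 56                      # upper 8 bits hold the tag (illustrative)
TAG_MASK = 0xFF << TAG_SHIFT
ADDR_MASK = (1 << TAG_SHIFT) - 1

CODE_PTR, DATA_PTR = 0x5A, 0x3C     # hypothetical pointer-type tags

def tag(ptr, t):
    """Stamp a type tag into the unused upper bits of a 64-bit pointer."""
    return (ptr & ADDR_MASK) | (t << TAG_SHIFT)

def check_and_strip(ptr, expected):
    """What a typed memory instruction would do before the access."""
    if (ptr & TAG_MASK) >> TAG_SHIFT != expected:
        raise MemoryError("pointer type violation (hardware would trap)")
    return ptr & ADDR_MASK

p = tag(0x7FFE_1234_5678, CODE_PTR)
print(hex(check_and_strip(p, CODE_PTR)))   # ok: used as a code pointer
try:
    check_and_strip(p, DATA_PTR)           # a code pointer used as data
except MemoryError as e:
    print(e)
```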
7

Improving Security Through Egalitarian Binary Recompilation

Williams-King, David Christopher, January 2021
In this thesis, we try to bridge the gap between the program transformations that are possible at source level and those that are possible at binary level. While binaries are typically seen as opaque artifacts, our binary recompiler Egalito (ASPLOS 2020) enables users to parse and modify stripped binaries on existing systems. Our technique of binary recompilation is not robust to errors in disassembly, but with an accurate analysis it provides near-zero transformation overhead. We wrote several demonstration security tools with Egalito, including code randomization, control-flow integrity, retpoline insertion, and a fuzzing backend. We also wrote Nibbler (ACSAC 2019, DTRAP 2020), which detects unused code and removes it. Many of these features, including Nibbler, can be combined with other defenses, resulting in multiplicatively stronger or more effective hardening. Enabled by our recompiler, an overriding theme of this thesis is our focus on deployable software transformation. Egalito has been tested by collaborators across tens of thousands of Debian programs and libraries. We coined the term egalitarian in the context of binary security: simply put, an egalitarian analysis or security mechanism is one that can operate on itself (and is usually more deployable as a result). As one demonstration of this idea, we created a strong, deployable defense against code reuse attacks. Shuffler (OSDI 2016) randomizes function addresses, moving functions periodically every few milliseconds. This makes an attacker's job extremely difficult, especially if they are located across a network (which adds round-trip time) -- JIT-ROP attacks take 2.3 to 378 seconds to complete. Shuffler is egalitarian and defends its own code and target code simultaneously; Shuffler actually shuffles itself. We hope our deployable, egalitarian binary defenses will allow others to improve upon the state of the art and paint binaries as far more malleable than they have been in the past.
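A toy model of Shuffler's core mechanic, mimicking only the structure (real Shuffler relocates machine code at native speed): callers reach functions through an indirection table, so a background thread can move every function and retarget the table every few milliseconds, invalidating any address an attacker has already leaked.

```python
import random
import threading
import time

MEMORY_SLOTS = 1024
memory = {}   # slot number -> function object (simulated code memory)
table = {}    # function name -> current slot (the indirection layer)
lock = threading.Lock()

def place(funcs):
    """Assign every function a fresh random slot."""
    with lock:
        memory.clear()
        for name, fn in funcs.items():
            slot = random.randrange(MEMORY_SLOTS)
            while slot in memory:
                slot = random.randrange(MEMORY_SLOTS)
            memory[slot] = fn
            table[name] = slot

def call(name, *args):
    with lock:  # callers always go through the table, never a raw slot
        return memory[table[name]](*args)

def shuffle_forever(funcs, period=0.005):
    while True:  # re-randomize every few milliseconds
        place(funcs)
        time.sleep(period)

funcs = {"greet": lambda who: f"hello {who}"}
place(funcs)
threading.Thread(target=shuffle_forever, args=(funcs,), daemon=True).start()
print(call("greet", "world"), "at slot", table["greet"])
time.sleep(0.02)
print(call("greet", "world"), "at slot", table["greet"])  # slot has moved
```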
8

Methods and Tools for Practical Software Testing and Maintenance

Saieva, Anthony, January 2024
As software continues to envelop traditional industries, the need for attention to cybersecurity is greater than ever. Software security helps protect businesses and governments from financial losses due to cyberattacks and data breaches, as well as from reputational damage. In theory, securing software is relatively straightforward: it involves following certain best practices and guidelines to ensure that the software is secure. In practice, however, software security is often much more complicated. It requires a deep understanding of the underlying system and code (including potentially legacy code), as well as a comprehensive understanding of the threats and vulnerabilities that could be present. Additionally, software security involves implementing strategies to protect against those threats and vulnerabilities, which may combine technologies, processes, and procedures. In fact, many real cyberattacks are caused not by zero-day vulnerabilities but by known issues that haven't been addressed, so real software security also requires ongoing monitoring and maintenance to ensure critical systems remain secure.

This thesis presents a series of novel techniques that together form an enhanced software maintenance methodology, from initial bug reporting all the way through patch deployment. We begin by introducing Ad Hoc Test Generation, a novel testing technique for the case where a security vulnerability or other critical bug is not detected by the developers' test suite and is discovered post-deployment. Developers must then quickly devise a new test that reproduces the buggy behavior, and test whether their candidate patch indeed fixes the bug without breaking other functionality, while racing to deploy before attackers pounce on exposed user installations. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test generation techniques like symbolic execution. Our prototype of this concept is called ATTUNE.

To construct patches, developers maintaining software may in some instances be forced to deal directly with the binary because source code is no longer available. For these cases this work presents a transformer-based model called DIRECT that recovers semantically meaningful variable and function names that have been lost, giving developers a facsimile of the source code that would otherwise be unavailable. In the event developers need even more support deciphering the decompiled code, we provide another tool, called REINFOREST, that lets developers search for similar code, which they can use to further understand the code in question and as a reference when developing a patch.

After patches have been written, deployment remains a challenge. In some instances, deploying a patch for the buggy behavior may require supporting legacy systems where software cannot be upgraded without causing compatibility issues. To support these updates, this work introduces the concept of binary patch decomposition, which breaks a software release down into its component parts and allows software administrators to apply only the critical portions without breaking functionality.

The result is a novel software patching methodology with which we can recreate bugs, develop patches, and deploy updates in the presence of the typical challenges of patching production software: deficient test suites, lack of source code, lack of documentation, compatibility issues, and the difficulties of patching binaries directly.
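The decomposition idea can be shown at source level with a toy line-based diff (the thesis itself operates on binaries; the file contents here are invented): split a release diff into independent hunks, then apply only the security-critical one, leaving cosmetic changes behind.

```python
import difflib

old = ["def auth(u, p):", "    return p == lookup(u)", "", "BANNER = 'v1.0'"]
new = ["def auth(u, p):", "    return compare_digest(p, lookup(u))", "",
       "BANNER = 'v1.1'"]

# Decompose the release diff into hunks (the non-equal opcode spans).
matcher = difflib.SequenceMatcher(a=old, b=new)
hunks = [op for op in matcher.get_opcodes() if op[0] != "equal"]

def apply_hunks(base, target, selected):
    """Rebuild `base` with only the selected hunks applied, in order."""
    out, pos = [], 0
    for _tag, i1, i2, j1, j2 in sorted(selected, key=lambda op: op[1]):
        out.extend(base[pos:i1])
        out.extend(target[j1:j2])
        pos = i2
    out.extend(base[pos:])
    return out

security_fix = [hunks[0]]  # keep the constant-time auth fix, skip the banner
print("\n".join(apply_hunks(old, new, security_fix)))
```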
9

Ring-LWE: Enhanced Foundations and Applications

Lin, Chengyu, January 2022
The Ring Learning With Errors (Ring-LWE) assumption has become an important building block in many modern cryptographic applications, such as (fully) homomorphic encryption and post-quantum cryptosystems like the recently announced NIST CRYSTALS-Kyber public key encryption scheme. In this thesis, we provide an enhanced security foundation for Ring-LWE based cryptosystems and demonstrate their practical potential in real world applications.

Enhanced Foundations. We extend the known pseudorandomness of Ring-LWE to be based on ideal lattices of non-Dedekind domains. In the earlier works of Lyubashevsky, Peikert and Regev (EUROCRYPT 2010), and Peikert, Regev and Stephens-Davidowitz (STOC 2017), the hardness of Ring-LWE was established on ideal lattices of rings of integers of number fields, which are known to be Dedekind domains. These works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, allowing more efficient and more structured cryptosystems. However, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices. We show that Ring-LWE hardness can be based on the polynomial ring, which is potentially a strict subring of the ring of integers of a number field, and hence potentially not a Dedekind domain. We present a novel proof technique that builds an algebraic theory for such general rings, which also includes the cyclotomic rings. We also recommend a "twisted" cyclotomic field as an alternative to the cyclotomic field used in CRYSTALS-Kyber, as it leads to a more efficient implementation and is based on the hardness of ideals in a non-Dedekind domain.

We leverage the polynomial nature of Ring-LWE and introduce XSPIR, a new symmetrically private information retrieval (SPIR) protocol, which provides a stronger security guarantee than existing efficient PIR protocols. Like other PIR protocols, XSPIR allows a client to retrieve a specific entry from a server's database without revealing which entry is retrieved. Moreover, the semi-honest client learns no additional information about the database except for the retrieved entry. We demonstrate through analyses and experiments that XSPIR has only a slight overhead compared to state-of-the-art PIR protocols, while providing a stronger security guarantee and enabling the client to perform more complicated queries than simple retrievals.
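A toy Ring-LWE sample makes the polynomial setting concrete: arithmetic in R_q = Z_q[x]/(x^n + 1) is negacyclic convolution, and the assumption is that (a, a*s + e), with uniform a and small secret s and error e, is computationally indistinguishable from (a, uniform). The parameters below are far too small for any security.

```python
import random

n, q = 8, 97  # toy dimension and modulus, illustrative only

def poly_mul(f, g):
    """Multiply in Z_q[x] / (x^n + 1): x^n wraps around to -1 (negacyclic)."""
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                res[k] = (res[k] + fi * gj) % q
            else:
                res[k - n] = (res[k - n] - fi * gj) % q
    return res

def small_poly():
    """Coefficients in {-1, 0, 1}, reduced mod q."""
    return [random.choice([-1, 0, 1]) % q for _ in range(n)]

a = [random.randrange(q) for _ in range(n)]  # public, uniform
s = small_poly()                             # secret
e = small_poly()                             # small error
b = [(x + y) % q for x, y in zip(poly_mul(a, s), e)]
print("RLWE sample (a, b):", a, b)           # b should look uniform
```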
10

Adaptive and Effective Fuzzing: a Data-Driven Approach

She, Dongdong, January 2023
Security vulnerabilities have a large real-world impact, from ransomware attacks costing billions of dollars every year to sensitive data breaches in government, military, and industry. Fuzzing is a popular technique for discovering these vulnerabilities in an automated fashion. Industry has poured substantial resources into building large-scale fuzzing factories (e.g., Google's ClusterFuzz and Microsoft's OneFuzz) to test products and make them more secure. Despite the wide application of fuzzing in industry, many issues still constrain its performance. One fundamental limitation is rule-based design: rule-based fuzzers heavily rely on a set of static rules or heuristics. These fixed rules are summarized from human experience and hence fail to generalize across a diverse set of programs.

In this dissertation, we present an adaptive and effective fuzzing framework built on a data-driven approach. A data-driven fuzzer makes decisions based on the analysis and reasoning of data rather than static rules, so it is more adaptive, effective, and flexible than a typical rule-based fuzzer. More interestingly, the data-driven approach bridges fuzzing to various data-centric domains (e.g., machine learning, optimization, and social networks), enabling sophisticated designs in the fuzzing framework. A general fuzzing framework consists of two major components: seed scheduling and seed mutation. The seed scheduling module selects a seed from a seed corpus of multiple test cases; the seed mutation module then perturbs the selected seed to generate a new test case.

First, we present Neuzz, the first machine learning (ML) based general-purpose fuzzer, which applies ML to seed mutation and greatly improves fuzzing performance. We then present MTFuzz, a follow-up to Neuzz that feeds diverse data into the ML model to generate effective seed mutations. Finally, we present K-Scheduler, a fuzzer-agnostic, data-driven seed scheduling algorithm. K-Scheduler leverages graph data (the inter-procedural control-flow graph) and dynamic coverage data (the code coverage bitmap) to construct a dynamic graph and schedules seeds by their centrality scores on that graph. It significantly improves fuzzing performance over state-of-the-art seed schedulers on various fuzzers widely used in industry.
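In the spirit of K-Scheduler's centrality-based scheduling (a hedged toy using the third-party networkx package; the graph, seed names, and frontier bookkeeping are invented and much simpler than the paper's construction): rank each seed by the Katz centrality of the uncovered control-flow blocks it borders, and fuzz the seed from which the most future coverage is reachable.

```python
import networkx as nx  # pip install networkx

# Toy inter-procedural control-flow graph: nodes are basic blocks.
cfg = nx.DiGraph([("entry", "parse"), ("parse", "validate"),
                  ("parse", "error"), ("validate", "compress"),
                  ("validate", "encrypt"), ("compress", "write"),
                  ("encrypt", "write")])

centrality = nx.katz_centrality(cfg, alpha=0.1)

# Each seed is credited with the uncovered blocks on its coverage frontier.
frontiers = {
    "seed_a.bin": ["error"],
    "seed_b.bin": ["compress", "encrypt"],
}
scores = {s: sum(centrality[b] for b in blocks)
          for s, blocks in frontiers.items()}
print("schedule next:", max(scores, key=scores.get), scores)
```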
