11

Identifying communications of running programs through their assembly level execution traces

Huang, Huihui 28 May 2018 (has links)
Understanding the communications between programs can help software security engineers understand the behaviour of a system and detect vulnerabilities in it. Assembly-level execution traces are used for this purpose for two reasons: 1) the source code of the running programs may not be available, and 2) assembly-level execution traces provide the most accurate run-time behaviour information. In this thesis, I present a communication analysis approach using such execution traces. I first model message-based communication in the context of trace analysis. Then I develop a method and the necessary algorithms to identify communications from a dual trace, which consists of two assembly-level execution traces. A prototype is developed for communication analysis. Finally, I conduct two experiments on communication analysis of interacting programs. These two experiments show the usefulness of the designed communication analysis approach, the developed algorithms and the implemented prototype. / Graduate / 2019-05-11
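The core idea of dual-trace matching can be illustrated with a toy model (this is a simplified sketch, not the thesis's actual algorithm; the event representation is invented for illustration): treat each trace as a sequence of send/receive events with payload bytes, and pair a send in one trace with a matching receive in the other.

```python
def match_communications(trace_a, trace_b):
    """Pair sends in trace_a with content-identical receives in trace_b.

    Each trace is a list of (kind, payload) tuples, kind in {'send', 'recv'}.
    Returns a list of (index_in_a, index_in_b) matches.
    """
    matches = []
    used = set()  # each receive event can be matched at most once
    for i, (kind_a, data_a) in enumerate(trace_a):
        if kind_a != 'send':
            continue
        for j, (kind_b, data_b) in enumerate(trace_b):
            if j not in used and kind_b == 'recv' and data_b == data_a:
                matches.append((i, j))
                used.add(j)
                break
    return matches

a = [('send', b'hello'), ('recv', b'ack')]
b = [('recv', b'hello'), ('send', b'ack')]
print(match_communications(a, b))  # [(0, 0)]
```

A real dual-trace analysis would recover these events from assembly-level traces (e.g., from calls into socket or pipe APIs) rather than receiving them pre-labelled.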
12

Improving Desktop System Security Using Compartmentalization

January 2018 (has links)
abstract: Compartmentalizing access to content, be it websites accessed in a browser or documents and applications accessed outside the browser, is an established method for protecting information integrity [12, 19, 21, 60]. Compartmentalization solutions change the user experience, introduce performance overhead and provide varying degrees of security. Striking a balance between usability and security is not an easy task. If the usability aspects are neglected or sacrificed in favor of more security, the resulting solution would have a hard time being adopted by end-users. The usability is affected by factors including (1) the generality of the solution in supporting various applications, (2) the type of changes required, (3) the performance overhead introduced by the solution, and (4) how much the user experience is preserved. The security is affected by factors including (1) the attack surface of the compartmentalization mechanism, and (2) the security decisions offloaded to the user. This dissertation evaluates existing solutions based on the above factors and presents two novel compartmentalization solutions that are arguably more practical than their existing counterparts. The first solution, called FlexICon, is an attractive alternative in the design space of compartmentalization solutions on the desktop. FlexICon allows for the creation of a large number of containers with a small memory footprint and low disk overhead. This is achieved by using lightweight virtualization based on Linux namespaces. FlexICon uses two mechanisms to reduce user mistakes: 1) a trusted file dialog for selecting files and launching them in the appropriate containers, and 2) a secure URL redirection mechanism that detects the user’s intent and opens the URL in the proper container. FlexICon also provides a language to specify the access constraints that should be enforced by various containers. 
The second solution, called Auto-FBI, deals with web-based attacks by creating multiple instances of the browser and providing mechanisms for switching between them. The prototype implementation for Firefox and Chrome uses system call interposition to control the browser’s network access. Auto-FBI can be ported to other platforms easily due to its simple design and the ubiquity of system call interposition methods on all major desktop platforms. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
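The secure URL redirection idea described above can be sketched as a policy lookup (a hypothetical toy; the container names and policy format are invented and FlexICon's actual constraint language differs): each container declares the URLs it may open, and unknown URLs fall through to a disposable container.

```python
# Invented example policy: container name -> URL prefixes it is allowed to open.
POLICY = {
    'banking':  ['https://bank.example.com'],
    'personal': ['https://mail.example.com'],
}

def container_for_url(url):
    """Pick the container whose policy permits this URL."""
    for name, allowed in POLICY.items():
        if any(url.startswith(prefix) for prefix in allowed):
            return name
    return 'disposable'  # unknown URLs open in a throwaway container

print(container_for_url('https://bank.example.com/login'))  # banking
print(container_for_url('https://random.example.org'))      # disposable
```

Routing unknown URLs to a disposable container is the conservative default: a mis-click costs a throwaway environment rather than exposure of a trusted one.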
13

Tools for static code analysis: A survey

Hellström, Patrik January 2009 (has links)
This thesis has investigated which tools for static code analysis, with an emphasis on security, exist and which of these could possibly be used in a project at Ericsson AB in Linköping in which a HIGA (Home IMS Gateway) is constructed. The HIGA is a residential gateway that opens up the possibility to extend an operator’s IP Multimedia Subsystem (IMS) all the way to the user’s home and thereby let the end user connect his/her non-compliant IMS devices, such as a media server, to an IMS network. Static analysis is the process of examining the source code of a program and in that way testing it for various weaknesses without having to actually execute it (as opposed to dynamic analysis, such as testing). As a complement to the regular testing that is performed today in the HIGA project, four different static analysis tools were evaluated to find out which one was best suited for use in the project. Two of them were open-source tools and two were commercial. All of the tools were evaluated in five different areas: documentation, installation & integration procedure, usability, performance, and types of bugs found. Furthermore, all of the tools were later used to test two modules of the HIGA. The evaluation showed many differences between the tools in all areas, and not surprisingly the two open-source tools turned out to be far less mature than the commercial ones. The tools best suited for use in the HIGA project were Fortify SCA and Flawfinder. As far as the evaluation of the HIGA code is concerned, some bugs that could have jeopardized the security and availability of the services provided by it were found.
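The simplest class of static analyzer surveyed here works lexically, in the spirit of Flawfinder: flag calls to known-risky C functions. The following is a deliberately minimal sketch (the real tools use far richer rule sets and data-flow analysis):

```python
import re

# Toy rule set: risky C function -> reason it is flagged.
RISKY = {
    'gets':    'no bounds check',
    'strcpy':  'possible buffer overflow',
    'sprintf': 'possible buffer overflow',
}

def scan(source):
    """Return (line_number, function, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for fn, why in RISKY.items():
            if re.search(r'\b%s\s*\(' % fn, line):
                findings.append((lineno, fn, why))
    return findings

code = 'int main() {\n  char b[8];\n  gets(b);\n  strcpy(b, "x");\n}'
for finding in scan(code):
    print(finding)  # (3, 'gets', ...), (4, 'strcpy', ...)
```

Purely lexical matching is fast but noisy (it cannot see whether a call is actually reachable or bounded), which is exactly the kind of precision difference the tool evaluation above measures.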
14

A Method for Analyzing Security of SOA-based Systems

Lu, Qifei, Wang, Zhishun January 2010 (has links)
SOA-based systems offer a high degree of flexibility and interoperability. However, securing SOA-based applications is still a challenge. Although some related techniques have been proposed and presented in academia and industry, it is still difficult to check SOA quality in the security aspect from an architecture view. In this thesis project, a method for security analysis in SOA is introduced and investigated. The method is intended to be used for analyzing the security of SOA-based systems at the architecture level. To demonstrate the method, a prototype supporting it is introduced and implemented. The method and prototype are also evaluated, respectively, based on the Technology Acceptance Model. The evaluation result shows that the prototype supporting the method is a promising inspection tool for detecting software vulnerabilities.
15

FORCED EXECUTION FOR SECURITY ANALYSIS OF SOFTWARE WITHOUT SOURCE CODE

Fei Peng (10682163) 03 May 2021 (has links)
Binary code analysis is widely used in many applications, including reverse engineering, software forensics and security. It is very critical in these applications, since the analysis of binary code does not require source code to be available. For example, in one security application, given a potentially malicious executable file, binary analysis can help build human-inspectable representations such as the control flow graph and call graph.

Existing binary analysis can be roughly classified into two categories: static analysis and dynamic analysis. Both types of analysis have their own strengths and limitations. Static binary analysis is based on scanning the binary code without executing it. It usually has good code coverage, but the analysis results are sometimes not quite accurate due to the lack of dynamic execution information. Dynamic binary analysis, on the other hand, is based on executing the binary on a set of inputs. In contrast, its results are usually accurate but rely heavily on the coverage of the test inputs, which sometimes do not exist.

In this thesis, we first present a novel systematic binary analysis framework called X-Force. X-Force can force the binary to execute without using any inputs or a proper environment setup. As part of the design of our framework, we have proposed a number of techniques, including (1) a path exploration module that can drive the program to execute different paths; (2) a crash-free execution model that can detect and recover from execution exceptions properly; and (3) solutions to a large number of technical challenges in making the technique work on real-world binaries.

Although X-Force is a highly effective method to penetrate malware self-protection and expose hidden behavior, it is very heavy-weight. The reason is that it requires tracing individual instructions, reasoning about pointer alias relations on the fly, and repairing invalid pointers by on-demand memory allocation. To solve this problem, we develop a light-weight and practical forced execution technique. Without losing analysis precision, it avoids tracking individual instructions and on-demand allocation. Under our scheme, a forced execution is very similar to a native one. It features a novel memory pre-planning phase that pre-allocates a large memory buffer, and then initializes the buffer, and variables in the subject binary, with carefully crafted values in a random fashion before the real execution. The pre-planning is designed in such a way that dereferencing an invalid pointer has a very large chance of falling into the pre-allocated region, and hence does not cause any exception, and semantically unrelated invalid pointer dereferences are highly likely to access disjoint (pre-allocated) memory regions, avoiding state corruption with probabilistic guarantees.
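The memory pre-planning idea can be modelled in a few lines (an illustrative toy, not the dissertation's implementation; the region size and slot granularity are invented): reserve one large region up front and initialise every would-be-uninitialised pointer with a random, aligned address inside that region, so stray dereferences land in pre-allocated memory and unrelated ones likely hit disjoint slots.

```python
import random

# Invented parameters for illustration: one pre-allocated region, carved
# into page-sized slots that crafted pointer values are drawn from.
REGION_BASE = 0x10000
REGION_SIZE = 0x100000
SLOT = 0x1000

def preplan_pointer(rng):
    """Return a crafted pointer value: a random page-aligned slot in the region."""
    return REGION_BASE + rng.randrange(0, REGION_SIZE, SLOT)

rng = random.Random(42)
ptrs = [preplan_pointer(rng) for _ in range(8)]

# Every crafted pointer falls inside the pre-allocated region, so a
# dereference cannot fault; distinct random slots make it likely that
# unrelated dereferences touch disjoint memory.
assert all(REGION_BASE <= p < REGION_BASE + REGION_SIZE for p in ptrs)
print([hex(p) for p in ptrs])
```

The probabilistic-guarantee argument follows from the slot count: with many slots and few live invalid pointers, collisions (and hence cross-contamination of state) are unlikely.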
16

Practical Methods for Fuzzing Real-World Systems

Prashast Srivastava (15353365) 27 April 2023 (has links)
The current software ecosystem is exceptionally complex. A key defining feature of this complexity is the vast input space that software applications must process. This feature inhibits fuzzing (an effective automated testing methodology) from uncovering deep bugs (i.e., bugs with complex preconditions). We improve the bug-finding capabilities of fuzzers by reducing the input space that they have to explore. Our techniques incorporate domain knowledge from the software under test. In this dissertation, we research how to incorporate domain knowledge in different scenarios across a variety of software domains and test objectives to perform deep bug discovery.

We start by focusing on language interpreters that form the backend of our web ecosystem. Uncovering deep bugs in these interpreters requires synthesizing inputs that perform a diverse set of semantic actions. To tackle this issue, we present Gramatron, a fuzzer that employs grammar automatons to speed up bug discovery. Then, we explore firmwares belonging to the rapidly growing IoT ecosystem, which generally lack thorough testing. FirmFuzz infers the appropriate runtime state required to trigger vulnerabilities in these firmwares using the domain knowledge encoded in the user-facing network applications. Additionally, we showcase how our proposed strategy to incorporate domain knowledge is beneficial under alternative testing scenarios where a developer analyzes specific code locations, e.g., for patch testing. SieveFuzz leverages knowledge of targeted code locations to prohibit exploration of code regions, and correspondingly parts of the input space, that are irrelevant to reaching the target location. Finally, we move beyond the realm of memory-safety vulnerabilities and present how domain knowledge can be useful in uncovering logical bugs, specifically deserialization vulnerabilities in Java-based applications, with Crystallizer. Crystallizer uses a hybrid analysis methodology to first infer an over-approximate set of possible payloads through static analysis (to constrain the search space). Then, it uses dynamic analysis to instantiate concrete payloads as a proof-of-concept of a deserialization vulnerability.

Throughout these four diverse areas we thoroughly demonstrate how incorporating domain knowledge can massively improve bug-finding capabilities. Our research has developed tooling that not only outperforms the existing state-of-the-art in terms of efficient bug discovery (with speeds up to 117% faster), but has also uncovered 18 previously unknown bugs, with five CVEs assigned.
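The grammar-aware generation behind interpreter fuzzing can be sketched as random derivation from a context-free grammar (a conceptual toy: Gramatron itself compiles grammars into automatons rather than expanding rules recursively, and this grammar is invented for illustration):

```python
import random

# Tiny invented grammar for arithmetic expressions.
GRAMMAR = {
    '<expr>': [['<num>'], ['<expr>', '+', '<expr>'], ['(', '<expr>', ')']],
    '<num>':  [['1'], ['2'], ['3']],
}

def generate(symbol, rng, depth=0):
    """Expand a grammar symbol into a random string of terminals."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: emit as-is
    rules = GRAMMAR[symbol]
    # Past a depth budget, always take the first (shortest) rule so the
    # derivation terminates instead of recursing forever.
    rule = rules[0] if depth > 6 else rng.choice(rules)
    return ''.join(generate(s, rng, depth + 1) for s in rule)

rng = random.Random(1)
print(generate('<expr>', rng))  # e.g. a random expression over 1/2/3, '+', parens
```

Because every generated string is syntactically valid by construction, the fuzzer's executions get past the parser and exercise the interpreter's semantic actions, which is where the deep bugs live.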
17

Remote Software Guard Extension (RSGX)

Sundarasamy, Abilesh 21 December 2023 (has links)
With the constant evolution of hardware architecture extensions aimed at enhancing software security, a notable availability gap arises due to the proprietary nature and design-specific characteristics of these features, resulting in a CPU-specific implementation. This gap particularly affects low-end embedded devices that often rely on CPU cores with limited resources. Addressing this challenge, this thesis focuses on providing access to hardware-based Trusted Execution Environments (TEEs) for devices lacking TEE support. RSGX is a framework crafted to transparently offload security-sensitive workloads to an enclave hosted in a remote centralized edge server. Operating as clients, low-end TEE-lacking devices can harness the hardware security features provided by TEEs of either the same or different architecture. RSGX is tailored to accommodate applications developed with diverse TEE-utilizing SDKs, such as the Open Enclave SDK, Intel SGX SDK, and many others. This facilitates easy integration of existing enclave-based applications, and the framework allows users to utilize its features without requiring any source code modifications, ensuring transparent offloading behind the scenes. For the evaluation, we set up an edge computing environment to execute C/C++ applications, including two overhead micro-benchmarks and four popular open-source applications. This evaluation of RSGX encompasses an analysis of its security benefits and a measurement of its performance overhead. We demonstrate that RSGX has the potential to mitigate a range of Common Vulnerability Exposures (CVEs), ensuring the secure execution of confidential computations on hybrid and distributed machines with an acceptable performance overhead. / Master of Science / A vast amount of data is generated globally every day, most of which contains critical information and is often linked to individuals. 
Therefore, safeguarding data is essential at every stage, whether it's during transmission, storage, or processing. Different security principles are applied to protect data at various stages. This thesis particularly focuses on data in use. To protect data in use, several technologies are available, and one of them is confidential computing, which is a hardware-based security technology. However, confidential computing is limited to certain high-end computing machines, and many resource-constrained devices do not support it. In this thesis, we propose RSGX, a framework to offload secured computation to a confidential computing-capable remote device with a Security as a Service (SECaaS) approach. Through RSGX, users can leverage confidential computing capabilities for any of their applications based on any SDK. RSGX provides this capability transparently and securely. Our evaluation shows that users, by adopting RSGX, can mitigate several security vulnerabilities, thereby enhancing security with a reasonable overhead.
18

Evolution of Security in Automated Migration Processes

Tayefeh Morsal, Seyed Parsa January 2022 (has links)
As users’ requirements change in today’s fast-paced business market, computer software has to adapt to new hardware, technologies and requirements to keep up with the trend. Therefore, to avoid depreciation and obsolescence, which can have detrimental effects on a product, software needs to be constantly maintained and, when past a certain point in its lifecycle, needs to be migrated or re-developed from scratch. Automated migration enables software vendors to decrease the cost of the migration process through source code generation. However, as security is a crucial requirement in any system, it is not guaranteed that the previously satisfied security requirements are satisfied in the migrated software. Therefore, it is critical to study the evolution of security throughout the automated migration process to predict where new security vulnerabilities may emerge and to understand the scale on which the security is affected. / Thesis / Master of Applied Science (MASc)
19

HetMigrate: Secure and Efficient Cross-architecture Process Live Migration

Bapat, Abhishek Mandar 31 January 2023 (has links)
The slowdown of Moore's Law opened a new era of computer research and development. Researchers started exploring alternatives to the traditional CPU design. A constant increase in consumer demands led to the development of CMPs, GPUs, and FPGAs. Recent research proposed the development of heterogeneous-ISA systems and implemented the necessary systems software to make such systems functional. Evaluations have shown that heterogeneous-ISA systems can offer better throughput and energy efficiency than homogeneous-ISA systems. Due to their low cost, ARM servers are now being adopted in data centers (e.g., AWS Graviton). While prior work provided the infrastructure necessary to run applications on heterogeneous-ISA systems, their dependency on a specialized kernel and a custom compiler increases deployment and maintenance costs. This thesis presents HetMigrate, a framework to live-migrate Linux processes over heterogeneous-ISA systems. HetMigrate integrates with CRIU, a Linux mechanism for process migration, and runs on stock Linux operating systems which improves its deployability. Furthermore, HetMigrate transforms the process's state externally without instrumenting state transformation code into the process binaries which has security benefits and also improves deployability. Our evaluations on Redis server and NAS Parallel Benchmarks show that HetMigrate takes an average of 720ms to fully migrate a process across ISAs while maintaining its state. Moreover, live-migrating with HetMigrate reduces the attack surface of a process by up to 72.8% compared to prior work. Additionally, HetMigrate is easier to deploy in real-world systems compared to prior work. To prove the deployability we ran HetMigrate on a variety of environments like cloud instances (e.g. Cloud Lab), local setups virtualized with QEMU/KVM, and a server-embedded board pair. 
Similar to works in the past, we also evaluated the energy and throughput benefits that heterogeneous-ISA systems can offer by connecting a Xeon server to three embedded boards over the network. We observed that selectively offloading compute-intensive workloads to embedded boards can increase energy efficiency by up to 39% and throughput by up to 52% while increasing the cost by just 10%. / Master of Science / In 1965 Gordon Moore predicted that the number of transistors in a chip will double every two years. Commonly referred to as "Moore's Law" it no longer holds true and its slowdown opened a new era of computer research and development. Researchers started exploring alternatives to traditional computer designs. A constant increase in consumer demands led to the development of portable, faster, and cheaper computers. Some researchers also started exploring the performance and energy benefits of computing systems that had heterogeneous architecture. Instruction Set Architecture (ISA) is the interface between software and hardware. Recent research proposed the development of systems that had cores of different ISA and implemented the necessary software to make such systems functional. Evaluations have shown that heterogeneous-ISA systems can offer better throughput and energy efficiency than traditional systems. To decrease their cost-to-performance ratio data centers have started adopting servers belonging to diverse architectures making them heterogeneous in nature. While prior work provided the infrastructure necessary to run applications on heterogeneous systems, it suffered from deployability limitations. This thesis presents HetMigrate, a framework that enables stateful program migration in heterogeneous systems. HetMigrate runs on stock open-source operating systems which makes it easy to deploy. Our evaluations show that while HetMigrate performs two orders of magnitude slower than prior work, it can be deployed with ease.
20

On the Impact and Defeat of Regular Expression Denial of Service

Davis, James Collins 28 May 2020 (has links)
Regular expressions (regexes) are a widely-used yet little-studied software component. Engineers use regexes to match domain-specific languages of strings. Unfortunately, many regex engine implementations perform these matches with worst-case polynomial or exponential time complexity in the length of the string. Because they are commonly used in user-facing contexts, super-linear regexes are a potential denial of service vector known as Regular expression Denial of Service (ReDoS). Part I gives the necessary background to understand this problem. In Part II of this dissertation, I present the first large-scale empirical studies of super-linear regex use. Guided by case studies of ReDoS issues in practice (Chapter 3), I report that the risk of ReDoS affects up to 10% of the regexes used in practice (Chapter 4), and that these findings generalize to software written in eight popular programming languages (Chapter 5). ReDoS appears to be a widespread vulnerability, motivating the consideration of defenses. In Part III I present the first systematic comparison of ReDoS defenses. Based on the necessary conditions for ReDoS, a ReDoS defense can be erected at the application level, the regex engine level, or the framework/runtime level. In my experiments I report that application-level defenses are difficult and error prone to implement (Chapter 6), that finding a compatible higher-performing regex engine is unlikely (Chapter 7), that optimizing an existing regex engine using memoization incurs (perhaps acceptable) space overheads (Chapter 8), and that incorporating resource caps into the framework or runtime is feasible but faces barriers to adoption (Chapter 9). In Part IV of this dissertation, we reflect on our findings. By leveraging empirical software engineering techniques, we have exposed the scope of potential ReDoS vulnerabilities, and given strong motivation for a solution. To assist practitioners, we have conducted a systematic evaluation of the solution space. 
We hope that our findings assist in the elimination of ReDoS, and more generally that we have provided a case study in the value of data-driven software engineering. / Doctor of Philosophy / Software commonly performs pattern-matching tasks on strings. For example, when validating input in a Web form, software commonly tests whether an input fits the pattern of a credit card number or an email address. Software engineers often implement such string-based pattern matching using a tool called regular expressions (regexes). Regexes permit software engineers to succinctly describe the sequences of characters that make up common "languages" like the set of valid Visa credit card numbers (16 digits, starting with a 4) or the set of valid emails (some characters, an '@', and more characters including at least one '.'). Using regexes on untrusted user input in this manner may be a dangerous decision because some regexes take a long time to evaluate. These slow regexes can be exploited by attackers in order to carry out a denial of service attack known as Regular expression Denial of Service (ReDoS). To date, ReDoS has led to outages affecting hundreds of websites and tens of thousands of users. While the risk of ReDoS is well known in theory, in this dissertation I present the first large-scale empirical studies measuring the extent to which slow regular expressions are used in practice. I found that about 10% of real regular expressions extracted from hundreds of thousands of software projects can exhibit longer-than-expected worst-case behavior in popular programming languages including JavaScript, Python, and Ruby. Motivated by these findings, I then consider a range of ReDoS solution approaches: application refactoring, regex engine replacement, regex engine optimization, and resource caps. I report that application refactoring is error-prone, and that regex engine replacement seems unlikely due to incompatibilities between regex engines. 
Some resource caps are more successful than others, but all resource cap approaches struggle with adoption. My novel regex engine optimizations seem the most promising approach for protecting existing regex engines, offering significant time reductions with acceptable space overheads.
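The super-linear behavior at the heart of ReDoS is easy to reproduce: a nested quantifier like `(a+)+$` forces exponential backtracking in backtracking engines (such as Python's `re`) on a near-miss input that fails only at its last character. A small demonstration:

```python
import re
import time

# Classic ReDoS-prone pattern: the nested quantifier admits exponentially
# many ways to partition a run of 'a's between the inner and outer loops.
redos = re.compile(r'^(a+)+$')

def timed_match(n):
    """Match against n 'a's followed by a char that breaks the match."""
    s = 'a' * n + '!'  # near-miss: every partition must be tried before failing
    t0 = time.perf_counter()
    m = redos.match(s)
    return m, time.perf_counter() - t0

m, dt = timed_match(20)
print(m, round(dt, 4))  # no match; already measurably slow, and doubling
                        # with each additional 'a' on a backtracking engine
```

On a matching input (all 'a's) the engine succeeds immediately; only the failing near-miss triggers the exhaustive search, which is why such patterns pass casual testing and surface later as denial-of-service vectors.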
