11

Identifying communications of running programs through their assembly level execution traces

Huang, Huihui 28 May 2018 (has links)
Understanding the communications between programs can help software security engineers understand the behaviour of a system and detect its vulnerabilities. Assembly-level execution traces are used for this purpose for two reasons: 1) the source code of the running programs is often unavailable, and 2) such traces provide the most accurate run-time behaviour information. In this thesis, I present a communication analysis approach based on such execution traces. I first model message-based communication in the context of trace analysis. I then develop a method and the necessary algorithms to identify communications from a dual trace, which consists of two assembly-level execution traces. A prototype for communication analysis is developed. Finally, I conduct two experiments on communication analysis of interacting programs; these experiments demonstrate the usefulness of the designed approach, the developed algorithms, and the implemented prototype. / Graduate / 2019-05-11
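To make the dual-trace idea concrete, here is a minimal illustrative sketch — not the thesis's actual algorithm; the trace format, event names, and toy data are assumptions — that pairs a send event recorded in one program's trace with a receive event carrying the same payload in the other program's trace:

```python
def match_communications(trace_a, trace_b):
    """Each trace is a list of (step, op, payload) tuples,
    e.g. (1042, "send", b"hello")."""
    matches = []
    for step_a, op, payload in trace_a:
        if op != "send":
            continue
        # A real analysis would also respect event ordering and channels;
        # here we match on payload equality alone for illustration.
        for step_b, op_b, payload_b in trace_b:
            if op_b == "recv" and payload_b == payload:
                matches.append((step_a, step_b, payload))
                break
    return matches

# Toy traces: program A sends two buffers; program B receives them.
a = [(10, "send", b"SYN"), (55, "send", b"DATA")]
b = [(12, "recv", b"SYN"), (60, "recv", b"DATA")]
print(match_communications(a, b))  # [(10, 12, b'SYN'), (55, 60, b'DATA')]
```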
12

Improving Desktop System Security Using Compartmentalization

January 2018 (has links)
Compartmentalizing access to content, be it websites accessed in a browser or documents and applications accessed outside the browser, is an established method for protecting information integrity [12, 19, 21, 60]. Compartmentalization solutions change the user experience, introduce performance overhead, and provide varying degrees of security. Striking a balance between usability and security is not an easy task. If the usability aspects are neglected or sacrificed in favor of more security, the resulting solution will struggle to be adopted by end users. Usability is affected by factors including (1) the generality of the solution in supporting various applications, (2) the type of changes required, (3) the performance overhead introduced by the solution, and (4) how much of the user experience is preserved. Security is affected by factors including (1) the attack surface of the compartmentalization mechanism, and (2) the security decisions offloaded to the user. This dissertation evaluates existing solutions based on the above factors and presents two novel compartmentalization solutions that are arguably more practical than their existing counterparts. The first solution, called FlexICon, is an attractive alternative in the design space of compartmentalization solutions on the desktop. FlexICon allows for the creation of a large number of containers with a small memory footprint and low disk overhead. This is achieved by using lightweight virtualization based on Linux namespaces. FlexICon uses two mechanisms to reduce user mistakes: 1) a trusted file dialog for selecting files and opening them in the appropriate containers, and 2) a secure URL redirection mechanism that detects the user’s intent and opens the URL in the proper container. FlexICon also provides a language to specify the access constraints that should be enforced by various containers. The second solution, called Auto-FBI, deals with web-based attacks by creating multiple instances of the browser and providing mechanisms for switching between them. The prototype implementation for Firefox and Chrome uses system call interposition to control the browser’s network access. Auto-FBI can be ported to other platforms easily due to its simple design and the ubiquity of system call interposition methods on all major desktop platforms. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
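As a hedged illustration of the isolation primitive FlexICon builds on — this is not FlexICon's code, only a minimal demonstration of Linux namespaces via the util-linux `unshare` tool, and it assumes a Linux host with root privileges:

```python
import subprocess

# Run a command inside fresh mount/UTS/IPC/network/PID namespaces.
# The hostname change below is visible only inside the new UTS namespace;
# lightweight containers like FlexICon's are built from this kernel feature.
subprocess.run([
    "unshare", "--mount", "--uts", "--ipc", "--net", "--pid", "--fork",
    "sh", "-c", "hostname sandboxed && hostname",
])
```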
13

Tools for static code analysis: A survey

Hellström, Patrik January 2009 (has links)
This thesis investigates what tools for static code analysis, with an emphasis on security, exist and which of them could be used in a project at Ericsson AB in Linköping in which a HIGA (Home IMS Gateway) is constructed. The HIGA is a residential gateway that opens up the possibility to extend an operator’s IP Multimedia Subsystem (IMS) all the way to the user’s home, thereby letting the end user connect his/her non-compliant IMS devices, such as a media server, to an IMS network. Static analysis is the process of examining the source code of a program and thereby testing it for various weaknesses without actually executing it (as opposed to dynamic analysis such as testing). As a complement to the regular testing currently performed in the HIGA project, four static analysis tools were evaluated to find out which was best suited for use in the project. Two were open-source tools and two were commercial. All of the tools were evaluated in five areas: documentation, installation & integration procedure, usability, performance, and types of bugs found. Furthermore, all of the tools were then used to test two modules of the HIGA. The evaluation showed many differences between the tools in all areas and, not surprisingly, the two open-source tools turned out to be far less mature than the commercial ones. The tools best suited for use in the HIGA project were Fortify SCA and Flawfinder. The evaluation of the HIGA code itself uncovered several bugs that could have jeopardized the security and availability of the services it provides.
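For a flavor of what such a tool reports, here is a small, hedged example — the C snippet and invocation are illustrative, and it assumes the open-source Flawfinder tool is installed (e.g. `pip install flawfinder`):

```python
import os
import subprocess
import tempfile
import textwrap

# A tiny C function with a classic weakness (unbounded strcpy, CWE-120)
# of the kind static analyzers such as Flawfinder are designed to flag.
c_source = textwrap.dedent("""\
    #include <string.h>
    void copy(char *input) {
        char buf[16];
        strcpy(buf, input);  /* no bounds check */
    }
""")

with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
    f.write(c_source)
    path = f.name

try:
    # Flawfinder prints hits ranked by risk level (0-5); strcpy rates level 4.
    report = subprocess.run(["flawfinder", path], capture_output=True, text=True)
    print(report.stdout)
finally:
    os.remove(path)
```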
14

A Method for Analyzing Security of SOA-based Systems

Lu, Qifei, Wang, Zhishun January 2010 (has links)
SOA-based systems offer a high degree of flexibility and interoperability. However, securing SOA-based applications is still a challenge. Although some related techniques have been proposed and presented in academia and industry, it is still difficult to assess the security quality of an SOA from an architectural view. In this thesis project, a method for security analysis of SOA is introduced and investigated. The method is intended for analyzing the security of SOA-based systems at the architecture level. To demonstrate the method, a prototype supporting it is introduced and implemented. The method and prototype are then evaluated based on the Technology Acceptance Model. The evaluation result shows that the prototype supporting the method is a promising inspection tool for detecting software vulnerabilities.
15

FORCED EXECUTION FOR SECURITY ANALYSIS OF SOFTWARE WITHOUT SOURCE CODE

Fei Peng (10682163) 03 May 2021 (has links)
Binary code analysis is widely used in many applications, including reverse engineering, software forensics, and security. It is critical in these applications because it does not require source code to be available. For example, given a potentially malicious executable file, binary analysis can help build human-inspectable representations such as the control flow graph and call graph.

Existing binary analysis techniques can be roughly classified into two categories: static analysis and dynamic analysis. Both have their own strengths and limitations. Static binary analysis scans the binary code without executing it. It usually has good code coverage, but its results are sometimes inaccurate due to the lack of dynamic execution information. Dynamic binary analysis, on the other hand, executes the binary on a set of inputs. Its results are usually accurate but rely heavily on the coverage of the test inputs, which sometimes do not exist.

In this thesis, we first present a novel systematic binary analysis framework called X-Force. X-Force can force a binary to execute without any inputs or proper environment setup. As part of the framework's design, we propose a number of techniques, including (1) a path exploration module that drives the program to execute different paths; (2) a crash-free execution model that detects and recovers from execution exceptions; and (3) solutions to a large number of technical challenges in making the technique work on real-world binaries.

Although X-Force is a highly effective method to penetrate malware self-protection and expose hidden behavior, it is very heavy-weight: it requires tracing individual instructions, reasoning about pointer alias relations on the fly, and repairing invalid pointers by on-demand memory allocation. To solve this problem, we develop a light-weight and practical forced-execution technique. Without losing analysis precision, it avoids tracking individual instructions and on-demand allocation. Under our scheme, a forced execution is very similar to a native one. It features a novel memory pre-planning phase that pre-allocates a large memory buffer, then initializes the buffer, and the variables in the subject binary, with carefully crafted random values before the real execution. The pre-planning is designed so that dereferencing an invalid pointer has a very high chance of falling into the pre-allocated region and hence does not cause an exception, and semantically unrelated invalid pointer dereferences are highly likely to access disjoint (pre-allocated) memory regions, avoiding state corruption with probabilistic guarantees.
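The memory pre-planning idea can be illustrated with a toy model — this is not the actual system, just a sketch under the assumption that memory is modeled as a sparse address-to-value map:

```python
import random

# REGION models the single large buffer pre-allocated before execution;
# "memory" is a sparse model of the address space.
REGION_BASE, REGION_SIZE = 0x10000, 2 ** 20
memory = {}

def preplan(seed=0):
    # Seed the region with pointer-sized values that themselves point back
    # into the region, so chained dereferences of wild pointers stay "valid".
    rng = random.Random(seed)
    for offset in range(0, REGION_SIZE, 8):
        memory[REGION_BASE + offset] = REGION_BASE + rng.randrange(0, REGION_SIZE, 8)

def deref(addr):
    # A native run would fault on an unmapped address; after pre-planning,
    # planted in-region pointers keep the forced execution alive.
    return memory.get(addr)

preplan()
wild = memory[REGION_BASE]           # an "uninitialized pointer" the program reads
print(hex(wild), hex(deref(wild)))   # chained dereferences survive
```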
16

Practical Methods for Fuzzing Real-World Systems

Prashast Srivastava (15353365) 27 April 2023 (has links)
The current software ecosystem is exceptionally complex. A key defining feature of this complexity is the vast input space that software applications must process. This feature inhibits fuzzing (an effective automated testing methodology) from uncovering deep bugs (i.e., bugs with complex preconditions). We improve the bug-finding capabilities of fuzzers by reducing the input space that they have to explore. Our techniques incorporate domain knowledge from the software under test. In this dissertation, we research how to incorporate domain knowledge in different scenarios across a variety of software domains and test objectives to perform deep bug discovery.

We start by focusing on language interpreters, which form the backend of our web ecosystem. Uncovering deep bugs in these interpreters requires synthesizing inputs that perform a diverse set of semantic actions. To tackle this issue, we present Gramatron, a fuzzer that employs grammar automatons to speed up bug discovery. Then, we explore firmwares belonging to the rapidly growing IoT ecosystem, which generally lack thorough testing. FirmFuzz infers the appropriate runtime state required to trigger vulnerabilities in these firmwares using the domain knowledge encoded in the user-facing network applications. Additionally, we showcase how our proposed strategy of incorporating domain knowledge is beneficial under alternative testing scenarios where a developer analyzes specific code locations, e.g., for patch testing. SieveFuzz leverages knowledge of targeted code locations to prohibit exploration of code regions, and correspondingly parts of the input space, that are irrelevant to reaching the target location. Finally, we move beyond the realm of memory-safety vulnerabilities and present how domain knowledge can be useful in uncovering logical bugs, specifically deserialization vulnerabilities in Java-based applications, with Crystallizer. Crystallizer uses a hybrid analysis methodology: it first infers an over-approximate set of possible payloads through static analysis (to constrain the search space), then uses dynamic analysis to instantiate concrete payloads as proofs-of-concept of deserialization vulnerabilities.

Across these four diverse areas we thoroughly demonstrate how incorporating domain knowledge can massively improve bug-finding capabilities. Our research has developed tooling that not only outperforms the existing state of the art in efficient bug discovery (with speeds up to 117% faster), but has also uncovered 18 previously unknown bugs, with five CVEs assigned.
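The grammar-automaton idea can be sketched as follows — a hedged illustration, not Gramatron's code; the automaton, its states, and the toy grammar are invented for demonstration:

```python
import random

# States map to choices of (emitted terminal, next state); None marks acceptance.
AUTOMATON = {
    "S":    [("var ", "NAME")],
    "NAME": [("x", "EQ"), ("y", "EQ")],
    "EQ":   [(" = ", "EXPR")],
    "EXPR": [("1", "END"), ("x + 1", "END")],
    "END":  [(";", None)],
}

def generate(rng, state="S"):
    # A random walk over the automaton emits one grammar-valid test input
    # per traversal, with no intermediate parse-tree-to-string serialization.
    out = []
    while state is not None:
        terminal, state = rng.choice(AUTOMATON[state])
        out.append(terminal)
    return "".join(out)

rng = random.Random(7)
for _ in range(3):
    print(generate(rng))   # e.g. "var y = x + 1;"
```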
17

Remote Software Guard Extension (RSGX)

Sundarasamy, Abilesh 21 December 2023 (has links)
With the constant evolution of hardware architecture extensions aimed at enhancing software security, a notable availability gap arises due to the proprietary nature and design-specific characteristics of these features, resulting in CPU-specific implementations. This gap particularly affects low-end embedded devices that often rely on CPU cores with limited resources. Addressing this challenge, this thesis focuses on providing access to hardware-based Trusted Execution Environments (TEEs) for devices lacking TEE support. RSGX is a framework crafted to transparently offload security-sensitive workloads to an enclave hosted on a remote centralized edge server. Operating as clients, low-end TEE-lacking devices can harness the hardware security features provided by TEEs of either the same or a different architecture. RSGX is tailored to accommodate applications developed with diverse TEE-utilizing SDKs, such as the Open Enclave SDK, the Intel SGX SDK, and many others. This facilitates easy integration of existing enclave-based applications, and the framework allows users to utilize its features without any source code modifications, ensuring transparent offloading behind the scenes. For the evaluation, we set up an edge computing environment to execute C/C++ applications, including two overhead micro-benchmarks and four popular open-source applications. The evaluation of RSGX encompasses an analysis of its security benefits and a measurement of its performance overhead. We demonstrate that RSGX has the potential to mitigate a range of Common Vulnerabilities and Exposures (CVEs), ensuring the secure execution of confidential computations on hybrid and distributed machines with an acceptable performance overhead. / Master of Science / A vast amount of data is generated globally every day, most of which contains critical information and is often linked to individuals. Therefore, safeguarding data is essential at every stage, whether during transmission, storage, or processing. Different security principles are applied to protect data at these various stages; this thesis focuses on data in use. To protect data in use, several technologies are available; one of them is confidential computing, a hardware-based security technology. However, confidential computing is limited to certain high-end computing machines, and many resource-constrained devices do not support it. In this thesis, we propose RSGX, a framework to offload secured computation to a confidential-computing-capable remote device with a Security as a Service (SECaaS) approach. Through RSGX, users can leverage confidential computing capabilities for any of their applications, based on any SDK. RSGX provides this capability transparently and securely. Our evaluation shows that by adopting RSGX, users can mitigate several security vulnerabilities, thereby enhancing security with a reasonable overhead. / Master of Science
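The offloading pattern itself can be sketched in a few lines — a heavily hedged illustration, not RSGX's protocol; the hostname, port, message format, and function name are all hypothetical:

```python
import json
import socket

# The TEE-less client ships a request to an edge server; in an RSGX-like
# deployment the server would dispatch it into an SGX enclave and return
# only the result, so the sensitive computation never runs on the client.
def remote_secure_call(fn, args, host="edge.example.org", port=7000):
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps({"fn": fn, "args": args}).encode())
        return json.loads(conn.recv(65536).decode())

# Hypothetical usage: ask the remote enclave to seal a credential.
# result = remote_secure_call("seal_key", ["user-credential"])
```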
18

Evolution of Security in Automated Migration Processes

Tayefeh Morsal, Seyed Parsa January 2022 (has links)
As users’ requirements change in today’s fast-paced business market, computer software has to adapt to new hardware, technologies, and requirements to keep up. Therefore, to avoid depreciation and obsolescence, which can have detrimental effects on a product, software needs to be constantly maintained and, once past a certain point in its lifecycle, needs to be migrated or re-developed from scratch. Automated migration enables software vendors to decrease the cost of the migration process through source code generation. However, as security is a crucial requirement in any system, there is no guarantee that security requirements satisfied in the original system remain satisfied in the migrated software. It is therefore critical to study the evolution of security throughout the automated migration process, both to predict where new security vulnerabilities may emerge and to understand the scale on which security is affected. / Thesis / Master of Applied Science (MASc)
19

On the Impact and Defeat of Regular Expression Denial of Service

Davis, James Collins 28 May 2020 (has links)
Regular expressions (regexes) are a widely used yet little-studied software component. Engineers use regexes to match domain-specific languages of strings. Unfortunately, many regex engine implementations perform these matches with worst-case polynomial or exponential time complexity in the length of the string. Because they are commonly used in user-facing contexts, super-linear regexes are a potential denial-of-service vector known as Regular expression Denial of Service (ReDoS). Part I gives the necessary background to understand this problem. In Part II of this dissertation, I present the first large-scale empirical studies of super-linear regex use. Guided by case studies of ReDoS issues in practice (Chapter 3), I report that the risk of ReDoS affects up to 10% of the regexes used in practice (Chapter 4), and that these findings generalize to software written in eight popular programming languages (Chapter 5). ReDoS appears to be a widespread vulnerability, motivating the consideration of defenses. In Part III I present the first systematic comparison of ReDoS defenses. Based on the necessary conditions for ReDoS, a defense can be erected at the application level, the regex engine level, or the framework/runtime level. In my experiments I report that application-level defenses are difficult and error-prone to implement (Chapter 6), that finding a compatible higher-performing regex engine is unlikely (Chapter 7), that optimizing an existing regex engine using memoization incurs (perhaps acceptable) space overheads (Chapter 8), and that incorporating resource caps into the framework or runtime is feasible but faces barriers to adoption (Chapter 9). In Part IV of this dissertation, we reflect on our findings. By leveraging empirical software engineering techniques, we have exposed the scope of potential ReDoS vulnerabilities and given strong motivation for a solution. To assist practitioners, we have conducted a systematic evaluation of the solution space. We hope that our findings assist in the elimination of ReDoS, and more generally that we have provided a case study in the value of data-driven software engineering. / Doctor of Philosophy / Software commonly performs pattern-matching tasks on strings. For example, when validating input in a Web form, software commonly tests whether an input fits the pattern of a credit card number or an email address. Software engineers often implement such string-based pattern matching using a tool called regular expressions (regexes). Regexes permit software engineers to succinctly describe the sequences of characters that make up common "languages" like the set of valid Visa credit card numbers (16 digits, starting with a 4) or the set of valid emails (some characters, an '@', and more characters including at least one '.'). Using regexes on untrusted user input in this manner can be a dangerous decision because some regexes take a long time to evaluate. These slow regexes can be exploited by attackers in order to carry out a denial-of-service attack known as Regular expression Denial of Service (ReDoS). To date, ReDoS has led to outages affecting hundreds of websites and tens of thousands of users. While the risk of ReDoS is well known in theory, in this dissertation I present the first large-scale empirical studies measuring the extent to which slow regular expressions are used in practice.
I found that about 10% of real regular expressions extracted from hundreds of thousands of software projects can exhibit longer-than-expected worst-case behavior in popular programming languages including JavaScript, Python, and Ruby. Motivated by these findings, I then consider a range of ReDoS solution approaches: application refactoring, regex engine replacement, regex engine optimization, and resource caps. I report that application refactoring is error-prone, and that regex engine replacement seems unlikely due to incompatibilities between regex engines. Some resource caps are more successful than others, but all resource-cap approaches struggle with adoption. My novel regex engine optimizations seem the most promising approach for protecting existing regex engines, offering significant time reductions with acceptable space overheads.
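The super-linear behavior at the heart of ReDoS is easy to observe — a minimal demonstration (the specific pattern is a textbook example, not one drawn from this dissertation's corpus):

```python
import re
import time

# Nested quantifiers make this regex super-linear in backtracking engines
# such as Python's `re`: on a failing input, matching time roughly doubles
# with every extra 'a'.
pattern = re.compile(r"^(a+)+$")

for n in (16, 18, 20, 22):
    subject = "a" * n + "!"      # trailing '!' forces the overall match to fail
    start = time.perf_counter()
    pattern.match(subject)
    print(n, f"{time.perf_counter() - start:.4f}s")
```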
20

Investigating the Reproducibility of NPM packages

Goswami, Pronnoy 19 May 2020 (has links)
The meteoric increase in the popularity of JavaScript and a large developer community have led to the emergence of a large ecosystem of third-party packages available via the Node Package Manager (NPM) repository, which contains over one million published packages and witnesses a billion daily downloads. Most developers download these pre-compiled published packages from the NPM repository instead of building them from the available source code. Unfortunately, recent articles have revealed repackaging attacks on NPM packages. To mount such attacks the attackers primarily follow three steps: (1) download the source code of a highly depended-upon NPM package, (2) inject malicious code, and (3) publish the modified package either as a misnamed package (i.e., a typosquatting attack) or as the official package on the NPM repository using compromised maintainer credentials. These attacks highlight the need to verify the reproducibility of NPM packages. Reproducible Builds is a concept that allows the verification of build artifacts for pre-compiled packages by re-building the packages using the same build environment configuration documented by the package maintainers. This motivated us to conduct an empirical study (1) to examine the reproducibility of NPM packages, (2) to assess the influence of any non-reproducible packages, and (3) to explore the reasons for non-reproducibility. First, we downloaded all versions/releases of the 226 most-depended-upon NPM packages and then built each version from the source code available on GitHub. Second, we applied diffoscope, a differencing tool, to compare the versions we built against the versions downloaded from the NPM repository. Finally, we systematically investigated the reported differences. At least one version of 65 packages was found to be non-reproducible, and these non-reproducible packages are downloaded millions of times per week, which could impact a large number of users. Based on our manual inspection and static analysis, most reported differences were semantically equivalent but syntactically different. Such differences result from non-deterministic factors in the build process. We also infer that semantic differences are introduced by shortcomings in JavaScript uglifiers. Our research reveals the challenges of verifying the reproducibility of NPM packages with existing tools, pinpoints failures using case studies, and sheds light on future directions for developing better verification tools. / Master of Science / Software packages are distributed as pre-compiled binaries to facilitate software development. There are various package repositories for various programming languages, such as NPM (JavaScript), pip (Python), and Maven (Java). Developers install these pre-compiled packages in their projects to implement certain functionality. Additionally, these package repositories allow developers to publish new packages, helping the developer community reduce delivery time and enhance the quality of the software product. Unfortunately, recent articles have revealed an increasing number of attacks on package repositories, and the pre-compiled binaries that developers trust may contain malicious code. To address this challenge, we conduct an empirical investigation to analyze the reproducibility of NPM packages for the JavaScript ecosystem.
Reproducible Builds is a concept that allows any individual to verify the build artifacts by replicating the build process of software packages. For instance, if developers could verify that the build artifacts of the pre-compiled software packages available in the NPM repository are identical to the ones generated when they individually build that specific package, they could mitigate and be aware of vulnerabilities in the software packages. The build process is usually described in configuration files such as package.json and the Dockerfile. We chose the NPM registry for our study for three primary reasons: (1) it is the largest package repository, (2) JavaScript is the most widely used programming language, and (3) no prior dataset or investigation had been produced by researchers. We took a two-step approach in our study: (1) dataset collection, and (2) source-code differencing for each pair of software package versions. For the dataset collection phase, we downloaded all available releases/versions of 226 popularly used NPM packages, and for the code-differencing phase, we used an off-the-shelf tool called diffoscope. We revealed some interesting findings. First, at least one version of 65 packages was found to be non-reproducible, and these packages have millions of downloads per week. Second, we found 50 package versions to have divergent program semantics, which highlights potential vulnerabilities in the source code and improper build practices. Third, we found that the uglification of JavaScript code introduces non-determinism in the build process. Our research sheds light on the challenges of verifying the reproducibility of NPM packages with the current state-of-the-art tools and on the need to develop better verification tools in the future. To conclude, we believe that our work is a step towards realizing the reproducibility of NPM packages and making the community aware of the implications of non-reproducible build artifacts.
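The core comparison step can be sketched simply — a hedged illustration of the study's approach rather than its tooling (the `.tgz` file names are hypothetical; diffoscope goes much further and explains *how* differing files diverge):

```python
import hashlib
import tarfile

# Hash every file inside a package tarball so the registry artifact and a
# rebuilt-from-source artifact can be compared file by file.
def digest_tarball(path):
    digests = {}
    with tarfile.open(path) as tar:
        for member in tar.getmembers():
            if member.isfile():
                data = tar.extractfile(member).read()
                digests[member.name] = hashlib.sha256(data).hexdigest()
    return digests

registry = digest_tarball("package-registry.tgz")  # e.g. fetched via `npm pack <name>`
rebuilt = digest_tarball("package-rebuilt.tgz")    # e.g. built from the GitHub tag
for name in sorted(set(registry) | set(rebuilt)):
    if registry.get(name) != rebuilt.get(name):
        print("differs:", name)
```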
