1

Automatic Hardening against Dependability and Security Software Bugs / Automatisches Härten gegen Zuverlässigkeits- und Sicherheitssoftwarefehler

Süßkraut, Martin, 15 June 2010 (PDF)
Software has bugs, and these bugs can lead to failures. Dependability and security failures in particular are a serious threat to software users. This thesis introduces four novel approaches for automatically hardening software at the user's site. Automatic hardening removes bugs from software that is already deployed. All four approaches are automated, i.e., they require little support from the end user; two of them, however, need some support from the software developer. The approaches fall into two groups: error toleration and bug removal. The two error-toleration approaches focus primarily on fast detection of security errors; once an error is detected, it can be tolerated with well-known existing techniques. The two bug-removal approaches remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, such as the Apache web server.
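To make the detect-then-tolerate pattern from the abstract concrete, here is a minimal sketch in C. It is not taken from the thesis; the learned allowlist, checked_syscall, and the tolerate() fallback are all hypothetical stand-ins. The idea is that a system call outside a learned per-installation model is treated as a detected security error and handled by a toleration action rather than letting the process fail.

    /* Sketch only: detect a system call outside a learned model and
     * tolerate it instead of crashing. All names are hypothetical. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical "personalized model": system-call numbers observed
     * for this installation during a learning phase. */
    static const int allowed[] = { 0 /* read */, 1 /* write */, 60 /* exit */ };

    static bool model_allows(int nr) {
        for (size_t i = 0; i < sizeof allowed / sizeof *allowed; i++)
            if (allowed[i] == nr)
                return true;
        return false;
    }

    /* Toleration action: here just log and deny. A real system could
     * instead roll back to a checkpoint or drop the offending request. */
    static long tolerate(int nr) {
        fprintf(stderr, "syscall %d outside learned model: denied\n", nr);
        return -1;
    }

    long checked_syscall(int nr) {
        if (!model_allows(nr))
            return tolerate(nr);  /* detected: tolerate instead of failing */
        /* otherwise forward to the real system call, e.g. syscall(nr, ...) */
        return 0;
    }

The point is the division of labor the abstract describes: detection sits on the fast path, while toleration reuses well-known existing mechanisms.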
2

Automatic Hardening against Dependability and Security Software Bugs

Süßkraut, Martin, 21 May 2010
Software has bugs, and these bugs can lead to failures. Dependability and security failures in particular are a serious threat to software users. This thesis introduces four novel approaches for automatically hardening software at the user's site. Automatic hardening removes bugs from software that is already deployed. All four approaches are automated, i.e., they require little support from the end user; two of them, however, need some support from the software developer. The approaches fall into two groups: error toleration and bug removal. The two error-toleration approaches focus primarily on fast detection of security errors; once an error is detected, it can be tolerated with well-known existing techniques. The two bug-removal approaches remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, such as the Apache web server.

Table of contents:

1 Introduction
  1.1 Terminology
  1.2 Automatic Hardening
  1.3 Contributions
  1.4 Theses
2 Enforcing Dynamic Personalized System Call Models
  2.1 Related Work
  2.2 SwitchBlade Architecture
  2.3 System Call Model
    2.3.1 Personalization
    2.3.2 Randomization
  2.4 Model Learner
    2.4.1 Problem: False Positives
    2.4.2 Data-Flow-Based Learner
  2.5 Taint Analysis
    2.5.1 TaintCheck
    2.5.2 Escaping Valgrind
    2.5.3 Replay of Requests
  2.6 Model Enforcement
    2.6.1 Loading the System Call Model
    2.6.2 Checking System Calls
  2.7 Evaluation
    2.7.1 Synthetic Exploits
    2.7.2 Apache
    2.7.3 Exploits
    2.7.4 Micro Benchmarks
    2.7.5 Model Size
    2.7.6 Stateful Application
  2.8 Conclusion
3 Speculation for Parallelizing Runtime Checks
  3.1 Approach
    3.1.1 Compiler Infrastructure
    3.1.2 Runtime Support
  3.2 Related Work
  3.3 Deterministic Replay and Speculation
    3.3.1 Interface
    3.3.2 Implementation
  3.4 Switching Code Bases
    3.4.1 Example
    3.4.2 Integration with parexc chkpnt
    3.4.3 Code Transformations
    3.4.4 Stack-local Variables
  3.5 Speculative Variables
    3.5.1 Interface
    3.5.2 Deadlock Avoidance
    3.5.3 Storage Back-ends
  3.6 Parallelized Checkers
    3.6.1 Out-of-Bounds Checks
    3.6.2 Data Flow Integrity Checks
    3.6.3 FastAssert Checker
    3.6.4 Runtime Checking in STM-Based Applications
  3.7 Evaluation
    3.7.1 Performance
    3.7.2 Checking Already Parallelized Applications
    3.7.3 ParExC Overhead
  3.8 Conclusion
4 Automatically Finding and Patching Bad Error Handling
  4.1 Related Work
  4.2 Overview
  4.3 Learning Library-Level Error Return Values from System Call Error Injection
    4.3.1 Components
    4.3.2 Efficient Error Injection
    4.3.3 Obtain OS Error Specification
  4.4 Finding Bad Error Handling
    4.4.1 Argument Recording
    4.4.2 Systematic Error Injection
    4.4.3 Static Analysis
  4.5 Fast Error Injection using Virtual Machines
    4.5.1 The fork Approach
    4.5.2 Virtual Machines for Fault Injection
  4.6 Patching Bad Error Handling
    4.6.1 Error Value Mapping
    4.6.2 Preallocation
    4.6.3 Patch Generation
  4.7 Evaluation
    4.7.1 Measurements
  4.8 Conclusion
5 Robustness and Security Hardening of COTS Software Libraries
  5.1 Related Work
  5.2 Approach
  5.3 Test Values
    5.3.1 Ballista Type System
    5.3.2 Meta Types
    5.3.3 Feedback
    5.3.4 Type Templates
    5.3.5 Type Characteristics
    5.3.6 Reducing the Number of Test Cases
    5.3.7 Other Sources of Test Values
  5.4 Checks
    5.4.1 Check Templates
    5.4.2 Parameterized Check Templates
  5.5 Protection Hypotheses
    5.5.1 Minimizing the Truth Table
    5.5.2 Discussion
  5.6 Evaluation
    5.6.1 Coverage
    5.6.2 Autocannon as Dependability Benchmark
    5.6.3 Protection Hypotheses
  5.7 Conclusion
6 Conclusion
  6.1 Publications
References
List of Figures
List of Tables
Listings
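Chapter 4 of the table of contents above centers on error injection at the library level. As a rough illustration (an assumption-laden sketch, not the thesis's implementation), the widely used LD_PRELOAD interposition technique can force a libc call to fail on purpose so one can observe whether the application handles the error; the INJECT_AT variable and inject_now() policy below are hypothetical.

    /* Sketch of error injection via LD_PRELOAD interposition.
     * Build: gcc -shared -fPIC -o inject.so inject.c -ldl
     * Run:   LD_PRELOAD=./inject.so INJECT_AT=3 ./app
     * All names here (INJECT_AT, inject_now) are illustrative. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical policy: fail the Nth read() call in this process. */
    static int inject_now(void) {
        static int n = 0;
        const char *at = getenv("INJECT_AT");
        return at && ++n == atoi(at);
    }

    ssize_t read(int fd, void *buf, size_t count) {
        if (inject_now()) {
            errno = EIO;   /* simulate an I/O error at the libc boundary */
            return -1;     /* a caller with bad error handling breaks here */
        }
        ssize_t (*real_read)(int, void *, size_t) =
            (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");
        return real_read(fd, buf, count);
    }

Running the target once per injection point and watching for crashes or silent misbehavior approximates the "finding bad error handling" step; the patching steps listed in chapter 4 then deal with the bad handlers that are found.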
