  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Isolating legacy applications with Lind

Matthews, Christopher James 27 March 2013 (has links)
Legacy applications, often written in C, can be riddled with bugs. Sarcastically referred to as "veritable bug ranches", pre-existing legacy applications of substantial size and complexity are still commonplace. In this dissertation, I motivate, build and evaluate Lind, a sandbox for legacy applications. Lind decreases the impact of buggy programs on the system that runs them, without changing their code or sacrificing the non-functional characteristics (performance, portability, light weight and ease of deployment) that are the primary motivators for legacy software written in C. Lind borrows many principles of secure system design to isolate legacy applications so that they cannot affect the rest of the system. To assess Lind, I evaluate how well legacy applications perform in it, how strong its isolation is, and how easy it is to port applications to it, and conclude that Lind is a viable proof-of-concept platform for legacy applications. / Graduate / 0984
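The abstract does not detail Lind's internal design, so the following is only a generic illustration of the containment idea: run an unmodified, possibly buggy program as a separate, resource-limited child process so that a runaway bug cannot exhaust the whole system. The binary path ./legacy_app and the specific limits are invented for this sketch; only Python's standard resource and subprocess modules are used, and this is not Lind's mechanism.

    # Generic illustration of confining a buggy legacy program (not Lind's design):
    # run it in a child process with hard resource limits.
    import resource
    import subprocess

    def limit_resources():
        # Runs in the child just before exec: cap CPU time and address space.
        resource.setrlimit(resource.RLIMIT_CPU, (10, 10))            # 10 s of CPU
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MiB of memory

    # './legacy_app' is a placeholder for some untrusted legacy binary.
    proc = subprocess.run(["./legacy_app"], preexec_fn=limit_resources,
                          capture_output=True, timeout=30)
    print("exit status:", proc.returncode)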
2

Software debugging using the debugger SAM4E Xplained Pro

Manoh, Nadia, Abdullah, Hamoud January 2018 (has links)
Embedded systems are found in almost every device used in our daily lives, including cell phones, refrigerators, and cars. Some devices are significantly more sensitive than others, so a bug in a system may cause serious harm, even the loss of human lives, or no harm at all. Software testing and software debugging are performed to reduce the number of bugs in a system. The Computer Science and Mobile IT programme (Datateknik och Mobil IT) at Malmö University does not focus on teaching software debugging with a debugger. This thesis therefore presents a debugging lab created for the programme's students, intended to help them learn how to use the SAM4E Xplained Pro debugger to locate bugs. Four students performed the debugging lab, finding and fixing 75 percent of the bugs.
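The lab in the thesis targets the SAM4E Xplained Pro hardware debugger, which cannot be reproduced here; as a rough stand-in for the same activity (setting a breakpoint, stepping, and inspecting variables to locate a bug), here is a sketch using Python's built-in pdb debugger on a deliberately buggy function. The function and its off-by-one bug are invented for the example.

    # Tiny illustration of locating a bug with a debugger (pdb standing in
    # for the hardware debugger used in the thesis lab).
    import pdb

    def average(values):
        total = 0
        for i in range(len(values) - 1):   # bug: the last element is never added
            total += values[i]
        return total / len(values)

    if __name__ == "__main__":
        pdb.set_trace()                    # breakpoint: step with 'n', inspect with 'p total'
        print(average([2, 4, 6]))          # prints 2.0 instead of the expected 4.0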
3

Maintaining Web Applications Integrity Running on RADIUM

Ur-Rehman, Wasi 08 1900 (has links)
Computer security attacks take place because of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Even though software companies invest millions of dollars in the research and development of software designs, security risks remain at large; in some cases applications carry vulnerabilities for years before they are identified. A recent example is the Heartbleed bug in OpenSSL/TLS. In today's world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely that applications run without flaws. Attackers exploit these vulnerabilities and bugs to threaten privacy without leaving a trace. The most critical vulnerabilities are those that affect the integrity of a software application, because integrity is directly linked to the credibility of the application and the data it contains. This thesis presents a solution for maintaining the integrity of web applications running on RADIUM by using Daikon. Daikon generates invariants, and these invariants are used to maintain the integrity of the web application and to check its correct behavior at run time on the RADIUM architecture in the presence of an attack or malware. The solution uses data invariants and program-flow invariants to protect the integrity of the web application against such attacks or malware, and the behavior of the proposed invariants is checked at run time using the LibVMI/Volatility memory-introspection tools. This is a novel approach and a proof of concept toward maintaining web application integrity on RADIUM.
4

Analyses de terminaison des calculs flottants / Termination Analysis of Floating-Point Computations

Maurica Andrianampoizinimaro, Fonenantsoa 08 December 2017 (has links)
The infamous Blue Screen of Death of Windows appropriately introduces the problem at hand. This bug is often caused by a non-terminating device driver: the program runs forever, blocking in the process all the resources it allocated for its calculations. This thesis develops techniques that make it possible to decide, before execution, whether a given program terminates for every possible value of its inputs. In particular, we are interested in programs that manipulate floating-point numbers. These numbers are ubiquitous in current processors and are used by nearly all software developers, yet they are often misunderstood and, hence, a source of bugs. Indeed, floating-point computations are tainted with errors, because they are performed within a finite amount of memory. For example, although true in the reals, the equality 0.2 + 0.3 = 0.5 is false in the floats. Not handled properly, these errors can lead to catastrophic events, such as the Patriot missile incident that killed 28 people. The theories we develop are illustrated, and put to the test, by code snippets taken from widely used programs. Notably, we were able to exhibit termination bugs due to incorrect floating-point computations in some packages of the Ubuntu distribution.
5

Automatic Hardening against Dependability and Security Software Bugs / Automatisches Härten gegen Zuverlässigkeits- und Sicherheitssoftwarefehler

Süßkraut, Martin 15 June 2010 (has links) (PDF)
It is a fact that software has bugs. These bugs can lead to failures. Especially dependability and security failures are a great threat to software users. This thesis introduces four novel approaches that can be used to automatically harden software at the user's site. Automatic hardening removes bugs from already deployed software. All four approaches are automated, i.e., they require little support from the end-user. However, some support from the software developer is needed for two of these approaches. The presented approaches can be grouped into error toleration and bug removal. The two error toleration approaches are focused primarily on fast detection of security errors. When an error is detected it can be tolerated with well-known existing approaches. The other two approaches are bug removal approaches. They remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, like the Apache web-server.
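The abstract stays at the level of error toleration versus bug removal; one concrete technique from the thesis (chapter 4 of its outline, "Automatically Finding and Patching Bad Error Handling", reproduced under the next record) exposes bad error handling by injecting errors into lower-level calls. The fragment below is only a loose, self-contained imitation of that idea in Python, not the thesis's actual tooling: it forces open() to fail and reports whether the code under test copes. The function and file names are invented.

    # Loose imitation of error injection to expose bad error handling:
    # make a library call fail on purpose and observe whether the caller copes.
    import builtins
    from unittest import mock

    def read_config(path):
        # Code under test: no error handling, so an injected failure escapes.
        with open(path) as f:
            return f.read()

    def inject_open_failure(func, *args):
        with mock.patch.object(builtins, "open", side_effect=OSError("injected")):
            try:
                func(*args)
                return "handled or not triggered"
            except OSError:
                return "bad error handling: OSError escaped to the caller"

    print(inject_open_failure(read_config, "/tmp/app.conf"))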
6

Automatic Hardening against Dependability and Security Software Bugs

Süßkraut, Martin 21 May 2010 (has links)
It is a fact that software has bugs. These bugs can lead to failures. Especially dependability and security failures are a great threat to software users. This thesis introduces four novel approaches that can be used to automatically harden software at the user's site. Automatic hardening removes bugs from already deployed software. All four approaches are automated, i.e., they require little support from the end-user. However, some support from the software developer is needed for two of these approaches. The presented approaches can be grouped into error toleration and bug removal. The two error toleration approaches are focused primarily on fast detection of security errors. When an error is detected it can be tolerated with well-known existing approaches. The other two approaches are bug removal approaches. They remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, like the Apache web-server.
Contents:
1 Introduction: 1.1 Terminology; 1.2 Automatic Hardening; 1.3 Contributions; 1.4 Theses
2 Enforcing Dynamic Personalized System Call Models: 2.1 Related Work; 2.2 SwitchBlade Architecture; 2.3 System Call Model; 2.4 Model Learner; 2.5 Taint Analysis; 2.6 Model Enforcement; 2.7 Evaluation; 2.8 Conclusion
3 Speculation for Parallelizing Runtime Checks: 3.1 Approach; 3.2 Related Work; 3.3 Deterministic Replay and Speculation; 3.4 Switching Code Bases; 3.5 Speculative Variables; 3.6 Parallelized Checkers; 3.7 Evaluation; 3.8 Conclusion
4 Automatically Finding and Patching Bad Error Handling: 4.1 Related Work; 4.2 Overview; 4.3 Learning Library-Level Error Return Values from System Call Error Injection; 4.4 Finding Bad Error Handling; 4.5 Fast Error Injection using Virtual Machines; 4.6 Patching Bad Error Handling; 4.7 Evaluation; 4.8 Conclusion
5 Robustness and Security Hardening of COTS Software Libraries: 5.1 Related Work; 5.2 Approach; 5.3 Test Values; 5.4 Checks; 5.5 Protection Hypotheses; 5.6 Evaluation; 5.7 Conclusion
6 Conclusion: 6.1 Publications
References; List of Figures; List of Tables; Listings
