191
FUZZING HARD-TO-COVER CODE
Hui Peng (10746420) 06 May 2021 (has links)
Fuzzing is a simple yet effective approach to discovering bugs by repeatedly testing the target system with randomly generated inputs. In this thesis, we identify several limitations in state-of-the-art fuzzing techniques: (1) the coverage wall issue: fuzzer-generated inputs cannot bypass complex sanity checks in the target programs and therefore cannot cover the code paths protected by such checks; (2) the inability to adapt to the interfaces through which fuzzer-generated inputs are injected, one important example being the software/hardware interface between drivers and their devices; and (3) the dependency on code coverage feedback, which makes fuzzing hard to apply to targets where collecting code coverage is challenging (due to proprietary components or special software design).

To address the coverage wall issue, we propose T-Fuzz, a novel approach that attacks the issue from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the coverage wall is reached, a lightweight, dynamic-tracing-based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs to be discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program.

By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge (CGC) dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, we found 4 new bugs in previously-fuzzed programs and libraries.
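To make the coverage wall concrete, consider a minimal C sketch (an illustrative toy, not code from the thesis): a fuzzer mutating bytes blindly has roughly a 2^-32 chance per input of guessing the 4-byte magic value, so the buggy code behind the check is effectively unreachable.

```c
#include <stddef.h>
#include <string.h>

void process(const unsigned char *data, size_t len)
{
    if (len < 8)
        return;
    /* Coverage wall: random mutation almost never produces this
     * 4-byte magic value, so everything below stays uncovered. */
    if (memcmp(data, "MAGI", 4) != 0)
        return;

    char buf[4];
    /* Bug hidden behind the wall: overflows buf whenever len > 8. */
    memcpy(buf, data + 4, len - 4);
}
```

A T-Fuzz-style transformation would detect that fuzzer-generated inputs consistently fail the memcmp check and remove or negate it in the binary, so fuzzing continues into the guarded code; any crash found this way must then be vetted on the original program, which is the role of the symbolic execution-based post-processing described above.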
To address the inability to adapt to interfaces, we propose USBFuzz. We target the USB interface, fuzzing across the software/hardware barrier. USBFuzz uses device emulation to inject fuzzer-generated inputs into the drivers under test, and applies coverage-guided fuzzing to device drivers when code coverage collection is supported by the kernel. At its core, USBFuzz emulates a special USB device that provides data to the device driver when it performs I/O operations. This allows us to fuzz the input space of drivers from the device's perspective, an angle that is difficult to achieve with real hardware. USBFuzz discovered 53 bugs in Linux (of which 37 are new, and 36 are memory bugs of high security impact, potentially allowing arbitrary reads or writes in the kernel address space), one bug in FreeBSD, four bugs (resulting in Blue Screens of Death) in Windows, and three bugs (two causing an unplanned restart, one freezing the system) in macOS.

To break the dependency on code coverage feedback, we propose WebGLFuzzer, which fuzzes the WebGL interface (a set of JavaScript APIs in browsers that enables high-performance graphics rendering by taking advantage of GPU acceleration), where code coverage collection is challenging. Internally, WebGLFuzzer uses a log-guided fuzzing technique: it does not depend on code coverage feedback but instead uses the log messages emitted by browsers to guide its input mutation. Compared with coverage-guided fuzzing, log-guided fuzzing can perform more meaningful mutations under the guidance of the log messages. To this end, given a log message emitted by the browser, WebGLFuzzer uses static analysis to identify which argument to mutate or which API call to insert into the current program to fix the internal WebGL program state. WebGLFuzzer is under evaluation; so far, it has found 6 bugs, one of which is able to freeze the X Server.
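As an illustration of the device-side fuzzing angle USBFuzz takes, consider the first data a driver consumes from a new device, its descriptors. The sketch below uses the standard USB 2.0 device-descriptor layout (packing attributes omitted); the response handler is a hypothetical simplification, not USBFuzz's actual interface, which emulates a full device inside the hypervisor.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Standard USB 2.0 device descriptor fields (18 bytes on the wire).
 * A fuzzed emulated device returns mutated bytes here; the driver and
 * the USB core must survive arbitrary values in every field. */
struct usb_device_descriptor {
    uint8_t  bLength, bDescriptorType;
    uint16_t bcdUSB;
    uint8_t  bDeviceClass, bDeviceSubClass, bDeviceProtocol;
    uint8_t  bMaxPacketSize0;
    uint16_t idVendor, idProduct, bcdDevice;
    uint8_t  iManufacturer, iProduct, iSerialNumber;
    uint8_t  bNumConfigurations;
};

/* Hypothetical handler for the device's answer to GET_DESCRIPTOR:
 * copy fuzzer-provided bytes into the response instead of a fixed,
 * well-formed descriptor. */
size_t handle_get_descriptor(const uint8_t *fuzz, size_t fuzz_len,
                             uint8_t *resp, size_t resp_len)
{
    size_t n = fuzz_len < resp_len ? fuzz_len : resp_len;
    memcpy(resp, fuzz, n);   /* driver sees attacker-controlled fields */
    return n;
}
```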
192
Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers
Taegyu Kim (10716420) 06 May 2021 (has links)
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAV security and safety issues can arise not only in the “cyber” domain (e.g., control software) or the “physical” domain (e.g., the vehicle control model) individually, but also in their interplay. Unfortunately, existing work has focused mainly on either “cyber-centric” or “control-centric” approaches; such a single-domain focus can overlook the security threats caused by the interplay between the cyber and physical domains.

In this thesis, we present cyber-physical analysis and hardening to secure RAV controllers.
Through a combination of program analysis and vehicle control modeling, we first developed
novel techniques to (1) connect both cyber and physical domains and then (2) analyze
individual domains and their interplay. Specifically, we describe how to detect bugs after
RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing
(RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As
a result, we have found 91 new bugs in modern RAV control programs; their developers have confirmed 32 and patched 11 of them.
193
TOWARDS TRUSTWORTHY ON-DEVICE COMPUTATION
Heejin Park (12224933) 20 April 2022 (has links)
Driven by breakthroughs in mobile and IoT devices, on-device computation has become promising. Meanwhile, there is growing concern over its security: on-device computation faces many threats in the wild while not being supervised by security experts, and it is highly likely to touch users' privacy-sensitive information. Towards trustworthy on-device computation, we present novel system designs focusing on two key applications: stream analytics, and machine learning training and inference.

First, we introduce StreamBox-TZ (SBT), a secure stream analytics engine for ARM-based edge platforms. SBT contributes a data plane that isolates only the analytics' data and computation in a trusted execution environment (TEE). By design, SBT achieves a minimal trusted computing base (TCB) inside the TEE, incurring modest security overhead.

Second, we design a minimal GPU software stack (50KB), called GPURip. GPURip allows developers to record GPU computation ahead of time, to be replayed later on client devices. In doing so, GPURip excludes the original GPU stack from run time, eliminating its wide attack surface and exploitable vulnerabilities.

Finally, we propose CoDry, a novel approach for a TEE to record GPU computation remotely. CoDry provides online GPU recording in a safe and practical way: it hosts GPU stacks in the cloud, which collaboratively perform a dry run with client GPU models. To overcome frequent interactions over a wireless connection, CoDry implements a suite of key optimizations.
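The record-and-replay idea behind GPURip and CoDry can be sketched as follows; the structure names and the register-write model are simplifying assumptions for illustration, not GPURip's actual interfaces (real GPU work also involves DMA buffers, interrupts, and synchronization).

```c
#include <stddef.h>
#include <stdint.h>

/* One captured GPU register write. */
typedef struct {
    uint32_t reg;   /* MMIO register offset (bytes) */
    uint32_t val;   /* value written */
} gpu_write_t;

#define MAX_TRACE 4096
static gpu_write_t trace[MAX_TRACE];
static size_t trace_len;

/* Record time (ahead of time, off the device): the full GPU stack runs
 * once and each register write is captured instead of executed on the
 * client's hardware. */
void record_write(uint32_t reg, uint32_t val)
{
    if (trace_len < MAX_TRACE)
        trace[trace_len++] = (gpu_write_t){ reg, val };
}

/* Replay time (on the client): a tiny replayer only pushes the recorded
 * writes to the device, so the original GPU stack, with its wide attack
 * surface, never executes on the device. */
void replay(volatile uint32_t *mmio)
{
    for (size_t i = 0; i < trace_len; i++)
        mmio[trace[i].reg / 4] = trace[i].val;
}
```

CoDry moves the recording step into a TEE-backed cloud service that performs the dry run against the client's GPU model, which is why minimizing round trips over the wireless link matters.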
194
Dependable Wearable Systems
Edgardo A Barsallo Yi (11656702) 09 December 2021 (has links)
As wearable devices, like smartwatches and fitness monitors, gain popularity and are being touted for clinical purposes, evaluating the resilience and security of wearable operating systems (OSes) and their corresponding ecosystems becomes essential. One of the most dominant OSes for wearable devices is Wear OS, created by Google. Wear OS and Android (its counterpart OS for mobile devices) share similar features, but the unique characteristics and uses of wearable devices pose new challenges. For example, wearable applications are generally more dependent on device sensors, have complex communication patterns (both intra-device and inter-device), and are context-aware. Current research efforts on Wear OS have focused more on the efficiency and performance of the OS itself, overlooking the resilience and security of the OS and its ecosystem.

This dissertation introduces a systematic analysis to evaluate Wear OS's resilience and security. The work is divided into two main parts. First, we focus our efforts on developing novel tools to evaluate the robustness of the wearable OS and uncover vulnerabilities and failures in the wearable ecosystem. We provide an assessment and propose techniques to improve the system's overall reliability. Second, we turn our attention to the security and privacy of smart devices. We assess the privacy and security of highly interconnected devices, demonstrate the feasibility of privacy attacks under these scenarios, and propose a defense mechanism to mitigate these attacks.

For the resilience part, we evaluate the overall robustness of the Wear OS ecosystem using a fuzz testing-based tool [DSN2018]. We perform an extensive fault injection study by mutating inter-process communication messages and UI events on a set of popular wearable and mobile applications. The results of our study show similarities in the root causes of failures between Wear OS and Android; however, the distribution of exceptions differs between the two OSes. Further, our study shows evidence that input validation has improved in the Android ecosystem relative to prior studies. Then, we study the impact of the state of a wearable device on the overall reliability of the applications running on Wear OS [MobiSys2020]. We use distinguishing characteristics of wearable apps, such as sensor activation and mobile-wearable communication patterns, to derive a state model, and we use this model to target specific fuzz injection campaigns against a set of popular wearable apps. Our experiments revealed an abundance of improper exception handling in wearable applications and error propagation across mobile and wearable devices. Furthermore, our results unveiled a flawed design of the wearable OS, which caused the device to reboot due to excessive sensor use.

For the security and privacy part, we assess user awareness of privacy risks in scenarios with multiple interconnected devices. Our results show that a significant majority of users have no reservations when granting permissions to their devices, although users tend to be more conservative when granting permissions on their wearables. Based on the results of our study, we demonstrate the practicality of leaking sensitive information inferred about the user by orchestrating an attack using multiple devices. Finally, we introduce a tool based on NLP (Natural Language Processing) techniques that can aid the user in detecting this type of attack.
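The state-targeted campaign selection described in the resilience study can be sketched abstractly; the states and campaign names below are hypothetical illustrations (the actual tool models Wear OS apps and injects Android IPC messages and UI events).

```c
/* Hypothetical sketch of state-targeted fuzz campaign selection. */
typedef enum { IDLE, SENSING, SYNCING } app_state_t;

typedef struct {
    const char *campaign;   /* what to mutate while in this state */
} plan_t;

/* Derive the app state from observable characteristics (sensor
 * activation, mobile-wearable communication), then aim the injection
 * where that state actually consumes input. */
plan_t pick_campaign(int sensors_active, int wearable_link_busy)
{
    app_state_t s = sensors_active     ? SENSING
                  : wearable_link_busy ? SYNCING
                                       : IDLE;
    switch (s) {
    case SENSING: return (plan_t){ "mutate sensor event stream" };
    case SYNCING: return (plan_t){ "mutate mobile-wearable IPC messages" };
    default:      return (plan_t){ "mutate UI events" };
    }
}
```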
195
An Interactive Learning Tool for Early Algebra Education: Design, Implementation, Evaluation and Deployment
Meenakshi Renganathan, Siva 21 September 2017 (has links)
No description available.
196
Development of time and workload methodologies for Micro Saint models of visual display and control systems
Moscovic, Sandra A. 22 December 2005
The Navy, through its Total Quality Leadership (TQL) program, has emphasized the need for objective criteria in making design decisions. Numerous tools are available to help human factors engineers meet the Navy's need. For example, simulation modeling supports objective design decisions without incurring the high costs associated with prototype building and testing. Unfortunately, simulation modeling of human-machine systems is limited by the lack of task completion time and variance data for various objectives. Moreover, no study has explored the use of a simulation model with a Predetermined Time System (PTS) as a valid method for making design decisions for interactive display consoles.
This dissertation concerns the development and validation of a methodology for incorporating a PTS known as Modapts into a simulation modeling tool known as Micro Saint. The operator task context for the model was an interactive display and control console known as the AN/SLQ-32(V). In addition, the dissertation examined the incorporation of a cognitive workload metric known as the Subjective Workload Assessment Technique (SWAT) into the Micro Saint model.
The dissertation was conducted in three phases. In the first phase, a task analysis was performed to identify operator task and hardware interface redesign options. In the second phase, data were collected from two groups of six participants who performed an operationally realistic task on 24 different configurations of a Macintosh AN/SLQ-32(V) simulator. Configurations of the simulated AN/SLQ-32(V) were defined by combinations of two display formats, two color conditions, and two emitter symbol sets, presented under three emitter density conditions. Data from Group 1 were used to assign standard deviations, probability distributions, and Modapts times to a Micro Saint model of the task. The third phase of the study consisted of (1) verifying the model-generated performance and workload scores by comparison against scores obtained from Group 1 using regression analyses, and (2) validating the model by comparison against Group 2.
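The third-phase criterion can be written compactly (a sketch in standard regression notation, consistent with the statistics reported below but not the dissertation's own formulas): observed scores y_i are regressed on model-generated predictions, and exactness of prediction amounts to a unit slope.

```latex
\[
  y_i \;=\; \beta_0 + \beta_1 \hat{y}_i + \varepsilon_i ,
  \qquad i = 1, \dots, n ,
  \qquad H_0 \colon \beta_1 = 1 .
\]
```

A high R² with H_0 retained supports validity; a high R² with H_0 rejected, as turns out to be the case for the workload scores below, means the model ranks conditions correctly but misses their absolute level.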
The results indicate that the Modapts/Micro Saint methodology was a valid way to predict performance scores obtained from the 24 simulated AN/SLQ-32(V) prototypes (R² = 0.78). The workload metric used in the task network model accounted for 76 percent of the variance in Group 2 mean workload scores, but the slope of the regression was different from unity (p = 0.05). This statistical finding suggests that the model does not provide an exact prediction of workload scores. Further regression analysis of Group 1 and Group 2 workload scores indicates that the two groups were not homogeneous with respect to workload ratings. / Ph. D.
197
Distribution of Linda across a network of workstations
Schumann, Charles N. 10 November 2009 (has links)
The Linda programming language provides an architecturally independent paradigm for writing parallel programs. By designing and implementing Linda on a network of stand-alone workstations, a scalable multicomputer can be constructed from existing equipment. This thesis presents the design, implementation and testing of a distributable Linda kernel and communications subsystem, providing a framework for full distribution of Linda on a network of workstations. Following a presentation of the Linda language, the kernel's design and rationale are described. The design provides for interprocess communication by implementing a protocol on top of the Unix socket facility. Choosing sockets as the interprocess communication medium has the advantage of wide portability. However, a design critique is presented which addresses several disadvantages of the socket-based communications model. Considerable attention is given to quantifying the effectiveness of this design in comparison to a shared-memory, non-distributable design from Yale University. A thorough investigation into the source of particular observed phenomena is presented, which leads to an order-of-magnitude improvement in wall-time performance. / Master of Science
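For readers unfamiliar with Linda, coordination happens through a small set of operations on a shared tuple space. The following fragment is an illustrative sketch in C-Linda style (Gelernter's C dialect, not code from this thesis): out() deposits a tuple, in() withdraws a matching tuple (blocking), rd() reads one without withdrawing it, and eval() spawns a process whose result becomes a tuple.

```c
/* Illustrative C-Linda sketch; the ?var forms are C-Linda "formals"
 * that bind matched tuple fields. Not code from the thesis. */

int worker(void)
{
    int n;
    in("task", ?n);             /* block until a task tuple is available */
    out("result", n, n * n);    /* deposit the computed result */
    return 0;
}

int real_main(void)             /* C-Linda's entry point */
{
    int i, n, sq;

    for (i = 0; i < 4; i++)
        eval("worker", worker());   /* spawn four workers as live tuples */
    for (i = 0; i < 4; i++)
        out("task", i);             /* publish the work */
    for (i = 0; i < 4; i++)
        in("result", ?n, ?sq);      /* collect results in any order */
    return 0;
}
```

In a kernel distributed over Unix sockets, as in this thesis, the matching performed by in() and rd() must be resolved across workstations, which is exactly the traffic whose cost the communications subsystem determines.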
198
Development of a reconfigurable assembly system with an integrated information management system
Smith, Lyle Christopher. January 1900 (has links)
Thesis (M. Tech. (Engineering Electrical)) -- Central University of Technology, Free State, [2014] / This dissertation evaluates the software and hardware components used to develop a Reconfigurable Assembly System with an Integrated Information Management System. The assembly system consists of a modular Cartesian robot and a vision system. The research focuses on the reconfigurability, modularity, scalability and flexibility that can be achieved in terms of the software and hardware components used within the system.

The assembly system can be divided into high-level control and low-level control components. All information related to the product, Cartesian positioning and the processes to follow resides in the Information Management System. The Information Management System is the high-level component and consists of a database, web services and low-level control drivers. The high-level system responds to the data received from the low-level systems and determines the next process to take place. The low-level systems consist of the PLC (Programmable Logic Controller) and the vision system. The PLC controls the Cartesian robot's motor controllers and handles all events raised by field devices (e.g., sensors or push buttons). The vision system contains a number of pre-loaded inspections used to identify barcodes and parts, obtain positioning data and verify the products' build quality. The Cartesian robot's positioning data and the vision system's inspections are controlled by the Information Management System.

The results showed that the high-level control software components add modularity and reconfigurability to the system, as they can easily adapt to changes in the product. The high-level control components can also be reconfigured while the assembly system is online, without disrupting it. The low-level control system is better suited to handling the control of motor controllers, field devices and vision inspections over an industrial network.
199
Les infractions portant atteinte à la sécurité du système informatique d’une entreprise
Maalaoui, Ibtissem 09 1900 (has links)
The new information and communication technologies (NICT) now play an important role in companies, regardless of their size or field of activity, and they contribute positively to economic development. However, they have also given rise to a new form of criminality that threatens the security and integrity of companies' computer systems. The scale of this criminality is difficult to assess and, above all, difficult to control with the legislative provisions already in place, which makes adaptation at the legal level appear inevitable. Some industrialized countries have therefore decided to set up an adequate legal framework to guarantee the security of companies' computer systems. Our study focuses precisely on the mechanisms put in place by two different legal systems. Forced to take into account a new reality, one that did not necessarily exist several years ago, France and Canada have decided to amend their penal and criminal codes, respectively, by adding provisions that punish new offences. In this work, we analyze the offences that undermine the security of a company's computer system in light of the legal tools in place, and we measure their degree of effectiveness against the realities of computing. In other words, our task is to determine whether or not the law meets the needs of this technology.
200
Systematic Evaluations Of Security Mechanism Deployments
Sze Yiu Chau (7038539) 13 August 2019 (links)
In a potentially hostile networked environment, a large diversity of security mechanisms with varying degrees of sophistication are being deployed to protect valuable computer systems and digital assets.

While many competing implementations of similar security mechanisms are available in the current software development landscape, the robustness and reliability of such implementations are often overlooked, resulting in exploitable flaws in system deployments. In this dissertation, we systematically evaluate implementations of security mechanisms that are deployed in the wild. First, we examine how content distribution applications on the Android platform control access to their multimedia contents. With respect to a well-defined hierarchy of adversarial capabilities and attack surfaces, we find that many content distribution applications, including those of some world-renowned publications and streaming services, are vulnerable to content extraction due to unjustified assumptions in their security mechanism designs and implementations. Second, we investigate the validation logic of X.509 certificate chains as implemented in various open-source TLS libraries. X.509 certificates are widely used in TLS as a means of authentication. A validation logic that is overly restrictive could lead to the loss of legitimate services, while an overly permissive one could open the door to impersonation attacks. Instead of manual analysis and unguided fuzzing, we propose a principled approach that leverages symbolic execution to achieve better coverage and uncover logical flaws buried deep in the code. We find that many TLS libraries deviate from the specification. Finally, we study the verification of RSA signatures, as specified in the PKCS#1 v1.5 standard, which is widely used in many security-critical network protocols. We propose an approach to automatically generate meaningful concolic test cases for this particular problem, and we design and implement a provenance tracking mechanism to assist root-cause analysis in general. Our investigation revealed that several crypto and IPSec implementations are susceptible to new variants of the Bleichenbacher low-exponent signature forgery.
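The flaw class behind low-exponent forgeries can be sketched in a few lines of C (an illustrative reconstruction of the well-known lenient-parsing pattern, not code from any implementation studied in this work). After computing s^e mod n and converting it to a byte string em, a correct verifier must require that the padding fill the entire block; a verifier that tolerates slack bytes leaves room an attacker can absorb when taking an e-th root, enabling Bleichenbacher-style forgeries for small e such as 3.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* em = s^e mod n as a byte string of length emlen. Expected PKCS#1 v1.5
 * encoding: 0x00 0x01 FF..FF 0x00 || DigestInfo || hash. */
int verify_lenient(const uint8_t *em, size_t emlen,
                   const uint8_t *digestinfo, size_t dlen)
{
    size_t i = 2;

    if (emlen < 11 + dlen || em[0] != 0x00 || em[1] != 0x01)
        return 0;
    while (i < emlen && em[i] == 0xFF)      /* minimum of 8 FF bytes   */
        i++;                                /* not enforced: leniency  */
    if (i >= emlen || em[i++] != 0x00)
        return 0;
    /* BUG: compares DigestInfo+hash, then stops; the trailing
     * (emlen - i - dlen) bytes are never checked, so an attacker
     * controls them freely. */
    return i + dlen <= emlen &&
           memcmp(em + i, digestinfo, dlen) == 0;
}
/* A strict verifier additionally requires i + dlen == emlen (no slack)
 * and a minimum padding length, eliminating the forgery margin. */
```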