1

Inlined Reference Monitors: Certification, Concurrency and Tree Based Monitoring

Lundblad, Andreas January 2013 (has links)
Reference monitor inlining is a technique for enforcing security policies by injecting security checks into untrusted software in a style similar to aspect-oriented programming. The intention is that the injected code enforces compliance with the policy (security) without adding behavior (conservativity) or affecting existing policy-compliant behavior (transparency). This thesis consists of four papers which cover a range of topics, including formalization of monitor inlining correctness properties, certification of inlined monitors, limitations in multithreaded settings, and extensions using data-flow monitoring. The first paper addresses the problem of having a potentially complex program rewriter as part of the trusted computing base. By means of proof-carrying code we show how the inliner can be replaced by a relatively simple proof checker. This technique also enables the use of monitor inlining for quality assurance at development time, while minimizing the need for post-shipping code rewrites. The second paper focuses on the issues associated with monitor inlining in a concurrent setting. Specifically, it discusses the problem of maintaining transparency when introducing locks for synchronizing monitor state reads and updates. Due to Java's relaxed memory model, it turns out to be impossible for a monitor to be entirely transparent without sacrificing the security property. To accommodate this, the paper proposes a set of new correctness properties shown to be realistic and realizable. The third paper also focuses on problems due to concurrency and identifies a class of race-free policies that precisely characterizes the set of inlineable policies. This is done by showing that inlining of a policy outside this class is either not secure or not transparent, and by exhibiting a concrete algorithm for inlining of policies inside the class which is secure, conservative, and transparent. The paper also discusses how certification in the style of proof-carrying code could be supported in multithreaded Java programs. The fourth paper formalizes a new type of data-centric runtime monitoring which combines monitor inlining with taint tracking. As opposed to ordinary techniques, which focus on monitoring linear flows of events, the approach presented here relies on tree-shaped traces. The paper describes how the approach can be efficiently implemented and presents a denotational semantics for a simple "while" language, illustrating how the theoretical foundations are to be used in a practical setting. Each paper concludes with a practical evaluation of the theoretical results, based on a prototype implementation and case studies on real-world applications and policies.
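As a rough sketch of the inlining idea described above (and not the thesis's actual rewriter or policy language), the Java fragment below shows a shared security-automaton state and the kind of guarded call site an inliner might emit for a hypothetical "no network send after a file read" policy; all class and method names are invented.

```java
// Hypothetical sketch of inlined monitoring for a policy such as
// "no network send after reading a local file"; names are invented,
// not taken from the thesis's inliner or its policy language.
final class Monitor {
    // Security-automaton state shared by all inlined checks.
    private static boolean fileRead = false;

    static synchronized void onFileRead() {
        fileRead = true;                          // advance the automaton
    }

    static synchronized void checkSend() {
        if (fileRead) {                           // reaching the bad state: block the action
            throw new SecurityException("send after file read is forbidden by the policy");
        }
    }
}

class Client {
    void run(java.io.File f, java.io.OutputStream net, byte[] data) throws java.io.IOException {
        // Original security-relevant call, followed by an inlined event notification.
        new java.io.FileInputStream(f).read();
        Monitor.onFileRead();

        // Inlined guard emitted immediately before the next security-relevant action.
        Monitor.checkSend();
        net.write(data);
    }
}
```

The synchronized access to the monitor state hints at the concurrency problem studied in the second and third papers: the locks needed to keep an inlined monitor secure are exactly what threatens transparency under Java's relaxed memory model.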
2

Dynamic Analysis of Web Services

Simmonds, Jocelyn 31 August 2011 (has links)
Orchestrated web service applications are highly distributed applications that accomplish business goals by executing services offered by partners. This dependence on partner services allows the development of more flexible, modular applications. For a classical distributed system, correctness can be ensured by statically checking the composition of the components that make up the system against properties of interest. However, in the case of web service applications, there are various conditions that make this type of analysis insufficient. For example, partners can be dynamically discovered, which means that we cannot create a definitive model of the system to analyze. Web service applications can also display new behaviour at execution time, so statically checked properties of the system may not hold throughout the system's lifetime. Due to these limitations of static analysis, this thesis concentrates on the dynamic analysis of web service applications, specifically by monitoring runtime events. The goal of runtime monitoring is to check whether an application violates a given specification of its behaviour during its execution. The behaviour of the system can be specified in a number of ways, e.g., as a set of temporal properties, assertions, or even scenarios. During execution, application events are intercepted and used to determine whether the system is violating its specification. Moreover, monitoring the system as it runs provides a chance to recover from an error once a problem has been detected. This is critical in the domain of web service applications, as bugs are potentially exposed to millions of users before they are found and fixed. We present techniques to address several major challenges facing the creation of an industrial-strength runtime monitoring and recovery framework for web service applications. The first milestone for achieving this goal is the creation of an adequate property specification language. This language must be expressive enough to capture the distributed, interactive, and message-driven nature of web service applications, but must also be amenable to efficient runtime monitoring. We propose Web Sequence Diagrams (W-SD), a language that, we feel, meets these criteria. Specifications expressed in W-SD permit the analysis of orchestrations involving multiple partners, from the point of view of the orchestrating service. The second contribution of this thesis is the creation of an industrial-strength online runtime monitoring and recovery framework that is non-intrusive, supports the dynamic discovery of web services, deals with synchronous and asynchronous communication, and handles partner services implemented in different languages. Developers using this framework can specify and efficiently monitor a variety of temporal behaviour. If recovery is enabled, properties are monitored proactively, so this framework allows developers to effortlessly enable error recovery in applications being monitored. The last contribution of this thesis is the development of recovery plans from runtime errors. Given an application path which led to a failure and a monitor which detected it, we have developed various techniques and optimizations that make recovery plan generation feasible in practice. For some of the violations, such plans essentially involve "going back" -- compensating for the actions that have occurred until an alternative behaviour of the application is possible. For other violations, such plans include both "going back" and "re-planning" -- guiding the application towards a desired behaviour.
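As an illustration of the runtime-monitoring idea described above, the sketch below checks a hypothetical property ("every partner request receives a response before the orchestration terminates") against intercepted events; the event interface and property are invented and are not the thesis's W-SD language or recovery framework.

```java
// Invented event monitor for an orchestration: it flags termination while
// partner requests are still unanswered. The thesis's W-SD specifications
// and recovery planner are far richer than this sketch.
import java.util.HashSet;
import java.util.Set;

final class OrchestrationMonitor {
    private final Set<String> pending = new HashSet<>();

    void onRequest(String correlationId)  { pending.add(correlationId); }
    void onResponse(String correlationId) { pending.remove(correlationId); }

    // Called when the orchestration ends; a violation here is the point at
    // which a recovery plan ("going back" via compensation) would take over.
    void onTerminate() {
        if (!pending.isEmpty()) {
            throw new IllegalStateException("terminated with unanswered requests: " + pending);
        }
    }
}

class Demo {
    public static void main(String[] args) {
        OrchestrationMonitor m = new OrchestrationMonitor();
        m.onRequest("order-42");
        m.onResponse("order-42");
        m.onTerminate();          // passes; drop the onResponse call to see a violation
    }
}
```

In a recovery-enabled setting, the point where the violation is raised is where a recovery plan would take over, compensating completed actions and, if needed, re-planning towards a desired behaviour.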
3

Tracerory - Dynamic Tracematches and Unread Memory Detection for C/C++

Eyolfson, Jonathan January 2011 (has links)
Dynamic binary translation allows us to analyze a program during execution without the need for a compiler or the program's source code. In this work, we present two applications of dynamic binary translation: tracematches and unread memory detection. Libraries are ubiquitous in modern software development. Each library requires that its clients follow certain conventions, depending on the domain of the library. Tracematches are a particularly expressive notation for specifying library usage conventions, but have only been implemented on top of Java. In this work, we leverage dynamic binary translation to enable the use of tracematches on executables, particularly for compiled C/C++ programs. Memory that is never read, or memory writes that are never read during execution, is wasteful and may also be indicative of bugs. In addition to tracematches, we therefore present an unread memory detector, also built using dynamic binary translation. We have implemented a tool on top of the Pin framework that monitors tracematches and detects unread memory. We describe the operation of our tool using a series of motivating examples and then present our overall monitoring approach. Finally, we include benchmarks showing the overhead of our tool on four open-source projects and report qualitative results.
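Tracematches specify library usage conventions as patterns over events; the hand-written Java sketch below shows the flavour of such a convention (a hypothetical "call hasNext() before next()" rule tracked per iterator object). It is illustrative only and is not the thesis's Pin-based implementation for C/C++ binaries.

```java
// Illustrative per-object state machine for the hypothetical convention
// "next() may only be called after a hasNext() that returned true".
import java.util.IdentityHashMap;
import java.util.Iterator;
import java.util.Map;

final class HasNextMonitor {
    private final Map<Iterator<?>, Boolean> checked = new IdentityHashMap<>();

    void onHasNext(Iterator<?> it, boolean result) {
        checked.put(it, result);                  // remember the outcome for this iterator
    }

    void onNext(Iterator<?> it) {
        if (!Boolean.TRUE.equals(checked.get(it))) {
            System.err.println("violation: next() without a successful hasNext()");
        }
        checked.put(it, Boolean.FALSE);           // a fresh hasNext() is required again
    }
}
```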
4

Methods for Reducing Monitoring Overhead in Runtime Verification

Wu, Chun Wah Wallace January 2013 (has links)
Runtime verification is a lightweight technique that serves to complement existing approaches, such as formal methods and testing, to ensure system correctness. In runtime verification, monitors are synthesized to check a system at run time against a set of properties the system is expected to satisfy. Runtime verification may be used to detect software faults before and after system deployment. The monitor(s) can be synthesized to notify, steer, and/or perform system recovery from detected software faults at run time. The research and proposed methods presented in this thesis aim to reduce the monitoring overhead of runtime verification, in terms of memory and execution time, by leveraging time-triggered techniques for monitoring system events. Traditionally, runtime verification frameworks employ event-triggered monitors, where the monitor is invoked after every system event. Because system events can be sporadic or bursty, event-triggered monitoring behaviour is difficult to predict. Time-triggered monitors, on the other hand, periodically preempt and process system events, making monitoring behaviour predictable. However, accurate reconstruction of the software system's state is not guaranteed, since state changes between samples may be missed. The first part of this thesis analyzes three heuristics that efficiently solve the NP-complete problem of minimizing the amount of memory required to store system state changes so as to guarantee accurate state reconstruction. The experimental results demonstrate that adopting near-optimal algorithms does not greatly change the memory consumption and execution time of monitored programs; hence, NP-completeness is likely not an obstacle for time-triggered runtime verification. The second part of this thesis introduces a novel runtime verification technique called hybrid runtime verification. Hybrid runtime verification enables the monitor to toggle between event- and time-triggered modes of operation. The aim of this approach is to reduce the overall runtime monitoring overhead with respect to execution time. The problem of minimizing the execution-time overhead under hybrid runtime verification is not in NP. An integer linear programming heuristic is formulated to determine near-optimal hybrid monitoring schemes. Experimental results show that the heuristic typically selects monitoring schemes that are equal to or better than naively selecting a single operation mode for monitoring.
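A minimal sketch of the time-triggered idea, assuming a hypothetical instrumented program that appends state changes to a bounded buffer which a periodic monitoring task drains and checks; the buffer size, sampling period, and property are invented and do not reflect the thesis's algorithms.

```java
// Illustrative time-triggered monitor: the instrumented program records
// state changes in a bounded buffer; a periodic task drains the buffer and
// checks a property, instead of invoking the monitor on every event.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class TimeTriggeredMonitor {
    private final BlockingQueue<Integer> history = new ArrayBlockingQueue<>(1024);
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Called by the instrumented program on each state change.
    void record(int newValue) {
        if (!history.offer(newValue)) {
            System.err.println("buffer full: accurate state reconstruction no longer guaranteed");
        }
    }

    // Sample every 10 ms and check a simple invariant (values stay non-negative).
    void start() {
        scheduler.scheduleAtFixedRate(() -> {
            Integer v;
            while ((v = history.poll()) != null) {
                if (v < 0) System.err.println("violation: negative state value " + v);
            }
        }, 10, 10, TimeUnit.MILLISECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

The bounded buffer is where the memory-minimization problem discussed above shows up: the buffer must be large enough that no state change is lost between two samples.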
5

Automata based monitoring and mining of execution traces

Reger, Giles Matthew January 2014 (has links)
This thesis contributes work to the fields of runtime monitoring and specification mining. It develops a formalism for specifying patterns of behaviour in execution traces and defines techniques for checking these patterns in, and extracting patterns from, traces. These techniques extend the expressiveness of properties that can be efficiently and effectively monitored and mined. The behaviour of a computer system is considered in terms of the actions it performs, captured in execution traces. Patterns of behaviour, formally defined in trace specifications, denote the traces that the system should (or should not) exhibit. The main task this work considers is that of checking that the system conforms to the specification, i.e., is correct. Additionally, trace specifications can be used to document behaviour to aid maintenance and development. However, formal specifications are often missing or incomplete, hence the mining activity. Previous work in the field of runtime monitoring (checking execution traces) has tended to focus on either efficiency or expressiveness, with different approaches making different trade-offs. This work considers both, achieving the expressiveness of the most expressive existing tools whilst remaining competitive with the most efficient. These elements of expressiveness and efficiency depend on the specification formalism used. We therefore introduce quantified event automata for describing patterns of behaviour in execution traces and then develop a range of efficient monitoring algorithms. To monitor execution traces we need a formal description of expected behaviour. However, such descriptions are often difficult to write, especially as there is often a lack of understanding of actual behaviour. The field of specification mining aims to explain the behaviour present in execution traces by extracting specifications that conform to those traces. Previous work in this area has primarily been limited to simple specifications that do not consider data. By leveraging the quantified event automata formalism, and its efficient trace checking procedures, we introduce a generate-and-check style mining framework capable of accurately extracting complex specifications. This thesis therefore makes separate, significant contributions to the fields of runtime monitoring and specification mining. It generalises and extends existing techniques in runtime monitoring, enabling future research to better understand the interaction between expressiveness and efficiency, and it combines and extends previous approaches to specification mining, increasing the expressiveness of specifications that can be mined.
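The sketch below conveys the flavour of quantified (parametric) monitoring: one small automaton per binding of the quantified variables, here for a hypothetical "a lock acquired under a binding must be released before being acquired again" discipline. The event names and automaton are invented and are not the QEA formalism or the thesis's monitoring algorithms.

```java
// Illustrative parametric monitor: one small automaton per binding of the
// quantified variables (here a thread/lock pair), in the spirit of
// quantified event automata; the property and event names are invented.
import java.util.HashMap;
import java.util.Map;

final class LockDisciplineMonitor {
    enum State { FREE, HELD }

    private final Map<String, State> perBinding = new HashMap<>();

    void acquire(String thread, String lock) { step(thread + "/" + lock, "acquire"); }
    void release(String thread, String lock) { step(thread + "/" + lock, "release"); }

    private void step(String binding, String event) {
        State s = perBinding.getOrDefault(binding, State.FREE);
        if (s == State.FREE && event.equals("acquire")) {
            perBinding.put(binding, State.HELD);
        } else if (s == State.HELD && event.equals("release")) {
            perBinding.put(binding, State.FREE);
        } else {
            System.err.println("violation for " + binding + ": unexpected " + event + " in state " + s);
        }
    }
}
```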
6

Runtime Monitoring of Automated Driving Systems

Mehmed, Ayhan January 2019 (has links)
We are at a point in history where technological progress has reached a level that enables the first steps towards the development of vehicles with automated driving capabilities. The swift response from a significant portion of the industry has resulted in a race whose finish line is the introduction of vehicles with fully automated driving capabilities. Vehicles with automated driving capabilities aim to make driving safer, more comfortable, and economically more efficient by assisting the driver or by taking over responsibility for different driving tasks. While vehicles with assistance and partial automation capabilities are already in series production, the ultimate goal is the introduction of vehicles with fully automated driving capabilities. Reaching this level of automation will require shifting all responsibilities, including the responsibility for overall vehicle safety, from the human to the computer-based system responsible for the automated driving functionality (i.e., the Automated Driving System (ADS)). Such a shift makes the ADS highly safety-critical, requiring a safety level comparable to that of an aircraft system. It is paramount to understand that ensuring such a level of safety is a complex interdisciplinary challenge. Traditional approaches for ensuring safety require the use of fault-tolerance techniques that are unproven in the automated driving domain. Moreover, existing safety assurance methods (e.g., ISO 26262) suffer from requirements incompleteness in the automated driving context. The use of artificial-intelligence-based components in the ADS further complicates the matter due to their non-deterministic behavior. At present, there is no single straightforward solution to these challenges. Instead, the consensus of cross-domain experts is to use a set of complementary safety methods that together are sufficient to ensure the required level of safety. In this context, runtime monitors that verify the safe operation of the ADS during execution are a promising complementary approach for ensuring safety. However, to develop a runtime monitoring solution for an ADS, one has to handle a wide range of challenges. On a conceptual level, the complex and opaque technology used in ADSs often makes researchers ask the question "how should an ADS be verified in order to judge that it is operating safely?" Once the initial Runtime Verification (RV) concept is developed, researchers and practitioners have to deal with research and engineering challenges encountered while realizing the RV approaches as an actual runtime monitoring solution for the ADS. These challenges range from estimating different safety parameters of the runtime monitors and finding solutions for different technical problems, to meeting scalability and efficiency requirements. The focus of this thesis is to propose novel runtime monitoring solutions for verifying the safe operation of an ADS. This encompasses (i) defining novel RV approaches explicitly tailored for automated driving, and (ii) developing concepts, methods, and architectures for realizing the RV approaches as an actual runtime monitoring solution for an ADS. Contributions to the former include defining two RV approaches, namely the Computer Vision Monitor (CVM) and Safe Driving Envelope Verification. Contributions to the latter include (i) estimating the sufficient diagnostic test interval of the runtime verification approaches (in particular the CVM), (ii) addressing the out-of-sequence measurement problem in sensor-fusion-based ADSs, and (iii) developing an architectural solution for improving the scalability and efficiency of the runtime monitoring solution.
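Purely as a hypothetical illustration of envelope-style runtime checking (the thesis's Computer Vision Monitor and Safe Driving Envelope Verification are not specified here), the sketch below rejects a planned trajectory whose waypoints leave a rectangular safe region; the geometry and thresholds are invented.

```java
// Invented example of an envelope-style runtime check: reject a planned
// trajectory if any waypoint leaves a precomputed rectangular safe region.
// This is not the thesis's Safe Driving Envelope Verification algorithm.
final class EnvelopeMonitor {
    // Hypothetical safe region in vehicle-local coordinates (metres).
    private static final double MIN_LATERAL = -1.5;
    private static final double MAX_LATERAL = 1.5;
    private static final double MAX_LONGITUDINAL = 80.0;

    // Each waypoint is {longitudinal, lateral}.
    static boolean isSafe(double[][] waypoints) {
        for (double[] wp : waypoints) {
            if (wp[0] < 0 || wp[0] > MAX_LONGITUDINAL) return false;
            if (wp[1] < MIN_LATERAL || wp[1] > MAX_LATERAL) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        double[][] plan = { {5.0, 0.2}, {20.0, 0.4}, {60.0, 2.1} };  // last point drifts out of the lane
        System.out.println(isSafe(plan) ? "plan accepted" : "plan rejected: outside safe envelope");
    }
}
```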
7

RUMBA: Runtime Monitoring and Behavioral Analysis Framework for Java Software Systems

Ashkan, Azin January 2007 (has links)
A goal of runtime monitoring is to observe software execution to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing prevention activities against catastrophic failure. Although runtime monitoring has been in use for many years, there is renewed interest in its application, largely because of the increasing complexity and ubiquitous nature of software systems. To address this demand for runtime monitoring and behavioral analysis of software systems, we present the RUMBA framework. It utilizes a synergy between static and dynamic analyses to evaluate whether a program's behavior complies with specified properties during its execution. The framework comprises three steps: i) Extracting Architecture, where reverse engineering techniques are used to extract two meta-models of a Java system by utilizing UML-compliant and graph representations of the system model; ii) Seeding Objectives, in which the information required for filtering runtime events is obtained based on properties that are defined in OCL (Object Constraint Language) as specifications for the behavioral analysis; and iii) Runtime Monitoring and Analysis, where the behavior of the system is monitored according to the output of the previous stages and then analyzed based on the objective properties. The first two stages are static, while the third is dynamic. A prototype of our framework has been developed in the Java programming language. We have performed a set of empirical studies on the proposed framework to assess the techniques introduced in this thesis. We have also evaluated the efficiency of the RUMBA framework in terms of processor and memory utilization for the case study applications.
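As a small, invented illustration of the third (dynamic) stage, the sketch below checks an OCL-style invariant (context Account inv: self.balance >= 0) against intercepted runtime events; the classes are hypothetical and RUMBA's actual meta-models and OCL tooling are not reproduced.

```java
// Invented sketch of the dynamic stage: an OCL-style invariant,
// "context Account inv: self.balance >= 0", checked against observed updates.
final class Account {
    private int balance;

    void withdraw(int amount) {
        balance -= amount;
        InvariantChecker.check(this);   // runtime event filtered to the monitor
    }

    int getBalance() { return balance; }
}

final class InvariantChecker {
    static void check(Account a) {
        if (a.getBalance() < 0) {
            System.err.println("OCL invariant violated: balance = " + a.getBalance());
        }
    }
}

class RumbaStyleDemo {
    public static void main(String[] args) {
        Account acc = new Account();
        acc.withdraw(10);               // balance becomes -10; the violation is reported
    }
}
```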
