About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Adaptability and reconfiguration of automotive embedded systems

Belaggoun, Amel 10 October 2017 (has links)
Modern vehicles have become increasingly computerized to satisfy ever stricter safety requirements and to provide better driving experiences. As a result, the number of electronic control units (ECUs) in modern vehicles has continuously increased over the last few decades. In addition, advanced applications place higher computational demands on ECUs and have both hard and soft timing constraints; hence a unified approach handling both kinds of constraints is required. Moreover, economic pressures and multi-core architectures are driving the integration of several levels of safety-criticality onto the same platform. Such applications have traditionally been designed using static approaches; however, static approaches are no longer feasible in highly dynamic environments due to increasing complexity and tight cost constraints, and more flexible solutions are required. This means that, to cope with dynamic environments, an automotive system must be adaptive: it must be able to adapt its structure and/or behaviour at runtime in response to frequent changes in its environment. These new requirements cannot be met by the current state-of-the-art approaches to automotive software systems. Instead, a new design of the overall Electric/Electronic (E/E) architecture of a vehicle needs to be developed. Recently, the automotive industry agreed upon changing the current AUTOSAR platform to the "AUTOSAR Adaptive Platform". This platform is being developed by the AUTOSAR consortium as an additional product to the current AUTOSAR classic platform. It is an ongoing feasibility study based on the POSIX operating system and uses service-oriented communication to integrate applications into the system at any desired time. The main idea of this thesis is to develop novel architecture concepts based on adaptation to address the needs of a new E/E architecture for Fully Electric Vehicles (FEVs) regarding safety, reliability and cost-efficiency, and to integrate these in AUTOSAR. We define the ASLA (Adaptive System Level in AUTOSAR) architecture, a framework that provides an adaptive solution for AUTOSAR. ASLA incorporates task-level reconfiguration features such as addition, deletion and migration of tasks in AUTOSAR. The main difference between ASLA and the Adaptive AUTOSAR platform is that ASLA enables the allocation of mixed-criticality functions on the same ECU as well as time-bounded adaptations, whereas Adaptive AUTOSAR separates critical, hard real-time functions (running on the classic platform) from non-critical, soft real-time functions (running on the adaptive platform). To assess the validity of our proposed architecture, we provide an early prototype implementation of ASLA and evaluate its performance through experiments.
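
The task-level reconfiguration that ASLA adds can be pictured with a minimal sketch. All names here (Task, Ecu, migrate) are hypothetical stand-ins, not the ASLA API, and the budget check is a simplified placeholder for the thesis's time-bounded, mixed-criticality admission logic.

    # Illustrative sketch only: class and method names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        criticality: str   # e.g. "ASIL-D" (safety-critical) or "QM" (non-critical)
        wcet_ms: float     # worst-case execution time per period

    class Ecu:
        def __init__(self, name, budget_ms):
            self.name, self.budget_ms = name, budget_ms
            self.tasks = []

        def used_ms(self):
            return sum(t.wcet_ms for t in self.tasks)

        def add_task(self, task):
            # Mixed-criticality allocation: tasks of different criticality may
            # share the ECU, as long as the time budget still holds.
            if self.used_ms() + task.wcet_ms > self.budget_ms:
                raise RuntimeError(f"{self.name}: budget exceeded, reject {task.name}")
            self.tasks.append(task)

        def remove_task(self, name):
            self.tasks = [t for t in self.tasks if t.name != name]

    def migrate(task_name, src, dst):
        task = next(t for t in src.tasks if t.name == task_name)
        dst.add_task(task)          # admit on the target first ...
        src.remove_task(task_name)  # ... then release on the source

    ecu1, ecu2 = Ecu("ECU1", budget_ms=10.0), Ecu("ECU2", budget_ms=10.0)
    ecu1.add_task(Task("brake_control", "ASIL-D", 4.0))
    ecu1.add_task(Task("infotainment", "QM", 3.0))   # mixed criticality on one ECU
    migrate("infotainment", ecu1, ecu2)              # runtime reconfiguration
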
2

Automatic Discovery and Exposition of Parallelism in Serial Applications for Compiler-Inserted Runtime Adaptation

Greenland, David A. 25 May 2012 (has links)
Compiler-Inserted Runtime Adaptation (CIRA) is a compilation and runtime adaptation strategy with great potential for increasing performance in multicore systems. In this strategy, the compiler inserts directives into the application which adapt it at runtime. Its ability to overcome the obstacles of architectural and environmental diversity, coupled with its flexibility to work with many programming languages and styles of applications, makes it a very powerful tool. However, it is not complete: many pieces are still needed to accomplish these lofty goals. This work describes the automatic discovery of parallelism inherent in an application and the generation of an intermediate representation that exposes that parallelism. On six benchmark applications, it shows that a significant amount of parallelism which was not initially apparent can be discovered automatically, and that this parallelism can then be exposed in an automatically generated representation. This is accomplished by a series of analysis and transformation passes requiring only minimal programmer-inserted directives. This series of passes forms a necessary part of the CIRA toolchain called the concurrency compiler, which proves that a representation with exposed parallelism and locality can be generated by a compiler and lays the groundwork for future, more powerful concurrency compilers. This work also describes the extension of the intermediate representation to support hierarchy, a prerequisite for the creation of the concurrency compiler. This extension makes the representation capable of expressing many more applications much more effectively and allows much more of the parallelism discovered by the concurrency compiler to be stored in it.
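
As a rough illustration of what a hierarchical parallelism-exposing representation can look like, consider the sketch below. The node kinds and the width() metric are invented for the example, not taken from the actual CIRA intermediate representation.

    # Minimal sketch of a hierarchical IR that exposes parallelism.
    class IRNode:
        def __init__(self, kind, children=None, label=""):
            assert kind in ("task", "parallel", "sequence")
            self.kind, self.label = kind, label
            self.children = children or []

        def width(self):
            # Degree of parallelism exposed at this node: parallel groups sum
            # their children's widths, sequences take the maximum.
            if self.kind == "task":
                return 1
            widths = [c.width() for c in self.children]
            return sum(widths) if self.kind == "parallel" else max(widths)

    # A loop whose iterations were proven independent becomes a "parallel"
    # node; hierarchy lets parallel regions nest inside sequential phases.
    ir = IRNode("sequence", [
        IRNode("task", label="read_input"),
        IRNode("parallel", [IRNode("task", label=f"iter_{i}") for i in range(4)]),
        IRNode("task", label="reduce"),
    ])
    print(ir.width())  # -> 4
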
3

Improving the Efficiency of Parallel Applications on Multithreaded and Multicore Systems

Curtis-Maury, Matthew 15 April 2008 (has links)
The scalability of parallel applications executing on multithreaded and multicore multiprocessors is often quite limited due to large degrees of contention over shared resources on these systems. In fact, negative scalability frequently occurs, such that a non-negligible performance loss is observed as more processors and cores are used. In this dissertation, we present a prediction model for identifying efficient operating points of concurrency in multithreaded scientific applications, with performance as the primary objective and power as a secondary one. We also present a runtime system that uses live analysis of hardware event rates through the prediction model to optimize applications dynamically. We discuss a dynamic, phase-aware performance prediction model (DPAPP), which combines statistical learning techniques, including multivariate linear regression and artificial neural networks, with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. We find that the scalability model achieves accuracy approaching 95%, sufficiently accurate to identify improved concurrency levels and thread placements from within real parallel scientific applications. Using DPAPP, we develop a prediction-driven runtime optimization scheme, called ACTOR, which throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each parallel execution phase in an application. ACTOR successfully identifies and exploits program phases where limited scalability results in a performance loss through the use of more processing elements, providing simultaneous reductions in execution time by 5%-18% and power consumption by 0%-11% across a variety of parallel applications and architectures. Further, we extend DPAPP and ACTOR to include support for runtime adaptation of DVFS, allowing for the synergistic exploitation of concurrency throttling and DVFS from within a single, autonomically-acting library, providing improved energy-efficiency compared to either approach in isolation. / Ph. D.
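
The flavor of the approach can be sketched as follows: fit a regression from hardware event rates and thread count to observed speedup, then throttle concurrency at the knee of the predicted curve. The features, training data and threshold below are fabricated for illustration; DPAPP itself also uses artificial neural networks and richer phase-aware inputs.

    import numpy as np

    # Fabricated samples: (cache-miss rate, stall rate, threads) -> speedup.
    feats = np.array([[0.02, 0.10, 2], [0.02, 0.10, 4], [0.05, 0.30, 8],
                      [0.06, 0.35, 16], [0.03, 0.15, 4], [0.07, 0.40, 16]])
    y = np.array([1.9, 3.4, 4.1, 3.8, 3.3, 3.5])

    def design(rows):
        rows = np.atleast_2d(rows)
        t = rows[:, 2:3]
        # A threads^2 column lets the fitted curve flatten at high counts.
        return np.hstack([rows, t * t, np.ones((len(rows), 1))])

    coef, *_ = np.linalg.lstsq(design(feats), y, rcond=None)  # least squares

    def predict(miss_rate, stall_rate, threads):
        return float(design([[miss_rate, stall_rate, threads]]) @ coef)

    def knee(miss_rate, stall_rate, candidates=(2, 4, 8, 16), eps=0.05):
        # Throttle concurrency: stop once the marginal predicted gain fades.
        best = candidates[0]
        for prev, cur in zip(candidates, candidates[1:]):
            if predict(miss_rate, stall_rate, cur) - \
               predict(miss_rate, stall_rate, prev) < eps:
                break
            best = cur
        return best

    print(knee(0.04, 0.20))
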
4

Scheduling on Asymmetric Architectures

Blagojevic, Filip 22 July 2008 (has links)
We explore runtime mechanisms and policies for scheduling dynamic multi-grain parallelism on heterogeneous multi-core processors. Heterogeneous multi-core processors integrate conventional cores that run legacy codes with specialized cores that serve as computational accelerators. The term multi-grain parallelism refers to the exposure of multiple dimensions of parallelism from within the runtime system, so as to best exploit a parallel architecture with heterogeneous computational capabilities between its cores and execution units. To maximize performance on heterogeneous multi-core processors, programs need to expose multiple dimensions of parallelism simultaneously. Unfortunately, programming with multiple dimensions of parallelism is to date an ad hoc process, relying heavily on the intuition and skill of programmers. Formal techniques are needed to optimize multi-dimensional parallel program designs. We investigate user- and kernel-level schedulers that dynamically "rightsize" the dimensions and degrees of parallelism on asymmetric parallel platforms. The schedulers address the problem of mapping application-specific concurrency to an architecture with multiple hardware layers of parallelism, without requiring programmer intervention or sophisticated compiler support. Our runtime environment outperforms the native Linux and MPI scheduling environment by up to a factor of 2.7. We also present a model of multi-dimensional parallel computation for steering the parallelization process on heterogeneous multi-core processors. The model predicts with high accuracy the execution time and scalability of a program using conventional processors and accelerators simultaneously. More specifically, the model reveals optimal degrees of multi-dimensional, task-level and data-level concurrency that maximize performance across cores. We evaluate our runtime policies, as well as the performance model we developed, on an IBM Cell BladeCenter and on a cluster composed of PlayStation 3 nodes, using two realistic bioinformatics applications. / Ph. D.
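
The steering role of such a model can be illustrated with a toy sketch: evaluate a predicted execution time over candidate degrees of task-level and data-level concurrency and keep the best pair. The cost model below is invented for the example; the thesis derives its model from the actual architecture.

    # Hypothetical cost model: work splits across both dimensions, but each
    # accelerator context (task_par) and data partition adds fixed overhead.
    def predicted_time(task_par, data_par, work=1000.0, overhead=2.0):
        return work / (task_par * data_par) + overhead * (task_par + data_par)

    configs = [(t, d) for t in (1, 2, 4, 8) for d in (1, 2, 4, 8, 16)]
    best = min(configs, key=lambda c: predicted_time(*c))
    print(best, predicted_time(*best))  # optimal (task, data) degrees
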
5

Automated Runtime Analysis and Adaptation for Scalable Heterogeneous Computing

Helal, Ahmed Elmohamadi Mohamed 29 January 2020 (has links)
In the last decade, there have been tectonic shifts in computer hardware as sequential CPU performance has reached its physical limits. As a consequence, current high-performance computing (HPC) systems integrate a wide variety of compute resources with different capabilities and execution models, ranging from multi-core CPUs to many-core accelerators. While such heterogeneous systems can enable dramatic acceleration of user applications, extracting optimal performance via manual analysis and optimization is a complicated and time-consuming process. This dissertation presents graph-structured program representations to reason about the performance bottlenecks on modern HPC systems and to guide novel automation frameworks for performance analysis, modeling and runtime adaptation. The proposed program representations exploit domain knowledge and capture the inherent computation and communication patterns in user applications, at multiple levels of computational granularity, via compiler analysis and dynamic instrumentation. The empirical results demonstrate that the introduced modeling frameworks accurately estimate the realizable parallel performance and scalability of a given sequential code when ported to heterogeneous HPC systems. As a result, these frameworks enable efficient workload distribution schemes that utilize all the available compute resources in a performance-proportional way. In addition, the proposed runtime adaptation frameworks significantly improve the end-to-end performance of important real-world applications which suffer from limited parallelism and fine-grained data dependencies. Specifically, compared to state-of-the-art methods, such adaptive parallel execution achieves up to an order-of-magnitude speedup on the target HPC systems while preserving the inherent data dependencies of user applications. / Doctor of Philosophy / Current supercomputers integrate a massive number of heterogeneous compute units with varying speed, computational throughput, memory bandwidth, and memory access latency. This trend represents a major challenge to end users, as their applications have been designed from the ground up to primarily exploit homogeneous CPUs. While heterogeneous systems can deliver several orders of magnitude speedup compared to traditional CPU-based systems, end users need extensive software and hardware expertise as well as significant time and effort to efficiently utilize all the available compute resources. To streamline such a daunting process, this dissertation presents automated frameworks for analyzing and modeling performance on parallel architectures and for transforming the execution of user applications at runtime. The proposed frameworks incorporate domain knowledge and adapt to the input data and the underlying hardware using novel static and dynamic analyses. The experimental results show the efficacy of the introduced frameworks across many important application domains, such as computational fluid dynamics (CFD) and computer-aided design (CAD). In particular, the adaptive execution approach on heterogeneous systems achieves up to an order-of-magnitude speedup over optimized parallel implementations.
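
A performance-proportional distribution scheme of the kind described can be sketched as follows, assuming per-device throughputs have already been estimated by such a model. The device names and numbers are illustrative.

    def partition(n_items, throughputs):
        # Split work across devices in proportion to estimated throughput.
        total = sum(throughputs.values())
        shares = {dev: int(n_items * tp / total) for dev, tp in throughputs.items()}
        # Hand leftover items (from rounding) to the fastest device.
        leftover = n_items - sum(shares.values())
        fastest = max(throughputs, key=throughputs.get)
        shares[fastest] += leftover
        return shares

    # Estimated items/second for each resource (illustrative numbers).
    print(partition(10_000, {"cpu": 120.0, "gpu0": 900.0, "gpu1": 880.0}))
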
6

Prediction Models for Multi-dimensional Power-Performance Optimization on Many Cores

Shah, Ankur Savailal 28 May 2008 (has links)
Power has become a primary concern for HPC systems. Dynamic voltage and frequency scaling (DVFS) and dynamic concurrency throttling (DCT) are two software tools (or knobs) for reducing the dynamic power consumption of HPC systems. To date, few works have considered the synergistic integration of DVFS and DCT in performance-constrained systems, and, to the best of our knowledge, no prior research has developed application-aware simultaneous DVFS and DCT controllers in real systems and parallel programming frameworks. We present a multi-dimensional, online performance prediction framework, which we deploy to address the problem of simultaneous runtime optimization of DVFS, DCT, and thread placement on multi-core systems. We present results from an implementation of the prediction framework in a runtime system linked to the Intel OpenMP runtime environment and running on a real dual-processor quad-core system as well as a dual-processor dual-core system. We show that the prediction framework derives near-optimal settings of the three power-aware program adaptation knobs that we consider. Our overall runtime optimization framework achieves significant reductions in energy (12.27% mean) and ED² (29.6% mean), through simultaneous power savings (3.9% mean) and performance improvements (10.3% mean). Our prediction and adaptation framework outperforms earlier solutions that adapt only DVFS or DCT, as well as one that sequentially applies DCT then DVFS. Further, our results indicate that prediction-based schemes for runtime adaptation compare favorably and typically improve upon heuristic search-based approaches in both performance and energy savings. / Master of Science
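
The core selection problem can be sketched as a search over (frequency, thread count) settings that minimizes predicted ED². The toy time and power models below stand in for the thesis's learned predictors and ignore thread placement.

    # Hypothetical predictors, not the thesis's models.
    def predicted_time(freq_ghz, threads, work=100.0):
        return work / (freq_ghz * min(threads, 6))   # scaling saturates past 6 threads

    def predicted_power(freq_ghz, threads, base=20.0):
        return base + 8.0 * threads * freq_ghz ** 2  # dynamic power grows with frequency

    def ed2(freq_ghz, threads):
        t = predicted_time(freq_ghz, threads)
        energy = predicted_power(freq_ghz, threads) * t
        return energy * t * t                         # ED^2 = energy * delay^2

    # Joint DVFS (frequency) and DCT (thread count) selection.
    settings = [(f, n) for f in (1.2, 1.8, 2.4) for n in (2, 4, 8)]
    print(min(settings, key=lambda s: ed2(*s)))
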
7

On the Interaction of High-Performance Network Protocol Stacks with Multicore Architectures

Chunangad Narayanaswamy, Ganesh 20 May 2008 (has links)
Multicore architectures have been one of the primary driving forces behind the recent rapid growth in high-end computing systems, contributing to their growing scales and capabilities. With significant enhancements in the high-speed networking technologies and protocol stacks that support these high-end systems, there is a growing need to understand the interaction between the two. Since these two components have mostly been designed independently, serious and surprising interactions often arise, resulting in heavy asymmetry in the effective capability of the different cores and thereby degrading performance for various applications. Similarly, depending on the communication pattern of the application and the layout of processes across nodes, these interactions can introduce network scalability issues, which is also an important concern for system designers. In this thesis, we analyze these asymmetric interactions and propose and design a novel systems-level management framework called SIMMer (Systems Interaction Mapping Manager) that automatically monitors these interactions and dynamically manages the mapping of processes onto processor cores to transparently maximize application performance. Performance analysis of SIMMer shows that it can improve the communication performance of applications by more than twofold and overall application performance by 18%. We further analyze the impact of contention in network and processor resources and relate it to the communication pattern of the application. Insights learnt from these analyses can lead to efficient runtime configurations for scientific applications on multicore architectures. / Master of Science
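
One plausible remapping decision of the kind SIMMer automates is sketched below. The policy shown (co-locating the heaviest communicator with the interrupt-servicing core) and all metrics are illustrative guesses, not the thesis's actual algorithm.

    def remap(comm_rates, current_map, interrupt_core):
        # comm_rates: pid -> observed messages/sec; current_map: pid -> core.
        # Co-locate the most communication-intensive process with the core
        # that services network interrupts, for locality with the stack.
        new_map = dict(current_map)
        talker = max(comm_rates, key=comm_rates.get)
        if new_map[talker] != interrupt_core:
            for pid, core in new_map.items():          # swap with the occupant
                if core == interrupt_core:
                    new_map[pid] = new_map[talker]
                    break
            new_map[talker] = interrupt_core
        # On Linux this could be applied with os.sched_setaffinity(pid, {core}).
        return new_map

    print(remap({101: 5000.0, 102: 200.0}, {101: 1, 102: 0}, interrupt_core=0))
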
8

Architecting Resilient Computing Systems: A Component-Based Approach

Stoicescu, Miruna 09 December 2013 (has links)
Evolution during service life is mandatory, particularly for long-lived systems. Dependable systems, which continuously deliver trustworthy services, must evolve to accommodate changes, e.g., new fault tolerance requirements or variations in available resources. The addition of this evolutionary dimension to dependability leads to the notion of resilient computing. Among the various aspects of resilience, we focus on adaptivity. Dependability relies on fault-tolerant computing at runtime, applications being augmented with fault tolerance mechanisms (FTMs). As such, on-line adaptation of FTMs is a key challenge towards resilience. In related work, on-line adaptation of FTMs is most often performed in a preprogrammed manner or consists in tuning some parameters. Besides, FTMs are replaced monolithically: all the envisaged FTMs must be known at design time and deployed from the beginning. However, dynamics occurs along multiple dimensions, and developing a system for the worst-case scenario is impossible. Based on runtime observations, new FTMs can be developed off-line but integrated on-line. We denote this ability as agile adaptation, as opposed to preprogrammed adaptation. In this thesis, we present an approach for developing flexible fault-tolerant systems in which FTMs can be adapted at runtime in an agile manner through fine-grained modifications, minimizing the impact on the initial architecture. We first propose a classification of a set of existing FTMs based on criteria such as fault model, application characteristics and necessary resources. Next, we analyze these FTMs and extract a generic execution scheme which pinpoints the common parts and the variable features between them. Then, we demonstrate the use of state-of-the-art tools and concepts from the field of software engineering, such as component-based software engineering and reflective component-based middleware, for developing a library of fine-grained adaptive FTMs. We evaluate the agility of the approach and illustrate its usability through two examples of integration of the library: first, in a design-driven development process for applications in pervasive computing and, second, in a toolkit for developing applications for wireless sensor networks (WSNs).
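
The "generic execution scheme with variability points" can be pictured as a template method: the common protocol steps are fixed, while the FTM-specific steps are overridable hooks. The class names below are illustrative, not the thesis's component model.

    # Minimal sketch, assuming a request/reply service being made fault-tolerant.
    class FaultToleranceMechanism:
        def handle(self, request):
            self.before(request)        # variability point (e.g. checkpoint, vote)
            reply = self.execute(request)
            self.after(request, reply)  # variability point (e.g. log, compare)
            return reply

        def execute(self, request):
            return f"reply({request})"  # common part: run the actual service

        def before(self, request): pass
        def after(self, request, reply): pass

    class PrimaryBackupReplication(FaultToleranceMechanism):
        def __init__(self): self.backup_log = []
        def after(self, request, reply):
            self.backup_log.append((request, reply))  # ship state to the backup

    # Fine-grained runtime adaptation then amounts to substituting one
    # subclass for another behind the same interface, rather than
    # replacing the mechanism monolithically.
    ftm = PrimaryBackupReplication()
    print(ftm.handle("req-1"), ftm.backup_log)
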
9

Verifying Design Properties at Runtime Using an MDE-Based Approach: Models@Run.Time Verification Applied to Autonomous Connected Vehicles

Loulou, Hassan 21 November 2017 (has links)
Autonomous Connected Vehicles (ACVs) are cyber-physical systems (CPS) where the computational world and the real one meet. These systems require a rigorous validation process that starts at the design phase and continues after software deployment. Models@Run.time has appeared as a new paradigm for continuously monitoring software systems' execution in order to enable adaptations whenever a change, a failure or a bug is introduced in the execution environment. In this thesis, we tackle the ACV environment, where vehicles try to collaborate and share their data in a secure manner. Different modeling approaches are already used for expressing access control requirements in order to impose security policies. However, their validation tools do not consider the impact of the interaction between the functional and the security requirements. This interaction can lead to unexpected security breaches during the system's execution and its potential runtime adaptations. Also, the real-time prediction of traffic states using crowdsourced data could be useful for proposing adaptations to ACV cooperation models; nevertheless, it has not been sufficiently studied yet. To overcome these limitations, many issues should be addressed:
• The evolution of the system's functional part must be considered during the validation of the security policy, and attack scenarios must be generated automatically.
• An approach for designing and automatically detecting security anti-patterns must be developed. Furthermore, new reconfigurations for access control policies must also be found, validated and deployed efficiently at runtime.
• ACVs need to observe and analyze their complex environment, containing big-data streams, to recommend new cooperation models in near real time.
In this thesis, we build an approach for sensing the ACV environment, validating its access control models and securely reconfiguring them on the fly. We cover three aspects:
• We propose an approach for guiding security model checkers to find attack scenarios at design time automatically.
• We design anti-patterns to guide the validation process. Then, we develop an algorithm to detect them automatically during model reconfigurations. Also, we design a mechanism for reconfiguring the access control model, and we develop a lightweight modular framework for the efficient deployment of new reconfigurations.
• We build an approach for the real-time monitoring of dynamic data streams to propose adaptations to the access policy at runtime.
Our proposed approach was validated using several examples related to ACVs, and the results of our experiments prove the feasibility of this approach.
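
As a hedged illustration of anti-pattern detection during a policy reconfiguration, the sketch below rejects a reconfiguration when a subject both grants and exercises the same permission. The rule format and the anti-pattern itself are invented for the example, not taken from the thesis.

    def find_antipatterns(rules):
        # rules: list of (subject, action, resource) triples.
        grants = {(s, r) for s, a, r in rules if a == "grant"}
        uses = {(s, r) for s, a, r in rules if a == "write"}
        # Subjects that can grant themselves the access they exercise
        # form a self-escalation loop (our illustrative anti-pattern).
        return sorted(grants & uses)

    policy = [
        ("vehicle_A", "write", "traffic_feed"),
        ("vehicle_A", "grant", "traffic_feed"),   # anti-pattern: self-escalation
        ("vehicle_B", "read",  "traffic_feed"),
    ]
    violations = find_antipatterns(policy)
    if violations:
        print("reject reconfiguration:", violations)  # validate before deploying
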
10

Run-time Variability with Roles

Taing, Nguonly 04 April 2018 (has links)
Adaptability is an intrinsic property of software systems that require adaptation to cope with dynamically changing environments. Achieving adaptability is challenging. Variability is a key solution, as it enables a software system to change its behavior to match a specific need. The abstraction of variability is to manage variants, which are dynamic parts to be composed with the base system. Run-time variability realizes these variant compositions dynamically at run time to enable adaptation. Adaptation relying on variants specified at build time is called anticipated adaptation, which allows the system behavior to change with respect to a set of predefined execution environments. This implies an inability to solve practical problems in which the execution environment is not completely fixed and is often unknown until run time. Enabling unanticipated adaptation, which allows variants to be dynamically added at run time, alleviates this inability, but it carries several risks of system instability, such as inconsistency and run-time failures. Adaptation should be performed only when a system reaches a consistent state, to avoid inconsistency. Inconsistency is an effect of adaptation that occurs when the system changes its state and behavior while a series of method invocations is still in progress. A software bug is another source of system instability; it often appears in a variant composition and is brought into the system during adaptation. The problem is even more critical for unanticipated adaptation, as the system has no prior knowledge of the new variants. This dissertation aims to achieve both anticipated and unanticipated adaptation. In doing so, the issues of inconsistency and software failures, which may happen as a consequence of run-time adaptation, are addressed as well. Roles encapsulate dynamic behavior used to adapt players representing the base system, which is the rationale for selecting roles as the software system's variants. Based on the role concept, this dissertation presents three mechanisms to comprehensively address adaptation. First, a dynamic instance binding mechanism is proposed to loosely bind players and roles; dynamic binding of roles enables anticipated and unanticipated adaptation. Second, an object-level tranquility mechanism is proposed to avoid inconsistency by allowing a player object to adapt only when it reaches a consistent state. Last, a rollback recovery mechanism is proposed as a proactive mechanism to embrace and handle failures resulting from a defective composition of variants: a checkpoint of the system configuration is created before adaptation, and if a specialized bug sensor detects a failure, the system rolls back to the most recent checkpoint. These mechanisms are integrated into a role-based runtime called LyRT. LyRT was validated with three case studies to demonstrate its practical feasibility. This validation showed that LyRT is more advanced than existing variability approaches with respect to adaptation, owing to its consistency control and failure handling. In addition, several benchmarks were set up to quantify the overhead of LyRT in terms of the execution time of adaptation. The results revealed that the overhead introduced to achieve anticipated and unanticipated adaptation is small enough for practical use in adaptive software systems. Thus, LyRT is suitable for adaptive software systems that frequently require the adaptation of large sets of objects.
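
The three mechanisms can be caricatured in a few lines; this is a sketch of the concepts, not LyRT's actual API: dynamic role binding, an object-level tranquility check, and a checkpoint/rollback around a possibly defective composition.

    import copy

    class Player:
        def __init__(self):
            self.roles, self.active_calls = [], 0   # active_calls > 0 => not tranquil

        def tranquil(self):
            return self.active_calls == 0

        def bind(self, role):
            if not self.tranquil():
                raise RuntimeError("adaptation deferred: object not in consistent state")
            checkpoint = copy.deepcopy(self.roles)   # checkpoint before adapting
            self.roles.append(role)
            try:
                role.self_test()                     # stand-in for the "bug sensor"
            except Exception:
                self.roles = checkpoint              # roll back defective composition
                raise

    class LoggingRole:
        def self_test(self): pass                    # a healthy role

    p = Player()
    p.bind(LoggingRole())   # an unanticipated variant added at run time
    print(len(p.roles))
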
