41

The run-time impact of business functionality when decomposing and adopting the microservice architecture / Påverkan av körtid för system funktionaliteter då de upplöses och microservice architektur appliceras

Faradj, Rasti January 2018
In line with the growth of software, code bases are becoming bigger and more complex, and the architectural patterns that systems rely upon are becoming increasingly important. Recently, decomposed architectural styles such as the microservice architecture have become a popular choice. This thesis explores system behavior with respect to the granularity at which a system is decomposed and the external communication between the resulting decomposed services. An e-commerce scenario was modeled and implemented at different granularity levels, and the response time was measured at each level; the communication was established both with REST (using HTTP and JSON) and with the gRPC framework. The results show that decomposition affects run-time behaviour and that the external communication slows the system down: the highest granularity level, implemented with gRPC for communication, adds about 10 ms. Relative to how the web behaves today, this overhead can be considered acceptable; measured against theoretically optimal bounds, however, it may be seen as too large.
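A minimal sketch of the kind of measurement this thesis performs: timing the same lookup as an in-process call and as a call across an HTTP/JSON service boundary. The endpoint, payload and request count are illustrative assumptions, not the thesis' actual test setup.

    import json
    import threading
    import time
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def price_lookup(item_id):
        # Stand-in for one decomposed e-commerce service (hypothetical logic)
        return {"item": item_id, "price": 9.99}

    class PriceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(price_lookup(self.path.strip("/"))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass

    server = HTTPServer(("127.0.0.1", 0), PriceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/42"

    N = 200
    t0 = time.perf_counter()
    for _ in range(N):
        price_lookup("42")                      # monolith: plain function call
    t_local = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(N):
        with urllib.request.urlopen(url) as r:  # decomposed: REST/HTTP/JSON hop
            json.load(r)
    t_remote = time.perf_counter() - t0

    print(f"in-process: {t_local / N * 1e3:.3f} ms/call")
    print(f"over HTTP:  {t_remote / N * 1e3:.3f} ms/call")
    server.shutdown()

The difference between the two figures is the per-hop communication cost that grows with the granularity of the decomposition.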
42

Mitigating Emergent Safety and Security Incidents of CPS by a Protective Shell

Wagner, Leonard 07 November 2023
In today's modern world, Cyber-Physical Systems (CPS) have gained widespread prevalence, offering tremendous benefits while also increasing society's dependence on them. Given the direct interaction of CPS with the physical environment, their malfunction or compromise can pose significant risks to human life, property, and the environment. However, as the complexity of CPS rises due to heightened expectations and expanded functional requirements, ensuring their trustworthy operation solely during the development process becomes increasingly challenging. This thesis introduces and delves into the novel concept of the 'Protective Shell' – a real-time safeguard actively monitoring CPS during their operational phases. The protective shell serves as a last line of defence, designed to detect abnormal behaviour, conduct thorough analyses, and initiate countermeasures promptly, thereby mitigating unforeseen risks in real-time. The primary objective of this research is to enhance the overall safety and security of CPS by refining, partly implementing, and evaluating the innovative protective shell concept. To provide context for collaborative systems working towards higher objectives – common within CPS as system-of-systems (SoS) – the thesis introduces the 'Emergence Matrix'. This matrix categorises outcomes of such collaboration into four quadrants based on their anticipated nature and desirability. Particularly concerning are outcomes that are both unexpected and undesirable, which frequently serve as the root cause of safety accidents and security incidents in CPS scenarios. The protective shell plays a critical role in mitigating these unfavourable outcomes, as conventional vulnerability-elimination procedures during the CPS design phase prove insufficient due to their inability to proactively anticipate and address these unforeseen situations. Employing the design science research methodology, the thesis is structured around its iterative cycles and the research questions posed, offering a systematic exploration of the topic. A detailed analysis of various safety accidents and security incidents involving CPS was conducted to retrieve the vulnerabilities that led to dangerous outcomes. By developing specific protective shells for each affected CPS and assessing their effectiveness during these hazardous scenarios, a generic core for the protective shell concept could be distilled, indicating its general characteristics and overall applicability. Furthermore, the research presents a generic protective shell architecture, integrating advanced anomaly detection techniques rooted in explainable artificial intelligence (XAI) and human-machine teaming. While the implementation of protective shells demonstrates substantial positive impacts in ensuring CPS safety and security, the thesis also articulates potential risks associated with their deployment that require careful consideration.
In conclusion, this thesis makes a significant contribution towards the safer and more secure integration of complex CPS into daily routines, critical infrastructures and other sectors by leveraging the capabilities of the generic protective shell framework.

Table of contents:
1 Introduction
  1.1 Background and Context
  1.2 Research Problem
  1.3 Purpose and Objectives
    1.3.1 Thesis Vision
    1.3.2 Thesis Mission
  1.4 Thesis Outline and Structure
2 Design Science Research Methodology
  2.1 Relevance-, Rigor- and Design Cycle
  2.2 Research Questions
3 Cyber-Physical Systems
  3.1 Explanation
  3.2 Safety- and Security-Critical Aspects
  3.3 Risk
    3.3.1 Quantitative Risk Assessment
    3.3.2 Qualitative Risk Assessment
    3.3.3 Risk Reduction Mechanisms
    3.3.4 Acceptable Residual Risk
  3.4 Engineering Principles
    3.4.1 Safety Principles
    3.4.2 Security Principles
  3.5 Cyber-Physical System of Systems (CPSoS)
    3.5.1 Emergence
4 Protective Shell
  4.1 Explanation
  4.2 System Architecture
  4.3 Run-Time Monitoring
  4.4 Definition
  4.5 Expectations / Goals
5 Specific Protective Shells
  5.1 Boeing 737 Max MCAS
    5.1.1 Introduction
    5.1.2 Vulnerabilities within CPS
    5.1.3 Specific Protective Shell Mitigation Mechanisms
    5.1.4 Protective Shell Evaluation
  5.2 Therac-25
    5.2.1 Introduction
    5.2.2 Vulnerabilities within CPS
    5.2.3 Specific Protective Shell Mitigation Mechanisms
    5.2.4 Protective Shell Evaluation
  5.3 Stuxnet
    5.3.1 Introduction
    5.3.2 Exploited Vulnerabilities
    5.3.3 Specific Protective Shell Mitigation Mechanisms
    5.3.4 Protective Shell Evaluation
  5.4 Toyota 'Unintended Acceleration' ETCS
    5.4.1 Introduction
    5.4.2 Vulnerabilities within CPS
    5.4.3 Specific Protective Shell Mitigation Mechanisms
    5.4.4 Protective Shell Evaluation
  5.5 Jeep Cherokee Hack
    5.5.1 Introduction
    5.5.2 Vulnerabilities within CPS
    5.5.3 Specific Protective Shell Mitigation Mechanisms
    5.5.4 Protective Shell Evaluation
  5.6 Ukrainian Power Grid Cyber-Attack
    5.6.1 Introduction
    5.6.2 Vulnerabilities in the critical Infrastructure
    5.6.3 Specific Protective Shell Mitigation Mechanisms
    5.6.4 Protective Shell Evaluation
  5.7 Airbus A400M FADEC
    5.7.1 Introduction
    5.7.2 Vulnerabilities within CPS
    5.7.3 Specific Protective Shell Mitigation Mechanisms
    5.7.4 Protective Shell Evaluation
  5.8 Similarities between Specific Protective Shells
    5.8.1 Mitigation Mechanisms Categories
    5.8.2 Explanation
    5.8.3 Conclusion
6 AI
  6.1 Explainable AI (XAI) for Anomaly Detection
    6.1.1 Anomaly Detection
    6.1.2 Explainable Artificial Intelligence
  6.2 Intrinsic Explainable ML Models
    6.2.1 Linear Regression
    6.2.2 Decision Trees
    6.2.3 K-Nearest Neighbours
  6.3 Example Use Case - Predictive Maintenance
7 Generic Protective Shell
  7.1 Architecture
    7.1.1 MAPE-K
    7.1.2 Human Machine Teaming
    7.1.3 Protective Shell Plugin Catalogue
    7.1.4 Architecture and Design Principles
    7.1.5 Conclusion Architecture
  7.2 Implementation Details
  7.3 Evaluation
    7.3.1 Additional Vulnerabilities introduced by the Protective Shell
    7.3.2 Summary
8 Conclusion
  8.1 Summary
  8.2 Research Questions Evaluation
  8.3 Contribution
  8.4 Future Work
  8.5 Recommendation
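The 'Emergence Matrix' described in this abstract lends itself to a small illustration: an outcome of collaborating CPS is placed in one of four quadrants according to whether it was anticipated and whether it is desirable. The quadrant labels and the example are this editor's assumptions, not wording from the thesis.

    from enum import Enum

    class Quadrant(Enum):
        EXPECTED_DESIRABLE = "designed-for behaviour"
        EXPECTED_UNDESIRABLE = "known, accepted risk"
        UNEXPECTED_DESIRABLE = "positive emergence"
        UNEXPECTED_UNDESIRABLE = "root cause of safety/security incidents"

    def classify(anticipated: bool, desirable: bool) -> Quadrant:
        # The unexpected-and-undesirable quadrant is the one a protective
        # shell must catch at run time, after design-time analysis missed it.
        if anticipated:
            return (Quadrant.EXPECTED_DESIRABLE if desirable
                    else Quadrant.EXPECTED_UNDESIRABLE)
        return (Quadrant.UNEXPECTED_DESIRABLE if desirable
                else Quadrant.UNEXPECTED_UNDESIRABLE)

    # A hypothetical emergent outcome of two collaborating CPS
    print(classify(anticipated=False, desirable=False))
    # -> Quadrant.UNEXPECTED_UNDESIRABLE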
43

Exekveringsmiljö för Plex-C på JVM / Run-time environment for Plex-C on JVM

Möller, Johan January 2002
The Ericsson AXE-based systems are programmed using an internally developed language called Plex-C. Plex-C is normally compiled to execute on an Ericsson-internal processor architecture. A transition to standard processors is currently in progress, which makes it interesting to examine whether Plex-C can be compiled to execute on the JVM, making it processor independent. The purpose of the thesis is to examine whether parts of the run-time environment of Plex-C can be translated to Java, and whether this can be done so that sufficient performance is obtained. It includes how language constructions in Plex-C can be translated to Java. The thesis describes how a limited part of the Plex-C run-time environment is implemented in Java; optimizations are an important part of the implementation. It also describes how the JVM system was tested with a benchmark test. The test results indicate that the implemented system is a few times faster than the Ericsson-internal processor architecture. This performance is still not sufficient for the JVM system to be an interesting replacement for the currently used processor architecture, but it might still be useful as a processor-independent test platform.
44

Enabling Timing Analysis of Complex Embedded Software Systems

Kraft, Johan January 2010
Cars, trains, trucks, telecom networks and industrial robots are examples of products relying on complex embedded software systems, running on embedded computers. Such systems may consist of millions of lines of program code developed by hundreds of engineers over many years, often decades. Over the long life-cycle of such systems, the main part of the product development costs is typically not the initial development, but the software maintenance, i.e., improvements and corrections of defects, over the years. Of the maintenance costs, a major part is the verification of the system after changes have been applied, which often requires a huge amount of testing. However, today's techniques are not sufficient, as defects are often found post-release, by customers. This area is therefore of high relevance for industry. Complex embedded systems often control machinery where timing is crucial for accuracy and safety. Such systems therefore have important timing requirements, such as maximum response times. However, when maintaining complex embedded software systems, it is difficult to predict how changes may impact the system's run-time behavior and timing, e.g., response times. Analytical and formal methods for timing analysis exist, but are often hard to apply in practice on complex embedded systems, for several reasons. As a result, the industrial practice in deciding the suitability of a proposed change, with respect to its run-time impact, is to rely on the subjective judgment of experienced developers and architects. This is a risky and inefficient trial-and-error approach, which may waste large amounts of person-hours on implementing unsuitable software designs with potential timing or performance problems. Such problems generally cannot be detected until late stages of testing, when the updated software system can be tested on system level, under realistic conditions, and even then they are easy to miss. If products are released containing software with latent timing errors, this may cause huge costs, such as car recalls, or even accidents. Even when such problems are found by testing, they necessitate design changes late in the development project, which causes delays and increases costs. This thesis presents an approach for impact analysis with respect to run-time behavior, such as timing and performance, for complex embedded systems. The impact analysis is performed through optimizing simulation, where the simulation models are automatically generated from the system implementation. This approach allows the consequences of proposed designs, for new or modified features, to be predicted by prototyping the change in the simulation model on a high level of abstraction, e.g., by increasing the execution time of a particular task. Thereby, designs leading to timing, performance, or resource-usage problems can be identified early, before implementation, and late redesigns are avoided, which improves development efficiency and predictability as well as software quality. The contributions presented in this thesis are within four areas related to simulation-based analysis of complex embedded systems: (1) simulation and simulation optimization techniques, (2) automated extraction of simulation models from source code, (3) methods for validation of such simulation models and (4) run-time recording techniques for model extraction, impact analysis and model validation purposes.
Several tools have been developed during this work, of which two are being commercialized by the spin-off company Percepio AB. The Katana approach, in area (2), is the subject of a recent patent application (patent pending).
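The kind of what-if question the thesis automates can be pictured with a deliberately tiny, hand-written model: simulate a fixed-priority task set, inflate one task's execution time to prototype a proposed change, and compare the worst observed response times. The task parameters below are invented, and the thesis' real simulation models are extracted from the system's source code rather than written by hand.

    def simulate(tasks, horizon):
        """Tiny discrete-time simulation of fixed-priority preemptive
        scheduling. tasks: list of (C, T) pairs, index = priority
        (0 highest). Returns the worst observed response time per task."""
        remaining = [0] * len(tasks)   # work left in the current job
        release = [0] * len(tasks)     # release time of the current job
        worst = [0] * len(tasks)
        for t in range(horizon):
            for i, (C, T) in enumerate(tasks):
                if t % T == 0:         # periodic job release
                    remaining[i], release[i] = C, t
            for i in range(len(tasks)):
                if remaining[i] > 0:   # run highest-priority pending task
                    remaining[i] -= 1
                    if remaining[i] == 0:
                        worst[i] = max(worst[i], t + 1 - release[i])
                    break
        return worst

    tasks = [(2, 10), (3, 20), (5, 50)]   # (execution time, period), invented
    print("baseline response times:", simulate(tasks, 1000))
    tasks[1] = (5, 20)                    # prototype: task 1 grows from 3 to 5
    print("after change:           ", simulate(tasks, 1000))

Comparing the two outputs shows how the proposed change propagates to the response times of lower-priority tasks, before any implementation work is done.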
45

The Omnibus language and integrated verification approach

Wilson, Thomas January 2007
This thesis describes the Omnibus language and its supporting framework of tools. Omnibus is an object-oriented language which is superficially similar to the Java programming language but uses value semantics for objects and incorporates a behavioural interface specification language. Specifications are defined in terms of a subset of the query functions of the classes, for which a frame-condition logic is provided. The language is well suited to the specification of modelling types and can also be used to write implementations. An overview of the language is presented and then specific aspects such as subtleties in the frame-condition logic, the implementation of value semantics and the role of equality are discussed. The challenges of reference semantics are also discussed. The Omnibus language is supported by an integrated verification tool which provides support for three assertion-based verification approaches: run-time assertion checking, extended static checking and full formal verification. The different approaches provide different balances between rigour and ease of use. The Omnibus tool allows these approaches to be used together in different parts of the same project, and guidelines are presented to help users avoid conflicts when combining the approaches. The use of the integrated verification approach to meet two key requirements of safe software component reuse, having clear descriptions and some form of certification, is discussed along with the specialised facilities provided by the Omnibus tool to manage the distribution of components. The principles of the implementation of the tool are described, focussing on the integrated static verifier module that supports both extended static checking and full formal verification through the use of an intermediate logic. The different verification approaches are used to detect and correct a range of errors in a case study carried out using the Omnibus language. The case study is of a library system where copies of books, CDs and DVDs are loaned out to members. The implementation consists of 2278 lines of Omnibus code spread over 15 classes. To allow direct comparison of the different assertion-based verification approaches considered, run-time assertion checking, extended static checking and then full formal verification are applied to the application in its entirety. This directly illustrates the different balances between error coverage and ease of use which the approaches offer. Finally, the verification policy system is used to allow the approaches to be used together to verify different parts of the application.
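Of the three assertion-based approaches, run-time assertion checking is the most concrete to picture. Below is a rough Python analogue of a pre- and postcondition phrased over a query function, in the spirit of the library case study; the decorator and the loan example are illustrative assumptions, not Omnibus syntax.

    def checked(pre, post):
        """Run-time assertion checking: test the precondition before the
        call and the postcondition (old vs. new state) after it."""
        def wrap(method):
            def inner(self, *args):
                assert pre(self, *args), "precondition violated"
                old = self.copies_available()   # snapshot of the query function
                result = method(self, *args)
                assert post(self, old, *args), "postcondition violated"
                return result
            return inner
        return wrap

    class LibraryItem:
        def __init__(self, copies):
            self._copies = copies

        def copies_available(self):             # specification query function
            return self._copies

        @checked(pre=lambda self: self.copies_available() > 0,
                 post=lambda self, old: self.copies_available() == old - 1)
        def loan(self):
            self._copies -= 1

    book = LibraryItem(copies=1)
    book.loan()                                 # satisfies both assertions
    try:
        book.loan()                             # no copies left
    except AssertionError as e:
        print(e)                                # -> precondition violated

Extended static checking and full formal verification would discharge the same assertions before the program runs, trading ease of use for rigour.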
46

Passage à l'echelle d'un support d'exécution à base de tâches pour l'algèbre linéaire dense / Scalability of a task-based runtime system for dense linear algebra applications

Sergent, Marc 08 December 2016
The ever-increasing complexity of supercomputer architectures emphasizes the need for high-level parallel programming paradigms to design efficient, scalable and portable scientific applications. Among such paradigms, task-based programming abstracts away much of the architectural complexity by representing an application as a Directed Acyclic Graph (DAG) of tasks. In particular, the Sequential-Task-Flow (STF) model decouples the sequential task-submission step from the parallel task-execution step. While this model allows for further optimizations on the DAG of tasks at submission time, there is a key concern about the performance hindrance that sequential task submission may impose at scale. This thesis studies the scalability of the STF-based StarPU runtime system (developed at Inria Bordeaux in the STORM team) for the large-scale 3D simulations of the CEA, which use dense linear algebra solvers. To that end, we collaborated with the HiePACS team of Inria Bordeaux on the Chameleon software, a collection of linear algebra solvers on top of task-based runtime systems, to produce an efficient and scalable dense linear algebra solver on top of StarPU, scaling up to 3,000 cores and 288 GPUs of CEA-DAM's TERA-100 supercomputer.
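The STF model itself fits in a few lines: tasks are submitted one by one from sequential code, and dependencies are inferred from which data each task reads or writes, so execution can proceed in parallel as soon as dependencies allow. This is a toy dependency-inference sketch, not the StarPU API; the tile names mimic a tiled factorization.

    last_writer = {}   # data handle -> index of the task that last wrote it
    readers = {}       # data handle -> tasks reading it since the last write
    tasks = []         # (name, indices of tasks it must wait for)

    def submit(name, reads=(), writes=()):
        """Sequential task submission; dependencies inferred from accesses."""
        deps = set()
        for d in reads:
            if d in last_writer:
                deps.add(last_writer[d])        # read-after-write
        for d in writes:
            deps.update(readers.get(d, ()))     # write-after-read
            if d in last_writer:
                deps.add(last_writer[d])        # write-after-write
        i = len(tasks)
        tasks.append((name, deps))
        for d in reads:
            readers.setdefault(d, set()).add(i)
        for d in writes:
            last_writer[d] = i
            readers[d] = set()
        return i

    submit("potrf(A00)", writes=("A00",))
    submit("trsm(A00, A10)", reads=("A00",), writes=("A10",))
    submit("syrk(A10, A11)", reads=("A10",), writes=("A11",))
    for name, deps in tasks:
        print(f"{name} waits for tasks {sorted(deps)}")

The scalability concern the thesis studies is visible even here: however many workers execute the resulting DAG in parallel, the submit() calls themselves remain a single sequential stream.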
47

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim 14 November 2014
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or the sheer scale of the computational studies can sometimes be encapsulated in the form of a workflow that is made up of numerous dependent components. Due to its decomposable and parallelizable nature, different components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools are utilized to help manage and deal with various aspects in the lifecycle of such complex applications. One particular and fundamental aspect that has to be dealt with as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e. workflow orchestration). Our efforts in this study are focused on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each of them. Our first contribution involves increasing the scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains; we devise and implement a generic decentralization framework for the orchestration of workflows under such conditions. Our second contribution addresses the issues that arise due to the dynamic nature of such environments; we provide generic adaptation mechanisms that are highly transparent and substantially less intrusive with respect to the rest of the workflow in execution. Our third contribution improves the efficiency of orchestration of large-scale parameter-sweep workflows; by exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows. We also discuss implementation issues and details that arise as we provide our contributions in each situation.
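The parameter-sweep case has a shape worth sketching: the same dependent components are fanned out over a parameter grid, and because the branches are independent they can be orchestrated on different resources. The 'stages' and the thread pool below are stand-ins for real workflow components and distributed sites.

    from concurrent.futures import ThreadPoolExecutor
    from itertools import product

    def preprocess(params):        # stand-in for a dependent workflow component
        return {"params": params, "data": sum(params)}

    def analyze(ctx):              # depends on the output of preprocess
        ctx["score"] = ctx["data"] ** 2
        return ctx

    def run_branch(params):
        # Each sweep branch is an independent mini-workflow, so branches
        # can be mapped to different sites with no cross-communication.
        return analyze(preprocess(params))

    grid = list(product([1, 2, 3], [10, 20]))        # 6 parameter combinations
    with ThreadPoolExecutor(max_workers=3) as pool:  # stand-in for remote sites
        for result in pool.map(run_branch, grid):
            print(result["params"], "->", result["score"])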
48

Achieving Autonomic Computing through the Use of Variability Models at Run-time

Cetina Englada, Carlos 15 April 2010
Increasingly, software needs to dynamically adapt its behavior at run-time in response to changing conditions in the supporting computing infrastructure and in the surrounding physical environment. Adaptability is emerging as a necessary underlying capability, particularly for highly dynamic systems such as context-aware or ubiquitous systems. By automating tasks such as installation, adaptation, or healing, Autonomic Computing envisions computing environments that evolve without the need for human intervention. Even though there is a fair amount of work on architectures and their theoretical design, Autonomic Computing has been criticised as a "hype topic" because very little of it has been implemented fully. Furthermore, given that the autonomic system must change states at run-time and that some of those states may emerge and are much less deterministic, there is a great challenge to provide new guidelines, techniques and tools to help autonomic system development. This thesis shows that building on the central ideas of Model Driven Development (models as first-order citizens) and Software Product Lines (variability management) can play a significant role as we move towards implementing the key self-management properties associated with autonomic computing. The presented approach encompasses systems that are capable of modifying their own behavior with respect to changes in their operating environment, by using variability models as if they were the policies that drive the system's autonomic reconfiguration at run-time. Under a set of reconfiguration commands, the components that make up the architecture dynamically cooperate to change the configuration of the architecture to a new configuration. This work also provides the implementation of a Model-Based Reconfiguration Engine (MoRE) to blend the above ideas. Given a context event, MoRE queries the variability models to determine how the system should evolve, and then it provides the mechanisms for modifying the system. / Cetina Englada, C. (2010). Achieving Autonomic Computing through the Use of Variability Models at Run-time [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7484
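The runtime loop sketched at the end of this abstract (context event in, reconfiguration commands out) can be paraphrased in a few lines. The feature names, the 'requires' table and the context policy are invented for illustration; MoRE operates on real variability models rather than a hand-written dictionary.

    # Toy variability model: feature -> features it requires
    MODEL = {
        "presence_simulation": {"lighting"},
        "lighting": set(),
        "alarm": {"siren"},
        "siren": set(),
    }
    # Which features each context event calls for (invented policy)
    POLICY = {"occupants_left": {"presence_simulation"},
              "intrusion": {"alarm"}}

    def resolve(features):
        """Close a feature set under the model's 'requires' constraints."""
        closed, frontier = set(features), list(features)
        while frontier:
            for dep in MODEL[frontier.pop()]:
                if dep not in closed:
                    closed.add(dep)
                    frontier.append(dep)
        return closed

    def reconfigure(active, event):
        target = resolve(POLICY[event])
        for f in sorted(target - active):
            print("activate  ", f)   # would issue a reconfiguration command
        for f in sorted(active - target):
            print("deactivate", f)
        return target

    active = set()
    active = reconfigure(active, "occupants_left")
    active = reconfigure(active, "intrusion")

Here the variability model plays exactly the role the abstract describes: it is the policy that decides which features the running system should activate or deactivate for a given context event.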
49

Optimisation de la consommation d'énergie des systèmes mobiles par l'analyse des besoins de l'utilisateur / Energy consumption optimization in mobile systems by user needs analysis

Chaib Draa, Ismat Yahia 26 June 2018
Mobile systems are nowadays ever more ubiquitous and have become indispensable for many of us, and optimizing their energy consumption is crucial: lower power consumption extends battery life and improves system reliability. The manufacturers of these mobile platforms flood the market with increasingly powerful products hosting large numbers of power-hungry applications, and the flip side of this popularity is energy consumption. The characteristics of current mobile systems only sharpen the need to rethink energy-optimization techniques. At a time of rising awareness for a 'greener' world, many solutions have been proposed to address the energy consumption of mobile systems; in existing solutions, however, the user's behavior and needs are rarely considered. This omission is paradoxical, since it is the end user's behavior that determines the system's energy consumption. On the other hand, the data produced by the various embedded sensors and by the operating system can be exploited to detect the changing device context, the user's needs and the running applications' hardware requirements. Used properly, this large flow of information can serve to characterize the user's behavior, habits and hardware-configuration needs, and by assimilating and processing it, energy optimizations can be proposed without degrading user satisfaction. This thesis proposes the CPA (Collect - Process - Adjust) model, which collects data from different sources, processes it and generates energy-optimization policies from it. The work of this thesis was carried out in collaboration with Intel Portland; the objective of the collaboration is the design and realization of solutions that improve the power management offered by the operating system.
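A schematic of the CPA loop: collect data from sensors and the operating system, process it into an estimate of what the user currently needs, and adjust the power knobs accordingly. The sampled fields, thresholds and knob settings below are placeholders, not the thesis' actual policies.

    import random

    def collect():
        # Placeholder for sensor/OS data (screen state, load, app in focus)
        return {"screen_on": random.random() < 0.7,
                "cpu_load": random.random(),
                "foreground_app": random.choice(["video", "reader", "idle"])}

    def process(sample):
        # Infer the user's current hardware needs from the collected data
        if not sample["screen_on"] or sample["foreground_app"] == "idle":
            return "low_power"
        if sample["foreground_app"] == "video" or sample["cpu_load"] > 0.8:
            return "performance"
        return "balanced"

    KNOBS = {"low_power":   {"governor": "powersave",   "brightness": 30},
             "balanced":    {"governor": "schedutil",   "brightness": 60},
             "performance": {"governor": "performance", "brightness": 90}}

    def adjust(profile):
        # A real policy would set the CPU governor, brightness, radios, etc.
        print(profile, "->", KNOBS[profile])

    for _ in range(3):             # the Collect - Process - Adjust loop
        adjust(process(collect()))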
50

Automated Configuration of Time-Critical Multi-Configuration AUTOSAR Systems

Chandmare, Kunal 28 September 2017
The vision of automated driving demands a highly available system, especially for safety-critical functionalities. In automated driving, when the driver is not bound to be part of the control loop, the system needs to remain operational even after the failure of a critical component, until the driver regains control of the vehicle. In pursuit of such fail-operational behavior, the developed design process uses software redundancy instead of a conventional dedicated backup, and therefore requires the support of an automatic configurator for the scheduling-relevant parameters to ensure the real-time behavior of the system. Multiple implementation methods are introduced to provide such an automatic service, which also considers task criticality before assigning tasks to processors. In addition, a generic method is developed to automatically generate adaptation plans for an existing monitoring and reconfiguration service, to handle environments in which faults occur.
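The configurator's core decision, placing tasks on processors with criticality considered first, can be sketched as a criticality-ordered first-fit assignment under a per-core utilization bound. The task set, the bound and the utilization-based feasibility test are invented stand-ins; the actual tool configures AUTOSAR scheduling parameters rather than checking utilizations.

    # (name, criticality, utilization = exec time / period) -- invented values
    tasks = [("brake_ctrl", 3, 0.30), ("lane_keep", 3, 0.25),
             ("diagnostics", 1, 0.40), ("telemetry", 2, 0.20)]
    CORES, BOUND = 2, 0.69   # rate-monotonic-style utilization bound per core

    load = [0.0] * CORES
    placement = {}
    # Highest criticality first, so safety-relevant tasks are placed while
    # capacity is plentiful; first-fit onto the first core that still fits.
    for name, crit, u in sorted(tasks, key=lambda t: -t[1]):
        core = next((c for c in range(CORES) if load[c] + u <= BOUND), None)
        if core is None:
            raise SystemExit(f"no feasible placement for {name}")
        load[core] += u
        placement[name] = core

    for name, core in placement.items():
        print(f"{name} -> core {core} (core load now {load[core]:.2f})")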
