  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A methodology for the requirements analysis of critical real-time systems

De Lemos, Rogerio January 1994 (has links)
This thesis describes a methodology for the requirements analysis of critical real-time systems. The methodology is based on formal methods, and provides a systematic way in which requirements can be analysed and specifications produced. The proposed methodology consists of a framework with distinct phases of analysis, a set of techniques appropriate for the issues to be analysed at each phase of the framework, a hierarchical structure of the specifications obtained from the process of analysis, and techniques to perform quality assessment of the specifications. The phases of the framework, which are abstraction levels for the analysis of the requirements, follow directly from a general structure adopted for critical real-time systems. The intention is to define abstraction levels, or domains, in which the analysis of requirements can be performed in terms of specific properties of the system, thus reducing the inherent complexity of the analysis. Depending on the issues to be analysed in each domain, the choice of the appropriate formalism is determined by the set of features, related to that domain, that a formalism should possess. In this work, instead of proposing new formalisms, we concentrate on identifying and enumerating those features that a formalism should have. The specifications produced at each phase of the framework are organised by means of a specification hierarchy, which facilitates our assessment of the quality of the requirements specifications, and their traceability. Such an assessment should be performed by qualitative and quantitative means in order to obtain high confidence (assurance) that the level of safety is acceptable. In order to exemplify the proposed methodology for the requirements analysis of critical real-time systems we discuss a case study based on a crossing of two rail tracks (in a model railway), which raises safety issues that are similar to those found at a traditional level crossing (i.e. rail-road).
2

Scheduling and timing analysis for safety critical real-time systems

Bate, Iain John January 1999 (has links)
No description available.
3

Mobile Interaction with Safety Critical Systems : A feasibility study

Jonsson, Erik January 2015 (has links)
Embedded systems exist everywhere around us, and the number of applications seems to be ever growing. They are found in electrical devices from coffee machines to aircraft. The common denominator is that they are designed for the specific purpose of the application. Some of them are used in safety-critical systems, where it is crucial that they operate correctly and as intended in order to avoid accidents that can harm people or property. Meanwhile, general-purpose Commercial Off The Shelf (COTS) devices found in retail stores, such as smartphones and tablets, have become a natural part of everyday life. New applications are developed every day that improve everyday living, but many are also coupled to specific devices in order to control their functionality. Interaction between embedded systems and these flexible devices does not, however, come without issues: security, safety and ethical aspects all need to be considered. In this thesis, a case study was performed to investigate the feasibility of using mobile COTS products to interact with safety-critical systems with respect to functional safety. Six user scenarios of potential interest for industrial applications were identified: the operator being presented live machine data, the operator controlling the machine remotely, the service technician using a mobile device in maintenance, the service technician reading machine logs from the office, the production manager analysing machine productivity logs from the office, and the software manager uploading software. Restrictions in the functional safety standard IEC 61508, together with the characteristics of COTS devices, lead to the conclusion that real-time interaction with safety systems is not allowed if certification is to be preserved. Extracting information used to analyse the system, where data is only sent from the machine, would be allowed. All scenarios where the machine sends data to the user, and the data is used as information only, are hence allowed if isolation properties are guaranteed. A prototype system was designed, and parts of it were implemented, to show how sending and logging information can be performed using the company-developed communication platform Data Engine.
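
As an illustration of the "machine sends data, data used as information only" scenarios the thesis finds permissible, the following minimal Java sketch shows a machine-side publisher that pushes read-only telemetry to a mobile client and never reads from the channel, so no command path exists back into the safety function. The class name, host, port and record fields are assumptions made for the example; this is not the Data Engine platform mentioned in the abstract.

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // One-way telemetry: the machine only sends; it never receives commands
    // from the COTS device, preserving the isolation argument above.
    public class TelemetryPublisher {
        private final DatagramSocket socket;
        private final InetAddress client;
        private final int port;

        public TelemetryPublisher(String clientHost, int port) throws IOException {
            this.socket = new DatagramSocket();
            this.client = InetAddress.getByName(clientHost);
            this.port = port;
        }

        // Serialise a status sample and push it; data flows in one direction only.
        public void publish(double temperature, long cycleCount) throws IOException {
            String record = String.format("temp=%.1f;cycles=%d;ts=%d",
                    temperature, cycleCount, System.currentTimeMillis());
            byte[] payload = record.getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(payload, payload.length, client, port));
        }

        public static void main(String[] args) throws Exception {
            TelemetryPublisher pub = new TelemetryPublisher("192.168.0.42", 9000);
            pub.publish(71.5, 120_000);  // example sample from a hypothetical machine
        }
    }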
4

Dependable systems integration using measurement theory and decision analysis

Prasad, Divya Kumari January 1999 (has links)
No description available.
5

The formal specification of a safety kernel

Scales, William James January 1996 (has links)
No description available.
6

Implementation of an asynchronous real-time programming language

Arenas-Sarmiento, Alvaro Enrique January 2000 (has links)
No description available.
7

Optimizing scoped and immortal memory management in real-time Java

Hamza, Hamza January 2013 (has links)
The Real-Time Specification for Java (RTSJ) introduces a new memory management model which avoids interfering with the garbage collection process and achieves more deterministic behaviour. In addition to the heap memory, two types of memory areas are provided - immortal and scoped. The research presented in this thesis aims to optimize the use of the scoped and immortal memory model in RTSJ applications. Firstly, it provides an empirical study of the impact of scoped memory on execution time and memory consumption with different data objects allocated in scoped memory areas, highlighting characteristics of the scoped memory model for one of the RTSJ implementations (SUN RTS 2.2). Secondly, a new RTSJ case study which integrates scoped and immortal memory techniques to apply different memory models is presented. A simulation tool for real-time Java applications is developed; it is the first in the literature to show the scoped and immortal memory consumption of an RTSJ application over a period of time. The simulation tool helps developers to choose the most appropriate scoped memory model by monitoring memory consumption and application execution time, and demonstrates that a developer can compare scoped memory design models and choose the one with the smallest memory footprint. Results showed that the memory design model with a higher number of scopes achieved the smallest memory footprint. However, the number of scopes per se does not always indicate a satisfactory memory footprint; choosing the right objects/threads to be allocated into scopes is an important factor to be considered. Recommendations and guidelines for developing RTSJ applications which use a scoped memory model are also provided. Finally, monitoring scoped and immortal memory at runtime may help in catching possible memory leaks. The case study with the simulation tool showed a space overhead incurred by immortal memory. In this research, dynamic code slicing is also employed as a debugging technique to explore constant increases in immortal memory. Two programming design patterns are presented for decreasing immortal memory overheads generated by specific data structures. Experimental results showed a significant decrease in immortal memory consumption at runtime.
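
To make the scoped/immortal distinction concrete, here is a minimal Java sketch of the RTSJ memory areas the abstract refers to. It assumes an RTSJ implementation (such as the SUN RTS mentioned above) is on the classpath; the class name, scope size and loop are illustrative, not values taken from the thesis.

    import javax.realtime.ImmortalMemory;
    import javax.realtime.LTMemory;
    import javax.realtime.RealtimeThread;

    // Temporary per-release objects live in a scoped memory area and are
    // reclaimed in one step on scope exit; long-lived objects go to
    // immortal memory and are never reclaimed.
    public class ScopedMemoryDemo {
        public static void main(String[] args) {
            RealtimeThread rt = new RealtimeThread() {
                @Override
                public void run() {
                    // 64 KiB scope; size is illustrative, not a tuned value.
                    LTMemory scope = new LTMemory(64 * 1024, 64 * 1024);
                    for (int release = 0; release < 10; release++) {
                        scope.enter(() -> {
                            // Objects created here bypass the heap and are
                            // freed collectively when enter() returns.
                            double[] samples = new double[256];
                            samples[0] = System.nanoTime();
                        });
                    }
                    // A long-lived object is placed in immortal memory instead.
                    ImmortalMemory.instance().enter(() ->
                            System.out.println("allocated once, never reclaimed"));
                }
            };
            rt.start();
        }
    }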
8

Towards Predictable Real-Time Performance on Multi-Core Platforms

Kim, Hyoseung 01 June 2016 (has links)
Cyber-physical systems (CPS) integrate sensing, computing, communication and actuation capabilities to monitor and control operations in the physical environment. A key requirement of such systems is the need to provide predictable real-time performance: the timing correctness of the system should be analyzable at design time with a quantitative metric and guaranteed at runtime with high assurance. This requirement of predictability is particularly important for safety-critical domains such as automobiles, aerospace, defense, manufacturing and medical devices. The work in this dissertation focuses on the challenges arising from the use of modern multi-core platforms in CPS. Even as of today, multi-core platforms are rarely used in safety-critical applications, primarily due to the temporal interference caused by contention on various resources shared among processor cores, such as caches, memory buses, and I/O devices. Such interference is hard to predict and can significantly increase task execution time, e.g., by up to 12x on commodity quad-core platforms. To address the problem of ensuring timing predictability on multi-core platforms, we develop novel analytical and systems techniques in this dissertation. Our proposed techniques theoretically bound the temporal interference that tasks may suffer from when accessing shared resources. Our techniques also involve software primitives and algorithms for real-time operating systems and hypervisors, which significantly reduce the degree of temporal interference. Specifically, we tackle the issues of cache and memory contention, locking and synchronization, interrupt handling, and access control for computational accelerators such as general-purpose graphics processing units (GPGPUs), all of which are crucial to achieving predictable real-time performance on a modern multi-core platform. Our solutions are readily applicable to commodity multi-core platforms, and can be used not only for developing new systems but also for migrating existing applications from single-core to multi-core platforms.
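
To give a flavour of what "theoretically bounding temporal interference" looks like, a simplified memory-interference-aware response-time recurrence in the spirit of this line of work (an illustrative sketch, not the dissertation's exact formulation) is:

    \[
      R_i^{(k+1)} = C_i + H_i L + \sum_{\tau_j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil \left( C_j + H_j L \right)
    \]

where C_i is the worst-case execution time of task \tau_i in isolation, H_i bounds the number of shared-memory requests it issues, L bounds the extra delay a single request can suffer due to contention from other cores, hp(i) is the set of higher-priority tasks on the same core, and T_j is the period of task \tau_j. The iteration converges when R_i^{(k+1)} = R_i^{(k)}, which must not exceed the task's deadline.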
9

Analysing and supporting the reliability decision-making process in computing systems with a reliability evaluation framework

Kooli, Maha 01 December 2016 (has links)
Reliability has become an important design aspect for computing systems due to aggressive technology miniaturization and uninterrupted operation, which introduce a large set of failure sources for hardware components. The hardware can be affected by faults caused by physical manufacturing defects or environmental perturbations such as electromagnetic interference, external radiation, or high-energy neutrons from cosmic rays and alpha particles. For embedded systems and systems used in safety-critical fields such as avionics, aerospace and transportation, the presence of these faults can damage components and lead to catastrophic failures. Investigating new methods to evaluate system reliability helps designers understand the effects of faults on the system, and thus to develop reliable and dependable products. Depending on the design phase of the system, the development of reliability evaluation methods can save design costs and effort, and will positively impact the product's time-to-market. The main objective of this thesis is to develop new techniques to evaluate the overall reliability of a complex computing system running software. The evaluation targets faults leading to soft errors. These faults can propagate through the different structures composing the full system, and can be masked during this propagation either at the technological or at the architectural level. When a fault reaches the software layer of the system, it can corrupt data, instructions or the control flow. These errors may affect correct software execution by producing erroneous results or by preventing the application from executing, leading to abnormal termination or an application hang. In this thesis, the reliability of the different software components is analyzed at different levels of the system (depending on the design phase), emphasizing the role that the interaction between hardware and software plays in the overall system. Then, the reliability of the system is evaluated via a flexible, fast, and accurate evaluation framework. Finally, the reliability decision-making process in computing systems is comprehensively supported with the developed framework (methodology and tools).
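
The kind of soft-error evaluation described above is often approximated in software by fault-injection campaigns. The toy Java sketch below (hypothetical names and a stand-in computation, not the thesis's framework) flips one random bit per run in a program input and classifies the outcome as masked or as silent data corruption; a full framework would also track crashes, hangs and control-flow errors, and would inject into registers, memory and instructions.

    import java.util.Random;

    // Toy fault-injection campaign: flip one random input bit per run and
    // compare against the fault-free "golden" result.
    public class BitFlipCampaign {

        static long compute(long x) {
            return (x * 31 + 7) ^ (x >>> 3);   // placeholder workload
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            long input = 123_456_789L;
            long golden = compute(input);      // reference (fault-free) run
            int masked = 0, corrupted = 0;

            for (int run = 0; run < 10_000; run++) {
                long faulty = input ^ (1L << rng.nextInt(64));  // single bit flip
                if (compute(faulty) == golden) {
                    masked++;                  // fault had no visible effect
                } else {
                    corrupted++;               // silent data corruption
                }
            }
            System.out.printf("masked=%d, silent data corruptions=%d out of 10000 runs%n",
                    masked, corrupted);
        }
    }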
10

Components, Safety Interfaces, and Compositional Analysis

Elmqvist, Jonas January 2010 (has links)
Component-based software development has emerged as a promising approach for developing complex software systems by composing smaller, independently developed components into larger component assemblies. This approach offers means to increase software reuse and to achieve higher flexibility and shorter time-to-market through the use of off-the-shelf components (COTS). However, the use of COTS in safety-critical systems is largely unexplored. This thesis addresses the problems appearing in component-based development of safety-critical systems. We aim at efficient reasoning about safety at system level while adding or replacing components. For safety-related reasoning it does not suffice to consider correctly functioning components in their intended environments; the behaviour of components in the presence of single or multiple faults must also be considered. Our contribution is a formal component model that includes the notion of a safety interface, which describes how the component behaves with respect to violation of a given system-level property in the presence of faults in its environment. This approach also provides a link between formal analysis of components in safety-critical systems and the traditional engineering processes supported by model-based development. We also present an algorithm for deriving safety interfaces given a particular safety property and fault modes for the component. The safety interface is then used in a method proposed for compositional reasoning about component assemblies. Instead of reasoning about the effect of faults on the composed system, we suggest analysis of fault tolerance through pairwise analysis based on safety interfaces. The framework is demonstrated as a proof of concept in two case studies: a hydraulic system from the aerospace industry and an adaptive cruise controller from the automotive industry. The case studies have shown that a more efficient system-level safety analysis can be performed using the safety interfaces.
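
One way to picture a safety interface is as data attached to a component: the system-level property it helps guarantee and the environment fault modes under which the guarantee still holds. The Java sketch below is only an informal rendering of that idea, with invented names, fault-mode strings and check logic; it is not the thesis's formal component model.

    import java.util.Set;

    // Informal rendering of a "safety interface": the property a component
    // guarantees and the environment faults under which the guarantee holds.
    record SafetyInterface(String property, Set<String> toleratedFaults) { }

    public class PairwiseCheck {

        // Pairwise check (illustrative): does component a still guarantee the
        // shared property when 'fault' occurs in its environment (e.g. in b)?
        static boolean tolerates(SafetyInterface a, SafetyInterface b, String fault) {
            return a.property().equals(b.property()) && a.toleratedFaults().contains(fault);
        }

        public static void main(String[] args) {
            SafetyInterface cruiseController = new SafetyInterface(
                    "no-unintended-acceleration", Set.of("radar-dropout", "sensor-stuck-at"));
            SafetyInterface brakeManager = new SafetyInterface(
                    "no-unintended-acceleration", Set.of("sensor-stuck-at"));

            System.out.println(tolerates(cruiseController, brakeManager, "radar-dropout")); // true
            System.out.println(tolerates(brakeManager, cruiseController, "radar-dropout")); // false
        }
    }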
