41 |
Integration of enhanced slot-shifting in uC/OS-II. Ramachandran, Gowri Sankar. January 2011.
No description available.
|
42 |
Probabilistic Analysis of Low-Criticality Execution. Küttler, Martin; Roitzsch, Michael; Hamann, Claude-Joachim; Völp, Marcus. 16 March 2018.
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds; otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave when high-criticality jobs overrun their optimistic low-criticality reservations.
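As an illustration of the kind of per-job success probability the abstract refers to, the following minimal sketch estimates the probability that a low-criticality job meets a shared deadline when it only runs in the slack left by a high-criticality job. This is a hypothetical Monte Carlo approximation with made-up numbers, not the paper's symbolic-scheduling algorithm:

```python
import random

# Hypothetical discrete execution-time distribution for a high-criticality
# job: (execution time, probability). All numbers are illustrative.
HI_DIST = [(2, 0.7), (3, 0.2), (5, 0.1)]
LO_EXEC = 4      # assumed execution time of the low-criticality job
DEADLINE = 8     # assumed common deadline of the frame

def sample(dist):
    """Draw one execution time from a discrete distribution."""
    r, acc = random.random(), 0.0
    for value, prob in dist:
        acc += prob
        if r < acc:
            return value
    return dist[-1][0]

def success_probability(trials=100_000):
    """Estimate P(the low-criticality job finishes by DEADLINE) when it
    only runs after the high-criticality job completes."""
    ok = 0
    for _ in range(trials):
        ok += sample(HI_DIST) + LO_EXEC <= DEADLINE
    return ok / trials

print(f"estimated per-job success probability: {success_probability():.3f}")
```

With these illustrative numbers the low-criticality job succeeds whenever the high-criticality job stays within 4 time units, i.e., with probability about 0.9.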
|
43 |
Real-Time Software Transactional Memory: Contention Managers, Time Bounds, and Implementations. El-Shambakey, Mohammed Talat. 02 October 2013.
Lock-based concurrency control suffers from programmability, scalability, and composability challenges. These challenges are exacerbated in emerging multicore architectures, on which improved software performance must be achieved by exposing greater concurrency. Transactional memory (TM) is an emerging alternative synchronization model for shared memory objects that promises to alleviate these difficulties.
In this dissertation, we consider software transactional memory (STM) for concurrency control in multicore real-time software, and present a suite of real-time STM contention managers for resolving transactional conflicts. The contention managers are called ECM, RCM, LCM, PNF, and FBLT. RCM and ECM resolve conflicts using the fixed and dynamic priorities of real-time tasks, respectively, and are naturally intended to be used with fixed-priority (e.g., G-RMA) and dynamic-priority (e.g., G-EDF) multicore real-time schedulers, respectively. LCM resolves conflicts based on task priorities as well as atomic section lengths, and can be used with G-EDF or G-RMA schedulers. Transactions under ECM, RCM, and LCM may retry due to conflicts with higher-priority tasks even when the conflicting transactions share no objects directly, a phenomenon called transitive retry. PNF avoids transitive retry and optimizes processor usage by lowering the priority of retrying transactions, thereby enabling other non-conflicting transactions to proceed. PNF, however, requires a priori knowledge of all requested objects for each atomic section, which is inconsistent with the semantics of dynamic STM. Moreover, its centralized design increases overhead. FBLT avoids transitive retry, does not require a priori knowledge of requested objects, and has a decentralized design.
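To make the conflict-resolution idea concrete, here is a minimal sketch of a generic priority-based contention manager in the spirit of ECM/RCM. The class and function names are assumptions for illustration, not the dissertation's implementation:

```python
class Transaction:
    """Toy transaction record used by a priority-based contention manager."""
    def __init__(self, task_priority):
        self.priority = task_priority  # higher number = more urgent task
        self.must_retry = False

def resolve_conflict(attacker, holder):
    """Let the transaction of the higher-priority task proceed and mark the
    other for retry, so retry cost is caused only by higher-priority tasks."""
    if attacker.priority > holder.priority:
        holder.must_retry = True
        return attacker
    attacker.must_retry = True
    return holder

# Example: a conflict between transactions of priority-5 and priority-3 tasks.
t_hi, t_lo = Transaction(5), Transaction(3)
winner = resolve_conflict(t_hi, t_lo)
assert winner is t_hi and t_lo.must_retry
```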
We establish upper bounds on transactional retry costs and task response times under the contention managers through schedulability analysis. Since ECM and RCM preserve the semantics of the underlying real-time scheduler, their maximum transactional retry cost is double the maximum atomic section length. This is improved in the design of LCM, which achieves shorter retry costs and tighter upper bounds. As PNF avoids transitive retry and improves processor usage, it yields shorter retry costs and tighter upper bounds than ECM, RCM, and LCM. FBLT's upper bounds are similarly tight because it combines the advantages of PNF and LCM.
We formally compare the proposed contention managers with each other, with lock-free synchronization, and with multiprocessor real-time locking protocols. Our analysis reveals that, in most cases, ECM, RCM, and LCM achieve higher schedulability than lock-free synchronization only when the atomic section length does not exceed half of lock-free synchronization's retry loop length. With equal periods and greater access times for shared objects, the atomic section length under ECM, RCM, and LCM can be much larger than the retry loop length while still achieving better schedulability. With proper values for LCM's design parameters, the atomic section length can be larger than the retry loop length for better schedulability. Under PNF, the atomic section length can exceed lock-free's retry loop length and still achieve better schedulability in certain cases. FBLT achieves equal or better schedulability than lock-free synchronization with appropriate values for its design parameters. The schedulability advantage of the contention managers over multiprocessor real-time locking protocols such as Global OMLP and RNLP depends upon the value of $s_{max}/L_{max}$, the ratio of the maximum transaction length to the maximum critical section length. FBLT's schedulability is equal to or better than that of Global OMLP and RNLP if $s_{max}/L_{max} \le 2$.
Checkpointing enables partial roll-back of transactions by recording transaction execution states (i.e., checkpoints) during execution, allowing roll-back to a previous checkpoint instead of transaction start, improving task response time. We extend FBLT with checkpointing and develop CP-FBLT, and identify the conditions under which CP-FBLT achieves equal or better schedulability than FBLT.
We implement the contention managers in the Rochester STM framework and conduct experimental studies using a multicore real-time Linux kernel. Our studies reveal that among the contention managers, CP-FBLT has the best average-case performance. CP-FBLT's higher performance is due to the fact that PNF's and LCM's advantages are combined into the design of FBLT, which is the base of CP-FBLT; moreover, checkpointing improves task response time. The contention managers were also found to have equal or better average-case performance than lock-free synchronization: on average, 34.6%, 28.5%, and 32.4% more jobs meet their deadlines under CP-FBLT, FBLT, and PNF, respectively, than under lock-free synchronization. The superiority of the contention managers is directly due to their better conflict resolution policies.
Locking protocols such as OMLP and RNLP were found to perform better: more jobs meet their deadlines under OMLP and RNLP than any contention manager by 12.4% and 13.7% (on average), respectively. However, the proposed contention managers have numerous qualitative advantages over locking protocols. Locks do not compose, whereas STM transactions do. To allow multiple objects to be accessed in a critical section, OMLP assigns objects to non-conflicting groups, where each group is protected by a distinct lock. RNLP assumes that objects are accessed in a specific order to prevent deadlocks. In contrast, STM allows multiple objects to be accessed in a transaction in any order, while guaranteeing deadlock-freedom, which significantly increases programmability. Moreover, STM offers platform independence: the proposed contention managers can be entirely implemented in the user-space as a library. In contrast, real-time locking protocols such as OMLP and RNLP must be supported by the underlying platform (i.e., operating system or virtual machine). / Ph. D.
|
44 |
Improving Soft Real-time Performance of Fog Computing. Struhar, Vaclav. January 2021.
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, which reduces the timing unpredictability of cloud computing that stems from (i) computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. Computation in fog computing thus provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded response times, i.e., real-time (RT) behavior: the correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects, targeting soft RT applications in fog computing, in which the usefulness of the produced computational results degrades as real-time requirements are violated. With regard to computation, we provide a systematic literature survey of lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
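As a hedged illustration of what a dynamic bandwidth distribution policy can look like, the sketch below implements weighted proportional sharing with per-flow minima. The policy, flow names, and numbers are assumptions for illustration, not the thesis's mechanism:

```python
def distribute_bandwidth(link_capacity_mbps, flows):
    """Split link capacity among virtualized flows in proportion to weight,
    never dropping below a flow's declared minimum (soft real-time floor).

    flows: dict name -> (weight, min_mbps). Illustrative policy only.
    """
    # First reserve every flow's minimum.
    alloc = {name: min_mbps for name, (_, min_mbps) in flows.items()}
    remaining = link_capacity_mbps - sum(alloc.values())
    if remaining < 0:
        raise ValueError("link over-subscribed: minima exceed capacity")
    total_weight = sum(w for w, _ in flows.values())
    # Then share the remainder by weight.
    for name, (weight, _) in flows.items():
        alloc[name] += remaining * weight / total_weight
    return alloc

print(distribute_bandwidth(100, {
    "control-loop": (3, 20),   # latency-sensitive RT container
    "telemetry":    (1, 5),
    "bulk-upload":  (1, 0),
}))
```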
|
45 |
Time-Predictable Fast Memories: Caches vs. Scratchpad Memories. Liu, Yu. 01 August 2011.
In modern processor architectures, caches are widely used to bridge the gap between processor speed and memory access time. However, caches are time-unpredictable, especially the shared L2 cache used by different cores on multicore processors, which can significantly increase the complexity of worst-case execution time (WCET) analysis, a crucial concern for real-time systems. This dissertation designs several time-predictable scratchpad memory (SPM) based architectures for both VLIW (Very Long Instruction Word) based single-core and multicore processors.

First, this dissertation proposes a time-predictable two-level SPM-based architecture for VLIW-based single-core processors, and an ILP (Integer Linear Programming) based static memory-object allocation algorithm is extended to support multi-level SPMs without harming their time predictability. Second, several SPM-based architectures for VLIW-based multicore processors are designed. To support these architectures, three L2 SPM strategies are proposed, all of which retain time predictability: partitioning with dynamic memory-object allocation, partitioning with static memory-object allocation, and priority-based sharing with static memory-object allocation. Both the WCET and the worst-case energy consumption (WCEC) of the SPM-based single-core and multicore architectures are fully evaluated in this dissertation. Last, to exploit the load/store latencies that are statically known in this architecture, we study an SPM-aware scheduling method to improve performance.

Our experimental results indicate the strengths and weaknesses of each proposed architecture and allocation method, offering interesting memory design options for real-time computing. The strength of the two-level architecture is its superior performance compared to the one-level architecture, while the strength of the one-level architecture is its simple implementation. The two-level architecture with a separate L1 SPM for each core is also a better fit for data-intensive real-time applications: it not only retains good performance but also achieves higher bandwidth by accessing the instruction and data SPMs at the same time. Compared to the static strategies, the partition strategy with dynamic allocation offers better performance on each core because SPM space is reused at run time, but it has much higher complexity. In addition, the experimental results show that the timing and energy performance of the proposed SPM-based architectures are superior to similar cache-based and hybrid architectures, while ensuring the time predictability that real-time systems require.
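ILP-based static SPM allocation is typically formulated as a 0/1 knapsack over memory objects. The following is a generic sketch of that standard form, not the dissertation's exact model:

```latex
\begin{align*}
\text{maximize}\quad   & \sum_{i=1}^{n} g_i \, x_i
  && g_i:\ \text{WCET gain of placing object } i \text{ in the SPM} \\
\text{subject to}\quad & \sum_{i=1}^{n} s_i \, x_i \le C
  && s_i:\ \text{size of object } i,\quad C:\ \text{SPM capacity} \\
                       & x_i \in \{0, 1\}
  && x_i = 1 \iff \text{object } i \text{ is allocated to the SPM}
\end{align*}
```

Because the gains $g_i$ are fixed at analysis time, the resulting allocation is static and the WCET bound it yields is unaffected by run-time access patterns, which is the source of the time predictability claimed above.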
|
46 |
The design and construction of the Reactive Systems Laboratory. Acciai, Guy Francis. 22 October 2009.
Distributed real-time systems are notoriously difficult to design and construct correctly [Parnas 1985]. The fundamental principles of concurrency, deadline-driven scheduling, and reaction to external stimuli which underlie such systems are inherently complex. This difficulty is further exacerbated when applications based on these principles are distributed over a network. Academic instruction in this domain is challenging: while theoretical issues can be taught with traditional "pencil and paper" techniques, real-time programming skills require experience that is best provided by a laboratory. To this end, the Computer Science Department at Virginia Tech created and built a laboratory, known as the Reactive Systems Laboratory (RSL), specifically designed to provide these practical experiences. This paper documents the decisions, designs, and equipment used to build this laboratory. Additionally, the low-level software systems required to operate the RSL are discussed. Finally, future directions for the laboratory are considered and some conclusions are drawn based on usage to date. / Master of Science
|
47 |
A quantitative comparison & evaluation of prominent marshalling/un-marshalling formats in distributed real-time & embedded systems. Satyanarayana, Geetha R. 11 July 2016.
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis demonstrates a novel idea of how components in a distributed real-time & embedded (DRE) system can choose among different data interchange formats at run time. It also quantitatively evaluates three binary data interchange protocols used in DRE systems: the Common Data Representation (CDR), which collects data "as-is" into a buffer; Binary JSON (BSON), which enables "on the fly" discovery of elements in a message; and FIX Adapted for Streaming (FAST), a binary compression algorithm popular for data exchange in the financial stock-market domain. We compare these three data exchange formats to determine whether it is possible to minimize data usage without compromising CPU processing times, data throughput, and data latency. The lack of such a study has made protocols such as CDR popular based on the assumption that collecting data "as-is" consumes less processing time and transmits with high throughput.
We perform the study in the context of the Open Source Architecture for Software Instrumentation of Systems (OASIS). To do so, we modified its existing data interchange framework to flexibly and seamlessly integrate any of the formats, letting components choose a format at run time. The experiments from our study show that as data size increases, the throughput of CDR, BSON, and FAST decreases by 96.16%, 97.23%, and 84.41%, respectively. Packaging and un-packaging times increase by 1985.12% and 1642.28% for FAST, compared to 3158.96% and 2312.50% for CDR, and 5077.98% and 3686.48% for BSON.
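A minimal sketch of this kind of comparison is shown below, assuming Python's standard struct module for a CDR-like "as-is" fixed layout and the bson package shipped with PyMongo; FAST is omitted because it has no widely standard Python implementation. The record and loop count are made up for illustration:

```python
import struct
import time
import bson  # from the PyMongo distribution (pip install pymongo)

record = {"id": 42, "temp": 98.6, "ok": True}

def pack_cdr_like(r):
    # CDR-style: field layout agreed on out-of-band, data packed "as-is".
    return struct.pack("<id?", r["id"], r["temp"], r["ok"])

def pack_bson(r):
    # BSON: self-describing, so field names travel with the data.
    return bson.encode(r)

for name, fn in [("CDR-like", pack_cdr_like), ("BSON", pack_bson)]:
    start = time.perf_counter()
    for _ in range(100_000):
        buf = fn(record)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(buf)} bytes/msg, {elapsed:.3f} s per 100k packings")
```

The size gap alone illustrates the trade-off the thesis measures: the self-describing BSON message carries its field names, while the CDR-like buffer is smaller but only decodable by a peer that already knows the layout.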
|
48 |
Proving Implementability of Timing Properties with Tolerances. Hu, Xiayong. 08 1900.
Many safety-critical software applications are hard real-time systems. They have stringent timing requirements that have to be met. We present descriptions of timing behaviors that include precise definitions as well as analysis of how functional timing requirements (FTRs) interact with performance timing requirements (PTRs), and how these concepts can be used by software designers. The definitions explicitly show how to specify timing requirements with tolerances on time durations.

This thesis shows the importance of specifying both FTRs and PTRs, by revealing that their interaction directly determines the final implementability of real-time systems. By studying this interaction under three environmental assumptions, we find that the implementability results of the timing properties are different in each environment, but closely related. The results allow us to predict a system's implementability without developing or verifying the actual implementation. This also shows that we can sometimes significantly reduce the sampling frequency on the target platform and still implement the timing requirement correctly.

We present a component-based approach to formalizing common timing requirements and provide a pre-verified implementation of one of these requirements. The verification is performed using the theorem-proving tool PVS. This allows domain experts to specify the tolerance in each individual timing requirement precisely. The pre-verified implementation of a timing requirement is demonstrated by applying the method in two examples. These examples show that both the design and verification effort are reduced significantly using a pre-verified template.

A primary focus of this thesis is on how to include tolerances on timing durations in the specification, implementation, and verification of timing behaviors in hard real-time applications. / Thesis / Doctor of Philosophy (PhD)
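As a hedged illustration of what a timing requirement with tolerances on durations can look like formally (a generic formulation, not the thesis's exact definitions), consider a sustained-condition response requirement:

```latex
% If condition c holds continuously for duration d, the response r must
% occur within the tolerance window [d - \delta_L,\, d + \delta_R] of t + d:
\forall t:\;
  \bigl(\forall t' \in [t,\, t+d] : c(t')\bigr)
  \;\Rightarrow\;
  \exists t_r \in [\,t + d - \delta_L,\; t + d + \delta_R\,] : r(t_r)
```

Widening $\delta_L$ and $\delta_R$ is what allows an implementation that samples the condition at a finite rate to satisfy the requirement, which is why tolerances bear directly on implementability and on the minimum sampling frequency.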
|
49 |
Characterization and Development of Distributed, Adaptive Real-Time Systems. Marinucci, Toni. 19 April 2005.
No description available.
|
50 |
Resource Management for Dynamic, Distributed Real-time Systems. Gu, Dazhang. January 2005.
No description available.
|