241. Design of a real time, interactive, parallel simulation computer / Kobayashi, Yukoh. January 1981.
No description available.
242. An experimental study of alternative schemes for asynchronous message passing in a real-time multicomputer control system / Lee, Shih-Ping. January 1984.
No description available.
243. Analysis and development of a real-time control methodology in resistance spot welding / Dai, Wen Long. January 1991.
No description available.
244. Multivariate Image Analysis for Real-Time Process Monitoring / Bharati, Manish. 09 1900.
In today’s technically advanced society, the collection and study of digital images have become an important aspect of various off-line applications that range from medical diagnosis to exploring the Martian surface for traces of water. Various industries have recently started moving towards vision-based systems to monitor several of their manufacturing processes. Except for some simple on-line applications, these systems are primarily used to analyze digital images off-line. This thesis is concerned with developing a more powerful on-line digital image analysis technique that links the field of traditional digital image processing with a recently devised statistically based image analysis method called multivariate image analysis (MIA). The first part of the thesis introduces traditional digital image processing through a brief literature review of three of its five main classes (image enhancement, restoration, analysis, compression, & synthesis), which contain most of the commonly used operations in this area. This introduction is intended as a starting point for readers who have little background in the field, and as a means of providing sufficient detail on these techniques so that they can be used in conjunction with the more advanced MIA on-line monitoring operations. MIA of multispectral digital images using latent variable statistical methods (Multi-Way PCA / PLS) is the main topic of the second part of the thesis. After reviewing the basic theory of feature extraction using MIA for off-line analyses, a new technique is introduced that extends these ideas to image analysis in on-line applications. Instead of directly using the updated images themselves to monitor a time-varying process, this new technique uses the latent variable space of the image to monitor the increase or decline in the number of pixels belonging to various features of interest. The ability to switch between the images and their latent variable space then allows the user to determine the exact spatial locations of any features of interest. This new method is shown to be ideal for monitoring interesting features of time-varying processes equipped with multispectral sensors. It forms a basis for future on-line industrial process monitoring schemes in those industries that are moving towards automatic vision systems using multispectral digital imagery. / Thesis / Master of Engineering (ME)
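To make the latent-variable monitoring idea concrete, the sketch below uses plain NumPy PCA in place of the thesis's Multi-Way PCA/PLS machinery: each multispectral frame is unfolded into a pixel-by-channel matrix, projected into score space, and the number of pixels falling inside a feature region is tracked over time. The image size, the score-space region, and the synthetic frames are all illustrative assumptions, not values from the thesis.

```python
# A minimal sketch of MIA-style on-line monitoring, assuming plain PCA.
import numpy as np

def fit_loadings(pixels, n_components=2):
    """PCA loadings from a reference image's (n_pixels x n_channels) matrix."""
    _, _, vt = np.linalg.svd(pixels - pixels.mean(axis=0), full_matrices=False)
    return vt[:n_components]

def count_feature_pixels(image, loadings, region_min, region_max):
    """Count pixels whose score-space projection falls in a feature region."""
    n_rows, n_cols, n_channels = image.shape
    pixels = image.reshape(-1, n_channels)
    scores = (pixels - pixels.mean(axis=0)) @ loadings.T
    in_region = np.all((scores >= region_min) & (scores <= region_max), axis=1)
    # Reshaping `in_region` to (n_rows, n_cols) would recover the spatial
    # locations of the flagged pixels, as the abstract describes.
    return int(in_region.sum())

rng = np.random.default_rng(0)
frames = [rng.random((64, 64, 5)) for _ in range(10)]    # fake 5-band frames
loadings = fit_loadings(frames[0].reshape(-1, 5))        # train on a reference
lo, hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])    # assumed feature region
trend = [count_feature_pixels(f, loadings, lo, hi) for f in frames]
print(trend)   # charting this signal on-line flags feature growth or decline
```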
245. Using real time information for effective dynamic scheduling / Cowling, Peter I.; Johansson, M. January 2002.
In many production processes, real-time information may be obtained from process control computers and other monitoring systems, but most existing scheduling models are unable to use this information to effectively influence scheduling decisions in real time. In this paper we develop a general framework for using real-time information to improve scheduling decisions, which allows us to trade off the quality of the revised schedule against the production disturbance that results from changing the planned schedule. We illustrate how our framework can be used to select a strategy for using real-time information in a single-machine scheduling model, and discuss how it may be used to incorporate real-time information into scheduling the complex production processes of steel continuous caster planning.
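A toy sketch of the trade-off at the heart of this framework, for a single machine: each candidate revised schedule is scored by its cost plus a weighted disturbance penalty. The cost measure (total completion time), the disturbance measure (positions changed), and the weight are illustrative assumptions, not the paper's formulation.

```python
# Trading off revised-schedule quality against production disturbance.
def total_completion_time(schedule, proc_times):
    t, total = 0, 0
    for job in schedule:
        t += proc_times[job]
        total += t
    return total

def disturbance(old, new):
    # Count jobs whose sequence position changed versus the planned schedule.
    return sum(1 for i, job in enumerate(new) if old[i] != job)

def choose_schedule(old, candidates, proc_times, weight=5.0):
    """Pick the candidate minimizing cost + weight * disturbance."""
    return min(candidates,
               key=lambda s: total_completion_time(s, proc_times)
                             + weight * disturbance(old, s))

# Real-time information arrives: job 2's processing time estimate has grown.
proc_times = {0: 3, 1: 7, 2: 12}
planned = [2, 0, 1]
candidates = [planned, [0, 1, 2], [0, 2, 1]]   # do nothing, or reorder
print(choose_schedule(planned, candidates, proc_times))
```

Lowering the weight models a shop where disturbance is cheap, so full reordering wins; raising it keeps the planned schedule in place.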
246. Computational Offloading for Real-Time Computer Vision in Unreliable Multi-Tenant Edge Systems / Jackson, Matthew Norman. 26 June 2023.
The demand and interest in serving Computer Vision applications at the Edge, where Edge Devices generate vast quantities of data, clashes with the reality that many Devices are largely unable to process their data in real time. While computational offloading, not to the Cloud but to nearby Edge Nodes, offers convenient acceleration for these applications, such systems are not without their constraints. As Edge networks may be unreliable or wireless, offloading quality is sensitive to communication bottlenecks. Unlike seemingly unlimited Cloud resources, an Edge Node serving multiple clients may incur delays due to resource contention. This project describes relevant Computer Vision workloads and how an effective offloading framework must adapt to constraints that impact the Quality of Service but have not been adequately addressed in previous literature. We design an offloading controller, based on closed-loop control theory, that enables Devices to maximize their throughput by offloading appropriately under variable conditions. This approach ensures a Device can utilize the maximum available offloading bandwidth. Finally, we construct a realistic testbed and conduct measurements to demonstrate the superiority of our offloading controller over previous techniques. / Master of Science / Devices like security cameras and some Internet of Things gadgets produce valuable real-time video for AI applications. A field within AI research called Computer Vision aims to use this visual data to compute a variety of useful workloads in a way that mimics the human visual system. However, many workloads, such as classifying objects displayed in a video, have large computational demands, especially when we want to keep up with the frame rate of a real-time video. Unfortunately, these devices, called Edge Devices because they are located far from Cloud datacenters at the edge of the network, are notoriously weak for Computer Vision algorithms and, if running on a battery, will drain it quickly. To keep up, we can offload the computation of these algorithms to nearby servers, but we need to keep in mind that the bandwidth of the network might be variable and that too many clients connected to a single server will overload it. A slow network or an overloaded server incurs delays that slow processing throughput. This project describes relevant Computer Vision workloads and argues that an offloading framework that adapts effectively to these constraints has not yet been provided by previous literature. We designed an offloading controller that measures feedback from the system and adapts how a Device offloads computation, in order to achieve the best possible throughput despite variable conditions. Finally, we constructed a realistic testbed and conducted measurements to demonstrate the superiority of our offloading controller over previous techniques.
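A minimal sketch of a closed-loop offloading controller in the spirit of the one described: a proportional-integral law adjusts the fraction of frames offloaded based on measured end-to-end latency. The PI gains, the latency setpoint, and the class interface are illustrative assumptions rather than the thesis's actual design.

```python
# Feedback-driven offloading: back off when the edge path is congested.
class OffloadController:
    def __init__(self, target_latency_ms=50.0, kp=0.002, ki=0.0005):
        self.target = target_latency_ms
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.offload_fraction = 0.5   # fraction of frames sent to the edge node

    def update(self, measured_latency_ms):
        """One feedback step per measurement interval."""
        error = self.target - measured_latency_ms   # positive => headroom
        self.integral += error
        self.offload_fraction += self.kp * error + self.ki * self.integral
        self.offload_fraction = min(1.0, max(0.0, self.offload_fraction))
        return self.offload_fraction

ctrl = OffloadController()
for latency in [20, 25, 90, 120, 60, 40]:   # per-interval measurements (ms)
    print(round(ctrl.update(latency), 3))   # fraction rises, dips, recovers
```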
247. Recovering from Distributable Thread Failures with Assured Timeliness in Real-Time Distributed Systems / Curley, Edward. 13 March 2007.
This thesis considers the problem of recovering from failures of distributable threads with assured timeliness. When a node hosting a portion of a distributable thread fails, it causes orphans, i.e., thread segments that are disconnected from the thread's root. A termination model is considered for recovering from such failures: the orphans must be detected and cleaned up, and a failure-exception notification must be delivered to the farthest contiguous surviving thread segment so that thread execution can resume. Two real-time scheduling algorithms (AUA and HUA) and three distributable thread integrity protocols (TPR, D-TPR and W-TPR) are presented. We show that AUA combined with any of the presented protocols bounds the orphan cleanup and recovery time, thereby bounding thread starvation durations and maximizing the total thread accrued timeliness utility. The algorithms and protocols are implemented in a real-time middleware that supports distributable threads. Experimental studies with the implementation validate the algorithms' and protocols' time-bounded recovery property and confirm their effectiveness. / Master of Science
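A schematic sketch of the termination model's recovery step, under stated assumptions: given a distributable thread's ordered segment chain and a set of failed nodes, it identifies the orphaned segments to clean up and the farthest contiguous surviving segment that should receive the failure-exception notification. The data structures here are hypothetical, not the middleware's API.

```python
# Orphan identification in a termination model for distributable threads.
def recover(segments, failed_nodes):
    """segments: list of (segment_id, node) ordered from the thread root."""
    surviving_head = []
    for seg, node in segments:
        if node in failed_nodes:
            break                      # the first failure cuts the chain here
        surviving_head.append((seg, node))
    # Segments past the break point on live nodes are orphans to clean up.
    orphans = [s for s in segments[len(surviving_head) + 1:]
               if s[1] not in failed_nodes]
    resume_at = surviving_head[-1] if surviving_head else None
    return orphans, resume_at

thread = [("s0", "nodeA"), ("s1", "nodeB"), ("s2", "nodeC"), ("s3", "nodeD")]
orphans, resume_at = recover(thread, failed_nodes={"nodeB"})
print("clean up:", orphans)     # s2, s3 are orphaned and must be aborted
print("notify:", resume_at)     # failure exception delivered to s0 on nodeA
```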
248. An Experimental Evaluation of the Scalability of Real-Time Scheduling Algorithms on Large-Scale Multicore Platforms / Dellinger, Matthew Aalseth. 21 June 2011.
This thesis studies the problem of experimentally evaluating the scaling behaviors of existing multicore real-time task scheduling algorithms on large-scale multicore platforms. As chip manufacturers rapidly increase the core count of processors, it becomes imperative that multicore real-time scheduling algorithms keep pace. Thus, it must be determined if existing algorithms can scale to these new high core-count platforms. Significant research exists on the theoretical performance of multicore real-time scheduling algorithms, but the vast majority of this research ignores the effects of scalability. It has been demonstrated that multicore real-time scheduling algorithms are feasible for small core-count systems (e.g. 8-core or less), but thus far the majority of the algorithmic research has never been tested on high core-count systems (e.g. 48-core or more).
We present an experimental analysis of the scalability of 16 multicore real-time scheduling algorithms. These algorithms include global, clustered, and partitioned algorithms. We cover a broad range of algorithms, including deadline-based and utility accrual scheduling algorithms. These algorithms are compared under metrics including schedulability, tardiness, deadline satisfaction ratio, and utility accrual ratio. We consider multicore platforms ranging from 8 to 48 cores. The algorithms are implemented in a real-time Linux kernel we created, called ChronOS. ChronOS is based on the Linux kernel's PREEMPT_RT patch, which provides the underlying operating system kernel with real-time capabilities such as full kernel preemptibility and priority inheritance for kernel locking primitives. ChronOS extends these capabilities with a flexible, scalable real-time scheduling framework.
Our study shows that it is possible to implement global fixed-priority, global dynamic-priority, and simple global utility accrual real-time scheduling algorithms that scale to large-scale multicore platforms. Interestingly, and in contrast to the conclusions of prior research, our results reveal that some global scheduling algorithms (e.g. G-NP-EDF) are actually scalable to large core counts (e.g. 48). In our implementation, scalability is restricted by lock contention over the global schedule and by the cost of inter-processor communication, rather than by the global task queue implementation. We also demonstrate that certain classes of utility accrual algorithms, such as the GUA class, are inherently not scalable. We show that algorithms implemented with scalability as a first-order implementation goal are able to provide real-time guarantees on our 48-core platform. / Master of Science
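To make the lock-contention finding concrete, here is a minimal user-space analogue of the shared ready queue inside a global EDF scheduler: every core must serialize on one lock to release or pick a task, and that shared state is exactly the kind whose contention the study identifies as a bottleneck. This is an illustrative sketch, not ChronOS kernel code.

```python
# A single-lock global EDF ready queue, the contended structure in question.
import heapq
import threading

class GlobalEDFQueue:
    def __init__(self):
        self._heap = []                 # entries are (absolute_deadline, task_id)
        self._lock = threading.Lock()   # every core serializes here

    def release(self, deadline, task_id):
        with self._lock:
            heapq.heappush(self._heap, (deadline, task_id))

    def pick_next(self):
        """Called by an idle core: earliest-deadline task, or None."""
        with self._lock:
            return heapq.heappop(self._heap) if self._heap else None

q = GlobalEDFQueue()
q.release(100, "t1"); q.release(40, "t2"); q.release(70, "t3")
print(q.pick_next())   # (40, 't2'): earliest absolute deadline first
```

As core counts grow, the time each core spends waiting on `_lock` grows with it, which is why partitioned and clustered designs sidestep this structure entirely.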
249. Priority Assignment Algorithms for Real-Time Systems / Deng, Xuanliang. 06 January 2025.
Priority assignment is one of the crucial problems in the scheduling of fixed-priority real-time systems, which require both timely response and correctness of output under specified timing constraints. As the models and hardware platforms of real-time systems become increasingly complex, it is necessary to consider various design metrics in addition to system schedulability when we assign priorities to tasks. There is a rich body of research on priority assignment algorithms. However, the state of the art has several limitations: 1) current research focuses on improving existing schedulability analyses but fails to integrate them with priority assignment algorithms that achieve better solution quality than heuristics; 2) the stochastic nature of task execution times in real operation and the properties of heterogeneous hardware platforms are not fully studied in the priority assignment process; and 3) design metrics other than response time are omitted from priority assignment. In this dissertation, we address these issues in the following ways. First, instead of using existing schedulability analyses directly, we leverage the proposed concept of a response time estimation range to build a novel priority assignment framework. It can quickly rule out infeasible priority assignments that share common attributes, and it can be coupled with schedulability analyses more accurate than those compatible with Audsley's Optimal Priority Assignment (OPA), under much weaker compatibility conditions. The framework judiciously combines optimization techniques and heuristics at different task utilization levels to achieve a better overall acceptance ratio. Second, we consider the attributes of tasks, the hardware platform (e.g., heterogeneous platforms), and the design metrics of real applications (e.g., reaction time and data age) in the priority assignment process. Existing studies assume worst-case or specific distributions of tasks' execution times in their response time analyses. However, analyses based on these assumptions are pessimistic and overestimate response times in some cases. We propose a more general model that does not assume any specific distribution of task execution time and analyzes response time by a convolution method. In addition, we propose a stochastic heterogeneous Directed Acyclic Graph (DAG) model to account for the randomness of execution in real operation and for heterogeneous hardware platforms. Third, we establish an optimization framework that takes other design metrics (e.g., reaction time, data age), in addition to the response time used in most studies, as objectives and constraints in the optimization process. These design metrics can be important in real applications to guarantee satisfactory performance of a task system. We demonstrate the effectiveness of the proposed frameworks with several case studies, which show that our methods can achieve better system schedulability and/or better run-time. / Doctor of Philosophy / Real-time systems are crucial in control-centric applications, which require both correctness and timely response of the output. Such systems have specified timing constraints that their outputs should meet. Failure to meet timing constraints results in performance degradation or, in extreme cases, safety issues. Priority assignment is the process of assigning priorities to tasks. In the general case, tasks with higher priorities execute ahead of tasks with lower priorities when computing resources are available. Priority assignment therefore plays an important role in scheduling, since it determines the execution order of tasks. Current priority assignment algorithms face several challenges: 1) existing analyses are directly integrated with heuristic methods for priority assignment, which cannot guarantee the optimality of the solution; 2) the attributes of the hardware platform and the stochastic nature of task execution in real applications are not fully considered; and 3) more design metrics need to be considered in addition to the timely response of the system. In this dissertation, we propose better solutions for the above issues. Experimental evaluation shows that the proposed frameworks outperform existing priority assignment algorithms in terms of run-time and the number of tasks that meet the timing constraints of the real-time system.
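For reference, Audsley's OPA, the baseline named above, assigns priorities from lowest to highest, testing each unassigned task at the current lowest level. A compact sketch follows; the plugged-in schedulability test here is classic uniprocessor response-time analysis for preemptive fixed-priority tasks, whereas the dissertation's own analyses are more elaborate.

```python
# Audsley's Optimal Priority Assignment with a response-time-analysis test.
import math

def schedulable_at_lowest(task, others):
    """Would `task` meet its deadline at the lowest priority, with every
    task in `others` treated as higher priority? Tasks are (C, T, D)."""
    c, _, d = task
    r = c
    while True:
        interference = sum(math.ceil(r / t_hp) * c_hp
                           for c_hp, t_hp, _ in others)
        r_next = c + interference
        if r_next > d:
            return False
        if r_next == r:     # fixed point reached: worst-case response time
            return True
        r = r_next

def audsley_opa(tasks):
    """Return tasks ordered lowest priority first, or None if infeasible."""
    unassigned, order = list(tasks), []
    while unassigned:
        for task in unassigned:
            rest = [t for t in unassigned if t is not task]
            if schedulable_at_lowest(task, rest):
                order.append(task)        # assign the current lowest level
                unassigned.remove(task)
                break
        else:
            return None                   # no task fits this level
    return order

tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]   # (C, T, D) triples
print(audsley_opa(tasks))                      # lowest-priority-first ordering
```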
250. A Flattened Hierarchical Scheduler for Real-Time Virtual Machines / Drescher, Michael Stuart. 04 June 2015.
The recent trend of migrating legacy computer systems to virtualized, cloud-based environments has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running in virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates its real-time needs have been proposed, but they cannot support legacy operating systems. This thesis demonstrates the shortcomings of resource partitioning using temporally-isolated servers, presents an alternative solution to the scheduling problem called the KairosVM Flattening Scheduling Algorithm, and provides an implementation of the algorithm based on Linux and KVM. The algorithm is analyzed theoretically, and an exact schedulability test for it is derived. Simulations show that the algorithm can schedule more than 90% of all randomly generated tasksets with a utilization less than 0.95. In comparison to the state-of-the-art server-based approach, the KairosVM Flattening Scheduling Algorithm is able to schedule more than 20 times as many tasksets at a utilization of 0.95. Experimental results demonstrate that the Linux-based implementation matches the deadline satisfaction ratio of a state-of-the-art server-based approach whenever the taskset is schedulable under that approach. When tasksets are unschedulable, the implementation is able to increase the deadline satisfaction ratio of Vanilla KVM by up to 400%. Furthermore, unlike paravirtualized solutions, the implementation supports legacy systems through the use of introspection. / Master of Science
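A schematic contrast between server-based temporal partitioning and flattening, on one core under EDF: flattening lets the host schedule every guest task directly, while the server-based approach must over-provision each VM's budget. Both tests below are simple utilization-based stand-ins, and the over-provisioning factor is an illustrative assumption; the thesis derives an exact schedulability test rather than these bounds.

```python
# Flattened vs. server-based schedulability, as crude utilization checks.
def utilization(tasks):
    return sum(c / t for c, t in tasks)       # (C, T) implicit-deadline tasks

def flattened_schedulable(guests):
    """Flattening: the host sees every guest task directly (EDF: U <= 1)."""
    return sum(utilization(tasks) for tasks in guests) <= 1.0

def server_schedulable(guests, overprovision=1.25):
    """Server-based: each VM is wrapped in a periodic server whose budget
    over-provisions the guest's utilization to mask scheduling gaps."""
    return sum(overprovision * utilization(tasks) for tasks in guests) <= 1.0

vm1 = [(1, 4), (2, 10)]                       # two guest VMs with (C, T) tasks
vm2 = [(3, 12), (1, 8)]
print(flattened_schedulable([vm1, vm2]))      # True:  total U = 0.825
print(server_schedulable([vm1, vm2]))         # False: servers waste capacity
```

The gap between the two answers at high utilization is the capacity loss that the abstract's 20x schedulability improvement reflects.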