251

Real-Time Hierarchical Scheduling of Virtualized Systems

Burns, Kevin Patrick 17 October 2014 (has links)
Industry's growing focus on system integration and server consolidation, even for real-time systems, has led to an interest in virtualization. However, many modern hypervisors do not inherently support the strict timing guarantees of real-time applications. Several challenges arise when virtualizing a real-time application; one key challenge is maintaining the guest's real-time guarantees across the hierarchy of schedulers present in a typical virtualized environment. Past solutions address this challenge with strict resource reservation models, but these reservations are pessimistic because they accommodate the worst-case execution time of each real-time task. We instead model real-time tasks using probabilistic execution times, since worst-case execution times are difficult to calculate and are not representative of actual execution times. In this thesis, we present a probabilistic hierarchical framework for scheduling real-time virtual machines. Our framework reduces the number of CPUs reserved for each guest by up to 45%, while decreasing deadline satisfaction by only 2.7%. In addition, we introduce an introspection mechanism capable of gathering real-time characteristics from guest systems and presenting them to the host scheduler. Evaluations show that our mechanism incurs up to 21x less overhead than bleeding-edge introspection techniques when tracing real-time events. / Master of Science
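The abstract does not spell out how a probabilistic reservation is sized; as a minimal sketch of the idea (with entirely hypothetical task numbers), reserve CPU bandwidth for a chosen quantile of the task's execution-time distribution rather than for its worst case:

import numpy as np

def cpu_reservation(exec_times_ms, period_ms, quantile=None):
    # Size a CPU-bandwidth reservation for a periodic task.
    # quantile=None reserves for the observed worst case (pessimistic);
    # otherwise reserve for that quantile of the execution-time
    # distribution, accepting a small deadline-miss probability.
    budget = np.max(exec_times_ms) if quantile is None else np.quantile(exec_times_ms, quantile)
    return budget / period_ms  # fraction of one CPU

# Hypothetical task: executions cluster near 2 ms with a rare 10 ms tail.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(2.0, 0.2, 990), rng.normal(10.0, 0.5, 10)])

wcet_util = cpu_reservation(samples, period_ms=20.0)                  # worst-case sizing
prob_util = cpu_reservation(samples, period_ms=20.0, quantile=0.99)   # probabilistic sizing
print(f"WCET-based: {wcet_util:.3f} CPU, 99th-percentile: {prob_util:.3f} CPU")

Because the rare tail no longer dictates the budget, the probabilistic reservation is much smaller, which is the intuition behind the reported CPU savings.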
252

Computational Offloading for Real-Time Computer Vision in Unreliable Multi-Tenant Edge Systems

Jackson, Matthew Norman 26 June 2023 (has links)
The demand and interest in serving Computer Vision applications at the Edge, where Edge Devices generate vast quantities of data, clashes with the reality that many Devices are largely unable to process their data in real time. While computational offloading, not to the Cloud but to nearby Edge Nodes, offers convenient acceleration for these applications, such systems are not without their constraints. As Edge networks may be unreliable or wireless, offloading quality is sensitive to communication bottlenecks. Unlike seemingly unlimited Cloud resources, an Edge Node serving multiple clients may incur delays due to resource contention. This project describes relevant Computer Vision workloads and shows how an effective offloading framework must adapt to constraints that impact the Quality of Service but have not been adequately addressed by previous literature. We design an offloading controller, based on closed-loop control theory, that enables Devices to maximize their throughput by offloading appropriately under variable conditions. This approach ensures a Device can utilize the maximum available offloading bandwidth. Finally, we construct a realistic testbed and conduct measurements to demonstrate the superiority of our offloading controller over previous techniques. / Master of Science / Devices like security cameras and some Internet of Things gadgets produce valuable real-time video for AI applications. A field within AI research called Computer Vision aims to use this visual data to compute a variety of useful workloads in a way that mimics the human visual system. However, many workloads, such as classifying objects displayed in a video, have large computational demands, especially when we want to keep up with the frame rate of a real-time video. Unfortunately, these devices, called Edge Devices because they are located far from Cloud datacenters at the edge of the network, are notoriously weak at Computer Vision algorithms and, if running on a battery, will drain it quickly. To keep up, we can offload the computation of these algorithms to nearby servers, but we must keep in mind that the bandwidth of the network might be variable and that too many clients connected to a single server will overload it. A slow network or an overloaded server will incur delays that slow processing throughput. This project describes relevant Computer Vision workloads and shows that an offloading framework that effectively adapts to these constraints has not yet been presented in previous literature. We designed an offloading controller that measures feedback from the system and adapts how a Device offloads computation in order to achieve the best possible throughput despite variable conditions. Finally, we constructed a realistic testbed and conducted measurements to demonstrate the superiority of our offloading controller over previous techniques.
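The abstract does not give the controller's design beyond "closed-loop control theory"; as a hedged sketch of the idea, the following hypothetical feedback controller adjusts the fraction of frames a Device offloads so that measured round-trip latency tracks a frame-rate deadline:

class OffloadController:
    # Closed-loop sketch: adjust the frame-offload fraction so measured
    # round-trip latency tracks a deadline-derived setpoint. The gain,
    # interface, and control law are illustrative assumptions; the
    # thesis's actual controller is not specified in the abstract.

    def __init__(self, target_latency_ms, gain=0.02):
        self.target = target_latency_ms
        self.gain = gain
        self.offload_fraction = 0.5  # start by offloading half the frames

    def update(self, measured_latency_ms):
        # Latency under target -> offload more frames; latency over
        # target (congested network or contended server) -> back off.
        error = measured_latency_ms - self.target
        self.offload_fraction -= self.gain * error / self.target
        self.offload_fraction = min(1.0, max(0.0, self.offload_fraction))
        return self.offload_fraction

ctrl = OffloadController(target_latency_ms=33.0)  # ~30 fps budget
for latency in [20, 25, 40, 60, 45, 30]:          # simulated measurements
    print(f"latency={latency:3d} ms -> offload {ctrl.update(latency):.2f}")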
253

Recovering from Distributable Thread Failures with Assured Timeliness in Real-Time Distributed Systems

Curley, Edward 13 March 2007 (has links)
This thesis considers the problem of recovering from failures of distributable threads with assured timeliness. When a node hosting a portion of a distributable thread fails, it causes orphans: thread segments that are disconnected from the thread's root. A termination model is considered for recovering from such failures. In this model, the orphans must be detected and cleaned up, and a failure-exception notification must be delivered to the farthest contiguous surviving thread segment so that thread execution can resume. Two real-time scheduling algorithms (AUA and HUA) and three distributable thread integrity protocols (TPR, D-TPR, and W-TPR) are presented. We show that AUA combined with any of the presented protocols bounds the orphan cleanup and recovery time, thereby bounding thread starvation durations and maximizing the total thread accrued timeliness utility. The algorithms and protocols are implemented in a real-time middleware that supports distributable threads. Experimental studies with the implementation validate the algorithm/protocol combinations' time-bounded recovery property and confirm their effectiveness. / Master of Science
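The TPR, D-TPR, and W-TPR protocols themselves are not detailed in the abstract; the sketch below only illustrates the termination model's bookkeeping, under the simplifying assumption that a distributable thread is an ordered chain of segments from root to head:

from dataclasses import dataclass

@dataclass
class Segment:
    node: str
    alive: bool  # whether the hosting node is still reachable

def partition_thread(segments):
    # Termination-model sketch: given a thread's segments ordered from
    # root to head, everything past the first failed node is orphaned.
    # The failure exception goes to the farthest contiguous surviving
    # segment, where execution resumes. (Illustrative only; the actual
    # protocols add orphan detection and time-bounded cleanup.)
    for i, seg in enumerate(segments):
        if not seg.alive:
            survivors = segments[:i]
            orphans = [s for s in segments[i:] if s.alive]
            return (survivors[-1] if survivors else None), orphans
    return segments[-1], []  # no failure: resume at the head, no orphans

thread = [Segment("A", True), Segment("B", True), Segment("C", False), Segment("D", True)]
resume_at, orphans = partition_thread(thread)
print(f"resume at {resume_at.node}; clean up {[s.node for s in orphans]}")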
254

An Experimental Evaluation of the Scalability of Real-Time Scheduling Algorithms on Large-Scale Multicore Platforms

Dellinger, Matthew Aalseth 21 June 2011 (has links)
This thesis studies the problem of experimentally evaluating the scaling behaviors of existing multicore real-time task scheduling algorithms on large-scale multicore platforms. As chip manufacturers rapidly increase the core count of processors, it becomes imperative that multicore real-time scheduling algorithms keep pace, so it must be determined whether existing algorithms can scale to these new high core-count platforms. Significant research exists on the theoretical performance of multicore real-time scheduling algorithms, but the vast majority of this research ignores the effects of scalability. It has been demonstrated that multicore real-time scheduling algorithms are feasible for small core-count systems (e.g., 8 cores or fewer), but most of this algorithmic research has never been tested on high core-count systems (e.g., 48 cores or more). We present an experimental analysis of the scalability of 16 multicore real-time scheduling algorithms, including global, clustered, and partitioned algorithms. We cover a broad range of algorithms, including deadline-based and utility accrual scheduling algorithms, and compare them under metrics including schedulability, tardiness, deadline satisfaction ratio, and utility accrual ratio. We consider multicore platforms ranging from 8 to 48 cores. The algorithms are implemented in a real-time Linux kernel we created, called ChronOS. ChronOS is based on the Linux kernel's PREEMPT_RT patch, which provides the underlying operating system kernel with real-time capabilities such as full kernel preemptibility and priority inheritance for kernel locking primitives; ChronOS extends these capabilities with a flexible, scalable real-time scheduling framework. Our study shows that it is possible to implement global fixed- and dynamic-priority and simple global utility accrual real-time scheduling algorithms that scale to large-scale multicore platforms. Interestingly, and in contrast to the conclusions of prior research, our results reveal that some global scheduling algorithms (e.g., G-NP-EDF) are actually scalable at large core counts (e.g., 48). In our implementation, scalability is restricted by lock contention over the global schedule and the cost of inter-processor communication, rather than by the global task queue implementation. We also demonstrate that certain classes of utility accrual algorithms, such as the GUA class, are inherently not scalable. We show that algorithms implemented with scalability as a first-order implementation goal are able to provide real-time guarantees on our 48-core platform. / Master of Science
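As an illustration of the lock-contention bottleneck the study identifies (not ChronOS's actual scheduling framework), the sketch below shows why a global EDF ready queue serializes all cores on a single lock:

import heapq
import threading

class GlobalEDFQueue:
    # Minimal global EDF ready-queue sketch: every CPU takes the same
    # lock to enqueue a task or pick the earliest-deadline task, so this
    # lock becomes the serialization point that limits scaling as core
    # count grows. Illustrative only.

    def __init__(self):
        self._lock = threading.Lock()
        self._heap = []  # (absolute_deadline, task_name)

    def add(self, deadline, task):
        with self._lock:                      # contended by all cores
            heapq.heappush(self._heap, (deadline, task))

    def pick_next(self):
        with self._lock:                      # contended by all cores
            return heapq.heappop(self._heap) if self._heap else None

q = GlobalEDFQueue()
q.add(100, "t1")
q.add(50, "t2")
print(q.pick_next())  # (50, 't2'): earliest deadline first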
255

An Experimental Evaluation of Real-Time DVFS Scheduling Algorithms

Saha, Sonal 12 September 2011 (has links)
Dynamic voltage and frequency scaling (DVFS) is an extensively studied energy management technique, which aims to reduce the energy consumption of computing platforms by dynamically scaling the CPU frequency. Real-Time DVFS (RT-DVFS) is a branch of DVFS that reduces CPU energy consumption through DVFS while ensuring that task time constraints are satisfied by constructing appropriate real-time task schedules. The literature presents numerous RT-DVFS scheduling algorithms, which employ different techniques to utilize the CPU idle time to scale the frequency. Many of these algorithms have been experimentally studied through simulations, but have not been implemented on real hardware platforms. Though simulation-based experimental studies can provide a first-order understanding, implementation-based studies can reveal actual timeliness and energy consumption behaviours. This is particularly important when it is difficult to devise accurate simulation models of hardware, which is increasingly the case with modern systems. In this thesis, we study the timeliness and energy consumption behaviours of fourteen state-of-the-art RT-DVFS schedulers by implementing and evaluating them on two hardware platforms. The schedulers include CC-EDF, LA-EDF, REUA, DRA, and AGR1, among others, and the hardware platforms include an ASUS laptop with an Intel i5 processor and a motherboard with an AMD Zacate processor. We implemented these schedulers in the ChronOS real-time Linux kernel and measured their actual timeliness and energy behaviours under a range of workloads, including CPU-intensive, memory-intensive, mutual-exclusion-lock-intensive, and processor-underloaded and overloaded workloads. Our studies reveal that modeling CPU power consumption as the cube of CPU frequency can lead to incorrect conclusions. In particular, that model ignores the idle-state CPU power consumption, which is orders of magnitude smaller than the active power consumption. Consequently, the power savings obtained by exclusively optimizing active power consumption (i.e., RT-DVFS) may be offset by completing tasks sooner at the highest frequency and transitioning to the idle state earlier (i.e., no DVFS). Thus, the active power savings of the RT-DVFS techniques that we report are orders of magnitude smaller than their simulation-based savings reported in the literature. / Master of Science
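The abstract's central point can be reproduced with a back-of-the-envelope energy model (all power numbers below are hypothetical): once active power includes a large frequency-independent component and idle power is tiny, racing to idle at full frequency can beat stretching the job with DVFS, even though a cube-only model predicts the opposite:

def energy(freq_ghz, work_gcycles, period_s, p_active_w, p_idle_w):
    # Energy over one period: active power while computing, idle power after.
    busy_s = work_gcycles / freq_ghz
    return p_active_w * busy_s + p_idle_w * (period_s - busy_s)

# Hypothetical platform: active power has a large frequency-independent
# part (8 W) plus a cubic dynamic part; idle power is far smaller (0.5 W).
p_active = lambda f: 8.0 + 1.0 * f**3
work, period, p_idle = 1.0, 1.0, 0.5   # 1 Gcycle of work per 1 s period

dvfs = energy(1.0, work, period, p_active(1.0), p_idle)  # stretch job to fit
race = energy(2.0, work, period, p_active(2.0), p_idle)  # run fast, idle early
naive = (1.0 * 1.0**3) * 1.0, (1.0 * 2.0**3) * 0.5       # cube-only model
print(f"with idle power modeled: DVFS {dvfs:.2f} J vs race-to-idle {race:.2f} J")
print(f"cube-only model:         DVFS {naive[0]:.2f} J vs race-to-idle {naive[1]:.2f} J")

Here the cube-only model makes DVFS look four times better, while the fuller model shows race-to-idle winning, mirroring the thesis's measured conclusion.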
256

Adaptive Polling for Responsive Web Applications

Aziz, H., Ridley, Mick J. 16 February 2016 (has links)
The web environment has developed remarkably, and much work has gone into improving web-based notification systems, in which servers act smartly by notifying clients and feeding them subscribed data. In this paper we review some of the problems with current solutions for real-time updates in multi-user web applications. We introduce a new concept, "adaptive polling", based on the AJAX polling technique, to reduce the high volume of redundant server connections while keeping latency reasonable. We demonstrate a prototype implementation of the new concept and evaluate it against the existing one; the positive results clearly indicate greater efficiency in terms of client-server bandwidth.
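The paper's exact adaptation rule is not given in the abstract; a common sketch of adaptive polling, assuming a simple exponential back-off while responses are unchanged, looks like this:

import time
import urllib.request

def adaptive_poll(url, base_s=1.0, max_s=30.0, backoff=2.0):
    # Adaptive-polling sketch: poll fast while updates are flowing, and
    # widen the interval exponentially while nothing changes, cutting
    # redundant connections. The back-off rule here is an assumption;
    # the paper's scheme may differ.
    interval = base_s
    last_body = None
    while True:
        with urllib.request.urlopen(url) as resp:  # one AJAX-style poll
            body = resp.read()
        if body != last_body:          # new data: reset to fast polling
            last_body, interval = body, base_s
            yield body
        else:                          # no change: widen the interval
            interval = min(interval * backoff, max_s)
        time.sleep(interval)

# Usage: for update in adaptive_poll("https://example.com/feed"): ...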
257

The Effects of Packet Buffer Size and Packet Priority on Bursty Real-Time Traffic

Winblad von Walter, Ragnar, Sandred, Johan January 2024 (has links)
Networks which use real-time communication have high requirements on latency and packet loss. Improving one aspect may result in worse performance for the other, and it can be difficult to prioritize one over the other, as all the requirements must be met for the network to operate as expected. Many studies have investigated reducing the size of packet buffers to improve latency. However, they have mainly focused on TCP traffic, which may not be representative of real-time traffic, for which UDP can be more suitable. We have performed an experiment comparing the performance of real-time traffic across multiple buffer sizes. We generated traffic using synchronized bursts of packets which were either sampled value (SV) or IP packets, as defined by IEC 61850. We measured the packet loss and latency in situations where the traffic was either composed entirely of SV packets or mixed SV and IP traffic. For the mixed traffic, we also experimented with assigning different VLAN priorities to the two types of packets. We have determined deadline thresholds that show at what packet buffer size packets start to miss their deadlines, and at what size every packet in a burst of traffic misses its deadline. We also found that increasing the priority of SV packets in mixed traffic can have either a positive or a negative impact on their performance, depending on how highly they are prioritized.
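As a simplified illustration of the buffer-size trade-off the thesis measures (with hypothetical burst and service-time numbers, and an assumed initially empty FIFO queue), a larger buffer absorbs more of a synchronized burst at the cost of longer worst-case queuing delay:

def burst_through_buffer(burst_size, buffer_pkts, service_us_per_pkt):
    # Sketch: a synchronized burst arrives at an empty FIFO queue drained
    # at a fixed link rate. Packets beyond the buffer capacity are dropped,
    # and the last queued packet waits behind everything ahead of it.
    # Returns (packets lost, worst-case queuing delay in microseconds).
    queued = min(burst_size, buffer_pkts)
    lost = burst_size - queued
    worst_delay_us = queued * service_us_per_pkt
    return lost, worst_delay_us

# Hypothetical numbers: 100-packet SV burst, 10 us to transmit each packet.
for buf in (16, 64, 256):
    lost, delay = burst_through_buffer(100, buf, 10)
    print(f"buffer={buf:3d} pkts: lost={lost:3d}, worst queuing delay={delay} us")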
258

Helping job seekers prepare for technical interviews by enabling context-rich interview feedback

Lu, Yi 11 June 2024 (has links)
Technical interviews have become a popular method for recruiters in the tech industry to assess job candidates' proficiency in both soft skills and technical skills as programmers. However, these interviews can be stressful and frustrating for interviewees. One significant cause of the negative experience of technical interviews is the lack of feedback, which makes it difficult for job seekers to progressively improve their performance by participating in technical interviews. Although open platforms like Leetcode allow job seekers to practice their technical proficiency, resources for conducting mock interviews to practice soft skills like communication are limited and costly for interviewees. To address this, we ran mock interviews between software engineers and job seekers to investigate how professional interviewers provide feedback when conducting a mock interview and the difficulties they face when interviewing job seekers. With the insights from these formative studies, we developed a new system for technical interviews that aims to help interviewers conduct technical interviews with less cognitive load and provide context-rich feedback. An evaluation study of the usability of our system for conducting technical interviews further revealed interviewers' unresolved cognitive loads, underscoring the need for further improvements to facilitate easier interview processes and enable peer-to-peer interview practice. / Master of Science / The technical interview is a common method used by tech companies to evaluate job candidates. During these interviews, candidates are asked to solve algorithm problems and explain their thought processes while coding. By running these interviews, recruiters can assess a job candidate's ability to write code and solve problems in limited time. At the same time, the requirement that interviewees think aloud helps interviewers evaluate their communication and collaboration skills. Although technical interviews enable employers to assess job applicants from multiple perspectives, they also expose interviewees to stress and anxiety. Among the many complaints about technical interviews, one significant difficulty of the interview process is the lack of feedback from interviewers. As a result, it is difficult for interviewees to improve progressively by participating in technical interviews repeatedly. Although there are platforms for interviewees to practice writing code, resources like mock interviews with actual interviewers, which let job seekers practice communication skills, are costly and rare. Our study investigated how professional programmers run mock technical interviews and provide feedback when asked to. The mock interview observations helped us understand the standard procedure and common practices practitioners use to run these interviews. At the same time, we identified potential causes of cognitive load and difficulties for interviewers running such interviews. To address these difficulties, we developed a new system that enables interviewers to conduct technical interviews with less cognitive load and provide enriched feedback. After rerunning mock interviews with our system, we noted that while some features of our system helped make the interview process easier, additional cognitive loads remained unresolved. Looking into these difficulties, we suggest several directions for future studies to improve our design and enable an easier interview process for interviewers, as well as support interview rehearsals between job seekers.
259

WINGS CONCEPT: PRESENT AND FUTURE

Harris, Jim, Downing, Bob October 2003 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Western Aeronautical Test Range (WATR) of NASA's Dryden Flight Research Center (DFRC) is facing a challenge in meeting the technology demands of future flight mission projects. Rapid growth in aircraft technology has resulted in complexity that often surpasses the capabilities of the current WATR real-time processing and display systems, which are based on an architecture over a decade old. In response, the WATR has initiated the development of the WATR Integrated Next Generation System (WINGS). The purpose of WINGS is to provide the capability to acquire data from a variety of sources and process that data for analysis and display to Project Users in the WATR Mission Control Centers (MCCs), in real time, in near real time, and in subsequent post-mission analysis. The WINGS system architecture will bridge the continuing gap between new research flight-test requirements and current capability by distributing current system architectures to provide incremental and iterative system upgrades.
260

THE REAL/STAR 2000: A HIGH PERFORMANCE MULTIPROCESSOR COMPUTER FOR TELEMETRY APPLICATIONS

Furht, B., Gluch, D., Parker, J., Matthews, P., Joseph, D. November 1991 (has links)
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In this paper we describe the design of the REAL/STAR 2000 system, a high-performance real-time computer for telemetry applications. The REAL/STAR 2000 is a symmetric, tightly coupled multiprocessor optimized for real-time processing. The system provides a high level of scalability and flexibility by supporting single-, dual-, and quad-processor configurations based on Motorola 88100 RISC processors. The system runs the multiprocessor REAL/IX operating system, a real-time implementation of AT&T UNIX System V. It complies with the BCS and OCS standards, meets the POSIX 1003.1 standard, and has the current functionality of the emerging POSIX 1003.4 real-time standard. The REAL/STAR 2000 promotes an open-system approach to real-time computing by supporting major industry standards. Benchmark results are also presented in the paper.
