101

Linuxová emulační vrstva ve FreeBSD / Linux Emulation Layer in FreeBSD

Divácký, Roman Unknown Date (has links)
This master's thesis deals with updating the Linux emulation layer (the so-called Linuxulator). The task was to update the layer to match the functionality of Linux 2.6. The Linux 2.6.16 kernel was chosen as the reference implementation. The concept is loosely based on the NetBSD implementation. Most of the work was done in the summer of 2006 as part of the Google Summer of Code student program. The focus was on bringing NPTL (Native POSIX Thread Library) support into the emulation layer, including TLS (thread-local storage), futexes (fast user-space mutexes), PID mangling, and some other minor things. Many small problems were identified and fixed in the process. My work was integrated into the main FreeBSD source repository and will be shipped in the upcoming 7.0R release. We, the emulation development team, are working toward making Linux 2.6 emulation the default emulation layer in FreeBSD.
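For readers unfamiliar with the primitives named above, the following is a minimal user-space sketch (not Linuxulator code) of a lock built on Linux's futex(2) system call, the "fast user-space mutex" facility the emulation layer has to provide to Linux binaries: uncontended acquire and release stay in user space, and only contended waiters enter the kernel.

```c
/* Minimal, simplified futex-backed lock; illustrative only. */
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

static long futex(atomic_int *uaddr, int op, int val) {
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

/* Lock word: 0 = unlocked, 1 = locked. */
static void lock(atomic_int *f) {
    int expected = 0;
    while (!atomic_compare_exchange_strong(f, &expected, 1)) {
        /* Contended: sleep in the kernel while the word is still 1.
         * FUTEX_WAIT re-checks the value, so a stale read just returns. */
        futex(f, FUTEX_WAIT, 1);
        expected = 0;
    }
}

static void unlock(atomic_int *f) {
    atomic_store(f, 0);
    /* Simplification: always wake one waiter, even if none are queued;
     * real futex locks track contention to skip this syscall. */
    futex(f, FUTEX_WAKE, 1);
}
```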
102

Lottery Scheduling in the Linux Kernel: A Closer Look

Zepp, David 01 June 2012 (has links) (PDF)
This paper presents an implementation of a lottery scheduler, from design through debugging to performance testing. Desirable characteristics of a general-purpose scheduler include low overhead, good overall system performance for a variety of process types, and fair scheduling behavior. The scheduler is tested and the results are analyzed against these characteristics. Lottery scheduling is found to provide better-than-average control over the relative execution rates of processes. The results show that lottery scheduling is a good mechanism for sharing the CPU fairly between users competing for the resource. While the lottery scheduler proves to have several interesting properties, overall system performance suffers and does not compare favorably with the balanced performance afforded by the standard Linux kernel scheduler.
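The core mechanism can be illustrated with a short user-space sketch (not the thesis's kernel code): each runnable process holds a number of tickets, one ticket is drawn uniformly at random per scheduling decision, and the process whose ticket range covers the draw gets the CPU, so execution rates converge to the ticket proportions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct proc { const char *name; int tickets; };

/* One lottery: draw a ticket in [0, total) and walk the list until the
 * running ticket sum passes the draw; that process wins the CPU. */
static int draw_winner(const struct proc *p, int n, int total) {
    int draw = rand() % total, sum = 0;
    for (int i = 0; i < n; i++) {
        sum += p[i].tickets;
        if (draw < sum)
            return i;
    }
    return n - 1;
}

int main(void) {
    struct proc procs[] = { {"A", 75}, {"B", 20}, {"C", 5} };
    int counts[3] = {0}, total = 100;
    srand((unsigned)time(NULL));
    for (int i = 0; i < 100000; i++)   /* each iteration = one scheduling decision */
        counts[draw_winner(procs, 3, total)]++;
    for (int i = 0; i < 3; i++)        /* wins track ticket shares: ~75% / 20% / 5% */
        printf("%s won %d of 100000 lotteries\n", procs[i].name, counts[i]);
    return 0;
}
```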
103

A Flattened Hierarchical Scheduler for Real-Time Virtual Machines

Drescher, Michael Stuart 04 June 2015 (has links)
The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates its real-time needs have been proposed, but they cannot support legacy operating systems. This thesis demonstrates the shortcomings of resource partitioning using temporally-isolated servers, presents an alternative solution to the scheduling problem called the KairosVM Flattening Scheduling Algorithm, and provides an implementation of the algorithm based on Linux and KVM. The algorithm is analyzed theoretically and an exact schedulability test for the algorithm is derived. Simulations show that the algorithm can schedule more than 90% of all randomly generated tasksets with a utilization less than 0.95. In comparison to the state-of-the-art server-based approach, the KairosVM Flattening Scheduling Algorithm is able to schedule more than 20 times more tasksets with a utilization of 0.95. Experimental results demonstrate that the Linux-based implementation is able to match the deadline satisfaction ratio of a state-of-the-art server-based approach when the taskset is schedulable using the state-of-the-art approach. When tasksets are unschedulable, the implementation is able to increase the deadline satisfaction ratio of vanilla KVM by up to 400%. Furthermore, unlike paravirtualized solutions, the implementation supports legacy systems through the use of introspection. / Master of Science
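As context for the simulation results, random tasksets at a target utilization are commonly generated with the UUniFast algorithm; the abstract does not state which generator the thesis uses, so the sketch below only illustrates the usual approach.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* UUniFast (Bini & Buttazzo): draw n per-task utilizations that sum to
 * u_total, uniformly over the valid space.  Illustrative only; the thesis
 * may use a different generator. */
static void uunifast(double *u, int n, double u_total) {
    double sum = u_total;
    for (int i = 0; i < n - 1; i++) {
        double next = sum * pow((double)rand() / RAND_MAX, 1.0 / (n - 1 - i));
        u[i] = sum - next;
        sum = next;
    }
    u[n - 1] = sum;
}

int main(void) {
    double u[8];
    srand((unsigned)time(NULL));
    uunifast(u, 8, 0.95);              /* one 8-task set at utilization 0.95 */
    for (int i = 0; i < 8; i++)
        printf("task %d: U = %.3f\n", i, u[i]);
    return 0;
}
```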
104

Real-Time Hierarchical Scheduling of Virtualized Systems

Burns, Kevin Patrick 17 October 2014 (has links)
In industry there has been a large focus on system integration and server consolidation, even for real-time systems, leading to an interest in virtualization. However, many modern hypervisors do not inherently support the strict timing guarantees of real-time applications. Several challenges arise when trying to virtualize a real-time application; one key challenge is maintaining the guest's real-time guarantees. In a typical virtualized environment there is a hierarchy of schedulers. Past solutions address this with strict resource reservation models. These reservations are pessimistic, as they accommodate the worst-case execution time of each real-time task. We instead model real-time tasks using probabilistic execution times, since worst-case execution times are difficult to calculate and are not representative of actual execution times. In this thesis, we present a probabilistic hierarchical framework to schedule real-time virtual machines. Our framework reduces the number of CPUs reserved for each guest by up to 45%, while decreasing the deadline satisfaction ratio by only 2.7%. In addition, we introduce an introspection mechanism capable of gathering real-time characteristics from the guest systems and presenting them to the host scheduler. Evaluations show that our mechanism incurs up to 21x less overhead than bleeding-edge introspection techniques when tracing real-time events. / Master of Science
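The gap between worst-case and probabilistic budgets can be illustrated with a toy calculation on hypothetical execution-time samples (not data from the thesis): reserving for a high percentile of the observed distribution can be far cheaper than reserving for the single worst observation.

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void) {
    /* Hypothetical observed execution times (ms) for one task with a
     * 10 ms period; a single outlier dominates the worst case. */
    double c[] = {1.9, 2.0, 2.1, 2.0, 2.2, 2.1, 2.0, 2.3, 2.1, 9.5};
    int n = sizeof c / sizeof c[0];
    double period = 10.0;

    qsort(c, n, sizeof c[0], cmp);
    double wcet = c[n - 1];                 /* worst observed value   */
    double p90  = c[(int)(0.9 * (n - 1))];  /* 90th-percentile budget */

    printf("WCET-based reservation:      %.0f%% of a CPU\n", 100.0 * wcet / period);
    printf("90th-percentile reservation: %.0f%% of a CPU\n", 100.0 * p90 / period);
    return 0;
}
```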
105

Single System Image in a Linux-based Replicated Operating System Kernel

Ravichandran, Akshay Giridhar 15 September 2015 (has links)
Recent trends in the computer market suggest that emerging computing platforms will be increasingly parallel and heterogeneous in order to satisfy user demand for improved performance and superior energy savings. Heterogeneity is a promising way to keep growing the number of cores per chip without breaking the power wall. However, existing system software copes with homogeneous architectures but was not designed to run on heterogeneous ones; new system software designs are therefore necessary. One innovative design is the multikernel OS deployed by the Barrelfish operating system, which partitions hardware resources among independent kernel instances that communicate exclusively by message passing, without exploiting the shared memory available amongst different CPUs in a multicore platform. Popcorn Linux implements an extension of the multikernel design, called a replicated-kernel OS, with the goal of providing a Linux-based single-system-image environment on top of multiple kernels, which can eventually run on processors with different ISAs. A replicated-kernel OS replicates the state of various OS sub-systems amongst kernels that cooperate using message passing to distribute or access services uniquely available on each kernel. In this thesis, we present mechanisms to distribute signals, namespaces, and inter-thread synchronization, and to replicate socket state. These features are built on top of the existing messaging layer, process/thread migration, and address-space consistency protocol to give applications the illusion of a single system image and developers the SMP programming environment they are most familiar with. The mechanisms developed were unit-tested with micro-benchmarks to validate their correctness and to measure the speedup gained or overhead added. Real-world applications were also used to benchmark the developed mechanisms on homogeneous and heterogeneous architectures. The contributed Popcorn synchronization mechanism exhibits overhead compared to vanilla Linux on a multicore, because Linux's equivalent mechanisms are tightly coupled to the underlying hardware cache-coherency protocol and are therefore faster than software message passing. On heterogeneous platforms, the developed mechanisms allow each portion of the application to be transparently mapped to the processor on which it executes faster. Optimizations are recommended to further improve the performance of the proposed synchronization mechanism. However, such optimizations may force user-space applications and libraries to be rewritten to decouple their synchronization mechanisms from shared memory, losing the transparency that is one of the primary goals of this work. / Master of Science
106

An Experimental Evaluation of the Scalability of Real-Time Scheduling Algorithms on Large-Scale Multicore Platforms

Dellinger, Matthew Aalseth 21 June 2011 (has links)
This thesis studies the problem of experimentally evaluating the scaling behavior of existing multicore real-time task scheduling algorithms on large-scale multicore platforms. As chip manufacturers rapidly increase the core count of processors, it becomes imperative that multicore real-time scheduling algorithms keep pace; thus, it must be determined whether existing algorithms can scale to these new high core-count platforms. Significant research exists on the theoretical performance of multicore real-time scheduling algorithms, but the vast majority of it ignores the effects of scalability. Multicore real-time scheduling algorithms have been demonstrated to be feasible on small core-count systems (e.g., 8 cores or fewer), but so far most of this algorithmic research has never been tested on high core-count systems (e.g., 48 cores or more). We present an experimental analysis of the scalability of 16 multicore real-time scheduling algorithms, including global, clustered, and partitioned algorithms. We cover a broad range of algorithms, including deadline-based and utility accrual scheduling algorithms. These algorithms are compared under metrics including schedulability, tardiness, deadline satisfaction ratio, and utility accrual ratio. We consider multicore platforms ranging from 8 to 48 cores. The algorithms are implemented in a real-time Linux kernel we created, called ChronOS. ChronOS is based on the Linux kernel's PREEMPT_RT patch, which provides the underlying operating system kernel with real-time capabilities such as full kernel preemptibility and priority inheritance for kernel locking primitives. ChronOS extends these capabilities with a flexible, scalable real-time scheduling framework. Our study shows that it is possible to implement global fixed- and dynamic-priority and simple global utility accrual real-time scheduling algorithms that scale to large-scale multicore platforms. Interestingly, and in contrast to the conclusions of prior research, our results reveal that some global scheduling algorithms (e.g., G-NP-EDF) are actually scalable to large core counts (e.g., 48). In our implementation, scalability is restricted by lock contention over the global schedule and the cost of inter-processor communication, rather than by the global task queue implementation. We also demonstrate that certain classes of utility accrual algorithms, such as the GUA class, are inherently not scalable. We show that algorithms implemented with scalability as a first-order implementation goal are able to provide real-time guarantees on our 48-core platform. / Master of Science
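The lock-contention bottleneck mentioned above is easy to picture with a toy global-EDF sketch (not ChronOS code): every core that needs to pick its next task serializes on one runqueue lock, so the lock and the cache-line traffic behind it, rather than the queue data structure itself, bound scalability as the core count grows.

```c
#include <pthread.h>
#include <stddef.h>

struct task { unsigned long long deadline; int runnable; };

#define NTASKS 64
static struct task ready[NTASKS];
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Global EDF pick: choose the runnable task with the earliest absolute
 * deadline.  All cores funnel through the same lock, which becomes the
 * contention point as the core count approaches 48. */
static struct task *pick_next_global_edf(void) {
    struct task *best = NULL;
    pthread_mutex_lock(&queue_lock);
    for (int i = 0; i < NTASKS; i++) {
        if (ready[i].runnable &&
            (best == NULL || ready[i].deadline < best->deadline))
            best = &ready[i];
    }
    pthread_mutex_unlock(&queue_lock);
    return best;
}
```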
107

An Experimental Evaluation of Real-Time DVFS Scheduling Algorithms

Saha, Sonal 12 September 2011 (has links)
Dynamic voltage and frequency scaling (DVFS) is an extensively studied energy management technique which aims to reduce the energy consumption of computing platforms by dynamically scaling the CPU frequency. Real-Time DVFS (RT-DVFS) is a branch of DVFS which reduces CPU energy consumption through DVFS while ensuring that task time constraints are satisfied by constructing appropriate real-time task schedules. The literature presents numerous RT-DVFS scheduling algorithms, which employ different techniques to exploit CPU idle time for frequency scaling. Many of these algorithms have been studied experimentally through simulations, but have not been implemented on real hardware platforms. Though simulation-based studies can provide a first-order understanding, implementation-based studies can reveal actual timeliness and energy consumption behaviors. This is particularly important when it is difficult to devise accurate simulation models of the hardware, which is increasingly the case with modern systems. In this thesis, we study the timeliness and energy consumption behaviors of fourteen state-of-the-art RT-DVFS schedulers by implementing and evaluating them on two hardware platforms. The schedulers include CC-EDF, LA-EDF, REUA, DRA, and AGR1, among others, and the hardware platforms are an ASUS laptop with an Intel i5 processor and a motherboard with an AMD Zacate processor. We implemented these schedulers in the ChronOS real-time Linux kernel and measured their actual timeliness and energy behaviors under a range of workloads including CPU-intensive, memory-intensive, mutual-exclusion-lock-intensive, and processor-underloaded and overloaded workloads. Our studies reveal that modeling CPU power consumption as the cube of CPU frequency can lead to incorrect conclusions. In particular, it ignores the idle-state CPU power consumption, which is orders of magnitude smaller than the active power consumption. Consequently, power savings obtained by exclusively optimizing active power consumption (i.e., RT-DVFS) may be offset by completing tasks sooner at the highest frequency and transitioning to the idle state earlier (i.e., no DVFS). Thus, the active power consumption savings of the RT-DVFS techniques that we report are orders of magnitude smaller than their simulation-based savings reported in the literature. / Master of Science
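The point about the cube-of-frequency model can be made concrete with a back-of-the-envelope comparison; the numbers below are hypothetical, not measurements from the thesis. Under the naive model, halving the frequency looks like a large energy win; once a nonzero idle power and a more realistic active-power curve (voltage cannot scale all the way down with frequency) are included, racing to idle can come out ahead.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical task: 5 ms of work at f_max within a 10 ms period. */
    double period = 10e-3, busy_at_fmax = 5e-3;

    double p_fmax       = 10.0;          /* W at f_max (hypothetical)         */
    double p_half_naive = p_fmax / 8.0;  /* cube law: (f/2)^3 -> 1/8 power    */
    double p_half_meas  = 5.5;           /* a more realistic value at f_max/2 */
    double p_idle       = 0.1;           /* idle power, ignored by cube law   */

    double e_race       = p_fmax * busy_at_fmax + p_idle * (period - busy_at_fmax);
    double e_dvfs_naive = p_half_naive * period;   /* half frequency, busy all period */
    double e_dvfs_meas  = p_half_meas  * period;

    printf("race-to-idle:              %.1f mJ\n", e_race * 1e3);
    printf("DVFS, cube-law model:      %.1f mJ\n", e_dvfs_naive * 1e3);
    printf("DVFS, measured-like model: %.1f mJ\n", e_dvfs_meas * 1e3);
    return 0;
}
```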
108

Evaluation of power management strategies on actual multiprocessor platforms / Évaluation de stratégies de gestion de la consommation pour des plateformes multiprocesseurs concrètes

Khan Jadoon, Jabran 25 March 2013 (has links)
L’objectif de cette thèse est d’étudier l’efficacité énergétique des stratégies basse consommation pour des plateformes représentatives. Principalement, nous nous intéresserons aux stratégies énergétiques pour des systèmes embarqués multicœur en étudiant le comportement de politiques logicielles qui permettent la réduction effective de l’énergie tout en répondant aux exigences applicatives. Le travail présenté dans ce mémoire vise à étudier des stratégies de gestion de la consommation pour des plateformes monoprocesseur puis multiprocesseur concrètes. L’approche utilisée pour cette étude fut basée sur des plateformes représentatives afin d’identifier les paramètres significatifs, aussi bien au niveau matériel qu’au niveau applicatif, à l’inverse de nombreux travaux dans lesquels ces paramètres sont assez peu pris en compte voire ignorés. Ce travail analyse et compare diverses expérimentations menées sur des politiques énergétiques basées sur des techniques DVFS (Dynamic Voltage and Frequency Scaling) et DPS (Dynamic Power Switching) et définit les conditions sous lesquelles ces stratégies sont efficaces. Ces expérimentations ont permis d’établir des conclusions remarquables qui peuvent servir de pré-requis lors de la définition de stratégies efficaces de gestion de la consommation. Ces résultats montrent également que pour obtenir des stratégies efficientes il est nécessaire de tenir compte du domaine applicatif. Enfin, il faut noter que les modèles de consommation de haut niveau ont été définis sur la base des mesures effectuées afin d’estimer les gains énergétiques dès les premières étapes d’un flot de conception. / The purpose of this study is to investigate how power management strategies can be efficiently exploited on actual platforms. Primarily, the challenges in multicore-based embedded systems lie in managing the energy expenditure, determining the scheduling behavior, and establishing methods to monitor power and energy, so as to meet battery-life and load requirements. The work presented in this dissertation is a practical study of power-aware strategies for single- and multiprocessor platforms. The approach is based on representative multiprocessor platforms (real or virtual) to identify the most influential parameters, at the hardware as well as the application level, unlike many existing works in which these parameters are often underestimated or even ignored. The work analyzes and compares in detail various experiments with different power policies based on Dynamic Voltage and Frequency Scaling (DVFS) and Dynamic Power Switching (DPS) techniques, and investigates the conditions under which these policies are effective in terms of energy savings. The results of these investigations reveal many notable conclusions that can serve as prerequisites for the efficient use of power management strategies. This work also shows the potential of advanced domain-specific power strategies compared to the strategies available in practice, most of which are general-purpose. Finally, some high-level consumption models are derived from the energy measurement results to enable estimation of power management benefits at early stages of system development.
109

Taintx: A System for Protecting Sensitive Documents

Dillon, Patrice 06 August 2009 (has links)
Across the country, members of the workforce are being laid off due to downsizing. Most of those people work for large corporations and have access to important company documents. Several studies suggest that employees take critical information after learning they will be laid off. This is an issue and a threat to a corporation's security, so corporations must make sure sensitive documents never leave the company. In this study we build a system to assist corporations and system administrators by preventing users from taking sensitive documents. The system helps to maintain a level of security that is not only beneficial but also a crucial part of managing a corporation and of enhancing its ability to compete in an aggressive market.
110

Redes de microfones em tempo real: uma implementação com hardware de baixo custo e software de código aberto. / Real time microphone arrays: a low-cost implementation with open source code.

Conde, Flávio 24 February 2010 (has links)
Este trabalho apresenta a implementação prática de uma rede de microfones para ser utilizada em tempo real. A solução proposta envolve o uso de hardware de baixo custo e software de código aberto. Em termos de hardware, a rede de microfones utilizou dispositivos de áudio USB conectados diretamente a um computador pessoal (PC). Em termos de software, foram utilizados a biblioteca de código aberto Advanced Linux Sound Architecture (ALSA) e o sistema operacional Linux. Algumas implementações foram realizadas na biblioteca ALSA para que fosse possível a utilização da rede de microfones dentro do Linux. Os algoritmos implementados na biblioteca ALSA foram o Delay and Sum, Generalized Sidelobe Canceller (GSC) e o Post-Filtering. Os aspectos teóricos dos principais algoritmos empregados nas redes de microfones foram abordados de forma extensa. Os resultados teóricos e práticos desta implementação são apresentados no final deste trabalho. Todo o desenvolvimento de software foi publicado na Internet para que sirva de base para futuros trabalhos. / This work presents the practical implementation of a microphone array to be used in real time. The proposed solution involves low-cost hardware and open-source software. In terms of hardware, the microphone array uses USB audio devices connected directly to a personal computer (PC). In terms of software, the open-source Advanced Linux Sound Architecture (ALSA) library and the Linux operating system were used. Some extensions were implemented in the ALSA library to make it possible to use the microphone array within Linux. The algorithms implemented in the ALSA library were Delay-and-Sum, Generalized Sidelobe Canceller (GSC), and Post-Filtering. The theory behind the main algorithms used in microphone arrays is discussed extensively. The theoretical and practical results of this implementation are presented at the end of this work. All software developed was published on the Internet to serve as a basis for future work.
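Of the three algorithms, Delay-and-Sum is the simplest to sketch: each microphone channel is shifted by a per-channel sample delay (precomputed from the array geometry and the steering direction) and the aligned samples are averaged. The function below is a generic illustration, not the ALSA plugin developed in the thesis.

```c
/* Delay-and-sum beamformer over one block of audio.
 * mics[ch][sample] holds the input, delays[ch] is the per-channel delay in
 * whole samples, and out[] receives the beamformed block. */
void delay_and_sum(const float *const *mics, const int *delays,
                   int nch, int nsamples, float *out) {
    for (int n = 0; n < nsamples; n++) {
        float acc = 0.0f;
        for (int ch = 0; ch < nch; ch++) {
            int idx = n - delays[ch];   /* align channel ch to the look direction */
            if (idx >= 0 && idx < nsamples)
                acc += mics[ch][idx];
        }
        out[n] = acc / (float)nch;      /* averaging reinforces the steered source */
    }
}
```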
