71

Effective interprocess communication (IPC) in a real-time transputer network

Bor, Mehmet January 1994 (has links)
The thesis describes the design and implementation of an interprocess communication (IPC) mechanism within a real-time distributed operating system kernel (RT-DOS) which is designed for a transputer-based network. The requirements of real-time operating systems are examined and existing design and implementation strategies are described. Particular attention is paid to one of the object-oriented techniques, although it is concluded that these techniques are not feasible for the chosen implementation platform. Studies of a number of existing operating systems are reported. The choices for various aspects of operating system design and their influence on the IPC mechanism to be used are elucidated. The actual design choices are related to the real-time requirements and the implementation that has been adopted is described.
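Transputer networks communicate over CSP-style channels, where a send and a receive must rendezvous. The abstract does not give the kernel's actual mechanism, but a minimal sketch of such a synchronous channel (a hypothetical `Channel` class, not RT-DOS code) looks like this:

```python
import threading

class Channel:
    """A minimal CSP-style synchronous channel, loosely modeled on
    transputer/occam rendezvous semantics: a sender blocks until a
    receiver is ready, and vice versa. Illustrative only."""
    def __init__(self):
        self._item = None
        self._sent = threading.Semaphore(0)
        self._received = threading.Semaphore(0)
        self._lock = threading.Lock()  # serialize concurrent senders

    def send(self, item):
        with self._lock:
            self._item = item
            self._sent.release()       # signal: data is available
            self._received.acquire()   # block until the receiver takes it

    def receive(self):
        self._sent.acquire()           # block until a sender arrives
        item = self._item
        self._received.release()       # unblock the waiting sender
        return item

ch = Channel()
results = []
consumer = threading.Thread(target=lambda: results.append(ch.receive()))
consumer.start()
ch.send("sensor-reading")              # rendezvous: completes only when received
consumer.join()
```

The rendezvous property is what makes such channels attractive for real-time kernels: a completed `send` doubles as an acknowledgment, so no unbounded message buffering is needed.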
72

Operating system scheduling optimization

Anderson, George Georgevich 28 May 2013 (has links)
D.Phil. (Electrical and Electronic Engineering) / This thesis explores methods for improving, or optimizing, Operating System (OS) scheduling. We first study the problem of tuning an OS scheduler by setting various parameters, or knobs, made available. This problem has not been addressed extensively in the literature, and has never been solved for the default Linux OS scheduler. We present three methods useful for tuning an Operating System scheduler in order to improve the quality of scheduling, leading to better performance for workloads. The first method is based on Response Surface Methodology (RSM), the second on Particle Swarm Optimization (PSO), while the third is based on the Golden Section method. We test our proposed methods using experiments and suitable benchmarks and validate their viability. Results indicate significant gains in execution time for workloads tuned with these methods over execution time for workloads running under schedulers with default, unoptimized tuning parameters. The gains from using RSM-based over default scheduling parameter settings are limited only by the type of workload (how much time it needs to execute); gains of up to 16.48% were obtained, but even more are possible, as described in the thesis. When comparing PSO with Golden Section, PSO produced better scheduling parameter settings, but it took longer to do so, while Golden Section produced slightly worse parameter settings, but much faster. We also study a problem very critical to scheduling on modern Central Processing Units (CPUs). Modern CPUs have multicore designs, which corresponds to having more than one CPU on a single chip. These are known as Chip Multiprocessors (CMPs). The CMP is now the standard type of CPU for many different types of computers, including Personal Computers.
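The Golden Section method the abstract mentions treats scheduler tuning as one-dimensional minimization: run a benchmark at candidate knob values and shrink the search bracket. A minimal sketch, with a hypothetical `benchmark` function standing in for "run the workload with this knob setting and report execution time" (the thesis's actual knobs and benchmarks are not given here):

```python
import math

def golden_section_minimize(f, lo, hi, tol=1e-3):
    """Golden-section search over a single tunable parameter.
    Assumes f is unimodal on [lo, hi]; returns an x near the minimum."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - (b - a) * invphi
    d = a + (b - a) * invphi
    while abs(b - a) > tol:
        if f(c) < f(d):
            b = d                    # minimum lies in [a, d]
        else:
            a = c                    # minimum lies in [c, b]
        c = b - (b - a) * invphi
        d = a + (b - a) * invphi
    return (a + b) / 2

# Hypothetical stand-in for a benchmark run: execution time is
# unimodal in the knob, with the best setting at knob = 6.
benchmark = lambda knob: (knob - 6.0) ** 2 + 1.0
best = golden_section_minimize(benchmark, 0.0, 20.0)
```

Each iteration shrinks the bracket by a factor of about 0.618, which is why the abstract finds Golden Section much faster than PSO: it needs only one benchmark run per iteration, at the cost of assuming a unimodal response.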
73

Conformance testing of OSI protocols : the class 0 transport protocol as an example

Kou, Tian January 1987 (has links)
This thesis addresses the problem of conformance testing of communication protocol implementations. Test sequence generation techniques for finite state machines (FSM) have been developed to solve the problem of high costs of an exhaustive test. These techniques also guarantee a complete coverage of an implementation in terms of state transitions and output functions, and therefore provide a sound test of the implementation under test. In this thesis, we have modified and applied three test sequence generation techniques on the class 0 transport protocol. A local tester and executable test sequences for the ISO class 0 transport protocol have been developed on a portable protocol tester to demonstrate the practicality of the test methods and test methodologies. The local test is achieved by an upper tester residing on top of the implementation under test (IUT) and a lower tester residing at the bottom of the IUT. Tests are designed based on the state diagram of an IUT. Some methodologies of parameter variations have also been used to test primitive parameters of the implementation. Some problems encountered during the implementation of the testers and how they are resolved are also discussed in the thesis. / Science, Faculty of / Computer Science, Department of / Graduate
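The FSM-based techniques described above generate input sequences that cover every state transition of the protocol machine. A minimal sketch of one such technique, a "transition tour," over a toy connection-oriented FSM (the states and inputs here are illustrative, not the actual ISO class 0 transport machine):

```python
from collections import deque

# Toy Mealy machine: state -> {input: (next_state, output)}.
# Loosely protocol-flavored (CR=connect request, DR=disconnect request),
# but hypothetical, not the ISO Class 0 transport FSM.
FSM = {
    "CLOSED": {"CR": ("WAIT", "CC")},
    "WAIT":   {"DATA": ("OPEN", "ACK"), "DR": ("CLOSED", "DC")},
    "OPEN":   {"DATA": ("OPEN", "ACK"), "DR": ("CLOSED", "DC")},
}

def transition_tour(fsm, start):
    """Build an input sequence exercising every transition at least once,
    greedily taking the shortest path (BFS) to each untested transition."""
    pending = {(s, i) for s, edges in fsm.items() for i in edges}
    tour, state = [], start
    while pending:
        queue, seen, path = deque([(state, [])]), {state}, None
        while queue and path is None:
            s, p = queue.popleft()
            for inp, (nxt, _) in fsm[s].items():
                if (s, inp) in pending:
                    path = p + [inp]       # reaches an untested transition
                    break
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, p + [inp]))
        for inp in path:                   # replay; mark everything exercised
            pending.discard((state, inp))
            state = fsm[state][inp][0]
        tour.extend(path)
    return tour

tour = transition_tour(FSM, "CLOSED")
```

Applying the tour through an implementation under test and checking each observed output against the FSM's expected output is what gives these methods their complete transition and output coverage.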
74

Improving Caches in Consolidated Environments

Koller, Ricardo 24 July 2012 (has links)
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer’s processor. In order to maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system with the use of memory caches by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where a memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to the speed of the fastest one. The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. And second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the workloads sharing a cache increase, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates to wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention to be used by administrators for provisioning, or by process schedulers to decide what processes to run together. Third, we proposed methods for removing cache duplication and for eliminating space wasted because of contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
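The decades-old eviction policies the abstract refers to are exemplified by LRU (least recently used). A minimal sketch of LRU caching in front of a slow backing store (class and names hypothetical, for illustration only):

```python
from collections import OrderedDict

class LRUCache:
    """A minimal LRU cache: keep the most recently used items, evict the
    least recently used when full. Illustrative sketch of the classic
    policy, not the dissertation's mechanisms."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()        # insertion order = recency order
        self.hits = self.misses = 0

    def get(self, key, fetch):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)   # mark as most recently used
            return self.data[key]
        self.misses += 1
        value = fetch(key)               # simulate the slow backing store
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        return value

cache = LRUCache(2)
backing = {"a": 1, "b": 2, "c": 3}
for k in ["a", "b", "a", "c", "a", "b"]:
    cache.get(k, backing.__getitem__)
# Access "c" pushed "b" out; a later "b" access misses again.
```

Note that LRU knows nothing about which workload inserted each item, which is exactly why consolidation causes the duplication and contention problems the dissertation targets.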
75

A System Generation for a Small Operating System

Pargiter, Luke R., Sayers, Jerry E. 08 April 1992 (has links)
A system generation utility has been developed to assist students in producing IBM PC-based multitasking applications targeted for the small operating system (SOS) developed by Jerry E. Sayers. Our aim is to augment SOS by enabling a student to interactively tailor the characteristics of the operating system to meet the requirements of a particular application. The system allows the user to adjust factors such as the initial state, priority, and scheduling method of concurrently executed tasks, and, also, the use of system resources. A custom operating system is produced by invoking a MAKE utility to bind SOS with application-specific code, in addition to intermediate source code created during the system generation process. Testing of the system included implementing an application that adds column vectors in a 5 x 5000 matrix concurrently. Further testing involves using the system generation utility along with SOS as part of an undergraduate operating systems class at East Tennessee State University.
76

Computer operating system facilities for the automatic control & activity scheduling of computer-based management systems /

Isaacs, Dov January 1977 (has links)
No description available.
77

Distributed Computing Systems: an Overview

Schwarzkopf, Haim 01 January 1977 (has links) (PDF)
Associative processors, parallel processors, content addressable parallel processors, networks, and other architectures have been around the computing scene as "Distributed Processing" for some time now. Several hundred papers have been written discussing their use and design, but so far no academic work has tried to summarize the field called "Distributed Processing" using a systems approach. This research report attempts to remedy this lack. It gathers into one place information that existed as of late 1976, in a format easily understandable by managers and systems engineers. The report also deals with certain issues of centralization and decentralization of EDP (Electronic Data Processing) facilities, created by the introduction of distributed computing systems into industries and businesses.
78

On Improving the Security of Virtualized Systems through Unikernelized Driver Domain and Virtual Machine Monitor Compartmentalization and Specialization

Mehrab, A. K. M. Fazla 31 March 2023 (has links)
Virtualization is the backbone of cloud infrastructures. Its core subsystems include hypervisors and virtual machine monitors (VMMs). They ensure the isolation and security of co-existent virtual machines (VMs) running on the same physical machine. Traditionally, driver domains -- isolated VMs in a hypervisor such as Xen that run device drivers -- use general-purpose full-featured OSs (e.g., Linux), which have a large attack surface, as evidenced by the increasing number of their common vulnerabilities and exposures (CVEs). We argue for using the unikernel operating system (OS) model for driver domains. In this model, a single application is statically compiled together with the minimum necessary kernel code and libraries to produce a single address-space image, reducing code size by as much as one order of magnitude, which yields security benefits. We develop a driver domain OS, called Kite, using NetBSD OS's rumprun unikernel. Since rumprun is directly based on NetBSD's code, it allows us to leverage NetBSD's large collection of device drivers, including highly specialized ones such as Amazon ENA. Kite's design overcomes several significant challenges, including Xen's limited para-virtualization (PV) I/O support in rumprun, the lack of Xen backend drivers, which prevents rumprun from being used as a driver domain OS, and NetBSD's lack of support for running driver domains in Xen. We instantiate Kite for the two most widely used I/O devices, storage and network, by designing and implementing the storage backend and network backend drivers. Our evaluations reveal that Kite achieves competitive performance to a Linux-based driver domain while using 10x fewer system calls, mitigates a set of CVEs, and retains all the benefits of unikernels, including a reduced number of return-oriented programming (ROP) gadgets and improved gadget-related metrics.
General-purpose VMMs include a large number of components that may not be used in many VM configurations, resulting in a large attack surface. In addition, they lack intra-VMM isolation, which degrades security: vulnerabilities in one VMM component can be exploited to compromise other components or those of the host OS and other VMs (by privilege escalation). To mitigate these security challenges, we develop principles for VMM compartmentalization and specialization. We construct a prototype, called Redwood, embodying those principles. Redwood is built by extending Cloud Hypervisor and compartmentalizes thirteen critical components (i.e., virtual I/O devices) using Intel MPK, a hardware primitive available in Intel CPUs. Redwood has fifteen fine-grained modules, each representing a single feature, which increases its configurability and flexibility. Our evaluations reveal that Redwood is as performant as the baseline Cloud Hypervisor, has a 50% smaller VMM image size and 50% fewer ROP gadgets, and is resilient to an array of CVEs. I/O acceleration architectures, such as the Data Plane Development Kit (DPDK), enhance VM performance by moving the data plane from the VMM to a separate userspace application. Since the VMM must share its VMs' sensitive information with accelerated applications, it can potentially degrade security. The dissertation's final contribution is the compartmentalization of a VM's sensitive data within an accelerated application using the Intel MPK hardware primitive. Our evaluations reveal that the technique does not cause any degradation in I/O performance and mitigates potential attacks and a class of CVEs. / Doctor of Philosophy / Instead of using software on a local device like a laptop or a mobile phone, consumers can access the same services from a remote high-end computer through high-speed Internet.
This paradigm shift in computing is enabled by a remote computing infrastructure known as the "cloud," wherein networked server computers are deployed to execute third-party applications, often untrusted. Multiple applications are consolidated on the same server to save computer resources, but this can compromise security: a malicious application can steal co-existent applications' sensitive data. To enable resource consolidation and mitigate security attacks, applications are executed using a virtual machine (VM) -- an abstract machine that runs its own operating system (OS). Multiple VMs run on a single physical machine using two software systems: hypervisor and virtual machine monitor (VMM). They ensure that VMs are spatially isolated from each other, localizing security attacks. This dissertation focuses on enhancing the security of hypervisors and VMMs. The hypervisor and VMM have multiple responsibilities toward supporting the OS running on the physical computer and VMs. The OS runs software called device drivers, which communicate with input-output (I/O) hardware such as network and storage devices. Device drivers, usually written by third-party and I/O device manufacturers, are highly vulnerable to security attacks. To mitigate such attacks, device drivers are often run inside special VMs, called driver domains. State-of-the-art driver domains use a general-purpose full-featured OS such as Linux, which has a large code base (in the tens of millions of lines of code) and thus, a large attack surface. To address this security challenge, the dissertation proposes using lightweight, single-purpose VMs called unikernels, as driver domain OSs. Their code size is smaller than that of full-featured OSs by as much as one order of magnitude, which yields security benefits. We design and develop a unikernel-based driver domain, called Kite, for network and storage I/O devices. Kite uses NetBSD OS's rumprun unikernel for creating a driver domain OS.
Using rumprun unikernel as a driver domain OS requires overcoming many technical challenges including a lack of support in a popular hypervisor such as Xen for performing I/O operations and communicating with rumprun, among others. Kite's design overcomes these challenges. Our empirical studies reveal that Kite is ten times less likely to be affected by future attacks and ten times faster to start than existing solutions for driver domains. At the same time, Kite domains match the performance of state-of-the-art driver domain OSs such as Linux. The hypervisor and VMM are responsible for creating VMs and providing resources such as memory, processing power, and hardware device access. Existing VMMs are designed to be versatile. Thus, they include a large number of components that may not be used in many VM configurations, resulting in a large attack surface. In addition, VMM components are not well spatially separated from each other. Thus, vulnerabilities in one component can be exploited to compromise other components. To address these security challenges, the dissertation proposes a set of principles for i) customizing a VMM for each VM's needs, instead of using one VMM for all VMs, and ii) strongly isolating VMM components from each other. We realize these principles in a prototype implementation called Redwood. Redwood is highly configurable and separates critical I/O components from each other using a hardware primitive. Our evaluations reveal that Redwood significantly reduces the VMM's size and VMM's vulnerabilities while maintaining performance. To enhance VM performance, I/O acceleration software is often used that eliminates communication overheads in the VMM. To do so, the VMM must share VMs' sensitive information with accelerated applications, which can potentially degrade security. The dissertation's final contribution is a technique that strongly isolates and limits access to sensitive information in the application using a hardware primitive. 
Our evaluations reveal that the technique improves security by localizing attacks without sacrificing performance.
79

Cross-ISA Execution Migration of Unikernels: Build Toolchain, Memory Alignment, and VM State Transfer Techniques

Mehrab, A K M Fazla 12 December 2018 (has links)
Data centers are composed of resource-rich, expensive server machines. A server overloaded with workloads offloads some jobs to other servers; otherwise, its throughput becomes low. On the other hand, low-end embedded computers are low-power, cheap, OS-capable devices. We propose a system to use these embedded devices alongside the servers and migrate some jobs from the server to the boards to increase throughput when overloaded. Data centers usually run workloads inside virtual machines (VMs), but these embedded boards are not capable of running full-fledged VMs. In this thesis, we propose to use lightweight VMs, called unikernels, which can run on these low-end embedded devices. Another problem is that the most efficient versions of these boards have different instruction set architectures (ISAs) than the servers. The ISA difference between the servers and the embedded boards, and the migration of the entire unikernel between them, makes the migration a non-trivial problem. This thesis proposes a way to provide the unikernels with migration capabilities so that it becomes possible to offload workloads from the server to the embedded boards. This thesis describes a toolchain development process for building migratable unikernels for native applications. This thesis also describes the alignment of memory components between unikernels for different ISAs, so that memory references remain valid and consistent after migration. Moreover, this thesis presents an efficient VM state transfer method so that workloads experience minimal execution-time overhead and downtime due to the migration. / Master of Science / Cloud computing providers run data centers which are composed of thousands of server machines. Servers are robust, scalable, and thus capable of executing many jobs efficiently. At the same time, they are expensive to purchase and maintain.
However, these servers may become overloaded by the jobs and take more time to finish their execution. In this situation, we propose a system which runs low-cost, low-power single-board computers in the data centers to help the servers, in considered scenarios, reduce execution time by transferring jobs from the server to the boards. Cloud providers run services inside virtual machines (VMs), which provide isolation from other services. As these boards are not capable of running traditional VMs due to their low resources, we run lightweight VMs, called unikernels, on them. So if the servers are overloaded, some jobs running inside unikernels are offloaded to the boards. Later, when some of the server's resources are freed, these jobs are migrated back to the server. This back-and-forth migration system for a unikernel is composed of several modules. This thesis discusses the detailed design and implementation of a few of these modules, such as the unikernel build environment and the unikernel's execution state transfer during migration.
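The core idea of the state transfer described above is that a job's execution state can be serialized on one machine and resumed on another. A toy sketch of that idea (hypothetical `Job` class; the real system must additionally align memory layouts across ISAs, which this does not model):

```python
import pickle

class Job:
    """A toy migratable job: its entire execution state is a plain dict
    that can be serialized, shipped to another host, and resumed.
    Illustrates the state-transfer idea only, not the thesis's
    cross-ISA toolchain."""
    def __init__(self):
        self.state = {"i": 0, "total": 0}

    def step(self):
        # One unit of work: running sum 1 + 2 + 3 + ...
        self.state["i"] += 1
        self.state["total"] += self.state["i"]

    def checkpoint(self):
        return pickle.dumps(self.state)   # bytes to send over the wire

    @classmethod
    def restore(cls, blob):
        job = cls()
        job.state = pickle.loads(blob)    # rebuild on the target machine
        return job

job = Job()
for _ in range(3):
    job.step()
blob = job.checkpoint()       # "migrate" the serialized state
resumed = Job.restore(blob)   # resume on the other host
resumed.step()                # continues from where it left off
```

Keeping downtime low then comes down to how little work happens between the final checkpoint and the resume, which is what an efficient transfer method optimizes.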
80

System support for design and development environments

Smith, Eric C. January 1986 (has links)
Most, if not all, currently popular operating systems are designed to be general purpose environments for the development, maintenance, documentation and execution of systems of all types. Thus, the designers of the operating system must try to make the system a compromise between efficiency and power in all of these areas. This paper suggests that a class of operating systems and tools be designed to deal specifically with the problems of software design and development only. The fact that only the development tools themselves, and not the systems under development, are required to run fast and efficiently in the development environment is stressed as providing significantly different weight to the various considerations of operating system design. Since many of the problems of run time efficiency are no longer quite so pressing, additional power can be given to the operating system so that it may better support the software design and development process. / M.S.
