About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Computer operating system facilities for the automatic control & activity scheduling of computer-based management systems

Isaacs, Dov January 1977 (has links)
No description available.
162

Managing Memory for Power, Performance, and Thermal Efficiency

Tolentino, Matthew Edward 08 April 2009 (has links)
Extraordinary improvements in computing performance, density, and capacity have driven rapid increases in system energy consumption, motivating the need for energy-efficient performance. Harnessing the collective computational capacity of thousands of these systems can consume megawatts of electrical power, even though many systems may be underutilized for extended periods of time. At scale, powering and cooling unused or lightly loaded systems can waste millions of dollars annually. To combat this inefficiency, we propose system software, control systems, and architectural techniques to improve the energy efficiency of high-capacity memory systems while preserving performance. We introduce and discuss several new application-transparent memory management algorithms as well as a formal analytical model of a power-state control system, rooted in classical control theory, which we developed to proportionally scale memory capacity with application demand. We present a prototype implementation of this control-theoretic runtime system that we evaluate on sequential memory systems. We also present and discuss why the traditional performance-motivated approach of maximizing interleaving within memory systems is problematic and should be revisited in terms of power and thermal efficiency. We then present power-aware control techniques for improving the energy efficiency of symmetrically interleaved memory systems. Given the limitations of traditional interleaved memory configurations, we propose and evaluate unorthodox, asymmetrically interleaved memory configurations. We show that when coupled with our control techniques, significant energy savings can be achieved without sacrificing application performance or memory bandwidth. / Ph. D.
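The abstract gives no implementation detail, but the core control idea — keeping just enough memory devices powered to cover demand — can be sketched briefly. Everything below (device count, capacities, and the set_device_state hook) is a hypothetical illustration, not the dissertation's actual runtime.

```c
/* Minimal sketch, assuming NUM_DEVICES power-manageable memory devices,
 * a demand estimate in pages, and a platform hook to change device states. */
#include <stddef.h>

#define NUM_DEVICES   8        /* hypothetical number of memory ranks/DIMMs */
#define PAGES_PER_DEV 262144   /* hypothetical capacity of one device */

enum dev_state { DEV_OFFLINE, DEV_ONLINE };

/* Platform-specific hook; assumed to exist (e.g., memory hot-add/remove). */
extern void set_device_state(int dev, enum dev_state s);

/* Proportional scaling: power only as many devices as demand (plus a small
 * headroom for hysteresis) requires, mirroring "scale capacity with demand". */
void scale_memory(size_t demanded_pages)
{
    size_t headroom = PAGES_PER_DEV / 2;               /* hysteresis margin */
    size_t target   = (demanded_pages + headroom + PAGES_PER_DEV - 1)
                      / PAGES_PER_DEV;                  /* devices needed */
    if (target > NUM_DEVICES)
        target = NUM_DEVICES;
    if (target < 1)
        target = 1;                                     /* never power everything off */

    for (size_t d = 0; d < NUM_DEVICES; d++)
        set_device_state((int)d, d < target ? DEV_ONLINE : DEV_OFFLINE);
}
```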
163

Improving Operating System Security, Reliability, and Performance through Intra-Unikernel Isolation, Asynchronous Out-of-kernel IPC, and Advanced System Servers

Sung, Mincheol 28 March 2023 (has links)
Computer systems are vulnerable to security exploits, and the security of the operating system (OS) is crucial as it is often a trusted entity that applications rely on. Traditional OSs have a monolithic design where all components are executed in a single privilege layer, but this design is increasingly inadequate as OS code sizes have become larger and expose a large attack surface. Microkernel OSs and multiserver OSs improve security and reliability through isolation, but they come at a performance cost due to crossing privilege layers through IPCs, system calls, and mode switches. Library OSs, on the other hand, implement kernel components as libraries which avoids crossing privilege layers in performance-critical paths and thereby improves performance. Unikernels are a specialized form of library OSs that consist of a single application compiled with the necessary kernel components, and execute in a single address space, usually atop a hypervisor for strong isolation. Unikernels have recently gained popularity in various application domains due to their better performance and security. Although unikernels offer strong isolation between each instance due to virtualization, there is no isolation within a unikernel. Since the model eliminates the traditional separation between kernel and user parts of the address space, the subversion of a kernel or application component will result in the subversion of the entire unikernel. Thus, a unikernel must be viewed as a single unit of trust, reducing security. The dissertation's first contribution is intra-unikernel isolation: we use Intel's Memory Protection Keys (MPK) primitive to provide per-thread permission control over groups of virtual memory pages within a unikernel's single address space, allowing different areas of the address space to be isolated from each other. We implement our mechanisms in RustyHermit, a unikernel written in Rust. Our evaluations show that the mechanisms have low overhead and retain unikernel's low system call latency property: 0.6% slowdown on applications including memory/compute intensive benchmarks as well as micro-benchmarks. Multiserver OS, a type of microkernel OS, has high parallelism potential due to its inherent compartmentalization. However, the model suffers from inferior performance. This is due to inter-process communication (IPC) client-server crossings that require context switches for single-core systems, which are more expensive than traditional system calls; on multi-core systems (now ubiquitous), they have poor resource utilization. The dissertation's second contribution is Aoki, a new approach to IPC design for microkernel OSs. Aoki incorporates non-blocking concurrency techniques to eliminate in-kernel blocking synchronization which causes performance challenges for state-of-the-art microkernels. Aoki's non-blocking (i.e., lock-free and wait-free) IPC design not only improves performance and scalability, but also enhances reliability by preventing thread starvation. In a multiserver OS setting, the design also enables the reconnection of stateful servers after failure without loss of IPC states. Aoki solves two problems that have plagued previous microkernel IPC designs: reducing excessive transitions between user and kernel modes and enabling efficient recovery from failures. We implement Aoki in the state-of-the-art seL4 microkernel. 
Results from our experiments show that Aoki outperforms the baseline seL4 in both fastpath IPC and cross-core IPC, with improvements of 2.4x and 20x, respectively. The Aoki IPC design enables the design of system servers for multiserver OSs with higher performance and reliability. The dissertation's third and final contribution is the design of a fault-tolerant storage server and a copy-free file system server. We build both servers using NetBSD OS's rumprun unikernel, which provides robust isolation through hardware virtualization, and is capable of handling a wide range of storage devices including NVMe. Both servers communicate with client applications using Aoki's IPC design, which yields scalable IPC. In the case of the storage server, the IPC also enables the server to transparently recover from server failures and reconnect to client applications, with no loss of IPC state and no significant overhead. In the copy-free file system server's design, applications grant the server direct memory access to file I/O data buffers for high performance. The performance problems solved in the server designs have challenged all prior multiserver/microkernel OSs. Our evaluations show that both servers have a performance comparable to Linux and the rumprun baseline. / Doctor of Philosophy / Computer security is extremely important, especially when it comes to the operating system (OS) – the foundation upon which all applications execute. Traditional OSs adopt a monolithic design in which all of their components execute at a single privilege level (for achieving high performance). However, this design degrades security as the vulnerability of a single component can be exploited to compromise the entire system. The problem is exacerbated when the OS codebase becomes large, as is the current trend. To overcome this security challenge, researchers have developed alternative OS models such as microkernels, multiserver OSs, library OSs, and recently, unikernels. The unikernel model has recently gained popularity in application domains such as cloud computing, the internet of things (IoT), and high-performance computing due to its improved security and performance. In this model, a single application is compiled together with its necessary OS components to produce a single, small executable image. Unikernels execute atop a hypervisor, a software layer that provides strong isolation between unikernels, usually by leveraging special hardware instructions. Both ideas improve security. The dissertation's first contribution improves the security of unikernels by enabling isolation within a unikernel. This allows different components of a unikernel (e.g., safe code, unsafe code, kernel code, user code) to be isolated from each other. Thus, the vulnerability of a single component cannot be exploited to compromise the entire system. We used Intel's Memory Protection Keys (MPK), a hardware feature of Intel CPUs, to achieve this isolation. Our implementation of the technique and experimental evaluations revealed that the technique has low overhead and high performance. The dissertation's second contribution improves the performance of multiserver OSs. This OS model has excellent potential for parallelization, but its performance is hindered by slow communication between applications and OS subsystems (which are programmed as clients and servers, respectively). 
We develop Aoki, an Inter-Process Communication (IPC) technique that enables faster and more reliable communication between clients and servers in multiserver OSs. Our implementation of Aoki in the state-of-the-art seL4 microkernel and evaluations reveal that the technique improves IPC latency over seL4's by as much as two orders of magnitude. The dissertation's third and final contribution is the design of two servers for multiserver OSs: a storage server and a file system server. The servers are built as unikernels running atop the Xen hypervisor and are powered by Aoki's IPC mechanism for communication between the servers and applications. The storage server is designed to recover its state after a failure with no loss of data and little overhead, and the file system server is designed to communicate with applications with little overhead. Our evaluations show that both servers achieve their design goals: they have comparable performance to that of state-of-the-art high-performance OSes such as Linux.
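The intra-unikernel MPK mechanism described above is implemented inside RustyHermit, but the underlying primitive can be illustrated with a small user-space sketch using Linux's protection-key calls (PKU-capable Intel CPUs, glibc 2.27+). It shows only the per-thread, per-page-group permission idea; it is not the dissertation's code.

```c
/* Illustrative Linux user-space analogue of MPK-based page-group isolation. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    size_t len = 4096;
    char *secret = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (secret == MAP_FAILED) { perror("mmap"); return 1; }

    int pkey = pkey_alloc(0, 0);                 /* allocate a protection key */
    if (pkey < 0) { perror("pkey_alloc"); return 1; }

    /* Tag the page group with the key; page-table permissions stay RW. */
    if (pkey_mprotect(secret, len, PROT_READ | PROT_WRITE, pkey) < 0) {
        perror("pkey_mprotect"); return 1;
    }

    secret[0] = 42;                              /* allowed: key not restricted */

    /* Per-thread revocation: a PKRU register update, no page-table change. */
    pkey_set(pkey, PKEY_DISABLE_ACCESS);
    /* Any access to `secret` from this thread would now fault. */

    pkey_set(pkey, 0);                           /* restore access */
    printf("secret[0] = %d\n", secret[0]);
    pkey_free(pkey);
    return 0;
}
```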
164

Multiple strategy process migration.

De Paoli, Damien, mikewood@deakin.edu.au January 1996 (has links)
The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, could be used to support load balancing, parallel execution, reliability, etc. This thesis identifies the problems past process migration facilities have had and determines the possible differing strategies that can be used to resolve these problems. The result of this analysis has led to a new design philosophy. This philosophy requires the design of a process migration facility and the design of an operating system to be conducted in parallel. Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high performance facility that is easy to create, debug and maintain. Furthermore, the design easily incorporates providing multiple migration strategies. In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System). RHODOS is a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state - maintained by a process manager, address space - maintained by a memory manager and communication state - maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple strategy migration manager utilises the services of the process, memory and IPC Managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems which use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified. This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use. Instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
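The central design point — the migration manager delegating each resource of a process to the server that owns it — can be sketched as follows. The messaging primitive, server names, and ordering are illustrative assumptions, not RHODOS's actual protocol.

```c
/* Sketch of "ask each server to move its part of the process". */
enum server { PROCESS_MGR, MEMORY_MGR, IPC_MGR, N_SERVERS };

/* Assumed messaging primitive: ask server s to transfer its part of
 * process `pid` to node `dest`; returns nonzero on success. */
extern int request_transfer(enum server s, int pid, int dest);

/* Migrate a process by delegating each resource to its owning server. */
int migrate_process(int pid, int dest)
{
    static const enum server order[] = {
        PROCESS_MGR,   /* freeze and ship process state first         */
        MEMORY_MGR,    /* then the address space (strategy-specific)  */
        IPC_MGR        /* finally redirect the communication state    */
    };

    for (int i = 0; i < N_SERVERS; i++)
        if (!request_transfer(order[i], pid, dest))
            return 0;  /* a real facility would roll back here */
    return 1;
}
```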
165

Analýza trhu operačních systémů / Operating Systems Market Analysis

Kafka, Jan January 2012 (has links)
The subject of this work is operating systems: their significance, their history, an analysis of the current situation, and an attempt to predict the future. The first part introduces the basic concepts, the definition of an operating system, and a brief history. The second part deals with the current situation on the operating systems market, the main drivers of this sector, and the business models used. The last part deals with predictions for the operating systems market: their future evolution, the probable evolution of their business models, and an estimate of the near future from a business point of view. A survey of newly available scientific publications about operating systems was also carried out. The contribution of this thesis is an evaluation of the role and future of operating systems and a prediction of the business outlook for this sector of the industry.
166

Securing resource constrained platforms with low-cost solutions.

Arslan Khan (17592498) 11 December 2023 (has links)
<p dir="ltr">This thesis focuses on securing different attack surfaces of embedded systems while meeting the stringent requirements imposed by these systems. Due to the specialized architecture of embedded systems, the security measures should be customized to match the unique requirements of each specific domain. To this end, this thesis identified novel security architectures using techniques such as anomaly detection, program analysis, compartmentalization, etc. This thesis synergizes work at the intersection of programming languages, compilers, computer architecture, operating systems, and embedded systems. </p>
167

Virtualized resource management in high performance fabric clusters

Ranadive, Adit Uday 07 January 2016 (has links)
Providing performance and isolation guarantees for applications running in virtualized datacenter environments requires continuous management of the underlying physical resources. For communication- and I/O-intensive applications running on such platforms, the management methods must adequately deal with the shared use of the high-performance fabrics these applications require. In particular, new classes of latency-sensitive and data-intensive workloads running in virtualized environments rely on emerging fabrics like 40+Gbps Ethernet and InfiniBand/RoCE with support for RDMA, VMM-bypass and hardware-level virtualization (SR-IOV). However, the benefits provided by these technology advances are offset by several management constraints: (i) the inability of the hypervisor to monitor the VMs’ usage of these fabrics can affect the platform’s ability to provide isolation and performance guarantees, (ii) the hypervisor cannot provide fine-grained I/O provisioning or perform management decisions for VMs, thus reducing the degree of consolidation that can be supported on the platforms, and (iii) without such support it is harder to integrate these fabrics into emerging cloud computing platforms and datacenter fabric management solutions. This is made particularly challenging for workloads spanning multiple VMs, utilizing physical resources distributed across multiple server nodes and the interconnection fabric. This thesis addresses the problem of realizing a flexible, dynamic resource management system for virtualized platforms with high performance fabrics. We make the following key contributions: (i) A lightweight monitoring tool, IBMon, integrated with the hypervisor to monitor VMs’ use of RDMA-enabled virtualized interconnects, using memory introspection techniques. (ii) The design and construction of a resource management system that leverages IBMon to provide latency-sensitive applications performance guarantees. This system is built on microeconomic principles of supply and demand and can be deployed on a per-node (Resource Exchange) or a multi-node (Distributed Resource Exchange) basis. Fine-grained resource allocations can be enforced through several mechanisms, including CPU capping or fabric-level congestion control. (iii) Sphinx, a fabric management solution that leverages Resource Exchange to orchestrate network and provide latency proportionality for consolidated workloads, based on user/application-specified policies. (iv) Implementation and experimental evaluation using InfiniBand clusters virtualized with the Xen or KVM hypervisor, managed via the OpenFloodlight SDN controller, and using representative data-intensive and latency-sensitive benchmarks.
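One simple way to picture the supply-and-demand allocation with CPU-capping enforcement described above is the sketch below. The data structures, units, and the apply_cpu_cap hook are assumptions for illustration, not the Resource Exchange implementation.

```c
/* Minimal supply/demand sketch: allocate fabric bandwidth in proportion to
 * each VM's demand, then throttle VMs that exceed their share. */
#include <stddef.h>

struct vm {
    double demand_mbps;    /* monitored demand (e.g., from IBMon-style data) */
    double usage_mbps;     /* currently observed usage                       */
    double alloc_mbps;     /* computed allocation                            */
};

extern void apply_cpu_cap(size_t vm_index, double fraction);  /* assumed hook */

void allocate_bandwidth(struct vm *vms, size_t n, double supply_mbps)
{
    double total_demand = 0.0;
    for (size_t i = 0; i < n; i++)
        total_demand += vms[i].demand_mbps;
    if (total_demand <= 0.0)
        return;

    for (size_t i = 0; i < n; i++) {
        /* Proportional share of the available fabric bandwidth. */
        vms[i].alloc_mbps = supply_mbps * vms[i].demand_mbps / total_demand;

        /* Enforcement: cap the VM's CPU if it overshoots its allocation. */
        if (vms[i].usage_mbps > vms[i].alloc_mbps)
            apply_cpu_cap(i, vms[i].alloc_mbps / vms[i].usage_mbps);
    }
}
```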
168

On providing an efficient and reliable virtual block storage service

Esterhuyse, Eben 03 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2001. / ENGLISH ABSTRACT: This thesis describes the design and implementation of a data storage service. Many clients can be served simultaneously in an environment where processes execute on different physical machines and communicate via message passing primitives. The service is provided by two separate servers: one that functions at the disk block level and another that maintains files. A prototype system was developed first in the form of a simple file store. The prototype served two purposes: (1) it extended the single-user Oberon system to create a multiuser system suitable to support group work in laboratories, and (2) it provided a system that could be measured to obtain useful data to design the final system. Clients access the service from Oberon workstations. The Oberon file system (known as the Ceres file system) normally stores files on a local disk. This system was modified to store files on a remote Unix machine. Heavily used files are cached to improve the efficiency of the system. In the final version of the system disk blocks are cached, not entire files. In this way the disks used to store the data are unified and presented as a separate virtual block service to be used by file systems running on client workstations. The virtual block server runs on a separate machine and is accessed via a network. The simplicity of the block server is appealing and should in itself improve reliability. The main concern is efficiency and the goal of the project was to determine whether such a design can be made efficient enough to serve its purpose. / AFRIKAANSE OPSOMMING (translated): This thesis describes the design and implementation of a data storage service. Several users are served by the service, which operates in a distributed environment: an environment in which processes execute on different machines and communicate with one another by means of messages. The service is provided by two servers: the first operating at the block level and the other maintaining files. A prototype file service was first developed in the form of a basic file store. The prototype fulfilled two functions: (1) the single-user Oberon system was extended into a multi-user system usable for group work in a laboratory environment, and (2) it provided a system that could supply reliable and accurate data for the design of the final system. Oberon workstations are used with the file service. The Oberon file system (also known as the Ceres file system) normally stores files on a local disk. This existing system was modified to store files on a remote Unix machine. The most heavily used files are kept in memory for reasons of efficiency. The final version of the system caches disk blocks in memory, not files. This approach allows data to be stored in a standard way, usable by the different kinds of file systems running on the various users' workstations. The virtual block store runs on a separate machine and is reachable via a network. The simple design of the service is in itself appealing and should improve reliability. The main concern is efficiency, and the main goal of the project was to determine whether this design could be made efficient enough.
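The block-level caching at the heart of the final design can be illustrated with a minimal write-through cache in front of an assumed remote block service; the cache geometry and the remote_read/remote_write RPCs are invented for the sketch.

```c
/* Tiny direct-mapped, write-through block cache in front of a remote block
 * service. Sizes and the remote I/O hooks are illustrative assumptions. */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  4096
#define CACHE_SLOTS 1024

/* Assumed RPCs to the remote virtual block server. */
extern void remote_read(uint64_t blkno, void *buf);
extern void remote_write(uint64_t blkno, const void *buf);

static struct {
    uint64_t blkno;
    int      valid;
    uint8_t  data[BLOCK_SIZE];
} cache[CACHE_SLOTS];

void block_read(uint64_t blkno, void *buf)
{
    size_t slot = blkno % CACHE_SLOTS;
    if (!cache[slot].valid || cache[slot].blkno != blkno) {
        remote_read(blkno, cache[slot].data);     /* miss: fetch from server */
        cache[slot].blkno = blkno;
        cache[slot].valid = 1;
    }
    memcpy(buf, cache[slot].data, BLOCK_SIZE);
}

void block_write(uint64_t blkno, const void *buf)
{
    size_t slot = blkno % CACHE_SLOTS;
    memcpy(cache[slot].data, buf, BLOCK_SIZE);    /* keep the cache coherent */
    cache[slot].blkno = blkno;
    cache[slot].valid = 1;
    remote_write(blkno, buf);                     /* write-through for safety */
}
```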
169

Kernel support for embedded reactive systems

Ackerman, M. C. (Marthinus Casper) 10 1900 (has links)
Thesis (MSc)--Stellenbosch University, 1993. / ENGLISH ABSTRACT: Reactive systems are event-driven state machines which usually do not terminate, but remain in perpetual interaction with their environment. Such systems usually interact with devices which introduce a high degree of concurrency and some real time constraints to the system. Because of the concurrent nature of reactive systems they are commonly implemented as communicating concurrent processes on one or more processors. Jeffay introduces a design paradigm which requires consumer processes to consume messages faster than they are produced by producer processes. If this is guaranteed, the real time constraints of such a system are always met, and the correctness of the process interaction is guaranteed in terms of the message passing semantics. I developed the ESE kernel, which supports Jeffay systems by providing lightweight processes which communicate over asynchronous channels. Processes are scheduled non-preemptively according to the earliest deadline first policy when they have messages pending on their input channels. The Jeffay design method and the ESE kernel have been found to be highly suitable to implement embedded reactive systems. The general requirements of embedded reactive systems, and kernel support required by such systems, are discussed. / AFRIKAANSE OPSOMMING (translated): Reactive systems are state machines driven by events in their environment. Such a system usually does not terminate, but remains in continual interaction with devices in its environment. Devices in the environment of a reactive system generally introduce a high degree of concurrency into the system and usually place certain real-time constraints on it. Concurrent systems are usually implemented as systems of communicating processes on one or more processors. Jeffay describes a design methodology according to which the receiver of messages must process them faster than the sender can produce them. If this behaviour can be guaranteed between all pairs of communicating processes, the system will always meet its real-time constraints, and the correctness of interactions between processes is guaranteed by the semantics of the message passing. The "ESE" operating system kernel that I developed supports systems designed and implemented according to Jeffay's method. Processes communicate over asynchronous channels, and the receiver of the message with the earliest deadline is always scheduled first. In practice, Jeffay's design method and the "ESE" kernel prove to be well suited to reactive systems that execute as subsystems of larger systems. The requirements of reactive subsystems, and the kernel support needed for them, are discussed.
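The scheduling rule described above — non-preemptive, earliest deadline first among processes with pending messages — can be sketched as a simple selection function. The data structures are assumptions; the real ESE kernel will differ.

```c
/* Sketch of an ESE-style dispatch rule: among processes that have a message
 * pending on an input channel, run the one with the earliest deadline, and
 * do not preempt it. */
#include <stddef.h>
#include <stdint.h>

struct process {
    uint64_t deadline;        /* absolute deadline of the pending message */
    int      msg_pending;     /* nonzero if an input channel is non-empty */
};

/* Returns the index of the process to dispatch next, or -1 if none ready. */
int pick_next(const struct process *procs, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!procs[i].msg_pending)
            continue;
        if (best < 0 || procs[i].deadline < procs[best].deadline)
            best = (int)i;
    }
    return best;  /* caller runs this process to completion (non-preemptive) */
}
```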
170

Débogage interactif des systèmes embarqués multicœur basé sur le modèle de programmation / Interactive Debugging of Multicore Embedded Systems Based on the Programming Model

Pouget, Kevin 03 February 2014 (has links) (PDF)
In this thesis, we propose to study the interactive debugging of applications for MPSoC (Multi-Processor System on Chip) embedded systems. A state-of-the-art survey showed that the design and development of these applications rely more and more often on programming models and development frameworks. These environments define best practices, both at the algorithmic level and at the level of programming techniques, and thereby improve the development cycle of applications targeting MPSoC processors. The use of programming models does not, however, guarantee that the code will execute without errors, particularly in the case of dynamic programming, where they offer very little help with verification. Our contribution to addressing these challenges is a new approach to interactive debugging, called Programming Model-Centric Debugging, together with a prototype debugger implementation. Model-centric debugging raises interactive debugging to the level of abstraction provided by the programming models, by capturing and interpreting the events generated during the application's execution. We applied this approach to three programming models, based on software components, dataflow, and kernel-based accelerator programming. We then detail how we developed our prototype debugger, based on GDB, for programming STMicroelectronics' STHORM platform. We also show how to approach model-centric debugging through four case studies: an augmented-reality code built from components, a dataflow implementation of an H.264 video decoder, and two scientific-computing applications.
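The core mechanism — capturing runtime events and interpreting them at the programming-model level — is commonly realized by having the runtime call a well-known hook that the debugger can break on. The sketch below illustrates that pattern only; the event names and the hook itself are assumptions, not the STHORM/GDB prototype's actual interface.

```c
/* Illustrative event hook: the runtime calls emit_model_event() at
 * model-significant points; a model-centric debugger places a breakpoint on
 * it and rebuilds component/graph state from the arguments. */
#include <stdio.h>

enum model_event {
    EV_COMPONENT_START,   /* a software component begins executing  */
    EV_TOKEN_PUSH,        /* a dataflow actor pushes a token        */
    EV_KERNEL_LAUNCH      /* an accelerator kernel is offloaded     */
};

/* Kept deliberately trivial so a debugger can intercept every call. */
void emit_model_event(enum model_event ev, const char *entity, long detail)
{
    /* Normally empty; printing here just makes the sketch self-contained. */
    printf("event=%d entity=%s detail=%ld\n", (int)ev, entity, detail);
}

int main(void)
{
    emit_model_event(EV_COMPONENT_START, "tracker", 0);
    emit_model_event(EV_TOKEN_PUSH, "decoder.out", 1);
    emit_model_event(EV_KERNEL_LAUNCH, "sobel_kernel", 16);
    return 0;
}
```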
