  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

OS-aware architecture for improving microprocessor performance and energy efficiency

Li, Tao 28 August 2008 (has links)
Not available / text
62

A REAL-TIME MULTI-TASKING OPERATING SYSTEM FOR MICROCOMPUTERS.

Spencer, Robert Douglas. January 1984 (has links)
No description available.
63

Device drivers : a comparison of different development strategies

Loubser, Johannes Jacobus 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2000. / ENGLISH ABSTRACT: Users are not supposed to modify an operating system kernel, but it is often necessary to add a device driver for a new peripheral device. Device driver development is a difficult and time-consuming process that must be performed by an expert. Drivers are difficult to debug, and a malfunctioning driver can cause the operating system to crash. Ways are therefore needed to make the development of device drivers safer and easier. A number of different device driver development methods are examined in this thesis. An existing micro-kernel that supports in-kernel device drivers as well as extensible device drivers has been modified to support user-level and loadable drivers. These extensions ensured that all the development methods were implemented in the same environment, so that a comparison could be made on a fair basis. A comparison of the different methods with respect to the efficiency of the resulting device driver, as well as the ease of the development process, is presented. / AFRIKAANSE OPSOMMING (translated): Users are not supposed to alter an operating system, yet it is often necessary to add a device driver for a new peripheral device. The development of a device driver is a time-consuming and difficult process and must be undertaken by an expert. Device drivers are difficult to debug and can, through incorrect operation, bring the whole system to a halt. Techniques are therefore needed to make the development of device drivers easier and safer. A number of different development methods for device drivers are examined in this thesis. An existing micro-kernel that supports in-kernel as well as extensible device drivers has been adapted to support user-level and loadable drivers. This extension ensured that all the development methods were implemented in the same environment. It was thus possible to compare the methods on a fair basis. The comparison was made with respect to the efficiency of the resulting device driver as well as the difficulty of the development process.
64

ENHANCING FILE AVAILABILITY IN DISTRIBUTED SYSTEMS (THE SAGUARO FILE SYSTEM).

Purdin, Titus Douglas Mahlon January 1987 (has links)
This dissertation describes the design and implementation of the file system component of the Saguaro operating system for computers connected by a local-area network. Systems constructed on such an architecture have the potential advantage of increased file availability due to their inherent redundancy. In Saguaro, this advantage is made available through two mechanisms that support semi-automatic file replication and access: reproduction sets and metafiles. A reproduction set is a collection of files that the system attempts to keep identical on a "best effort" basis, relying on the user to handle unusual situations that may arise. A metafile is a special file that contains symbolic path names of other files; when a metafile is opened, the system selects an available constituent file and opens it instead. These mechanisms are especially appropriate for situations that do not require guaranteed consistency or a large number of copies. Other interesting aspects of the Saguaro file system design are also described. The logical file system forms a single tree, yet any file can be placed in any of the physical file systems. This organization allows the creation of a logical association among files that is quite different from their physical association. In addition, the broken path algorithm is described. This algorithm makes it possible to bypass elements in a path name that are on inaccessible physical file systems. Thus, any accessible file can be made available, regardless of the availability of directories in its path. Details are provided on the implementation of the Saguaro file system. The servers of which the system is composed are described individually, and a comprehensive operational example is supplied to illustrate their interaction. The underlying data structures of the file system are presented. The virtual roots, which contain information used by the broken path algorithm, are the most novel of these. 
Finally, an implementation of reproduction sets and metafiles for interconnected networks running Berkeley UNIX is described. This implementation demonstrates the broad applicability of these mechanisms. It also provides insight into the way in which mechanisms to facilitate user controlled replication of files can be inexpensively added to existing file systems. Performance measurements for this implementation are also presented.
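The metafile mechanism described in this abstract — a special file listing symbolic path names, where opening the metafile opens the first available constituent — can be illustrated with a minimal user-level sketch. This is our own toy model, not Saguaro's actual implementation; the function name and the plain-text metafile format are assumptions for illustration only.

```python
import os


def open_metafile(metafile_path):
    """Open the first available constituent file named in a metafile.

    Here a metafile is modeled as a plain text file whose lines are
    path names of replica files; each path is tried in order and the
    first one that opens successfully is returned.
    """
    with open(metafile_path) as mf:
        for line in mf:
            path = line.strip()
            if not path:
                continue
            try:
                return open(path, "r")
            except OSError:
                continue  # this constituent is unavailable; try the next
    raise FileNotFoundError(
        "no constituent of %s is available" % metafile_path)
```

In Saguaro the selection happens inside the file system when the metafile is opened; the sketch merely shows why the mechanism tolerates unavailable replicas: any single reachable constituent suffices.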
65

Current status of queueing network theory

Jou, Chi-Jiunn January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
66

File sharing : an implementation of the multiple writers feature

Kenney, Mary January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
67

Finding, Measuring, and Reducing Inefficiencies in Contemporary Computer Systems

Kambadur, Melanie Rae January 2016 (has links)
Computer systems have become increasingly diverse and specialized in recent years. This complexity supports a wide range of new computing uses and users, but is not without cost: it has become difficult to maintain the efficiency of contemporary general purpose computing systems. Computing inefficiencies, which include nonoptimal runtimes, excessive energy use, and limits to scalability, are a serious problem that can result in an inability to apply computing to solve the world's most important problems. Beyond the complexity and vast diversity of modern computing platforms and applications, a number of factors make improving general purpose efficiency challenging, including the need to examine multiple levels of the computer system stack, the fact that legacy hardware devices and software may stand in the way of achieving efficiency, and the need to balance efficiency with reusability, programmability, security, and other goals. This dissertation presents five case studies, each demonstrating different ways in which the measurement of emerging systems can provide actionable advice to help keep general purpose computing efficient. The first of the five case studies is Parallel Block Vectors, a new profiling method whose fine-grained, code-centric perspective on parallel programs aids both future hardware design and the optimization of software to map better onto existing hardware. Second is a project that defines a new way of measuring application interference on a datacenter's worth of chip-multiprocessors, leading to improved scheduling where applications can more effectively utilize available hardware resources. Next is a project that uses the GT-Pin tool to define a method for accelerating the simulation of GPGPUs, ultimately allowing for the development of future hardware with fewer inefficiencies. 
The fourth project is an experimental energy survey that compares and combines the latest energy efficiency solutions at different levels of the stack to properly evaluate the state of the art and to find paths forward for future energy efficiency research. The final project presented is NRG-Loops, a language extension that allows programs to measure and intelligently adapt their own power and energy use.
68

The design and implementation of a load distribution facility on Mach.

January 1997 (has links)
by Hsieh Shing Leung Arthur. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 78-81).
List of Figures --- p.viii
List of Tables --- p.ix
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background and Related Work --- p.4
Chapter 2.1 --- Load Distribution --- p.4
Chapter 2.1.1 --- Load Index --- p.5
Chapter 2.1.2 --- Task Transfer Mechanism --- p.5
Chapter 2.1.3 --- Load Distribution Facility --- p.6
Chapter 2.2 --- Load Distribution Algorithm --- p.6
Chapter 2.2.1 --- Classification --- p.6
Chapter 2.2.2 --- Components --- p.7
Chapter 2.2.3 --- Stability and Effectiveness --- p.9
Chapter 2.3 --- The Mach Operating System --- p.10
Chapter 2.3.1 --- Mach kernel abstractions --- p.10
Chapter 2.3.2 --- Mach kernel features --- p.11
Chapter 2.4 --- Related Work --- p.12
Chapter 3 --- The Design of Distributed Scheduling Framework --- p.16
Chapter 3.1 --- System Model --- p.16
Chapter 3.2 --- Design Objectives and Decisions --- p.17
Chapter 3.3 --- An Overview of DSF Architecture --- p.17
Chapter 3.4 --- The DSF server --- p.18
Chapter 3.4.1 --- Load Information Module --- p.19
Chapter 3.4.2 --- Movement Module --- p.22
Chapter 3.4.3 --- Decision Module --- p.25
Chapter 3.5 --- LD library --- p.28
Chapter 3.6 --- User-Agent --- p.29
Chapter 4 --- The System Implementation --- p.33
Chapter 4.1 --- Shared data structure --- p.33
Chapter 4.2 --- Synchronization --- p.37
Chapter 4.3 --- Reentrant library --- p.39
Chapter 4.4 --- Interprocess communication (IPC) --- p.42
Chapter 4.4.1 --- Mach IPC --- p.42
Chapter 4.4.2 --- Socket IPC --- p.43
Chapter 5 --- Experimental Studies --- p.47
Chapter 5.1 --- Load Distribution algorithms --- p.47
Chapter 5.2 --- Experimental environment --- p.49
Chapter 5.3 --- Experimental results --- p.50
Chapter 5.3.1 --- Performance of LD algorithms --- p.50
Chapter 5.3.2 --- Degree of task transfer --- p.54
Chapter 5.3.3 --- Effect of threshold value --- p.55
Chapter 6 --- Conclusion and Future Work --- p.57
Chapter 6.1 --- Summary and Conclusion --- p.57
Chapter 6.2 --- Future Work --- p.58
Chapter A --- LD Library --- p.60
Chapter B --- Sample Implementation of LD algorithms --- p.65
Chapter B.1 --- LOWEST --- p.65
Chapter B.2 --- THRHLD --- p.67
Chapter C --- Installation Guide --- p.71
Chapter C.1 --- Software Requirement --- p.71
Chapter C.2 --- Installation Steps --- p.72
Chapter C.3 --- Configuration --- p.73
Chapter D --- User's Guide --- p.74
Chapter D.1 --- The DSF server --- p.74
Chapter D.2 --- The User Agent --- p.74
Chapter D.3 --- LD experiment --- p.77
Bibliography --- p.78
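The table of contents above names two sample load distribution algorithms, LOWEST and THRHLD, and a chapter on the effect of the threshold value. As a rough illustration of what a threshold-style, sender-initiated transfer policy looks like, here is a simplified sketch; the function and its parameters are our own names, not the thesis's LD library code.

```python
def choose_transfer_target(local_load, remote_loads, threshold):
    """Sender-initiated threshold policy (simplified sketch).

    If the local load exceeds the threshold, pick the least-loaded
    remote node whose load is still below the threshold; otherwise
    keep the task local. Returns the chosen node id, or None to run
    the task locally.
    """
    if local_load <= threshold:
        return None  # not overloaded; no transfer needed
    candidates = {n: l for n, l in remote_loads.items() if l < threshold}
    if not candidates:
        return None  # all nodes busy; avoid futile transfers
    return min(candidates, key=candidates.get)
```

The threshold controls the trade-off the experiments in Chapter 5.3.3 study: a low threshold triggers many transfers, a high one leaves imbalances uncorrected.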
69

Deterministic, Mutable, and Distributed Record-Replay for Operating Systems and Database Systems

Viennot, Nicolas January 2016 (has links)
Application record and replay is the ability to record application execution and replay it at a later time. Record-replay has many use cases, including diagnosing and debugging applications by capturing and reproducing hard-to-find bugs, providing transparent application fault tolerance by maintaining a live replica of a running program, and offline instrumentation that would be too costly to run in a production environment. Different record-replay systems may offer different levels of replay faithfulness, the strongest being deterministic replay, which guarantees an identical reenactment of the original execution. Such a guarantee requires capturing all sources of nondeterminism during the recording phase. In the general case, such record-replay systems can dramatically hinder application performance, rendering them impractical in certain application domains. Furthermore, various use cases are incompatible with strictly replaying the original execution. For example, in a primary-secondary database scenario, the secondary database would be unable to serve additional traffic while being replicated. No record-replay system fits all use cases. This dissertation shows how to make deterministic record-replay fast and efficient, how broadening replay semantics can enable powerful new use cases, and how choosing the right level of abstraction for record-replay can support distributed and heterogeneous database replication with little effort. We explore four record-replay systems with different semantics enabling different use cases. We first present Scribe, an OS-level deterministic record-replay mechanism that supports multi-process applications on multi-core systems. One of the main challenges is to record the interaction of threads running on different CPU cores in an efficient manner. 
Scribe introduces two new lightweight OS mechanisms, rendezvous points and sync points, to efficiently record nondeterministic interactions such as related system calls, signals, and shared memory accesses. Scribe allows the capture and reproduction of hard-to-find bugs to facilitate debugging and serves as a solid foundation for our two following systems. We then present RacePro, a process race detection system to improve software correctness. Process races occur when multiple processes access shared operating system resources, such as files, without proper synchronization. Detecting process races is difficult due to the elusive nature of these bugs and the heterogeneity of the frameworks involved in them. RacePro is the first tool to detect such process races. RacePro records application executions in deployed systems, allowing offline race detection by analyzing the previously recorded log. RacePro then replays the application execution and forces the manifestation of detected races to check their effect on the application. Upon failure, RacePro reports potentially harmful races to developers. Third, we present Dora, a mutable record-replay system which allows a recorded execution of an application to be replayed with a modified version of the application. Mutable record-replay provides a number of benefits for reproducing, diagnosing, and fixing software bugs. Given a recording and a modified application, finding a mutable replay is challenging, and undecidable in the general case. Despite the difficulty of the problem, we show a very simple but effective algorithm to search for suitable replays. Lastly, we present Synapse, a heterogeneous database replication system designed for Web applications. Web applications are increasingly built using a service-oriented architecture that integrates services powered by a variety of databases. Often, the same data, needed by multiple services, must be replicated across different databases and kept in sync. 
Unfortunately, these databases use vendor specific data replication engines which are not compatible with each other. To solve this challenge, Synapse operates at the application level to access a unified data representation through object relational mappers. Additionally, Synapse leverages application semantics to replicate data with good consistency semantics using mechanisms similar to Scribe.
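The core idea of deterministic replay described in this abstract — log the results of nondeterministic operations during recording, then feed the logged values back in the same order during replay — can be shown with a minimal sketch. This is a toy user-level model of the general technique, not Scribe's kernel-level mechanism, and all names here are our own.

```python
class RecordReplay:
    """Toy record-replay for a single source of nondeterminism.

    In "record" mode, the result of each nondeterministic call is
    appended to a log; in "replay" mode, the logged results are
    returned in the same order, making re-execution deterministic.
    """

    def __init__(self, mode, log=None):
        self.mode = mode                # "record" or "replay"
        self.log = log if log is not None else []
        self._pos = 0                   # replay cursor into the log

    def call(self, fn, *args):
        if self.mode == "record":
            result = fn(*args)          # perform the real call
            self.log.append(result)     # capture its nondeterministic result
            return result
        result = self.log[self._pos]    # replay: return the recorded value
        self._pos += 1
        return result
```

A real OS-level system must also capture the ordering of inter-thread interactions (the role of Scribe's rendezvous and sync points), which this single-threaded sketch sidesteps.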
70

O2-tree: a shared memory resident index in multicore architectures

Ohene-Kwofie, Daniel 06 February 2013 (has links)
Shared memory multicore computer architectures are now commonplace in computing. These can be found in modern desktops and workstation computers and also in High Performance Computing (HPC) systems. Recent advances in memory architecture and in 64-bit addressing allow such systems to have memory sizes of the order of hundreds of gigabytes and beyond. This now allows for realistic development of main memory resident database systems, which still require a memory resident index, such as the T-Tree or the B+-Tree, for fast access to the data items. This thesis proposes a new indexing structure, called the O2-Tree, which is essentially an augmented Red-Black Tree in which the leaf nodes are index data blocks that store multiple pairs of key and value, referred to as "key-value" pairs. The value is either the entire record associated with the key or a pointer to the location of the record. The internal nodes contain copies of the keys that split blocks of the leaf nodes in a manner similar to the B+-Tree. The O2-Tree has the advantage that it can easily be reconstructed by reading only the lowest key value of each leaf-node page, and its size is sufficiently small that it can be dumped and restored much faster. Analysis and a comparative experimental study show that the performance of the O2-Tree is superior to other tree-based index structures with respect to various query operations for large datasets. We also present results which indicate that the O2-Tree outperforms popular key-value stores such as BerkeleyDB and the TreeDB of Kyoto Cabinet for various workloads. The thesis addresses various concurrent access techniques for the O2-Tree on shared memory multicore architectures and gives an analysis of the O2-Tree with respect to query operations, storage utilization, failover, and recovery.
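The leaf-block organization the abstract describes — key-value pairs stored only in leaf blocks, with an ordered index over each block's lowest key routing lookups, and blocks split B+-tree style when full — can be sketched as follows. This is our own simplified model: a sorted list stands in for the Red-Black internal tree, and all names are ours, not the thesis API.

```python
import bisect


class LeafBlockIndex:
    """Sketch of the O2-Tree layout: data lives only in leaf blocks;
    an ordered structure over each block's lowest key routes lookups
    (a sorted list here stands in for the Red-Black internal tree).
    """

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.low_keys = []   # lowest key of each leaf block, in order
        self.blocks = []     # each block: sorted list of (key, value)

    def insert(self, key, value):
        if not self.blocks:
            self.low_keys.append(key)
            self.blocks.append([(key, value)])
            return
        # route to the rightmost block whose lowest key is <= key
        i = max(0, bisect.bisect_right(self.low_keys, key) - 1)
        block = self.blocks[i]
        bisect.insort(block, (key, value))
        self.low_keys[i] = block[0][0]
        if len(block) > self.block_size:       # split, B+-tree style
            mid = len(block) // 2
            new_block = block[mid:]
            del block[mid:]
            self.blocks.insert(i + 1, new_block)
            self.low_keys.insert(i + 1, new_block[0][0])

    def get(self, key):
        if not self.blocks:
            return None
        i = max(0, bisect.bisect_right(self.low_keys, key) - 1)
        for k, v in self.blocks[i]:
            if k == key:
                return v
        return None
```

Note that `low_keys` holds exactly the "lowest key value of each leaf-node page" the abstract mentions, which is why the routing structure can be rebuilt cheaply from the leaf blocks alone.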
