351

Managing Memory for Power, Performance, and Thermal Efficiency

Tolentino, Matthew Edward 08 April 2009 (has links)
Extraordinary improvements in computing performance, density, and capacity have driven rapid increases in system energy consumption, motivating the need for energy-efficient performance. Harnessing the collective computational capacity of thousands of these systems can consume megawatts of electrical power, even though many systems may be underutilized for extended periods of time. At scale, powering and cooling unused or lightly loaded systems can waste millions of dollars annually. To combat this inefficiency, we propose system software, control systems, and architectural techniques to improve the energy efficiency of high-capacity memory systems while preserving performance. We introduce and discuss several new application-transparent, memory management algorithms as well as a formal analytical model of a power-state control system rooted in classical control theory we developed to proportionally scale memory capacity with application demand. We present a prototype implementation of this control-theoretic runtime system that we evaluate on sequential memory systems. We also present and discuss why the traditional performance-motivated approach of maximizing interleaving within memory systems is problematic and should be revisited in terms of power and thermal efficiency. We then present power-aware control techniques for improving the energy efficiency of symmetrically interleaved memory systems. Given the limitations of traditional interleaved memory configurations, we propose and evaluate unorthodox, asymmetrically interleaved memory configurations. We show that when coupled with our control techniques, significant energy savings can be achieved without sacrificing application performance or memory bandwidth. / Ph. D.
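The capacity-scaling idea in this abstract can be sketched as a simple proportional controller that keeps just enough memory ranks online to cover demand plus a headroom margin. This is an illustrative model only; the function name, parameters, and rank granularity are assumptions, not the dissertation's actual control law.

```python
def target_online_ranks(demand_pages, pages_per_rank,
                        min_ranks=1, max_ranks=8, headroom=0.25):
    """Illustrative proportional capacity controller: choose how many
    memory ranks to keep powered on for the current demand plus a
    headroom margin; the remaining ranks become candidates for a
    low-power state."""
    target_pages = int(demand_pages * (1 + headroom))
    needed = -(-target_pages // pages_per_rank)  # ceiling division
    return max(min_ranks, min(max_ranks, needed))

# A hypothetical 1 GiB rank with 4 KiB pages = 262,144 pages per rank
print(target_online_ranks(300_000, 262_144))  # → 2
```

A runtime system would invoke a step like this periodically, transitioning ranks above the target into a low-power state and bringing them back as demand grows.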
352

An Analysis of Fare Collection Costs on Heavy Rail and Bus Systems in the U.S.

Plotnikov, Valeri 12 October 2001 (has links)
In this research, an effort is made to analyze the costs of fare collection on heavy rail and motorbus systems in the U.S. Since existing ticketing and fare collection (TFC) systems are major elements of transit infrastructure and several new alternative TFC technologies are available on the market, the need arises to evaluate the performance of existing TFC systems. However, very little research has so far been done to assess the impacts of TFC technologies on capital and operating expenses in public transit. The two objectives of this research are: (1) to formulate a conceptual evaluation framework and a plan to assess the operating costs of existing TFC systems in transit, and (2) to analyze, with the aid of this framework and plan, the operating expenses associated with existing TFC systems on heavy rail and motorbus transit in the U.S. The research begins with a review of the current state of knowledge in the areas of transit TFC evaluation, the economics of public transit operations, and fare collection practices and technologies. This review helps to determine the scope of work related to assessing TFC operating costs in public transit and provides the basis for developing a conceptual evaluation framework and an evaluation plan. Next, the research presents a systematic approach to defining and describing alternative TFC systems and suggests that the major TFC system determinants are payment media, fare media, TFC equipment, and transit technology (mode). Following this is the development of measures of effectiveness for evaluating alternative TFC systems; these measures assess the cost-effectiveness and labor-intensiveness of TFC operations. A TFC System Technology Index is then developed, reflecting the fact that TFC systems may consist of different sets of TFC technologies, both traditional and innovative.
Finally, this research presents statistical results that support the hypothesis that TFC operating costs are related to transit demand, transit technology (mode) and TFC technologies. These results further suggest that: (1) TFC operating costs per unlinked passenger trip on heavy rail systems are higher than on motorbus systems and (2) TFC operating costs per unlinked passenger trip tend to increase as the use of non-electronic fare media increases. Actions for further research are also recommended. / Ph. D.
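The cost-effectiveness measure discussed above, TFC operating cost per unlinked passenger trip, is simple enough to state as code; the dollar and ridership figures below are hypothetical, chosen only to illustrate the direction of finding (1):

```python
def tfc_cost_per_trip(tfc_operating_cost, unlinked_trips):
    """Cost-effectiveness measure: TFC operating cost per
    unlinked passenger trip."""
    return tfc_operating_cost / unlinked_trips

# Hypothetical annual figures for two systems (for illustration only)
rail = tfc_cost_per_trip(12_000_000, 150_000_000)
bus = tfc_cost_per_trip(4_500_000, 90_000_000)
print(f"heavy rail: ${rail:.3f}/trip, motorbus: ${bus:.3f}/trip")
```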
353

Improving Operating System Security, Reliability, and Performance through Intra-Unikernel Isolation, Asynchronous Out-of-kernel IPC, and Advanced System Servers

Sung, Mincheol 28 March 2023 (has links)
Computer systems are vulnerable to security exploits, and the security of the operating system (OS) is crucial as it is often a trusted entity that applications rely on. Traditional OSs have a monolithic design where all components are executed in a single privilege layer, but this design is increasingly inadequate as OS code sizes have become larger and expose a large attack surface. Microkernel OSs and multiserver OSs improve security and reliability through isolation, but they come at a performance cost due to crossing privilege layers through IPCs, system calls, and mode switches. Library OSs, on the other hand, implement kernel components as libraries which avoids crossing privilege layers in performance-critical paths and thereby improves performance. Unikernels are a specialized form of library OSs that consist of a single application compiled with the necessary kernel components, and execute in a single address space, usually atop a hypervisor for strong isolation. Unikernels have recently gained popularity in various application domains due to their better performance and security. Although unikernels offer strong isolation between each instance due to virtualization, there is no isolation within a unikernel. Since the model eliminates the traditional separation between kernel and user parts of the address space, the subversion of a kernel or application component will result in the subversion of the entire unikernel. Thus, a unikernel must be viewed as a single unit of trust, reducing security. The dissertation's first contribution is intra-unikernel isolation: we use Intel's Memory Protection Keys (MPK) primitive to provide per-thread permission control over groups of virtual memory pages within a unikernel's single address space, allowing different areas of the address space to be isolated from each other. We implement our mechanisms in RustyHermit, a unikernel written in Rust. 
Our evaluations show that the mechanisms have low overhead and retain the unikernel's low system-call latency: a 0.6% slowdown on applications, including memory- and compute-intensive benchmarks as well as micro-benchmarks. The multiserver OS, a type of microkernel OS, has high parallelism potential due to its inherent compartmentalization. However, the model suffers from inferior performance: inter-process communication (IPC) between clients and servers requires context switches, which on single-core systems are more expensive than traditional system calls, and which on multi-core systems (now ubiquitous) lead to poor resource utilization. The dissertation's second contribution is Aoki, a new approach to IPC design for microkernel OSs. Aoki incorporates non-blocking concurrency techniques to eliminate the in-kernel blocking synchronization that causes performance challenges for state-of-the-art microkernels. Aoki's non-blocking (i.e., lock-free and wait-free) IPC design not only improves performance and scalability but also enhances reliability by preventing thread starvation. In a multiserver OS setting, the design also enables stateful servers to reconnect after a failure without loss of IPC state. Aoki solves two problems that have plagued previous microkernel IPC designs: excessive transitions between user and kernel modes, and inefficient recovery from failures. We implement Aoki in the state-of-the-art seL4 microkernel. Results from our experiments show that Aoki outperforms baseline seL4 in both fastpath IPC and cross-core IPC, with improvements of 2.4x and 20x, respectively. The Aoki IPC design enables system servers for multiserver OSs with higher performance and reliability. The dissertation's third and final contribution is the design of a fault-tolerant storage server and a copy-free file system server.
We build both servers using NetBSD OS's rumprun unikernel, which provides robust isolation through hardware virtualization, and is capable of handling a wide range of storage devices including NVMe. Both servers communicate with client applications using Aoki's IPC design, which yields scalable IPC. In the case of the storage server, the IPC also enables the server to transparently recover from server failures and reconnect to client applications, with no loss of IPC state and no significant overhead. In the copy-free file system server's design, applications grant the server direct memory access to file I/O data buffers for high performance. The performance problems solved in the server designs have challenged all prior multiserver/microkernel OSs. Our evaluations show that both servers have a performance comparable to Linux and the rumprun baseline. / Doctor of Philosophy / Computer security is extremely important, especially when it comes to the operating system (OS) – the foundation upon which all applications execute. Traditional OSs adopt a monolithic design in which all of their components execute at a single privilege level (for achieving high performance). However, this design degrades security as the vulnerability of a single component can be exploited to compromise the entire system. The problem is exacerbated when the OS codebase becomes large, as is the current trend. To overcome this security challenge, researchers have developed alternative OS models such as microkernels, multiserver OSs, library OSs, and recently, unikernels. The unikernel model has recently gained popularity in application domains such as cloud computing, the internet of things (IoT), and high-performance computing due to its improved security and performance. In this model, a single application is compiled together with its necessary OS components to produce a single, small executable image. 
Unikernels execute atop a hypervisor, a software layer that provides strong isolation between unikernels, usually by leveraging special hardware instructions. Both ideas improve security. The dissertation's first contribution improves the security of unikernels by enabling isolation within a unikernel. This allows different components of a unikernel (e.g., safe code, unsafe code, kernel code, user code) to be isolated from each other. Thus, the vulnerability of a single component cannot be exploited to compromise the entire system. We used Intel's Memory Protection Keys (MPK), a hardware feature of Intel CPUs, to achieve this isolation. Our implementation of the technique and experimental evaluations revealed that the technique has low overhead and high performance. The dissertation's second contribution improves the performance of multiserver OSs. This OS model has excellent potential for parallelization, but its performance is hindered by slow communication between applications and OS subsystems (which are programmed as clients and servers, respectively). We develop Aoki, an Inter-Process Communication (IPC) technique that enables faster and more reliable communication between clients and servers in multiserver OSs. Our implementation of Aoki in the state-of-the-art seL4 microkernel and evaluations reveal that the technique improves IPC latency over seL4's by as much as two orders of magnitude. The dissertation's third and final contribution is the design of two servers for multiserver OSs: a storage server and a file system server. The servers are built as unikernels running atop the Xen hypervisor and are powered by Aoki's IPC mechanism for communication between the servers and applications. The storage server is designed to recover its state after a failure with no loss of data and little overhead, and the file system server is designed to communicate with applications with little overhead. 
Our evaluations show that both servers achieve their design goals: they have comparable performance to that of state-of-the-art high-performance OSes such as Linux.
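As a rough model of the non-blocking IPC idea, the sketch below implements a single-producer/single-consumer ring buffer that never blocks: a full or empty channel returns immediately instead of waiting. This is a generic illustration, not Aoki's actual design; a real kernel implementation would use atomic operations and memory barriers, which Python cannot express.

```python
class SPSCRing:
    """Single-producer/single-consumer ring buffer: a toy model of a
    non-blocking IPC channel. One slot is kept empty to distinguish
    a full ring from an empty one."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def try_send(self, msg):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:        # full: fail fast instead of blocking
            return False
        self.buf[self.tail] = msg
        self.tail = nxt
        return True

    def try_recv(self):
        if self.head == self.tail:  # empty: fail fast instead of blocking
            return None
        msg = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return msg

ch = SPSCRing(4)
assert ch.try_send("open(/etc/hosts)")
print(ch.try_recv())  # → open(/etc/hosts)
```

Because neither side ever waits on the other, a slow or failed peer cannot starve the channel, which is the reliability property the abstract attributes to the non-blocking design.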
354

Design and Evaluation of an Embedded Real-time Micro-kernel

Singh, Kuljeet 26 November 2002 (has links)
This thesis presents the design and evaluation of an operating system kernel specially designed for dataflow software. Dataflow is a style of software architecture that is well suited to control and "signal flow" applications. This architecture involves many small processes and a great deal of inter-process communication, which imposes too much overhead on traditional RTOSes. The thesis describes the design and implementation of the Dataflow Architecture Real-time Kernel (DARK). DARK is a reconfigurable, multithreaded, preemptive operating system kernel that introduces a special data-driven scheduling strategy for dataflow applications. It uses the underlying hardware for high-speed context switching between the kernel and applications, which is five times faster than an ordinary context switch. The kernel's features can be configured to meet performance requirements without changes to the applications. Along with a performance evaluation of DARK, results comparing DARK with two commercial RTOSes, MicroC/OS-II and Analog Devices VDK++, are also provided. / Master of Science
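The data-driven scheduling strategy described above can be illustrated with a toy scheduler in which a node becomes runnable only when all of its input queues hold data. These are assumed semantics for illustration, not DARK's implementation:

```python
# Toy data-driven scheduler: nodes fire when every input queue is
# non-empty, modeling the scheduling idea behind dataflow kernels.
from collections import deque

class Node:
    def __init__(self, name, inputs, fn):
        self.name, self.inputs, self.fn = name, inputs, fn

def run(nodes, queues):
    progress = True
    while progress:                     # repeat until no node can fire
        progress = False
        for n in nodes:
            if all(queues[q] for q in n.inputs):
                args = [queues[q].popleft() for q in n.inputs]
                n.fn(*args)
                progress = True

queues = {"a": deque([2, 3]), "b": deque([10, 20]), "out": deque()}
nodes = [Node("add", ["a", "b"], lambda x, y: queues["out"].append(x + y))]
run(nodes, queues)
print(list(queues["out"]))  # → [12, 23]
```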
355

High Performance Inter-kernel Communication and Networking in a Replicated-kernel Operating System

Ansary, B M Saif 20 January 2016 (has links)
Modern computer hardware platforms are moving towards high core-count and heterogeneous Instruction Set Architecture (ISA) processors to achieve improved performance, as single-core performance has reached its limit. These trends put the current monolithic SMP operating system (OS) under scrutiny in terms of scalability and portability. Properly pairing computing workloads with computing resources has become increasingly arduous with traditional software architectures. One of the most promising emerging operating system architectures is the multi-kernel. Multi-kernels not only address scalability issues but also inherently support heterogeneity; furthermore, they provide an easy way to map computing workloads to the correct type of processing resource in the presence of heterogeneity. Multi-kernels do so by partitioning resources, running independent kernel instances, and cooperating amongst themselves to present a unified view of the system to the application. Popcorn is one of the most prominent multi-kernels today; it is unique in that it runs multiple Linux instances on different cores or groups of cores while providing a unified view of the system, i.e., a Single System Image (SSI). This thesis presents four contributions. First, it introduces a filesystem for Popcorn, a vital part of providing an SSI. Popcorn supports thread/process migration, which requires migration of file descriptors, a capability provided neither by traditional filesystems nor by popular distributed file systems; this work proposes a scalable, messaging-based file-descriptor migration and consistency protocol for Popcorn. Second, multi-kernel OSs rely heavily on a fast, low-latency messaging layer to be scalable. Messaging is even more important in heterogeneous systems, where different types of cores reside on different islands with no shared memory.
Thus, another contribution proposes a fast, low-latency messaging layer to enable communication among heterogeneous processor islands for heterogeneous Popcorn. With advances in networking technology, the newest Ethernet technologies support up to 40 Gbps of bandwidth, but due to scalability issues in monolithic kernels, the number of connections served per second does not scale with this increase in speed. Therefore, the third and fourth contributions address this problem with Snap Bean, a virtual network device, and Angel, an opportunistic load balancer, for Popcorn's network system. With the messaging layer, Popcorn achieves over a 30% performance benefit over OpenCL and the Intel offloading technique (LEO). With NetPopcorn, we achieve 7 to 8 times better performance than vanilla Linux and 2 to 5 times better than the state-of-the-art Affinity Accept. / Master of Science
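The file-descriptor migration mentioned in the first contribution can be sketched as a message exchange in which the origin kernel serializes a descriptor's state and the remote kernel installs a matching entry. The message format and field names below are hypothetical, not Popcorn's actual protocol:

```python
# Sketch of messaging-based file-descriptor migration between two
# kernel instances (hypothetical message format). The origin kernel
# serializes descriptor state; the remote kernel installs an entry so
# a migrated thread can keep using the same fd number.

def migrate_fd(origin_table, remote_table, fd):
    state = origin_table[fd]                  # e.g. path, offset, flags
    msg = {"type": "FD_MIGRATE", "fd": fd, "state": dict(state)}
    # ... in a real system the message crosses the messaging layer ...
    remote_table[msg["fd"]] = msg["state"]    # remote installs the entry
    return msg

origin = {3: {"path": "/var/log/app.log", "offset": 128, "flags": "r"}}
remote = {}
migrate_fd(origin, remote, 3)
print(remote[3]["offset"])  # → 128
```

A consistency protocol would additionally have to keep the two tables in agreement when both kernels touch the descriptor afterwards, which is the harder part the thesis addresses.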
356

Evaluating the Perceived Overhead Imposed by Object-Oriented Programming in a Real-time Embedded System

Bhakthavatsalam, Sumithra 16 June 2003 (has links)
This thesis presents the design and evaluation of an object-oriented (OO) operating system kernel for real-time embedded systems based on a dataflow architecture. Dataflow is a software architecture that is well suited to applications involving signal flows and value transformations. Typically, these systems comprise numerous processes with heavy inter-process communication. The dataflow style has been adopted for the control software of PEBB (Power Electronic Building Block) systems by the Center for Power Electronic Systems (CPES) at Virginia Tech, which is involved in a research effort to modularize and standardize power electronic components. The goal of our research is to design and implement an efficient object-oriented kernel for the PEBB system and compare its performance with that of a non-OO kernel. The thesis presents strategies for efficient OO design and a discussion of how OO performance issues can be ameliorated. We conclude with an evaluation of the advantages gained by using the OO paradigm, both from the standpoint of the classically cited advantages of OO programming and from other crucial aspects. / Master of Science
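One way to frame the "perceived overhead" question the thesis investigates is a micro-benchmark comparing dynamically dispatched method calls with direct function calls. The sketch below uses Python's timeit purely to illustrate the measurement approach; absolute numbers are machine- and language-dependent, and the thesis's own measurements concern an embedded kernel, not Python:

```python
# Illustrative micro-benchmark: method dispatch vs. a direct call.
# Only the relative gap is meaningful, and only as a demonstration
# of the measurement technique.
import timeit

class Filter:
    def step(self, x):
        return 2 * x

def step(x):
    return 2 * x

f = Filter()
oo = timeit.timeit(lambda: f.step(21), number=100_000)
direct = timeit.timeit(lambda: step(21), number=100_000)
print(f"dispatch/direct time ratio: {oo / direct:.2f}x")
```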
357

Replacement decisions with multiple stochastic values and depreciation

Adkins, Roger, Paxson, D. 2016 July 1914 (has links)
Yes / We develop an analytical real-option solution for the after-tax optimal timing boundary of a replaceable asset whose operating cost and salvage value deteriorate stochastically. We construct a general replacement model from which seven other particular models, along with deterministic versions, can be derived. We show that the presence of salvage value and tax depreciation significantly lowers the operating cost threshold that justifies replacement, and thus hastens it. Although increases in operating cost volatility defer replacement, increases in salvage value volatility hasten replacement, albeit modestly, while increases in the correlation between costs and salvage value defer replacement. Reducing the tax rate or the depreciation lifetime, or allowing an investment tax credit, yields mixed results. These results are also compared with those of less complete models and deterministic versions, showing that failure to consider several stochastic variables and taxation in the replacement process may lead to sub-optimal decisions.
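The replacement-timing intuition can be illustrated with a small Monte Carlo sketch in which operating cost follows geometric Brownian motion and the asset is replaced when cost first crosses a threshold. This is an illustrative simulation under assumed parameters, not the paper's analytical real-option model; a lower threshold (as induced by salvage value and tax depreciation) yields earlier replacement.

```python
# Monte Carlo sketch: time until stochastic operating cost first
# crosses a replacement threshold c_star (illustrative parameters).
import math
import random

def first_passage_time(c0, c_star, drift=0.05, vol=0.2,
                       dt=1 / 12, horizon=30):
    """Years until cost (a geometric Brownian motion) first reaches
    c_star, capped at the horizon."""
    c, t = c0, 0.0
    while c < c_star and t < horizon:
        z = random.gauss(0, 1)
        c *= math.exp((drift - 0.5 * vol ** 2) * dt
                      + vol * math.sqrt(dt) * z)
        t += dt
    return t

random.seed(1)
times = [first_passage_time(100, 150) for _ in range(2000)]
print(f"mean replacement time: {sum(times) / len(times):.1f} years")
```

Rerunning with a lower c_star shows the hastening effect the abstract describes; adding a second correlated process for salvage value would be the natural next step toward the paper's two-factor setting.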
358

Development of a tool to test computer protocols

Myburgh, W. D 04 1900 (has links)
Thesis (MSc) -- Stellenbosch University, 2003. / ENGLISH ABSTRACT: Software testing tools simplify and automate the menial work associated with testing. Moreover, for complex concurrent software such as computer protocols, testing tools allow testing on an abstract level that is independent of specific implementations. Standard conformance testing methodologies and a number of testing tools are commercially available, but detailed descriptions of the implementation of such testing tools are not widely available. This thesis investigates the development of a tool for automated protocol testing in the ETH Oberon development environment. The need to develop a protocol testing tool that automates the execution of specified test cases was identified in collaboration with a local company that develops protocols in the programming language Oberon. Oberon is a strongly typed secure language that supports modularisation and promotes a readable programming style. The required tool should translate specified test cases into executable test code supported by a runtime environment. A test case consists of a sequence of input actions to which the software under test is expected to respond by executing observable output actions. A number of issues are considered of which the first is concerned with the representation of test case specifications. For this, a notation was used that is basically a subset of the test specification language TTCN-3 as standardised by the European Telecommunications Standards Institute. The second issue is the format of executable test cases and a suitable runtime environment. A translator was developed that generates executable Oberon code from specified test cases. The compiled test code is supported by a runtime library, which is part of the tool. Due to the concurrent nature of a protocol environment, concurrent processes in the runtime environment are identified. 
Since ETH Oberon supports multitasking in a limited sense, test cases are executed as cooperating background tasks. The third issue is concerned with the interaction between an executing test case and a system under test. It is addressed by an implementation-dependent interface that maps specified test interactions onto real interactions as required by the test context in which an implementation under test operates. A supporting protocol to access the service boundary of an implementation under test remotely, and underlying protocol service providers, are part of a test context. The ETH Oberon system provides a platform that simplifies the implementation of protocol test systems, due to its size and simple task mechanism. Operating system functionality considered essential is pointed out in general terms, since other systems could be used to support such testing tools. In conclusion, directions for future work are proposed. / AFRIKAANSE OPSOMMING (translated): Software testing tools simplify and automate the menial work associated with testing. A testing tool further allows complex concurrent software, such as computer protocols, to be tested at an abstract level that is independent of specific implementations. Standard methods for conformance testing exist and a number of testing tools are commercially available; detailed descriptions of the implementation of such systems, however, are not generally available. This thesis investigates the development of a system for automated testing of protocols in the ETH Oberon development environment. The need to develop a protocol testing system that automates the execution of specified test cases was identified in consultation with a local company that develops protocols in the Oberon programming language. Oberon is a strongly typed language that supports modularisation and promotes a readable programming style.
The testing system must translate specified test cases into executable test code supported by a runtime environment. A test case consists of a sequence of input actions to which the software under test is expected to respond by executing observable output actions. A number of issues are addressed, the first of which concerns the representation of test case specifications. For this, a notation was used that is essentially a subset of the test specification language TTCN-3, standardised by the European Telecommunications Standards Institute. The second issue is the format of executable test cases and a suitable runtime environment. A translator was developed that generates executable Oberon code from specified test cases; the translated test code is supported by a library of runtime functions, which forms part of the system. Because a protocol environment consists of concurrent processes, different types of concurrent processes are identified in a protocol testing system. Since ETH Oberon is a limited multitasking system, test cases are translated into finite automata that execute as cooperating background tasks. The third issue concerns the interaction between a test case being executed and the system under test. It is addressed by an interface that maps specified interactions onto real interactions as required by the context in which an implementation under test executes. A supporting protocol for remotely accessing the service boundary of the implementation under test, and other underlying protocol services, form part of a test context. The ETH Oberon system helps to simplify the implementation of protocol testing systems because of its size and its simple task handler.
Essential operating-system functionality is highlighted in general terms, since other systems could be used to support such testing tools. Finally, proposals for follow-up work are made.
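The tool's core semantics, translating a specified test case into executable steps that drive a system under test and checking the observed outputs, can be sketched as follows. This is a toy model in Python rather than Oberon; the step format and the echo-style system under test are assumptions for illustration, not the tool's actual design:

```python
# Toy model of TTCN-style test execution: a test case is a sequence
# of send/expect steps; the verdict is "pass" only if every observed
# output matches its expectation.

def run_test_case(steps, sut):
    """steps: list of ('send', msg) or ('expect', msg) tuples."""
    for kind, msg in steps:
        if kind == "send":
            sut.send(msg)
        elif kind == "expect":
            if sut.receive() != msg:
                return "fail"
    return "pass"

class EchoSUT:
    """Toy system under test: echoes every message back."""
    def __init__(self):
        self.queue = []
    def send(self, msg):
        self.queue.append(msg)
    def receive(self):
        return self.queue.pop(0)

verdict = run_test_case([("send", "CONNECT"), ("expect", "CONNECT")],
                        EchoSUT())
print(verdict)  # → pass
```

In the real tool the "send" and "expect" actions would cross an implementation-dependent interface to the protocol under test, and the steps would run as cooperating background tasks rather than a simple loop.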
359

PERCEIVED IMPACT OF AMBIENT OPERATING ROOM NOISE BY CERTIFIED REGISTERED NURSE ANESTHETISTS

Cosgrove, Marianne S. 01 January 2019 (has links)
It is widely acknowledged that elevated levels of noise are commonplace in the healthcare environment, particularly in high acuity areas such as the operating room (OR). Excessive ambient noise may pose a threat to patient safety by adversely impacting provider performance and interfering with communication among perioperative care team members. With respect to the certified registered nurse anesthetist (CRNA), increased ambient OR noise may engender distractibility, diminish situation awareness and cause untoward health effects, thereby increasing the possibility for the occurrence of error and patient injury. This research project analytically examines the perceived impact of ambient noise in the operating room by CRNAs. Findings from this study reveal that CRNAs perceive elevated noise to be regularly present in the OR, specifically during the critical emergence phase of the anesthetic. However, CRNAs feel that increased noise only occasionally limits their ability to perform procedures, concentrate and communicate with the perioperative team. OR noise rarely interferes with memory retrieval. CRNAs perceive that noise is sometimes a threat to patient safety but infrequently engenders adverse patient outcomes. CRNAs do not perceive noise in the OR to be detrimental to their health but strongly agree that excessive noise can and should be controlled. Increased ambient OR noise is a veritable reality that may pose a potential threat to patient safety. Further research to identify elevations in noise during critical phases of the anesthetic and delineation of significant contributors to its genesis is warranted.
360

Compliance with standard precautions and occupational exposure reporting among operating room nurses in Australia

Osborne, Sonya Ranee, n/a January 2002 (has links)
Occupational exposures of healthcare workers tend to occur because of inconsistent compliance with standard precautions. Also, incidence of occupational exposure is underreported among operating room personnel. The purpose of this project was to develop national estimates for compliance with standard precautions and occupational exposure reporting practices among operating room nurses in Australia. Data was obtained utilizing a 96-item self-report survey. The Standard Precautions and Occupational Exposure Reporting survey was distributed anonymously to 500 members of the Australian College of Operating Room Nurses. The Health Belief Model was the theoretical framework used to guide the analysis of data. Data was analysed to examine relationships between specific constructs of the Health Belief Model to identify factors that might influence the operating room nurse to undertake particular health behaviours to comply with standard precautions and occupational exposure reporting. Results of the study revealed compliance rates of 55.6% with double gloving, 59.1% with announcing sharps transfers, 71.9% with using a hands-free sharps pass technique, 81.9% with no needle recapping and 92.0% with adequate eye protection. Although 31.6% of respondents indicated receiving an occupational exposure in the past 12 months, only 82.6% of them reported their exposures. The results of this study provide national estimates of compliance with standard precautions and occupational exposure reporting among operating room nurses in Australia. These estimates can now be used as support for the development and implementation of measures to improve practices in order to reduce occupational exposures and, ultimately, disease transmission rates among this high-risk group.
