201

Terrier: an embedded operating system using advanced types for safety

Danish, Matthew 08 April 2016 (has links)
Operating systems software is fundamental to modern computer systems: all other applications depend on the correct and timely provision of basic system services. At the same time, advances in programming languages and type theory have led to functional programming languages whose type systems are designed to combine theorem proving with practical systems programming. The Terrier operating system project focuses on low-level systems programming in the context of a multi-core, real-time, embedded system, while taking advantage of a dependently typed programming language named ATS to improve reliability. Terrier is a new point in the design space for an operating system, one that leans heavily on its associated programming language, ATS, to provide safety that has traditionally fallen within the scope of hardware protection and kernel privilege. Terrier keeps far fewer abstractions between program and hardware: its purpose is to put programs in contact with the real hardware, real memory, and real timing constraints as directly as possible, while still retaining the ability to multiplex programs and to provide a reasonable level of safety through static analysis.
202

A covariate-adjusted classification model for multiple biomarkers in disease screening and diagnosis

Yu, Suizhi January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Wei-Wen Hsu / Classification methods based on a linear combination of multiple biomarkers have been widely used to improve accuracy in disease screening and diagnosis. However, covariates such as gender and age at diagnosis are seldom included in these classification procedures. Because biomarkers and patient outcomes are often associated with such covariates in practice, including them may further improve predictive power as well as classification accuracy. In this study, we focus on classification methods for multiple biomarkers that adjust for covariates. First, we propose a covariate-adjusted classification model for multiple cross-sectional biomarkers. Technically, it is a two-stage method: a parametric or non-parametric approach first combines the biomarkers, and covariates are then incorporated using maximum rank correlation estimators. Specifically, the parameter coefficients associated with the covariates can be estimated by maximizing the area under the receiver operating characteristic (ROC) curve. The asymptotic properties of these estimators are also discussed. An intensive simulation study evaluates the performance of the proposed method in finite sample sizes, and colorectal cancer and pancreatic cancer data are used to illustrate the methodology for multiple cross-sectional biomarkers. We further extend the classification method to longitudinal biomarkers. With the use of a natural cubic spline basis, each subject's longitudinal biomarker profile can be characterized by spline coefficients with a significant reduction in the dimension of the data. Specifically, the maximum reduction can be achieved by controlling the number of knots or degrees of freedom in the spline approach, and the coefficients can be obtained by ordinary least squares. Treating each spline coefficient as a "biomarker" in the earlier method, the optimal linear combination of spline coefficients can be obtained by a stepwise method without any distributional assumption; covariates are then included in the second stage by maximizing the corresponding AUC. The proposed method is applied to longitudinal Alzheimer's disease data and to primary biliary cirrhosis data for illustration, and a simulation study assesses its finite-sample performance for longitudinal biomarkers.
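The two-stage structure described above can be illustrated with a small, hypothetical sketch. This is not the estimator developed in the thesis: stage one below uses an off-the-shelf logistic-regression combination of simulated biomarkers, and stage two approximates the maximum rank correlation step with a grid search over a single simulated covariate ("age") coefficient that maximizes the empirical AUC.

```python
# A minimal sketch of the two-stage idea, assuming a logistic-regression
# combination in stage 1 and a one-dimensional AUC grid search in stage 2;
# the thesis uses maximum rank correlation estimators, which this toy
# grid search only approximates. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))            # three biomarkers
age = rng.normal(60, 10, size=n)       # a single covariate (hypothetical "age")
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [1.0, 0.5, -0.8] + 0.05 * (age - 60)))))

# Stage 1: linear combination of the biomarkers (a parametric choice).
stage1 = LogisticRegression().fit(X, y)
score = X @ stage1.coef_.ravel()

# Stage 2: choose the covariate coefficient that maximizes the empirical AUC
# of the combined score; with one covariate a simple grid search suffices.
grid = np.linspace(-1.0, 1.0, 201)
aucs = [roc_auc_score(y, score + g * (age - 60)) for g in grid]
gamma_hat = grid[int(np.argmax(aucs))]
print(f"estimated covariate coefficient: {gamma_hat:.3f}, AUC: {max(aucs):.3f}")
```

With a binary outcome, maximizing the empirical AUC plays the same role as the maximum rank correlation objective the abstract mentions; the grid search here merely conveys the idea for a single covariate.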
203

Finding, Measuring, and Reducing Inefficiencies in Contemporary Computer Systems

Kambadur, Melanie Rae January 2016 (has links)
Computer systems have become increasingly diverse and specialized in recent years. This complexity supports a wide range of new computing uses and users, but not without cost: it has become difficult to maintain the efficiency of contemporary general-purpose computing systems. Computing inefficiencies, which include non-optimal runtimes, excessive energy use, and limits to scalability, are a serious problem that can result in an inability to apply computing to the world's most important problems. Beyond the complexity and vast diversity of modern computing platforms and applications, several factors make improving general-purpose efficiency challenging: multiple levels of the computer system stack must be examined, legacy hardware and software may stand in the way of achieving efficiency, and efficiency must be balanced against reusability, programmability, security, and other goals. This dissertation presents five case studies, each demonstrating different ways in which the measurement of emerging systems can provide actionable advice to help keep general-purpose computing efficient. The first of the five case studies is Parallel Block Vectors, a new profiling method whose fine-grained, code-centric perspective on parallel programs aids both future hardware design and the optimization of software to map better onto existing hardware. Second is a project that defines a new way of measuring application interference across a datacenter's worth of chip multiprocessors, leading to improved scheduling in which applications utilize available hardware resources more effectively. Next is a project that uses the GT-Pin tool to define a method for accelerating the simulation of GPGPUs, ultimately allowing the development of future hardware with fewer inefficiencies. The fourth project is an experimental energy survey that compares and combines the latest energy-efficiency solutions at different levels of the stack to evaluate the state of the art and to find paths forward for future energy-efficiency research. The final project is NRG-Loops, a language extension that allows programs to measure and intelligently adapt their own power and energy use.
204

The design and implementation of a load distribution facility on Mach.

January 1997 (has links)
by Hsieh Shing Leung Arthur. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 78-81).
List of Figures --- p.viii
List of Tables --- p.ix
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background and Related Work --- p.4
Chapter 2.1 --- Load Distribution --- p.4
Chapter 2.1.1 --- Load Index --- p.5
Chapter 2.1.2 --- Task Transfer Mechanism --- p.5
Chapter 2.1.3 --- Load Distribution Facility --- p.6
Chapter 2.2 --- Load Distribution Algorithm --- p.6
Chapter 2.2.1 --- Classification --- p.6
Chapter 2.2.2 --- Components --- p.7
Chapter 2.2.3 --- Stability and Effectiveness --- p.9
Chapter 2.3 --- The Mach Operating System --- p.10
Chapter 2.3.1 --- Mach kernel abstractions --- p.10
Chapter 2.3.2 --- Mach kernel features --- p.11
Chapter 2.4 --- Related Work --- p.12
Chapter 3 --- The Design of Distributed Scheduling Framework --- p.16
Chapter 3.1 --- System Model --- p.16
Chapter 3.2 --- Design Objectives and Decisions --- p.17
Chapter 3.3 --- An Overview of DSF Architecture --- p.17
Chapter 3.4 --- The DSF server --- p.18
Chapter 3.4.1 --- Load Information Module --- p.19
Chapter 3.4.2 --- Movement Module --- p.22
Chapter 3.4.3 --- Decision Module --- p.25
Chapter 3.5 --- LD library --- p.28
Chapter 3.6 --- User-Agent --- p.29
Chapter 4 --- The System Implementation --- p.33
Chapter 4.1 --- Shared data structure --- p.33
Chapter 4.2 --- Synchronization --- p.37
Chapter 4.3 --- Reentrant library --- p.39
Chapter 4.4 --- Interprocess communication (IPC) --- p.42
Chapter 4.4.1 --- Mach IPC --- p.42
Chapter 4.4.2 --- Socket IPC --- p.43
Chapter 5 --- Experimental Studies --- p.47
Chapter 5.1 --- Load Distribution algorithms --- p.47
Chapter 5.2 --- Experimental environment --- p.49
Chapter 5.3 --- Experimental results --- p.50
Chapter 5.3.1 --- Performance of LD algorithms --- p.50
Chapter 5.3.2 --- Degree of task transfer --- p.54
Chapter 5.3.3 --- Effect of threshold value --- p.55
Chapter 6 --- Conclusion and Future Work --- p.57
Chapter 6.1 --- Summary and Conclusion --- p.57
Chapter 6.2 --- Future Work --- p.58
Chapter A --- LD Library --- p.60
Chapter B --- Sample Implementation of LD algorithms --- p.65
Chapter B.1 --- LOWEST --- p.65
Chapter B.2 --- THRHLD --- p.67
Chapter C --- Installation Guide --- p.71
Chapter C.1 --- Software Requirement --- p.71
Chapter C.2 --- Installation Steps --- p.72
Chapter C.3 --- Configuration --- p.73
Chapter D --- User's Guide --- p.74
Chapter D.1 --- The DSF server --- p.74
Chapter D.2 --- The User Agent --- p.74
Chapter D.3 --- LD experiment --- p.77
Bibliography --- p.78
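Appendix B lists sample implementations of two load distribution algorithms, LOWEST and THRHLD, which this record does not define. The sketch below is an assumed reading of a THRHLD-style sender-initiated policy (probe a few random hosts and transfer a task to the first whose load is below a threshold); the thesis's actual algorithms, parameters, and Mach-based mechanisms may differ.

```python
# A toy, sender-initiated threshold policy in the spirit of the THRHLD
# algorithm listed in the appendix; the threshold, probe limit, and host
# names here are illustrative assumptions, not taken from the thesis.
import random

THRESHOLD = 3      # queue length above which a host tries to offload work
PROBE_LIMIT = 3    # maximum number of remote hosts probed per task

def place_task(local_host, hosts, loads):
    """Return the host that should run a new task arriving at local_host."""
    if loads[local_host] < THRESHOLD:
        return local_host                      # local load is acceptable
    candidates = [h for h in hosts if h != local_host]
    for probed in random.sample(candidates, min(PROBE_LIMIT, len(candidates))):
        if loads[probed] < THRESHOLD:
            return probed                      # first under-loaded probe wins
    return local_host                          # give up and execute locally

hosts = ["mach0", "mach1", "mach2", "mach3"]
loads = {"mach0": 5, "mach1": 1, "mach2": 4, "mach3": 0}
print(place_task("mach0", hosts, loads))
```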
205

Deterministic, Mutable, and Distributed Record-Replay for Operating Systems and Database Systems

Viennot, Nicolas January 2016 (has links)
Application record and replay is the ability to record an application's execution and replay it at a later time. Record-replay has many use cases, including diagnosing and debugging applications by capturing and reproducing hard-to-find bugs, providing transparent application fault tolerance by maintaining a live replica of a running program, and offline instrumentation that would be too costly to run in a production environment. Different record-replay systems may offer different levels of replay faithfulness, the strongest being deterministic replay, which guarantees an identical reenactment of the original execution. Such a guarantee requires capturing all sources of nondeterminism during the recording phase. In the general case, such record-replay systems can dramatically hinder application performance, rendering them impractical in certain application domains. Furthermore, various use cases are incompatible with strictly replaying the original execution; for example, in a primary-secondary database scenario, the secondary database would be unable to serve additional traffic while being replicated. No single record-replay system fits all use cases. This dissertation shows how to make deterministic record-replay fast and efficient, how broadening replay semantics can enable powerful new use cases, and how choosing the right level of abstraction for record-replay can support distributed and heterogeneous database replication with little effort. We explore four record-replay systems with different semantics enabling different use cases. We first present Scribe, an OS-level deterministic record-replay mechanism that supports multi-process applications on multi-core systems. One of the main challenges is to record the interaction of threads running on different CPU cores in an efficient manner. Scribe introduces two new lightweight OS mechanisms, rendezvous points and sync points, to efficiently record nondeterministic interactions such as related system calls, signals, and shared memory accesses. Scribe allows the capture and replication of hard-to-find bugs to facilitate debugging and serves as a solid foundation for the two systems that follow. We then present RacePro, a process race detection system for improving software correctness. Process races occur when multiple processes access shared operating system resources, such as files, without proper synchronization. Detecting process races is difficult because of the elusive nature of these bugs and the heterogeneity of the frameworks involved. RacePro is the first tool to detect such process races. RacePro records application executions in deployed systems, allowing offline race detection by analyzing the previously recorded log. RacePro then replays the application execution and forces the manifestation of detected races to check their effect on the application. Upon failure, RacePro reports potentially harmful races to developers. Third, we present Dora, a mutable record-replay system that allows a recorded execution of an application to be replayed with a modified version of the application. Mutable record-replay provides a number of benefits for reproducing, diagnosing, and fixing software bugs. Given a recording and a modified application, finding a mutable replay is challenging, and undecidable in the general case. Despite the difficulty of the problem, we present a simple but effective algorithm for searching for suitable replays. Lastly, we present Synapse, a heterogeneous database replication system designed for Web applications. Web applications are increasingly built using a service-oriented architecture that integrates services powered by a variety of databases. Often, the same data, needed by multiple services, must be replicated across different databases and kept in sync. Unfortunately, these databases use vendor-specific data replication engines that are not compatible with one another. To address this challenge, Synapse operates at the application level, accessing a unified data representation through object-relational mappers. Additionally, Synapse leverages application semantics to replicate data with good consistency semantics using mechanisms similar to Scribe's.
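The core record-then-replay idea can be shown with a deliberately small sketch: during recording, the results of nondeterministic operations are appended to a log, and during replay the same values are fed back so the execution repeats exactly. Scribe's rendezvous points and sync points are in-kernel mechanisms for multi-process, multi-core workloads; nothing in this toy model is drawn from its implementation.

```python
# A toy illustration of deterministic record-replay: nondeterministic results
# are logged while recording and returned verbatim while replaying. This is a
# user-level stand-in, not Scribe's kernel-level mechanism.
import time
import random

class Recorder:
    def __init__(self):
        self.log = []
    def nondet(self, fn):
        value = fn()             # e.g. a clock read, random draw, or syscall result
        self.log.append(value)   # capture the source of nondeterminism
        return value

class Replayer:
    def __init__(self, log):
        self.log = iter(log)
    def nondet(self, fn):
        return next(self.log)    # replay the recorded value; ignore fn entirely

def app(ctx):
    t = ctx.nondet(time.time)
    r = ctx.nondet(lambda: random.randint(0, 9))
    return f"t={t:.0f} r={r}"

rec = Recorder()
original = app(rec)
replayed = app(Replayer(rec.log))
assert original == replayed      # deterministic reenactment of the recording
```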
206

Joint operating agreements : a consideration of legal aspects relevant to joint operating agreements used in Great Britain and Australia by participants thereto to regulate the joint undertaking of exploration for petroleum in offshore areas, with particular reference to their rights and duties

Mildwaters, Kenneth Charles January 1990 (has links)
This thesis examines the joint venture relationship in the context of the exploration phase of the development of an oil and gas field in Great Britain and Australia. It considers a number of issues relating to the relationship between the Participants of a typical Joint Operating Agreement within the legal regimes of Great Britain and Australia. Against this background the main issues addressed in this thesis are: 1. the nature of the joint venture; 2. the relationship between the Participants inter se; and 3. the relationship between the Operator and the Participants. In addressing these issues the following questions are considered: (i) what is a joint venture?; (ii) is a joint venture a separate legal relationship?; (iii) how is a joint venture distinguished from a partnership?; (iv) what is the relationship between the participants inter se?; (v) what rights does a participant of a joint venture have in relation to the joint venture and the other participants?; (vi) what interest, contractual or proprietary, does a participant have in the joint venture and its property?; (vii) what duties does a participant owe to the joint venture and the other participants?; and (viii) what is the legal position when a participant defaults in complying with its duties?
207

Variable capture levels of carbon dioxide from natural gas combined cycle power plant with integrated post-combustion capture in low carbon electricity markets

Errey, Olivia Claire January 2018 (has links)
This work considers the value of flexible power provision from natural gas-fired combined cycle (NGCC) power plants operating post-combustion carbon dioxide (CO2) capture in low carbon electricity markets. Specifically, it assesses the value of the flexibility gained by varying the CO2 capture level, and thus the specific energy penalty of capture and the resulting net electricity export. The potential value of this flexible operation is quantified under different electricity market scenarios, given the corresponding variations in electricity export and CO2 emissions. A quantified assessment of a natural gas-fired power plant integrated with amine-based post-combustion capture and compression is undertaken through the development of an Aspen Plus simulation. To enable evaluation of flexible operation, the simulation was developed with the facility to model off-design behaviour in the steam cycle, amine capture unit and CO2 compression train. The simulation is ultimately used to determine relationships between CO2 capture level and the total specific electricity output penalty (EOP) of capture for different plant configurations. Based on this relationship, a novel methodology for maximising net plant income by optimising the operating capture level is proposed and evaluated. This methodology provides an optimisation approach for power plant operators given electricity market stimuli, namely electricity prices, fuel prices and carbon reduction incentives. The techno-economic implications of capture level optimisation are considered in three low carbon electricity market case studies: (1) a CO2 price operating in parallel to wholesale electricity selling prices; (2) a proportional subsidy for low carbon electricity, taken to be the fraction of plant electrical output equal to the capture level; and (3) a subsidy for low carbon electricity based on a counterfactual for net plant CO2 emissions (similar to typical approaches for implementing an Emissions Performance Standard). The incentives for variable capture levels are assessed in each market study, with the value of optimum capture level operation quantified both for plant operators and for the wider electricity market. All market case studies indicate that variable capture is likely to increase plant revenue throughout the range of market prices considered. Different market approaches, however, lead to different valuations of flexible power provision and therefore different operating outcomes.
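The capture-level optimisation can be sketched schematically for the CO2-price case. The plant figures, prices, and EOP curve below are invented purely to show the shape of the calculation; in the thesis the EOP-versus-capture-level relationship comes from the Aspen Plus simulation, not from an assumed curve.

```python
# A schematic of capture-level optimisation under a CO2 price (market case 1).
# All numbers and the EOP curve are illustrative assumptions, not thesis data.
import numpy as np

P_GROSS = 900.0          # gross power output, MW (assumed)
BASE_CO2 = 350.0         # uncaptured CO2 flow, t/h (assumed)
ELEC_PRICE = 60.0        # wholesale electricity price, GBP/MWh (assumed)
CO2_PRICE = 80.0         # carbon price, GBP/tCO2 (assumed)
FUEL_COST = 30000.0      # fuel plus fixed operating cost, GBP/h (assumed)

def eop(capture):
    """Hypothetical electricity output penalty, MWh lost per tCO2 captured."""
    return 0.30 + 0.60 * capture ** 4          # penalty rises steeply near full capture

def net_income(capture):
    captured = BASE_CO2 * capture
    net_power = P_GROSS - eop(capture) * captured
    emitted = BASE_CO2 - captured
    return net_power * ELEC_PRICE - emitted * CO2_PRICE - FUEL_COST

levels = np.linspace(0.0, 0.99, 100)
best = levels[int(np.argmax([net_income(c) for c in levels]))]
print(f"income-maximising capture level under these assumptions: {best:.2f}")
```

With these made-up numbers the income-maximising capture level sits below full capture, which is the kind of interior optimum the methodology is designed to locate as prices change.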
208

O2-tree: a shared memory resident index in multicore architectures

Ohene-Kwofie, Daniel 06 February 2013 (has links)
Shared memory multicore computer architectures are now commonplace in computing. They can be found in modern desktops and workstation computers and also in High Performance Computing (HPC) systems. Recent advances in memory architecture and in 64-bit addressing allow such systems to have memory sizes of the order of hundreds of gigabytes and beyond. This makes the development of main-memory-resident database systems realistic, but such systems still require a memory-resident index, such as the T-Tree or the B+-Tree, for fast access to the data items. This thesis proposes a new indexing structure, called the O2-Tree, which is essentially an augmented Red-Black Tree in which the leaf nodes are index data blocks that store multiple "key-value" pairs; the value is either the entire record associated with the key or a pointer to the location of the record. The internal nodes contain copies of the keys that split blocks of the leaf nodes, in a manner similar to the B+-Tree. The O2-Tree has the advantage that it can easily be reconstructed by reading only the lowest key of each leaf-node page; this representation is sufficiently small that it can be dumped and restored much faster. Analysis and a comparative experimental study show that the performance of the O2-Tree is superior to other tree-based index structures with respect to various query operations on large datasets. We also present results indicating that the O2-Tree outperforms popular key-value stores such as BerkeleyDB and Kyoto Cabinet's TreeDB for various workloads. The thesis addresses various concurrent access techniques for the O2-Tree on shared memory multicore architectures and analyses the O2-Tree with respect to query operations, storage utilization, failover and recovery.
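A drastically simplified sketch of the leaf-block layout described above follows. It keeps only the idea that sorted key-value pairs live in fixed-capacity leaf blocks and that a small list of per-block minimum keys is enough both to route lookups and to rebuild the index; the real O2-Tree arranges those separator keys in an augmented red-black tree and supports concurrent access, neither of which is modelled here.

```python
# A simplified stand-in for the O2-Tree layout: leaf blocks hold sorted
# key-value pairs and the "index" keeps only each block's minimum key,
# mirroring the property that the structure can be rebuilt from the lowest
# key of each leaf page. The red-black tree over separators and all
# concurrency control are deliberately omitted.
from bisect import bisect_right, insort

BLOCK_SIZE = 4                       # max key-value pairs per leaf block (assumed)

class LeafIndex:
    def __init__(self):
        self.blocks = [[]]           # each block: sorted list of (key, value)
        self.min_keys = [None]       # minimum key of each block (separators)

    def _locate(self, key):
        if self.min_keys[0] is None:
            return 0
        return max(0, bisect_right(self.min_keys, key) - 1)

    def put(self, key, value):
        i = self._locate(key)
        insort(self.blocks[i], (key, value))
        self.min_keys[i] = self.blocks[i][0][0]
        if len(self.blocks[i]) > BLOCK_SIZE:         # split an overfull block
            half = self.blocks[i][BLOCK_SIZE // 2:]
            self.blocks[i] = self.blocks[i][:BLOCK_SIZE // 2]
            self.blocks.insert(i + 1, half)
            self.min_keys.insert(i + 1, half[0][0])

    def get(self, key):
        for k, v in self.blocks[self._locate(key)]:
            if k == key:
                return v
        return None

idx = LeafIndex()
for k in [5, 1, 9, 3, 7, 2, 8]:
    idx.put(k, f"rec{k}")
print(idx.get(7), idx.get(4))        # -> rec7 None
```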
209

Orthopaedic surgical skills: examining how we train and measure performance in wire navigation tasks

Long, Steven A. 01 May 2019 (has links)
Until recently, the model for training new orthopaedic surgeons was referred to as "see one, do one, teach one". Resident surgeons acquired their surgical skills by observing attending surgeons in the operating room and then attempted to replicate what they had observed on new patients, under the supervision of more experienced surgeons. The operating room is far from an ideal learning environment, however, because learning there adds time to surgical procedures and puts patients at increased risk of surgical errors. Programs are slowly switching to a model that involves simulation-based training outside the operating room. Wire navigation is one key orthopaedic skill that has traditionally been difficult to train in a simulated environment. Our group has developed a radiation-free wire navigation simulator to help train residents in this key skill. For simulation training to be fully adopted by the orthopaedic community, strong evidence that it benefits a surgeon's performance must first be established. The aim of this work is to examine how training with the wire navigation simulator can improve a resident's wire navigation performance. The work also examines the metrics used to evaluate a resident's performance in a simulated environment and in the operating room, to understand which metrics best capture wire navigation performance. In the first study presented, simulation training is used to improve first-year residents' wire navigation performance in a mock operating room. The results show that, depending on how the training was implemented, residents were able to significantly reduce their tip-apex distance compared with a group that received simple didactic training. The study also showed that performance on the simulator was correlated with performance in the mock operating room, helping to establish the transfer validity of the simulator, a key component in validating a simulation model. The second study presents a model for using the simulator as a platform on which a variety of wire navigation procedures can be developed. In this study, the simulator platform, originally intended for hip wire navigation, was extended and modified to train residents in placing a wire across the iliosacral joint. A pilot study with six residents from the University of Iowa showed that the platform could be used for training other applications and that it was accepted by the residents. The third study examined wire navigation performance in the operating room. Here, a new performance metric was developed that measures decision-making errors made during a wire navigation procedure. This new metric was combined with the other metrics of wire navigation performance, such as tip-apex distance, into a composite score, which was found to have a strong correlation (R² = 0.79) with surgical experience. In the final study, the wire navigation simulator was taken to a national fracture course to collect data on a large sample of residents. Three groups were created: a baseline group, a group that received training on the simulator, and a third group that observed the simulator training. The results showed that the training improved residents' overall scores compared with the baseline group. The distribution of resident performance across groups also shows that a large portion of residents who did not receive training fell below what might be considered competent performance. Further studies will evaluate how this training affects performance in the operating room.
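The abstract does not give the composite-score formula, so the sketch below is only a plausible illustration of how normalised wire-navigation metrics might be combined into a single score; the metric caps, the weights, and the inclusion of a fluoroscopic image count are all assumptions.

```python
# A purely illustrative composite score for wire navigation performance; the
# caps, weights, and the image-count term are assumptions, not the thesis's
# published scoring scheme.
def composite_score(tip_apex_mm, decision_errors, fluoro_images,
                    weights=(0.5, 0.3, 0.2)):
    """Map each metric onto [0, 1] (higher is better) and combine linearly."""
    tad = max(0.0, 1.0 - tip_apex_mm / 25.0)       # 25 mm taken as a poor outcome
    err = max(0.0, 1.0 - decision_errors / 10.0)   # 10 decision errors as a cap
    img = max(0.0, 1.0 - fluoro_images / 100.0)    # 100 fluoro shots as a cap
    w_tad, w_err, w_img = weights
    return w_tad * tad + w_err * err + w_img * img

# Example: a mid-level performance under these assumed scales.
print(round(composite_score(tip_apex_mm=12, decision_errors=2, fluoro_images=40), 3))
```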
210

TRADE-OFF BALANCING FOR STABLE AND SUSTAINABLE OPERATING ROOM SCHEDULING

Abedini, Amin 01 January 2019 (has links)
The implementation of the mandatory alternative payment model (APM) guarantees savings for Medicare regardless of participating hospitals' ability to reduce spending, shifting the cost-minimization burden from insurers onto hospital administrators. Surgical interventions account for more than 30% of hospitals' total cost and more than 40% of their total revenue, with a cost structure consisting of nearly 56% direct cost; large cost reductions are therefore possible through efficient operations management. However, optimizing operating room (OR) schedules is extraordinarily challenging because of the complexities involved in the process. We present new algorithms and managerial guidelines to address the problem of OR planning and scheduling with disturbances in demand and case times, and inconsistencies among the performance measures. We also present an extension of these algorithms that addresses production scheduling for sustainability. We demonstrate the effectiveness and efficiency of these algorithms via simulation and statistical analyses.
