141

Cost standards in operating budget preparation and administration

Salim, Monir M., January 1965 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1965. / Vita. Typescript. Abstracted in Dissertation Abstracts, v. 25 (1965) no. 10, p. 5613-14. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
142

The advanced features of Mac OS X and their benefits to the design community

Webb, Jason N. January 2003 (has links)
Thesis (M.F.A.)--Rochester Institute of Technology, 2003. / Title from accompanying material.
143

Effective interprocess communication (IPC) in a real-time transputer network

Bor, Mehmet January 1994 (has links)
The thesis describes the design and implementation of an interprocess communication (IPC) mechanism within a real-time distributed operating system kernel (RT-DOS) designed for a transputer-based network. The requirements of real-time operating systems are examined, and existing design and implementation strategies are described. Particular attention is paid to one of the object-oriented techniques, although it is concluded that these techniques are not feasible for the chosen implementation platform. Studies of a number of existing operating systems are reported. The choices for various aspects of operating system design, and their influence on the IPC mechanism to be used, are elucidated. The actual design choices are related to the real-time requirements, and the implementation that has been adopted is described.
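The kernel-level message passing the abstract describes can be illustrated with a minimal sketch, here using Python threads. This is not the RT-DOS design itself: the `Channel` class below is a hypothetical stand-in showing the blocking, rendezvous-style communication that transputer channels provide, where a send does not complete until the matching receive has taken the message.

```python
import threading
import queue

class Channel:
    """Point-to-point blocking channel: send() completes only after
    the receiver has taken the message (rendezvous semantics)."""

    def __init__(self):
        self._slot = queue.Queue(maxsize=1)  # holds the in-flight message
        self._ack = queue.Queue(maxsize=1)   # signals receipt back to sender

    def send(self, msg):
        self._slot.put(msg)   # hand the message over
        self._ack.get()       # block until the receiver has taken it

    def recv(self):
        msg = self._slot.get()
        self._ack.put(None)   # release the blocked sender
        return msg

ch = Channel()
received = []

def consumer():
    for _ in range(3):
        received.append(ch.recv())

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    ch.send(i)   # each send blocks until the consumer receives
t.join()
```

The two internal queues give the synchronous hand-off; a real transputer kernel implements the same rendezvous in hardware-assisted channel words rather than with buffered queues.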
144

Operating system scheduling optimization

Anderson, George Georgevich 28 May 2013 (has links)
D.Phil. (Electrical and Electronic Engineering) / This thesis explores methods for improving, or optimizing, Operating System (OS) scheduling. We first study the problem of tuning an OS scheduler by setting the various parameters, or knobs, it makes available. This problem has not been addressed extensively in the literature, and has never been solved for the default Linux OS scheduler. We present three methods for tuning an Operating System scheduler to improve the quality of scheduling, leading to better performance for workloads. The first method is based on Response Surface Methodology (RSM), the second on Particle Swarm Optimization (PSO), and the third on the Golden Section method. We test the proposed methods in experiments with suitable benchmarks and validate their viability. Results indicate significant gains in execution time for workloads tuned with these methods over workloads running under schedulers with default, unoptimized tuning parameters. The gains from RSM-based settings over default scheduling parameter settings are limited only by the type of workload (how much time it needs to execute); gains of up to 16.48% were obtained, and even more are possible, as described in the thesis. Comparing PSO with Golden Section, PSO produced better scheduling parameter settings but took longer to do so, while Golden Section produced slightly worse parameter settings much faster. We also study a problem critical to scheduling on modern Central Processing Units (CPUs). Modern CPUs have multicore designs, with more than one processing core on a single chip; these are known as Chip Multiprocessors (CMPs). The CMP is now the standard type of CPU for many different types of computers, including Personal Computers.
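The Golden Section method named in the abstract can be sketched briefly. The code below is a generic golden-section minimizer, not the thesis's implementation; `run_benchmark` is a hypothetical stand-in for timing a workload at a given scheduler knob value, assuming the execution time is unimodal in that knob.

```python
# Inverse golden ratio, ~0.618: each step shrinks the search interval
# by this factor while reusing one interior evaluation point.
GR = (5 ** 0.5 - 1) / 2

def golden_section_min(f, lo, hi, tol=1e-3):
    """Minimize a unimodal function f over [lo, hi] to width tol."""
    a, b = lo, hi
    c = b - GR * (b - a)
    d = a + GR * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c            # minimum lies in [a, d]
            c = b - GR * (b - a)
        else:
            a, c = c, d            # minimum lies in [c, b]
            d = a + GR * (b - a)
    return (a + b) / 2

def run_benchmark(knob):
    # Hypothetical cost model standing in for a real benchmark run:
    # workload execution time, minimized at knob = 6.0.
    return (knob - 6.0) ** 2 + 1.0

best = golden_section_min(run_benchmark, 0.0, 20.0)
```

In the tuning setting each call to `f` is a full benchmark run, which is why the abstract's observation holds: golden section needs few evaluations and converges fast, while population methods like PSO spend more runs exploring but can land on better settings.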
145

Operating room nursing science learning programmes in South Africa

Prince, Jacqueline Yvonne January 2007 (has links)
Operating room nurses form the cornerstone of the operating room because perioperative care of the patient rests mainly in the hands of the nursing personnel. Unique challenges face nurses functioning in the stressful surgical environment, where anticipating, preventing and coping with life-threatening situations is the order of the day. The operating room nurse must be knowledgeable, skilled and alert, as he/she is held accountable for all acts of commission and omission. To ensure that nurses are appropriately educated and trained, and able to keep pace with the changing technology in the operating room, it is essential that learning programmes meet the minimum standards for registration as prescribed by the South African Nursing Council. Regular reviewing and evaluation of learning programmes by specialist nursing educationists are therefore essential to ensure that the standards of education and training are maintained and upgraded where required. The aim of this study is to explore and describe the various Operating Room Nursing Science Learning Programmes offered at accredited Higher Education Institutions and utilized for the education and training of operating room nursing students in South Africa. The research is based on a qualitative paradigm, and its theoretical grounding is found in Bergman's model for professional accountability (Bergman, 1982:8). A document analysis of five approved comprehensive Operating Room Nursing Science Learning Programmes from higher education institutions in South Africa (nursing colleges and universities) was carried out, together with a sixth programme, the Operating Theatre Learning Programme, as suggested by the Standard Generating Body. Requests for permission were forwarded to the management of the selected colleges or universities for inclusion of the respective programmes in the study.
The researcher formulated and utilized thirty-four essential criteria derived from three documents: the first, a document entitled "Proposed Standards for Nursing and Midwifery Qualifications", submitted to the SANC and SAQA by the SGB for Nursing and Midwifery (2001-2004); the second, the Public and Private Higher Education Institutions format template of criteria for the Generation and Evaluation of Qualifications and Standards within the National Qualifications Framework (SAQA, 1430/00); and the third, the most relevant criteria from the list of criteria for curriculum development indicated by the South African Nursing Council. Various tables were compiled to reflect the findings of the document analysis according to the criteria indicated above, providing a clear and broad overview of the specific data in the respective six Operating Room Nursing Science Learning Programmes utilized in the study. In conclusion, recommendations for a broad macro-curriculum were made to facilitate the formulation of programmes in Operating Room Nursing Science relevant to the South African context.
146

Reflectarray Antennas: Operating Mechanisms and Remedies for Problem Aspects

Almajali, E'qab Rateb Fayeq January 2014 (has links)
Reflectarrays that emulate paraboloidal main-reflectors, and hyperboloidal or ellipsoidal sub-reflectors, have undergone a great deal of development over the past two decades. More recently, research on the topic has concentrated on overcoming some remaining disadvantages, re-examining certain design issues, and extending reflectarray functionality. This thesis concerns itself with fixed-beam offset-fed single-layer main-reflectarrays and sub-reflectarrays comprised of square or rectangular variable size conducting elements. Both full-wave analyses and experiment are used in all the deliberations. In order to examine reflectarray operating mechanisms the thesis first describes a component-by-component technique whereby the role of the various reflectarray parts can be assessed by determining their individual and aggregate contributions to the reflectarray near- and far-fields. This technique is used to diagnose the fact that feed-image-lobes that appear at off-centre frequencies are caused not only by the groundplane as first thought, but by an imbalance in the complex currents on the patches and groundplane at such frequencies. The use of sub-wavelength elements is shown to suppress such unwanted lobes. The thesis then uses receive- and transmit-modes analysis to show that beam squint at off-centre frequencies, often not accounted for when stating the gain bandwidth of a reflectarray, is due to the shifting of the true focal points away from the geometrical one at these frequencies. It is demonstrated that a two-feed reflectarray arrangement is capable of eliminating beam squint, and that the use of smaller focal length to aperture size (F/D) ratios removes the grating lobes that can appear in such two-feed reflectarrays due to clustering of the aperture amplitude distribution. 
Finally, the thesis studies the effect of the fact that the angle of incidence of the feed fields on the various reflectarray elements is not the same for all elements, even though this is most often assumed when using element reflection phase versus element size databases in performing reflectarray designs. Careful full-wave analysis reveals that it is not only the dependence of element reflection phase on incidence angle that is important, but that the individual element pattern beamwidths change and distort as this angle increases. This matters not only for the coupling of the feed fields to the elements, but also for the angular sector within which the reradiated fields are significant. Thus sub-reflectarrays, whose radiation patterns are considerably wider than those of main-reflectarrays, are more susceptible to incidence angle effects. It is shown that the use of sub-wavelength elements in a reflectarray largely ensures its immunity to such effects.
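The fixed-beam phasing that such reflectarrays implement can be sketched with the standard textbook relation (not taken from the thesis itself): each element must add the phase phi_i = k0 * (d_i - sin(theta_b) * (x_i*cos(phi_b) + y_i*sin(phi_b))), where d_i is the feed-to-element path length and (theta_b, phi_b) is the desired beam direction. The function and values below are illustrative.

```python
import math

def required_phase(x, y, feed, wavelength, theta_b=0.0, phi_b=0.0):
    """Reflection phase (radians, in [0, 2*pi)) an element at (x, y, 0)
    must add to collimate the feed's spherical wave into a beam
    along (theta_b, phi_b). Standard reflectarray design relation."""
    k0 = 2 * math.pi / wavelength
    fx, fy, fz = feed  # feed phase-centre position
    # Spatial delay from feed to this element:
    d = math.sqrt((x - fx) ** 2 + (y - fy) ** 2 + fz ** 2)
    # Progressive phase needed to steer the beam off broadside:
    progressive = math.sin(theta_b) * (x * math.cos(phi_b)
                                       + y * math.sin(phi_b))
    return (k0 * (d - progressive)) % (2 * math.pi)

# Illustrative numbers: broadside beam, feed ten wavelengths above
# the array centre, 10 GHz (wavelength 0.03 m).
wl = 0.03
phase_centre = required_phase(0.0, 0.0, (0.0, 0.0, 10 * wl), wl)
```

A design then inverts this: the required phase at each element is looked up in a reflection-phase versus element-size database, which is exactly where the incidence-angle assumption the abstract discusses enters.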
147

Conformance testing of OSI protocols: the class 0 transport protocol as an example

Kou, Tian January 1987 (has links)
This thesis addresses the problem of conformance testing of communication protocol implementations. Test sequence generation techniques for finite state machines (FSMs) have been developed to solve the problem of the high cost of exhaustive testing. These techniques also guarantee complete coverage of an implementation in terms of state transitions and output functions, and therefore provide a sound test of the implementation under test. In this thesis, we have modified and applied three test sequence generation techniques to the class 0 transport protocol. A local tester and executable test sequences for the ISO class 0 transport protocol have been developed on a portable protocol tester to demonstrate the practicality of the test methods and methodologies. The local test is achieved by an upper tester residing on top of the implementation under test (IUT) and a lower tester residing at the bottom of the IUT. Tests are designed based on the state diagram of the IUT. Some methodologies of parameter variation have also been used to test primitive parameters of the implementation. Some problems encountered during the implementation of the testers, and how they were resolved, are also discussed in the thesis. / Faculty of Science / Department of Computer Science / Graduate
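The simplest of the FSM-based techniques the abstract alludes to is a transition tour: a single input sequence that exercises every transition at least once. The sketch below shows the idea on a hypothetical four-transition fragment; it is not the full ISO TP0 automaton, and the greedy walk stands in for the path-search a complete method would use.

```python
# (state, input) -> (next_state, output); a toy fragment only,
# loosely styled after TP0 connection handling.
fsm = {
    ("CLOSED", "T-CONNECT.req"): ("WAIT", "CR"),
    ("WAIT", "CC"): ("OPEN", "T-CONNECT.conf"),
    ("OPEN", "T-DATA.req"): ("OPEN", "DT"),
    ("OPEN", "T-DISCONNECT.req"): ("CLOSED", "DR"),
}

def transition_tour(fsm, start):
    """Greedy walk covering each transition once. A full method would
    re-route through already-tested transitions when stuck."""
    tour, state = [], start
    remaining = list(fsm)          # untested transitions, in table order
    while remaining:
        options = [key for key in remaining if key[0] == state]
        if not options:
            break                  # would need a path search to continue
        key = options[0]
        state, output = fsm[key]
        tour.append((key[1], output))
        remaining.remove(key)
    return tour

tour = transition_tour(fsm, "CLOSED")
# The tour gives the tester both the stimulus sequence (inputs) and
# the expected observations (outputs) to check the IUT against.
```

Stronger techniques (distinguishing sequences, W-method, UIO sequences) additionally verify that each transition lands in the *correct* state, which a plain tour cannot.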
148

Improving Caches in Consolidated Environments

Koller, Ricardo 24 July 2012 (has links)
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one. The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. And second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the workloads sharing a cache increase, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates to wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. 
We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating space wasted through contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
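The deduplication idea in the abstract can be illustrated with a minimal content-addressed cache sketch. This is not the thesis's system: the `DedupCache` class is a hypothetical illustration of storing each distinct block once, keyed by content hash, so identical blocks cached by different consolidated workloads share one slot.

```python
import hashlib

class DedupCache:
    """Toy content-addressed buffer cache shared by several workloads.
    Blocks with identical contents are stored exactly once."""

    def __init__(self):
        self.store = {}   # content hash -> block data (one copy each)
        self.index = {}   # (workload, block_id) -> content hash

    def put(self, workload, block_id, data):
        digest = hashlib.sha256(data).hexdigest()
        # setdefault keeps the first copy; duplicates add only an index entry.
        self.store.setdefault(digest, data)
        self.index[(workload, block_id)] = digest

    def get(self, workload, block_id):
        return self.store[self.index[(workload, block_id)]]

    def unique_blocks(self):
        return len(self.store)

cache = DedupCache()
cache.put("vm1", 0, b"shared library page")
cache.put("vm2", 7, b"shared library page")  # same content, no new copy
cache.put("vm2", 8, b"private data page")
```

Two workloads caching the same library page consume one slot instead of two; a real design must additionally handle eviction, writes that diverge a shared block, and hash-collision safety.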
149

ROC Curves for Ordinal Biomarkers

Peng, Hongying January 2018 (has links)
No description available.
150

A System Generation for a Small Operating System

Pargiter, Luke R., Sayers, Jerry E. 08 April 1992 (has links)
A system generation utility has been developed to assist students in producing IBM PC-based multitasking applications targeted for the small operating system (SOS) developed by Jerry E. Sayers. Our aim is to augment SOS by enabling a student to interactively tailor the characteristics of the operating system to meet the requirements of a particular application. The system allows the user to adjust factors such as the initial state, priority, and scheduling method of concurrently executed tasks and, also, the use of system resources. A custom operating system is produced by invoking a MAKE utility to bind SOS with application-specific code, in addition to intermediate source code created during the system generation process. Testing of the system included implementing an application that concurrently adds the column vectors of a 5 x 5000 matrix. Further testing involves using the system generation utility along with SOS as part of an undergraduate operating systems class at East Tennessee State University.
