151
Evaluation of information systems development in the NHS using NIMSAD framework
Kheong Lye, Sue, January 1996
The principal focus of the research was the management of information systems development to support the increased information needs arising from the radical health reforms of 1989. The work was undertaken in collaboration with a purchaser and a provider within the health service. An action research approach was adopted, in which the researcher was actively involved in the development and successful implementation of an information system. Initial findings revealed a variety of factors hindering the purchaser and the provider from successfully developing the intended information systems to support the contracting process required by the reforms. A disparity in relative strengths between the purchaser and the provider was considered a major constraint hindering the purchaser from developing the intended information system and performing their designated role in the new internal market system of the NHS. Through the rapid development of a computer-based information system, the immediate needs of the purchaser and the provider were satisfied, and development of the individuals and the organisation took place.

Subsequent to the development, a reflective post-intervention evaluation was carried out using a conceptual problem solving framework. Three important findings emerged from the systems development effort: [1] the employment of prototyping in the evolutionary development of the intended information system is particularly pertinent and responsive to the uncertain requirements of organisations undergoing change; [2] embracing a flexible blend of expert intervention and facilitation is an important element of the information systems development process; [3] the development of the individuals and the organisation is an intrinsic part of developing information systems.

Using the NIMSAD framework for post-intervention evaluation of the development effort, additional findings were abstracted from critical evaluation of, and reflection on, the adopted approach. The systems development process was evaluated against three identified elements: the problem situation, the problem solving process and the problem solver. The evaluation and reflection revealed deficiencies in the research, which indicate that: [1] appreciation of the context and content of the problem situation increases the level of understanding of the 'problems', leading to the adoption of appropriate methodologies for conducting the problem solving process; [2] the effectiveness of the adopted problem solving process can be enhanced by validating the client's definition of the problem, facilitating involvement from participants, using prototyping innovatively and evaluating the process; [3] the personal characteristics of the problem solver significantly influence the possible solutions to the identified problems.

Contributions from the evaluation of the research effort can be seen in: [1] the suggested reflexive model for action research, with emphasis on evaluation of the actions of the researcher as a problem solver; [2] the need to maintain close links with the client and to communicate disparate perceptions of the problem and the problem situation; [3] the employment of a flexible blend of expert intervention and facilitation (a hybrid approach enables resolution of the problem from a multidisciplinary perspective); [4] a suggestion for further research into the personal characteristics of an effective problem solver.
152
Improving Storage with Stackable Extensions
Guerra, Jorge, 13 July 2012
Storage is a central part of computing. Driven by an exponentially increasing rate of content generation and a widening performance gap between memory and secondary storage, researchers are on a perennial quest for further innovation. This has resulted in novel ways to “squeeze” more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices has opened the door to a whole new class of optimizations to save cost, improve performance, and reduce energy consumption.
In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower-power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data in the most appropriate device to improve performance and save power.
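As a rough sketch of the third extension's placement logic (the policy, thresholds, and names below are illustrative assumptions, not the dissertation's implementation), a hot, randomly accessed block is steered to the SSD while cold or sequentially accessed blocks stay on disk:

```python
# Hypothetical sketch of an access-pattern-driven placement policy;
# thresholds and names are illustrative only.
from collections import defaultdict

class PlacementPolicy:
    def __init__(self, hot_threshold=8):
        self.access_count = defaultdict(int)   # accesses per block
        self.sequential = defaultdict(bool)    # was the last access sequential?
        self.last_block = None                 # previously accessed block id
        self.hot_threshold = hot_threshold

    def record_access(self, block):
        self.access_count[block] += 1
        # A block reached by a +1 stride is treated as sequentially accessed.
        self.sequential[block] = (self.last_block is not None
                                  and block == self.last_block + 1)
        self.last_block = block

    def place(self, block):
        # Hot, randomly accessed blocks benefit from the SSD's fast random
        # reads; sequential or cold blocks stay on the disk, whose sequential
        # throughput is already good and whose capacity is cheap.
        hot = self.access_count[block] >= self.hot_threshold
        return "ssd" if hot and not self.sequential[block] else "disk"

policy = PlacementPolicy()
for b in [5, 9, 9, 9, 9, 9, 9, 9, 9, 6, 7, 8]:
    policy.record_access(b)
print(policy.place(9), policy.place(8))  # -> ssd disk
```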
In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goal without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal. We address these challenges with a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development. Second, we develop a formal theory to assist administrators in deploying storage extensions while guaranteeing that the given high-level goals are satisfied. There are, however, some cases for which our theory is inconclusive. For such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
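The experimental methodology amounts to measuring each candidate extension under the administrator's own workload and keeping the best performer; a minimal sketch of that idea (the stand-in extensions here are hypothetical) might look like:

```python
# Minimal sketch of workload-driven extension selection: time each candidate
# against a replayed workload and keep the fastest. Stand-ins are hypothetical.
import time

def run_workload(extension, workload):
    start = time.perf_counter()
    for op in workload:
        extension(op)                 # apply the extension's I/O path to one op
    return time.perf_counter() - start

def pick_best(extensions, workload):
    timings = {name: run_workload(ext, workload)
               for name, ext in extensions.items()}
    return min(timings, key=timings.get), timings

workload = list(range(100_000))
extensions = {                        # trivial stand-in "extensions"
    "write-sequentializer": lambda op: op * 2,
    "flash-cache": lambda op: op + 1,
}
best, timings = pick_best(extensions, workload)
print(best, timings)
```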
153
PRACTICAL CLOUD COMPUTING INFRASTRUCTURE
Lembke, James A., 12 March 2021
Cloud and parallel computing are fundamental components in the processing of large data sets. Deployments of distributed computers require network infrastructure that is fast, efficient, and secure. Software Defined Networking (SDN) separates the forwarding of network data by switches (the data plane) from the setting and managing of network policies (the control plane). While this separation provides flexibility for setting network policies that govern the establishment of network flows in the data plane, it provides little to no tolerance for failures, whether benign or caused by corrupted or malicious applications. Such failures can cause network flows to be routed incorrectly through the network or stop such flows altogether. Without protection against faults, cloud network providers using SDN risk inefficient allocation of network resources or even data loss. Furthermore, the asynchronous nature of existing SDN protocols provides no mechanism for consistency of network policy updates across multiple switches.

In addition, cloud and parallel applications require an efficient means of accessing local system data (input data sets, temporary storage locations, etc.). While in many cases a process can access this data by making calls directly to a file system (FS) kernel driver, this is not always possible (e.g., when using experimental distributed FSs whose access libraries exist only in user space).

This dissertation provides a design for fault tolerance in SDN and infrastructure for advancing the performance of user space FSs. It is divided into three main parts. The first part describes a fault tolerant, distributed SDN control plane framework. The second part expands the fault tolerant approach to the SDN control plane by providing a practical means for dynamic control plane membership as well as a simple mechanism for controller authentication through threshold signatures. The third part describes an efficient framework for user space FS access.

This research makes three contributions. First, the design, specification, implementation, and evaluation of a method for a fault tolerant SDN control plane that is interoperable with existing control plane applications and requires minimal instrumentation of the data plane runtime. Second, the design, specification, implementation, and evaluation of a mechanism for dynamic SDN control plane membership that ensures consistency of network policy updates and minimizes switch overhead through the use of distributed key generation and threshold signatures. Third, the design, specification, implementation, and evaluation of a user space FS access framework that conforms to the Portable Operating System Interface (POSIX) specification with significantly better performance than existing user space access methods, while requiring no implementation changes for application programmers.
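To illustrate the acceptance rule behind the second contribution, the sketch below approximates it with a t-of-n check over per-controller HMACs; a real threshold signature scheme, as used in the dissertation, would combine shares into a single compact signature, so the keys, names, and quorum rule here are illustrative assumptions only:

```python
# Simplified illustration of a t-of-n acceptance rule for policy updates.
# Real threshold signatures combine shares into one signature; here each
# controller endorses independently with its own HMAC key.
import hmac, hashlib

CONTROLLER_KEYS = {f"ctrl{i}": f"secret-{i}".encode() for i in range(4)}
THRESHOLD = 3  # t of n = 4 controllers must endorse an update

def endorse(controller, update):
    return hmac.new(CONTROLLER_KEYS[controller], update, hashlib.sha256).digest()

def switch_accepts(update, endorsements):
    valid = sum(
        1 for ctrl, sig in endorsements.items()
        if ctrl in CONTROLLER_KEYS
        and hmac.compare_digest(sig, endorse(ctrl, update))
    )
    return valid >= THRESHOLD  # apply the flow rule only with a quorum

update = b"flow: 10.0.0.0/24 -> port 7"
endorsements = {c: endorse(c, update) for c in ["ctrl0", "ctrl1", "ctrl3"]}
print(switch_accepts(update, endorsements))  # True: 3 valid endorsements
```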
154
Identifying and Understanding Performance Problems in Software Systems
Zhou, Fang, January 2021
No description available.
155
A study about differences in performance with parallel and sequential sorting algorithms
Nyholm, Joel, January 2021
Background: Sorting algorithms are an essential part of computer science. With the use of parallelism, these algorithms' performance can improve.
Objectives: To assess parallel sorting algorithms' performance compared with their sequential counterparts and to see what contextual factors make a difference in performance.
Methods: An experiment was made with quicksort, merge sort, load-balanced parallel merge sort and hyperquicksort. These algorithms were executed on Ubuntu 20.10 and Windows 10 Home with three data sets: small (10^6 integers), medium (5 × 10^6 integers) and large (10^7 integers). Each algorithm was executed 1 000 times per data set within each operating system, resulting in 6 000 executions per sorting algorithm.
Results: From the data gathered in these executions, it was concluded that hyperquicksort had the fastest execution time, while load-balanced parallel merge sort had, on average, the slowest. The fastest operating system was Ubuntu 20.10; all but one algorithm executed faster on Ubuntu.
Conclusions: The results showed that hyperquicksort was the fastest algorithm, but other conclusions also arose. The data set size correlated with both the execution time and the speedup of a given parallel sorting algorithm: when the data set size increased, both the execution time and the speedup increased.
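For flavor, a parallel merge sort of the general kind benchmarked here can be sketched as follows (an illustrative reconstruction, not the thesis's measured implementation): split the input into equal chunks, sort the chunks in parallel worker processes, then merge the sorted runs.

```python
# Illustrative parallel merge sort: sort equal-sized chunks in parallel,
# then k-way merge the sorted runs.
import heapq, random
from multiprocessing import Pool

def parallel_merge_sort(data, workers=4):
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with Pool(workers) as pool:
        runs = pool.map(sorted, chunks)   # sort chunks in worker processes
    return list(heapq.merge(*runs))       # k-way merge of the sorted runs

if __name__ == "__main__":                # guard required for multiprocessing
    data = [random.randint(0, 10**6) for _ in range(10**5)]
    assert parallel_merge_sort(data) == sorted(data)
```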
156
DistriX: an implementation of UNIX on transputers
McCullagh, Paul J., January 1989
Bibliography: pages 104-110.

Two technologies, distributed operating systems and UNIX, are very relevant in computing today. Many distributed systems have been produced and many are under development. To a large extent, distributed systems are considered to be the only way to meet the computing needs of the future. UNIX, on the other hand, is becoming widely recognized as the industry standard for operating systems. The transputer, unlike UNIX and distributed systems, is a relatively new innovation. The transputer is a concurrent processing machine based on mathematical principles, and it is increasingly being used to solve a wide range of problems of a parallel nature. This thesis combines these three aspects in creating a distributed implementation of UNIX on a network of transputers. The design is based on the satellite model, in which a central controlling processor is surrounded by worker processors, called satellites, in a master/slave relationship, as sketched below.
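A toy rendering of the satellite model (illustrative only; DistriX itself targeted transputer hardware and its message-passing primitives) has a central master hand out work to satellite workers over queues:

```python
# Toy satellite model: a central master dispatches requests to satellite
# workers over a shared queue. Illustrative only, not DistriX code.
from multiprocessing import Process, Queue

def satellite(sat_id, tasks, results):
    while True:
        task = tasks.get()
        if task is None:                 # master signals shutdown
            break
        results.put((sat_id, task, task ** 2))  # stand-in for real work

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    sats = [Process(target=satellite, args=(i, tasks, results)) for i in range(3)]
    for s in sats:
        s.start()
    for t in range(6):                   # master hands out work
        tasks.put(t)
    done = [results.get() for _ in range(6)]
    for _ in sats:                       # one shutdown sentinel per satellite
        tasks.put(None)
    for s in sats:
        s.join()
    print(sorted(done))
```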
157
A file server for the DistriX prototype: a multitransputer UNIX system
Hoffman, P. Kuyper, January 1989
Bibliography: pages 90-94.

The DISTRIX operating system is a multiprocessor distributed operating system based on UNIX. It consists of a number of satellite processors connected to central servers. The system is derived from the MINIX operating system and is compatible with UNIX Version 7. A remote procedure call interface is used in conjunction with a system-wide, end-to-end communication protocol that connects satellite processors to the central servers. A cached file server provides access to all files and devices at the UNIX system call level. The design of the file server is discussed in depth and its performance evaluated. Additional information is given about the software and hardware used during the development of the project. The MINIX operating system proved to be a good choice as the software base, although certain of its features proved less suitable. The Inmos transputer emerges as a processor with many useful features that eased the implementation.
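A minimal sketch of the cached read path such a file server implements (block size, cache capacity, and names are assumptions for illustration, not DISTRIX source):

```python
# Illustrative cached file-server read path: look up a (path, block) pair in
# an LRU cache and fall back to the disk on a miss.
from collections import OrderedDict

BLOCK_SIZE = 4096

class CachedFileServer:
    def __init__(self, capacity=1024):
        self.cache = OrderedDict()   # (path, block_no) -> bytes, in LRU order
        self.capacity = capacity

    def read_block(self, path, block_no):
        key = (path, block_no)
        if key in self.cache:
            self.cache.move_to_end(key)        # hit: refresh LRU position
            return self.cache[key]
        with open(path, "rb") as f:            # miss: fetch from disk
            f.seek(block_no * BLOCK_SIZE)
            data = f.read(BLOCK_SIZE)
        self.cache[key] = data
        if len(self.cache) > self.capacity:    # evict least recently used
            self.cache.popitem(last=False)
        return data
```

The LRU cache keeps the hot working set in memory while bounding memory use, which is the essential trade-off any cached file server of this kind makes.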
158
An Analysis of Vulnerabilities Presented by Android Malware and iOS Jailbreaks
Jones, Charles Matthew, 09 May 2015
Mobile devices are increasingly becoming a crutch for all generations, and their users are developing an ever greater desire for privacy and style. Apple presents a device that is known for its security but lacks major user customization. Google, on the other hand, has developed in Android a device that lends itself to customization but can be susceptible to security flaws. This thesis discusses the security models, app store protections, and best practices of both mobile operating systems. In addition, multiple experiments were conducted to demonstrate how an Android device can be more easily compromised after altering only a few settings, as well as to demonstrate the privileges, both good and bad, that can be gained by jailbreaking an iOS device.
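As one concrete example of the kind of setting involved, pre-Oreo Android exposes sideloading as the secure setting install_non_market_apps; a quick check over adb (assuming adb is on PATH and a device is attached; the thesis's actual experimental procedure is not detailed in this abstract) might be:

```python
# Check whether sideloading ("unknown sources") is enabled over adb.
# The secure setting below applies to pre-Oreo Android; on Android 8+ this
# became a per-app permission rather than a global setting.
import subprocess

def unknown_sources_enabled():
    out = subprocess.run(
        ["adb", "shell", "settings", "get", "secure", "install_non_market_apps"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip() == "1"

if __name__ == "__main__":
    print("sideloading enabled:", unknown_sources_enabled())
```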
159
A STUDY OF CLUSTER PAGING METHODS TO BOOST VIRTUAL MEMORY PERFORMANCE
Raman, Venkatesh, 11 March 2002
No description available.
160
A STUDY OF SWAP CACHE BASED PREFETCHING TO IMPROVE VIRTUAL MEMORY PERFORMANCE
Kunapuli, Udaykumar, 11 March 2002
No description available.