321 |
Leakage Temperature Dependency Aware Real-Time Scheduling for Power and Thermal Optimization. Chaturvedi, Vivek, 26 March 2013 (has links)
Catering to society’s demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, the power consumption/density is growing exponentially. The increasing power consumption directly translates into high chip temperatures, which not only raise packaging/cooling costs, but also degrade the performance/reliability and life span of computing systems. Moreover, high chip temperature greatly increases the leakage power consumption, which is becoming more and more significant with the continued scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal issues have become the most critical challenges in the design of new generations of computing systems.
In this dissertation, we addressed these power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of employing dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, “M-Oscillations,” to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms. We investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with a detailed discussion of future extensions of our research.
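The interplay between speed, leakage, and temperature that the dissertation exploits can be visualized with a toy simulation. The sketch below is illustrative only: the linear leakage term, the first-order thermal model, and all coefficients are hypothetical stand-ins for the dissertation's actual models, and a real M-Oscillation schedule must additionally be checked for deadline feasibility.
```python
# Toy model (assumed, not from the dissertation): at speed s and
# temperature T, power = s**3 + (k0 + k1*T), i.e. cubic dynamic power
# plus temperature-dependent leakage; dT/dt = a*power - b*(T - T_amb).

def peak_temperature(speeds, dt=0.01, T_amb=25.0, a=0.8, b=0.15,
                     k0=0.1, k1=0.01):
    """Peak temperature reached under a piecewise-constant speed schedule."""
    T = peak = T_amb
    for s in speeds:
        power = s ** 3 + (k0 + k1 * T)            # dynamic + leakage
        T += dt * (a * power - b * (T - T_amb))   # first-order cooling
        peak = max(peak, T)
    return peak

steps = 10_000
always_high = [1.0] * steps
# M-Oscillation-style schedule: alternate between a high and a low speed
# so the chip cools during the low phases (feasibility not modeled here).
oscillating = [1.0 if (i // 500) % 2 == 0 else 0.6 for i in range(steps)]

print("peak, constant high speed: %.2f C" % peak_temperature(always_high))
print("peak, oscillating speed:   %.2f C" % peak_temperature(oscillating))
```
Under this toy model the oscillating schedule stays below the constant high-speed schedule's peak because temperature never fully climbs to the high-speed steady state before a cooling phase begins.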
|
322 |
Operating System Support For Optimistic Distributed Simulation. Raja, V, 06 1900 (has links) (PDF)
No description available.
|
323 |
Spider: An overview of an object-oriented distributed computing system. Yuh, Han-Sheng, 01 January 1997 (has links)
The Spider Project is an object-oriented distributed system which provides a testbed for researchers in the Department of Computer Science, CSUSB, to conduct research on distributed systems.
|
324 |
Spider II: A component-based distributed computing system. Wang, Koping, 01 January 2001 (has links)
Spider II is the second implementation of the Spider project, the first distributed-computing research project in the Department of Computer Science at CSUSB. Spider II is a distributed virtual machine that runs on top of the UNIX or Linux operating system. It features multi-tasking, load balancing, and fault tolerance, which improve the performance and stability of the system.
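As a rough illustration of the load-balancing and fault-tolerance features named above, the following Python sketch (hypothetical; it does not show Spider II's actual interfaces) dispatches each task to the least-loaded node and fails over when a node is down.
```python
# Illustrative sketch only: least-loaded dispatch with failover.

class Node:
    def __init__(self, name):
        self.name, self.load, self.alive = name, 0, True

    def run(self, task):
        if not self.alive:
            raise ConnectionError(self.name + " is down")
        self.load += 1
        return "%s finished on %s" % (task, self.name)

def dispatch(task, nodes):
    """Try nodes in order of current load, skipping past dead ones."""
    for node in sorted(nodes, key=lambda n: n.load):
        try:
            return node.run(task)
        except ConnectionError:
            continue                      # fail over to the next node
    raise RuntimeError("no live nodes available")

cluster = [Node("n%d" % i) for i in range(3)]
cluster[0].alive = False                  # simulate a crashed node
print(dispatch("task-1", cluster))        # -> task-1 finished on n1
```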
|
325 |
FeatureIT: a platform for collaborative software development. Siller, Gavin George, 02 1900 (has links)
The development of enterprise software is a complex activity that requires a diverse set of stakeholders to communicate and coordinate in order to achieve a successful outcome. In this dissertation I introduce a high-level physical architecture for a platform, titled FeatureIT, whose goal is to support collaboration between stakeholders throughout the entire Software Development Life Cycle (SDLC). FeatureIT is the result of unifying the theoretical foundations of the multi-disciplinary field of Computer Supported Cooperative Work (CSCW) with the paradigm and associated technologies of Web 2.0. The architecture was born out of a study of the literature in the fields of CSCW, Web 2.0, and software engineering, which facilitated the identification of the functional and non-functional requirements necessary for the platform. The design science research methodology was employed to construct this architecture iteratively to satisfy the requirements while validating its efficacy against a comprehensive set of scenarios that typically occur in the SDLC. / Computing / M. Sc. (Information Systems)
|
326 |
Secure Virtualization of Latency-Constrained Systems. Lackorzynski, Adam, 06 February 2015 (has links)
Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems and saving resources such as energy, space, and cost compared to running multiple separate systems. Embedded environments, in contrast, often contain many separate computing systems with real-time and isolation requirements; modern high-comfort cars, for example, use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight.
In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The basis of the architecture is a microkernel-based operating system that supports a variety of virtualization approaches through a generic interface, covering hardware-assisted virtualization and paravirtualization as well as multiple architectures. Studying guest systems with latency constraints with regard to virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach.
Generally, guest systems combine best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism for exporting these relevant events that is secure, flexible, performant, and easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures with two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
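As a rough sketch of the idea that guests export scheduling-relevant events, the following Python fragment (a hypothetical interface, not the thesis's actual mechanism) lets each guest announce the deadline of its most urgent pending work, so the hypervisor can run the guest with the earliest deadline instead of time-slicing blindly.
```python
# Illustrative sketch: guests export deadlines, the hypervisor picks
# the guest with the earliest exported deadline (EDF across guests).

import heapq

class Guest:
    def __init__(self, name):
        self.name = name
        self.deadlines = []                 # min-heap of exported deadlines

    def export_deadline(self, t):
        """Hypothetical 'scheduling event' hypercall from the guest."""
        heapq.heappush(self.deadlines, t)

    def next_deadline(self):
        return self.deadlines[0] if self.deadlines else float("inf")

def pick_next(guests):
    """Earliest deadline first; guests with no deadlines run best effort."""
    return min(guests, key=lambda g: g.next_deadline())

linux, rtos = Guest("Linux"), Guest("FreeRTOS")
rtos.export_deadline(5.0)                   # urgent real-time work
linux.export_deadline(40.0)                 # best-effort housekeeping
print("hypervisor runs:", pick_next([linux, rtos]).name)  # FreeRTOS
```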
|
327 |
Communication in Microkernel-Based Operating Systems. Aigner, Ronald, 21 January 2011 (has links)
Communication in microkernel-based systems is much more frequent than system calls in monolithic kernels. This can be attributed to the placement of system services into their own protection domains. Communication has to be fast to avoid unnecessary overhead. Also, communication channels in microkernel-based systems are used for more than just remote procedure calls. In distributed systems, which also have a componentized design, it is state of the art to use tools to generate stubs for the communication between components. The communication interfaces of components are described in an interface definition language (IDL). In contrast to distributed systems, components of a microkernel-based system run on the same architecture and message delivery is guaranteed.
In this thesis, I explore the different kinds of communication that can be used in microkernel-based systems, as well as their possible representation in IDL. Specifically, I introduce syntax to describe kernel objects in IDL. I discuss the complexity of IDL compilers and its relation to the complexity of the IDL. Furthermore, I evaluate the performance of the communication stubs generated by different IDL compilers and discuss techniques to minimize performance overhead in generated stubs. I validated these techniques by implementing the Drops IDL Compiler (Dice). Finally, this thesis presents a mechanism to measure the frequency and performance of invocations of generated communication code. I used this technique to conduct measurements in highly complex systems while introducing the least possible overhead.
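To make the role of generated stubs concrete, here is a simplified Python sketch of what an IDL compiler conceptually emits for an interface method `int add(int a, int b)`. The marshaling format and the inline `ipc_call` stand-in are hypothetical, not Dice's actual output; a real stub would invoke the kernel's IPC primitive.
```python
# Illustrative sketch of IDL-compiler output: a client stub that
# marshals arguments, performs the IPC round-trip, and unmarshals
# the reply.

import struct

OP_ADD = 1  # opcode a compiler would derive from the interface

def ipc_call(dest, buf):
    """Stand-in for the microkernel IPC call; the 'server' dispatch
    loop runs inline here so the example is self-contained."""
    opcode, a, b = struct.unpack("iii", buf)
    if opcode == OP_ADD:
        return struct.pack("i", a + b)
    raise ValueError("unknown opcode")

def add_stub(dest, a, b):
    """Generated client stub for 'int add(int a, int b)'."""
    msg = struct.pack("iii", OP_ADD, a, b)   # marshal opcode + arguments
    reply = ipc_call(dest, msg)              # kernel round-trip
    (result,) = struct.unpack("i", reply)    # unmarshal the result
    return result

print(add_stub("calc-server", 2, 3))         # -> 5
```
Minimizing the work done in these marshal/unmarshal steps is exactly where the overhead-reduction techniques evaluated in the thesis apply.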
|
328 |
Content Aware Request Distribution for High Performance Web Service: A Performance Study. Jones, Robert M., 01 July 2002 (has links)
The World Wide Web is becoming a basic infrastructure for a variety of services, and increases in audience size and client network bandwidth create service demands that are outpacing server capacity. Web clusters are one solution to this need for high-performance, highly available web server systems. We are interested in load distribution techniques, specifically Layer-7 algorithms that are content-aware. Layer-7 algorithms allow distribution control based on the specific content requested, which is advantageous for a system that offers highly heterogeneous services. We examine the performance of the Client Aware Policy (CAP) on a Linux/Apache web cluster consisting of a single web switch that directs requests to a pool of dual-processor SMP nodes. We show that the performance advantage of CAP over simple algorithms such as random and round-robin is as high as 29% on our testbed, which serves a mixture of static and dynamic content. Under heavily loaded conditions, however, the performance decreases to the level of random distribution. In studying SMP versus uniprocessor performance using the same number of processors with CAP distribution, we find that SMP dual-processor nodes under moderate workload levels provide throughput equivalent to the same number of CPUs in a uniprocessor cluster. As the workload increases to a heavily loaded state, however, the SMP cluster shows reduced throughput compared to a cluster using uniprocessor nodes. We show that the web cluster's maximum throughput increases linearly with the addition of more nodes to the server pool. We conclude that CAP is advantageous over random or round-robin distribution under certain conditions for highly dynamic workloads, and we suggest some future enhancements that may improve its performance.
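For illustration, the following Python sketch shows the shape of a CAP-style content-aware dispatch loop (simplified and hypothetical; the classification rules and class names are stand-ins for the policy studied in the thesis): requests are classified by expected resource demand and round-robined independently within each class, so no single node accumulates all of the expensive dynamic requests.
```python
# Illustrative CAP-style Layer-7 dispatch: classify by content type,
# round-robin within each service class.

from itertools import cycle

NODES = ["node0", "node1", "node2", "node3"]

def classify(url):
    """Hypothetical classifier: the extension decides the service class."""
    if url.endswith((".cgi", ".php")):
        return "cpu-bound"         # dynamic content
    return "disk-bound"            # static files

# one independent round-robin iterator per service class
rr = {cls: cycle(NODES) for cls in ("cpu-bound", "disk-bound")}

def dispatch(url):
    return next(rr[classify(url)])

for url in ["/index.html", "/search.cgi", "/logo.png", "/cart.php"]:
    print(url, "->", dispatch(url))
```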
|
329 |
A Sparse Learning Approach for Linux Kernel Data Race Prediction. Ryan, Gabriel, January 2023 (has links)
Operating system kernels rely on fine-grained concurrency to achieve optimal performance on modern multi-core processors. However, heavy usage of fine-grained concurrency mechanisms makes modern operating system kernels prone to data races, which can cause severe and often elusive bugs. In this thesis, I propose a new approach to identifying data races in OS kernels based on learning a model to predict which memory accesses can be feasibly executed concurrently with one another.
To learn memory access feasibility efficiently, I develop a novel approach based on encoding feasibility as a Boolean indicator function of system calls and ordered memory accesses. A memory access feasibility function encoded this way has a naturally sparse latent representation, due to the sparsity of interthread communications and synchronization interactions, and can therefore be accurately approximated from a small number of observed concurrent execution traces.
This thesis makes two key contributions. First, Probabilistic Lockset Analysis (PLA) is a new analysis that exploits sparsity in input dependencies in conjunction with a conservative lockset analysis to efficiently predict data races in the Linux OS kernel. Second, approximate happens-before analysis in the Fourier domain (HBFourier) generalizes the approach used by PLA to reason about interthread memory communications and synchronization events through sparse Fourier learning. In addition to being theoretically grounded, these techniques are highly practical: they find hundreds of races in a recent Linux development kernel, an order of magnitude improvement over prior work, and find races with severe security impacts that have been overlooked by existing kernel testing systems for years.
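The lockset reasoning that PLA refines can be sketched in a few lines of Python (illustrative only; PLA's probabilistic treatment and HBFourier's sparse Fourier learning are not reproduced here): two accesses are race candidates if they touch the same address from different threads, at least one is a write, and their locksets are disjoint.
```python
# Minimal classic-lockset sketch: flag pairs of accesses whose
# locksets share no common lock.

from collections import namedtuple

Access = namedtuple("Access", "thread addr is_write locks")

def may_race(a, b):
    return (a.thread != b.thread
            and a.addr == b.addr
            and (a.is_write or b.is_write)
            and not (a.locks & b.locks))    # disjoint locksets

trace = [
    Access("T1", 0xdead, True,  frozenset({"lk"})),
    Access("T2", 0xdead, False, frozenset({"lk"})),   # protected: no race
    Access("T2", 0xbeef, True,  frozenset()),
    Access("T1", 0xbeef, False, frozenset()),         # unprotected: race
]

candidates = [(a, b) for i, a in enumerate(trace)
              for b in trace[i + 1:] if may_race(a, b)]
for a, b in candidates:
    print("potential race on %#x between %s and %s"
          % (a.addr, a.thread, b.thread))
```
PLA's contribution is making this style of reasoning scale to kernel traces by exploiting the sparsity of input dependencies, rather than enumerating pairs exhaustively as this toy version does.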
|
330 |
Virtuo-ITS: An Interactive Tutoring System to Teach Virtual Memory Concepts of an Operating System. Musunuru, Venkata Krishna Kanth, 31 May 2017 (has links)
No description available.
|