191 |
A Sparse Learning Approach for Linux Kernel Data Race Prediction / Ryan, Gabriel (January 2023)
Operating system kernels rely on fine-grained concurrency to achieve optimal performance on modern multi-core processors. However, heavy usage of fine-grained concurrency mechanisms makes modern operating system kernels prone to data races, which can cause severe and often elusive bugs. In this thesis, I propose a new approach to identifying data races in OS kernels based on learning a model to predict which memory accesses can feasibly execute concurrently with one another.
To develop an efficient learning method for memory access feasibility, I develop a novel approach based on encoding feasibility as a Boolean indicator function of system calls and ordered memory accesses. A memory access feasibility function encoded this way has a naturally sparse latent representation, owing to the sparsity of interthread communications and synchronization interactions, and can therefore be accurately approximated from a small number of observed concurrent execution traces.
This thesis introduces two key contributions. First, Probabilistic Lockset Analysis (PLA) is a new analysis that exploits sparsity in input dependencies, in conjunction with a conservative lockset analysis, to efficiently predict data races in the Linux kernel. Second, approximate happens-before analysis in the Fourier domain (HBFourier) generalizes the approach used by PLA to reason about interthread memory communications and synchronization events through sparse Fourier learning. In addition to being theoretically grounded, these techniques are highly practical: they find hundreds of races in a recent Linux development kernel, an order of magnitude more than prior work, and they find races with severe security impacts that existing kernel testing systems have overlooked for years.
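The lockset side of PLA's prediction can be pictured with a toy example. The sketch below is a minimal lockset-style check over a hypothetical trace format (the names and representation are invented; PLA itself is probabilistic and far more sophisticated): it flags two accesses as a potential race when different threads touch the same address, at least one access is a write, and the locksets held at the two accesses do not intersect.

```python
# Toy lockset-style race check over a hypothetical trace format;
# an illustration of the idea, not PLA itself.
from collections import namedtuple

Access = namedtuple("Access", "thread addr is_write locks")

def potential_races(trace):
    """Cross-thread access pairs on one address with disjoint locksets."""
    races = []
    for i, a in enumerate(trace):
        for b in trace[i + 1:]:
            if (a.thread != b.thread and a.addr == b.addr
                    and (a.is_write or b.is_write)
                    and not (a.locks & b.locks)):     # no common lock held
                races.append((a, b))
    return races

trace = [
    Access("T1", 0xdead, True, frozenset({"lock_a"})),
    Access("T2", 0xdead, False, frozenset({"lock_b"})),   # disjoint locksets
    Access("T1", 0xbeef, True, frozenset({"lock_a"})),
    Access("T2", 0xbeef, True, frozenset({"lock_a"})),    # both hold lock_a
]
print(potential_races(trace))   # only the 0xdead pair is reported
```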
|
192 |
Multitasking for sensor based systems / Reddy, Srinivas T. (January 1985)
Multitasking systems are being used increasingly for real-time applications. Multitasking is very well suited to real-time systems, since events in the real world do not occur in strict sequence but rather tend to overlap. Multitasking operating systems coordinate the activities of the different overlapping functions and give the user the appearance of concurrent activity. The coordination and scheduling is performed according to a user-defined order of importance, or priority. There are many multitasking operating systems available for all the popular microprocessors. One such multitasking executive is VRTX/86 for the 8086 microprocessor. This executive comes in a PROM and is independent of any specific hardware configuration. Using this executive, the IBM PC has been converted into a multitasking environment, and multitasking test programs have been executed on the PC.
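The priority-driven coordination described above can be sketched compactly. The toy dispatcher below illustrates the general idea of a priority-based executive (the class and method names are invented; this is not the VRTX/86 interface): the highest-priority ready task always runs first.

```python
# Minimal priority-based dispatcher: an illustration of the idea,
# not the VRTX/86 API.
import heapq, itertools

class Executive:
    def __init__(self):
        self._ready = []               # min-heap: lower value = higher priority
        self._seq = itertools.count()  # tie-breaker; avoids comparing tasks

    def spawn(self, priority, name, action):
        heapq.heappush(self._ready, (priority, next(self._seq), name, action))

    def run(self):
        while self._ready:
            priority, _, name, action = heapq.heappop(self._ready)
            print(f"dispatch {name} (priority {priority})")
            action()

ex = Executive()
ex.spawn(2, "log_data",       lambda: print("  logging to disk"))
ex.spawn(0, "read_sensor",    lambda: print("  sampling sensor"))
ex.spawn(1, "update_display", lambda: print("  refreshing display"))
ex.run()   # read_sensor first, then update_display, then log_data
```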
A general methodology for defining tasks and assigning priorities to them has been established. Using this methodology, a typical real-time application called a Vehicle Instrumentation System was developed. / M.S.
|
193 |
A technology reference model for client/server software development / Nienaber, R. C. (Rita Charlotte) (06 1900)
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computer mainstay of corporations. The client/server model incorporates mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. / Computing / M. Sc. (Information Systems)
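As a concrete anchor for the client/server model the dissertation builds on, here is a minimal sketch of a server process answering a client's request over a TCP socket (an invented example with an arbitrary localhost port, not code from the dissertation):

```python
# Minimal client/server exchange over a TCP socket. Invented example.
import socket
import threading

ready = threading.Event()

def server():
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 9099))
        srv.listen()
        ready.set()                          # now accepting connections
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"result for {request!r}".encode())

t = threading.Thread(target=server)
t.start()
ready.wait()

with socket.socket() as cli:                 # the desktop "client" side
    cli.connect(("127.0.0.1", 9099))
    cli.sendall(b"SELECT balance")
    print(cli.recv(1024).decode())           # server-produced result
t.join()
```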
|
194 |
A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM) / Conradie, Pieter Wynand (06 1900)
The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology and programming languages, as well as the requirement to build better software application systems in less time. The importance of mondial (worldwide) communication between systems is also growing exponentially. People are using network-based applications daily, communicating not only locally, but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to meet the need to develop client/server application systems that communicate over heterogeneous environments.

This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented. This element is the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks.
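The role such middleware plays can be sketched independently of either standard. The toy broker below mirrors the proxy/stub pattern common to CORBA and DCOM rather than either product's actual API (all names are invented): a client calls a method on a local proxy, which marshals the call and hands it to a server-side dispatcher that invokes the real object.

```python
# Toy object request broker mirroring the proxy/stub pattern shared by
# CORBA and DCOM. All names are invented; this is neither product's API.
import json

class Account:                         # implementation of the remote object
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance

registry = {"acct-42": Account(100)}   # server-side object table

def skeleton(wire_request: bytes) -> bytes:
    """Server side: unmarshal the call, dispatch it, marshal the reply."""
    req = json.loads(wire_request)
    obj = registry[req["object_id"]]
    result = getattr(obj, req["method"])(*req["args"])
    return json.dumps({"result": result}).encode()

class Proxy:
    """Client side: looks like the object but only marshals calls."""
    def __init__(self, object_id, transport):
        self._id, self._send = object_id, transport
    def __getattr__(self, method):
        def call(*args):
            raw = json.dumps({"object_id": self._id,
                              "method": method, "args": args}).encode()
            return json.loads(self._send(raw))["result"]
        return call

acct = Proxy("acct-42", skeleton)      # transport is a direct call here
print(acct.deposit(25))                # -> 125, dispatched via the "wire"
```

In a real ORB the transport would be a network connection and the marshalling format would be IIOP or DCE RPC rather than JSON, but the division of labour is the same.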
In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation. / Computing / M. Sc. (Computer Science)
|
195 |
Experimental implementation of the new prototype in Linux (Unknown Date)
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. In wired networks, TCP performs remarkably well due to its scalability and distributed end-to-end congestion control algorithms. However, many studies have shown that the unmodified standard TCP performs poorly in networks with large bandwidth-delay products and/or lossy wireless links. In this thesis, we analyze the problems TCP exhibits in wireless communication and develop a TCP congestion control algorithm for mobile applications. We show that the optimal TCP congestion control and link scheduling scheme amounts to window-control oriented implicit primal-dual solvers for the underlying network utility maximization. Based on this idea, we use a scalable congestion control algorithm called QUeueIng-Control (QUIC) TCP, which utilizes the queueing-delay-based MaxWeight-type scheduler for wireless links developed in [34]. Simulation and test results are provided to evaluate the proposed schemes in practical networks. / by Gee Won Han. / Thesis (M.S.C.S.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
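The window-control view described above can be illustrated with a toy delay-based update rule (an invented illustration of the general queueing-delay idea, not the QUIC-TCP controller from the thesis): the window grows while measured queueing delay stays below a target and shrinks once the queue builds up.

```python
# Toy delay-based congestion window update, one adjustment per RTT.
# Illustrative only; not the thesis's QUIC-TCP controller.
def update_cwnd(cwnd, rtt, base_rtt, target_delay=0.010, gain=1.0):
    queue_delay = rtt - base_rtt     # time spent queued in the network
    if queue_delay < target_delay:
        cwnd += gain                 # queue is short: probe for bandwidth
    else:
        cwnd -= gain * (queue_delay / target_delay)   # queue building: back off
    return max(cwnd, 2.0)            # never drop below two segments

cwnd, base_rtt = 10.0, 0.050         # base_rtt = propagation delay
for rtt in [0.052, 0.055, 0.070, 0.090, 0.060, 0.052]:   # sampled RTTs
    cwnd = update_cwnd(cwnd, rtt, base_rtt)
    print(f"rtt={rtt*1000:.0f}ms cwnd={cwnd:.1f}")
```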
|
196 |
An implementation of the IEEE 1609.4 WAVE standard for use in a vehicular networking testbed (Unknown Date)
We present an implementation of the IEEE WAVE (Wireless Access in Vehicular Environments) 1609.4 standard, Multichannel Operation. This implementation provides concurrent access to a control channel and one or more service channels, enabling vehicles to communicate among each other on multiple service channels while still being able to receive urgent and control information on the control channel. Also included is functionality that provides over-the-air timing synchronization, allowing participation in alternating channel access in the absence of a reliable time source. Our implementation runs on embedded Linux and is built on top of IEEE 802.11p, as well as a customized device driver. This implementation will serve as a key component in our IEEE 1609-compliant Vehicular Multi-technology Communication Device (VMCD) that is being developed for a VANET testbed under the Smart Drive initiative, supported by the National Science Foundation. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
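The alternating channel access at the heart of 1609.4 divides UTC-aligned sync intervals between the control channel and a service channel. The sketch below shows that timing decision in isolation, using the commonly cited 100 ms sync interval with a 50 ms CCH half (guard intervals and actual radio control are omitted; this is an illustration, not the project's driver code):

```python
# Which channel should the radio be tuned to right now? Simplified
# 1609.4-style alternating access: 100 ms sync intervals aligned to UTC,
# first 50 ms on the control channel (CCH), remainder on a service
# channel (SCH).
import time

SYNC_INTERVAL_MS = 100
CCH_INTERVAL_MS = 50

def current_channel(utc_seconds: float) -> str:
    ms_into_sync = (utc_seconds * 1000.0) % SYNC_INTERVAL_MS
    return "CCH" if ms_into_sync < CCH_INTERVAL_MS else "SCH"

for _ in range(4):                     # sample the schedule a few times
    now = time.time()                  # stands in for GPS/over-the-air UTC
    print(f"{now:.3f}s -> tune to {current_channel(now)}")
    time.sleep(0.03)
```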
|
197 |
Eidolon: adapting distributed applications to their environment / Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW (January 2008)
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed computing applications can be run. Due to the diversity of features provided by these environments, a distributed application that is to perform well must be specifically designed and optimised for the environment in which it is deployed. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently, and with improved functionality, over wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications. This is used to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. It provides the necessary mechanisms and interconnects to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols. We explore the use of view model properties on several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects and potential shortcomings of the view model by comparing our model with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.
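One way to picture the separation of concerns behind the view model is an application that reads and writes shared data through a view whose consistency protocol is chosen to fit the environment rather than hard-wired into the application. The classes below are invented purely for illustration; the thesis defines its own mechanisms and interconnects.

```python
# Invented illustration: the application writes through a view, and the
# consistency protocol behind the view is picked to suit the deployment
# environment. The thesis's actual mechanisms are richer than this.
class WriteThrough:
    """Eager protocol for cheap links: push every update immediately."""
    def write(self, replicas, key, value):
        for r in replicas:
            r[key] = value

class WriteBack:
    """Lazy protocol for high-latency links: batch updates until sync."""
    def __init__(self):
        self.pending = {}
    def write(self, replicas, key, value):
        self.pending[key] = value
    def sync(self, replicas):
        for r in replicas:
            r.update(self.pending)
        self.pending.clear()

class View:
    def __init__(self, replicas, protocol):
        self.replicas, self.protocol = replicas, protocol
    def write(self, key, value):
        self.protocol.write(self.replicas, key, value)

replicas = [{}, {}]                        # e.g., one replica per cluster
view = View(replicas, WriteThrough())      # LAN deployment: stay eager
view.write("x", 1)
print(replicas)                            # both replicas updated at once
```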
|
198 |
OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs / Marquez, Nicholas Alexander (04 February 2013)
The application of higher-order PDA-based flow analyses to object-oriented languages enables comprehensive and precise characterization of program behavior, while retaining practicality and efficiency. We implement one such flow analysis, which we have named OOCFA2. While many advancements in flow analysis have been made over the years, they have almost exclusively targeted functional languages, often modeled with the λ-calculus. Object-oriented semantics, while also able to be modeled in a functional setting, provide certain structural guarantees and common idioms that we believe are valuable to reason over in a first-class manner. By tailoring modern, advanced flow analyses to object-oriented semantics, we believe it is possible to achieve greater precision and efficiency than could be had using a functional modeling. This, in turn, reflects upon the possible classes of higher-level analyses built on the underlying flow analysis: the more powerful, efficient, and flexible the flow analysis, the more classes of higher-level analyses (e.g., security analyses) can be practically expressed.

The growing trend is that users are integrating smartphones and mobile devices (e.g., tablets) into their lives in more frequent and more personal ways. Accordingly, the primary application and proof of concept for this work is the analysis of the Android operating system's permissions-based security system vis-à-vis potentially malicious applications. It is implemented atop OOCFA2. The use of such a powerful higher-order flow analysis allows one to apply its knowledge to create a wide variety of powerful and practical security-analysis "front-ends": not only the permissions-checking analysis in this work, but also, e.g., information-flow analyses.

OOCFA2 is the first PDA-based higher-order flow analysis in an object-oriented setting. We empirically evaluate its accuracy and performance to prove its practical viability. We also evaluate the proof-of-concept security analysis's accuracy as directly related to OOCFA2; this shows promising results for the potential of building security-oriented "front-ends" atop OOCFA2.
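A flow analysis such as OOCFA2 ultimately exposes which code an application can reach, and a permissions-checking front-end can be layered on top of that reachability information. The toy sketch below uses an invented call graph and API-to-permission map (a real analysis is far more precise about objects and control flow): it reports the permissions an app can actually exercise from its entry points.

```python
# Toy permission-reachability front-end over a call graph. The call
# graph and API-to-permission map are invented; a real analysis such
# as OOCFA2 resolves call targets far more precisely.
call_graph = {
    "Activity.onCreate": ["Tracker.start", "Ui.render"],
    "Tracker.start": ["LocationManager.getLastKnownLocation"],
    "Ui.render": [],
    "Dead.code": ["SmsManager.sendTextMessage"],   # present but unreachable
}
api_permissions = {
    "LocationManager.getLastKnownLocation": "ACCESS_FINE_LOCATION",
    "SmsManager.sendTextMessage": "SEND_SMS",
}

def used_permissions(entry_points):
    seen, stack, perms = set(), list(entry_points), set()
    while stack:                      # depth-first walk of reachable calls
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        if fn in api_permissions:
            perms.add(api_permissions[fn])
        stack.extend(call_graph.get(fn, []))
    return perms

print(used_permissions(["Activity.onCreate"]))   # {'ACCESS_FINE_LOCATION'}
```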
|
199 |
Graph and geometric algorithms on distributed networks and databases / Nanongkai, Danupon (16 May 2011)
In this thesis, we study the power and limits of algorithms on various models, aiming at applications in distributed networks and databases.

In distributed networks, graph algorithms are fundamental to many applications. We focus on computing random walks, which are an important primitive employed in a wide range of applications but have always been computed naively. We show that a faster solution exists, and subsequently develop faster algorithms by exploiting random walk properties, leading to two immediate applications. We also show that this algorithm is optimal. Our technique for proving a lower bound shows the first non-trivial connection between communication complexity and lower bounds for distributed graph algorithms. We show that this technique has a wide range of applications by proving new lower bounds for many problems. Some of these lower bounds show that the existing algorithms are tight.
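The naive baseline this work improves on forwards a random-walk token one hop per communication round, so a walk of length L costs L sequential rounds. The sketch below is an invented centralized simulation of that baseline, not the thesis's distributed algorithm; faster algorithms shortcut the linear dependence by stitching together pre-sampled short walks.

```python
# Naive random walk: one hop per round, so a length-L walk needs L
# sequential rounds. Invented centralized simulation, for illustration.
import random

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # adjacency lists

def naive_walk(graph, source, length, seed=7):
    rng = random.Random(seed)
    node, path = source, [source]
    for _ in range(length):          # each iteration = one message round
        node = rng.choice(graph[node])
        path.append(node)
    return path

print(naive_walk(graph, source=0, length=8))
```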
In database searching, we think of the database as a large set of multi-dimensional points stored on disk and want to help users quickly find the most desired point. In this thesis, we develop an algorithm that is significantly faster than previous algorithms, both theoretically and experimentally. The insight is to solve the problem in the streaming model, which helps emphasize the benefits of sequential access over random disk access. We also introduce randomization techniques to the area. The results are complemented with a lower bound. We also initiate a new direction in an attempt to get better queries: we are the first to quantify output quality using "user satisfaction", which is made possible by borrowing the idea of modeling users by utility functions from game theory, and we justify our approach through a geometric analysis.
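The "user satisfaction" measure can be made concrete with utility functions. In the sketch below (invented data and a simple ratio; the thesis's algorithms and quality measure are more involved), each user is modeled as a linear utility over a point's attributes, and a returned answer set is scored by how close its best point comes to the user's true optimum:

```python
# Scoring an answer set by user satisfaction under linear utilities.
# Data and the simple ratio are invented for illustration.
points = [(0.9, 0.2), (0.5, 0.5), (0.1, 0.95)]   # the whole database
answer = [(0.9, 0.2), (0.1, 0.95)]               # the points we return

def utility(weights, point):
    return sum(w * x for w, x in zip(weights, point))

def satisfaction(weights, answer, points):
    """How close the returned set comes to this user's true optimum."""
    best_overall = max(utility(weights, p) for p in points)
    best_in_answer = max(utility(weights, p) for p in answer)
    return best_in_answer / best_overall

for w in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:   # three kinds of users
    print(w, round(satisfaction(w, answer, points), 3))
```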
|
200 |
Contech: a shared memory parallel program analysis framework / Vassenkov, Phillip (13 January 2014)
We are in the era of multicore machines, where we must exploit thread-level parallelism for programs to run better, smarter, faster, and more efficiently. In order to increase instruction-level parallelism, processors and compilers perform heavy dataflow analyses between instructions. However, little work has been done in the area of inter-thread dataflow analysis. In order to pave the way and find new ways to conserve resources across a variety of domains (i.e., execution speed, chip die area, power efficiency, and computational throughput), we propose a novel framework, termed Contech, to facilitate the analysis of multithreaded programs in terms of their communication and execution patterns. We focus the scope on shared memory programs rather than message passing programs, since their communication and execution patterns are more difficult to analyze. Discovering the patterns of shared memory programs has the potential to allow general-purpose computing machines to turn architectural tricks on or off according to application-specific features. Our design of Contech is modular in nature, so we can glean a large variety of information from an architecturally independent representation of the program under examination.
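The kind of raw material a framework like Contech collects can be pictured with a tiny instrumentation layer. The sketch below uses an invented logging API (Contech itself instruments compiled programs, not Python): each thread records its shared-memory accesses, and a post-pass finds addresses touched by more than one thread, the seeds of a communication pattern.

```python
# Toy shared-access recorder: log (thread, address, op) events, then
# find addresses touched by more than one thread. Invented API;
# Contech instruments compiled programs at a different level entirely.
import threading
from collections import defaultdict

events, log_lock = [], threading.Lock()

def record(addr, op):
    with log_lock:
        events.append((threading.current_thread().name, addr, op))

def worker(addrs):
    for a in addrs:
        record(a, "write")

t1 = threading.Thread(target=worker, args=([0x10, 0x20],), name="T1")
t2 = threading.Thread(target=worker, args=([0x20, 0x30],), name="T2")
t1.start(); t2.start(); t1.join(); t2.join()

touched_by = defaultdict(set)
for thread, addr, _ in events:
    touched_by[addr].add(thread)
print({hex(a): ts for a, ts in touched_by.items() if len(ts) > 1})
# -> {'0x20': {'T1', 'T2'}}: the address both threads communicated through
```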
|