181 |
An investigation of cluster analysis techniques as a means of structuring specifications in the design of complex systems. Holden, Timothy Aloysius. January 1978
Thesis (Ocean E.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering; and, (M.S.)--Massachusetts Institute of Technology Sloan School of Management, 1978. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 153-156. / by Timothy A. Holden. / Ocean E. / M.S.
|
182 |
Design and Analysis of Decoy Systems for Computer Security. Bowen, Brian M. January 2011
This dissertation is aimed at defending against a range of internal threats, including eavesdropping on network taps, placement of malware to capture sensitive information, and general insider threats to exfiltrate sensitive information. Although the threats and adversaries may vary, in each context where a system is threatened, decoys can be used to deny critical information to adversaries, making it harder for them to achieve their target goal. The approach leverages deception and the use of decoy technologies to deceive adversaries and trap nefarious acts. This dissertation proposes a novel set of properties for decoys to serve as design goals in the development of decoy-based infrastructures. To demonstrate their applicability, we designed and prototyped network and host-based decoy systems. These systems are used to evaluate the hypothesis that network and host decoys can be used to detect inside attackers and malware.
We introduce a novel, large-scale automated creation and management system for deploying decoys. Decoys may be created in various forms, including bogus documents with embedded beacons, credentials for various web and email accounts, and bogus financial information that is monitored for misuse. The decoy management system supplies decoys for the network and host-based decoy systems. We conjecture that the utility of the decoys depends on the believability of the bogus information; we demonstrate the believability through experimentation with human judges.
For the network decoys, we developed a novel trap-based architecture for enterprise networks that detects "silent" attackers who are eavesdropping on network traffic. The primary contributions of this system are the ease of automatically injecting large amounts of believable bait, and the integration of various detection mechanisms in the back-end. We demonstrate our methodology in a prototype platform that uses our decoy injection API to dynamically create and dispense network traps on a subset of our campus wireless network. We present results of a user study that demonstrates the believability of our automatically generated decoy traffic, and results from a statistical and information-theoretic analysis showing the believability of the traffic when automated tools are used.
For host-based decoys, we introduce BotSwindler, a novel host-based bait injection system designed to delude and detect crimeware by forcing it to reveal itself during the exploitation of monitored information. Our implementation of BotSwindler relies upon an out-of-host software agent to drive user-like interactions in a virtual machine, seeking to convince malware residing within the guest OS that it has captured legitimate credentials. To aid the accuracy and realism of the simulations, we introduce a novel, low-overhead approach, called virtual machine verification, for verifying whether the guest OS is in one of a predefined set of states. We provide empirical evidence to show that BotSwindler can be used to induce malware into performing observable actions, and demonstrate how this approach is superior to that used in other tools. We present results from a user study to illustrate the believability of the simulations, and show that financial bait information can be used to effectively detect compromises through experimentation with real credential-collecting malware. We also present results from a statistical and information-theoretic analysis to show the believability of simulated keystrokes when automated tools are used to distinguish them. Finally, we introduce and demonstrate an expanded role for decoys in educating users and measuring organizational security through experiments with approximately 4,000 university students and staff.
|
183 |
Multi-Persona Mobile Computing. Andrus, Jeremy Christian. January 2015
Smartphones and tablets are increasingly ubiquitous, and many users rely on multiple mobile devices to accommodate work, personal, and geographic mobility needs. Pervasive access to always-on mobile computing has created new security and privacy concerns for mobile devices that often force users to carry multiple devices to meet those needs. The volume and popularity of mobile devices have commingled hardware and software design, and created tightly vertically integrated platforms that lock users into a single, vendor-controlled ecosystem. My thesis is that lightweight mechanisms can be added to commodity operating systems to enable multiple virtual phones or tablets to run at the same time on a physical smartphone or tablet device, and to enable apps from multiple mobile platforms, such as iOS and Android, to run together on the same physical device, all while maintaining the low latency and responsiveness expected of modern mobile devices. This dissertation presents two lightweight operating system mechanisms, virtualization and binary compatibility, that enable multi-persona mobile computing. First, we present Cells, a mobile virtualization architecture enabling multiple virtual phones, or personas, to run simultaneously on the same physical cellphone in a secure and isolated manner. Cells introduces device namespaces that allow apps to run in a virtualized environment while still leveraging native devices such as GPUs to provide accelerated graphics. Second, we present Cycada, an operating system compatibility architecture that runs applications built for different mobile ecosystems, iOS and Android, together on a single Android device. Cycada introduces kernel-level code adaptation and diplomats to simplify binary compatibility support by reusing existing operating system code and unmodified frameworks and libraries. Both Cells and Cycada have been implemented in Android, and can run multiple Android virtual phones, and a mix of iOS and Android apps, on the same device with good performance. Because mobile computing has become increasingly important, we also present a new, mobile-centric way to teach operating systems that incorporates the concepts of geographic mobility, sensor data acquisition, and resource-constrained design considerations.
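The abstract does not detail device namespaces, which are a Cells-specific kernel mechanism. As a rough analogy only, the sketch below uses Linux's existing namespace API to show the general idea of giving a process its own private view of an otherwise shared kernel resource; the persona name is invented for illustration.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Minimal analogy: give this process its own UTS (hostname) namespace,
 * so changes to the hostname are invisible to the rest of the system.
 * Cells generalises this style of isolation to device state (GPU,
 * framebuffer, sensors) via device namespaces. Requires CAP_SYS_ADMIN. */
int main(void)
{
    if (unshare(CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }

    const char *name = "virtual-phone-1";   /* hypothetical persona name */
    if (sethostname(name, strlen(name)) != 0) {
        perror("sethostname");
        return 1;
    }

    char buf[64];
    gethostname(buf, sizeof(buf));
    /* Other processes outside this namespace still see the old hostname. */
    printf("hostname inside namespace: %s\n", buf);
    return 0;
}
```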
|
184 |
O sistema operacional de rede heterogêneo HetNOS / The HetNOS heterogeneous network operating system. Barcellos, Antonio Marinho Pilla. January 1993
O advento dos computadores pessoais e posteriormente das estações de trabalho, somado ao desenvolvimento de hardware de comunicação eficiente e de baixo custo, levou à popularização das redes locais. Entretanto, o software não presenciou o mesmo desenvolvimento do hardware, especialmente devido à complexidade dos sistemas distribuídos. A heterogeneidade das máquinas, sistemas e redes, inerente aos ambientes computacionais modernos, restringe igualmente a integração e cooperação entre os nodos disponíveis. O objetivo do presente trabalho é, a partir da análise dos principais aspectos relacionados à distribuição e à heterogeneidade, desenvolver um sistema operacional de rede heterogêneo. Tal sistema, denominado HetNOS (de Heterogeneous Network Operating System), permite o desenvolvimento e validação de aplicações distribuídas homogêneas e heterogêneas de forma rápida e fácil. Os usuários podem concentrar-se nos aspectos de distribuição dos algoritmos, abstraindo detalhes dos mecanismos de comunicação, pois a programação de aplicações distribuídas é baseada em uma plataforma de interface homogênea, fácil de usar e com independência de localidade. Sendo um sistema operacional de rede, o HetNOS atua sobre o conjunto de sistemas operacionais nativos existentes; o ambiente de trabalho é estendido e não substituído. Não há entidades nem informações centralizadas, e os algoritmos são distribuídos, usualmente resultando em maior confiabilidade e desempenho. A topologia do sistema é um anel lógico, esquema justificado pela generalidade de tal configuração e pela simplificação do projeto do núcleo distribuído do HetNOS. O paradigma de comunicação entre módulos é a troca de mensagens, mecanismo implementado sobre a interface de programação em rede sockets. Não há compartilhamento de memória em nenhuma instância, tornando o sistema mais legível, manutenível e portável. A inter-relação entre módulos fica restrita à interface de mensagens definidas e aceitas por cada módulo. A arquitetura do HetNOS é estruturada e distribuída, pois o sistema é composto de camadas hierárquicas subdivididas em módulos, estes implementados com processos. O nível 1 corresponde ao conjunto de núcleos de sistemas operacionais nativos suportados, sobre o qual é implementado o núcleo distribuído heterogêneo do HetNOS, a DCL (Distributed Computing Layer). O principal serviço fornecido pela DCL (executada no nível 2) é um subsistema de troca de mensagens canônico e independente de localidade. Processos servidores e de usuários podem utilizar as mais variadas formas de comunicação por mensagens, tal como envio, recepção e propagação de mensagens síncronas, assíncronas, bloqueantes e não bloqueantes. No nível 3 estão os servidores do sistema, que estendem e implementam de forma distribuída a funcionalidade do sistema nativo. O Servidor de Nomes é o repositório global de dados, servindo a processos do sistema e de usuários. O Servidor de Autorização implementa o esquema de controle no acesso a recursos do sistema. O Servidor de Tipos permite que aplicações copiem dados estruturados de forma independente de localidade e de arquitetura. Por fim, o Servidor de Arquivos estende os serviços (de arquivos) locais de forma a integrá-los em um único domínio (espaço). No nível 4, arquiteturas e sistemas operacionais são emulados por módulos interpretadores (denominados Emulators). Aplicações de usuários estão espalhadas dos níveis 2 a 5; a camada varia com o tipo de aplicação.
Para demonstrar a viabilidade do sistema, implementou-se a estrutura fundamental do HetNOS, incluindo a DCL (um núcleo distribuído heterogêneo), as versões básicas dos módulos servidores, as bibliotecas de procedimentos, além de diversos tipos de aplicações. O sistema conta hoje com mais de 25.000 linhas de código fonte C em mais de 100 arquivos. O desempenho do subsistema de comunicação implementado pela DCL (em avaliações com diferentes configurações de hardware) superou as expectativas iniciais, mas ainda está muito aquém do necessário a aplicações distribuídas. Segundo o que indicam as primeiras experiências realizadas, o HetNOS será bastante útil na prototipação e avaliação de modelos distribuídos, assim como na programação de software distribuído homogêneo e heterogêneo. Projetos de pesquisa do CPGCC envolvendo sistemas distribuídos (p.ex., tolerância a falhas e simulações) podem utilizar o HetNOS como ferramenta para implementação e validação de seus modelos. Futuramente, aplicações distribuídas e paralelas de maior porte poderão ser programadas, como sistemas de gerência de bases de dados distribuídas, simuladores e sistemas de controle para automação industrial. / The advent of personal computers and, later, of workstations, along with the development of efficient and low-cost communication hardware, has led to the popularization of local-area networks. However, distributed software did not experience the same development as hardware, especially due to the complexity of distributed systems. The machine, system and communication network heterogeneity inherent to modern computing environments is also responsible for the lack of integration and cooperation of available nodes. The purpose of this work is, from the analysis of the main aspects related to distribution and heterogeneity, to design a heterogeneous network operating system. Such a system, named HetNOS (which stands for Heterogeneous Network Operating System), allows users to quickly write and validate distributed homogeneous and heterogeneous applications. Users can concentrate their work on the distribution aspects, abstracting away the details of the communication mechanisms, because programming of distributed applications is based on a homogeneous interface platform that is easy to use and location-independent. Being a network operating system, HetNOS acts over the set of native operating systems; the environment is extended rather than replaced. There are no centralized entities or information, and the algorithms are always distributed, usually yielding greater reliability and performance. The HetNOS topology is a logical ring, a scheme adopted partly for the generality of such a configuration and partly to simplify the HetNOS distributed kernel design. The communication paradigm between modules is message exchange, a mechanism implemented over the sockets network application programming interface. There is no shared memory at all, making the system clearer, more maintainable and more portable. The interrelation between modules is restricted to the message interface defined and accepted by each module. The HetNOS architecture is structured and distributed, as the system is composed of hierarchical layers divided into modules, which are in turn realized as processes. Layer 1 is the set of native operating system kernels, over which the distributed heterogeneous HetNOS kernel, namely the DCL (Distributed Computing Layer), is implemented.
The main service provided by the DCL (in layer 2) is a canonical, location-independent message-exchange mechanism. Server and user processes may use multiple forms of message primitives, such as synchronous, asynchronous, blocking and non-blocking send and receive. In layer 3 are the system servers, which extend and implement in a distributed way the functionality of the native systems. The name server is a global data repository, serving other system and user processes. The authorization server implements the security scheme that controls access to system resources. The type server allows applications to transfer structured data independently of location and architecture. Finally, the file server extends the local (file) services to integrate them into a single domain (space). In layer 4, architectures and operating systems are emulated by interpreter modules (named Emulators). User applications are spread over layers 2 to 5, depending on the application type. In order to prove the system's viability, the fundamental HetNOS structure has been implemented, including its distributed heterogeneous kernel, basic versions of the server modules, the procedure libraries, and several types of applications. The system source code has over 25,000 lines of C distributed over more than a hundred files. Although optimization is an endless process, the performance of the DCL communication subsystem (evaluated using a few different hardware configurations) exceeded initial expectations, but still falls well short of what distributed applications require. According to the first experiments carried out, HetNOS will be of great value for prototyping and evaluating distributed models, as well as for programming homogeneous and heterogeneous distributed software. Local research projects involving distributed systems (e.g., fault tolerance and simulations) may use HetNOS as a tool to implement and validate their models. In the future, more complex distributed and parallel applications will be programmed, such as distributed database management systems, simulators and factory automation control systems.
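The abstract does not give the DCL's programming interface, so the following is a purely hypothetical C sketch of what a blocking, location-independent send primitive layered on the BSD socket API might look like; every name in it (dcl_msg, dcl_send, dcl_resolve, the port scheme) is invented for illustration and is not HetNOS's actual API.

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Hypothetical message: the caller names a logical destination node,
 * not a transport address, which is the essence of location independence. */
struct dcl_msg {
    int    dst_node;            /* logical destination id */
    size_t len;                 /* bytes of payload actually used */
    char   payload[1024];
};

/* Stub lookup: in a real system a name server would map a logical node
 * id to a transport address; here we fake it with localhost plus a port
 * offset so the sketch stays self-contained. */
static int dcl_resolve(int node, struct sockaddr_in *out)
{
    memset(out, 0, sizeof(*out));
    out->sin_family      = AF_INET;
    out->sin_port        = htons(9000 + node);   /* invented port scheme */
    out->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    return 0;
}

/* Blocking send: resolve, connect, write the whole message, close. */
int dcl_send(const struct dcl_msg *m)
{
    struct sockaddr_in addr;
    if (dcl_resolve(m->dst_node, &addr) != 0)
        return -1;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        close(fd);
        return -1;
    }
    ssize_t n = send(fd, m, sizeof(*m), 0);   /* fixed-size frame for simplicity */
    close(fd);
    return n == (ssize_t)sizeof(*m) ? 0 : -1;
}
```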
|
185 |
Network and storage stack specialisation for performance. Marinos, Ilias. January 2018
In order to serve hundreds of millions of users, contemporary content providers employ tens of thousands of servers to scale their systems. The system software in these environments, however, is struggling to keep up with the increase in demand: contemporary network and storage stacks, as well as related APIs (e.g., the BSD socket API), follow a `one-size-fits-all' design, heavily emphasising generality and feature richness at the cost of performance and leaving crucial hardware resources unexploited. Despite considerable prior research into improving I/O performance for conventional stacks, substantial hardware potential remains unexploited because most of these proposals are fundamentally limited in scope and effectiveness: they still have to fit into a general-purpose design. In this dissertation, I argue that specialisation and microarchitectural awareness are necessary in system software design to effectively exploit hardware capabilities and scale I/O performance. In particular, I argue that trading off generality and compatibility allows us to radically re-architect the stack, emphasising application-specific optimisations and efficient data movement throughout the hardware to improve performance. I first demonstrate that conventional general-purpose stacks fail to effectively utilise contemporary hardware while serving critical Internet workloads, and show why modern microarchitectural properties play a critical role in scaling I/O performance. I then identify core decisions in operating system design that, although originally introduced to optimise performance, have since proven redundant or even detrimental. I propose clean-slate, specialised architectures for network and storage stacks designed to exploit modern hardware properties and application domain-specific knowledge in order to sidestep historical bottlenecks in systems I/O performance and achieve great scalability. Through a thorough evaluation of my systems, I illustrate how specialisation and greater microarchitectural awareness could lead to dramatic performance improvements, which could ultimately translate into improved scalability and reduced capital expenditure simultaneously.
|
186 |
Analysis of a coordination framework for mapping coarse-grain applications to distributed systems. Schaefer, Linda Ruth. 01 January 1991
A paradigm is presented for the parallelization of coarse-grain engineering and scientific applications. The coordination framework provides structure and an organizational strategy for a parallel solution in a distributed environment. Three categories of primitives which define the coordination framework are presented: structural, transformational, and operational. The prototype of the paradigm presented in this thesis is the first step towards a programming development tool. This tool will allow non-specialist programmers to parallelize existing sequential solutions through the distribution, synchronization and collection of tasks. The distributed-control, multidimensional pipeline characteristics of the paradigm provide advantages which include load balancing through the use of self-directed workers, a simplified communication scheme ideally suited for infrequent task interaction, a simple programmer interface, and the ability of the programmer to use already existing code. Results for the parallelization of SPICE3C1 in a distributed system of fifteen SUN 3 workstations with one fileserver demonstrate linear speedup with slopes ranging from 0.7 to 0.9. A high-level abstraction of the system is presented in the form of a closed, single-class queuing network model. Using the Mean Value Analysis solution technique from queuing network theory, an expression for total execution time is obtained and is shown to be consistent with the well-known Amdahl's Law. Our expression is in fact a refinement of Amdahl's Law which realistically captures the limitations of the system. We show that the portion of time spent executing serial code which cannot be enhanced by parallelization is a function of N, the number of workers in the system. Experiments reveal the critical nature of the communication scheme and the synchronization of the paradigm. Investigation of the synchronization center indicates that as N increases, visitations to the center increase and degrade system performance. Experimental data provides the information needed to characterize the impact of visitations on the performance of the system. This characterization provides a mechanism for optimizing the speedup of an application. It is shown that the model replicates the system as well as predicts speedup over an extended range of processors, task count, and task size.
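The thesis expression itself is not reproduced in the abstract; for reference, the classical form of Amdahl's Law for N workers and serial fraction s is shown below, together with a schematic of the kind of N-dependent refinement the abstract describes (the actual refined expression in the thesis may differ).

```latex
% Classical Amdahl's Law: speedup with N workers and serial fraction s.
S(N) = \frac{1}{s + \dfrac{1 - s}{N}}

% The refinement described in the abstract lets the non-parallelizable
% fraction grow with N (e.g., through synchronization-center visitations),
% so schematically s becomes a function s(N):
S(N) = \frac{1}{s(N) + \dfrac{1 - s(N)}{N}}
```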
|
187 |
[en] OPERATING SYSTEM KERNEL SCRIPTING WITH LUA / [pt] LUNATIK: SCRIPTING DE KERNEL DE SISTEMA OPERACIONAL COM LUA. LOURIVAL PEREIRA VIEIRA NETO. 26 October 2011
[pt] Existe uma abordagem de projeto para aumentar a flexibilidade de sistemas operacionais, chamada sistema operacional extensível, que sustenta que sistemas operacionais devem permitir extensões para poderem atender a novos requisitos. Existe também uma abordagem de projeto no desenvolvimento de aplicações que sustenta que sistemas complexos devem permitir que usuários escrevam scripts para que eles possam tomar as suas próprias decisões de configuração em tempo de execução. Seguindo estas duas abordagens de projeto, nós construímos uma infra-estrutura que possibilita que usuários carreguem e executem dinamicamente scripts Lua dentro de kernels de sistema operacional, aumentando a flexibilidade deles. Nesta dissertação, nós apresentamos Lunatik, a nossa infra-estrutura para scripting de kernel baseada em Lua, e mostramos um cenário de uso real no escalonamento dinâmico da frequência e voltagem de CPU. Lunatik está implementado atualmente tanto para NetBSD quanto para Linux. /
[en] There is a design approach to improve operating system flexibility, called extensible operating system, which holds that operating systems must allow extensions in order to meet new requirements. There is also a design approach in application development which holds that complex systems should allow users to write scripts in order to let them make their own configuration decisions at run-time. Following these two design approaches, we have built an infrastructure that allows users to dynamically load and run Lua scripts inside operating system kernels, improving their flexibility. In this thesis we present Lunatik, our Lua-based kernel scripting subsystem, and show a real usage scenario in dynamically scaling CPU frequency and voltage. Lunatik is currently implemented for both NetBSD and Linux.
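The abstract does not show Lunatik's kernel interface; as a user-space analogue of the same embedding model (assuming only the standard Lua C API, with an invented CPU-frequency policy script purely for illustration), loading and running a Lua script from C might look like this.

```c
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* User-space analogue of kernel scripting: embed a Lua interpreter and
 * run a small policy script, here one that picks a CPU frequency step
 * from a load figure. Lunatik hosts this embedding model inside the
 * NetBSD/Linux kernels; the script below is illustrative only. */
int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    const char *policy =
        "function pick_freq(load)\n"
        "  if load > 0.8 then return 'max' end\n"
        "  if load < 0.2 then return 'min' end\n"
        "  return 'mid'\n"
        "end";
    if (luaL_dostring(L, policy) != 0) {
        fprintf(stderr, "load error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    /* Call pick_freq(0.9) and print the decision. */
    lua_getglobal(L, "pick_freq");
    lua_pushnumber(L, 0.9);
    if (lua_pcall(L, 1, 1, 0) != 0) {
        fprintf(stderr, "call error: %s\n", lua_tostring(L, -1));
        return 1;
    }
    printf("frequency decision: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}
```

Because the script can be replaced at run-time, the policy can change without recompiling the host, which is the flexibility argument the abstract makes for extensible kernels.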
|
188 |
Turbo-equalization for QAM constellations. Petit, Paul. January 2002
While the focus of this work is on turbo equalization, there is also an examination of equalization techniques, including MMSE linear and DFE equalizers, and precoding. The losses and capacity associated with the ISI channel are also examined. Iterative decoding of concatenated codes is briefly reviewed and the MAP algorithm is explained.
|
189 |
Formal memory models for verifying C systems code. Tuch, Harvey. Computer Science & Engineering, Faculty of Engineering, UNSW. January 2008
Systems code is almost universally written in the C programming language or a variant. C has a very low level of type and memory abstraction, and formal reasoning about C systems code requires a memory model that is able to capture the semantics of C pointers and types. At the same time, proof-based verification demands abstraction, in particular from the aliasing and frame problems. In this thesis, we study the mechanisation of a series of models, from semantic to separation logic, for achieving this abstraction when performing interactive theorem-prover based verification of C systems code in higher-order logic. We do not make common oversimplifications, but correctly deal with C's model of programming language values and the heap, while developing the ability to reason abstractly and efficiently. We validate our work by demonstrating that the models are applicable to real, security- and safety-critical code by formally verifying the memory allocator of the L4 microkernel. All formalisations and proofs have been developed and machine-checked in the Isabelle/HOL theorem prover.
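The abstract alludes to C's weak type and memory abstraction; as a small, self-contained illustration (not drawn from the thesis), the fragment below shows the kind of cross-type pointer aliasing that such a memory model must be able to capture.

```c
#include <stdio.h>

/* The same bytes viewed through pointers of different types: an update
 * through the char view is visible through the int view. Reasoning about
 * code like this requires a memory model that tracks both pointer values
 * and the types at which memory is read and written. */
int main(void)
{
    unsigned int word = 0x11223344u;
    unsigned char *bytes = (unsigned char *)&word;  /* aliases the same storage */

    bytes[0] ^= 0xFFu;                       /* write through the char view ...   */
    printf("word is now 0x%08x\n", word);    /* ... observed through the int view */

    /* The printed value depends on the machine's byte order, exactly the
     * kind of low-level detail a faithful C memory model has to expose. */
    return 0;
}
```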
|
190 |
Programmer friendly and efficient distributed shared memory integrated into a distributed operating system. Silcock, Jackie (mikewood@deakin.edu.au). January 1998
Distributed Shared Memory (DSM) provides programmers with a shared memory environment in systems where memory is not physically shared. Clusters of Workstations (COWs), an often untapped source of computing power, are characterised by a very low cost/performance ratio. The combination of COWs with DSM provides an environment in which the programmer can use the well-known approaches and methods of programming for physically shared memory systems, and in which parallel processing can be carried out to make full use of the computing power and cost advantages of the COW.
The aim of this research is to synthesise and develop a distributed shared memory system as an integral part of an operating system, in order to provide application programmers with a convenient environment in which parallel applications can be developed and executed easily, efficiently and transparently. Furthermore, in order to satisfy our challenging design requirements, we want to demonstrate that the operating system into which the DSM system is integrated should be a distributed operating system.
This thesis reports a study into the synthesis of a DSM system within a microkernel and client-server based distributed operating system, using both strict and weak consistency models and a write-invalidate and write-update based approach to consistency maintenance. It also reports a unique automatic initialisation system which allows the programmer to start the parallel execution of a group of processes with a single library call. The number and location of these processes are determined by the operating system based on system load information.
The proposed DSM system takes a novel approach in that it provides programmers with a complete programming environment in which they can easily develop and run their own code, or indeed run existing shared memory code. A set of demanding DSM system design requirements is presented, and the incentives for placing the DSM system within a distributed operating system, and in particular in the memory management server, are reported. The new DSM system is centred on an event-driven set of cooperating and distributed entities, and a detailed description of the events, and the reactions to these events, that make up the operation of the DSM system is then presented. This is followed by a pseudocode form of the detailed design of the main modules and activities of the primitives used in the proposed DSM system.
Quantitative results of performance tests and qualitative results showing the ease of programming and use of the RHODOS DSM system are reported. A study of five different applications is given, and the results of tests carried out on these applications, together with a discussion of the results, are presented. A discussion of how RHODOS DSM allows programmers to write shared memory code in an easy-to-use and familiar environment, and a comparative evaluation of RHODOS DSM against other DSM systems, is also presented. In particular, the ease of use and transparency of the DSM system have been demonstrated through the ease with which a moderately inexperienced undergraduate programmer was able to convert, write and run applications for testing the DSM system. Furthermore, the tests performed using physically shared memory show that the latter is indistinguishable from distributed shared memory; this is further evidence that the DSM system is fully transparent. This study clearly demonstrates that the aim of the research has been achieved; it is possible to develop a programmer friendly and efficient DSM system fully integrated within a distributed operating system.
It is clear from this research that DSM integrated into a client-server and microkernel based distributed operating system makes shared memory operations transparent and almost completely removes the involvement of the programmer beyond the classical activities needed to deal with shared memory. The conclusion can be drawn that DSM, when implemented within a client-server and microkernel based distributed operating system, is one of the most encouraging approaches to parallel processing, since it guarantees performance improvements with minimal programmer involvement.
|