91

Virtualisation des réseaux : performance, partage et applications / Network virtualization : performance, sharing and applications

Anhalt, Fabienne 07 July 2011 (has links)
Virtualization appears as a key solution to revolutionize the ossified architecture of networks such as the Internet. By adding a layer of abstraction on top of the hardware, virtualization makes it possible to manage and configure virtual networks independently of one another. The resulting flexibility gives the operator of a virtual network the ability to configure the topology and to modify the protocol stacks. So far, network virtualization has been deployed in research and experimentation testbeds, allowing researchers to experiment with routing protocols. Introducing virtualization into production networks such as those of the Internet raises several new challenges, in particular the performance and the sharing of routing and switching resources. These two questions are especially relevant when the data plane of the network itself is virtualized, in order to offer maximum isolation and configurability. We therefore first evaluate and analyze the impact of virtualization on the performance of software virtual routers. Then, with the aim of virtualizing the data plane in production networks, we propose a hardware architecture for a virtualized switch that enables differentiated sharing of resources such as ports and memory buffers. Finally, we examine possible applications of virtual networks and propose an on-demand virtual network service with configurable routing and controlled bandwidth, which we apply and evaluate in the context of virtual infrastructures.
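The abstract above does not describe how the differentiated sharing of switch resources is realized, so the sketch below is purely an editorial illustration rather than the thesis's design: a weighted max-min fair split of a physical port's bandwidth among virtual networks. All names, capacities and weights are hypothetical.

```python
def weighted_max_min_share(capacity, demands, weights):
    """Split a physical port's capacity among virtual networks.

    Each virtual network asks for demands[vn] and carries weights[vn];
    the share left over by satisfied networks is redistributed to the
    others (weighted max-min fairness).
    """
    alloc = {vn: 0.0 for vn in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[vn] for vn in active)
        satisfied = set()
        for vn in active:
            fair = remaining * weights[vn] / total_w
            if demands[vn] - alloc[vn] <= fair:
                alloc[vn] = demands[vn]
                satisfied.add(vn)
        if not satisfied:
            # Nobody can be fully served: hand out the weighted shares and stop.
            for vn in active:
                alloc[vn] += remaining * weights[vn] / total_w
            break
        remaining = capacity - sum(alloc.values())
        active -= satisfied
    return alloc

# Hypothetical example: a 10 Gb/s port shared by three virtual networks.
print(weighted_max_min_share(10.0,
                             {"vnet_a": 2.0, "vnet_b": 6.0, "vnet_c": 8.0},
                             {"vnet_a": 1, "vnet_b": 1, "vnet_c": 2}))
```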
92

Modeling the performance impact of hot code misprediction in Cross-ISA virtual machines / Modelagem do impacto de erros de predição de código quente no desempenho de máquinas virtuais

Lucas, Divino César Soares, 1985- 04 September 2013 (has links)
Advisors: Guido Costa Souza de Araújo, Edson Borin / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Virtual machines (VMs) are systems that aim to eliminate the compatibility gap between two, possibly distinct, interfaces, thus enabling them to communicate. Acting as a mediator, the VM sits at a position that lets it foster innovative solutions to many problems.
Such systems usually rely on emulation techniques, such as interpretation and dynamic binary translation, to execute guest application code. To select the best emulation technique for each code segment, the VM typically needs to predict whether the benefit of compiling the code outweighs its cost. In the common case this reduces to predicting whether a given code region will be frequently executed or not, a problem known as Hot Code Prediction. Generally, if the predictor flags a code region as hot, the VM immediately decides to compile it. This strategy has a weakness, however: the predictor's response is only a heuristic decision and can therefore be wrong. Whenever the predictor flags as hot a code region that will in fact be infrequently executed (cold code), it makes a hotness misprediction. When a misprediction happens, the emulation technique the VM applies to that code does not have its cost amortized by executing the optimized code, so the VM ends up spending more time executing its own code than the guest application's code. In this work we measure the impact of hotness mispredictions on a VM emulating several kinds of applications. In our analysis we evaluate the threshold-based hot code predictor, a technique commonly used to identify hot code fragments. To do so we developed a mathematical model that simulates the behavior of such a predictor and used it to estimate the impact of mispredictions on several benchmarks. We show that this predictor frequently mispredicts code hotness and that, as a result, the VM's execution time becomes dominated by miscompilations. Moreover, we show how the threshold choice affects the number of mispredictions and how this impacts VM performance. We also show how the compilation cost, the interpretation cost, and the steady-state execution speed of translated code affect VM performance. Finally, we show that using only the SPEC CPU 2006 benchmarks to measure the performance of a VM that uses the threshold-based predictor can lead to misleading results. / Master's / Computer Science / Master of Computer Science
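As a hedged illustration of the threshold-based hot-code predictor discussed above (not the dissertation's actual mathematical model), the sketch below counts, for a made-up execution profile, how many code regions a given threshold would send to the compiler and how many of those turn out to be cold, i.e. hotness mispredictions.

```python
def simulate_threshold_predictor(exec_counts, threshold, hot_cutoff):
    """Classify code regions with a simple threshold predictor.

    A region is *predicted* hot as soon as it executes `threshold` times;
    it *really is* hot if its total execution count reaches `hot_cutoff`.
    Regions predicted hot but really cold are hotness mispredictions:
    compilation cost paid without amortization.
    """
    predicted_hot = {r for r, n in exec_counts.items() if n >= threshold}
    really_hot = {r for r, n in exec_counts.items() if n >= hot_cutoff}
    mispredicted = predicted_hot - really_hot
    return {
        "compiled": len(predicted_hot),
        "mispredicted": len(mispredicted),
        "misprediction_rate": (len(mispredicted) / len(predicted_hot)
                               if predicted_hot else 0.0),
    }

# Hypothetical profile: region name -> total executions in the guest run.
profile = {"r0": 5, "r1": 60, "r2": 200, "r3": 75, "r4": 12, "r5": 10_000}
for threshold in (10, 50, 100):
    print(threshold, simulate_threshold_predictor(profile, threshold, 1000))
```

On this invented profile, raising the threshold reduces the number of mispredicted (cold) regions that get compiled, which is the trade-off the dissertation studies.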
93

Agent-based crowd simulation using GPU computing

O’Reilly, Sean Patrick January 2014 (has links)
M.Sc. (Information Technology) / The purpose of the research is to investigate agent-based approaches to virtual crowd simulation. Crowds are ubiquitous and are becoming an increasingly common phenomenon in modern society, particularly in urban settings. As such, crowd simulation systems are becoming increasingly popular in training simulations, pedestrian modelling, emergency simulations, and multimedia. One of the primary challenges in crowd simulation is the ability to model realistic, large-scale crowd behaviours in real time. This is a challenging problem, as the size, visual fidelity, and complex behaviour models of the crowd all have an impact on the available computational resources. In the last few years, the graphics processing unit (GPU) has presented itself as a viable computational resource for general-purpose computation. Traditionally, GPUs were used solely for their ability to efficiently compute operations related to graphics applications. However, the modern GPU is a highly parallel programmable processor, with substantially higher peak arithmetic and memory bandwidth than its central processing unit (CPU) counterpart. The GPU's architecture makes it a suitable processing resource for computations that are parallel or distributed in nature. One attribute of multi-agent systems (MASs) is that they are inherently decentralised. As such, a MAS that leverages advancements in GPU computing may provide a solution for crowd simulation. The research investigates techniques and methods for general-purpose crowd simulation, including topics in agent behavioural models, path planning, collision avoidance and agent steering. The research also investigates how GPU computing has been utilised to address these computationally intensive problem domains. Based on the outcomes of the research, an agent-based model, Massively Parallel Crowds (MPCrowds), is proposed to address virtual crowd simulation, using the GPU as an additional resource for agent computation.
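MPCrowds itself is not described in code in the abstract; as an assumption-laden sketch of the kind of per-agent, data-parallel update that maps well to GPU threads, the following uses NumPy array operations as a stand-in for a GPU kernel (conceptually one thread per agent, combining goal seeking with a simple separation force). All parameters are invented.

```python
import numpy as np

def step(pos, vel, goals, dt=0.05, max_speed=1.5, avoid_radius=0.5):
    """One data-parallel update of every agent: goal seeking plus a
    pairwise separation force.  Each agent reads only shared global
    state, so the same logic maps naturally onto one GPU thread per agent."""
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    desired = to_goal / dist * max_speed

    # Pairwise separation: push agents apart when closer than avoid_radius.
    diff = pos[:, None, :] - pos[None, :, :]            # shape (N, N, 2)
    d = np.linalg.norm(diff, axis=2) + 1e-9
    near = (d < avoid_radius) & (d > 0)
    push = (diff / d[..., None] * near[..., None]).sum(axis=1)

    vel = 0.5 * vel + 0.5 * (desired + push)
    return pos + vel * dt, vel

# Hypothetical scene: 1000 agents converging on the origin.
rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(1000, 2))
vel = np.zeros_like(pos)
goals = np.zeros_like(pos)
for _ in range(100):
    pos, vel = step(pos, vel, goals)
print(np.linalg.norm(pos, axis=1).mean())   # mean distance to the goal after 100 steps
```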
94

Research on virtualisation technology for real-time reconfigurable systems / Étude des techniques de virtualisation pour des systèmes temps-réel et reconfigurables dynamiquement

Xia, Tian 05 July 2016 (has links)
This thesis describes Ker-ONE, an original hypervisor-like micro-kernel that manages virtualization for embedded systems on SoC platforms and provides an environment for real-time virtual machines. We simplified the micro-kernel architecture by keeping only the critical features required for virtualization, greatly reducing the complexity of the kernel design. On top of this micro-kernel, we introduced a framework capable of managing dynamically and partially reconfigurable (DPR) resources in a virtual machine system. Reconfigurable hardware accelerators are mapped as ordinary devices in each virtual machine. Through dedicated memory management, the framework automatically detects requests for DPR resources and allocates FPGA resources dynamically. Experiments and evaluations on the Zynq-7000 platform, which combines ARM cores with FPGA fabric, show that Ker-ONE introduces very low virtualization overhead in terms of execution time, overhead that can generally be ignored in real applications. We also studied real-time schedulability inside virtual machines; the results show that RTOS tasks are guaranteed to be scheduled while meeting their intra-VM timing constraints. Finally, we demonstrated that the proposed framework can allocate hardware accelerators to virtual machines with low overhead.
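Ker-ONE's actual allocation mechanism is not given in the abstract; the toy model below only illustrates the general idea of handing out a small pool of partially reconfigurable regions to accelerator requests coming from virtual machines, queueing requests when no region is free. Region and accelerator names are hypothetical.

```python
class DprAllocator:
    """Toy model of allocating FPGA partially reconfigurable regions to
    accelerator requests from virtual machines: a request gets a free
    region immediately (which would trigger a partial reconfiguration),
    otherwise it waits in FIFO order until a region is released."""

    def __init__(self, regions):
        self.free = list(regions)
        self.busy = {}          # region -> (vm, accelerator)
        self.waiting = []       # FIFO of (vm, accelerator)

    def request(self, vm, accelerator):
        if self.free:
            region = self.free.pop(0)
            self.busy[region] = (vm, accelerator)
            return region       # partial reconfiguration would happen here
        self.waiting.append((vm, accelerator))
        return None             # VM blocks or polls until a region frees up

    def release(self, region):
        self.busy.pop(region)
        if self.waiting:
            vm, acc = self.waiting.pop(0)
            self.busy[region] = (vm, acc)
            return region, (vm, acc)
        self.free.append(region)
        return region, None

# Hypothetical use: two PR regions shared by three VM requests.
alloc = DprAllocator(["pr0", "pr1"])
print(alloc.request("vm1", "fft"))      # -> 'pr0'
print(alloc.request("vm2", "aes"))      # -> 'pr1'
print(alloc.request("vm3", "fir"))      # -> None (queued)
print(alloc.release("pr0"))             # -> ('pr0', ('vm3', 'fir'))
```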
95

A comparison framework for server virtualisation systems: a case study

Van Tonder, Martin Stephen January 2006 (has links)
Recent years have seen a revival of interest in virtualisation research. Although this term has been used to refer to various systems, the focus of this research is on systems which partition a single physical server into multiple virtual servers. It is difficult for researchers and practitioners to get a clear picture of the state of the art in server virtualisation. This is due in part to the large number of systems available. Another reason is that information about virtualisation systems lacks structure and is dispersed among multiple sources. Practitioners, such as data centre managers and systems administrators, may be familiar with virtualisation systems from a specific vendor, but generally lack a broader view of the field. This makes it difficult to make informed decisions when selecting these systems. Researchers and vendors who are developing virtualisation systems also lack a standard framework for identifying the strengths and weaknesses of their systems compared to competing systems. It is also time-consuming for researchers who are new to the field to learn about current virtualisation systems. The purpose of this research was to develop a framework to solve these problems. The objectives of the research correspond to the applications of the framework. These include conducting comparative evaluations of server virtualisation systems, identifying strengths and weaknesses of particular virtualisation systems, specifying virtualisation system requirements to facilitate system selection, and gathering information about current virtualisation systems in a structured form. These four objectives were satisfied. The design of the framework was also guided by six framework design principles. These principles, or secondary objectives, were also met. The framework was developed based on an extensive literature study of data centres, virtualisation and current virtualisation systems. Criteria were selected through an inductive process. The feasibility of conducting evaluations using the framework was demonstrated by means of literature-based evaluations and a practical case study. The use of the framework to facilitate virtualisation system selection was also demonstrated by means of a case study featuring the NMMU Telkom CoE data centre. The framework has a number of practical applications, ranging from facilitating decision-making to identifying areas for improvement in current virtualisation systems. The information resulting from evaluations using the framework is also a valuable resource for researchers who are new to the field. The literature study which forms the theoretical foundation of this work is particularly useful in this regard. A future extension to this work would be to develop a decision support system based on the framework. Another possibility is to make the framework, and evaluations, available online as a resource for data centre managers, vendors and researchers. This would also enable other researchers to provide additional feedback, enabling the framework to be further refined.
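The dissertation's framework criteria are not reproduced here; the sketch below merely shows how a criteria-based comparison of virtualisation systems could be reduced to a weighted score to support selection decisions. Criteria, weights and scores are invented for illustration.

```python
def rank_systems(scores, weights):
    """Rank virtualisation systems by a weighted sum of per-criterion
    scores (0-5 scale here).  Criteria, weights and scores are purely
    illustrative, not taken from the dissertation's framework."""
    totals = {
        system: sum(weights[c] * s for c, s in per_criterion.items())
        for system, per_criterion in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical evaluation of three anonymous systems.
weights = {"isolation": 0.4, "performance": 0.3, "management": 0.2, "cost": 0.1}
scores = {
    "system_a": {"isolation": 5, "performance": 3, "management": 4, "cost": 2},
    "system_b": {"isolation": 3, "performance": 5, "management": 3, "cost": 4},
    "system_c": {"isolation": 4, "performance": 4, "management": 2, "cost": 5},
}
for system, total in rank_systems(scores, weights):
    print(f"{system}: {total:.1f}")
```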
96

Evaluation of a multiple criticality real-time virtual machine system and configuration of an RTOS's resources allocation techniques / Évaluation de la virtualisation sur les systèmes temps-réel à criticité multiple et configuration des techniques d'allocation de ressources sur les systèmes d'exploitation temps-réel

Aichouch, Mohamed El Mehdi 28 May 2014 (has links)
In the domain of server and mainframe systems, virtualizing a computing system's physical resources to achieve improved sharing and utilization has been well established for decades. Full virtualization of all system resources makes it possible to run multiple guest operating systems on a single physical platform. Recently, the availability of full virtualization on physical platforms that target embedded systems has created new use cases in the domain of real-time embedded systems; unlike enterprise workloads, however, embedded applications must meet real-time constraints while performing their tasks. In this dissertation we use an existing virtual machine monitor to evaluate the performance of a real-time operating system. We measured the overheads and latencies of the internal functions of a guest OS deployed in a virtual machine and compared them with those of the same OS running on a physical machine; both metrics are higher when virtualization is used. Our analysis revealed that the hardware mechanisms that allow a virtual machine monitor to efficiently virtualize the processor, the memory management unit, and the input/output devices are necessary to limit the overhead of virtualization. More importantly, the scheduling of virtual machines by the VMM is essential to guarantee the temporal constraints of the system and has to be configured carefully.
In a second piece of work, starting from a previous project that allows a system designer to explore a hardware/software co-design of a solution using high-level simulation models, we proposed a methodology for transforming a simulation model into a binary executable on a physical platform while preserving the model's easy configuration of resource-allocation techniques. The idea is to give the system designer the tools to rapidly explore and validate the design space, and then to generate a configuration that can be used directly on a physical platform. We used a model-driven engineering approach to perform a model-to-model transformation that converts the simulation model into an executable model, together with a middleware layer, placed between the operating system and the application, that supports a variety of resource-allocation techniques in order to implement the configuration selected by the system designer during the simulation phase. We built a prototype that implements the methodology, and the results of the experiments confirmed the viability of this approach.
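As an editorial illustration of the kind of overhead measurement described above (not the dissertation's actual benchmark), the sketch below summarizes two sets of latency samples, one native and one virtualized, and reports their ratio per statistic. The sample values are invented, not measured data.

```python
import statistics

def summarize(samples_us):
    """Summary statistics for a set of measured latencies (microseconds)."""
    return {
        "min": min(samples_us),
        "avg": statistics.mean(samples_us),
        "max": max(samples_us),
        "p99": sorted(samples_us)[int(0.99 * (len(samples_us) - 1))],
    }

def overhead(native, virtualized):
    """Relative overhead of running the same RTOS service virtualized."""
    n, v = summarize(native), summarize(virtualized)
    return {k: v[k] / n[k] for k in n}

# Hypothetical interrupt-latency samples (microseconds), NOT measured data.
native = [4.1, 4.3, 4.2, 4.5, 4.2, 4.4, 4.3, 4.6, 4.2, 9.0]
virt   = [7.9, 8.4, 8.1, 8.8, 8.0, 8.6, 8.2, 9.1, 8.3, 21.5]
print(overhead(native, virt))
```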
97

The use of a virtual machine as an access control mechanism in a relational database management system.

Van Staden, Wynand Johannes 04 June 2008 (has links)
This dissertation considers the use of a virtual machine as an access control mechanism in a relational database management system. Such a mechanism may prove to be more flexible than the normal access control mechanism that forms part of a relational database management system. The background information provided in this text (required to clearly comprehend the issues related to the virtual machine and its language) introduces databases, security, and security mechanisms in relational database management systems. It also describes an existing implementation of a virtual machine that is used as a pseudo access control mechanism to examine data that travels across an electronic communications network. The language of the virtual machine is then considered in detail, since it is this language that determines the power and flexibility the virtual machine offers. The capabilities of the language are illustrated by showing how it can be used to implement selected access control policies. Furthermore, it is shown that the language can be used to access data stored in relations in a safe manner, and that adding the programs to the DAC model does not cause a significant increase in the management of a decentralised access control model. Following the proposed language, the architecture of the "new" access control subsystem is also important, since this architecture determines where the virtual machine fits into the access control mechanism as a whole. Other extensions to the access control subsystem that are important for its functioning are also reflected upon. Finally, before concluding, the dissertation provides general considerations that have to be taken into account for any potential implementation of the virtual machine; aspects such as the runtime support system, data types and capabilities for extensions are taken into consideration. By examining all of these aspects (the access control language and programs, the virtual machine, and the extensions to the access control subsystem), it is shown that the virtual machine and the language offered in this text provide the capability of implementing all the basic access control policies that can normally be provided. Additionally, they can equip the database administrator with a tool to implement even more complex policies which cannot be handled in a simple manner by the normal access control system, without requiring that such policies be implemented at the application level. It is also shown that the new, extended access control subsystem does not significantly alter the way in which access control is managed in a relational database management system. / Prof. M.S. Olivier
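The dissertation's virtual machine language is not reproduced in the abstract; the toy interpreter below is an invented stand-in that only illustrates the general idea of expressing an access control decision as a small program executed by a virtual machine. Opcodes, context keys and the example policy are all hypothetical.

```python
def run_policy(program, ctx):
    """Evaluate a tiny access-control program on a stack machine.

    Instructions: ('push', value), ('load', context_key), ('eq',),
    ('and',), ('or',).  The final top of stack decides ALLOW/DENY.
    This toy language is an editorial invention, not the dissertation's.
    """
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "load":
            stack.append(ctx.get(args[0]))
        elif op == "eq":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        elif op == "and":
            b, a = stack.pop(), stack.pop()
            stack.append(bool(a) and bool(b))
        elif op == "or":
            b, a = stack.pop(), stack.pop()
            stack.append(bool(a) or bool(b))
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return "ALLOW" if stack.pop() else "DENY"

# Hypothetical policy: allow if the user owns the row OR holds the 'auditor' role.
policy = [
    ("load", "user"), ("load", "row_owner"), ("eq",),
    ("load", "role"), ("push", "auditor"), ("eq",),
    ("or",),
]
print(run_policy(policy, {"user": "alice", "row_owner": "bob", "role": "auditor"}))
```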
98

Otimização de alocação de máquinas virtuais em datacenter heterogêneo de sistema de computação em nuvem /

Rodrigues, João Antonio Magri. January 2019 (has links)
Advisor: Aleardo Manacero Junior / Committee: Rafael Pasquini / Committee: Rodolfo Ipolito Meneguetti / Abstract: Cloud computing is a term referring to a technology that offers computing services through the Internet using machine virtualization, a process in which a virtual environment is deployed to run an application, consuming part of the resources of a real machine. The performance of a cloud computing system therefore depends on the efficiency of the placement of virtual machines on real machines, subject to given goals and constraints. This work presents a new approach for virtual machine placement that optimizes both the number of active physical machines and the network traffic in the datacenter, handling the conflict and balance between these two objectives in heterogeneous systems. The proposed approach is based on a modification of the Kernighan-Lin graph-partitioning algorithm to deal with communication costs, together with heuristics to minimize the number of physical machines. The text presents a conceptual review of cloud computing, the state of the art of the virtual machine placement problem, the implementation of the algorithm, and its evaluation. The proposed algorithm is evaluated against a conventional heuristic and a state-of-the-art algorithm in various scenarios. The results show how hard it is to reconcile these two objectives in heterogeneous systems, but also that the solutions obtained by the proposed approach are of good quality. / Master's
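The sketch below is not the dissertation's modified Kernighan-Lin algorithm; it is a much simpler stand-in that illustrates the two competing goals discussed above: a first-fit-decreasing pass to keep the number of active hosts low, followed by a greedy pass that co-locates heavily communicating VM pairs when capacity allows. Hosts, VMs, capacities and traffic volumes are hypothetical, and the code assumes every VM fits on some host.

```python
def place_vms(vm_cpu, host_cap, traffic):
    """Two-phase VM placement.

    Phase 1 packs VMs with first-fit decreasing to keep the number of
    active hosts low; phase 2 greedily moves a VM onto the host of its
    heaviest-traffic partner when capacity allows.  Assumes every VM
    fits on at least one host."""
    hosts = {h: [] for h in host_cap}
    used = {h: 0 for h in host_cap}

    # Phase 1: first-fit decreasing on CPU demand.
    for vm in sorted(vm_cpu, key=vm_cpu.get, reverse=True):
        for h in hosts:
            if used[h] + vm_cpu[vm] <= host_cap[h]:
                hosts[h].append(vm)
                used[h] += vm_cpu[vm]
                break

    # Phase 2: co-locate chatty pairs, heaviest traffic first.
    where = {vm: h for h, vms in hosts.items() for vm in vms}
    for (a, b), _vol in sorted(traffic.items(), key=lambda kv: -kv[1]):
        for x, y in ((a, b), (b, a)):              # try moving x next to y
            hx, hy = where[x], where[y]
            if hx != hy and used[hy] + vm_cpu[x] <= host_cap[hy]:
                hosts[hx].remove(x); used[hx] -= vm_cpu[x]
                hosts[hy].append(x); used[hy] += vm_cpu[x]
                where[x] = hy
                break
    return hosts

# Hypothetical instance: 4 VMs, 3 heterogeneous hosts, two chatty pairs.
print(place_vms({"vm1": 4, "vm2": 4, "vm3": 2, "vm4": 2},
                {"h1": 8, "h2": 8, "h3": 4},
                {("vm1", "vm4"): 10, ("vm2", "vm3"): 3}))
```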
99

TOWARDS A SECURITY REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION

Unknown Date (has links)
Network Function Virtualization (NFV) is an emerging technology that transforms legacy hardware-based network infrastructure into software-based virtualized networks. Instead of using dedicated hardware and network equipment, NFV relies on cloud and virtualization technologies to deliver network services to its users. These virtualized network services are considered better solutions than hardware-based network functions because their resources can be dynamically increased upon the consumer's request. While their usefulness cannot be denied, they also have security implications. In a complex system like NFV, threats can come from a variety of domains because its infrastructure contains both hardware and virtualized entities. Also, since it relies on software, the network service in NFV can be manipulated by external entities such as third-party providers or consumers. This gives NFV a larger attack surface than the traditional network infrastructure. In addition to its own threats, NFV also inherits security threats from its underlying cloud infrastructure. Therefore, to design a secure NFV system and utilize its full potential, we must have a good understanding of its underlying architecture and its possible security threats. Up until now, only imprecise models of this architecture existed. We try to improve this situation by using architectural modeling to describe and analyze the threats to NFV. Architectural modeling using Patterns and Reference Architectures (RAs) applies abstraction, which helps to reduce the complexity of NFV systems by defining their components at the highest level. The literature lacks attempts to apply this approach to analyzing NFV threats. We started by enumerating the possible threats that may jeopardize the NFV system. Then, we analyzed these threats to identify the possible misuses that could be performed from them. These threats are realized in the form of misuse patterns that show how an attack is performed from the point of view of the attacker. Some of the most important threats are privilege escalation, virtual machine escape, and distributed denial of service. We used a reference architecture of NFV to determine where to add security mechanisms in order to mitigate the identified threats. This leads to our ultimate goal: building a security reference architecture for NFV. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
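The dissertation's misuse patterns are not reproduced here; the snippet below only sketches one possible way to record such patterns as data, linking a threat named in the abstract to illustrative NFV components and countermeasures. The component and countermeasure names are assumptions, not taken from the reference architecture.

```python
from dataclasses import dataclass, field

@dataclass
class MisusePattern:
    """Minimal record of a misuse pattern: the threat it realizes, the
    NFV components it touches, and the countermeasures a reference
    architecture might place against it.  All field values are illustrative."""
    name: str
    threat: str
    affected_components: list = field(default_factory=list)
    countermeasures: list = field(default_factory=list)

catalog = [
    MisusePattern(
        name="VM escape to the NFVI host",
        threat="virtual machine escape",
        affected_components=["hypervisor", "VNF", "NFVI compute node"],
        countermeasures=["hypervisor hardening", "VNF isolation policies"],
    ),
    MisusePattern(
        name="Privilege escalation via the management interface",
        threat="privilege escalation",
        affected_components=["VIM", "MANO"],
        countermeasures=["role-based access control", "API authentication"],
    ),
]
for p in catalog:
    print(f"{p.threat}: mitigated by {', '.join(p.countermeasures)}")
```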
100

A layered virtual memory manager.

Mason, Andrew Halstead. January 1977 (has links)
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering, 1977 / Bibliography: leaves 127-132.
