1 |
High Performance Inter-kernel Communication and Networking in a Replicated-kernel Operating System
Ansary, B M Saif, 20 January 2016
Modern computer hardware platforms are moving toward high core-count and heterogeneous Instruction Set Architecture (ISA) processors to achieve improved performance, as single-core performance has reached its limit. These trends put the current monolithic SMP operating system (OS) under scrutiny in terms of scalability and portability. Proper pairing of computing workloads with computing resources has become increasingly arduous under traditional software architectures.
One of the most promising emerging operating system architectures is the multi-kernel. Multi-kernels not only address scalability issues but also inherently support heterogeneity; furthermore, they provide an easy way to map computing workloads to the correct type of processing resource in the presence of heterogeneity. Multi-kernels do so by partitioning the resources, running independent kernel instances on the partitions, and having those instances cooperate to present a unified view of the system to the application. Popcorn is one of the most prominent multi-kernels today; it is unique in that it runs multiple Linux instances on different cores or groups of cores and provides a unified view of the system, i.e., a Single System Image (SSI).
This thesis presents four contributions. First, it introduces a filesystem for Popcorn, a vital component for providing an SSI. Popcorn supports thread/process migration, which requires migration of file descriptors, something that neither traditional filesystems nor popular distributed filesystems provide. This work therefore proposes a scalable, messaging-based file descriptor migration and consistency protocol for Popcorn.
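As a concrete illustration of what a messaging-based file descriptor migration could look like, the sketch below shows a hypothetical migration message and a destination-side handler that recreates the descriptor. The message layout, field names, and use of plain POSIX calls are assumptions for illustration, not Popcorn's actual protocol.

```c
/* Hypothetical sketch of messaging-based file descriptor migration
 * (illustrative only; not Popcorn's actual implementation). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define FD_PATH_MAX 256

/* Assumed wire format: just enough state to recreate the descriptor remotely. */
struct fd_migrate_msg {
    int   origin_kernel;          /* kernel instance that owned the fd        */
    int   fd;                     /* descriptor number to preserve            */
    char  path[FD_PATH_MAX];      /* backing file (assumed path-resolvable)   */
    off_t offset;                 /* current file offset                      */
    int   flags;                  /* open(2) flags (O_RDONLY, O_APPEND, ...)  */
};

/* Destination-side handler: rebuild the descriptor with the same number. */
static int handle_fd_migrate(const struct fd_migrate_msg *msg)
{
    int newfd = open(msg->path, msg->flags);
    if (newfd < 0)
        return -1;

    if (lseek(newfd, msg->offset, SEEK_SET) < 0 ||  /* restore the offset   */
        dup2(newfd, msg->fd) < 0) {                 /* keep the fd number   */
        close(newfd);
        return -1;
    }
    if (newfd != msg->fd)
        close(newfd);                               /* drop the temporary   */
    return 0;
}

int main(void)
{
    /* Example message as it might arrive over the inter-kernel channel. */
    struct fd_migrate_msg msg = { .origin_kernel = 0, .fd = 7,
                                  .offset = 0, .flags = O_RDONLY };
    strncpy(msg.path, "/etc/hostname", FD_PATH_MAX - 1);

    printf("fd migration %s\n", handle_fd_migrate(&msg) ? "failed" : "ok");
    return 0;
}
```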
Second, multi-kernel OSs rely heavily on a fast, low-latency messaging layer to be scalable. Messaging is even more important in heterogeneous systems, where different types of cores sit on different islands with no shared memory. Thus, another contribution proposes a fast, low-latency messaging layer to enable communication among heterogeneous processor islands for Heterogeneous Popcorn.
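One way to picture such a messaging layer is the minimal polling message ring sketched below, the kind of primitive inter-island channels are often built on. The slot layout and names are assumptions; a real implementation would place the ring in a region reachable over the interconnect rather than in local memory.

```c
/* Minimal single-producer/single-consumer message ring sketch
 * (illustrative assumption, not the thesis's messaging layer). */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 64
#define SLOT_BYTES 64

struct msg_ring {
    _Atomic uint32_t head;                  /* next slot to write            */
    _Atomic uint32_t tail;                  /* next slot to read             */
    uint8_t slots[RING_SLOTS][SLOT_BYTES];  /* in a real system this would be
                                               a window mapped across the
                                               interconnect, not local RAM   */
};

/* Sender side: copy the payload into the next free slot, then publish it. */
static int ring_send(struct msg_ring *r, const void *buf, size_t len)
{
    uint32_t head = atomic_load(&r->head);
    if (len > SLOT_BYTES || head - atomic_load(&r->tail) == RING_SLOTS)
        return -1;                          /* message too big or ring full  */
    memcpy(r->slots[head % RING_SLOTS], buf, len);
    atomic_store(&r->head, head + 1);       /* publish to the receiver       */
    return 0;
}

/* Receiver side: poll until a slot is published, then consume it. */
static int ring_recv(struct msg_ring *r, void *buf, size_t len)
{
    uint32_t tail = atomic_load(&r->tail);
    while (atomic_load(&r->head) == tail)
        ;                                   /* busy-poll for low latency     */
    memcpy(buf, r->slots[tail % RING_SLOTS], len);
    atomic_store(&r->tail, tail + 1);
    return 0;
}

int main(void)
{
    static struct msg_ring ring;
    char out[SLOT_BYTES];

    ring_send(&ring, "ping", 5);
    ring_recv(&ring, out, SLOT_BYTES);
    printf("received: %s\n", out);
    return 0;
}
```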
With advances in networking technology, the newest Ethernet technologies can support up to 40 Gbps of bandwidth, but due to scalability issues in monolithic kernels, the number of connections served per second does not scale with this increase in speed. Therefore, the third and fourth contributions address this problem with Snap Bean, a virtual network device, and Angel, an opportunistic load balancer, for Popcorn's network system.
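To illustrate the idea of opportunistically balancing connections across kernel instances, here is a minimal sketch that steers each new connection to the least-loaded kernel. The data structures and selection policy are assumptions for illustration and do not reflect Angel's actual design.

```c
/* Hedged sketch of opportunistic connection load balancing across kernels. */
#include <limits.h>
#include <stdio.h>

#define MAX_KERNELS 8

struct kernel_load {
    int id;
    int active_connections;   /* assumed to be updated by each kernel's
                                 accept/close paths                       */
};

/* Pick the least-loaded kernel instance for an incoming connection. */
static int pick_kernel(const struct kernel_load *k, int n)
{
    int best = 0, best_load = INT_MAX;
    for (int i = 0; i < n; i++) {
        if (k[i].active_connections < best_load) {
            best_load = k[i].active_connections;
            best = k[i].id;
        }
    }
    return best;
}

int main(void)
{
    struct kernel_load kernels[MAX_KERNELS] = {
        { .id = 0, .active_connections = 120 },
        { .id = 1, .active_connections = 45  },
        { .id = 2, .active_connections = 300 },
        { .id = 3, .active_connections = 60  },
    };

    /* A virtual network device like the one described could consult such a
       policy before handing a new flow to a kernel instance. */
    printf("steer new connection to kernel %d\n", pick_kernel(kernels, 4));
    return 0;
}
```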
With the messaging layer, Popcorn achieves over a 30% performance benefit over OpenCL and the Intel offloading technique (LEO). With NetPopcorn, we achieve 7 to 8 times better performance than vanilla Linux and 2 to 5 times better than the state-of-the-art Affinity Accept. / Master of Science
2 |
Cognizant Networks: A Model and Framework for Session-based Communications and Adaptive Networking
Kalim, Umar, 09 August 2017
The Internet has made tremendous progress since its inception. The kingpin has been the Transmission Control Protocol (TCP), which supports a large fraction of communication. With the Internet's widespread access, users now have increased expectations.
The demands have evolved to an extent that TCP was never designed to support. Since network stacks do not provide the necessary functionality for modern applications, developers are forced to implement it again and again, either as part of the application or in supporting libraries. Consequently, application developers not only bear the burden of developing application features but are also responsible for building networking libraries to support sophisticated scenarios. This leads to considerable duplication of effort.
The challenge for TCP in supporting modern use cases stems mostly from limiting assumptions, simplistic communication abstractions, and (once expedient) implementation shortcuts. To further add to the complexity, the limited TCP options space is insufficient to support extensibility and, thus, contemporary communication patterns. Some argue that radical changes are required to extend the network's functionality; some researchers believe that a clean-slate approach is the only path forward. Others suggest that evolution of the network stack is necessary to ensure wider adoption by avoiding a flag day. In either case, we see that the proposed solutions have not been adopted by the community at large. This is perhaps because the cost of transition from the incumbent to the new technology outweighs the value offered. In some cases, the limited scope of the proposed solutions limits their value. In others, the lack of backward compatibility or the significant porting effort precludes incremental adoption altogether.
In this dissertation, we focus on the development of a communication model that explicitly acknowledges the context of the conversation and describes (much of) modern communications. We highlight how the communication stack should be able to discover, interact with, and use available resources to compose richer communication constructs. The model does so by using session, flow, and endpoint abstractions to describe communications between two or more endpoints. These abstractions give application developers the means to set up and manipulate communication constructs, while the ability to recognize changes in the operating context and reconfigure those constructs allows applications to adapt to changing requirements. The model considers two or more participants to be involved in the conversation and thus enables most modern communication patterns, in contrast with the well-established two-participant model.
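To make the session, flow, and endpoint abstractions concrete, the sketch below shows how an application might compose a multi-flow conversation and later rebind it when the operating context changes. Every type and function here (session_t, flow_t, session_rebind, and so on) is a hypothetical illustration, not the framework's actual API.

```c
/* Illustrative sketch of session/flow/endpoint abstractions (hypothetical API). */
#include <stdio.h>

typedef struct endpoint { const char *address; }                     endpoint_t;
typedef struct flow     { endpoint_t *src, *dst; const char *kind; } flow_t;
typedef struct session  { flow_t *flows[8]; int nflows; }            session_t;

/* Attach a flow (e.g., video, audio, control) to an existing session. */
static int session_add_flow(session_t *s, flow_t *f)
{
    if (s->nflows >= 8)
        return -1;
    s->flows[s->nflows++] = f;
    return 0;
}

/* React to a change in operating context, e.g., a network handover, by
 * rebinding every flow of the session to a new local endpoint. */
static void session_rebind(session_t *s, endpoint_t *new_local)
{
    for (int i = 0; i < s->nflows; i++)
        s->flows[i]->src = new_local;
}

int main(void)
{
    endpoint_t wifi = { .address = "192.0.2.10" };
    endpoint_t lte  = { .address = "198.51.100.7" };
    endpoint_t peer = { .address = "203.0.113.5" };

    flow_t video = { .src = &wifi, .dst = &peer, .kind = "video" };
    flow_t ctrl  = { .src = &wifi, .dst = &peer, .kind = "control" };

    session_t call = { .nflows = 0 };
    session_add_flow(&call, &video);
    session_add_flow(&call, &ctrl);

    /* Context change: Wi-Fi is lost, so move the whole conversation to LTE. */
    session_rebind(&call, &lte);
    printf("%d flows now bound to %s\n", call.nflows, call.flows[0]->src->address);
    return 0;
}
```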
Our contributions also include an implementation of a framework that realizes such communication methods and enables future innovation. We substantiate our claims by demonstrating case studies in which we use the proposed abstractions to highlight the gains. We also show how the proposed model may be implemented in a backwards-compatible manner, such that it does not break legacy applications, network stacks, or middleboxes in the network infrastructure, and we present use cases to substantiate our claims about backwards compatibility. This establishes that incremental evolution is possible. We highlight the benefits of context awareness in setting up complex communication constructs by presenting use cases and their evaluation. Finally, we show how the communication model may open the door for new and richer communication patterns.
/ PHD /
In this dissertation, we focus on the development of a communication model that explicitly acknowledges the context of the conversation and describes (much of) modern communications. We highlight how networking software should be able to discover, interact with, and use available resources. The model does so by using abstractions that describe communications between participants as if human beings were having a conversation, i.e., the semantics of interactions between participants are defined in terms of a conversation session. These abstractions provide application developers with the means to describe communications holistically, recognize changes in context, and reconfigure communications to adapt to changing requirements. The model considers two or more participants to be involved in the conversation and thus enables most modern communication patterns, in contrast with the well-established two-participant legacy model.
Our contributions also include an implementation of a framework that realizes such communication methods and enables future innovation. We substantiate our claims by demonstrating case studies where we use the proposed abstractions to highlight the gains. We also show how the proposed model may be implemented in a backwards compatible manner, such that it does not break legacy applications, networking software, or network infrastructure. We also present use cases to substantiate our claims about backwards compatibility. This establishes that incremental evolution is possible. We highlight the benefits of context awareness in setting up complex communication constructs by presenting use cases and their evaluation. Finally, we show how the communication model may open the door for new and richer communication patterns.
3 |
BUILDING FAST, SCALABLE, LOW-COST, AND SAFE RDMA SYSTEMS IN DATACENTERS
Shin-yeh Tsai (7027667), 16 October 2019
Remote Direct Memory Access, or RDMA, is a technology that allows one computer server to directly access the memory of another server without involving its CPU. Compared with traditional network technologies, RDMA offers several benefits, including low latency, high throughput, and low CPU utilization. These features are especially attractive to datacenters, and because of this, datacenters have started to adopt RDMA at production scale in recent years.

However, RDMA was designed for confined, single-tenant, High-Performance Computing (HPC) environments. Many of its design choices do not fit datacenters well, and it cannot be readily used by datacenter applications. To use RDMA, current datacenter applications have to build customized software stacks and fine-tune their performance. In addition, RDMA offers limited scalability and does not have good support for resource sharing or protection across different applications.

This dissertation sets out to seek solutions that solve the issues of RDMA in a systematic way and make it more suitable for a wide range of datacenter applications.

Our first task is to make RDMA more scalable, easier to use, and better at supporting safe resource sharing in datacenters. For this purpose, we propose to add an indirection layer on top of native RDMA to virtualize its low-level abstraction into a high-level one. This indirection layer safely manages RDMA resources for different datacenter applications and also provides a means for better scalability.

After making RDMA more suitable for datacenter environments, our next task is to build applications that can exploit all the benefits of (our improved) RDMA. We designed a set of systems that store data in remote persistent memory and let client machines access these data through pure one-sided RDMA communication. These systems lower monetary and energy cost compared to traditional datacenter data stores (because no processor is needed at the remote persistent memory), while achieving good performance and reliability.

Our final task focuses on a completely different and so far largely overlooked aspect: the security implications of RDMA. We discovered several key vulnerabilities in the one-sided communication pattern and in RDMA hardware. We exploited one of them to create a novel set of remote side-channel attacks, which we are able to launch on a widely used RDMA system with real RDMA hardware.

This dissertation is one of the initial efforts to make RDMA more suitable for datacenter environments from the scalability, usability, cost, and security perspectives. We hope that the systems we built, as well as the lessons we learned, can be helpful to future networking and systems researchers and practitioners.
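As a rough illustration of the indirection idea, the sketch below resolves application-level region names to low-level RDMA parameters and enforces a per-tenant access check before issuing a one-sided read. All names are hypothetical and the rdma_read_raw() stub stands in for a real one-sided verb, so this is a conceptual sketch rather than the dissertation's actual system.

```c
/* Conceptual sketch of an indirection layer over one-sided RDMA access
 * (all names hypothetical; not the dissertation's implementation). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct remote_region {            /* low-level state hidden from applications */
    const char *name;             /* high-level handle the application sees   */
    uint64_t    remote_addr;      /* remote virtual address                   */
    uint32_t    rkey;             /* remote protection key                    */
    int         owner_tenant;     /* which tenant may access this region      */
};

/* Stand-in for posting a one-sided RDMA read; a real layer would issue the
 * operation through the NIC without involving the remote CPU. */
static int rdma_read_raw(uint64_t addr, uint32_t rkey, void *dst, size_t len)
{
    (void)addr; (void)rkey;
    memset(dst, 0, len);          /* pretend the remote bytes arrived */
    return 0;
}

/* Indirection layer: resolve the name, check the caller, then issue the read. */
static int region_read(const struct remote_region *tbl, int ntbl, int tenant,
                       const char *name, void *dst, size_t len)
{
    for (int i = 0; i < ntbl; i++) {
        if (strcmp(tbl[i].name, name) == 0) {
            if (tbl[i].owner_tenant != tenant)
                return -1;        /* protection across applications */
            return rdma_read_raw(tbl[i].remote_addr, tbl[i].rkey, dst, len);
        }
    }
    return -1;                    /* unknown region */
}

int main(void)
{
    struct remote_region table[] = {
        { "kvstore/shard0", 0x7f0000000000ULL, 0x1234, /* tenant */ 1 },
    };
    char buf[64];

    int ok = region_read(table, 1, /* tenant */ 1, "kvstore/shard0", buf, sizeof buf);
    printf("read %s\n", ok == 0 ? "succeeded" : "denied");
    return 0;
}
```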
4 |
UMA PROPOSTA DE ARQUITETURA DE PILHA DE COMUNICAÇÃO EM REDE COM UM NÚMERO REDUZIDO DE CAMADAS / A NOVEL NETWORK STACK ARCHITECTURE WITH A REDUCED NUMBER OF LAYERS
Freitas, Josué Paulo José de, 22 August 2009
This work presents a network stack architecture with a reduced number of layers. The reduction in the number of layers aims to provide a simpler and more efficient communication method for embedded systems by allowing the microprocessor, where the application layer is usually implemented, to run only application code and no code related to network communication. The architecture was implemented on an FPGA development board and achieved, on average, throughput around 27 times higher than a network stack implemented in software and running on an embedded microprocessor.
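To illustrate how little the application processor needs to do when the stack lives in hardware, the sketch below shows a hypothetical CPU-side send path over a memory-mapped register block. The register layout and names are invented for illustration and do not correspond to the thesis's actual design.

```c
/* Hypothetical CPU-side interface to a hardware (FPGA) network stack. */
#include <stdint.h>
#include <stdio.h>

#define TX_BUF_BYTES 1024

/* Assumed memory-mapped layout of the hardware stack's transmit interface. */
struct hw_stack_regs {
    volatile uint32_t tx_len;               /* payload length in bytes      */
    volatile uint32_t tx_doorbell;          /* write 1 to start transmit    */
    volatile uint32_t tx_busy;              /* 1 while hardware is sending  */
    volatile uint8_t  tx_buf[TX_BUF_BYTES]; /* payload staging buffer       */
};

/* Application-side send: no protocol code runs on the microprocessor. */
static int hw_send(struct hw_stack_regs *regs, const void *data, uint32_t len)
{
    if (len > TX_BUF_BYTES)
        return -1;
    while (regs->tx_busy)                   /* wait for the previous frame  */
        ;
    for (uint32_t i = 0; i < len; i++)      /* stage the payload            */
        regs->tx_buf[i] = ((const uint8_t *)data)[i];
    regs->tx_len = len;
    regs->tx_doorbell = 1;                  /* hardware takes over here     */
    return 0;
}

int main(void)
{
    /* On real hardware this would be an mmap()ed device region; a local
       struct stands in so the sketch compiles on its own. */
    static struct hw_stack_regs fake_device;
    const char msg[] = "sensor reading: 42";

    if (hw_send(&fake_device, msg, sizeof msg) == 0)
        printf("queued %zu bytes to the hardware stack\n", sizeof msg);
    return 0;
}
```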