21. Virtualized resource management in high performance fabric clusters. Ranadive, Adit Uday, 07 January 2016.
Providing performance and isolation guarantees for applications running in virtualized
datacenter environments requires continuous management of the underlying physical
resources. For communication- and I/O-intensive applications running on such platforms,
the management methods must adequately deal with the shared use of the high-performance
fabrics these applications require. In particular, new classes of latency-sensitive and
data-intensive workloads running in virtualized environments rely on emerging fabrics
like 40+Gbps Ethernet and InfiniBand/RoCE with support for RDMA, VMM-bypass and
hardware-level virtualization (SR-IOV). However, the benefits provided by these technology
advances are offset by several management constraints: (i) the inability of the hypervisor
to monitor the VMs’ usage of these fabrics can affect the platform’s ability to provide
isolation and performance guarantees, (ii) the hypervisor cannot provide fine-grained
I/O provisioning or make management decisions for VMs, thus reducing the degree of
consolidation that can be supported on the platforms, and (iii) without such support it
is harder to integrate these fabrics into emerging cloud computing platforms and
datacenter fabric management solutions. This becomes particularly challenging for
workloads spanning multiple VMs, utilizing physical resources distributed across multiple
server nodes and the interconnection fabric.
This thesis addresses the problem of realizing a flexible, dynamic resource management
system for virtualized platforms with high performance fabrics. We make the following key
contributions:
(i) A lightweight monitoring tool, IBMon, integrated with the hypervisor to monitor VMs’
use of RDMA-enabled virtualized interconnects, using memory introspection techniques.
(ii) The design and construction of a resource management system that leverages IBMon
to provide performance guarantees to latency-sensitive applications. This system is built
on microeconomic principles of supply and demand and can be deployed on a per-node
(Resource Exchange) or a multi-node (Distributed Resource Exchange) basis; a minimal
allocation sketch follows this list. Fine-grained resource allocations can be enforced
through several mechanisms, including CPU capping or fabric-level congestion control.
(iii) Sphinx, a fabric management solution that leverages Resource Exchange to orchestrate
the network and provide latency proportionality for consolidated workloads, based on
user/application-specified policies.
(iv) Implementation and experimental evaluation using InfiniBand clusters virtualized with
the Xen or KVM hypervisor, managed via the OpenFloodlight SDN controller, and using
representative data-intensive and latency-sensitive benchmarks.
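As background for (ii), the following is a minimal sketch in C of a proportional-share allocation driven by per-VM demand, illustrating only the general flavour of a supply/demand-based per-node allocator; the function and variable names are hypothetical, and the thesis's actual Resource Exchange pricing and enforcement logic (CPU capping, fabric-level congestion control) is not reproduced here.

```c
/* Minimal sketch: split a fixed fabric-bandwidth supply among VMs in
 * proportion to their reported demand. Names are hypothetical; this is
 * an illustration of a supply/demand-style per-node allocation, not the
 * Resource Exchange implementation itself. */
#include <stdio.h>

static void allocate_shares(const double *demand, double *share,
                            int n_vms, double supply)
{
    double total = 0.0;
    for (int i = 0; i < n_vms; i++)
        total += demand[i];
    for (int i = 0; i < n_vms; i++)
        share[i] = (total > 0.0) ? supply * demand[i] / total : 0.0;
}

int main(void)
{
    double demand[3] = { 6.0, 3.0, 1.0 };   /* observed per-VM demand (Gbit/s) */
    double share[3];

    allocate_shares(demand, share, 3, 8.0); /* 8 Gbit/s of fabric supply */
    for (int i = 0; i < 3; i++)
        printf("VM %d gets %.2f Gbit/s\n", i, share[i]);
    return 0;
}
```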
22. Performance analysis and improvement of InfiniBand networks: modelling and effective Quality-of-Service mechanisms for interconnection networks in cluster computing systems. Yan, Shihang, January 2012.
The InfiniBand Architecture (IBA) has been proposed as a new industry standard with the high bandwidth and low latency needed to construct high-performance interconnected cluster computing systems. This architecture replaces the traditional bus-based interconnect with a switch-based network for server Input/Output (I/O) and inter-processor communication. Efficient Quality-of-Service (QoS) mechanisms are fundamental to delivering the important QoS metrics, such as maximum throughput and minimum latency, as well as guarantees on delay, blocking probability, and mean queue length. Performance modelling and analysis has been and continues to be of great theoretical and practical importance in the design and development of communication networks. This thesis aims to investigate efficient and cost-effective QoS mechanisms for the performance analysis and improvement of InfiniBand networks in cluster-based computing systems. Firstly, a rate-based, source-response, link-by-link admission and congestion control function with an improved Explicit Congestion Notification (ECN) packet marking scheme is developed; this function uses rate control to reduce congestion for multiple traffic classes. Secondly, a credit-based flow control scheme is presented to improve the mean queue length, throughput, and response time of the system, and a new queueing network model is developed to evaluate its performance. Theoretical analysis and simulation experiments show that these two schemes are effective and well suited to InfiniBand networks. Finally, to obtain a thorough understanding of the performance attributes of the InfiniBand Architecture, two threshold-based flow control mechanisms are proposed to enhance the QoS of InfiniBand networks: Entry Threshold, which sets a threshold for each entry in the arbitration table, and Arrival Job Threshold, which sets a threshold based on the number of jobs in each Virtual Lane. The principle of Maximum Entropy is adopted to analyse these two mechanisms, with the Generalized Exponential (GE)-type distribution modelling the inter-arrival and service times of the input traffic. Extensive simulation experiments are conducted to validate the accuracy of the analytical models.
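For reference, the Generalized Exponential (GE) distribution commonly used in maximum-entropy queueing analyses has the form below; this is the standard textbook parameterisation, and the thesis's exact notation may differ.

```latex
% GE-type inter-arrival (or service) time distribution:
%   mean rate \nu (mean time 1/\nu), squared coefficient of variation C^2
F(t) = P(X \le t) = 1 - \tau\, e^{-\tau \nu t}, \qquad t \ge 0,
\qquad \tau = \frac{2}{C^2 + 1}.
```

With C^2 = 1 this reduces to the ordinary exponential distribution, while C^2 > 1 models the bursty arrivals typical of aggregated network traffic.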
23. Enhancing MPI with modern networking mechanisms in cluster interconnects. Yu, Weikuan, January 2006.
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 161-168).
24. Fast Barrier Synchronization for InfiniBand. Hoefler, Torsten, 04 January 2006.
Barrier synchronization is crucial for many parallel systems. This talk introduces different synchronization mechanisms and demonstrates new approaches to leverage special hardware properties of InfiniBand to lower the barrier latency.
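For context, a classic software barrier that such work typically starts from is the dissemination barrier, sketched below over MPI point-to-point messages; this is the textbook algorithm, not the InfiniBand-hardware-specific approach presented in the talk.

```c
/* Textbook dissemination barrier: in round k each rank signals the rank
 * 2^k ahead of it and waits for the rank 2^k behind it; after
 * ceil(log2(P)) rounds all P ranks have synchronized. */
#include <mpi.h>

static void dissemination_barrier(MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    for (int dist = 1; dist < size; dist <<= 1) {
        int to   = (rank + dist) % size;
        int from = (rank - dist + size) % size;
        /* zero-byte messages suffice: only the arrival matters */
        MPI_Sendrecv(NULL, 0, MPI_BYTE, to,   dist,
                     NULL, 0, MPI_BYTE, from, dist,
                     comm, MPI_STATUS_IGNORE);
    }
}
```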
25. Integration einer neuen InfiniBand-Schnittstelle in die vorhandene InfiniBand MPICH2 Software. Mosch, Marek, 25 April 2006.
Design of a unified API for using the Mellanox V-API and the OpenIB Verbs, based on C preprocessor macros, and integration of this API into the existing MPICH2-CH3 device for InfiniBand.
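A minimal sketch of the macro-layer idea is shown below, assuming a build flag HAVE_OPENIB; the macro and typedef names are hypothetical, only the OpenIB verbs calls are real API, and the Mellanox V-API branch is deliberately left out rather than guessed at.

```c
/* Illustrative C preprocessor layer that hides one of two verbs-like
 * APIs behind common macros. Only the OpenIB branch is filled in. */
#ifdef HAVE_OPENIB
#include <infiniband/verbs.h>

typedef struct ibv_mr unified_mr_t;

#define UNIFIED_REG_MR(pd, addr, len) \
        ibv_reg_mr((pd), (addr), (len), \
                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE)
#define UNIFIED_DEREG_MR(mr) \
        ibv_dereg_mr(mr)

#else /* HAVE_VAPI: the Mellanox V-API equivalents would be defined here */
#error "V-API branch intentionally omitted in this sketch"
#endif
```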
26. Enhancing an InfiniBand driver by utilizing an efficient malloc/free library supporting multiple page sizes. Rex, Robert, 23 October 2006.
Despite the use of high-speed interconnects such as InfiniBand, the communication overhead for parallel applications, especially in High-Performance Computing (HPC), is still high. Using large page frames, so-called hugepages in Linux, can improve the crucial work of registering communication buffers with the network adapter, and an InfiniBand driver was modified accordingly. Hugepages not only reduce communication costs but can also improve computation time perceptibly, e.g. through fewer TLB misses. To avoid the effort of rewriting applications, a preload library was implemented that is able to utilize large page frames transparently. This work also presents benchmark results for these components, showing performance improvements of up to 10%.
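As a rough illustration of hugepage-backed buffers (not the thesis's preload library, which predates this interface and worked via hugetlbfs), current Linux kernels let an allocator request huge pages directly through mmap:

```c
/* Sketch: back an allocation with huge pages via mmap(MAP_HUGETLB).
 * Requires a Linux kernel with hugepage support and reserved hugepages;
 * a real interposing malloc/free library needs much more bookkeeping. */
#define _GNU_SOURCE
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

static void *huge_alloc(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}

int main(void)
{
    size_t len = 2 * 1024 * 1024;          /* one 2 MiB huge page */
    void *buf = huge_alloc(len);
    printf("hugepage buffer: %p\n", buf);  /* NULL if no hugepages reserved */
    if (buf)
        munmap(buf, len);
    return 0;
}
```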
27. Optimierte Implementierung ausgewählter kollektiver Operationen unter Ausnutzung der Hardwareparallelität des InfiniBand Netzwerkes. Franke, Maik, 24 September 2007.
The goal of this work is an optimized implementation, for the InfiniBand network, of the reduction operations MPI_Reduce(), MPI_Allreduce(), MPI_Scan(), and MPI_Reduce_scatter() defined in the MPI-1 standard, with particular emphasis on special InfiniBand operations and on hardware parallelism. InfiniBand makes it possible to separate communication operations cleanly from computation, which allows the two to be overlapped within the reduction. The potential of this method is assessed both analytically and in practice, in a prototype implementation within the Open MPI framework, and the final result is compared with existing implementations (e.g. MVAPICH). / The performance of collective communication operations is one of the deciding factors in the overall performance of an MPI application. Current MPI implementations use the point-to-point components to access the InfiniBand network. This work therefore attempts to improve the performance of a collective component by accessing the InfiniBand network directly, which avoids overhead and makes it possible to tune the algorithms to this specific network. Various algorithms for the MPI_Reduce, MPI_Allreduce, MPI_Scan and MPI_Reduce_scatter operations are presented. The theoretical performance of the algorithms is analysed with the LogfP and LogGP models. Selected algorithms are implemented as part of an Open MPI collective component. Finally, the performance of different algorithms and different MPI implementations is compared.
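For orientation, one of the textbook algorithms such a comparison typically includes is recursive doubling, sketched here in plain MPI C for power-of-two process counts; the InfiniBand-specific, overlapping variants developed in the thesis are not reproduced.

```c
/* Textbook recursive-doubling allreduce (sum of doubles), valid for
 * power-of-two communicator sizes; each round exchanges the full vector
 * with a partner at distance 2^k and combines locally. */
#include <mpi.h>
#include <stdlib.h>

static void allreduce_sum(double *buf, int n, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    double *tmp = malloc((size_t)n * sizeof *tmp);
    for (int dist = 1; dist < size; dist <<= 1) {
        int peer = rank ^ dist;
        MPI_Sendrecv(buf, n, MPI_DOUBLE, peer, dist,
                     tmp, n, MPI_DOUBLE, peer, dist,
                     comm, MPI_STATUS_IGNORE);
        for (int i = 0; i < n; i++)
            buf[i] += tmp[i];
    }
    free(tmp);
}
```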
28. Evaluating and Improving the Performance of MPI-Allreduce on QLogic HTX/PCIe InfiniBand HCA. Mittenzwey, Nico, 30 June 2009.
This thesis analysed the QLogic InfiniPath QLE7140 HCA and its onload architecture and compared the results to the Mellanox InfiniHost III Lx HCA, which uses an offload architecture. As expected, the QLogic InfiniPath QLE7140 HCA can outperform the Mellanox InfiniHost III Lx HCA in latency and bandwidth on our test system in various scenarios. The benchmarks showed that sending messages with multiple threads in parallel can increase the bandwidth greatly, while bi-directional sends cut the effective bandwidth of one HCA by up to 30%. Different all-reduce algorithms were evaluated and compared with the help of the LogGP model. The comparison showed that new all-reduce algorithms can outperform the ones already implemented in Open MPI in different scenarios. The thesis also demonstrated that multicast algorithms for InfiniBand can be implemented easily by using the RDMA-CM API.
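For reference, the LogGP model mentioned above charges a point-to-point message of k bytes roughly as follows; this is the standard formulation, and the exact parameter usage in the thesis may differ.

```latex
% LogGP point-to-point cost for a k-byte message:
%   L = network latency, o = per-message CPU overhead (sender and receiver),
%   G = gap per byte (inverse of per-byte bandwidth)
T(k) = L + 2o + (k - 1)\,G
```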
29. RAS enhancements for RDMA communications. Cardona, Omar, 21 February 2011.
Ethernet as the communication medium in the enterprise data center has outlived all competing media and stood the test of time with regard to speed and cost. The future is also poised for growth, with 40 and 100 Gbps speeds just over the horizon. The current state of the technology is being enhanced and extended with lossless features to allow for fabric convergence of storage and Inter-Process Communication (IPC) networks. It is under this medium that an increase is observed in the adoption of Remote Direct Memory Access (RDMA) over Ethernet, using offloaded TCP/IP (iWARP) and InfiniBand over Ethernet (RoCE) communication stacks on RDMA-capable NIC adapters (RNICs).
RDMA enables direct application-to-application communication over the network, resulting in numerous and significant benefits such as reduced CPU utilization, lower communication latency, increased energy efficiency, and reduced overall system requirements. However, these benefits come with increased software complexity in how RDMA interface users communicate. The RDMA communication semantics, which originate in the HPC domain, are heavily biased towards low-latency and high-bandwidth communication rather than Reliability, Availability, and Serviceability (RAS). As adoption increases and enterprise data centers begin to leverage RDMA over Ethernet, enhancements to the OS stack software architecture and to the design of the components involved are required to address these deficiencies. Operating system interfaces, device drivers, adapter hardware design, and embedded firmware features must be viewed from a high-availability and maintainability point of view.
This thesis proposes software architectural tradeoffs for enhancing the iWARP and RoCE RDMA implementations for communications in the enterprise data center, adding new and traditional RAS features to existing communication stacks and devices. The architecture leverages software enhancements in traceability, availability, maintainability, serviceability, fault isolation, and resource management, such that in the event of errors the probability that the forensic data points needed to identify the root cause are immediately and automatically available is increased.
30. Erweiterung eines existierenden Infiniband Benchmarks. Viertel, Carsten, 01 June 2006.
InfiniBand is increasingly used as the interconnect for clusters. This makes it necessary to adapt existing libraries for parallel programming languages to the new network as well as possible. An important part of parallel programming languages are collective operations, which require sending a message from one node to many others, or from many nodes to a single one. To find out which connection types and operations are best suited for these collective operations, a benchmark was developed. The goal of this student research project (Studienarbeit) is to extend this program, test it on a cluster, and evaluate the results.
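As a generic illustration of this kind of measurement (not the benchmark extended in the thesis), a minimal MPI timing harness for a collective operation could look like this:

```c
/* Minimal sketch: time repeated broadcasts and report the average.
 * This only illustrates the general benchmarking pattern described
 * above; it is not the thesis's InfiniBand benchmark. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[4096] = {0};
    const int iters = 1000;

    MPI_Barrier(MPI_COMM_WORLD);            /* start all ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Bcast(buf, (int)sizeof buf, MPI_BYTE, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg 4 KiB broadcast: %.2f us\n", 1e6 * (t1 - t0) / iters);

    MPI_Finalize();
    return 0;
}
```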