
Network virtualization as enabler for cloud networking

Turull, Daniel, January 2016
The Internet has grown exponentially and is now part of our everyday life. Internet services and applications rely on back-end servers that are deployed on local servers and in data centers. With the growing use of data centers and cloud computing, the locations of these servers have been externalized and centralized, taking advantage of economies of scale. However, some applications need to define complex network topologies and require more than simple connectivity to remote sites. Therefore, the network part of cloud computing, called cloud networking, needs to be improved and simplified. This thesis argues that network virtualization can fill this gap, and we propose a network virtualization abstraction layer to ease the use of cloud networking for end users. We implement a software prototype of our ideas using OpenFlow. We also evaluate our prototype against state-of-the-art controllers that have similar network virtualization functionality. A second part of this thesis focuses on developing a tool for performance testing. We have extended the widely used tool pktgen with receiver functionality, and we use pktgen to generate traffic for our network virtualization experiments.
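The receiver extension described in the thesis is not standard, so it is not sketched here; as a rough illustration of how the stock Linux pktgen module is typically driven, the following Python snippet writes commands to its documented /proc/net/pktgen control files. The interface name, packet count, and addresses are placeholder values, and a loaded pktgen module plus root privileges are assumed.

```python
# Minimal sketch: configure the Linux pktgen module through its /proc interface.
# Assumes the pktgen kernel module is loaded and the script runs as root.
# Interface name, packet count, and destination below are placeholder values.

def pgset(path: str, command: str) -> None:
    """Write a single pktgen command to a /proc/net/pktgen control file."""
    with open(path, "w") as f:
        f.write(command + "\n")

# Bind a device to the first pktgen kernel thread.
pgset("/proc/net/pktgen/kpktgend_0", "rem_device_all")
pgset("/proc/net/pktgen/kpktgend_0", "add_device eth0")

# Configure the traffic: 1,000,000 packets of 64 bytes to a test sink.
dev = "/proc/net/pktgen/eth0"
pgset(dev, "count 1000000")
pgset(dev, "pkt_size 64")
pgset(dev, "dst 10.0.0.2")
pgset(dev, "dst_mac 00:11:22:33:44:55")

# Start the run; the write to pgctrl blocks until generation finishes.
pgset("/proc/net/pktgen/pgctrl", "start")
```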

Improving fairness, throughput and blocking performance for long haul and short reach optical networks

Tariq, Sana, 01 January 2015
Innovations in optical communication are expected to transform the landscape of global communications, internet, and datacenter networks. This dissertation investigates several important issues in optical communication, including fairness, throughput, blocking probability, and differentiated quality of service (QoS). Novel algorithms and new approaches are presented to improve the performance of optical circuit switching (OCS) and optical burst switching (OBS) for long-haul and datacenter networks. Extensive simulation tests have been conducted to evaluate the effectiveness of the proposed algorithms. These tests were performed over a number of network topologies such as ring, mesh, and U.S. Long-Haul; high performance computing (HPC) topologies such as 2D and 6D mesh torus; and modern datacenter topologies such as FatTree and BCube. Two new schemes are proposed for long-haul networks to improve throughput and hop-count fairness in OBS networks. The idea is motivated by the observation that giving slightly higher priority to longer bursts over shorter bursts can significantly improve the throughput of OBS networks without adversely affecting hop-count fairness. The results of extensive performance tests show that the proposed schemes improve the throughput of OBS networks and enhance hop-count fairness. Another contribution of this dissertation is work on routing and wavelength assignment schemes in multimode fiber networks. Two additional schemes for long-haul networks are presented and evaluated over multimode fiber networks: the first alleviates the fairness problem in OBS networks using wavelength-division multiplexing as well as mode-division multiplexing, while the second achieves higher throughput without sacrificing hop-count fairness. We also show the significant benefits of using both mode-division multiplexing and wavelength-division multiplexing in real-life short-distance optical networks, such as the optical circuit switching networks used in hybrid electronic-optical switching architectures for datacenters. We evaluated four mode and wavelength assignment heuristics and compared their throughput performance, and we include preliminary results on the impact of the cascaded mode conversion constraint on network throughput. Datacenter and high performance computing networks share a number of common performance goals. A highly efficient adaptive mode- and wavelength-routing algorithm is presented for OBS networks to improve their throughput; the effectiveness of the proposed model is validated by extensive simulation results. In order to optimize bandwidth and maximize throughput in datacenters, an extension of TCP called multipath TCP (MPTCP) has been evaluated over an OBS network using dense-interconnect datacenter topologies. We propose a service differentiation scheme using MPTCP over OBS for datacenter traffic. The scheme is evaluated over a mixed-workload datacenter traffic model and is shown to provide tangible service differentiation between flows of different priority levels. Finally, an adaptive QoS differentiation architecture is proposed for software-defined optical datacenter networks using MPTCP over OBS; this scheme prioritizes flows based on current network state.
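The abstract does not give the exact priority function, but a toy sketch of the underlying idea — biasing OBS contention resolution slightly toward longer bursts so that throughput rises without hurting hop-count fairness — might look as follows. The 0.1 weighting constant and the Burst fields are illustrative assumptions, not the dissertation's actual scheme.

```python
# Toy sketch of length-biased burst prioritization in an OBS scheduler.
# The 0.1 length bias and the Burst fields are illustrative assumptions;
# the dissertation's actual scheme may differ.
from dataclasses import dataclass

@dataclass
class Burst:
    length_bytes: int    # size of the assembled optical burst
    hops_remaining: int  # hops left on the pre-computed route

def priority(burst: Burst, mean_length: float) -> float:
    """Base priority from remaining path length, plus a small bonus for
    bursts longer than average, so long bursts are dropped less often."""
    length_bias = 0.1 * (burst.length_bytes / mean_length - 1.0)
    return burst.hops_remaining + length_bias

def resolve_contention(contending: list[Burst], mean_length: float) -> Burst:
    """When bursts contend for an output wavelength, keep the one with
    the highest priority; the losers would be dropped or deflected."""
    return max(contending, key=lambda b: priority(b, mean_length))

# Example: with equal hop counts, the longer burst wins the contention.
a = Burst(length_bytes=120_000, hops_remaining=2)
b = Burst(length_bytes=40_000, hops_remaining=2)
winner = resolve_contention([a, b], mean_length=80_000.0)  # -> burst a
```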

Efficient placement design and storage cost saving for big data workflow in cloud datacenters

Ikken, Sonia, 14 December 2017
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. The data generated by such systems are huge, valuable, and stored at multiple geographical locations for reuse. Workflow systems, composed of jobs using collaborative task-based models, present new needs in terms of dependency and intermediate data exchange. This raises new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs completes on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management across cloud multi-datacenters by considering the requirements of the workflow applications that generate them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill file sequentiality through a prediction algorithm. Secondly, this thesis deals with minimizing the storage cost of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters in hosting the generated intermediate data dependencies for each pair of files. The proposed algorithm takes into account scientific user requirements, data dependency, and data size. Finally, a more generic problem is addressed involving two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
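The thesis's exact ILP formulation is not reproduced in the abstract; as a hedged illustration of what a placement ILP of this general shape can look like, here is a toy model in Python using the PuLP library. The file sizes, costs, capacities, and the simple pairwise-transfer objective are assumptions made up for the example, not the thesis's model.

```python
# Toy intermediate-data placement ILP using PuLP (pip install pulp).
# Each file must be stored in exactly one datacenter; the objective adds
# a per-GB storage cost and a transfer cost paid whenever two dependent
# files end up in different datacenters. All numbers are made up.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

files = {"f1": 40, "f2": 25, "f3": 60}            # file sizes in GB
dcs = {"dc1": 0.020, "dc2": 0.023, "dc3": 0.018}  # storage cost per GB
capacity = {"dc1": 80, "dc2": 80, "dc3": 60}      # capacity in GB
deps = [("f1", "f2"), ("f2", "f3")]               # dependent file pairs
transfer_cost = 1.5                               # cost per split pair

prob = LpProblem("intermediate_data_placement", LpMinimize)
x = LpVariable.dicts("x", (files, dcs), cat=LpBinary)         # x[f][d]: f on d
y = LpVariable.dicts("y", range(len(deps)), cat=LpBinary)     # pair i split?

# Objective: total storage cost plus transfer cost for split pairs.
prob += (lpSum(files[f] * dcs[d] * x[f][d] for f in files for d in dcs)
         + lpSum(transfer_cost * y[i] for i in range(len(deps))))

for f in files:   # each file is placed in exactly one datacenter
    prob += lpSum(x[f][d] for d in dcs) == 1
for d in dcs:     # datacenter capacity limit
    prob += lpSum(files[f] * x[f][d] for f in files) <= capacity[d]
for i, (a, b) in enumerate(deps):  # y[i] = 1 if a and b sit on different DCs
    for d in dcs:
        prob += x[a][d] - x[b][d] <= y[i]
        prob += x[b][d] - x[a][d] <= y[i]

prob.solve()
placement = {f: next(d for d in dcs if x[f][d].value() == 1) for f in files}
```

Under these made-up numbers the solver co-locates the dependent small files and places the large file on the cheapest datacenter that can hold it; the splittable variant mentioned in the abstract would relax the one-datacenter-per-file constraint into fractional placement variables.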
