1

On the Role of Performance Interference in Consolidated Environments

Rameshan, Navaneeth. January 2016.
With the advent of resource-shared environments such as the Cloud, virtualization has become the de facto standard for server consolidation. While consolidation improves utilization, it causes performance interference between Virtual Machines (VMs) due to contention for shared resources such as CPU, Last Level Cache (LLC) and memory bandwidth. Over-provisioning resources for performance-sensitive applications can guarantee Quality of Service (QoS), but it results in low machine utilization. Thus, assuring QoS for performance-sensitive applications while allowing co-location has been a challenging problem. In this thesis, we identify ways to mitigate performance interference without undue over-provisioning and also point out the need to model and account for performance interference to improve the reliability and accuracy of elastic scaling. The end goal of this research is to leverage these observations to provide efficient resource management that is both performance and cost aware. Our main contributions are threefold: first, we improve overall machine utilization by executing best-effort applications alongside latency-critical applications without violating their performance requirements. Our solution is able to dynamically adapt to and leverage changing workload/phase behaviour to execute best-effort applications without causing excessive interference on performance. Second, we identify that certain performance metrics used for elastic scaling decisions may become unreliable if performance interference is unaccounted for. By modelling performance interference, we show that these performance metrics become reliable in a multi-tenant environment. Third, we identify and demonstrate the impact of interference on the accuracy of elastic scaling and propose a solution to significantly minimise performance violations at a reduced cost.
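To make the co-location idea concrete, the following is a minimal sketch, assuming a simple feedback controller that throttles a best-effort job whenever the latency-critical application's p99 latency nears its QoS target. The metric source, thresholds, and throttling policy are illustrative assumptions, not the mechanism developed in the thesis.

```python
# A minimal sketch of interference-aware co-location: a feedback loop that
# throttles a best-effort job when the latency-critical (LC) application
# approaches its QoS target. Thresholds, metric source, and throttling knob
# are illustrative assumptions, not the thesis's mechanism.

QOS_TARGET_MS = 50.0   # assumed p99 latency target for the LC application
HEADROOM = 0.8         # start throttling at 80% of the target

def next_quota(current_quota: float, p99_latency_ms: float) -> float:
    """Return the CPU share the best-effort job may use in the next interval."""
    if p99_latency_ms > QOS_TARGET_MS * HEADROOM:
        return max(0.1, current_quota * 0.5)   # back off quickly under pressure
    return min(1.0, current_quota + 0.05)      # reclaim slack slowly

if __name__ == "__main__":
    quota = 1.0
    simulated_p99 = [20.0, 35.0, 48.0, 62.0, 55.0, 30.0]  # fake readings (ms)
    for latency in simulated_p99:
        quota = next_quota(quota, latency)
        print(f"p99={latency:5.1f} ms -> best-effort CPU share {quota:.2f}")
```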
2

Efficient support for data-intensive scientific workflows on geo-distributed clouds

Pineda Morales, Luis Eduardo. 24 May 2017.
By 2020, the digital universe is expected to reach 44 zettabytes, as it is doubling every two years. Data come in the most diverse shapes and from the most geographically dispersed sources ever. The data explosion calls for applications capable of highly scalable, distributed computation, and for infrastructures with massive storage and processing power to support them. These large-scale applications are often expressed as workflows that help define data dependencies between their different components. More and more scientific workflows are executed on clouds, for they are a cost-effective alternative for intensive computing. Sometimes, workflows must be executed across multiple geo-distributed cloud datacenters, either because they exceed the capacity of a single site due to their huge storage and computation requirements, or because the data they process are scattered across different locations. Multisite workflow execution brings about several issues for which little support has been developed: there is no common file system for data transfer, inter-site latencies are high, and centralized management becomes a bottleneck. This thesis consists of three contributions towards bridging the gap between single-site and multisite workflow execution. First, we present several design strategies to efficiently support the execution of workflow engines across multisite clouds by reducing the cost of metadata operations. Then, we take one step further and explain how selective handling of metadata, classified by frequency of access, improves workflow performance in a multisite environment. Finally, we look into a different approach to optimize cloud workflow execution by studying execution parameters to model and steer elastic scaling.
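The "selective handling of metadata, classified by frequency of access" can be pictured with a small sketch: hot (frequently accessed) metadata entries are replicated to every site so lookups stay local, while cold entries remain only at their home site. The data model, class names, and access-count threshold below are assumptions for illustration, not the design from the thesis.

```python
# Illustrative sketch of frequency-based metadata placement across sites:
# "hot" entries (accessed often) are replicated everywhere, "cold" entries
# stay on their home site. Threshold and data model are assumed.
from collections import defaultdict

HOT_THRESHOLD = 10  # assumed access count after which an entry becomes "hot"

class MultisiteMetadataStore:
    def __init__(self, sites):
        self.sites = sites
        self.stores = {s: {} for s in sites}   # per-site metadata stores
        self.access_count = defaultdict(int)

    def put(self, key, value, home_site):
        self.stores[home_site][key] = value    # cold by default: home site only

    def get(self, key, local_site):
        self.access_count[key] += 1
        if self.access_count[key] == HOT_THRESHOLD:
            self._replicate(key)               # promote to hot: copy everywhere
        if key in self.stores[local_site]:
            return self.stores[local_site][key]     # cheap local lookup
        for site in self.sites:                     # expensive remote lookup
            if key in self.stores[site]:
                return self.stores[site][key]
        raise KeyError(key)

    def _replicate(self, key):
        for site in self.sites:
            if key in self.stores[site]:
                value = self.stores[site][key]
                break
        else:
            return
        for site in self.sites:
            self.stores[site][key] = value

if __name__ == "__main__":
    store = MultisiteMetadataStore(["eu-west", "us-east"])
    store.put("/wf/task42/output.meta", {"size": 1024}, home_site="eu-west")
    for _ in range(12):                        # repeated access promotes it to hot
        store.get("/wf/task42/output.meta", local_site="us-east")
    print(store.stores["us-east"])             # entry is now replicated locally
```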
3

Emerging Paradigms in the Convergence of Cloud and High-Performance Computing

Araújo De Medeiros, Daniel. January 2023.
Traditional HPC scientific workloads are tightly coupled, while emerging scientific workflows exhibit even more complex patterns, consisting of multiple characteristically different stages that may be I/O-intensive, compute-intensive, or memory-intensive. New high-performance computing systems are evolving to adapt to these new requirements, motivated by the need for performance and efficiency in resource usage. Cloud workloads, on the other hand, are loosely coupled, and their systems have matured under constraints different from those of HPC. In this thesis, cloud technologies designed for loosely coupled, dynamic, and elastic workloads are explored, repurposed, and examined in the HPC landscape in three major parts. The first part deals with the deployment of HPC workloads in cloud-native environments through the use of containers and analyses the feasibility and trade-offs of elastic scaling. The second part relates to the use of workflow management systems in HPC workflows; in particular, a molecular docking workflow executed through Airflow is discussed. Finally, the third part discusses object storage systems, a cost-effective and scalable solution widely used in the cloud, and their use in HPC applications through an interface between the S3 standard and MPI I/O.
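As an illustration of how a workflow management system such as Airflow expresses a docking-style pipeline, the sketch below defines a hypothetical three-stage DAG (prepare ligands, dock, aggregate scores). The task names and shell commands are placeholders, not the workflow used in the thesis, and the parameter names follow recent Airflow releases.

```python
# Hypothetical Airflow DAG sketching a molecular docking pipeline of the kind
# described above (prepare ligands -> dock -> aggregate scores). Tool names,
# commands, and paths are placeholders, not the thesis workflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="molecular_docking_sketch",
    start_date=datetime(2023, 1, 1),
    schedule=None,        # triggered manually, like a one-off HPC job
    catchup=False,
) as dag:
    prepare = BashOperator(
        task_id="prepare_ligands",
        bash_command="echo 'convert ligand library to docking input format'",
    )
    dock = BashOperator(
        task_id="dock_ligands",
        bash_command="echo 'run docking engine on the prepared ligands'",
    )
    aggregate = BashOperator(
        task_id="aggregate_scores",
        bash_command="echo 'collect and rank docking scores'",
    )

    prepare >> dock >> aggregate
```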
