211

Système d'agents mobiles pour les architectures de calculs auto-adaptatifs / Mobile Agent System dedicated to adaptable numerical architecture

Dumont, Cyril 28 May 2014 (has links)
This work belongs to the domain of numerical simulation on heterogeneous distributed platforms such as computing grids. This type of platform is characterized by possible changes in execution conditions and by a significant probability of component failure. An application running in such an environment must adapt to its execution context and tolerate faults. Facing the growing complexity of deploying computation cases on grids, we propose a software platform for solving numerical computation cases in a heterogeneous distributed environment. Our work provides a solution based on a mobile agent system, which allows an application to adapt to changes in its execution environment. First, we use the higher-order pi-calculus to specify a "farm of workers" able to take part in solving any type of computation case. We then state the properties that characterize the correct behavior of this system in the temporal logic TCTL. To check them, we model the system with timed automata derived from the terms of the formal pi-calculus specification, defining a translation from pi-calculus terms into timed automata; the properties are then verified with the Uppaal tool. To validate this modeling work, we built the MCA (Mobile Computing Architecture) framework. It offers a set of tools that ease the deployment of components, mobile or not, on a heterogeneous distributed environment in order to solve computation cases. The library with which these components are developed is written in Java and builds on the Jini and JavaSpaces technologies. Finally, we evaluate the MCA framework by solving three different computation cases. Each of these experiments, performed on a 20-node cluster, demonstrates the framework's essential characteristics: programming simplicity, low execution-time overhead when fault tolerance is disabled, and efficient fault tolerance.
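The abstract describes workers pulling computation cases from a shared JavaSpaces space, with failed work requeued for fault tolerance. Below is a minimal Python sketch of that coordination style, assuming a simple in-process queue in place of a real tuple space; the actual MCA framework is written in Java on Jini/JavaSpaces, and every name in the sketch is illustrative.

```python
# Sketch of a "farm of workers" coordinated through a shared task space.
# The real MCA framework is Java/Jini/JavaSpaces; this is an analogy only.
import queue
import threading

task_space = queue.Queue()    # stands in for a JavaSpaces tuple space
result_space = queue.Queue()

def worker() -> None:
    while True:
        task = task_space.get()
        if task is None:              # poison pill: no more work
            task_space.task_done()
            return
        try:
            result_space.put((task, task ** 2))   # toy "computation case"
        except Exception:
            task_space.put(task)      # fault tolerance: requeue on failure
        finally:
            task_space.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for task in range(10):
    task_space.put(task)
for _ in threads:
    task_space.put(None)
task_space.join()
while not result_space.empty():
    print(result_space.get())
```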
212

A distributed computing architecture to enable advances in field operations and management of distributed infrastructure

Khan, Kashif January 2012 (has links)
Distributed infrastructures (e.g., water networks and electric Grids) are difficult to manage due to their scale, lack of accessibility, complexity, ageing and uncertainties in knowledge of their structure. In addition, they are subject to loads that can be highly variable and unpredictable, and to accidental events such as component failure, leakage and malicious tampering. To support in-field operations and central management of these infrastructures, the availability of consistent and up-to-date knowledge about the current state of the network, and about how it would respond to planned interventions, is argued to be highly desirable. However, at present, large-scale infrastructures are "data rich but knowledge poor". Data, algorithms and tools for network analysis are improving, but there is a need to integrate them to support engineering operations more directly. Current ICT solutions are mainly based on specialized, monolithic and heavyweight software packages that restrict the dissemination of dynamic information and its appropriate and timely presentation, particularly for field engineers, who operate in resource-constrained and less reliable environments. This thesis proposes a solution to these problems by recognizing that current monolithic ICT solutions for infrastructure management seek to meet the requirements of different human roles and operating environments (defined in this work as field and central sides). It proposes an architectural approach to providing dynamic, predictive, user-centric, device- and platform-independent access to consistent and up-to-date knowledge. This architecture integrates the components required to implement the functionalities of data gathering, data storage, simulation modelling, and information visualization and analysis. These components are tightly coupled in current implementations of software for analysing the behaviour of networks. The architectural approach, by contrast, requires that they be kept as separate as possible and interact only when required, using common and standard protocols. The thesis particularly concentrates on engineering practices in clean water distribution networks, but the methods are applicable to other structural networks, for example, the electricity Grid. A prototype implementation is provided that establishes a dynamic hydraulic simulation model and enables the model to be queried via remote access in a device- and platform-independent manner. This thesis provides an extensive evaluation comparing the architecture-driven approach with current approaches, to substantiate the above claims. This evaluation is conducted by the use of benchmarks that are currently published and accepted in the water engineering community. To facilitate this evaluation, a working prototype of the whole architecture has been developed and is made available under an open source licence.
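The architectural claim here is that simulation, storage and visualization components should interact only through common, standard protocols. The sketch below illustrates that decoupling with a toy stand-in for a hydraulic solver exposed over HTTP/JSON; the endpoint, parameters and linear model are assumptions for illustration, not the thesis's actual API.

```python
# A simulation model behind a standard protocol (HTTP/JSON), so any
# device can query network state without the monolithic modelling suite.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def simulate_pressure(node: str, demand: float) -> float:
    # stand-in for a real hydraulic solver (toy linear model)
    return 50.0 - 0.8 * demand

class ModelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /pressure?node=N12&demand=3.5   (illustrative endpoint)
        q = parse_qs(urlparse(self.path).query)
        node = q.get("node", ["?"])[0]
        demand = float(q.get("demand", ["0"])[0])
        body = json.dumps({"node": node,
                           "pressure": simulate_pressure(node, demand)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # any browser, phone app or field-side script can now query the model
    HTTPServer(("", 8000), ModelHandler).serve_forever()
```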
213

\"Armazenamento distribuído de dados e checkpointing de aplicações paralelas em grades oportunistas\" / Distributed data storage and checkpointing of parallel applications in opportunistic grids

Raphael Yokoingawa de Camargo 04 May 2007 (has links)
Opportunistic computational grids use idle resources of shared machines to execute applications that need large amounts of computational power and/or deal with large amounts of data. But executing computationally intensive parallel applications in dynamic and heterogeneous environments, such as opportunistic grids, is a daunting task. Machines may fail, become inaccessible, or change from idle to occupied unexpectedly, compromising application execution. A fault-tolerance mechanism that supports heterogeneous architectures is an important requirement for such systems. In this work, we analyze, implement and evaluate a checkpointing-based fault-tolerance mechanism for parallel applications running on opportunistic grids. The mechanism monitors application execution and allows the migration of applications between heterogeneous nodes of the grid. Besides application execution, it is also necessary to manage the data generated and used by those applications. We want a low-cost data storage infrastructure that uses the free disk space of shared grid machines. The system should use these machines to store and recover data only during their idle periods, which requires the storage system to be redundant and fault-tolerant. To solve the data storage problem in opportunistic grids, we designed, implemented and evaluated the OppStore middleware. This middleware provides reliable distributed storage for application data, which can be accessed from any machine in the grid. The machines are organized in clusters, connected by a self-organizing and fault-tolerant peer-to-peer network. During storage, data is encoded into redundant fragments, allowing the reconstruction of the original file from only a subset of those fragments. Finally, to deal with resource heterogeneity, we developed an extension to the Pastry peer-to-peer routing substrate that adds heterogeneity-aware, load-balanced message routing.
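OppStore's storage principle, encoding data into redundant fragments so a file can be rebuilt from a subset of them, can be illustrated with the simplest possible redundancy scheme. The sketch below uses XOR parity (k data fragments plus one parity fragment; any k of the k+1 suffice). OppStore itself uses a proper erasure code, so this scheme is an assumption chosen for brevity.

```python
# k data fragments + 1 XOR parity fragment: any single lost fragment
# can be rebuilt by XOR-ing the survivors.
from functools import reduce

def encode(data: bytes, k: int = 3) -> list:
    frag_len = -(-len(data) // k)                 # ceil division
    frags = [data[i * frag_len:(i + 1) * frag_len].ljust(frag_len, b"\0")
             for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity]

def decode(frags: list, length: int) -> bytes:
    missing = [i for i, f in enumerate(frags) if f is None]
    assert len(missing) <= 1, "XOR parity tolerates one lost fragment"
    if missing:
        others = [f for f in frags if f is not None]
        frags[missing[0]] = bytes(
            reduce(lambda a, b: a ^ b, col) for col in zip(*others))
    return b"".join(frags[:-1])[:length]          # drop parity, trim padding

blob = b"opportunistic grid storage"
frags = encode(blob)
frags[1] = None                                   # one machine leaves the grid
assert decode(frags, len(blob)) == blob
```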
214

Environnement décentralisé et protocole de communication pour le calcul intensif sur grille / A decentralized environment and a protocol of communication for high performance computing on grid architecture

Fakih, Bilal 09 November 2018 (has links)
This thesis presents a decentralized environment for the implementation of high-performance computing applications on grid platforms. We are interested in loosely synchronous and pleasingly parallel applications. For loosely synchronous applications, we focus in particular on numerical simulation problems that can be solved via parallel or distributed iterative methods, i.e., synchronous, asynchronous and hybrid iterative methods; for pleasingly parallel applications, we focus on planning problems. Our contribution is the design and implementation of the GRIDHPC programming environment. GRIDHPC exploits all the computing resources, i.e., all the cores of the multi-core computing nodes, using OpenMP, and uses the reconfigurable multi-network protocol RMNP to exploit several high-speed networks, such as Infiniband, Myrinet and Ethernet, simultaneously. Note that RMNP can reconfigure itself automatically and dynamically according to application requirements, such as the computation scheme (synchronous or asynchronous iterative schemes), and to elements of context, such as network topology and network type (Ethernet, Infiniband, Myrinet), by choosing the best communication mode between computing nodes and the best network. We present and analyze experimental results obtained on clusters of the Grid'5000 platform for the obstacle problem and for planning problems.
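The distinction between the synchronous and asynchronous iterative schemes GRIDHPC supports can be shown on a toy relaxation. In the synchronous scheme every update reads only the previous iterate; in the asynchronous limit each component uses whatever values are freshest. The single-process sketch below is meant only to contrast the two update disciplines, not to model RMNP or multi-node execution.

```python
# Synchronous vs. asynchronous update disciplines on a ring relaxation.
def synchronous_relax(x, iters=200):
    # Jacobi-style: every component reads the previous iterate
    for _ in range(iters):
        x = [0.5 * (x[i - 1] + x[(i + 1) % len(x)]) for i in range(len(x))]
    return x

def asynchronous_relax(x, iters=200):
    # components updated in place with the freshest available values
    # (a Gauss-Seidel-like limit case of an asynchronous iteration)
    for _ in range(iters):
        for i in range(len(x)):
            x[i] = 0.5 * (x[i - 1] + x[(i + 1) % len(x)])
    return x

start = [0.0, 4.0, 0.0, 4.0]
print(synchronous_relax(start[:]))   # both converge to the same fixed point,
print(asynchronous_relax(start[:]))  # but along different trajectories
```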
215

Modeling, Design And Evaluation Of Networking Systems And Protocols Through Simulation

Lacks, Daniel Jonathan 01 January 2007 (has links)
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits that apply to many different domains: it reduces the cost of creating prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the modeling of physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms, along with the desire for a common infrastructure to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic loads than currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize up to 45% power savings and up to 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions, including creating, joining and resigning from a federation, time management, and event publication and subscription.
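A k-ary n-cube, the topology the 3DInterconnect simulator models, identifies each node with an n-digit base-k coordinate and links it to its +/-1 neighbors (mod k) in every dimension. The sketch below enumerates neighbors and routes packets with dimension-ordered routing; the routing rule is a standard choice assumed here for illustration, not taken from the thesis.

```python
# k-ary n-cube: n-dimensional torus with k nodes per dimension.
def neighbors(coord, k):
    for dim in range(len(coord)):
        for step in (-1, 1):
            nxt = list(coord)
            nxt[dim] = (nxt[dim] + step) % k
            yield tuple(nxt)

def route(src, dst, k):
    # dimension-ordered routing: resolve one dimension at a time,
    # taking the shorter direction around each ring
    path, cur = [src], list(src)
    for dim in range(len(src)):
        while cur[dim] != dst[dim]:
            fwd = (dst[dim] - cur[dim]) % k
            cur[dim] = (cur[dim] + (1 if fwd <= k - fwd else -1)) % k
            path.append(tuple(cur))
    return path

print(list(neighbors((0, 0, 0), k=4)))   # 2n = 6 links per node
print(route((0, 0, 0), (2, 3, 1), k=4))
```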
216

Improving Scheduling in Heterogeneous Grid and Hadoop Systems

Rasooli, Oskooei Aysan 10 1900 (has links)
<p>Advances in network technologies and computing resources have led to the deployment of large scale computational systems, such as those following Grid or Cloud architectures. The scheduling problem is a significant issue in these distributed computing environments, where a scheduling algorithm should consider multiple objectives and performance metrics. Moreover, heterogeneity is increasing at both the application and resource levels. The heterogeneity in these systems can have a huge impact on performance in terms of metrics such as average completion time. However, traditional Grid and Cloud scheduling algorithms neglect heterogeneity in their scheduling decisions. This PhD dissertation studies the scheduling challenges in Computational Grid, Data Grid, and Cloud computing systems, and introduces new scheduling algorithms for each of these systems.</p> <p>The main issue in Grid scheduling is the wide distribution of resources. As a result, gathering full state information can add huge overhead to the scheduler. This thesis introduces a Computational Grid scheduling algorithm which simultaneously addresses minimizing completion times (by considering system heterogeneity), while requiring zero dynamic state information. Simulation results show the promising performance of this algorithm, and its robustness with respect to errors in parameter estimates.</p> <p>In the case of Data Grid schedulers, an efficient scheduling decision should select a combination of resources for a task that simultaneously mitigates the computation and the communication costs. Therefore, these schedulers need to consider a large search space to find an appropriate combination. This thesis introduces a new Data Grid scheduling algorithm, which dynamically makes replication and scheduling decisions. The proposed algorithm reduces the search space, decreases the required state information, and improves the performance by considering the system heterogeneity. Simulation results illustrate the promising performance of the introduced algorithm.</p> <p>Cloud computing can be considered as a next generation of Grid computing. One of the main challenges in Cloud systems is the enormous expansion of data in different applications. The MapReduce programming model and Hadoop framework were designed as a solution for executing large scale data-intensive applications. A large number of (heterogeneous) users, using the same Hadoop cluster, can result in tensions between the various performance metrics by which such systems are measured. This research introduces and implements a Hadoop scheduling system, which uses system information such as estimated job arrival rates and mean job execution times to make scheduling decisions. The proposed scheduling system, named COSHH (Classification and Optimization based Scheduler for Heterogeneous Hadoop systems), considers heterogeneity at both the application and cluster levels. The main objective of COSHH is to improve the average completion time of jobs. However, as it is concerned with other key Hadoop performance metrics, it also achieves competitive performance under minimum share satisfaction, fairness and locality metrics, with respect to other well-known Hadoop schedulers. The proposed scheduler can be efficiently applied in heterogeneous clusters, in contrast to most Hadoop schedulers, which assume homogeneous clusters.</p> <p>A Hadoop system can be described based on three factors: cluster, workload, and user. 
Each factor is either heterogeneous or homogeneous, which reflects the heterogeneity level of the Hadoop system. This PhD research studies the effect of heterogeneity in each of these factors on the performance of Hadoop schedulers. Three schedulers which consider different levels of Hadoop heterogeneity are used for the analysis: FIFO, Fair sharing, and COSHH. Performance issues are introduced for Hadoop schedulers, and experiments are provided to evaluate these issues. The reported results suggest guidelines for selecting an appropriate scheduler for different Hadoop systems. The focus of these guidelines is on systems which do not have significant fluctuations in the number of resources or jobs.</p> <p>There is a considerable challenge in Hadoop to schedule tasks and resources in a scalable manner. Moreover, the potential heterogeneous nature of deployed Hadoop systems tends to increase this challenge. This thesis analyzes the performance of widely used Hadoop schedulers including FIFO and Fair sharing and compares them with the COSHH scheduler. Based on the discussed insights, a hybrid solution is introduced, which selects appropriate scheduling algorithms for scalable and heterogeneous Hadoop systems with respect to the number of incoming jobs and available resources. The proposed hybrid scheduler considers the guidelines provided for heterogeneous Hadoop systems in the case that the number of jobs and resources change considerably.</p> <p>To improve the performance of high priority users, Hadoop guarantees minimum numbers of resource shares for these users at each point in time. This research compares different scheduling algorithms based on minimum share consideration and under different heterogeneous and homogeneous environments. For this evaluation, a real Hadoop system is developed and evaluated using Facebook workloads. Based on the experiments performed, a reliable scheduling algorithm is suggested which can work efficiently in different environments.</p> / Doctor of Philosophy (PhD)
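The heterogeneity argument behind COSHH can be made concrete with a toy comparison: a scheduler that uses per-resource execution-time estimates against one that only balances load. The workload, the speed table and the greedy rule below are all assumptions for illustration; COSHH itself solves a classification and optimization problem over estimated job arrival rates and execution times.

```python
# Two resources, each fast for one job kind and slow for the other.
jobs = [("cpu", 8), ("cpu", 8), ("io", 8), ("io", 8)]   # (kind, base cost)
speed = {"fast-cpu": {"cpu": 1.0, "io": 3.0},            # relative slowdowns
         "fast-io":  {"cpu": 3.0, "io": 1.0}}

def makespan(jobs, aware):
    ready = {r: 0.0 for r in speed}       # next time each resource is free
    for kind, cost in jobs:
        if aware:
            # heterogeneity-aware: place where the job *completes* earliest
            r = min(ready, key=lambda res: ready[res] + cost * speed[res][kind])
        else:
            # heterogeneity-blind: just pick the least-loaded resource
            r = min(ready, key=ready.get)
        ready[r] += cost * speed[r][kind]
    return max(ready.values())

print("blind :", makespan(jobs, aware=False))   # 32.0 with these numbers
print("aware :", makespan(jobs, aware=True))    # 16.0
```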
217

Data Transfer and Management through the IKAROS framework: Adopting an asynchronous non-blocking event driven approach to implement the Elastic-Transfer's IMAP client-server connection

Gkikas, Nikolaos January 2015 (has links)
Given the current state of input/output (I/O) and storage devices in petascale systems, incremental solutions would be ineffective when implemented in exascale environments. According to "The International Exascale Software Roadmap" by Dongarra et al., existing I/O architectures are not sufficiently scalable, especially because current shared file systems have limitations when used in large-scale environments. These limitations are: bandwidth does not scale economically to large-scale systems, and I/O traffic on both the high-speed network and the storage server can impact, and be influenced by, other unrelated jobs. Future applications on exascale computers will require I/O bandwidth proportional to their computational capabilities. To avoid these limitations, C. Filippidis, C. Markou, and Y. Cotronis proposed the IKAROS framework. In this thesis project, the capabilities of the publicly available elastic-transfer (eT) module, which was directly derived from IKAROS, are expanded. eT uses Google's Gmail service as a utility for efficient metadata management. Gmail is based on the IMAP protocol, and the existing version of the eT framework implements the Internet Message Access Protocol (IMAP) client-server connection through the "Inbox" module from the Node Package Manager (NPM) of Node.js. This module served as a proof of concept, but in a production environment this implementation undermines the system's scalability: each client individually requests metadata from the IMAP server, leading to an inefficient allocation of the system's resources when a large number of concurrent requests arrive at eT's metadata server (MDS) at the same time. This thesis solves the problem by adopting an asynchronous, non-blocking, event-driven approach to implement the IMAP client-server connection. This was done by integrating and modifying the "Imap" NPM module from the NPM repository to suit the eT framework. Additionally, since the JavaScript Object Notation (JSON) format has become one of the most widespread data-interchange formats, eT's metadata scheme is modified so that the system's metadata can easily be parsed as JSON objects, giving the framework wider compatibility and interoperability with external systems. The evaluation and operational behavior of the new module were tested through a set of data transfer experiments over a wide area network. These experiments were performed to ensure that the changes in the system's architecture did not affect its performance.
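The fix the thesis describes, replacing a blocking request-per-client flow with an asynchronous, non-blocking, event-driven one, transposes naturally to Python's asyncio for illustration. The actual implementation modifies the Node.js "Imap" NPM module; the function below and its 0.1 s latency are assumptions.

```python
# Event-driven concurrency sketch: many in-flight metadata requests
# share one event loop, so no request blocks while waiting on I/O.
import asyncio

async def fetch_metadata(client_id: int) -> dict:
    await asyncio.sleep(0.1)      # stands in for one IMAP round trip
    return {"client": client_id, "fragments": ["gmail-msg-1", "gmail-msg-2"]}

async def main() -> None:
    # 100 concurrent requests complete in roughly 0.1 s instead of 10 s,
    # because the waits overlap instead of queuing behind one another
    results = await asyncio.gather(*(fetch_metadata(i) for i in range(100)))
    print(len(results), "metadata records fetched")

asyncio.run(main())
```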
218

Planejamento de experimentos com várias replicações em paralelo em grades computacionais / Towards distributed simulation design of experiments on computational grids

Pereira Júnior, Lourenço Alves 07 June 2010 (has links)
This master's thesis presents a study of computational grids and distributed simulation using the MRIP (Multiple Replications In Parallel) approach. From this study it was possible to design and implement the prototype of a tool for managing experiments in a grid environment, called the Grid Experiments Manager (GEM). GEM is organized in a modular way, can be used as a standalone program or integrated with other software, and can be extended to several computational-grid middlewares. With this implementation it was also possible to compare the performance of sequential simulations with simulations executed on a cluster and on a computational grid testbed, using a benchmark built to repeat the same workload on every system under evaluation. The tests showed a large reduction in execution time: comparing sequential and cluster executions, efficiency was around 197% for simulations with short execution times and 239% for those with longer ones; comparing cluster and grid executions, efficiency was 98% and 105% for small and large simulations, respectively.
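MRIP runs independent replications of the same simulation with different random seeds in parallel and pools their outputs. A minimal sketch follows, assuming a toy sampling routine in place of a real simulation model; GEM itself manages such replications across grid middlewares rather than a local process pool.

```python
# Multiple Replications In Parallel: same model, different seeds,
# replications run concurrently and their estimates are pooled.
import random
import statistics
from multiprocessing import Pool

def one_replication(seed: int) -> float:
    rng = random.Random(seed)
    # toy model: mean of 100k exponential "response time" samples
    return statistics.fmean(rng.expovariate(1.0) for _ in range(100_000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        estimates = pool.map(one_replication, range(8))
    print("pooled estimate:", statistics.fmean(estimates))
```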
219

Uma abordagem orientada a sistemas para otimização de escalonamento de processos em grades computacionais / A system-centric approach for process scheduling optimization in computational grids

Gabriel, Paulo Henrique Ribeiro 26 April 2013 (has links)
One of the most important challenges involved in the design of grid computing systems is process scheduling, which maps processes onto the available computers in an attempt to reduce application execution time or maximize resource utilization. The Distributed Systems literature usually addresses these two objectives separately, giving rise to application-centric and resource-centric scheduling, respectively. More recently, a third approach, referred to as system-centric scheduling, has emerged, which attempts to optimize both objectives jointly. Heuristic and approximation algorithms have been proposed to address this third type of scheduling. Heuristics aim to find good solutions efficiently, without guaranteeing solution quality. Approximation algorithms, on the other hand, provide guaranteed solution bounds, but they are harder to design, which is why only simplified versions of such algorithms exist for process-scheduling scenarios. The lack of suitable approximation algorithms for process scheduling and the need for system-centric solutions motivated this thesis, which presents the Min Heap-based Scheduling Algorithm (MHSA), an approximation algorithm for system-centric process scheduling. The algorithm is based on a mathematical optimization model, also proposed in this thesis, which considers process and resource behavior in order to measure the quality of scheduling solutions. MHSA builds a min-heap tree in which nodes represent computers and sorting keys correspond to queuing times, i.e., computer workloads. Although the algorithm primarily reduces application execution time (makespan), its tree structure lets whichever computer occupies the root node receive workload, which favors resource utilization and hence its system-centric orientation. The algorithm has a worst-case time complexity of O(log₂ m), where m is the number of computers in the system. Its approximation ratio was analyzed for heterogeneous distributed environments with and without inter-process communication, so the minimum quality level of its solutions is known a priori. Experiments were conducted to evaluate the proposed algorithm and compare it with other approaches. The results confirm that MHSA reduces the time spent obtaining good scheduling solutions.
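The core of MHSA as described can be stated in a few lines: keep the computers in a min-heap keyed by accumulated queuing time, always assign the next process to the root, and reinsert it, giving O(log₂ m) per decision. The sketch below assumes simple cost and speed numbers and omits the communication terms of the full optimization model.

```python
# Min-heap scheduler sketch in the spirit of MHSA: the least-loaded
# computer sits at the root and receives the next process.
import heapq

def mhsa_schedule(process_costs, machine_speeds):
    # heap entries: (queuing time so far, machine id)
    heap = [(0.0, m) for m in range(len(machine_speeds))]
    heapq.heapify(heap)
    placement = []
    for cost in process_costs:
        queue_time, m = heapq.heappop(heap)        # root = least loaded
        queue_time += cost / machine_speeds[m]     # heterogeneous speeds
        heapq.heappush(heap, (queue_time, m))      # O(log m) reinsertion
        placement.append(m)
    makespan = max(t for t, _ in heap)
    return placement, makespan

print(mhsa_schedule([4, 2, 7, 1, 5, 3], machine_speeds=[1.0, 2.0, 0.5]))
```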
220

SCALABLE AND FAULT TOLERANT HIERARCHICAL B&B ALGORITHMS FOR COMPUTATIONAL GRIDS

Bendjoudi, Ahcène 24 April 2012 (has links) (PDF)
Solving combinatorial optimization problems exactly with Branch-and-Bound (B&B) algorithms requires an enormous amount of computing resources. Today, this power is offered by large-scale environments such as computational grids. Grids, however, raise new challenges: scalability, heterogeneity, and fault tolerance. Most B&B algorithms revisited for computational grids are based on the Master-Worker paradigm, which limits their scalability; moreover, fault tolerance is rarely addressed in these works. In this thesis, we propose three main contributions: P2P-B&B, H-B&B and FTH-B&B. P2P-B&B is a Master-Worker-based framework that addresses scalability by reducing the frequency of task requests and by allowing direct communication between workers. H-B&B also addresses scalability; unlike the approaches proposed in the literature, it is fully dynamic and adaptive, i.e., it takes into account the dynamic acquisition of computing resources. FTH-B&B is based on new fault-tolerance mechanisms that build and maintain a balanced hierarchy and minimize redundant work when tasks are saved and restored. The proposed approaches were implemented on the ProActive grid platform and applied to the Flow-Shop scheduling problem. Large-scale experiments performed on the Grid'5000 grid demonstrated the efficiency of the proposed approaches.
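What P2P-B&B, H-B&B and FTH-B&B distribute is the exploration of a branch-and-bound tree under a shared best-known bound. The sequential toy below (a 0/1 knapsack with made-up numbers) shows the pruning that the shared incumbent enables; in P2P-B&B, workers exchange that incumbent directly rather than through the master.

```python
# Toy branch-and-bound: the "shared incumbent" is a plain variable here;
# distributed variants broadcast it between workers to prune subtrees.
values, weights, capacity = [10, 7, 4, 3], [4, 3, 2, 1], 6
best = 0                                    # best complete solution so far

def bound(i, value, room):
    # optimistic estimate: take remaining items fractionally
    for v, w in zip(values[i:], weights[i:]):
        take = min(1.0, room / w)
        value += v * take
        room -= w * take
        if room <= 0:
            break
    return value

def branch(i, value, room):
    global best
    if i == len(values) or room == 0:
        best = max(best, value)
        return
    if bound(i, value, room) <= best:       # prune: cannot beat incumbent
        return
    if weights[i] <= room:                  # branch: take item i
        branch(i + 1, value + values[i], room - weights[i])
    branch(i + 1, value, room)              # branch: skip item i

branch(0, 0, capacity)
print("optimal value:", best)               # 14 for these numbers
```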
