1 |
Ad hoc cloud computing. McGilvary, Gary Andrew. January 2014.
Commercial and private cloud providers offer virtualized resources via a set of co-located and dedicated hosts that are exclusively reserved for the purpose of offering a cloud service. While both cloud models appeal to the mass market, there are many cases where outsourcing to a remote platform or procuring an in-house infrastructure may not be ideal or even possible. To offer an attractive alternative, we introduce and develop an ad hoc cloud computing platform to transform spare resource capacity from an infrastructure owner’s locally available, but non-exclusive and unreliable, infrastructure into an overlay cloud platform. The foundation of the ad hoc cloud relies on transferring and instantiating lightweight virtual machines on demand on near-optimal hosts, while virtual machine checkpoints are distributed in a P2P fashion to other members of the ad hoc cloud. Virtual machines found to be non-operational are restored elsewhere, ensuring the continuity of cloud jobs. In this thesis we investigate the feasibility, reliability and performance of ad hoc cloud computing infrastructures. We first show that the combination of volunteer computing and virtualization forms the backbone of the ad hoc cloud. We outline the process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC distributes virtual machines to volunteer hosts, allowing volunteer applications to be executed in a sandboxed environment; this solves many of BOINC's shortcomings and also provides the basis for an ad hoc cloud computing platform to be developed. We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline the transformational process and integrated extensions. These include a BOINC job submission system, cloud job and virtual machine restoration schedulers, and a periodic P2P checkpoint distribution component. Furthermore, as current monitoring tools are unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure monitoring and management tool called the Cloudlet Control Monitoring System is developed and presented. We evaluate each of our individual contributions as well as the reliability, performance and overheads associated with an ad hoc cloud deployed on a realistically simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a feasible concept but also a viable computational alternative that offers high levels of reliability and can at least offer reasonable performance, which at times may exceed the performance of a commercial cloud infrastructure.
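As a rough illustration of the P2P checkpoint replication and VM restoration the abstract describes, the following Python sketch replicates a checkpoint to a few peers and restores a failed VM from any live copy. All names and the storage model are hypothetical, not taken from the thesis.

```python
import random

class AdHocHost:
    """A volunteer host that stores replicated VM checkpoints (hypothetical sketch)."""
    def __init__(self, name):
        self.name = name
        self.checkpoints = {}   # vm_id -> checkpoint bytes
        self.online = True

def replicate_checkpoint(vm_id, snapshot, peers, k=3):
    """Push a VM checkpoint to k randomly chosen peers, P2P style."""
    targets = random.sample(peers, min(k, len(peers)))
    for peer in targets:
        peer.checkpoints[vm_id] = snapshot
    return targets

def restore_vm(vm_id, peers):
    """Restore a non-operational VM from any online peer holding its checkpoint."""
    for peer in peers:
        if peer.online and vm_id in peer.checkpoints:
            return peer.checkpoints[vm_id]   # re-instantiate the VM elsewhere
    raise RuntimeError(f"no live replica of {vm_id}; job must restart")

peers = [AdHocHost(f"host{i}") for i in range(10)]
replicate_checkpoint("vm-42", b"...serialized VM state...", peers)
peers[0].online = False                      # simulate a host reclaimed by its owner
state = restore_vm("vm-42", peers)           # continuity: another replica survives
```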
|
2 |
Desenvolvimento de um ambiente de computação voluntária baseado em computação ponto-a-ponto / Development of a volunteer computing environment based on peer-to-peer computing. Santiago, Caio Rafael do Nascimento. 13 March 2015.
The computational needs of scientific experiments often require powerful computers. An alternative way to obtain this processing power is to harness the idle cycles of personal computers on a volunteer basis. This technique is known as volunteer computing and has great potential to help scientists. However, several factors can reduce its efficiency when applied to complex scientific experiments, for example those involving long-running computations or very large input or output data. In an attempt to solve some of these problems, approaches based on peer-to-peer (P2P) concepts have emerged. In this project, a workflow execution environment and an activity scheduler that apply P2P concepts to the execution of workflows with volunteer computing were specified, developed and tested. Compared with local execution of activities and with traditional volunteer computing, execution time improved (up to a 22% reduction relative to traditional volunteer computing in the most complex tests), and in some cases the server's upload bandwidth consumption was also reduced by up to 62%.
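The bandwidth savings come from serving cached input data peer-to-peer instead of always from the project server. A minimal sketch of that idea, with hypothetical names and an assumed in-memory cache model:

```python
server_store = {"chunk-1": b"data..."}        # authoritative copy on the project server
peer_caches = {"peerA": {}, "peerB": {"chunk-1": b"data..."}}
server_uploads = 0

def fetch_input(chunk_id):
    """Return a workflow input chunk, preferring peer caches over the server."""
    global server_uploads
    for cache in peer_caches.values():
        if chunk_id in cache:
            return cache[chunk_id]            # peer-to-peer transfer, no server cost
    server_uploads += 1                       # only a cache miss costs server upload
    return server_store[chunk_id]

data = fetch_input("chunk-1")                 # served by peerB, not the server
print(server_uploads)                         # 0: the server's upload band was spared
```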
|
3 |
Resource Management Framework for Volunteer Cloud Computing. Mengistu, Tessema Mindaye. 01 December 2018.
The need for high computing resources is on the rise, despite the exponential increase in the computing capacity of workstations, the proliferation of mobile devices, and the omnipresence of data centers with massive server farms that house tens (if not hundreds) of thousands of powerful servers. This is mainly due to the unprecedented increase in the number of Internet users worldwide and the Internet of Things (IoT). So far, Cloud Computing has been providing the necessary computing infrastructures for applications, including IoT applications. However, the current cloud infrastructures, based on dedicated datacenters, are expensive to set up; running the infrastructure requires expertise, a great deal of electrical power for cooling the facilities, and redundant supplies of everything in a data center to provide the desired resilience. Moreover, the current centralized cloud infrastructures will not suffice for IoT's network-intensive applications with very fast response requirements. Alternative cloud computing models that depend on the spare resources of volunteer computers are emerging, including volunteer cloud computing, in addition to the conventional data-center-based clouds. These alternative cloud models have one characteristic in common: they do not rely on dedicated data centers to provide the cloud services. Volunteer clouds are opportunistic cloud systems that run over donated spare resources of volunteer computers. On the one hand, volunteer clouds claim numerous outstanding advantages: affordability, on-premise operation, self-provisioning, greener computing (owing to the consolidated use of existing computers), etc. On the other hand, a full-fledged implementation of volunteer cloud computing raises unique technical and research challenges: management of highly dynamic and heterogeneous compute resources, Quality of Service (QoS) assurance, meeting Service Level Agreements (SLAs), reliability, and security/trust, all made more difficult by the high dynamics and heterogeneity of the non-dedicated cloud hosts. This dissertation investigates the resource management aspect of volunteer cloud computing. Due to the intermittent availability and heterogeneity of the computing resources involved, resource management is one of the most challenging tasks in volunteer cloud computing. The dissertation focuses specifically on the Resource Discovery and VM Placement tasks of resource management. The resource base on which volunteer cloud computing depends is the scavenged, sporadically available, aggregate computing power of individual volunteer computers. Delivering reliable cloud services over these unreliable nodes is a big challenge in volunteer cloud computing: the fault tolerance of the whole system rests on the reliability and availability of the infrastructure base. This dissertation discusses the modelling of fault-tolerant, prediction-based resource discovery in volunteer cloud computing. It presents a multi-state semi-Markov process model to predict the future availability and reliability of nodes in volunteer cloud systems. A volunteer node is modelled as a semi-Markov process whose future state depends only on its current state. This matches a key observation made in analyzing traces of personal computers in enterprises: daily patterns of resource availability are comparable to those of the most recent days.
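A minimal sketch of the kind of multi-state semi-Markov availability prediction described here: estimate jump probabilities and mean sojourn times from a node's state trace. The state names and trace format are assumptions for illustration, not the dissertation's definitions.

```python
from collections import Counter, defaultdict

# Assumed discretisation: 'AV' available, 'US' user-busy, 'OFF' powered off,
# sampled hourly from one volunteer node.
trace = ["AV", "AV", "US", "AV", "OFF", "AV", "AV", "US", "US", "AV"]

transitions = defaultdict(Counter)
sojourn = defaultdict(list)

# Collapse the trace into (state, holding-time) runs, then count the jumps.
runs, prev, length = [], trace[0], 1
for s in trace[1:]:
    if s == prev:
        length += 1
    else:
        runs.append((prev, length))
        prev, length = s, 1
runs.append((prev, length))

for (s1, t1), (s2, _) in zip(runs, runs[1:]):
    transitions[s1][s2] += 1
    sojourn[s1].append(t1)

def predict_next(state):
    """Probability distribution of the next state, given only the current one."""
    counts = transitions[state]
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def expected_holding(state):
    """Mean time the node stays in 'state' before jumping (semi-Markov sojourn)."""
    times = sojourn[state]
    return sum(times) / len(times)

print(predict_next("AV"), expected_holding("AV"))
```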
The dissertation illustrates, with empirical evidence, how prediction-based resource discovery enables volunteer cloud systems to provide reliable cloud services over unreliable and non-dedicated volunteer hosts. VM placement algorithms play a crucial role in Cloud Computing in fulfilling its characteristics and achieving its objectives. In general, VM placement is a challenging problem that has been extensively studied in the conventional Cloud Computing context. Due to its divergent characteristics, volunteer cloud computing needs novel and unique ways of solving existing Cloud Computing problems, including VM placement. The intermittent availability of nodes, unreliable infrastructure, and resource-constrained nodes are some of the characteristics of volunteer cloud computing that make the VM placement problem more complicated. In this dissertation, we model the VM placement problem as a Bounded 0-1 Multi-Dimensional Knapsack Problem. Since this problem is NP-hard, the dissertation discusses heuristic-based algorithms, which take the typical characteristics of volunteer cloud computing into consideration, to solve the VM placement problem formulated as a knapsack problem. Three algorithms are developed to meet the objectives and constraints specific to volunteer cloud computing. The algorithms are tested on a real volunteer cloud computing test-bed and show good performance with respect to their optimization objectives. The dissertation also presents the design and implementation of a real volunteer cloud computing system, cuCloud, which bases its resource infrastructure on the donated computing resources of member computers. The need for the development of cuCloud stems from the lack of an experimentation platform, real or simulated, that specifically targets volunteer cloud computing. cuCloud can be called a genuine volunteer cloud computing system, manifesting the concept of "Volunteer Computing as a Service" (VCaaS), with particular significance for edge computing and related applications. In the course of this dissertation, empirical evaluations show that volunteer clouds can be used to execute a range of applications reliably and efficiently. Moreover, the physical proximity of volunteer nodes to where applications originate, at the edge of the network, helps reduce application round-trip latency. However, the overall computing capability of volunteer clouds will not suffice to handle highly resource-intensive applications by itself. Based on these observations, the dissertation also proposes, as future work, the use of volunteer clouds as a resource fabric in the emerging Edge Computing paradigm.
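As a sketch of how a knapsack-style placement heuristic can work, the following first-fit-decreasing routine packs VM demand vectors onto host capacity vectors. It is an illustrative assumption, not one of the three algorithms developed in the dissertation, and all names are hypothetical.

```python
# Each host has a (cpu, ram) capacity vector; each VM request a (cpu, ram) demand.
hosts = [{"id": "h1", "cap": [4, 8]}, {"id": "h2", "cap": [8, 16]}]
vms = [{"id": "vm1", "need": [2, 4]}, {"id": "vm2", "need": [6, 8]},
       {"id": "vm3", "need": [4, 4]}]

def fits(host, vm):
    """A VM fits if every dimension of remaining capacity covers its demand."""
    return all(c >= n for c, n in zip(host["cap"], vm["need"]))

def place(vms, hosts):
    """First-fit decreasing: biggest VMs first, onto the first host that fits."""
    placement = {}
    for vm in sorted(vms, key=lambda v: sum(v["need"]), reverse=True):
        for host in hosts:
            if fits(host, vm):
                host["cap"] = [c - n for c, n in zip(host["cap"], vm["need"])]
                placement[vm["id"]] = host["id"]
                break
        else:
            placement[vm["id"]] = None        # rejected: no capacity anywhere
    return placement

print(place(vms, hosts))   # {'vm2': 'h2', 'vm3': 'h1', 'vm1': 'h2'}
```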
|
4 |
Improving the Productivity of Volunteer Computing. Toth, David M. 15 March 2008.
The price of computers has dropped drastically over the past years, enabling many households to have at least one computer. At the same time, the performance of computers has skyrocketed, far surpassing what a typical user needs, and most of the computational power of personal computers is wasted. Volunteer computing projects attempt to use this wasted computational power to solve problems that would otherwise be computationally infeasible. Some of these problems include medical applications like searching for cures for AIDS and cancer. However, the number of volunteer computing projects is increasing rapidly, requiring improvements in the field of volunteer computing to enable the growing number of projects to continue making significant progress. This dissertation examines two ways to increase the productivity of volunteer computing: using the volunteered CPU cycles more effectively and exploring ways to increase the amount of CPU cycles that are donated. Each existing volunteer computing project uses one of two task retrieval policies to enable the volunteered computers participating in the project to retrieve work. This dissertation compares the amount of work completed by the volunteered computers participating in projects based on which of the two task retrieval techniques the project employs. Additional task retrieval policies are also proposed and evaluated. The most commonly used task retrieval policy is shown to be less effective than both the less frequently used policy and a proposed policy. The potential of video game consoles for volunteer computing is explored, as are the potential benefits of constructing different types of volunteer computing clients rather than the most popular client implementation: the screensaver. In addition to examining methods of increasing the productivity of volunteer computing, 140 traces of computer usage, detailing when computers are available to participate in volunteer computing, are collected and made publicly available. Volunteer computing project-specific information that can be used in researching how to improve volunteer computing is collected and combined into the first such summary of which we are aware.
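To make the policy comparison concrete, here is a toy simulation of two retrieval policies, fetching one task at a time versus buffered retrieval, under intermittent host availability. The policy shapes and parameters are assumptions for illustration, not Toth's exact definitions.

```python
import random
random.seed(1)

def simulate(buffer_size, hours=1000, task_len=3, p_online=0.7):
    """Count tasks a volunteer host completes under an assumed retrieval policy.

    buffer_size=1 approximates fetch-one-task-at-a-time; larger values
    approximate buffered retrieval, amortising server contact over a batch."""
    done, queue, progress = 0, 0, 0
    for _ in range(hours):
        if random.random() > p_online:
            continue                          # host offline or user-busy this hour
        if queue == 0:
            queue = buffer_size               # spend this hour fetching a batch
            continue
        progress += 1                         # one hour of computation on a task
        if progress == task_len:
            done, progress, queue = done + 1, 0, queue - 1
    return done

# Buffered retrieval completes more tasks: fewer hours lost to server contact.
print(simulate(1), simulate(5))
```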
|
5 |
A programming model and performance model for cycle stealing. Sumitomo, Jiro. January 2006.
This work describes a programming model and performance model for cycle stealing on the Internet. Cycle stealing is the use of otherwise idle computers to perform work, and promises high performance computing at relatively low cost. The Internet, being the largest pool of potentially idle computers, is an obvious target for cycle stealing. However, computers connected to the Internet are often protected by firewalls, preventing point-to-point communication between them. The fluctuating availability of computers for cycle stealing as they move in and out of an idle state, combined with the restricted communication of the Internet environment, means that programming models and abstractions suitable for programming supercomputers and clusters are not ideal. Therefore, I have created a programming model for cycle stealing which reflects the types of parallel applications that are suitable for execution using idle computers connected to the Internet. The model is designed for use by non-expert parallel programmers, and I will show how it simplifies the development of cycle stealing applications, enabling rapid application development and straightforward porting of existing sequential applications. This simple-to-use programming model, combined with the low cost of cycle stealing, improves the accessibility of high performance computing to non-traditional users of supercomputers and clusters. Deployment on the Internet, and the need to navigate through firewalls, suggests a web-based framework using common web protocols, web servers and web browsers. Part of this work investigates the feasibility of web-based approaches to cycle stealing, from the setup of a cycle stealing system, through application development and deployment, to the connection of potentially idle computers. I designed and implemented a cycle stealing framework, deployable on the web, to meet expectations of performance, reliability, ease of use and safety. Existing cycle stealing frameworks emphasise the need for applications to be decomposed into a set of jobs that execute for a long period; that is, a job should have a computation time sufficient to justify its communication cost. However, there are no tools available for users to determine what an appropriate computation time might be, given a job's data communication requirements. To date, deciding the granularity of jobs has been a matter of intuition, so a user may experience uncertainty as to the benefit of cycle stealing for their particular application, especially if the application will have relatively short-lived jobs. Based on performance analysis of my framework, I have developed an analytical model and simulator which can be used to predict, and help to optimise, the performance of user applications, and to show the feasibility of executing a particular application using the cycle stealing framework.
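The granularity question can be captured by a simple throughput bound: if every job costs t_comm seconds of serialised server communication and t_comp seconds of volunteer computation, the server saturates once workers outnumber (t_comp + t_comm) / t_comm. The sketch below uses this assumed form for illustration; the thesis develops its own analytical model and simulator.

```python
def predicted_speedup(n_workers, t_comp, t_comm):
    """Toy master-worker bound: speedup is capped by how many workers the
    server can keep busy when each job needs t_comm seconds of its attention."""
    per_job = t_comp + t_comm
    max_parallel = per_job / t_comm          # workers the server can feed
    return min(n_workers, max_parallel)

# Granularity matters: short jobs (10 s compute, 2 s comm) cap speedup at 6x
# no matter how many volunteers join; longer jobs scale to the full pool.
print(predicted_speedup(50, t_comp=10, t_comm=2))    # 6.0
print(predicted_speedup(50, t_comp=300, t_comm=2))   # 50
```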
|
6 |
A framework for fully decentralised cycle stealing. Mason, Richard S. January 2007.
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and on the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome.
Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees. High-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated, because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing, this reliable storage is not available. As part of this research, a fault tolerance system has been created which provides considerable reliability without the need for persistent storage.
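One standard way to obtain reliability without persistent storage is to keep k replicas of each task's state alive across peers and re-replicate after churn. The sketch below illustrates that general scheme under assumed names; it does not claim to match the thesis's design.

```python
import random

class Peer:
    def __init__(self, pid):
        self.pid, self.alive, self.replicas = pid, True, {}

def maintain_replicas(task_id, state, peers, k=3):
    """Keep k live copies of a task's state on volatile peers; replication
    stands in for the reliable storage that P2P environments lack."""
    holders = [p for p in peers if p.alive and task_id in p.replicas]
    needed = k - len(holders)
    candidates = [p for p in peers if p.alive and task_id not in p.replicas]
    for p in random.sample(candidates, min(max(needed, 0), len(candidates))):
        p.replicas[task_id] = state           # re-replicate after churn
        holders.append(p)
    return holders

peers = [Peer(i) for i in range(8)]
maintain_replicas("t1", b"checkpoint", peers)
random.choice([p for p in peers if "t1" in p.replicas]).alive = False  # churn
print(len(maintain_replicas("t1", b"checkpoint", peers)))  # back to 3 live copies
```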
As well as increased scalability, fully decentralised networks offer volunteers the ability to communicate directly. This makes it possible to support applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have only supported embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed which can support this style of communication within a cycle stealing context.
In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication, and methods for optimising object locality on decentralised networks.
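A hedged sketch of what a direct, message-passing style of task programming might look like, using in-process queues and threads as stand-ins for the decentralised transport; the API names and shape are assumptions, not the thesis's model.

```python
import queue
import threading

# Each task gets a mailbox; send() delivers a message directly to another task,
# with no central server mediating the exchange.
mailboxes = {"worker0": queue.Queue(), "worker1": queue.Queue()}

def send(dst, msg):
    mailboxes[dst].put(msg)

def worker0():
    send("worker1", 21)                       # direct task-to-task message
    print("result:", mailboxes["worker0"].get())

def worker1():
    x = mailboxes["worker1"].get()
    send("worker0", x * 2)                    # reply without a server round trip

threads = [threading.Thread(target=worker0), threading.Thread(target=worker1)]
for t in threads:
    t.start()
for t in threads:
    t.join()                                  # prints: result: 42
```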
|
7 |
Análise do comportamento não cooperativo em computação voluntária / Analysis of non-cooperative behavior in volunteer computing environments. Donassolo, Bruno Luis de Moura. January 2011.
Advances in networking technology and computing components have enabled Volunteer Computing (VC) systems that allow volunteers to donate their computers' idle CPU cycles to a given project. BOINC is the most popular VC infrastructure today, with over 5,900,000 hosts delivering more than 4,003 TeraFLOPS per day. BOINC projects usually have hundreds of thousands of independent tasks and are interested in overall throughput. Each project has its own server, which is responsible for distributing work units to clients, recovering results, and validating them. The BOINC scheduling algorithms are complex and have been in use for many years; their efficiency and fairness have been assessed in the context of throughput-oriented projects. Recently, however, burst projects have emerged, with fewer tasks and an interest in response time. Many works have proposed new scheduling algorithms to optimize individual response time, but their use may be problematic in the presence of other projects. In this text, we study the consequences of non-cooperative behavior in volunteer computing environments. To perform our study, we needed to modify the SimGrid simulator to improve its performance when simulating VC systems. The first contribution is therefore a set of improvements to SimGrid's simulation core that remove its performance bottlenecks; the result is a simulator considerably faster than previous versions and able to run VC experiments. As the second contribution, we show that the commonly used BOINC scheduling algorithms are unable to enforce fairness and project isolation: burst projects may dramatically impact the performance of all other projects (burst or non-burst). To study such interactions, we perform a detailed, multi-player, multi-objective game-theoretic study. Our analysis and experiments provide a good understanding of the impact of the different scheduling parameters and show that non-cooperative optimization may result in an inefficient and unfair share of the resources.
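For intuition about fairness between projects, here is a toy resource-share scheduler that always runs the project furthest below its entitled share of CPU time. This is a deliberate simplification for illustration: the real BOINC algorithms also track debts, deadlines, and per-host state, as the abstract notes.

```python
# Two projects with equal resource shares competing for one volunteer host.
shares = {"proj_throughput": 100, "proj_burst": 100}
cpu_used = {"proj_throughput": 0.0, "proj_burst": 0.0}

def pick_next_project():
    """Run the project whose actual CPU fraction lags its entitled share most."""
    total_share = sum(shares.values())
    total_used = sum(cpu_used.values()) or 1.0   # avoid division by zero
    def deficit(p):
        entitled = shares[p] / total_share
        actual = cpu_used[p] / total_used
        return entitled - actual
    return max(shares, key=deficit)

for _ in range(6):                 # simulate six one-hour scheduling slots
    p = pick_next_project()
    cpu_used[p] += 1.0
print(cpu_used)                    # converges to the 50/50 resource shares
```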
|
8 |
Proposta de mecanismo de checkpoint com armazenamento de contexto em memória para ambientes de computação voluntária / A proposal for a checkpoint mechanism based on in-memory execution-context storage for volunteer computing environments. Dal Zotto, Rafael. January 2010.
Volunteer computing is a type of distributed computing in which resource owners donate part of their computing resources, such as processing power or storage, to one or more research projects of their interest. In the high-performance computing field, the volunteer computing model plays an important role: large-scale volunteer computing systems have proven to be efficient mechanisms for solving complex problems. In such systems, which are essentially centralized, hundreds or thousands of computers are organized in a network to process a series of tasks distributed by a central server. In this kind of solution, it is essential to have a mechanism that periodically persists the intermediate results produced, to avoid losing information in case of failure. This mechanism, known as checkpointing, is also important in volunteer computing environments to ensure that, when the resource owner resumes using the machine, the intermediate results produced have been stored for later recovery. Without a consistent checkpoint mechanism, results produced by volunteer computing nodes can be lost, wasting computing power. The research in this dissertation proposes a checkpoint mechanism based on storing the execution context through object prevalence. This approach enables resources with limited processing, memory, and disk capacity, but with short and frequent idle periods, to participate in volunteer computing systems: such resources can perform fast, frequent checkpoints and thereby produce effective results.
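A minimal sketch of checkpointing by storing the execution context as a serialised object, in the spirit of object prevalence (Prevayler-style snapshotting). The snapshot-only design and all names here are assumptions; a full prevalence scheme would also journal commands between snapshots.

```python
import pickle

class ExecutionContext:
    """In-memory execution state persisted by snapshotting the whole object."""
    def __init__(self):
        self.iteration, self.partial_result = 0, 0.0

def checkpoint(ctx, path="ctx.snapshot"):
    with open(path, "wb") as f:
        pickle.dump(ctx, f)                   # whole context, fast and small

def restore(path="ctx.snapshot"):
    with open(path, "rb") as f:
        return pickle.load(f)

ctx = ExecutionContext()
for ctx.iteration in range(1, 1001):
    ctx.partial_result += ctx.iteration       # stand-in for real project work
    if ctx.iteration % 100 == 0:
        checkpoint(ctx)                       # short, frequent checkpoints

ctx = restore()                               # after owner reclaim or failure
print(ctx.iteration, ctx.partial_result)      # resumes from 1000, 500500.0
```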
|