
Simulation and Performance Evaluation of Hadoop Capacity Scheduler

MapReduce is a parallel programming paradigm for processing huge datasets on a cluster for certain classes of
distributable problems. Budgetary constraints and the need for better utilization of resources in a
MapReduce cluster often lead organizations to rent or share hardware resources for their main data processing
and analysis tasks. As a result, many competing jobs from different clients may make simultaneous
requests to the MapReduce framework on a particular cluster. Schedulers such as the Fair Scheduler and the
Capacity Scheduler have been designed specifically for this purpose. Administrators and users nevertheless run
into performance problems because they do not know the exact meaning of the various task scheduler settings
and what impact those settings have on the resource allocation scheme across organizations in a shared
MapReduce cluster. In this work, the Capacity Scheduler is integrated into the existing MRPerf simulator to
predict the performance of MapReduce jobs in a shared cluster under different Capacity Scheduler settings.
A few case studies on the behaviour of the Capacity Scheduler across different job patterns are also conducted using the integrated simulator.
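
The core idea behind the Capacity Scheduler's per-organization guarantees is that each queue receives a share of the cluster's task slots in proportion to its configured capacity. The following is a minimal sketch (not taken from the thesis; queue names and capacity values are illustrative) of that proportional division:

```python
# Minimal sketch: divide a cluster's map slots among queues in proportion to
# their configured capacity percentages, as the Capacity Scheduler does for
# queues belonging to different organizations. Names and numbers are illustrative.

def allocate_slots(total_slots, capacities):
    """Split total_slots among queues according to their capacity percentages."""
    allocation = {}
    assigned = 0
    for queue, percent in capacities.items():
        slots = int(total_slots * percent / 100)
        allocation[queue] = slots
        assigned += slots
    # Give any slots lost to integer rounding to the queue with the largest capacity.
    if assigned < total_slots:
        largest = max(capacities, key=capacities.get)
        allocation[largest] += total_slots - assigned
    return allocation

if __name__ == "__main__":
    # Two organizations sharing a 100-map-slot cluster with a 60/40 capacity split.
    print(allocate_slots(100, {"org_a": 60, "org_b": 40}))
    # -> {'org_a': 60, 'org_b': 40}
```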

Identifier: oai:union.ndltd.org:USASK/oai:ecommons.usask.ca:10388/ETD-2013-06-1172
Date: 2013 June 1900
Contributors: Makaroff, Dwight J., Grassmann, Winfried
Source Sets: University of Saskatchewan Library
Language: English
Detected Language: English
Type: text, thesis
