An Automated Approach to Multidimensional Benchmarking on Large-Scale Systems

Samuel Leeman-Munk and Aaron Weeden

Volume 1, Issue 1 (December 2010), pp. 44–50

https://doi.org/10.22369/issn.2153-4136/1/1/7


BibTeX
@article{jocse-1-1-7,
  author={Samuel Leeman-Munk and Aaron Weeden},
  title={An Automated Approach to Multidimensional Benchmarking on Large-Scale Systems},
  journal={The Journal of Computational Science Education},
  year=2010,
  month=dec,
  volume=1,
  number=1,
  pages={44--50},
  doi={10.22369/issn.2153-4136/1/1/7}
}

High performance computing raises the bar for benchmarking. Existing benchmark applications such as Linpack measure the raw power of a computer along a single dimension, but across the myriad architectures of high-performance cluster computing, an algorithm that shows excellent performance on one cluster may perform poorly on another. For a year, a group of Earlham student researchers worked through the Undergraduate Petascale Education Program (UPEP) on an improved, multidimensional benchmarking technique that would more precisely capture how well a cluster resource suits a given algorithm. We planned to measure cluster effectiveness according to the thirteen dwarfs of computing identified in Berkeley's parallel computing research paper. To accomplish this we created PetaKit, a software stack for building and running programs on cluster computers.
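The core idea of multidimensional benchmarking — timing a workload while varying more than one axis, such as problem size and degree of parallelism, rather than reporting a single peak number — can be sketched as follows. This is an illustrative example only, not PetaKit's actual implementation; the kernel and parameter choices are hypothetical stand-ins.

```python
import time
from multiprocessing import Pool

def kernel(n):
    """Hypothetical compute kernel standing in for one of the 'dwarfs'
    (here, a simple reduction over n elements)."""
    return sum(x * x for x in range(n))

def benchmark(sizes, worker_counts):
    """Time the kernel across two dimensions: problem size and worker count.

    Returns a dict mapping (workers, size) -> elapsed seconds, i.e. a
    grid of measurements rather than a single scalar score.
    """
    results = {}
    for workers in worker_counts:
        with Pool(workers) as pool:
            for n in sizes:
                start = time.perf_counter()
                pool.map(kernel, [n] * workers)  # one task per worker
                results[(workers, n)] = time.perf_counter() - start
    return results

if __name__ == "__main__":
    grid = benchmark(sizes=[10_000, 100_000], worker_counts=[1, 2])
    for (workers, n), elapsed in sorted(grid.items()):
        print(f"workers={workers} n={n} time={elapsed:.4f}s")
```

A real harness in this spirit would sweep one such grid per dwarf and per cluster, so that the same algorithm's behavior can be compared across machines rather than collapsed into one figure of merit.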