MLPerf: An industry standard benchmark suite for machine learning performance

Peter Mattson, Hanlin Tang, Gu-Yeon Wei, Carole-Jean Wu, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, David Patterson, Guenther Schmuelling

Research output: Contribution to journal › Article › peer-review

70 Scopus citations


In this article, we describe the design choices behind MLPerf, a machine learning performance benchmark that has become an industry standard. The first two rounds of the MLPerf Training benchmark helped drive improvements to software-stack performance and scalability, showing a 1.3× speedup in the top 16-chip results despite higher quality targets and a 5.5× increase in system scale. The first round of MLPerf Inference received over 500 benchmark results from 14 different organizations, showing growing adoption.

Original language: English (US)
Article number: 9001257
Pages (from-to): 8-16
Number of pages: 9
Journal: IEEE Micro
Issue number: 2
State: Published - Mar 1 2020

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering

