The Vision behind MLPerf: Understanding AI Inference Performance

Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu

Research output: Contribution to journal › Article › peer-review


Abstract

Deep learning has sparked a renaissance in computer systems and architecture. Despite the breakneck pace of innovation, a crucial issue confronts the research and industry communities at large: how to enable neutral and useful performance assessment for machine learning (ML) software frameworks, ML hardware accelerators, and ML systems comprising both the software stack and the hardware. The ML field needs systematic methods for evaluating performance that represent real-world use cases and are useful for making comparisons across different software and hardware implementations. MLPerf answers the call. MLPerf is an ML benchmark standard driven by academia and industry (70+ organizations). Built on the expertise of these organizations, MLPerf establishes a standard benchmark suite with proper metrics and benchmarking methodologies to level the playing field for measuring the performance of different ML inference hardware, software, and services.
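MLPerf Inference measurements are driven by the MLCommons LoadGen tool, which issues queries to the system under test according to the suite's scenarios (single-stream, multi-stream, server, offline) and records the resulting latencies and throughput. The sketch below illustrates, under stated assumptions, how a harness might plug a system under test into the LoadGen Python bindings; the dummy model, callback names, and sample counts are illustrative, and exact binding signatures vary across LoadGen versions, so this is not the article's own implementation.

    # Minimal sketch of an MLPerf Inference-style harness using the LoadGen
    # Python bindings (mlperf_loadgen). The "model" is a placeholder; sample
    # counts and callback names are assumptions for illustration only.
    import array
    import mlperf_loadgen as lg

    TOTAL_SAMPLES = 1024          # size of the full dataset (assumed)
    PERFORMANCE_SAMPLES = 256     # samples LoadGen may keep resident (assumed)

    def load_samples(sample_indices):
        # Load the referenced dataset samples into memory (no-op for this sketch).
        pass

    def unload_samples(sample_indices):
        # Release the referenced samples (no-op here).
        pass

    def issue_queries(query_samples):
        # LoadGen calls this with a batch of QuerySample objects; each has
        # .id (opaque handle) and .index (dataset index). Run inference and
        # report completion so LoadGen can record latency.
        responses = []
        results = []
        for sample in query_samples:
            out = array.array("B", b"\x00" * 4)   # placeholder inference output
            results.append(out)                   # keep buffer alive until completion
            ptr, length = out.buffer_info()
            responses.append(lg.QuerySampleResponse(sample.id, ptr, length * out.itemsize))
        lg.QuerySamplesComplete(responses)

    def flush_queries():
        # Called at the end of the run to flush any batched work (none here).
        pass

    settings = lg.TestSettings()
    settings.scenario = lg.TestScenario.SingleStream   # one of the MLPerf Inference scenarios
    settings.mode = lg.TestMode.PerformanceOnly

    sut = lg.ConstructSUT(issue_queries, flush_queries)
    qsl = lg.ConstructQSL(TOTAL_SAMPLES, PERFORMANCE_SAMPLES, load_samples, unload_samples)
    lg.StartTest(sut, qsl, settings)
    lg.DestroyQSL(qsl)
    lg.DestroySUT(sut)

In a real submission, issue_queries would dispatch to the actual model and hardware backend, and the scenario and mode would be chosen per the MLPerf rules for the system being measured.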

Original language: English (US)
Article number: 9380984
Pages (from-to): 10-18
Number of pages: 9
Journal: IEEE Micro
Volume: 41
Issue number: 3
DOIs
State: Published - May 1 2021

Keywords

  • Benchmarks
  • Inference
  • Machine learning

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering
