Abstract
Deep learning has sparked a renaissance in computer systems and architecture. Despite the breakneck pace of innovation, a crucial issue concerns the research and industry communities at large: how to enable neutral and useful performance assessment of machine learning (ML) software frameworks, ML hardware accelerators, and ML systems comprising both the software stack and the hardware. The ML field needs systematic methods for evaluating performance that represent real-world use cases and are useful for making comparisons across different software and hardware implementations. MLPerf answers the call. MLPerf is an ML benchmark standard driven by academia and industry (70+ organizations). Built from the expertise of multiple organizations, MLPerf establishes a standard benchmark suite with proper metrics and benchmarking methodologies to level the playing field for performance measurement across different ML inference hardware, software, and services.
| Original language | English (US) |
|---|---|
| Article number | 9380984 |
| Pages (from-to) | 10-18 |
| Number of pages | 9 |
| Journal | IEEE Micro |
| Volume | 41 |
| Issue number | 3 |
| DOIs | |
| State | Published - May 1 2021 |
Keywords
- Benchmarks
- Inference
- Machine learning
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Electrical and Electronic Engineering