Understanding Data Storage and Ingestion for Large-Scale Deep Recommendation Model Training

Mark Zhao, Niket Agarwal, Aarti Basant, Bugra Gedik, Satadru Pan, Mustafa Ozdal, Rakesh Komuravelli, Jerry Pan, Tianshu Bao, Haowei Lu, Sundaram Narayanan, Jack Langman, Kevin Wilfong, Harsha Rastogi, Carole-Jean Wu, Christos Kozyrakis, Parik Pol

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

Datacenter-scale AI training clusters consisting of thousands of domain-specific accelerators (DSA) are used to train increasingly complex deep learning models. These clusters rely on a data storage and ingestion (DSI) pipeline, responsible for storing exabytes of training data and serving it at tens of terabytes per second. As DSAs continue to push training efficiency and throughput, the DSI pipeline is becoming the dominating factor that constrains the overall training performance and capacity. Innovations that improve the efficiency and performance of DSI systems and hardware are urgent, demanding a deep understanding of DSI characteristics and infrastructure at scale. This paper presents Meta's end-to-end DSI pipeline, composed of a central data warehouse built on distributed storage and a Data PreProcessing Service that scales to eliminate data stalls. We characterize how hundreds of models are collaboratively trained across geo-distributed datacenters via diverse and continuous training jobs. These training jobs read and heavily filter massive and evolving datasets, resulting in popular features and samples used across training jobs. We measure the intense network, memory, and compute resources required by each training job to preprocess samples during training. Finally, we synthesize key takeaways based on our production infrastructure characterization. These include identifying hardware bottlenecks, discussing opportunities for heterogeneous DSI hardware, motivating research in datacenter scheduling and benchmark datasets, and assimilating lessons learned in optimizing DSI infrastructure.
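
As a rough illustration of the abstract's point about heavy filtering and per-job preprocessing, the sketch below shows a toy Python ingestion loop that reads stored samples, drops those a job does not need, and keeps only the features that job requests. The names (load_rows, preprocess, REQUESTED_FEATURES) are hypothetical and do not reflect the actual Data PreProcessing Service described in the paper.

```python
# Hypothetical sketch of a per-job ingestion loop: read rows, drop samples the
# job does not train on (heavy filtering), and keep only the requested features.
# All names and data are illustrative, not Meta's API or dataset.
from typing import Dict, Iterable, Iterator, List

REQUESTED_FEATURES: List[str] = ["user_id", "item_id", "click"]

def load_rows() -> Iterator[Dict[str, object]]:
    """Stand-in for a reader over data-warehouse partitions (e.g., columnar files)."""
    yield {"user_id": 1, "item_id": 7, "click": 1, "unused_feature": 0.3}
    yield {"user_id": 2, "item_id": 9, "click": 0, "unused_feature": 0.9}

def preprocess(rows: Iterable[Dict[str, object]]) -> Iterator[Dict[str, object]]:
    for row in rows:
        # Filtering: many stored samples are skipped by any given training job.
        if row["click"] == 0:
            continue
        # Feature selection: only the features this job trains on are kept,
        # which is why popular features end up read by many jobs.
        yield {name: row[name] for name in REQUESTED_FEATURES}

if __name__ == "__main__":
    for sample in preprocess(load_rows()):
        print(sample)  # in production, batches like this would feed the trainers
```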

Original language: English (US)
Title of host publication: ISCA 2022 - Proceedings of the 49th Annual International Symposium on Computer Architecture
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1042-1057
Number of pages: 16
ISBN (Electronic): 9781450386104
DOIs
State: Published - Jun 18 2022
Event: 49th IEEE/ACM International Symposium on Computer Architecture, ISCA 2022 - New York, United States
Duration: Jun 18 2022 - Jun 22 2022

Publication series

Name: Proceedings - International Symposium on Computer Architecture
ISSN (Print): 1063-6897

Conference

Conference: 49th IEEE/ACM International Symposium on Computer Architecture, ISCA 2022
Country/Territory: United States
City: New York
Period: 6/18/22 - 6/22/22

Keywords

  • Data ingestion
  • Data storage
  • Databases
  • Distributed systems
  • Machine learning systems

ASJC Scopus subject areas

  • Hardware and Architecture
