TY - GEN
T1 - Learning cache replacement with CACHEUS
AU - Rodriguez, Liana V.
AU - Yusuf, Farzana
AU - Lyons, Steven
AU - Paz, Eysler
AU - Rangaswami, Raju
AU - Liu, Jason
AU - Zhao, Ming
AU - Narasimhan, Giri
N1 - Funding Information:
We would like to thank the reviewers of this paper and our shepherd Ken Salem for insightful feedback that helped improve the content and presentation of this paper substantially. This work was supported in part by a NetApp Faculty Fellowship, and NSF grants CCF-1718335, CNS-1563883, and CNS-1956229.
Publisher Copyright:
© 2021 by The USENIX Association.
PY - 2021
Y1 - 2021
AB - Recent advances in machine learning open up new and attractive approaches for solving classic problems in computing systems. For storage systems, cache replacement is one such problem because of its enormous impact on performance. We classify workloads as a composition of four workload primitive types — LFU-friendly, LRU-friendly, scan, and churn. We then design and evaluate CACHEUS, a new class of fully adaptive, machine-learned caching algorithms that utilize a combination of experts designed to address these workload primitive types. The experts used by CACHEUS include the state-of-the-art ARC, LIRS, and LFU, and two new ones — SR-LRU, a scan-resistant version of LRU, and CR-LFU, a churn-resistant version of LFU. We evaluate CACHEUS using 17,766 simulation experiments on a collection of 329 workloads run against 6 different cache configurations. Paired t-test analysis demonstrates that CACHEUS using the newly proposed lightweight experts, SR-LRU and CR-LFU, is the most consistently performing caching algorithm across a range of workloads and cache sizes. Furthermore, CACHEUS enables augmenting state-of-the-art algorithms (e.g., LIRS, ARC) by combining them with a complementary cache replacement algorithm (e.g., LFU) to better handle a wider variety of workload primitive types.
UR - http://www.scopus.com/inward/record.url?scp=85102980505&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102980505&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85102980505
T3 - Proceedings of the 19th USENIX Conference on File and Storage Technologies, FAST 2021
SP - 341
EP - 354
BT - Proceedings of the 19th USENIX Conference on File and Storage Technologies, FAST 2021
PB - USENIX Association
T2 - 19th USENIX Conference on File and Storage Technologies, FAST 2021
Y2 - 23 February 2021 through 25 February 2021
ER -