TY - GEN
T1 - Optimizing (L0, L1)-Smooth Functions by Gradient Methods
AU - Vankov, Daniil
AU - Rodomanov, Anton
AU - Nedić, Angelia
AU - Sankar, Lalitha
AU - Stich, Sebastian U.
N1 - Publisher Copyright:
© 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
PY - 2025
Y1 - 2025
AB - We study gradient methods for optimizing (L0, L1)-smooth functions, a class that generalizes Lipschitz-smooth functions and has gained attention for its relevance in machine learning. We provide new insights into the structure of this function class and develop a principled framework for analyzing optimization methods in this setting. While our convergence rate estimates recover existing results for minimizing the gradient norm in nonconvex problems, our approach significantly improves the best-known complexity bounds for convex objectives. Moreover, we show that the gradient method with Polyak stepsizes and the normalized gradient method achieve nearly the same complexity guarantees as methods that rely on explicit knowledge of (L0, L1). Finally, we demonstrate that a carefully designed accelerated gradient method can be applied to (L0, L1)-smooth functions, further improving all previous results.
UR - https://www.scopus.com/pages/publications/105010195608
M3 - Conference contribution
AN - SCOPUS:105010195608
T3 - 13th International Conference on Learning Representations, ICLR 2025
SP - 39615
EP - 39641
BT - 13th International Conference on Learning Representations, ICLR 2025
PB - International Conference on Learning Representations, ICLR
T2 - 13th International Conference on Learning Representations, ICLR 2025
Y2 - 24 April 2025 through 28 April 2025
ER -