Fast proximal gradient descent for a class of non-convex and non-smooth sparse learning problems

Yingzhen Yang, Jiahui Yu

Research output: Contribution to conference › Paper › peer-review

Abstract

Non-convex and non-smooth optimization problems are important in statistics and machine learning, but solving such problems is challenging. In this paper, we propose fast proximal gradient descent based methods to solve a class of non-convex and non-smooth sparse learning problems, namely ℓ0 regularization problems. We prove an improved convergence rate of proximal gradient descent on ℓ0 regularization problems, and propose two accelerated versions based on support projection. The proposed accelerated proximal gradient descent methods with support projection have convergence rates that match Nesterov’s optimal convergence rate of first-order methods on smooth and convex objective functions with Lipschitz continuous gradients. Experimental results demonstrate the effectiveness of the proposed algorithms. We also propose feed-forward neural networks as fast encoders to approximate the optimization results generated by the proposed accelerated algorithms.
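The methods described in the abstract build on proximal gradient descent, where the proximal operator of the ℓ0 penalty reduces to hard thresholding. The sketch below illustrates this basic iteration for ℓ0-regularized least squares; the objective, step size, and variable names are illustrative assumptions, and the paper's accelerated variants with support projection are not reproduced here.

```python
import numpy as np

def hard_threshold(z, lam, step):
    # Proximal operator of step * lam * ||x||_0: keep entries whose
    # squared magnitude exceeds 2 * step * lam, zero out the rest.
    out = z.copy()
    out[z ** 2 <= 2 * step * lam] = 0.0
    return out

def proximal_gradient_l0(A, b, lam, step=None, n_iter=200):
    """Proximal gradient descent for min_x 0.5*||Ax - b||^2 + lam*||x||_0.

    Illustrative sketch only, not the paper's exact algorithm.
    """
    if step is None:
        # Step size 1/L, where L is the Lipschitz constant of the
        # gradient of the smooth least-squares term.
        L = np.linalg.norm(A, 2) ** 2
        step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                      # gradient of smooth term
        x = hard_threshold(x - step * grad, lam, step)  # proximal step
    return x

# Example usage on a small synthetic sparse-recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = proximal_gradient_l0(A, b, lam=0.05)
print("nonzero entries recovered:", np.flatnonzero(x_hat))
```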

Original language: English (US)
State: Published - 2019
Event: 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019 - Tel Aviv, Israel
Duration: Jul 22 2019 - Jul 25 2019

Conference

Conference: 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019
Country/Territory: Israel
City: Tel Aviv
Period: 7/22/19 - 7/25/19

ASJC Scopus subject areas

  • Artificial Intelligence
