Fast Proximal Gradient Descent for A Class of Non-convex and Non-smooth Sparse Learning Problems

Yingzhen Yang, Jiahui Yu

Research output: Contribution to journal › Conference article › peer-review

3 Scopus citations

Abstract

Non-convex and non-smooth optimization problems are important in statistics and machine learning, but solving such problems is often challenging. In this paper, we propose fast proximal gradient descent based methods to solve a class of non-convex and non-smooth sparse learning problems, namely ℓ0 regularization problems. We prove an improved convergence rate of proximal gradient descent on ℓ0 regularization problems, and propose two accelerated versions based on support projection. The proposed accelerated proximal gradient descent methods with support projection achieve convergence rates matching Nesterov's optimal convergence rate for first-order methods on smooth convex objective functions with Lipschitz continuous gradients. Experimental results demonstrate the effectiveness of the proposed algorithms. We also propose feed-forward neural networks as fast encoders to approximate the optimization results generated by the proposed accelerated algorithms.
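To make the problem class concrete, the following is a minimal sketch of plain (unaccelerated) proximal gradient descent for an ℓ0-regularized objective, where the proximal map of the ℓ0 penalty is hard thresholding. This is an illustrative baseline only, not the paper's accelerated support-projection methods; the function names, step-size choice, and least-squares example are assumptions for demonstration.

```python
import numpy as np

def prox_l0(v, thresh):
    """Proximal operator of thresh * ||x||_0: hard thresholding.
    Keeps entries whose squared magnitude exceeds 2 * thresh."""
    out = v.copy()
    out[v ** 2 <= 2.0 * thresh] = 0.0
    return out

def proximal_gradient_l0(grad_f, x0, lam, step, n_iters=100):
    """Plain proximal gradient descent for min_x f(x) + lam * ||x||_0.

    grad_f : callable returning the gradient of the smooth term f
    step   : step size, e.g. 1 / L when grad_f is L-Lipschitz
    """
    x = x0.copy()
    for _ in range(n_iters):
        # Gradient step on the smooth part, then the ℓ0 prox (hard thresholding).
        x = prox_l0(x - step * grad_f(x), step * lam)
    return x

# Hypothetical usage: sparse least squares with f(x) = 0.5 * ||A x - b||^2
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    grad = lambda x: A.T @ (A @ x - b)
    x_hat = proximal_gradient_l0(grad, np.zeros(100), lam=0.1, step=1.0 / L, n_iters=500)
    print("nonzero entries recovered:", np.count_nonzero(x_hat))
```

The accelerated variants described in the abstract add momentum and a support-projection step on top of this basic iteration; those details are in the paper itself.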

Original language: English (US)
Pages (from-to): 1253-1262
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 115
State: Published - 2019
Event: 35th Uncertainty in Artificial Intelligence Conference, UAI 2019 - Tel Aviv, Israel
Duration: Jul 22 2019 - Jul 25 2019

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
