Abstract
Non-convex and non-smooth optimization problems are important in statistics and machine learning, but solving them is challenging. In this paper, we propose fast methods based on proximal gradient descent for a class of non-convex and non-smooth sparse learning problems, namely ℓ0 regularization problems. We prove an improved convergence rate of proximal gradient descent on ℓ0 regularization problems, and propose two accelerated versions based on support projection. The proposed accelerated proximal gradient descent methods with support projection achieve convergence rates that match Nesterov's optimal convergence rate for first-order methods on smooth convex objective functions with Lipschitz continuous gradients. Experimental results demonstrate the effectiveness of the proposed algorithms. We also propose feed-forward neural networks as fast encoders that approximate the optimization results generated by the proposed accelerated algorithms.
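To make the setting concrete, below is a minimal sketch of plain (unaccelerated) proximal gradient descent for an ℓ0-regularized least-squares objective; it relies on the standard fact that the proximal operator of λ‖x‖₀ is hard thresholding. This is an illustrative baseline only, not the paper's accelerated support-projection method; the function names, step-size choice, and synthetic data are our own assumptions.

```python
import numpy as np

def prox_l0(x, t):
    """Proximal operator of t * ||x||_0 (hard thresholding).
    Componentwise: keep x_i if x_i**2 > 2*t, otherwise set it to zero."""
    out = x.copy()
    out[x ** 2 <= 2.0 * t] = 0.0
    return out

def proximal_gradient_l0(A, b, lam, step, n_iter=500):
    """Proximal gradient descent for min_x 0.5*||Ax - b||^2 + lam*||x||_0.
    `step` should be at most 1/L, where L = ||A||_2^2 is the Lipschitz
    constant of the gradient of the smooth part."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # gradient of smooth part
        x = prox_l0(x - step * grad, step * lam)   # hard-thresholding step
    return x

# Hypothetical usage on synthetic sparse-recovery data.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2                      # spectral norm squared
x_hat = proximal_gradient_l0(A, b, lam=0.1, step=1.0 / L)
```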
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1253-1262 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 115 |
| State | Published - 2019 |
| Event | 35th Uncertainty in Artificial Intelligence Conference, UAI 2019, Tel Aviv, Israel (Jul 22 2019 → Jul 25 2019) |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability