TY - GEN
T1 - Non-monotonic feature selection
AU - Xu, Zenglin
AU - Jin, Rong
AU - Ye, Jieping
AU - Lyu, Michael R.
AU - King, Irwin
PY - 2009/12/9
Y1 - 2009/12/9
N2 - We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger than m. We refer to this property as the "monotonic" property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution of MKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.
UR - http://www.scopus.com/inward/record.url?scp=71149100436&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=71149100436&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:71149100436
SN - 9781605585161
T3 - Proceedings of the 26th International Conference on Machine Learning, ICML 2009
SP - 1145
EP - 1152
BT - Proceedings of the 26th International Conference on Machine Learning, ICML 2009
T2 - 26th International Conference on Machine Learning, ICML 2009
Y2 - 14 June 2009 through 18 June 2009
ER -