TY - GEN
T1 - MISSION: Ultra Large-Scale Feature Selection using Count-Sketches
T2 - 35th International Conference on Machine Learning, ICML 2018
AU - Aghazadeh, Amirali
AU - Spring, Ryan
AU - Lejeune, Daniel
AU - Dasarathy, Gautam
AU - Shrivastava, Anshumali
AU - Baraniuk, Richard G.
PY - 2018/1/1
Y1 - 2018/1/1
N2 - Feature selection is an important challenge in machine learning. It plays a crucial role in the explainability of machine-driven decisions that are rapidly permeating throughout modern society. Unfortunately, the explosion in the size and dimensionality of real-world datasets poses a severe challenge to standard feature selection algorithms. Today, it is not uncommon for datasets to have billions of dimensions. At such scale, even storing the feature vector is impossible, causing most existing feature selection methods to fail. Workarounds like feature hashing, a standard approach to large-scale machine learning, help with the computational feasibility, but at the cost of losing the interpretability of features. In this paper, we present MISSION, a novel framework for ultra large-scale feature selection that performs stochastic gradient descent while maintaining an efficient representation of the features in memory using a Count-Sketch data structure. MISSION retains the simplicity of feature hashing without sacrificing the interpretability of the features while using only O(log^2 p) working memory. We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
AB - Feature selection is an important challenge in machine learning. It plays a crucial role in the explainability of machine-driven decisions that are rapidly permeating throughout modern society. Unfortunately, the explosion in the size and dimensionality of real-world datasets poses a severe challenge to standard feature selection algorithms. Today, it is not uncommon for datasets to have billions of dimensions. At such scale, even storing the feature vector is impossible, causing most existing feature selection methods to fail. Workarounds like feature hashing, a standard approach to large-scale machine learning, help with the computational feasibility, but at the cost of losing the interpretability of features. In this paper, we present MISSION, a novel framework for ultra large-scale feature selection that performs stochastic gradient descent while maintaining an efficient representation of the features in memory using a Count-Sketch data structure. MISSION retains the simplicity of feature hashing without sacrificing the interpretability of the features while using only O(log^2 p) working memory. We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
UR - http://www.scopus.com/inward/record.url?scp=85057251294&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057251294&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85057251294
T3 - 35th International Conference on Machine Learning, ICML 2018
SP - 143
EP - 154
BT - 35th International Conference on Machine Learning, ICML 2018
A2 - Krause, Andreas
A2 - Dy, Jennifer
PB - International Machine Learning Society (IMLS)
Y2 - 10 July 2018 through 15 July 2018
ER -
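
Note: The abstract above describes MISSION's core mechanism: stochastic gradient descent whose weight vector lives in a Count-Sketch instead of dense memory, so feature IDs stay queryable (interpretable) while storage stays sublinear. The code below is a minimal, self-contained sketch of that idea, not the authors' implementation; the names (CountSketch, mission_sgd_step), the logistic loss, and all parameters (d, w, lr) are illustrative assumptions, and the full method additionally maintains a top-k heap of heavy hitters for the final feature selection.

import numpy as np

class CountSketch:
    # Minimal Count-Sketch: d independent hash rows of width w.
    # Parameters d and w are illustrative, not taken from the paper.
    def __init__(self, d=5, w=4096, seed=0):
        rng = np.random.default_rng(seed)
        self.d, self.w = d, w
        self.table = np.zeros((d, w))
        # Per-row salts for the bucket and sign hash functions.
        self.bucket_salt = rng.integers(1, 2**31 - 1, size=d)
        self.sign_salt = rng.integers(1, 2**31 - 1, size=d)

    def _bucket(self, i, j):
        return hash((int(i), int(self.bucket_salt[j]))) % self.w

    def _sign(self, i, j):
        return 1 if hash((int(self.sign_salt[j]), int(i))) & 1 else -1

    def update(self, i, delta):
        # Add a signed increment for feature i to every row.
        for j in range(self.d):
            self.table[j, self._bucket(i, j)] += self._sign(i, j) * delta

    def query(self, i):
        # Median-of-rows estimate of feature i's accumulated weight.
        return float(np.median([self._sign(i, j) * self.table[j, self._bucket(i, j)]
                                for j in range(self.d)]))

def mission_sgd_step(sketch, idx, val, y, lr=0.1):
    # One SGD step of sparse logistic regression with sketched weights
    # (logistic loss is an assumed choice for this illustration).
    z = sum(sketch.query(i) * v for i, v in zip(idx, val))
    p = 1.0 / (1.0 + np.exp(-z))
    g = p - y                       # dLoss/dz for logistic loss
    for i, v in zip(idx, val):
        sketch.update(i, -lr * g * v)

# Toy usage: feature indices can be huge because no dense vector is stored.
sk = CountSketch()
stream = [([10**12, 7, 42], [1.0, 0.5, 1.0], 1),
          ([10**12, 99], [1.0, 1.0], 0)]
for idx, val, y in stream:
    mission_sgd_step(sk, idx, val, y)
print(sk.query(10**12))  # estimated weight of feature 10**12

Because the sketch can be queried by original feature index, the heaviest estimated weights identify the selected features directly, which is what distinguishes this scheme from plain feature hashing.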