TY - GEN
T1 - iSparse: Output Informed Sparsification of Neural Network
T2 - 10th ACM International Conference on Multimedia Retrieval, ICMR 2020
AU - Garg, Yash
AU - Candan, K. Selçuk
N1 - Funding Information:
This work is partially funded by NSF #1909555 (p-Causal), #1827757 (BDMC), #1629888 (GEARS), #1633381 (Complex Systems), and #1610282 (DataStorm).
Publisher Copyright:
© 2020 ACM.
PY - 2020/6/8
Y1 - 2020/6/8
N2 - Deep neural networks have demonstrated unprecedented success in various multimedia applications. However, the networks created are often very complex, with large numbers of trainable edges that require extensive computational resources. We note that many successful networks nevertheless contain large numbers of redundant edges, and many of these edges may have negligible contributions to the overall network performance. In this paper, we propose a novel iSparse framework and experimentally show that we can sparsify the network without impacting its performance. iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output. Furthermore, iSparse can be applied both while training a model and on top of a pre-trained model, making it a retraining-free approach with minimal computational overhead. Comparisons of iSparse against Dropout, L1, DropConnect, Retraining-Free, and Lottery-Ticket Hypothesis on benchmark datasets show that iSparse leads to effective network sparsification.
AB - Deep neural networks have demonstrated unprecedented success in various multimedia applications. However, the networks created are often very complex, with large numbers of trainable edges that require extensive computational resources. We note that many successful networks nevertheless contain large numbers of redundant edges, and many of these edges may have negligible contributions to the overall network performance. In this paper, we propose a novel iSparse framework and experimentally show that we can sparsify the network without impacting its performance. iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output. Furthermore, iSparse can be applied both while training a model and on top of a pre-trained model, making it a retraining-free approach with minimal computational overhead. Comparisons of iSparse against Dropout, L1, DropConnect, Retraining-Free, and Lottery-Ticket Hypothesis on benchmark datasets show that iSparse leads to effective network sparsification.
KW - DropConnect
KW - Dropout
KW - Neural network
KW - Pruning
KW - Sparsification
UR - http://www.scopus.com/inward/record.url?scp=85086895826&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85086895826&partnerID=8YFLogxK
U2 - 10.1145/3372278.3390688
DO - 10.1145/3372278.3390688
M3 - Conference contribution
AN - SCOPUS:85086895826
T3 - ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval
SP - 180
EP - 188
BT - ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval
PB - Association for Computing Machinery, Inc
Y2 - 8 June 2020 through 11 June 2020
ER -
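
Note: The abstract describes output-informed, retraining-free edge pruning but does not define the significance score E. The following is a minimal Python sketch of that style of sparsification, assuming a stand-in score E(edge) = |weight| * mean |activation| of the edge's source neuron as a proxy for impact on the final output; the function names, the score definition, and the keep_ratio parameter are illustrative assumptions, not the paper's exact method.

import numpy as np

def edge_significance(weights, activations):
    # Assumed proxy for E: weight magnitude scaled by the mean absolute
    # activation of the source neuron feeding each edge.
    mean_act = np.abs(activations).mean(axis=0)   # shape (n_in,)
    return np.abs(weights) * mean_act[:, None]    # shape (n_in, n_out)

def sparsify(weights, activations, keep_ratio=0.5):
    # Zero out the least significant edges; no retraining is performed,
    # mirroring the retraining-free property claimed for iSparse.
    scores = edge_significance(weights, activations)
    threshold = np.quantile(scores, 1.0 - keep_ratio)
    mask = scores >= threshold
    return weights * mask, mask

# Usage: prune a pre-trained 4x3 layer to roughly half of its edges.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
X = rng.normal(size=(100, 4))   # activations feeding the layer
W_sparse, mask = sparsify(W, X, keep_ratio=0.5)
print(f"kept {mask.mean():.0%} of edges")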