TY - GEN
T1 - Adversarial Text Purification: A Large Language Model Approach for Defense
T2 - 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024
AU - Moraffah, Raha
AU - Khandelwal, Shubh
AU - Bhattacharjee, Amrita
AU - Liu, Huan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Adversarial purification is a defense mechanism for safeguarding classifiers against adversarial attacks without knowledge of the attack type and without retraining the classifier. These techniques characterize and eliminate adversarial perturbations from attacked inputs, aiming to recover purified samples that remain similar to the initially attacked ones and are correctly classified by the classifier. Due to the inherent challenges of characterizing noise perturbations for discrete inputs, adversarial text purification has been relatively unexplored. In this paper, we investigate the effectiveness of adversarial purification methods in defending text classifiers. We propose a novel adversarial text purification method that harnesses the generative capabilities of Large Language Models (LLMs) to purify adversarial text without explicitly characterizing the discrete noise perturbations. We use prompt engineering to guide LLMs to recover purified samples for given adversarial examples such that the purified samples are semantically similar to the attacked ones and are correctly classified. Our proposed method demonstrates strong performance across various classifiers, improving their accuracy under attack by over 65% on average.
AB - Adversarial purification is a defense mechanism for safeguarding classifiers against adversarial attacks without knowledge of the attack type and without retraining the classifier. These techniques characterize and eliminate adversarial perturbations from attacked inputs, aiming to recover purified samples that remain similar to the initially attacked ones and are correctly classified by the classifier. Due to the inherent challenges of characterizing noise perturbations for discrete inputs, adversarial text purification has been relatively unexplored. In this paper, we investigate the effectiveness of adversarial purification methods in defending text classifiers. We propose a novel adversarial text purification method that harnesses the generative capabilities of Large Language Models (LLMs) to purify adversarial text without explicitly characterizing the discrete noise perturbations. We use prompt engineering to guide LLMs to recover purified samples for given adversarial examples such that the purified samples are semantically similar to the attacked ones and are correctly classified. Our proposed method demonstrates strong performance across various classifiers, improving their accuracy under attack by over 65% on average.
KW - Adversarial Purification
KW - Large Language Model
KW - Textual Adversarial Defenses
UR - https://www.scopus.com/pages/publications/85192843369
U2 - 10.1007/978-981-97-2262-4_6
DO - 10.1007/978-981-97-2262-4_6
M3 - Conference contribution
AN - SCOPUS:85192843369
SN - 9789819722648
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 65
EP - 77
BT - Advances in Knowledge Discovery and Data Mining - 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, Proceedings
A2 - Yang, De-Nian
A2 - Xie, Xing
A2 - Tseng, Vincent S.
A2 - Pei, Jian
A2 - Huang, Jen-Wei
A2 - Lin, Jerry Chun-Wei
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 7 May 2024 through 10 May 2024
ER -