TY - GEN
T1 - Causality Guided Disentanglement for Cross-Platform Hate Speech Detection
AU - Sheth, Paras
AU - Moraffah, Raha
AU - Kumarage, Tharindu S.
AU - Chadha, Aman
AU - Liu, Huan
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/3/4
Y1 - 2024/3/4
N2 - Despite their value in promoting open discourse, social media platforms are often exploited to spread harmful content. Current deep learning and natural language processing models used for detecting this harmful content rely on domain-specific terms, affecting their ability to adapt to generalizable hate speech detection. This is because they tend to focus too narrowly on particular linguistic signals or the use of certain categories of words. Another significant challenge arises when platforms lack high-quality annotated data for training, leading to a need for cross-platform models that can adapt to different distribution shifts. Our research introduces a cross-platform hate speech detection model capable of being trained on one platform's data and generalizing to multiple unseen platforms. One way to achieve good generalizability across platforms is to disentangle the input representations into invariant and platform-dependent features. We also argue that learning causal relationships, which remain constant across diverse environments, can significantly aid in understanding invariant representations in hate speech. By disentangling input into platform-dependent features (useful for predicting hate targets) and platform-independent features (used to predict the presence of hate), we learn invariant representations resistant to distribution shifts. These features are then used to predict hate speech across unseen platforms.
AB - Despite their value in promoting open discourse, social media platforms are often exploited to spread harmful content. Current deep learning and natural language processing models used for detecting this harmful content rely on domain-specific terms, affecting their ability to adapt to generalizable hate speech detection. This is because they tend to focus too narrowly on particular linguistic signals or the use of certain categories of words. Another significant challenge arises when platforms lack high-quality annotated data for training, leading to a need for cross-platform models that can adapt to different distribution shifts. Our research introduces a cross-platform hate speech detection model capable of being trained on one platform's data and generalizing to multiple unseen platforms. One way to achieve good generalizability across platforms is to disentangle the input representations into invariant and platform-dependent features. We also argue that learning causal relationships, which remain constant across diverse environments, can significantly aid in understanding invariant representations in hate speech. By disentangling input into platform-dependent features (useful for predicting hate targets) and platform-independent features (used to predict the presence of hate), we learn invariant representations resistant to distribution shifts. These features are then used to predict hate speech across unseen platforms.
KW - causal representation learning
KW - domain generalization
KW - hate speech detection
UR - http://www.scopus.com/inward/record.url?scp=85189049435&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85189049435&partnerID=8YFLogxK
U2 - 10.1145/3616855.3635771
DO - 10.1145/3616855.3635771
M3 - Conference contribution
AN - SCOPUS:85189049435
T3 - WSDM 2024 - Proceedings of the 17th ACM International Conference on Web Search and Data Mining
SP - 626
EP - 635
BT - WSDM 2024 - Proceedings of the 17th ACM International Conference on Web Search and Data Mining
PB - Association for Computing Machinery, Inc
T2 - 17th ACM International Conference on Web Search and Data Mining, WSDM 2024
Y2 - 4 March 2024 through 8 March 2024
ER -