Adversarial Learning for Multi-Task Sequence Labeling with Attention Mechanism

Yu Wang, Yun Li, Ziye Zhu, Hanghang Tong, Yue Huang

Research output: Contribution to journal › Article › peer-review



Given the requirements of natural language applications, multi-task sequence labeling methods offer immediate benefits over single-task sequence labeling methods. Many state-of-the-art multi-task sequence labeling methods have been proposed recently, yet several issues remain to be resolved, including (C1) exploring a more general relationship between tasks, (C2) extracting task-shared knowledge purely, and (C3) merging the task-shared knowledge appropriately for each task. To address these challenges, we propose MTAA, a symmetric multi-task sequence labeling model that performs an arbitrary number of tasks simultaneously. MTAA extracts the knowledge shared among tasks via adversarial learning and integrates the proposed multi-representation fusion attention mechanism to merge feature representations. We evaluate MTAA on two widely used data sets: CoNLL2003 and OntoNotes 5.0. Experimental results show that our model outperforms the latest methods on named entity recognition and syntactic chunking by a large margin, and achieves state-of-the-art results on part-of-speech tagging.
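As an illustration of the fusion step described above, the sketch below shows one plausible reading of a multi-representation fusion attention: each token's task-private and task-shared representations are scored with a learned weight vector and combined by a softmax over the two views. This is a minimal assumption-laden sketch for intuition only; the function name `fusion_attention`, the scoring scheme, and all shapes are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fusion_attention(private_rep, shared_rep, w):
    """Blend task-private and task-shared token representations with
    attention weights (illustrative sketch, not the paper's exact method)."""
    # Stack the two views: shape (2, seq_len, hidden)
    reps = np.stack([private_rep, shared_rep])
    # Score each view per token with weight vector w: shape (2, seq_len)
    scores = np.einsum('vsh,h->vs', reps, w)
    # Softmax over the two views (numerically stabilized)
    alpha = np.exp(scores - scores.max(axis=0))
    alpha /= alpha.sum(axis=0, keepdims=True)
    # Convex combination of the views per token: shape (seq_len, hidden)
    return np.einsum('vs,vsh->sh', alpha, reps)

rng = np.random.default_rng(0)
priv = rng.normal(size=(5, 8))   # private representation: 5 tokens, 8 dims
shar = rng.normal(size=(5, 8))   # shared representation from the shared encoder
w = rng.normal(size=8)           # hypothetical learned attention weights
fused = fusion_attention(priv, shar, w)
print(fused.shape)  # (5, 8)
```

Because the softmax weights are non-negative and sum to one per token, each fused vector is an elementwise convex combination of the private and shared vectors, so the merge cannot drift outside the range spanned by the two inputs.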

Original language: English (US)
Article number: 9153102
Pages (from-to): 2476-2488
Number of pages: 13
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
State: Published - 2020


Keywords

  • Adversarial learning
  • attention mechanism
  • multi-task learning
  • sequence labeling

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering


