Deakin University

Domain-adversarial graph neural networks for text classification

Conference contribution
Posted on 2019-01-01, authored by Man Wu, Shirui Pan, Xingquan Zhu, Chuan Zhou, Lei Pan
Text classification in a cross-domain setting is a challenging task. On the one hand, data from other domains are often useful for improving learning on the target domain; on the other hand, domain variance and the hierarchical structure of documents (words, key phrases, sentences, paragraphs, etc.) make it difficult to align domains for effective learning. To date, existing cross-domain text classification methods mainly strive to minimize feature distribution differences between domains, and they typically suffer from three major limitations: (1) because they treat texts as word sequences, they struggle to capture the semantics of non-consecutive phrases and long-distance word dependencies; (2) they neglect the hierarchical, coarse-grained structure of documents for feature learning; and (3) they focus narrowly on domains at the instance level, without using domains as supervision to improve text classification. This paper proposes an end-to-end domain-adversarial graph neural network (DAGNN) for cross-domain text classification. Our motivation is to model documents as graphs and use a domain-adversarial training principle to learn features from each graph (as well as learning the separation of domains) for effective text classification. At the instance level, DAGNN uses a graph to model each document, so that it can capture non-consecutive and long-distance semantics. At the feature level, DAGNN uses graphs from different domains to jointly train hierarchical graph neural networks in order to learn good features. At the learning level, DAGNN applies a domain-adversarial principle so that the learned features not only classify documents accurately but also separate domains. Experiments on benchmark datasets demonstrate the effectiveness of our method on cross-domain classification tasks.
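
The abstract describes a domain-adversarial setup in which a shared graph encoder feeds both a document classifier and a domain discriminator, so that the learned features serve classification while becoming hard to separate by domain. The sketch below (PyTorch) illustrates the general gradient-reversal pattern such training relies on; the one-hop mean-pooling encoder, layer sizes, and loss weighting are illustrative assumptions, not the authors' DAGNN implementation.

# Minimal, self-contained sketch of domain-adversarial training over a document
# graph: identity forward pass, reversed gradients from the domain head.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GraphEncoder(nn.Module):
    """One round of mean-neighbour message passing over a document graph,
    then mean pooling into a single document embedding (a simple stand-in
    for the hierarchical graph neural network described in the abstract)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim) word-node features; adj: (num_nodes, num_nodes)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.lin(adj @ x / deg))  # aggregate neighbour features
        return h.mean(dim=0)                     # pool nodes to a graph embedding


class DomainAdversarialClassifier(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes, num_domains):
        super().__init__()
        self.encoder = GraphEncoder(in_dim, hid_dim)
        self.label_head = nn.Linear(hid_dim, num_classes)
        self.domain_head = nn.Linear(hid_dim, num_domains)

    def forward(self, x, adj, lambd=1.0):
        z = self.encoder(x, adj)
        y_logits = self.label_head(z)                              # document class
        d_logits = self.domain_head(GradReverse.apply(z, lambd))   # domain label
        return y_logits, d_logits


if __name__ == "__main__":
    # Toy example: one document graph with 5 word nodes and 16-dim features.
    x = torch.randn(5, 16)
    adj = (torch.rand(5, 5) > 0.5).float()
    model = DomainAdversarialClassifier(16, 32, num_classes=2, num_domains=2)
    y_logits, d_logits = model(x, adj, lambd=0.5)
    loss = nn.functional.cross_entropy(y_logits.unsqueeze(0), torch.tensor([1])) \
         + nn.functional.cross_entropy(d_logits.unsqueeze(0), torch.tensor([0]))
    loss.backward()  # the domain loss gradient reaches the encoder with its sign reversed

In this pattern the encoder is pushed to classify documents correctly while the reversed gradient from the domain head discourages it from encoding domain-specific cues, which is the adversarial trade-off the abstract refers to at the learning level.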

History

Pagination

648-657

Location

Beijing, China

Start date

2019-11-08

End date

2019-11-11

ISBN-13

9781728146041

Language

eng

Publication classification

E1 Full written paper - refereed

Title of proceedings

ICDM 2019 : Proceedings of the IEEE International Conference on Data Mining

Event

Data Mining. Conference (2019 : Beijing, China)

Publisher

IEEE

Place of publication

Piscataway, N.J.
