File(s) under permanent embargo

On robustness of neural semantic parsers

conference contribution
posted on 2021-04-01, 00:00 authored by S Huang, Z Li, L Qu, Lei Pan
Semantic parsing maps natural language (NL) utterances into logical forms (LFs), which underpins many advanced NLP problems. Semantic parsers gain performance boosts from deep neural networks, but also inherit their vulnerability to adversarial examples. In this paper, we provide the first empirical study on the robustness of semantic parsers in the presence of adversarial attacks. Formally, adversaries for semantic parsing are perturbed utterance-LF pairs whose utterances have exactly the same meanings as the original ones. We propose a scalable methodology for constructing robustness test sets from existing benchmark corpora. Our results answer five research questions by measuring state-of-the-art parsers' performance on the robustness test sets and evaluating the effect of data augmentation.
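As a hedged illustration of the idea described in the abstract (not the authors' actual procedure), a robustness test set could be assembled by pairing each original logical form with meaning-preserving variants of its utterance. The toy synonym table below is a placeholder assumption standing in for a real paraphrase source:

```python
# Hypothetical sketch: build a robustness test set by pairing each
# original logical form (LF) with meaning-preserving utterance variants.
# SYNONYMS is a toy stand-in for a real paraphrase-generation step.

SYNONYMS = {
    "show": ["list", "display"],
    "flights": ["trips"],
}

def perturb(utterance):
    """Yield variants of the utterance with one word swapped for a synonym."""
    words = utterance.split()
    for i, word in enumerate(words):
        for alt in SYNONYMS.get(word, []):
            yield " ".join(words[:i] + [alt] + words[i + 1:])

def robustness_test_set(pairs):
    """Each (utterance, LF) pair spawns perturbed pairs sharing the same LF."""
    return [(p, lf) for utt, lf in pairs for p in perturb(utt)]

pairs = [("show flights to boston", "lambda x flight(x) & to(x, boston)")]
adv = robustness_test_set(pairs)
# Every adversarial utterance keeps the original LF as its gold label,
# so a parser is "robust" if it still predicts that LF.
```

Because the perturbed utterances preserve meaning, a parser's accuracy drop on such a set directly measures its sensitivity to surface-form changes.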

History

Event

European Chapter of the Association for Computational Linguistics. Conference (16th : 2021 : Online)

Series

Association for Computational Linguistics Conference

Pagination

3333-3342

Publisher

Association for Computational Linguistics

Location

Online

Place of publication

Stroudsburg, Pa.

Start date

2021-04-19

End date

2021-04-23

ISBN-13

9781954085022

Language

eng

Publication classification

E1 Full written paper - refereed

Editor/Contributor(s)

P Merlo, J Tiedemann, R Tsarfaty

Title of proceedings

EACL 2021 : Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics