DAMP: Doubly Aligned Multilingual Parser for Task-Oriented Dialogue

Anonymous

03 Sept 2022 (modified: 05 May 2023), ACL ARR 2022 September Blind Submission
Abstract: Modern virtual assistants are powered by task-oriented dialogue systems with internal semantic parsing engines. In global markets such as India and Latin America, mixed language input from bilingual users is prevalent. Prior work has shown that multilingual transformer-based models exhibit worse multilingual transfer for semantic parsing than for other benchmark tasks. In this work, we improve zero-shot multilingual semantic parsing without harming supervised performance. First, we show that pretraining alignment objectives improve multilingual transfer while also reducing negative transfer to English. We then introduce a constrained optimization method to improve alignment using domain adversarial training. Our Doubly Aligned Multilingual Parser (DAMP) improves mBERT transfer performance by 3x, 6x, and 81x on the Spanish-English Task Oriented Parsing, Hindi-English Task Oriented Parsing, and Multilingual Task Oriented Parsing benchmarks respectively, and outperforms XLM-R and mT5-Large while using 3.2x fewer parameters.
Paper Type: long
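
For readers unfamiliar with the domain adversarial training named in the abstract, the sketch below shows its standard gradient-reversal formulation (Ganin & Lempitsky, 2015) in PyTorch: a discriminator tries to predict the input language from encoder representations, and the reversed gradient pushes the encoder toward language-invariant (aligned) features. This is background only, not the paper's method; the names (`GradReverse`, `LanguageDiscriminator`, `hidden_dim`, `lambd`) are illustrative, and the constrained-optimization variant DAMP proposes is not reproduced here.

```python
import torch
from torch import nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales the gradient in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input: reversed for x, None for lambd.
        return -ctx.lambd * grad_output, None


class LanguageDiscriminator(nn.Module):
    """Hypothetical adversarial head: predicts the input language from pooled
    encoder features. Minimizing its cross-entropy loss trains the classifier,
    while the reversed gradient trains the encoder to make languages
    indistinguishable."""

    def __init__(self, hidden_dim: int, n_langs: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.clf = nn.Linear(hidden_dim, n_langs)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        reversed_feats = GradReverse.apply(pooled, self.lambd)
        return self.clf(reversed_feats)


# Usage sketch: `pooled` is a [batch, hidden_dim] tensor of encoder features
# and `lang_labels` a [batch] tensor of language ids; the adversarial loss is
# nn.functional.cross_entropy(discriminator(pooled), lang_labels), added to
# the parser's task loss during training.
```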