Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models

Published: 20 Dec 2019, Last Modified: 03 Apr 2024 · ICLR 2020 Conference Blind Submission
TL;DR: We propose a measure of phrase importance and algorithms for hierarchical explanation of neural sequence model predictions.
Abstract: The impressive performance of neural networks on natural language processing tasks is attributable to their ability to model complicated word and phrase compositions. To explain how a model handles semantic composition, we study hierarchical explanation of neural network predictions. We identify non-additivity and context-independent importance attribution within hierarchies as two desirable properties for highlighting word and phrase compositions. We show that some prior efforts on hierarchical explanation, e.g., contextual decomposition, do not satisfy these properties mathematically, leading to inconsistent explanation quality across models. In this paper, we first propose a formal and general way to quantify the importance of each word and phrase. Following this formulation, we propose the Sampling and Contextual Decomposition (SCD) and Sampling and Occlusion (SOC) algorithms. Human and quantitative evaluations of both LSTM and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms. Our algorithms help visualize the semantic composition captured by models, extract classification rules, and improve human trust in models.
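The abstract only names the Sampling and Occlusion (SOC) algorithm, so the snippet below is a minimal sketch of the sampling-and-occlusion idea rather than the paper's implementation: marginalize over a phrase's context by resampling nearby words, then average the drop in the model's prediction score when the phrase is masked with padding. The `score` and `sample_context` callables, the parameter names, and the defaults are illustrative assumptions; in the paper, context words are sampled from a trained language model, which is abstracted here into an injected function.

```python
from typing import Callable, List

def soc_importance(
    score: Callable[[List[str]], float],                  # model score, e.g. a class logit
    sample_context: Callable[[List[str], int, int], List[str]],  # resamples tokens[left:right]
    tokens: List[str],
    start: int,                                           # phrase span is tokens[start:end]
    end: int,
    context_width: int = 10,
    n_samples: int = 20,
    pad: str = "<pad>",
) -> float:
    """Estimate phrase importance as the average prediction difference
    between the original and the phrase-occluded input, taken over
    resampled surrounding contexts."""
    left = max(0, start - context_width)
    right = min(len(tokens), end + context_width)
    diffs = []
    for _ in range(n_samples):
        # Resample the words adjacent to the phrase to marginalize out
        # the particular context the phrase happens to appear in.
        ctx = sample_context(tokens, left, right)
        # Occlude the phrase with padding tokens.
        occluded = ctx[:start] + [pad] * (end - start) + ctx[end:]
        diffs.append(score(ctx) - score(occluded))
    return sum(diffs) / len(diffs)


# Toy usage with a bag-of-words "sentiment" scorer and an identity
# context sampler standing in for a language-model sampler.
weights = {"good": 2.0, "bad": -2.0, "not": -0.5}
score = lambda toks: sum(weights.get(t, 0.0) for t in toks)
sample_context = lambda toks, left, right: list(toks)
print(soc_importance(score, sample_context, "the movie was good".split(), 3, 4))
```

With the identity sampler the estimate is deterministic; with a real language-model sampler, the average over `n_samples` draws produces the context-independent attribution the abstract argues for.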
Keywords: natural language processing, interpretability
Data: [SST](https://paperswithcode.com/dataset/sst), [SST-2](https://paperswithcode.com/dataset/sst-2)
Code: [2 community implementations](https://paperswithcode.com/paper/?openreview=BkxRRkSKwr)