Towards Semantic Interpretation and Validation of Graph Attention-based Explanations

Published: 09 May 2023, Last Modified: 07 Jun 2023, ICRA 2023 XRo Oral
Keywords: Attention, eXplainable AI, graph neural networks, pose estimation
TL;DR: Using semantic attention to explain the performance of a GNN-based pose estimation model.
Abstract: In this work, we investigate the use of semantic attention to explain the performance of a Graph Neural Network (GNN)-based pose estimation model. To validate our approach, we apply semantically-informed perturbations to the input data and correlate the predicted feature importance weights with the accuracy of the model. Graph Deep Learning (GDL) is an emerging field of machine learning for tasks such as scene interpretation, as it exploits flexible graph structures to describe complex features and relationships in a concise format. However, due to the unconventional structure of graphs, traditional explainability methods used in eXplainable AI (XAI) require further adaptation, and graph-specific methods have therefore been introduced. Attention is a powerful tool for estimating the importance of input features in deep learning models, and it has previously been used to provide feature-based explanations of GNN predictions. In our proposed work, we exploit graph attention to identify key semantic classes for LiDAR point cloud pose estimation. We extend current attention-based graph explainability methods by investigating the use of attention weights as importance indicators of semantically sorted feature sets, analysing the correlation between the attention weight distribution and model accuracy. Our method shows promising results for post-hoc semantic explanation of graph-based pose estimation.
