Robustness of Explainable Artificial Intelligence in Industrial Process Modelling

ICML 2024 Workshop ML4LMS, Submission 3

29 Apr 2024 (modified: 30 Apr 2024) · CC BY 4.0
Keywords: eXplainable Artificial Intelligence, Robustness, Noise Analysis, Machine Learning, Industrial Process Modelling, Evaluation Methodology
Abstract: This paper evaluates current eXplainable Artificial Intelligence (XAI) methods using simulations and sensitivity analysis of Electric Arc Furnaces (EAFs) to better understand the limits and robustness characteristics of methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Averaged Local Effects (ALE), and Smooth Gradients (SG). These methods were applied to various types of black-box models and then evaluated using a novel scoring methodology over a range of simulated noise levels and data settings. We also computed a lower bound for the scores on random data, establishing the theoretical lower performance bound of the scoring method. The resulting evaluation showed that a model's capability to capture the process accurately is indeed coupled with the correctness of its explanations of the underlying data-generating process. We furthermore showed that the choice of explanation method influences the correctness of the predicted effects, furthering the understanding of the processes at hand.
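The abstract's core idea of scoring explanation correctness against a known data-generating process under increasing simulated noise can be illustrated with a minimal sketch. This is not the paper's actual scoring methodology or EAF simulation; the linear ground truth, the use of random-forest feature importances as the attribution method, and all parameter values are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical setup, not the paper's method): measure how
# well a black-box model's feature attributions recover a known data-generating
# process as simulated noise increases.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 500, 5
# Known ground-truth effect strengths of the simulated process (assumed).
true_weights = np.array([4.0, 2.0, 1.0, 0.0, 0.0])

def attribution_score(noise_std):
    """Rank correlation between model-derived importances and the true effects."""
    X = rng.normal(size=(n, d))
    y = X @ true_weights + rng.normal(scale=noise_std, size=n)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    rho, _ = spearmanr(model.feature_importances_, true_weights)
    return rho

# Higher noise should degrade how faithfully the attributions reflect the process.
for noise in (0.1, 1.0, 5.0):
    print(f"noise_std={noise}: rank correlation = {attribution_score(noise):.2f}")
```

In the paper's setting, the attribution method would instead be SHAP, LIME, ALE, or SG applied to the fitted black-box model, and the score would be evaluated over the EAF simulation rather than a synthetic linear process.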
Submission Number: 3