Editing Event-based Commonsense Knowledge in LLMs

ACL ARR 2024 April Submission834 Authors

16 Apr 2024 (modified: 16 May 2024) · CC BY 4.0
Abstract: Knowledge editing is essential for maintaining the accuracy and currency of information within large language models (LLMs). However, existing methods have predominantly focused on factual triplet knowledge, neglecting the crucial role of commonsense knowledge, particularly in real-world scenarios where events are prevalent. This study addresses this research gap by introducing the Event-based Commonsense Knowledge Editing (ECKE) dataset, which comprises 15,600 directly generated questions, 3,682 multiple-choice questions, and 48,349 true/false questions. Additionally, we investigate the storage and recall formats of commonsense knowledge within LLMs, shedding light on the associated challenges of editing such knowledge. To overcome these challenges, we propose a Dynamics-aware Editing Method (DEM) that effectively edits commonsense knowledge in LLMs. Our experimental results demonstrate a significant 15.7% improvement in commonsense performance while preserving the naturalness of the generated outputs.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 834