TY - GEN
T1 - PLEASE
T2 - 26th European Conference on Artificial Intelligence, ECAI 2023
AU - Vasileiou, Stylianos Loukas
AU - Yeoh, William
N1 - Publisher Copyright:
© 2023 The Authors.
PY - 2023/9/28
Y1 - 2023/9/28
AB - Model Reconciliation Problems (MRPs) and their variant, Logic-based MRPs (L-MRPs), have emerged as popular methods for explainable planning problems. Both MRP and L-MRP approaches assume that the explaining agent has access to an assumed model of the human user receiving the explanation; the agent reconciles its own model with the human model to find the differences such that, when these differences are provided to the human as explanations, the human will understand them. However, in practical applications, the agent is likely to be fairly uncertain about the actual model of the human, and wrong assumptions can lead to incoherent or unintelligible explanations. In this paper, we propose a less stringent requirement: The agent has access to a task-specific vocabulary known by the human and, if available, a human model capturing confidently-known information. Our goal is to find a personalized explanation, that is, an explanation at an appropriate abstraction level with respect to the human's vocabulary and model. Using a logic-based method called knowledge forgetting to generate abstractions, we propose a simple framework compatible with L-MRP approaches, and we evaluate its efficacy through computational and human user experiments.
UR - https://www.scopus.com/pages/publications/85175859762
U2 - 10.3233/FAIA230543
DO - 10.3233/FAIA230543
M3 - Conference contribution
AN - SCOPUS:85175859762
T3 - Frontiers in Artificial Intelligence and Applications
SP - 2411
EP - 2418
BT - ECAI 2023 - 26th European Conference on Artificial Intelligence, including 12th Conference on Prestigious Applications of Intelligent Systems, PAIS 2023 - Proceedings
A2 - Gal, Kobi
A2 - Nowe, Ann
A2 - Nalepa, Grzegorz J.
A2 - Fairstein, Roy
A2 - Radulescu, Roxana
PB - IOS Press BV
Y2 - 30 September 2023 through 4 October 2023
ER -