Planning is a classic problem in Artificial Intelligence (AI). Recently, many researchers have recognised and voiced the need for "explainable AI". Leveraging the strength of argumentation, in particular the Related Admissible semantics for generating explanations, this work takes an initial step towards "explainable planning". We illustrate (1) how plan generation can be cast as the construction of acceptable arguments and (2) how explanations for both planning solutions and invalid plans can be obtained by extracting information from the arguing process. We present an argumentation-based model that takes plans written in a STRIPS-like language as input and returns Assumption-based Argumentation (ABA) frameworks as output. The plan-construction mapping is both sound and complete: the planning problem has a solution if and only if its corresponding ABA framework has a set of Related Admissible arguments with the planning goal as its topic. We use the classic Tower of Hanoi puzzle as a case study and demonstrate how ABA can be used to solve this planning puzzle while giving explanations.
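The abstract's ABA encoding and Related Admissible semantics are not reproduced here; as a minimal illustrative sketch only, the underlying STRIPS-style planning problem for the Tower of Hanoi case study can be posed as a state-space search, where each state records the disk stacks on the three pegs and a plan is a sequence of legal moves (all function names below are hypothetical, not from the paper):

```python
from collections import deque

def successors(state):
    """Yield (move, next_state) pairs. A state is a tuple of 3 tuples,
    each a stack of disks with the top disk last; a move (i, j) shifts
    the top disk of peg i onto peg j if it is smaller than peg j's top."""
    for i, src in enumerate(state):
        if not src:
            continue
        disk = src[-1]
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[-1] > disk):
                pegs = list(state)
                pegs[i] = src[:-1]
                pegs[j] = dst + (disk,)
                yield (i, j), tuple(pegs)

def solve(n):
    """Breadth-first search from all n disks on peg 0 to all on peg 2;
    BFS guarantees a shortest plan."""
    start = (tuple(range(n, 0, -1)), (), ())
    goal = ((), (), tuple(range(n, 0, -1)))
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [move]))

print(len(solve(3)))  # the shortest 3-disk plan has 7 moves
```

This search-based sketch only generates plans; the paper's contribution is to map such a problem into an ABA framework so that the arguing process itself yields explanations for why a plan succeeds or fails.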