
On the Complexity of Error Explanation

Abstract

When a system fails to satisfy its specification, the model checker produces an error trace (or counter-example) that demonstrates an undesirable behavior, which is then used in debugging the system. Error explanation is the task of discovering errors in the system, or the reasons why the system exhibits the error trace. While there has been considerable recent interest in automating this task and in developing tools based on different heuristics, there has been very little effort in characterizing the computational complexity of the problem of error explanation. In this paper, we study the complexity of two popular heuristics used in error explanation. The first approach tries to compute the smallest number of system changes that need to be made in order to ensure that the given counter-example is no longer exhibited, the intuition being that these changes are the errors that need fixing. The second approach relies on the observation that differences between correct and faulty runs of a system shed considerable light on the sources of errors. In this approach, one tries to compute the correct trace of the system that is closest to the counter-example. We consider three commonly used abstractions to model programs and systems, namely, finite state Mealy machines, extended finite state machines, and pushdown automata. We show that the first approach of trying to find the fewest program changes is NP-complete no matter which of the three formal models is used to represent the system. Moreover, we show that no polynomial-factor approximation algorithm for computing the smallest set of changes is possible, unless P = NP. For the second approach, we present a polynomial time algorithm that finds the closest correct trace when the program is represented by a Mealy machine or a pushdown automaton. When the program is represented by an extended finite state machine, the problem is once again NP-complete, and no polynomial-factor approximation algorithm is likely to exist.
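For finite-state models, the second heuristic (finding the accepted trace closest to the counter-example) can be solved in polynomial time by a shortest-path search over pairs of (counter-example position, automaton state), with edit operations as weighted edges. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm; the automaton encoding (a transition dictionary `delta`, a start state, and an accepting set) is a hypothetical toy representation chosen for brevity.

```python
import heapq

def closest_correct_trace(counterexample, alphabet, delta, start, accepting):
    """Return a trace accepted by the automaton whose edit distance to
    `counterexample` is minimal, together with that distance.

    Dijkstra's algorithm over pairs (i, q), where i counts how much of
    the counter-example has been consumed and q is the current state.
    Matching a letter costs 0; deleting, inserting, or substituting
    a letter costs 1 (standard edit-distance weights).
    """
    n = len(counterexample)
    heap = [(0, 0, start, ())]          # (cost, i, state, trace built so far)
    settled = set()
    while heap:
        cost, i, q, trace = heapq.heappop(heap)
        if (i, q) in settled:
            continue
        settled.add((i, q))
        if i == n and q in accepting:   # whole counter-example aligned and
            return list(trace), cost    # the trace ends in an accepting state
        if i < n:                       # delete counterexample[i]
            heapq.heappush(heap, (cost + 1, i + 1, q, trace))
        for a in alphabet:
            q2 = delta.get((q, a))
            if q2 is None:
                continue
            if i < n and a == counterexample[i]:
                # letters agree: advance both sides for free
                heapq.heappush(heap, (cost, i + 1, q2, trace + (a,)))
            else:
                # insert a into the trace (counter-example stays put)
                heapq.heappush(heap, (cost + 1, i, q2, trace + (a,)))
                if i < n:
                    # substitute a for counterexample[i]
                    heapq.heappush(heap, (cost + 1, i + 1, q2, trace + (a,)))
    return None, float("inf")           # no accepted trace is reachable

# Toy spec: correct traces over {a, b} must end in 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
trace, cost = closest_correct_trace(['a', 'a'], ['a', 'b'], delta, 0, {1})
print(trace, cost)   # a correct trace one edit away from ['a', 'a']
```

Since the product graph has O(n · |Q|) vertices and constant-degree edit edges per alphabet letter, the search runs in polynomial time, which matches the abstract's claim for Mealy machines; the pushdown case requires the more involved machinery developed in the paper.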
