Behaviourally Adequate Software Testing


Abstract

Identifying a finite test set that adequately captures the essential behaviour of a program, such that all faults are identified, is a well-established problem. Traditional adequacy metrics can be impractical, and may be misleading even when they are satisfied. One intuitive notion of adequacy, discussed in theoretical terms over the past three decades, is behavioural coverage: if an accurate model of a system can be inferred from its test executions, then the test set must be adequate. Despite its intuitive basis, the idea has remained almost entirely in the theoretical domain, because inferred models have been expected to be exact (generally an infeasible task) and because it has offered no pragmatic interim measure of adequacy to guide test set generation. In this work we present a new test generation technique founded on behavioural adequacy, which combines a model evaluation framework from the domain of statistical learning theory with search-based white-box test generation strategies. Experiments with our BESTEST prototype indicate that the resulting test sets not only come with a statistically valid measurement of adequacy, but also detect significantly more defects.
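The core intuition of behavioural adequacy can be illustrated with a small, self-contained sketch. The following Python example (an illustration only, not the BESTEST implementation; the program under test, the naive threshold-learning inference, and the sampling parameters are all hypothetical) infers a model from a test set's (input, output) pairs and then estimates adequacy as the model's agreement with the program on fresh random inputs, in the spirit of the statistical evaluation the abstract describes:

```python
import random

def program_under_test(x):
    # Hypothetical system under test: a branch-heavy integer function.
    if x < 0:
        return -1
    elif x < 100:
        return 0
    return 1

def infer_model(tests):
    # Naive model inference: learn threshold boundaries from observed
    # (input, output) pairs by sorting them and recording the points
    # where the output changes.
    pairs = sorted(tests)
    boundaries = []  # (threshold, output for inputs below it)
    for (x1, y1), (x2, y2) in zip(pairs, pairs[1:]):
        if y1 != y2:
            boundaries.append((x2, y1))
    last_output = pairs[-1][1]

    def model(x):
        for threshold, out in boundaries:
            if x < threshold:
                return out
        return last_output
    return model

def behavioural_adequacy(tests, trials=1000, seed=0):
    # Statistical adequacy estimate: infer a model from the test set and
    # measure how often it agrees with the program on fresh random inputs.
    # High agreement suggests the tests capture the program's behaviour.
    rng = random.Random(seed)
    model = infer_model(tests)
    agree = sum(model(x) == program_under_test(x)
                for x in (rng.randint(-1000, 1000) for _ in range(trials)))
    return agree / trials

# A sparse test set misses the x < 100 boundary; a denser one captures
# both branch boundaries, so its inferred model agrees far more often.
sparse = [(x, program_under_test(x)) for x in (-500, 500)]
dense = [(x, program_under_test(x)) for x in range(-200, 300, 10)]
print(behavioural_adequacy(sparse))
print(behavioural_adequacy(dense))
```

The adequacy score here is only a sampling-based proxy, but it shows the key property the abstract relies on: a test set from which an accurate model can be inferred scores near 1.0, while an inadequate one is exposed by disagreements on unseen inputs.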
