IEEE/ACM International Workshop on Automation of Software Test

Analyzing Automatic Test Generation Tools for Refactoring Validation

Abstract

Refactoring edits are very common during agile development. Due to their inherent complexity, refactorings are known to be error-prone, so refactoring edits require validation to check that no behavior change was introduced. One accepted way to validate refactorings is to use automatically generated regression test suites. However, although popular, it is not certain whether test generation tools (e.g., Randoop and EvoSuite) are in fact suitable in this context. This paper presents an exploratory study that investigated how effective automatically generated suites are at detecting refactoring faults. Our results show that both Randoop and EvoSuite suites missed more than 50% of all injected faults. Moreover, their suites include a great number of tests that could not be run in their entirety after the edits (obsolete test cases).
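
To make the failure mode concrete, below is a minimal sketch (not taken from the paper; the Account class and the rename refactoring are hypothetical) of a Randoop/EvoSuite-style regression test, with a comment marking the call that a simple rename would break, leaving part of the suite unrunnable, i.e., obsolete:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test (not from the paper), shown before any refactoring.
    class Account {
        private int balance;

        Account(int initial) {
            balance = initial;
        }

        int getBalance() {
            return balance;
        }

        void deposit(int amount) {
            balance += amount;
        }
    }

    // Regression tests in the style of suites emitted by Randoop or EvoSuite:
    // each assertion pins the return value observed when the test was generated.
    public class AccountRegressionTest {

        @Test
        public void test01() {
            Account account = new Account(100);
            assertEquals(100, account.getBalance());
        }

        @Test
        public void test02() {
            Account account = new Account(100);
            // If a later refactoring renames deposit(int) to credit(int),
            // this call no longer compiles, so the test cannot be run after
            // the edit -- an "obsolete test case" in the paper's terms.
            account.deposit(50);
            assertEquals(150, account.getBalance());
        }
    }

Because such tools derive tests and assertions from the pre-edit program, any signature touched by a refactoring can invalidate generated tests wholesale, independently of whether the edit actually changed behavior.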