The Journal of Systems and Software

Scented since the beginning: On the diffuseness of test smells in automatically generated test code



Abstract

Software testing is a key software engineering practice to ensure source code quality and reliability. To support developers in this activity and reduce testing effort, several automated unit test generation tools have been proposed. Most of these approaches have the main goal of covering as many branches as possible. While they perform well in this respect, little is known about the maintainability of the test code they produce, i.e., whether the generated tests have good code quality and whether they introduce issues threatening their effectiveness. To bridge this gap, in this paper we study to what extent existing automated test case generation tools produce potentially problematic test code. We consider seven test smells, i.e., suboptimal design choices applied by programmers during the development of test cases, as a measure of the code quality of the generated tests, and evaluate their diffuseness in the unit test classes automatically generated by three state-of-the-art tools, namely RANDOOP, JTExPERT, and EVOSUITE. Moreover, we investigate whether there are characteristics of test and production code that influence the generation of smelly tests. Our study shows that all the considered tools tend to generate a large number of instances of two specific test smell types, i.e., Assertion Roulette and Eager Test, which previous studies showed to negatively impact the reliability of production code. We also find that test size is correlated with the generation of smelly tests. Based on our findings, we argue that more effective automated generation algorithms that explicitly take test code quality into account should be devised and investigated further. (C) 2019 Elsevier Inc. All rights reserved.
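For illustration only, the following is a minimal JUnit 4 sketch of the two smells the abstract highlights, Assertion Roulette and Eager Test. The Stack class and the test methods are hypothetical examples, not code generated by the tools or taken from the study.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Hypothetical production class, used only to make the tests self-contained.
    class Stack {
        private final java.util.Deque<Integer> items = new java.util.ArrayDeque<>();
        void push(int x)      { items.push(x); }
        int pop()             { return items.pop(); }
        int size()            { return items.size(); }
        boolean isEmpty()     { return items.isEmpty(); }
    }

    public class StackTest {

        // Assertion Roulette: several assertions without explanation messages;
        // when the test fails, it is unclear which assertion was violated.
        @Test
        public void testPushAndPop() {
            Stack s = new Stack();
            s.push(1);
            s.push(2);
            assertEquals(2, s.size());
            assertEquals(2, s.pop());
            assertEquals(1, s.pop());
            assertTrue(s.isEmpty());
        }

        // Eager Test: a single test method exercising several distinct behaviours
        // of the class under test (push, pop, size, isEmpty), which obscures the
        // intent of the test and makes it harder to maintain.
        @Test
        public void testEverything() {
            Stack s = new Stack();
            assertTrue(s.isEmpty());
            s.push(10);
            assertEquals(1, s.size());
            assertEquals(10, s.pop());
            assertTrue(s.isEmpty());
        }
    }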
