
Comparing the effectiveness of automated test generation tools 'EVOSUITE' and 'Tpalus'.


Abstract

Automated testing has been evolving over the years, and the main motivation behind the growth of these tools is to reduce the manual effort involved in checking the correctness of software. Writing test cases by hand is very time-consuming and requires a great deal of patience. The time and effort spent writing manual test cases can be saved and redirected toward improving the performance of the application. Statistics show that 50% of the total cost of software development is devoted to software testing, and even more in the case of critical software. The growth of these automated test generation tools leads to a big question: "How effective are these tools in checking the correctness of the application?" There are several challenges associated with developing automated test generation tools, and currently there is no established tool or metric for checking their effectiveness.

In my thesis, I aim to measure the effectiveness of two automated test generation tools: Tpalus and EVOSUITE. Both are capable of generating test cases for any program written in Java; they are specifically designed to work on Java. Several metrics have to be considered in measuring the effectiveness of a tool. I evaluate both tools using the results they produce on several open-source subjects. The metrics chosen for comparing these tools are code coverage, mutation score, and size of the test suite. Code coverage tells us how well the source code is covered by the test cases; a better test suite generally aims to cover most of the source code so that every statement is exercised as part of testing. A mutation score indicates how well the test suite detects and kills mutants. In this context, a mutant is a new version of a program created by making a small syntactic change to the original program.
The higher the mutation score, the more mutants are detected and killed. Results obtained during the experiment include branch coverage, line coverage, raw kill score, and normalized kill score. These results help us decide how effective these tools are when testing critical software.
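As a minimal illustration (not taken from the thesis), the following hypothetical Java snippet shows what a mutant looks like, a test input that distinguishes ("kills") it, and how a raw kill score is computed as killed mutants over total mutants. The class and method names are invented for this sketch:

```java
public class MutantDemo {
    // Original method under test: returns the larger of two values.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    // Mutant: the relational operator '>' is flipped to '<' --
    // a single small syntactic change to the original program.
    static int maxMutant(int a, int b) {
        return a < b ? a : b;
    }

    public static void main(String[] args) {
        // A generated test case with input (3, 1) kills this mutant,
        // because the original and the mutant disagree on the output.
        System.out.println(max(3, 1));        // prints 3
        System.out.println(maxMutant(3, 1));  // prints 1

        // Raw kill score for a hypothetical set of 10 mutants of
        // which 8 are killed: killed / total, here as a percentage.
        int killed = 8, total = 10;
        System.out.println(100 * killed / total + "%"); // prints 80%
    }
}
```

A test input on which the original and the mutant produce the same output (for example, `max(1, 1)`) would leave this mutant alive, which is why coverage alone does not guarantee a high mutation score.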

Record details

  • Author: Chitirala, Sai Charan Raj.
  • Affiliation: University of Minnesota.
  • Granting institution: University of Minnesota.
  • Subject: Computer science.
  • Degree: M.S.
  • Year: 2016
  • Pages: 81 p.
  • Format: PDF
  • Language: English
