Software Testing, Verification and Validation Workshops, 2009 (ICSTW '09)

Test Case Generation Using Model Checking for Software Components Deployed into New Environments



Abstract

In this paper, we show how to generate test cases for a component deployed into a new software environment. This problem is important for software engineers who need to deploy a component into a new environment. Most existing model-based testing approaches generate models from high-level specifications. This leaves a semantic gap between the high-level specification and the actual implementation. Furthermore, the high-level specification often needs to be manually translated into a model, which is a time-consuming and error-prone process. We propose generating the model automatically by abstracting the source code of the component with an under-approximating predicate abstraction scheme while leaving the environment concrete. Test cases are generated by iteratively executing the entire system and storing the border states between the component and the environment. A model checker is used to explore non-deterministic behaviors of the component that arise from concurrency or data abstraction. The environment is symbolically simulated to detect refinement conditions. Assuming that the runtime environment supports symbolic execution and produces a single, unique response to a given input, we prove that our approach generates test cases with complete coverage of the component whenever the proposed algorithm terminates. When the algorithm does not terminate, the abstract-concrete model can be refined iteratively to generate additional test cases. Test cases generated from this abstract-concrete model can be used to check whether a new environment is compatible with the existing component.
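To make the workflow concrete, the following minimal Python sketch (a hypothetical illustration, not the authors' tool) mimics one round of the loop described above: a single predicate abstracts the component's state, the abstract states are enumerated in place of a real model checker, a concrete environment function returns its unique response to each boundary request, and the resulting border states are collected as test cases. All names (environment, BorderState, explore_component, generate_tests) and the toy predicate x >= 0 are assumptions made only for illustration.

    from dataclasses import dataclass
    from itertools import product
    from typing import Set, Tuple

    def environment(request: int) -> int:
        # Stand-in for the concrete environment: deterministic, one unique
        # response per boundary request, as assumed in the abstract.
        return request * 2 if request >= 0 else -1

    @dataclass(frozen=True)
    class BorderState:
        # A "border state": the abstract component state at the boundary,
        # the request it sends to the environment, and the observed response.
        predicate_valuation: Tuple[bool, ...]
        env_request: int
        env_response: int

    def explore_component() -> Set[Tuple[bool, ...]]:
        # Enumerate reachable abstract states (valuations of the predicate
        # set).  With one predicate, p == (x >= 0), this is just {(True,),
        # (False,)}; a real model checker would instead explore abstract
        # transitions and thread interleavings of the component.
        return set(product([True, False], repeat=1))

    def generate_tests() -> Set[BorderState]:
        tests: Set[BorderState] = set()
        for valuation in explore_component():
            # Concretize a boundary request consistent with the abstract
            # state: pick a representative value of x for p and for not-p.
            request = 1 if valuation[0] else -1
            # Execute the concrete environment to observe its unique response.
            response = environment(request)
            tests.add(BorderState(valuation, request, response))
        return tests

    if __name__ == "__main__":
        for t in sorted(generate_tests(), key=lambda s: s.env_request):
            print(t)

A real implementation, as the abstract describes, would replace explore_component with model checking of the under-approximated component and would iterate: whenever the environment's concrete (or symbolically simulated) response contradicts the abstraction, the predicate set is refined and the loop runs again to produce additional test cases.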
