Empirical Software Engineering

What am I testing and where? Comparing testing procedures based on lightweight requirements annotations



Abstract

Context The testing of software-intensive systems is performed in different test stages, each comprising a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where typical test stages include Software- and Hardware-in-the-Loop.

Objective Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in structured languages or models that would facilitate relating requirements to test executions.

Method With our approach, we close the gap between requirements specifications and test executions. Previously, we proposed a lightweight markup language for requirements that provides a set of annotations applicable to natural-language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be related directly to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another.

Results We annotate 443 natural-language requirements of a driver assistance system using our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers. Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation.

Conclusion With our approach, the results of several test levels are linked to the requirements, enabling the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage.
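The abstract does not show the concrete annotation syntax, so the following is only a minimal sketch of the general idea: requirements are annotated with markers naming observable signals, the annotations are linked to the signals recorded in each test execution, and per-stage coverage is derived from that link. The `[[signal]]` syntax, the function names, and the example data are all hypothetical, not taken from the paper.

```python
import re

# Hypothetical annotation syntax (illustration only): a requirement's
# natural-language text is marked up with [[signal_name]] tokens that
# name the signals observable in a test execution.
ANNOTATION = re.compile(r"\[\[(\w+)\]\]")

def annotated_signals(requirement_text):
    """Extract the set of signal names annotated in a requirement."""
    return set(ANNOTATION.findall(requirement_text))

def covered_by(requirement_text, execution_signals):
    """A requirement counts as exercised by a test execution if every
    annotated signal was recorded in that execution."""
    needed = annotated_signals(requirement_text)
    return bool(needed) and needed <= set(execution_signals)

def stage_coverage(requirements, executions_by_stage):
    """Per test stage, the fraction of requirements exercised by at
    least one test execution of that stage."""
    coverage = {}
    for stage, executions in executions_by_stage.items():
        hit = sum(
            any(covered_by(req, ex) for ex in executions)
            for req in requirements
        )
        coverage[stage] = hit / len(requirements)
    return coverage

# Toy data: two annotated requirements, two test stages
# (a Software-in-the-Loop simulation and a real test drive).
requirements = [
    "When [[ego_speed]] exceeds the limit, [[warning_tone]] shall sound.",
    "The system shall log [[steering_angle]] continuously.",
]
executions_by_stage = {
    "SiL": [{"ego_speed", "warning_tone", "steering_angle"}],
    "test_drive": [{"ego_speed", "warning_tone"}],
}
print(stage_coverage(requirements, executions_by_stage))
# {'SiL': 1.0, 'test_drive': 0.5}
```

Comparing these per-stage coverage figures is the kind of analysis the paper performs at scale, over 443 requirements and more than 1300 executions; this sketch only illustrates the linking mechanism, not the paper's actual tooling.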
