International Conference on Tests and Proofs (TAP 2020); Software Technologies: Applications and Foundations

Testing, Runtime Verification and Automata Learning: Invited Tutorial TAP 2020



Abstract

Testing and runtime verification are both verification techniques for checking whether a system is correct. The essential artefacts for this check are actual executions of the system, formally words. Such a set of words should be representative of the system's behavior. In the field of automata learning (or grammatical inference), a formal model of a system is derived from exemplifying behavior; in other words, it addresses the question of which model fits a given set of words.

In testing, the system under test is typically examined on a finite set of test cases, formally words, which may be derived manually or automatically. Oracle-based testing is a form of testing in which an oracle, typically a manually developed piece of code, is attached to the system under test and used to check whether a given set of test cases passes or fails.

In runtime verification, typically, a formal specification of the correct behavior is given, from which a so-called monitor is synthesised and used to examine whether the behavior of the system under test, or more generally the system to monitor, adheres to that specification. In a sense, when employed in testing, the monitor acts as a test oracle.

From the discussion above we see that testing, runtime verification, and automata learning share similarities but also exhibit differences. The main artefacts used by these methods are formal specifications and models such as automata, but especially sets of words, on which the different system descriptions are compared in order to eventually obtain a verdict on whether the system under test is correct.

In this tutorial we recall the basic ideas of testing, oracle-based testing, model-based testing, conformance testing, automata learning and runtime verification, and elaborate a coherent picture with the above-mentioned artefacts as ingredients. We mostly refrain from technical details and concentrate instead on the big picture of these verification techniques.
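The monitor-as-oracle idea from the abstract can be sketched concretely. The following minimal Python example, an illustration rather than anything taken from the tutorial itself, hand-codes a monitor for the hypothetical safety property "no 'send' event after 'close'" and uses it as an oracle to classify finite test traces (words) as pass or fail; all event names are assumptions for the sake of the example.

```python
# Minimal sketch: a runtime-verification monitor used as a test oracle.
# The property ("no 'send' after 'close'") and the event names are
# illustrative assumptions, not taken from the tutorial.

def make_monitor():
    """Monitor for the safety property: no 'send' event after 'close'."""
    closed = False

    def step(event):
        nonlocal closed
        if event == "close":
            closed = True
        # Returns False exactly when the property is violated.
        return not (closed and event == "send")

    return step

def oracle(trace):
    """Run the monitor over a finite word; verdict is 'pass' or 'fail'."""
    monitor = make_monitor()
    return "pass" if all(monitor(event) for event in trace) else "fail"

print(oracle(["open", "send", "close"]))  # pass
print(oracle(["open", "close", "send"]))  # fail
```

A monitor synthesised from a formal specification (for example an LTL formula) would play the same role as the hand-written `step` function here: it consumes the trace event by event and yields a verdict.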
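For the learning direction, which model fits a given set of words, one of the simplest constructions is a prefix-tree acceptor: the most conservative automaton accepting exactly a finite positive sample. The sketch below is an illustrative assumption for this page, not part of the tutorial; practical learning algorithms such as L* or RPNI generalise beyond the sample rather than memorising it.

```python
# Minimal sketch: a prefix-tree acceptor (PTA) built from a finite set
# of positive example words -- the most conservative automaton
# consistent with the sample. All names are illustrative.

def build_pta(words):
    """Return (transitions, accepting) for the prefix-tree acceptor."""
    transitions = {}   # (state, symbol) -> next state
    accepting = set()
    next_state = 1     # state 0 is the initial state

    for word in words:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(pta, word):
    """Run the automaton on a word; reject on any missing transition."""
    transitions, accepting = pta
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

pta = build_pta(["ab", "abb", "b"])
print(accepts(pta, "ab"))  # True: a sample word
print(accepts(pta, "a"))   # False: a prefix, but not a sample word
```

The PTA makes the "set of words" artefact tangible: the sample fully determines the model, and more sophisticated learners can be seen as merging PTA states to obtain smaller, generalising automata.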

