Testing and runtime verification are both techniques for checking whether a system is correct. The essential artefacts for such checks are actual executions of the system, formally words. Such a set of words should be representative of the system's behavior. In the field of automata learning (or grammatical inference), a formal model of a system is derived from exemplifying behavior. In other words, the question addressed is which model fits a given set of words.

In testing, the system under test is typically examined on a finite set of test cases, formally words, which may be derived manually or automatically. Oracle-based testing is a form of testing in which an oracle, typically a manually developed piece of code, is attached to the system under test and employed to check whether a given test case passes or fails.

In runtime verification, typically, a formal specification of the correct behavior is given, from which a so-called monitor is synthesised and used to examine whether the behavior of the system under test, or more generally the system to monitor, adheres to that specification. In a sense, the monitor acts as a test oracle when employed in testing.

From the discussion above we see that testing, runtime verification, and automata learning share similarities but also exhibit differences. The main artefacts used in the different methods are formal specifications, models such as automata, and especially sets of words, on which the different system descriptions are compared to eventually obtain a verdict on whether the system under test is correct or not.

In this tutorial we recall the basic ideas of testing, oracle-based testing, model-based testing, conformance testing, automata learning, and runtime verification, and elaborate a coherent picture with the above-mentioned artefacts as ingredients. We mostly refrain from technical details and concentrate on the big picture of these verification techniques.
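The idea of a monitor serving as a test oracle can be sketched as follows. This is a minimal illustration, not a construction from the tutorial: the safety property ("no grant without a pending request"), the event names, and the helper functions are all assumptions chosen for the example.

```python
# Minimal sketch of a monitor used as a test oracle (illustrative only).
# The monitor checks the assumed safety property "no 'grant' occurs
# without a pending 'request'" over a finite word, i.e. a sequence of
# observed events.

def make_monitor():
    """Return a stateful monitor: feed it events one by one, read a verdict."""
    state = {"pending": 0, "failed": False}

    def step(event):
        if state["failed"]:
            return "fail"                     # verdicts are irrevocable
        if event == "request":
            state["pending"] += 1
        elif event == "grant":
            if state["pending"] == 0:
                state["failed"] = True        # grant without a request
                return "fail"
            state["pending"] -= 1
        return "ok so far"

    return step

def oracle(word):
    """Use the monitor as a test oracle: the test passes iff no event fails."""
    monitor = make_monitor()
    return all(monitor(event) != "fail" for event in word)

print(oracle(["request", "grant"]))   # True: every grant has a request
print(oracle(["grant"]))              # False: grant without a request
```

In practice such a monitor would be synthesised automatically from a formal specification (e.g. a temporal-logic formula) rather than hand-coded; the hand-written version above only illustrates the interface between a monitor and the word it judges.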