One aspect of self-organizing systems is their desired ability to be self-learning, i.e., to adapt dynamically to conditions in their environment. This quality is problematic especially when it comes to applications in security- or safety-sensitive areas. Here, a step towards more trustworthy systems could be taken by providing transparency of a system's processes. An important means of giving feedback to an operator is the visualization of the internal processes of a system. In this position paper we address the problem of visualizing dynamic processes, especially in self-learning systems. We take an existing self-learning system from the field of computer vision as an example, from which we derive questions of general interest, such as possible options for visualizing the flow of information in a dynamic learning system or the visualization of symbolic data. As a side effect, the visualization of learning processes may provide a better understanding of the underlying principles of learning in general, i.e., also in biological systems. That may in turn facilitate improved designs of future self-learning systems.