Machine Learning (ML) algorithms have recently become a central focus of industrial development efforts. Many companies in the automotive sector see ML methods as an enabler of autonomous driving, due to the promising capabilities of trained ML algorithms to represent complex structures and behavioral models. Consequently, the introduction of ML methods into industrial and safety-related applications comes with the requirement of Verification & Validation (V&V) of ML algorithms. In order to validate a trained ML model, one needs to be able to interpret not only its outputs, but also the processes within the model itself. One option is to map the high-dimensional data onto lower-dimensional representations to allow users to interpret and understand the data ML algorithms use, e.g. by applying multi-dimensional scaling or t-distributed Stochastic Neighbor Embedding (t-SNE). Further methods that have led to a recent breakthrough in ML visualization require engineering knowledge to validate the activations throughout the network. These methods help to gain insights into the fundamental features which the network learns. In the field of image processing, these are mainly based on convolutional methods, such as Convolutional Neural Networks (CNNs) or Convolutional Auto-Encoders (CAEs). In this paper, we present these visualization techniques to establish, to a certain extent, the interpretability of ML methods, which in turn supports the validation of the algorithms. We also introduce possible approaches to tackle the problem of V&V for ML algorithms in the automotive sector, which are currently considered black-box systems. Our paper attempts to provide an intuition of how validation might be achieved and of the next steps researchers could take.
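
The following is a minimal sketch of the kind of dimensionality reduction mentioned above; it is not the paper's implementation. It assumes scikit-learn's TSNE and uses the 64-dimensional digits dataset purely as a stand-in for learned network activations.

```python
# Illustrative sketch: projecting high-dimensional features into 2D with t-SNE
# so that their structure can be inspected visually. The digits dataset and the
# matplotlib scatter plot are assumptions for the example, not the paper's data.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Load a small, high-dimensional dataset (64-dimensional digit images)
# as a placeholder for activations extracted from a trained network.
digits = load_digits()
features, labels = digits.data, digits.target

# t-SNE maps the 64-dimensional samples to 2D while preserving local
# neighborhood structure, which makes cluster formation visible to a reviewer.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

# Plot the 2D embedding, colored by class label, for visual inspection.
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=10)
plt.title("t-SNE projection of 64-dimensional digit features")
plt.show()
```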