Training deep learning models for perception requires large annotated datasets, which are expensive and tedious to generate for real-world applications. Small job shops with very low production volumes often cannot spare the resources for such data generation tasks. One possible solution is the use of simulated data. However, there is always a discrepancy between simulated and real data, which may severely degrade the real-world performance of models trained only on simulated data. Therefore, in this publication, we investigate different methods to account for this gap: photo-realistic rendering, domain randomization, and domain adaptation. We analyze the individual and combined effectiveness of these approaches for an instance segmentation model. The target setting is a flexible assembly cell for low-volume production with limited resources for data generation and training. We critically discuss the results and show that even simple rendering techniques, when combined with domain randomization, can lead to good results.