Researchers have developed numerous quality assessment metrics to compare the performance of multi-objective evolutionary algorithms. These metrics exhibit different properties and address different aspects of solution-set quality. In this paper, we propose a conceptual framework for selecting a small number of these metrics such that all desired aspects of quality are covered with minimal or no redundancy. We prove that such sets of metrics, referred to as 'minimal sets', must be constructed in one-to-one correspondence with the aspects of quality that are desirable to a decision-maker.