This paper shows how to measure the complexity of a geometric design space and reduce its dimensionality. It assumes that high-dimensional design parameters actually lie on a much lower-dimensional manifold that represents semantic attributes. Past work has shown how to embed designs using techniques such as autoencoders; in contrast, this paper quantifies when and why one embedding is better than another. It captures the intrinsic dimensionality of a design space, an embedding's performance in reconstructing new designs, and how well the topology of the original design space is preserved. We demonstrate this on both synthetic superformula shapes of varying non-linearity and real glassware designs, evaluating multiple embeddings by shape reconstruction error, topology preservation, and the required dimensionality of the semantic space. Our work generates fundamental knowledge about the inherent complexity of a design space and how designs differ from one another, deepening our understanding of design complexity in general.
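As a concrete illustration of the synthetic test case mentioned above, the sketch below samples 2-D outlines from the standard Gielis superformula, where each design is a small parameter vector rendered to a high-dimensional point set. The specific parameter values and array layout are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

def superformula(theta, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula radius r(theta); m and n1..n3 control non-linearity."""
    term = (np.abs(np.cos(m * theta / 4.0) / a) ** n2
            + np.abs(np.sin(m * theta / 4.0) / b) ** n3)
    return term ** (-1.0 / n1)

# Hypothetical design space: each design is a parameter tuple (m, n1, n2, n3)
# rendered to a closed outline of 256 (x, y) points.
theta = np.linspace(0.0, 2.0 * np.pi, 256)
params = [(3, 5, 10, 10), (5, 1, 1, 1), (7, 2, 8, 4)]
shapes = np.stack([
    np.column_stack((superformula(theta, *p) * np.cos(theta),
                     superformula(theta, *p) * np.sin(theta)))
    for p in params
])  # shape (n_designs, 256, 2); flattened rows form the high-dimensional space
```

Flattening each outline to a 512-dimensional vector gives the kind of high-dimensional representation whose intrinsic (semantic) dimensionality an embedding would then try to recover.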