There is still a palpable disconnect between how architecture is designed in the digital realm and how it is realized in the physical realm. A number of factors contribute to this gap, including a virtual environment's infinite scale, its autonomy from a tangible context, and its lack of physical materiality. This paper addresses these issues through custom vision-based modeling software that uses a 3D scanning/sensing/printing workflow to merge digital processes in architectural design with physical processes in fabrication. The application internalizes three layers of physical information that simultaneously influence the digital design. The first layer is a physical context that is 3D scanned as the base geometry from which to design. The second layer uses a depth camera to sense the designer's hand gestures, bringing them into the virtual environment as a 3D controller. The third layer encodes the material limits of the output device (an FDM 3D printer) into the design of a digital 3D module. The user can then gesture with their hand to digitally model a sculptural form, which is instantly realized in physical form by the 3D printer.
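As a rough illustration of how the three layers described above might interact (this is a minimal sketch, not the authors' implementation; all constants, thresholds, and function names below are hypothetical):

```python
# Hypothetical sketch of the three-layer workflow: a scanned physical context
# (layer 1), gesture input from a depth camera (layer 2), and FDM printer
# material limits encoded into the digital module (layer 3).

NOZZLE_MM = 0.4                    # assumed nozzle diameter
MIN_WALL_MM = 2 * NOZZLE_MM        # assumed minimum printable wall thickness
MAX_OVERHANG_DEG = 45.0            # assumed printable overhang without supports


def clamp_module(wall_mm: float, overhang_deg: float) -> tuple:
    """Layer 3: constrain a module's parameters to what the printer can make."""
    wall = max(wall_mm, MIN_WALL_MM)
    overhang = min(overhang_deg, MAX_OVERHANG_DEG)
    return wall, overhang


def place_module(hand_xyz: tuple, base_z: float) -> tuple:
    """Layers 1-2: snap a gesture-driven module onto the scanned context.

    hand_xyz is a 3D hand position from the depth camera; base_z is the
    height of the scanned base geometry at that point. The module is never
    placed below the scanned surface.
    """
    x, y, z = hand_xyz
    return (x, y, max(z, base_z))


if __name__ == "__main__":
    # A wall thinner than the printer can extrude is widened; a too-steep
    # overhang is reduced to the printable limit.
    print(clamp_module(0.1, 80.0))
    # A hand gesture below the scanned surface is lifted onto it.
    print(place_module((10.0, 5.0, -3.0), 0.0))
```

In a real pipeline, `hand_xyz` would come from a depth-camera hand tracker and `base_z` from querying the scanned mesh, but the clamping logic would serve the same role: every digitally modeled move is filtered through physical constraints before being sent to the printer.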