We present a computational accommodation-invariant near-eye display that images with coherent light and combines static optics with convolutional neural network (CNN)-based image preprocessing. The network and the display optics are co-optimized to produce a depth-invariant display point spread function, thereby mitigating the vergence-accommodation conflict inherent to conventional near-eye displays. Simulations show that a near-eye display designed with the proposed approach delivers sharp images over a depth range of 3 diopters with an effective aperture (eyepiece) size of 10 mm, making it a competitive alternative to existing accommodation-invariant displays.
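The core idea of preprocessing an image so that, after blurring by the display's point spread function (PSF), the perceived image remains sharp can be illustrated with a simple linear stand-in. The sketch below uses Wiener deconvolution against an assumed Gaussian PSF; the paper's actual method uses a learned CNN preprocessor co-optimized with the optics, and all function names and parameters here are hypothetical.

```python
import numpy as np

def wiener_precompensate(target, psf, snr=1e2):
    """Return an input image that, after convolution with `psf`,
    approximates `target` (simple Wiener-filter precompensation)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(target) * W))

def display(image, psf):
    """Simulate the display optics as circular convolution with the PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Gaussian blur as a stand-in for the (depth-invariant) display PSF.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

rng = np.random.default_rng(0)
target = rng.random((n, n))

blurred = display(target, psf)                  # no preprocessing
compensated = display(wiener_precompensate(target, psf), psf)

err_naive = np.mean((blurred - target) ** 2)
err_pre = np.mean((compensated - target) ** 2)
print(err_pre < err_naive)  # precompensation reduces reconstruction error
```

In the paper's setting the preprocessing must work for a single PSF shared across all depths, which is why the PSF and the (nonlinear, learned) preprocessor are optimized jointly rather than fixed in advance as in this linear example.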