In this paper, we propose a new method for generating an image-based 3D panoramic virtual environment (VE). The panoramic VE is generated from 3D depth information estimated by rotating two multi-view cameras. Although conventional 2D image-based mosaicking methods provide a wide field of view, they cannot offer the user a navigable virtual environment. To overcome this limitation, we first estimate the depth of the scene using two calibrated multi-view cameras and then stitch the resulting 3D point clouds instead of the images. Rotating the two cameras on a turn-table enables users to navigate the resulting 3D virtual environment with a head-mounted display (HMD).
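The core stitching step described above can be sketched as follows: each per-view point cloud is rotated into a common reference frame using the known turn-table angle at which it was captured, and the transformed clouds are then concatenated. This is a minimal illustrative sketch, not the authors' implementation; the function names, the assumption of a vertical rotation axis, and the use of NumPy are all assumptions for illustration.

```python
import numpy as np

def rotation_y(theta):
    """3x3 rotation about the vertical (turn-table) axis by theta radians.
    Assumes the turn-table axis is aligned with the camera frame's y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

def stitch_point_clouds(clouds, angles_deg):
    """Rotate each (N_i x 3) point cloud into a common frame using its
    turn-table angle, then concatenate all clouds into one (sum N_i x 3) array."""
    stitched = []
    for pts, angle in zip(clouds, angles_deg):
        R = rotation_y(np.deg2rad(angle))
        stitched.append(pts @ R.T)  # apply R to every row (point)
    return np.vstack(stitched)
```

In practice the per-view clouds would come from the depth estimated by the calibrated multi-view cameras, and a refinement step (e.g. ICP) could correct small turn-table calibration errors; that refinement is omitted here for brevity.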