This paper describes a method of estimating the position and orientation of a camera for constructing a mixed reality (MR) system. In an MR system, 3D virtual objects must be merged into a 3D real environment at the right position in real time. Acquiring the user's viewing position and orientation is the main technical problem in constructing an MR system. The user's viewpoint can be determined by estimating the position and orientation of a camera from images taken at that viewpoint. Our method estimates the camera pose from the screen coordinates of captured color fiducial markers whose 3D positions are known. The method consists of three algorithms for perspective n-point problems and uses each algorithm selectively. The method also estimates the screen coordinates of untracked markers that are occluded or out of the view. It has been found that an experimental MR system based on the proposed method can seamlessly merge 3D virtual objects into a 3D real environment at the right position in real time, and allows users to look around an area in which markers are placed.
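The abstract does not specify the three perspective n-point algorithms, so as an illustrative sketch only (not the authors' method), the following shows one standard way to recover camera pose from 3D marker positions and their screen coordinates: the Direct Linear Transform (DLT), which solves for the projection matrix from six or more correspondences and then extracts a rotation and translation. Image points are assumed to be in normalized camera coordinates (intrinsics already removed).

```python
import numpy as np

def estimate_pose_dlt(object_pts, image_pts):
    """Recover camera rotation R and translation t from >= 6 non-coplanar
    3D-2D correspondences via the Direct Linear Transform (DLT).

    object_pts: (N, 3) marker positions in the world frame.
    image_pts:  (N, 2) normalized image coordinates (u, v) = (x/z, y/z).
    """
    # Each correspondence contributes two linear constraints on the
    # 12 entries of the 3x4 projection matrix P = [R | t] (up to scale).
    A = []
    for (X, Y, Z), (u, v) in zip(object_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    M, t = P[:, :3], P[:, 3]
    # Fix the unknown scale and sign so that M is close to a rotation
    # (rows of a rotation have unit norm, determinant is +1).
    s = np.sign(np.linalg.det(M)) / np.linalg.norm(M[2])
    M, t = s * M, s * t
    # Project M onto the nearest rotation matrix via SVD.
    U, _, Vt2 = np.linalg.svd(M)
    R = U @ Vt2
    return R, t

# Usage: synthesize a known pose, project cube-corner "markers",
# and recover the pose from the correspondences.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                       [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
a, b = np.deg2rad(20), np.deg2rad(30)
Rx = np.array([[1, 0, 0],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a),  np.cos(a)]])
Rz = np.array([[np.cos(b), -np.sin(b), 0],
               [np.sin(b),  np.cos(b), 0],
               [0, 0, 1]])
R_true = Rz @ Rx
t_true = np.array([0.1, -0.2, 4.0])
cam = (R_true @ object_pts.T).T + t_true          # points in camera frame
image_pts = cam[:, :2] / cam[:, 2:3]              # perspective division
R_est, t_est = estimate_pose_dlt(object_pts, image_pts)
```

With exact correspondences the DLT recovers the pose to numerical precision; real systems would follow it with a nonlinear refinement and, as in the paper, switch among algorithms depending on how many markers are currently visible.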