The most prevalent routine for camera calibration is based on the detection of well-defined feature points on a purpose-made calibration artifact. These could be checkerboard saddle points, circles, rings, or triangles, often printed on a planar structure. The feature points are first detected and then used in a nonlinear optimization to estimate the internal camera parameters. We propose a new method for camera calibration using the principle of inverse rendering. Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features. This enables us to compute pixel-wise differences in the image domain without interpolation artifacts. We can then improve our estimate of the internal parameters by minimizing pixel-wise least-squares differences. In this way, our model optimizes a meaningful metric in the image space, assuming the normally distributed noise characteristic of camera sensors. We demonstrate on synthetic and real camera images that our method improves the accuracy of the estimated camera parameters compared with current state-of-the-art calibration routines. Our method also estimates these parameters more robustly in the presence of noise and in situations where the number of calibration images is limited.
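The core idea above can be illustrated with a minimal, hypothetical sketch (not the paper's actual implementation): render the calibration features non-photorealistically as Gaussian blobs from a current guess of an internal parameter, and refine that guess by minimizing pixel-wise least-squares differences against an observed image. Here only a single focal length `f` is optimized, the pattern pose is assumed known, and the "observed" image is synthesized with a ground-truth focal length; all names and values are illustrative.

```python
# Hypothetical inverse-rendering calibration sketch: recover a focal length
# by pixel-wise least squares between observed and rendered feature images.
import numpy as np
from scipy.optimize import least_squares

H, W = 64, 64  # image size (illustrative)

def project(points3d, f, cx=W / 2, cy=H / 2):
    """Pinhole projection with a single focal length f (simplified model)."""
    x = f * points3d[:, 0] / points3d[:, 2] + cx
    y = f * points3d[:, 1] / points3d[:, 2] + cy
    return np.stack([x, y], axis=1)

def render(points2d, sigma=3.0):
    """Non-photorealistic rendering: each feature becomes a Gaussian blob,
    giving a smooth image that supports pixel-wise comparison."""
    yy, xx = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for px, py in points2d:
        img += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))
    return img

# Planar 4x4 grid of feature points at a known pose, 5 units from the camera.
gx, gy = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4))
pts3d = np.stack([gx.ravel(), gy.ravel(), np.full(16, 5.0)], axis=1)

f_true = 80.0
observed = render(project(pts3d, f_true))  # stands in for a real camera image

def residuals(params):
    # Pixel-wise difference between rendered guess and observed image.
    return (render(project(pts3d, params[0])) - observed).ravel()

fit = least_squares(residuals, x0=[70.0])  # deliberately wrong initial f
print(fit.x[0])
```

A real implementation would jointly optimize all internal parameters, distortion, and the pattern pose over many images, but the structure of the objective (a sum of pixel-wise squared differences, matching the Gaussian sensor-noise assumption) is the same.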