We propose a new learning approach to determine the geometric and photometric relationship between multiple cameras that have at least partially overlapping fields of view. The essential difference from standard matching techniques is that the search for similar spatial patterns is replaced by an analysis of temporal coincidences of single pixels. This analysis operates at a very low level of the processing hierarchy, since it is hypothesized to be a primary feature of visual perception that is also useful for technical vision systems. The proposed scheme yields an array of probability distributions that represent the geometrical structure of these correspondences for arbitrary relative orientations of the cameras, arbitrary imaging geometry (perspective, catadioptric, etc.), and with large tolerance for photometric differences between the image sensors.
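The core idea can be illustrated with a minimal, idealized sketch (not the authors' implementation): each camera pixel is reduced to a binary event stream marking sharp intensity changes, coincidences between event streams of the two cameras are counted over time, and each row of the count matrix is normalized into a probability distribution over candidate partner pixels. All function names, the event threshold, and the synthetic mirrored-camera setup below are illustrative assumptions.

```python
import random

def detect_events(signal, threshold=0.5):
    """Binary event stream: 1 where pixel intensity changes sharply between frames."""
    return [1 if abs(b - a) > threshold else 0 for a, b in zip(signal, signal[1:])]

def coincidence_matrix(streams_a, streams_b, threshold=0.5):
    """Count simultaneous events for every pixel pair of two cameras and
    normalize each row into a probability distribution over camera-B pixels."""
    events_a = [detect_events(s, threshold) for s in streams_a]
    events_b = [detect_events(s, threshold) for s in streams_b]
    counts = [[sum(ea[t] and eb[t] for t in range(len(ea))) for eb in events_b]
              for ea in events_a]
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

# Toy experiment: N scene points flicker independently; camera B observes
# them in mirrored order relative to camera A (a stand-in for an unknown
# geometric relationship between the two views).
random.seed(0)
T, N = 2000, 4
scene = [[random.random() for _ in range(T)] for _ in range(N)]
cam_a = scene          # camera A sees scene points in order 0..N-1
cam_b = scene[::-1]    # camera B sees them mirrored

P = coincidence_matrix(cam_a, cam_b)
# For each camera-A pixel, the most probable partner in camera B
matches = [max(range(N), key=lambda j: P[i][j]) for i in range(N)]
print(matches)  # recovers the mirror mapping [3, 2, 1, 0]
```

Because truly corresponding pixels see the same scene point, their event streams coincide far more often than chance, so the argmax of each probability row recovers the pixel-to-pixel mapping without any spatial pattern matching; the same counting scheme is indifferent to the cameras' imaging geometry.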