This paper describes a visual model that computes a perceptual distortion measure between an input image and a reference image, based on a representational model of human vision. In the proposed approach, a few active recognizers tuned to the significant orientation and spatial-frequency components of the reference spectrum are first obtained; any input image to be compared with the reference is then passed through an operator that compares its excitation levels in these active recognizers with the corresponding excitation levels for the reference image. The distortion between a pair of complex images is thus measured as a weighted sum of the distortions in a bank of strongly responding recognizers, each tuned to particular 2D spatial-frequency content of the reference picture, with the weight of each filter modulating its amplitude response. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. [References: 48]
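As an illustration only, the select-then-compare scheme described above can be sketched roughly as follows. The Gabor-like filter bank, its frequency/orientation parameters, the top-k selection of "active" recognizers, and the amplitude-proportional weighting are all assumptions made for this sketch, not the paper's exact operators:

```python
import numpy as np

def gabor_bank(size=32, freqs=(0.1, 0.2, 0.3),
               thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), sigma=4.0):
    # Hypothetical bank of Gabor-like filters spanning a few
    # spatial frequencies and orientations.
    ys, xs = np.mgrid[-size//2:size//2, -size//2:size//2]
    envelope = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    bank = []
    for f in freqs:
        for t in thetas:
            u = xs * np.cos(t) + ys * np.sin(t)   # coordinate along orientation t
            bank.append(envelope * np.cos(2.0 * np.pi * f * u))
    return bank

def excitations(img, bank):
    # Excitation level of each recognizer: magnitude of the
    # inner product between the image and the filter kernel.
    return np.array([abs(np.vdot(g, img)) for g in bank])

def distortion(ref, test, k=4):
    # Weighted sum of per-filter distortions over the k recognizers
    # that respond most strongly to the reference image.
    bank = gabor_bank(size=ref.shape[0])
    e_ref = excitations(ref, bank)
    active = np.argsort(e_ref)[-k:]              # recognizers tuned to the reference
    w = e_ref[active] / e_ref[active].sum()      # weights follow reference amplitudes
    e_test = excitations(test, bank)
    return float(np.sum(w * np.abs(e_test[active] - e_ref[active])))
```

Under this sketch, an undistorted input yields zero distortion, and the measure grows as the input's energy in the reference-tuned channels departs from the reference's.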