We employ an artificial neural network to fuse a triplet of "multi-spectral" brain images from a magnetic resonance imaging system into a segmented image. The pixel values at the same pixel location in each of the T1, T2, and PD images of the same slice of a given brain scan are input to a neural network for training. Each of the three output components takes a high or low value, so together they form codewords for the different grayscale classes. Eighty pixel locations from each class are sampled as triplets (T1, T2, PD) and used for backpropagation training. The trained network then maps each novel triplet to an output codeword that represents one of the 6 class grayscales, and that grayscale is written to the corresponding pixel location in the output image. Other researchers have mapped triplets of representative values, e.g., medians over small blocks, but this oversmooths and blurs the segmented regions. Our method appears to be more practical.
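The per-pixel scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tissue-class centroids, noise level, codeword assignment, network size, and learning rate are all invented placeholders; only the overall shape (80 sampled (T1, T2, PD) triplets per class, 6 classes, 3 high/low output components trained by backpropagation) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 6 tissue classes, each with an assumed (T1, T2, PD)
# intensity centroid; 80 sampled triplets per class, as in the abstract.
N_CLASSES, SAMPLES_PER_CLASS = 6, 80
centroids = np.array([[0.1, 0.1, 0.1], [0.9, 0.1, 0.1], [0.1, 0.9, 0.1],
                      [0.1, 0.1, 0.9], [0.9, 0.9, 0.1], [0.9, 0.1, 0.9]])
X = np.vstack([c + 0.02 * rng.standard_normal((SAMPLES_PER_CLASS, 3))
               for c in centroids])
labels = np.repeat(np.arange(N_CLASSES), SAMPLES_PER_CLASS)

# Three high/low outputs give 3-bit codewords; 6 of the 8 binary
# patterns are assigned (arbitrarily here) to the 6 grayscale classes.
codewords = np.array([[0, 0, 1], [0, 1, 0], [0, 1, 1],
                      [1, 0, 0], [1, 0, 1], [1, 1, 0]], dtype=float)
Y = codewords[labels]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, sigmoid units, plain batch backpropagation.
H, lr = 12, 1.0
W1 = 0.5 * rng.standard_normal((3, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 3)); b2 = np.zeros(3)

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d2 = (out - Y) * out * (1 - out)      # squared-error output delta
    d1 = (d2 @ W2.T) * h * (1 - h)        # backpropagated hidden delta
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)

def classify(triplet):
    """Map a novel (T1, T2, PD) triplet to the nearest class codeword."""
    out = sigmoid(sigmoid(triplet @ W1 + b1) @ W2 + b2)
    return int(np.argmin(((codewords - out) ** 2).sum(axis=1)))

# Segmentation would write the class grayscale back at each pixel location;
# here we just check accuracy on the training triplets.
train_acc = np.mean([classify(x) == y for x, y in zip(X, labels)])
```

Decoding to the nearest codeword, rather than thresholding each output independently, keeps every network response mapped to one of the 6 valid classes even when an output component is ambiguous.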