Visual Learning by Integrating Descriptive and Generative Methods

Cheng-en Guo, Song Chun Zhu, and Ying Nian Wu
This paper presents a mathematical framework for visual learning that integrates two popular statistical learning paradigms in the literature: I) descriptive learning, such as Markov random fields and minimax entropy learning, and II) generative learning, such as PCA, ICA, TCA, and HMM. We apply the integrated learning framework to texture modeling, and we assume that an observed texture image is generated by multiple layers of hidden stochastic processes with various texture elements called "textons". Each texton is expressed as a window function, like a mini-template or a wavelet, and each hidden stochastic process is a spatial pattern with a number of textons subject to affine transformations. The hidden layers are characterized by minimax entropy models, and they generate images by occlusion or linear addition. Thus, given a raw input image, the learning framework achieves four goals: i) computing the appearance of the textons, ii) inferring the hidden stochastic processes, iii) learning Gibbs models for each hidden stochastic process, and iv) verifying the learnt textons and models through random sampling. The integrated framework subsumes the minimax entropy learning paradigm and creates a richer class of probability models for visual patterns. Furthermore, we show that the integration of descriptive and generative models is a natural path of visual learning. We demonstrate the proposed framework and algorithms on many real images.
2000-09-01