Unsupervised Learning of Probabilistic Grammar-Markov Models for Object Categories

Long (Leo) Zhu, Yuanhao Chen, and Alan Yuille
We introduce a Probabilistic Grammar-Markov Model (PGMM) which couples probabilistic context-free grammars and Markov Random Fields. These PGMMs are generative models defined over attributed features and are used to detect and classify objects in natural images. PGMMs are designed so that they can perform rapid inference, parameter learning, and the more difficult task of structure induction. PGMMs can deal with unknown 2D pose (position, orientation, and scale) in both inference and learning, as well as with different appearances, or aspects, of the model. PGMMs can be learned in an unsupervised manner, where each image can contain one of an unknown number of objects of different categories or even be pure background. We first study the weakly supervised case, where each image contains an example of the (single) object of interest, and then generalize to less supervised cases. The goal of this paper is theoretical, but, to provide proof of concept, we demonstrate results from this approach on a subset of the Caltech data set (learning on a training set and evaluating on a testing set). Our results are generally comparable with the current state of the art, and our inference takes less than five seconds per image.
2009-09-01