Learning Local and Context Information with Applications to Natural/Medical Image Analysis

Songfeng Zheng
Ph.D., 2008
Advisor: Alan Yuille

To address the object detection problem in computer vision and medical image analysis, this dissertation proposes a learning-based approach that combines local information, short-range context information, and long-range context information.

Via supervised learning, the proposed approach first selects and combines features extracted from local image patches, which carry local information; this step gives a rough estimate of where the object of interest is. The second step combines information from the different cues obtained in the first step; this step integrates short-range context information and sharpens the estimate of the object's location. Finally, a shape model, which captures long-range context information, is employed to further refine the detection result.
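Concretely, the three stages can be read as a simple pipeline. The Python sketch below is only illustrative: local_score, fuse_cues, and shape_prior are hypothetical placeholders for the learned patch classifier, the cue-combination step, and the shape model, none of which are specified at this level of detail in the abstract.

    import numpy as np

    def local_score(image, r, c, radius=5):
        # Placeholder for the learned local classifier: score the patch
        # centered at (r, c) using local information only. Mean intensity
        # is a dummy feature; the dissertation learns boosted feature
        # combinations instead.
        patch = image[max(r - radius, 0):r + radius + 1,
                      max(c - radius, 0):c + radius + 1]
        return float(patch.mean())

    def fuse_cues(cue_maps):
        # Placeholder for short-range context: combine per-cue score maps
        # (here by simple averaging) to sharpen the location estimate.
        return np.mean(np.stack(cue_maps), axis=0)

    def shape_prior(score_map, threshold=0.5):
        # Placeholder for long-range context: keep only detections
        # consistent with a global shape model; a bare threshold here.
        return score_map > threshold

    def detect(image):
        h, w = image.shape
        # Stage 1: local information -> rough per-pixel score map.
        scores = np.array([[local_score(image, r, c)
                            for c in range(w)] for r in range(h)])
        # Stage 2: short-range context -> fuse several cue maps; a
        # locally averaged copy stands in for a second cue.
        second_cue = (scores + np.roll(scores, 1, axis=0)
                      + np.roll(scores, 1, axis=1)) / 3.0
        fused = fuse_cues([scores, second_cue])
        # Stage 3: long-range context -> shape model refines the result.
        return shape_prior(fused)

    # Example: a binary detection mask for a random test image.
    mask = detect(np.random.rand(32, 32))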

The proposed approach is applied to classic problems in computer vision, such as articulated object (e.g. horse, cow) boundary detection, foreground/background segmentation, and object detection and parsing. On standard computer vision datasets, the proposed approach outperforms alternative approaches from the literature. When the approach is applied to medical image analysis tasks (e.g. sulci detection), local information alone yields fairly good results, and context information yields further improvements.

AdaBoost-based algorithms are employed in the learning stage; however, the resulting strong classifier of AdaBoost is often a non-transparent “black box” that is difficult to interpret and offers little insight into the data. To overcome these limitations, and inspired by progress in cognitive science, this dissertation proposes a new learning method, Compositional Noisy-Logical Learning (CNLL), which is based on a noisy-logic representation. This dissertation explores two algorithms that implement CNLL and tests CNLL on standard machine learning datasets. The experimental results show that CNLL often outperforms AdaBoost while using far fewer features.
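As background, a noisy-logic representation builds classifiers from probabilistic logic gates. A minimal example is the classic noisy-OR gate, which combines binary causes C_i with causal strengths omega_i; this is a standard textbook formulation, and the exact parameterization used by CNLL may differ:

    \[
    P(E = 1 \mid C_1, \dots, C_N) \;=\; 1 \;-\; \prod_{i=1}^{N} \bigl(1 - \omega_i C_i\bigr),
    \qquad C_i \in \{0, 1\}, \quad 0 \le \omega_i \le 1 .
    \]

Each factor 1 - omega_i C_i is the probability that cause C_i fails to produce the effect, so the combined classifier remains interpretable cause by cause, in contrast to AdaBoost's weighted vote over weak learners.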
