Large Scale Variational Bayesian Inference with Applications to Image Deblurring

Brian Jonathan Verbaken
M.S., 2011
Advisor: Alan Yuille
Huge data sets have become the norm and place an increasing strain on computation time and memory. In computer vision and compressed sensing this burden is felt acutely because inference is performed on the edges between the pixels of an image: the number of edges does not scale linearly with image size, so the matrices involved grow faster than the computers that must analyze them. This is especially problematic when computing variances, since calculating variances exactly requires inverting the data matrix, which is computationally infeasible for large images; variance approximations must be used instead. The Lanczos method builds a low-rank matrix approximation through an iterative procedure similar to a singular value decomposition (SVD). Its drawback is that it scales poorly in both computation time and memory with the number of iterations, because each iteration requires a re-orthogonalization step. Rather than following the deterministic approach of the Lanczos method, a relatively new method introduces a stochastic element into an otherwise deterministic algorithm by sampling from a Markov random field (MRF); for a Gaussian Markov random field (GMRF), the samples are drawn from a perturbed GMRF. This sampling method is not only unbiased, it also estimates the variances efficiently from relatively few samples and scales much better than the Lanczos method with the dimensionality of the data. This thesis compares the two methods in terms of peak signal-to-noise ratio (PSNR) when deblurring images with a known smoothing kernel. The data come from a previous paper [6] and consist of 4 images with 8 smoothing kernels per image. It is also examined whether these Bayesian techniques are more effective than maximum a posteriori (MAP) estimation.
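
To illustrate the perturbed-GMRF idea described above, the following is a minimal sketch (not the thesis implementation) of Monte Carlo variance estimation for a Gaussian posterior. It assumes an illustrative 1-D toy deblurring model with a blur matrix B, a first-difference operator D, noise level sigma, and regularization weight lam, all of which are assumptions for the example; the posterior precision is A = B^T B / sigma^2 + lam * D^T D, and samples with covariance A^{-1} are drawn by perturbing the factors with white noise and solving a linear system per sample.

    # A minimal sketch, assuming a toy 1-D deblurring model; names are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma, lam = 64, 0.05, 2.0

    # Toy blur matrix: circulant 3-tap local averaging.
    B = (np.eye(n) + np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)) / 3.0
    # First-difference operator used as the GMRF smoothness prior.
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)

    # Stack the scaled factors so that the posterior precision is A = F^T F.
    F = np.vstack([B / sigma, np.sqrt(lam) * D])
    A = F.T @ F

    def sample_posterior_zero_mean(n_samples):
        """Draw zero-mean samples with covariance A^{-1} by perturbing the
        factors with white noise and solving a linear system per sample.
        (For large images the solve would be done iteratively, e.g. with
        conjugate gradients, rather than with a dense factorization.)"""
        Z = rng.standard_normal((F.shape[0], n_samples))
        return np.linalg.solve(A, F.T @ Z)   # columns distributed as N(0, A^{-1})

    # Monte Carlo estimate of the marginal variances from a handful of samples.
    S = sample_posterior_zero_mean(n_samples=30)
    var_mc = S.var(axis=1)

    # Exact variances (feasible only at this toy size) for comparison.
    var_exact = np.diag(np.linalg.inv(A))
    print("mean relative error:", np.mean(np.abs(var_mc - var_exact) / var_exact))

The key point the sketch conveys is that each sample costs one linear solve with the posterior precision, so the marginal variances can be approximated without ever inverting the matrix, whereas the exact computation in the last lines is only possible because the toy problem is small.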