Songtao Lu, Naweed Khan, et al.
ICASSP 2021
We address the problem of prior matrix estimation for the solution of ℓ1-regularized ill-posed inverse problems. From a Bayesian viewpoint, we show that such a matrix can be regarded as an influence matrix in a multivariate ℓ1-Laplace density function. Assuming a training set is given, the prior matrix design problem is cast as a maximum likelihood term with an additional sparsity-inducing term. This formulation results in an unconstrained yet non-convex optimization problem. Memory requirements, as well as computation of the nonlinear, nonsmooth sub-gradient equations, are prohibitive for large-scale problems. Thus, we introduce an iterative algorithm to design efficient priors for such large problems. We further demonstrate that solutions of ill-posed inverse problems obtained by incorporating ℓ1-regularization with the learned prior matrix generally outperform those of commonly used regularization techniques in which the prior matrix is chosen a priori.
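As a rough illustration of the formulation described in this abstract (a hedged sketch only; the paper's exact objective, notation, and normalization may differ), with training examples x_1, ..., x_N and a multivariate ℓ1-Laplace model p(x) ∝ |det W| exp(−‖Wx‖_1) whose influence matrix is W, the design problem could be written as:

% Hypothetical sketch of the prior-matrix design objective; the sparsity weight
% lambda and the exact form of the penalty are assumptions, not taken from the paper.
\min_{W}\;
\underbrace{\sum_{i=1}^{N}\lVert W x_i\rVert_1 \;-\; N\log\lvert\det W\rvert}_{\text{negative log-likelihood (maximum likelihood term)}}
\;+\;
\underbrace{\lambda\,\lVert W\rVert_1}_{\text{sparsity-inducing term}}

The −log|det W| term makes the problem non-convex even though it is unconstrained; the learned W would then enter the downstream reconstruction, e.g. as min_m ½‖Am − d‖_2^2 + α‖Wm‖_1, which is one way to read "incorporation of ℓ1-regularization using the learned prior matrix."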
Shashanka Ubaru, Lior Horesh, et al.
Journal of Biomedical Informatics
Raviv Gal, Eldad Haber, et al.
MLCAD 2020
Michael F. O'Keeffe, Lior Horesh, et al.
New Journal of Physics