matlab, image-processing, optimization, mathematical-optimization

Creating Filter's Laplacian Matrix and Solving the Linear Equation for Image Filtering


I have an optimization problem to solve in order to filter an image.
I formulated the problem as a linear equation which involves sparse matrices.

First, I will present the problem, starting with its Laplacian (adjacency) matrix:

L_g = Dx' * Ax * Dx + Dy' * Ay * Dy

The matrices Dx / Dy are the forward difference operators, hence their transposes are the backward difference operators.
The matrices Ax / Ay are diagonal matrices with weights which are a function of the gradient of the image (pointwise, namely each value depends only on the gradient at that pixel itself).
The weights are:
Ax(i, i) = exp(-(Ix(i))^2 / (2 * alpha^2)),   Ay(i, i) = exp(-(Iy(i))^2 / (2 * alpha^2))
Where Ix(i) is the horizontal gradient of the input image at the i-th pixel (when the input image is vectorized), and Iy(i) is the corresponding vertical gradient.

Assuming an input image G -> g = vec(G) = G(:),
I want to find an image U -> u = vec(U) = U(:) such that:

(I + lambda * L_g) * u = g

My questions are:

  1. How can I build the matrices Dx / Dy / Ax / Ay efficiently (they are all sparse)?
  2. By setting M = (I + \lambda * {L}_{g}), is there an optimized way to create M directly?
  3. What would be the best way to solve this linear problem in MATLAB? Is there a way to bypass memory limitations (namely, dealing with large images and still being able to solve it)?
  4. Is there an open source library to solve it under limited memory resources? Any library with a MATLAB API?

Thank You.


Solution

  • Solving such a problem involves solving a large scale linear system.
    The system is defined by a sparse matrix whose size is proportional to the input image size.
    In the case above the model matrix is SPD (Symmetric Positive Definite). It is a five point, spatially inhomogeneous Laplacian matrix.

    There are 2 options to solve this:

    1. Direct Solver
      Build the matrix explicitly and use a direct solver (preferably one which can be notified about the matrix properties, e.g. being SPD).

    2. Iterative Solver
      For such a matrix, use the Conjugate Gradient method.
      An iterative method allows using the operator form of the matrix instead of the explicit matrix.

    See Solving Large Scale Sparse Linear System of Image Convolution, Interpolation by Solving a Minimization Problem (Optimization), Efficient Solver for Solving a Large Linear System Sequentially of a Positive Definite Matrix.
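    As a sketch of the direct route (questions 1-3), the sparse matrices can be built with spdiags() and kron() and the system solved with the backslash operator. The names, sizes, and parameter values below are illustrative, and the boundary handling matches the 'valid' convolutions used later:

```matlab
% Sketch of the direct solver (illustrative sizes and parameters).
% Assumes column-wise vectorization, g = G(:).
numRows = 32; numCols = 32;
paramLambda = 2; paramAlpha = 0.1;
mG = rand(numRows, numCols);
vG = mG(:);
numPx = numRows * numCols;

% 1D forward difference operators (the 'valid' part only).
mDr = spdiags([-ones(numRows - 1, 1), ones(numRows - 1, 1)], [0, 1], numRows - 1, numRows);
mDc = spdiags([-ones(numCols - 1, 1), ones(numCols - 1, 1)], [0, 1], numCols - 1, numCols);

% 2D operators by Kronecker products (column-major vectorization).
mDy = kron(speye(numCols), mDr); % Vertical differences
mDx = kron(mDc, speye(numRows)); % Horizontal differences

% Diagonal weight matrices Ax / Ay from the gradients of g.
vIx = mDx * vG; vIy = mDy * vG;
mAx = spdiags(exp(-(vIx .^ 2) / (2 * paramAlpha ^ 2)), 0, numel(vIx), numel(vIx));
mAy = spdiags(exp(-(vIy .^ 2) / (2 * paramAlpha ^ 2)), 0, numel(vIy), numel(vIy));

% M = I + lambda * Lg; M is SPD, so backslash uses a sparse Cholesky.
mL = (mDx.' * mAx * mDx) + (mDy.' * mAy * mDy);
mM = speye(numPx) + paramLambda * mL;
vU = mM \ vG;
```

    For small images this is simple and exact; the memory cost is in the factorization fill-in, which is what pushes large images toward the iterative route.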

    For large images the iterative method will usually be faster.
    In that case, applying the matrix in operator form with convolution operations will be faster than a sparse matrix product.

    The above could be implemented as:

    function [ mAx, mAy ] = GenMatAxAy( vY, numRows, numCols, paramAlpha )
    
    mY = reshape(vY, numRows, numCols);
    
    % Forward differences (horizontal / vertical gradients of the image)
    mIx = conv2(mY, [1, -1], 'valid');
    mIy = conv2(mY, [1; -1], 'valid');
    
    % Gaussian weights as a function of the local gradient
    mAx = exp(-(mIx .* mIx) / (2 * paramAlpha * paramAlpha));
    mAy = exp(-(mIy .* mIy) / (2 * paramAlpha * paramAlpha));
    
    
    end
    
    
    function [ vX ] = ApplyOpA( vY, numRows, numCols, mAx, mAy, paramLambda )
    
    mY = reshape(vY, numRows, numCols);
    
    % Forward differences (Dx * u, Dy * u)
    mIx = conv2(mY, [1, -1], 'valid');
    mIy = conv2(mY, [1; -1], 'valid');
    
    % Apply the diagonal weights (Ax, Ay)
    mIx = mIx .* mAx;
    mIy = mIy .* mAy;
    
    % Backward differences (the transposed operators Dx.', Dy.')
    mIx = conv2(mIx, [-1, 1], 'full');
    mIy = conv2(mIy, [-1; 1], 'full');
    
    % vX = (I + lambda * Lg) * vY
    vX = vY + (paramLambda * (mIx(:) + mIy(:)));
    
    
    end
    

    I used Julia to implement it, as it allows an allocation free implementation.
    Applying the operation using explicit matrix multiplication was ~20 times slower than using convolution.

    For large enough images this suggests that the solution using the iterative method will be faster.
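    In MATLAB, the operator form plugs directly into pcg(), which accepts a function handle that applies M instead of an explicit matrix. The sketch below assumes GenMatAxAy() and ApplyOpA() above are on the path; the parameter values are illustrative:

```matlab
% Sketch: matrix free solve of (I + lambda * Lg) * u = g with pcg().
numRows = 64; numCols = 64;
paramLambda = 2; paramAlpha = 0.1;
mG = rand(numRows, numCols);
vG = mG(:);

% The weights are computed once, from the input image.
[mAx, mAy] = GenMatAxAy(vG, numRows, numCols, paramAlpha);

% pcg() accepts a function handle instead of an explicit matrix.
hOpA = @(vU) ApplyOpA(vU, numRows, numCols, mAx, mAy, paramLambda);
vU = pcg(hOpA, vG, 1e-6, 250);
mU = reshape(vU, numRows, numCols);
```

    Since M is SPD and well conditioned for moderate lambda, pcg() converges quickly even without a preconditioner, and memory usage stays at a few image-sized buffers.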


    The code is available on my StackOverflow GitHub Repository (Look at the StackOverflow\Q24016744 folder).