python, numpy, image-processing, signal-processing, highpass-filter

Why do results from adjustable quadratic Volterra filter mapping not enhance dark/bright regions as in the paper?


Based on the paper "Adjustable quadratic filters for image enhancement" by Reinhard Bernstein, Michael Moore, and Sanjit Mitra (1997), I am trying to reproduce the image enhancement results. I followed the described steps, including implementing the nonlinear mapping functions (e.g., f_map_2 = x^2) and applying the 2D Teager-like quadratic Volterra filter as outlined.

More specifically, the filter used here is formula (53) from the paper "A General Framework for Quadratic Volterra Filters for Edge Enhancement". Formula (53) and the two mapping functions are shown in the image below.

map2, map5, and Teager's filter
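For readers who cannot see the image, here are the formulas as I transcribed them into my code (my transcription, so any error here would propagate):

```latex
f_{\mathrm{map2}}(x) = x^2,
\qquad
f_{\mathrm{map5}}(x) =
\begin{cases}
2x^2, & x \le 0.5,\\[2pt]
1 - 2(1-x)^2, & x > 0.5,
\end{cases}
```

and formula (53), the 2D Teager-like filter:

```latex
\begin{aligned}
y(m,n) = 3\,x^2(m,n)
&- \tfrac{1}{2}\,x(m{+}1,n{+}1)\,x(m{-}1,n{-}1)
 - \tfrac{1}{2}\,x(m{+}1,n{-}1)\,x(m{-}1,n{+}1)\\
&- x(m{+}1,n)\,x(m{-}1,n)
 - x(m,n{+}1)\,x(m,n{-}1)
\end{aligned}
```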

My pipeline is: normalize the input grayscale image to the range [0, 1]; map it with one of the predefined functions (for the definitions of f_map_2 and f_map_5, see the image above); pass the mapped image through the Teager filter (formula (53)); multiply the filter output by a coefficient alpha and add it to the original image for sharpening (unsharp masking); and finally denormalize back to the range [0, 255].

import cv2
import numpy as np
import matplotlib.pyplot as plt

def normalize(img):
    return img.astype(np.float32)/255.0

def denormalize(img):
    """Convert image to [0, 255]"""
    return (img * 255).clip(0, 255).astype(np.uint8)

def input_mapping(x, map_type='none'):
    """Apply input mapping function according to the paper"""
    if map_type == 'none':
        return x  # none (4b)
    elif map_type == 'map2':
        return x**2  # f_map2: x^2 (4c)
    elif map_type == 'map5':
        # piece-wise function f_map5 (4d)
        mapped = np.zeros_like(x)
        mask = x > 0.5
        mapped[mask] = 1 - 2*(1 - x[mask])**2
        mapped[~mask] = 2 * x[~mask]**2
        return mapped
    else:
        raise ValueError("Invalid mapping type")
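
As a quick aside, a sanity check I find useful for the piecewise map (self-contained, with f_map5 restated as a one-liner): both branches agree at x = 0.5, and the endpoints 0 and 1 are fixed, so the mapping is continuous and stays within [0, 1].

```python
import numpy as np

def f_map5(x):
    # Same piecewise definition as input_mapping(..., 'map5') above
    return np.where(x > 0.5, 1 - 2*(1 - x)**2, 2*x**2)

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(f_map5(x))  # endpoints fixed, branches meet at 0.5: 0, 0.125, 0.5, 0.875, 1
```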

def teager_filter(img):
    padded = np.pad(img, 1, mode='reflect')
    out = np.zeros_like(img)
    for i in range(1, padded.shape[0]-1):
        for j in range(1, padded.shape[1]-1):
            x = padded[i, j]
            t1 = 3*(x**2)
            t2 = -0.5*padded[i+1, j+1]*padded[i-1, j-1]
            t3 = -0.5*padded[i+1, j-1]*padded[i-1, j+1]
            t4 = -1.0*padded[i+1, j]*padded[i-1, j]
            t5 = -1.0*padded[i, j+1]*padded[i, j-1]
            out[i-1, j-1] = t1 + t2 + t3 + t4 + t5
    return out
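
A sanity check on the filter (my own, not from the paper): the stencil's coefficients sum to zero (3 - 0.5 - 0.5 - 1 - 1 = 0), so a constant image maps to (numerically) zero; the filter responds only to local variation. The slice-based form below is equivalent to the double loop and much faster:

```python
import numpy as np

def teager_vec(img):
    # Same stencil as teager_filter above, written with array slices
    p = np.pad(img, 1, mode='reflect')
    c = p[1:-1, 1:-1]
    return (3*c**2
            - 0.5*p[2:, 2:]*p[:-2, :-2]
            - 0.5*p[2:, :-2]*p[:-2, 2:]
            - p[2:, 1:-1]*p[:-2, 1:-1]
            - p[1:-1, 2:]*p[1:-1, :-2])

flat = np.full((8, 8), 0.7)
print(np.abs(teager_vec(flat)).max())  # ~0: no response to constant regions
```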

def enhance_image(image_path, alpha, map_type='none'):
    """Enhance images with optional input mapping"""
    # Image reading and normalization
    img = cv2.imread(image_path, 0)
    if img is None:
        raise FileNotFoundError("No image found!")
    img_norm = normalize(img)
    
    # Input mapping
    mapped_img = input_mapping(img_norm, map_type)
    
    # Teager filter
    teager_output = teager_filter(mapped_img)
    
    enhanced = np.clip(img_norm + alpha * teager_output, 0, 1)
    
    return denormalize(enhanced)

input_path = r"C:\Users\tt\OneDrive\Desktop\original_image.jpg"
original_image = cv2.imread(input_path, 0)
alpha = 0.1

enhanced_b = enhance_image(input_path, alpha, map_type='none')
enhanced_c = enhance_image(input_path, alpha, map_type='map2')
enhanced_d = enhance_image(input_path, alpha, map_type='map5')

titles = ['Original', 'No Mapping (b)', 'Map2 (c)', 'Map5 (d)']
images = [original_image, enhanced_b, enhanced_c, enhanced_d]

plt.figure(figsize=(15, 5))
for k, (im, title) in enumerate(zip(images, titles), start=1):
    plt.subplot(1, 4, k)
    plt.imshow(im, cmap='gray')
    plt.title(title)
    plt.axis('off')

plt.tight_layout()
plt.show()

However, my output images from using mappings like f_map_2 and f_map_5 do not resemble the ones shown in the paper (specifically, images (c) and (d) below). Instead of strong enhancement in bright and dark regions, the results mostly show slightly darkened edges with almost no contrast boost in the target areas.

These are my results: my result

And these are the paper's results: paper's result

In case it helps, I'll also post the raw output of the Teager filter above, before multiplying by alpha and adding to the original image: Teager output (before unsharp masking)

I tried changing alpha, but it didn't help. I also tried adding a denoising step in the normalization function; the image still looks almost identical to the original. I also tested the filter on other grayscale images with various content, but the outcome remains similar: mainly edge thickening, without visible intensity-based enhancement.

Has anyone successfully reproduced the enhancement effects described in the paper? Are there implementation details or parameters (e.g., normalization, unsharp masking, or mapping scale) that are critical but not clearly stated? I am providing the original image below in case anyone wants to reproduce my process.

Input image

Any insights, references, or example code would be appreciated.


Solution

  • I think I found your error. In enhance_image() where you compose the final image, i.e.

    enhanced = np.clip(img_norm + alpha * teager_output, 0, 1)

    you accidentally use your normalized image img_norm instead of the mapped image mapped_img.

    Replacing this line by

    enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1)

    produces something useful: gray wedge sample image

    Note that the Teager filter only enhances high-frequency components of your image, so teager_output does not differ much whether you pass mapped_img or img_norm into it. When you compose the low-pass and high-pass parts, you therefore have to use mapped_img in order to keep the effect of the applied mapping.

    I would also suggest keeping file I/O outside your image processing functions; this makes it easier to inject other data for debugging purposes.

    def enhance_image(img, alpha, map_type='none'):
        """Enhance images with optional input mapping"""
        
        img_norm      = normalize(img)                     # Image normalization
        mapped_img    = input_mapping(img_norm, map_type)  # Input mapping
        teager_output = teager_filter(mapped_img)          # Teager filter
    
        # Compose enhanced image, enh = map(x) + alpha * teager
        enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1)
    
        return denormalize(enhanced)  # Map back to original range
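
To make the difference visible without any input file, here is a self-contained sketch (my own construction, not from the paper) that compares both compositions on a synthetic gray wedge, similar to the sample image above. The filter is restated with array slices so the snippet runs on its own:

```python
import numpy as np

def teager_filter(img):
    # Same stencil as in the question, written with array slices
    p = np.pad(img.astype(np.float64), 1, mode='reflect')
    c = p[1:-1, 1:-1]
    return (3*c**2
            - 0.5*p[2:, 2:]*p[:-2, :-2]
            - 0.5*p[2:, :-2]*p[:-2, 2:]
            - p[2:, 1:-1]*p[:-2, 1:-1]
            - p[1:-1, 2:]*p[1:-1, :-2])

def f_map5(x):
    return np.where(x > 0.5, 1 - 2*(1 - x)**2, 2*x**2)

wedge = np.tile(np.linspace(0.0, 1.0, 256), (32, 1))  # normalized gray wedge
alpha = 0.1
mapped = f_map5(wedge)
hp = teager_filter(mapped)

buggy = np.clip(wedge + alpha*hp, 0, 1)   # original composition: mapping lost
fixed = np.clip(mapped + alpha*hp, 0, 1)  # corrected: mapping survives

# On a smooth wedge the high-pass term is tiny, so the buggy result barely
# deviates from the input, while the fixed one carries the tonal change.
print(np.abs(buggy - wedge).max(), np.abs(fixed - wedge).max())
```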