dm-script

How can I improve the efficiency of Gaussian blur in dm-script compared to Python's SciPy implementation?


I'm using the following dm-script code to generate a Gaussian-blurred image. For a 4K image, the process takes about 2 seconds. However, when I perform a similar operation using Python's SciPy package, it takes only about 0.5 seconds.

Image gaussian_conv(Image src_img, Number std_dev)
{
    // Use the Warp function to stretch the image to fit into the revised dimensions
    number src_img_sx, src_img_sy
    src_img.GetSize(src_img_sx, src_img_sy)
    Image warp_img = RealImage("", 4, src_img_sx, src_img_sy)
    warp_img = Warp(src_img, icol * src_img_sx / src_img_sx, irow * src_img_sy / src_img_sy)

    // Create the Gaussian kernel with the same dimensions as the source image
    Image kernel_img := RealImage("", 4, src_img_sx, src_img_sy)
    Number xmidpoint = src_img_sx / 2
    Number ymidpoint = src_img_sy / 2
    kernel_img = 1 / (2*pi() * std_dev ** 2) * exp(-1 * (((icol - xmidpoint) ** 2 + (irow - ymidpoint) ** 2)/(2 * std_dev ** 2)))

    // Carry out the convolution in Fourier space
    ComplexImage fft_kernel_img := RealFFT(kernel_img)
    ComplexImage fft_src_img := RealFFT(warp_img)
    ComplexImage fft_product_img := fft_src_img * fft_kernel_img.modulus().sqrt()
    RealImage invFFT := RealIFFT(fft_product_img)

    // Warp the convoluted image back to the original size
    Image filter = RealImage("", 4, src_img_sx, src_img_sy)
    filter = Warp(invFFT, icol / src_img_sx * src_img_sx, irow / src_img_sy * src_img_sy)
    return filter
}

Image img := GetFrontImage()
Number std_dev
String prompt = "Input standard deviation. 0=no blurring, 1=minimal blurring, 3=mild blurring, 10=severe blurring"
GetNumber(prompt, 3, std_dev)
Number time1, time2
time1 = GetCurrentTime()
Image blurred_img := gaussian_conv(img, std_dev)
time2 = GetCurrentTime()
Result(CalcTimeUnitsBetween(time1, time2, 1) + "\n")
blurred_img.ShowImage()
The equivalent Python comparison, run from DigitalMicrograph's Python environment:

import DigitalMicrograph as DM
import scipy.ndimage.filters as sfilt
import time

img = DM.GetFrontImage()
img_np = img.GetNumArray()
start=time.perf_counter() 
img1_np = sfilt.gaussian_filter(img_np, 3)
end=time.perf_counter()
print("Processing Time = " + str(end - start))
img1 = DM.CreateImage(img1_np)
img1.ShowImage()
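For reference, here is a minimal NumPy/SciPy sketch (not the scripts above, and independent of DigitalMicrograph) of the same two approaches: Gaussian blur via an explicit product in Fourier space, as the dm-script function does, versus SciPy's `gaussian_filter`. SciPy runs two separable 1-D passes, which is typically faster than a full 2-D FFT round trip; with `mode="wrap"` its boundary handling matches the FFT's circular convolution, so the two results agree closely.

```python
import numpy as np
from scipy import ndimage

def gaussian_blur_fft(img, std_dev):
    # Build a centred 2-D Gaussian kernel on the full image grid,
    # mirroring the dm-script approach.
    ny, nx = img.shape
    y = np.arange(ny) - ny / 2
    x = np.arange(nx) - nx / 2
    yy, xx = np.meshgrid(y, x, indexing="ij")
    kernel = np.exp(-(xx**2 + yy**2) / (2 * std_dev**2))
    kernel /= kernel.sum()  # normalise so the blur preserves intensity
    # ifftshift moves the kernel centre to pixel (0, 0) so the Fourier
    # product is a plain circular convolution with no half-image shift.
    product = np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(kernel))
    return np.fft.ifft2(product).real

rng = np.random.default_rng(0)
img = rng.random((512, 512))

blur_fft = gaussian_blur_fft(img, 3)
# Separable spatial filter; "wrap" matches the FFT's periodic boundaries.
blur_sep = ndimage.gaussian_filter(img, 3, mode="wrap")

# The two results should differ only by the filter's tail truncation.
print(float(np.max(np.abs(blur_fft - blur_sep))))
```

Timing either call with `time.perf_counter()` (as in the question) shows the same gap: the separable filter does O(k) work per pixel per axis, while the FFT route pays for three full-size 2-D transforms.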


Solution

  • Not answering your original question (see the comments), but pointing out:

    Why reinvent the wheel? There are built-in filter functions you can use in most cases.

    Have you checked out the F1 help examples on filtering?

    For example, you can set up any filter in the "Image Filtering" UI and then use it from script:

    image test := RealImage("test raw", 4, 512, 512)
    test = Random()
    test.ShowImage()
    OKDialog("Now filter")
    string filtername = "MyLP"  // if your filter is saved as "MyLP"
    test.IFMApplyFilter(filtername).ShowImage()
    
