I want to keep a relatively constant blur size across images of different resolutions using the GPUImageGaussianSelectiveBlurFilter from Brad Larson's GPUImage.
Say I have two images, one 1000x1000 and one 2000x2000, and I want the blur on the 2000x2000 image to appear the same relative size as on the 1000x1000 one. So I set blurSize to 1.0 on the 1000x1000 image and 2.0 on the 2000x2000 image. The blurring I want often requires a blurSize on the larger image well above 1.0.
((GPUImageGaussianSelectiveBlurFilter *)self._selectiveFocusFilterSmall).blurSize = 1.0;
((GPUImageGaussianSelectiveBlurFilter *)self._selectiveFocusFilterLarge).blurSize = 2.0;
Then I force processing at each image's size; without this, the blur isn't normalized between the two:
[self._selectiveFocusFilterSmall forceProcessingAtSize:CGSizeMake(1000, 1000)];
[self._selectiveFocusFilterLarge forceProcessingAtSize:CGSizeMake(2000, 2000)];
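Generalized, my approach amounts to something like the following helper (the method name and the 1000-pixel reference width are arbitrary choices of mine, not GPUImage API):

- (void)setNormalizedBlurSize:(CGFloat)baseBlurSize
                     onFilter:(GPUImageGaussianSelectiveBlurFilter *)filter
                 forImageSize:(CGSize)imageSize
{
    // Scale blurSize with image width so the blur stays visually
    // proportional; at kReferenceWidth it equals baseBlurSize exactly.
    static const CGFloat kReferenceWidth = 1000.0;
    filter.blurSize = baseBlurSize * (imageSize.width / kReferenceWidth);
    [filter forceProcessingAtSize:imageSize];
}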
Small:
[image: blurred 1000x1000 result] (source: kevinharringtonphoto.com)
Large:
[image: blurred 2000x2000 result, showing box artifacts] (source: kevinharringtonphoto.com)
Large up close:
[image: close crop of the 2000x2000 result] (source: kevinharringtonphoto.com)
How do I get rid of the boxing in the larger blurred image while maintaining that blur size? I'd love to know if there's a better approach to normalizing the blur size across multiple images with GPUImage.
Those box-like artifacts you see at high blurSize settings are a byproduct of the way Gaussian blurs are handled in GPUImage. In order to ensure optimal performance, a fixed number of samples (9) is used in the Gaussian blur kernel I employ. The blur is separated into horizontal and vertical passes, so it operates over a 9x9 (81 pixel) area using only 18 texture reads.
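To make that concrete, here's a simplified sketch of one such pass, written as a GPUImage-style shader string. The weights below are illustrative placeholders rather than the framework's exact coefficients, and the blurCoordinates varyings are filled in by a matching vertex shader (sketched further below):

NSString *const kNineTapBlurFragmentShader = SHADER_STRING
(
    varying highp vec2 blurCoordinates[9]; // precomputed by the vertex shader

    uniform sampler2D inputImageTexture;

    void main()
    {
        // One 1D pass: nine weighted reads along a single axis.
        lowp vec4 sum = vec4(0.0);
        sum += texture2D(inputImageTexture, blurCoordinates[0]) * 0.05;
        sum += texture2D(inputImageTexture, blurCoordinates[1]) * 0.09;
        sum += texture2D(inputImageTexture, blurCoordinates[2]) * 0.12;
        sum += texture2D(inputImageTexture, blurCoordinates[3]) * 0.15;
        sum += texture2D(inputImageTexture, blurCoordinates[4]) * 0.18;
        sum += texture2D(inputImageTexture, blurCoordinates[5]) * 0.15;
        sum += texture2D(inputImageTexture, blurCoordinates[6]) * 0.12;
        sum += texture2D(inputImageTexture, blurCoordinates[7]) * 0.09;
        sum += texture2D(inputImageTexture, blurCoordinates[8]) * 0.05;
        gl_FragColor = sum;
    }
);

Running this once horizontally and once vertically gives the 9x9 coverage with 18 total reads described above.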
The blurSize parameter adjusts the between-sample spacing. At 1.0 it's one pixel / texel, but higher values start to skip pixels as the blur radius expands. Beyond 1.5 or so, artifacts like the ones you see above begin to appear, because larger and larger blocks of pixels are skipped over by the blur kernel.
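The gaps are easy to see if you work out where the taps actually land (illustrative arithmetic only):

// Each 1D pass samples at offset = (tap - 4) * blurSize texels
// from the center. At blurSize = 1.0 that covers every texel from
// -4 to +4; at blurSize = 2.0 it reads -8, -6, ..., +6, +8 and
// skips every other texel in between.
CGFloat blurSize = 2.0;
for (NSInteger tap = 0; tap < 9; tap++)
{
    CGFloat offset = (tap - 4) * blurSize;
    NSLog(@"tap %ld reads at %+.1f texels", (long)tap, offset);
}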
I've hardcoded the 9 samples, with their weights and locations calculated in the vertex shader, for performance reasons. Supplying precalculated values to the fragment shader avoids dependent texture reads, which can yield more than a tenfold increase in shader performance on iOS devices. Adding a for loop with a variable number of Gaussian samples would slow this down significantly.
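A similarly hedged sketch of the vertex-shader side, which precomputes the nine sample coordinates so the fragment shader above never has to calculate them (texelWidthOffset and texelHeightOffset follow GPUImage's uniform naming conventions, but this is not the framework's exact shader):

NSString *const kNineTapBlurVertexShader = SHADER_STRING
(
    attribute vec4 position;
    attribute vec4 inputTextureCoordinate;

    uniform float texelWidthOffset;  // blurSize / texture width for the horizontal pass
    uniform float texelHeightOffset; // blurSize / texture height for the vertical pass

    varying vec2 blurCoordinates[9];

    void main()
    {
        gl_Position = position;
        vec2 singleStepOffset = vec2(texelWidthOffset, texelHeightOffset);
        // Nine evenly spaced taps centered on the current texel; the
        // spacing scales with blurSize, which is why larger values
        // skip texels.
        for (int i = 0; i < 9; i++)
        {
            blurCoordinates[i] = inputTextureCoordinate.xy + singleStepOffset * float(i - 4);
        }
    }
);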
Still, there might be a way to generalize the blur to use a variable number of precalculated Gaussian samples for smaller blur sizes, then fall back to the more expensive for loop only for larger ones.
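For illustration, a rough sketch of how the fragment shader could be generated at runtime for an arbitrary tap count, with normalized Gaussian weights computed once on the CPU (this function and its weight math are my own, not GPUImage API):

#import <Foundation/Foundation.h>
#include <math.h>

NSString *fragmentShaderForTapCount(NSUInteger taps, double sigma)
{
    NSInteger radius = (NSInteger)(taps / 2);

    // Compute normalized Gaussian weights once, at shader-build time.
    double weights[taps];
    double total = 0.0;
    for (NSInteger i = 0; i < (NSInteger)taps; i++)
    {
        double x = (double)(i - radius);
        weights[i] = exp(-(x * x) / (2.0 * sigma * sigma));
        total += weights[i];
    }

    NSMutableString *shader = [NSMutableString string];
    [shader appendFormat:@"varying highp vec2 blurCoordinates[%lu];\n", (unsigned long)taps];
    [shader appendString:@"uniform sampler2D inputImageTexture;\n"];
    [shader appendString:@"void main()\n{\n    lowp vec4 sum = vec4(0.0);\n"];
    for (NSUInteger i = 0; i < taps; i++)
    {
        [shader appendFormat:@"    sum += texture2D(inputImageTexture, blurCoordinates[%lu]) * %f;\n",
                             (unsigned long)i, weights[i] / total];
    }
    [shader appendString:@"    gl_FragColor = sum;\n}\n"];
    return shader;
}

In practice, the number of varyings the GPU supports caps how many taps can be precomputed this way, which is why the fallback for loop in the fragment shader would still be needed for very large blurs.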