I'm thinking of adding motion blur to my 2D program, but I have doubts about my current algorithm.
My approach looks like this at the moment:

1. Draw the current frame to the back buffer as usual.
2. Alpha-blend the previous frame on top of it at some fixed opacity.
3. Swap the buffers and repeat.
The blending is what would create the "motion blur" effect, since objects in motion leave a fading trail behind them.
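For reference, here is a minimal sketch of that blend step, assuming 32-bit RGBA buffers in CPU-accessible memory (the function name and the 8-bit fixed-point math are illustrative, not from any particular API):

```cpp
#include <cstddef>
#include <cstdint>

// Blend the previous frame over the freshly rendered one. `alpha` (0..255)
// is the weight kept from the old frame: higher alpha means longer trails.
void blend_previous_frame(std::uint8_t* current, const std::uint8_t* previous,
                          std::size_t pixel_count, std::uint8_t alpha)
{
    for (std::size_t i = 0; i < pixel_count * 4; ++i) {
        // current = previous * alpha + current * (1 - alpha), in 8-bit fixed point
        current[i] = static_cast<std::uint8_t>(
            (previous[i] * alpha + current[i] * (255 - alpha)) / 255);
    }
}
```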
This approach is clearly not very demanding on the hardware: the double buffering happens anyway, and the only extra step is the alpha blending, which is a cheap per-pixel calculation. However, the trails will be very sharp rather than blurry, which may look a bit strange. I could apply a box blur to the back buffer before the blending step, but that feels like it could be very taxing on low-end systems like the Nintendo DS.
Are there any solutions that let me do it more efficiently or yield a better-looking result?
Really, you should render many intermediate frames and blend them into one result. Say, for example, that your output frame rate is 50 fps. You'd probably get a reasonable result if you rendered internally at 500 fps and blended each group of ten frames together before showing the result to the user.
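A sketch of that idea, assuming you can afford to render the scene several times per output frame; `render_scene_at()` is a hypothetical hook for your own renderer, and the RGBA8 buffer layout is illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder for your own renderer: draw the scene as it looks at time `t`
// (seconds) into `target` (RGBA8). Substitute your actual draw call here.
void render_scene_at(double t, std::uint8_t* target);

// Render `subframes` snapshots spread evenly across one output frame and
// average them with equal weight -- which is what true motion blur amounts to.
void render_motion_blurred_frame(std::uint8_t* out, std::size_t pixel_count,
                                 int subframes, double frame_start,
                                 double frame_duration)
{
    std::vector<std::uint32_t> accum(pixel_count * 4, 0);
    std::vector<std::uint8_t>  sub(pixel_count * 4);

    for (int s = 0; s < subframes; ++s) {
        // Sample evenly spaced instants inside the frame interval,
        // e.g. 10 subframes inside a 20 ms frame = 500 fps internally.
        double t = frame_start + frame_duration * (s + 0.5) / subframes;
        render_scene_at(t, sub.data());
        for (std::size_t i = 0; i < accum.size(); ++i)
            accum[i] += sub[i];
    }
    // Equal weighting: each subframe contributes 1/N to the final image.
    for (std::size_t i = 0; i < accum.size(); ++i)
        out[i] = static_cast<std::uint8_t>(accum[i] / subframes);
}
```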
The approach you are using at the moment simulates persistence, as if you were rendering onto an old CRT with slow phosphor. That isn't really the same thing as motion blur. If your frame duration is 20 ms (1000/50), then a motion-blurred frame consists of renderings spanning that 20 ms interval, all with the same alpha weighting. The persistence solution instead consists of renderings from 20, 40, 60, 80, 100 ms ago, gradually fading out.
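To see the difference in numbers, here is a small sketch (the persistence factor `a = 0.5` is just illustrative) that prints the effective weight each past frame carries under the blending approach. With true motion blur, every one of the N subframes inside the current frame interval would instead carry a flat weight of 1/N, and nothing older would contribute at all:

```cpp
#include <cstdio>

int main()
{
    const double a = 0.5; // illustrative persistence factor (weight kept from the old frame)
    const int n = 6;      // current frame plus five past frames (0..100 ms at 50 fps)

    // Under persistence blending, the frame rendered k frames ago ends up
    // with weight (1 - a) * a^k: a geometric fade over many frame intervals.
    double w = 1.0 - a;
    for (int k = 0; k < n; ++k) {
        std::printf("frame from %3d ms ago: weight %.4f\n", k * 20, w);
        w *= a;
    }
    return 0;
}
```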