Given a function F that performs a numerical computation on 32-bit IEEE-754 floating-point numbers, what would be the best way to test whether F is numerically stable? Is there a black-box test that does not need to know anything about the function other than its argument types?
Well, you could cycle through all the floating point numbers, do higher-order forward differencing, and look for regions where the resultant derivative approximation gets really big. Ultimately, though, it would be impossible to prove that the roughness was the result of instability, as opposed to actual features of the function being modeled. After all, every black box is a perfect model of some function.
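In practice you would probably sample an interval rather than literally walking every float32 value. Here is a rough sketch in Python/NumPy; the name `roughness_scan`, the sample count, and the 100×-median cut-off are arbitrary choices of mine, not anything prescribed:

```python
import numpy as np

def roughness_scan(f, lo, hi, n=100_000):
    """Sample [lo, hi] in float32 and flag points where the second
    forward difference of f gets suspiciously large."""
    x = np.linspace(lo, hi, n, dtype=np.float32)
    y = np.array([f(v) for v in x], dtype=np.float32)
    # second forward difference: f(x+2h) - 2*f(x+h) + f(x)
    d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]
    # arbitrary cut-off: 100x the median roughness
    threshold = 100.0 * np.median(np.abs(d2)) + np.finfo(np.float32).eps
    return x[1:-1][np.abs(d2) > threshold]
```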
If you had 32-bit and 64-bit versions of the same black box, you could specifically look for areas where the 64-bit version's forward differences were smoother than the 32-bit version's.
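A sketch of that comparison, again with arbitrary names and thresholds (`f32` and `f64` stand for the two versions of the black box):

```python
import numpy as np

def compare_precisions(f32, f64, lo, hi, n=100_000):
    """Flag points where the 32-bit version's forward differences are
    much rougher than the 64-bit version's, hinting at instability
    rather than a genuine feature of the underlying function."""
    x64 = np.linspace(lo, hi, n, dtype=np.float64)
    x32 = x64.astype(np.float32)
    # cast the 32-bit results up to float64 so the comparison is fair
    d32 = np.diff(np.array([f32(v) for v in x32], dtype=np.float64))
    d64 = np.diff(np.array([f64(v) for v in x64]))
    # arbitrary cut-off: 32-bit differences deviate by more than 10x
    # the local 64-bit difference magnitude
    rough = np.abs(d32 - d64) > 10.0 * (np.abs(d64) + np.finfo(np.float32).eps)
    return x64[:-1][rough]
```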