c++, testing, googletest, data-driven

Is data-driven testing bad?


I've started using googletest to implement tests and stumbled across this quote in the documentation regarding value-parameterized tests:

  • You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it!

I think I'm indeed "abusing" the system when doing the following and would like to hear your input and opinions on this matter.

Assume we have the following code:

template<typename T>
struct SumMethod {
     T op(T x, T y) { return x + y; }   
};

// optimized function to handle different input array sizes 
// in the most efficient way
template<typename T, class Method> 
T f(T input[], int size) {
    Method m;
    T result = (T) 0;
    if(size <= 128) {
        // use m.op() to compute result etc.
        return result;
    }
    if(size <= 256) {
        // use m.op() to compute result etc.
        return result;
    }
    // ...
}

// naive and correct, but slow alternative implementation of f()
template<typename T, class Method>
T f_alt(T input[], int size);

OK, so with this code it certainly makes sense to test f() (by comparing it against f_alt()) with different input array sizes of randomly generated data, so that every branch gets exercised. On top of that, I have several structs like SumMethod, MultiplyMethod, etc., so I end up running quite a large number of tests across different types as well:

typedef MultiplyMethod<int> MultInt;
typedef SumMethod<int> SumInt;
typedef MultiplyMethod<float> MultFlt;
// ...
// extra parentheses keep the commas in the template argument lists
// from being parsed as macro argument separators
ASSERT_EQ((f<int, MultInt>(int_in, 128)), (f_alt<int, MultInt>(int_in, 128)));
ASSERT_EQ((f<int, MultInt>(int_in, 256)), (f_alt<int, MultInt>(int_in, 256)));
// ...
ASSERT_EQ((f<int, SumInt>(int_in, 128)), (f_alt<int, SumInt>(int_in, 128)));
ASSERT_EQ((f<int, SumInt>(int_in, 256)), (f_alt<int, SumInt>(int_in, 256)));
// ...
const float ep = 1e-6f;
ASSERT_NEAR((f<float, MultFlt>(flt_in, 128)), (f_alt<float, MultFlt>(flt_in, 128)), ep);
ASSERT_NEAR((f<float, MultFlt>(flt_in, 256)), (f_alt<float, MultFlt>(flt_in, 256)), ep);
// ...
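
Written with the value-parameterized feature the quoted warning refers to, the size dimension of these checks might look roughly like this. This is only a sketch: it assumes f(), f_alt() and SumMethod are visible here, and it uses the newer INSTANTIATE_TEST_SUITE_P spelling (older googletest releases call it INSTANTIATE_TEST_CASE_P).

#include <gtest/gtest.h>
#include <vector>

class SumIntSizeTest : public ::testing::TestWithParam<int> {};

TEST_P(SumIntSizeTest, MatchesNaiveImplementation) {
    const int size = GetParam();
    std::vector<int> input(size, 1);  // deterministic fill, for illustration only
    // Extra parentheses keep the template-argument commas out of the macro.
    ASSERT_EQ((f<int, SumMethod<int>>(input.data(), size)),
              (f_alt<int, SumMethod<int>>(input.data(), size)));
}

// One named test case per size, covering the branch boundaries of f().
INSTANTIATE_TEST_SUITE_P(ArraySizes, SumIntSizeTest,
                         ::testing::Values(1, 127, 128, 129, 255, 256, 257));

Each size then shows up as its own named test case instead of one assert in a long list, which at least makes failures easier to pin down.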

Now of course my question is: does this approach make sense at all, and if so, why would it be considered bad practice?

In fact, I have found a "bug" when running tests with floats, where f() and f_alt() would give different values with SumMethod due to rounding, which I could improve by presorting the input array, etc. From this experience I consider this actually somewhat good practice.
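
The underlying cause is just that floating-point addition is not associative, so f() and f_alt() summing in a different order can legitimately disagree. A toy illustration (the values are chosen only to make the effect visible at float precision):

#include <cstdio>

int main() {
    // At this magnitude a float can only resolve steps of 8, so the order
    // in which the small values are added changes the rounded result.
    float big = 1e8f;
    float small = 3.0f;
    float left_to_right = (big + small) + small;  // each 3 is rounded away
    float small_first   = big + (small + small);  // 3 + 3 = 6 survives rounding
    std::printf("%.1f vs %.1f\n", left_to_right, small_first);
    // prints: 100000000.0 vs 100000008.0
    return 0;
}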


Solution

  • I think the main problem is testing with "randomly generated data". It is not clear from your question whether this data is re-generated each time your test harness is run. If it is, then your test results are not reproducible. If some test fails, it should fail every time you run it, not once in a blue moon, upon some weird random test data combination.

    So in my opinion you should pre-generate your test data and keep it as a part of your test suite. You also need to ensure that the dataset is large enough and diverse enough to offer sufficient code coverage.

    Moreover, as Ben Voigt commented below, testing with random data only is not enough. You need to identify the corner cases in your algorithms and test them separately, with data tailored specifically for those cases. Still, in my opinion additional testing with random data is also beneficial when you are not sure that you know all your corner cases: you may hit them by chance. One reproducible way to set that up is sketched below.
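
    To make the reproducibility point concrete: if you do want randomized inputs, seeding the generator with a fixed constant (or at least logging the seed on failure) keeps every run, and therefore every failure, repeatable, while the branch boundaries of f() still get their own hand-picked sizes. A rough sketch (the seed value and the helper names are arbitrary choices, not part of any existing API):

    #include <cstddef>
    #include <random>
    #include <vector>

    // Fixed seed: the "random" data is identical on every run (with the same
    // standard library), so a failing input can always be regenerated.
    std::vector<int> make_test_data(std::size_t size) {
        std::mt19937 rng(12345);
        std::uniform_int_distribution<int> dist(-1000, 1000);
        std::vector<int> data(size);
        for (int& x : data) x = dist(rng);
        return data;
    }

    // Corner cases around the branch boundaries of f() are listed explicitly
    // rather than left for the random data to hit by chance.
    const std::size_t kCornerSizes[] = {1, 127, 128, 129, 255, 256, 257};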