I ran into another anti-pattern today at work: configuration-dependent tests. No, this isn’t tests that are configured, but rather testing of code whose outputs depend on configuration.

The issue at hand is that I needed to turn off a relatively obscure feature in some code we have. No problem, right? I just edit the config and do a manual test of the resultant system to make sure things work. Test results in hand, I submit the code.

The next day (Monday, in this case) we have tests that are broken.

The problem is that the tests are not testing the code directly, but rather a combination of code plus configuration.

Here’s roughly the way the code looks right now:

if (checkConfig()) {
    // Not needed
    return null;
}
// Otherwise...
step1();
step2();
// ...

The problem is that this couples the config check to the steps that follow it in a single function. You’d be far better served writing a test for the steps themselves rather than for the function that contains both of them.

It would be better to do something like this:

if (checkConfig()) {
    // Not needed
    return null;
}
else {
    // Otherwise...
    runTests();
}

What you can do is make unit tests for runTests() apart from the containing code that both checks the config and runs the tests.
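As a sketch of what that looks like, suppose the extracted steps live on their own class. (Everything here is hypothetical: the `Pipeline` class, the logging `StringBuilder`, and the step bodies are my stand-ins for the real code, not what we actually have.) The point is that the test below never touches configuration at all:

```java
// Hypothetical sketch: the extracted steps live in their own class,
// so they can be exercised without any config check in the way.
class Pipeline {
    private final StringBuilder log = new StringBuilder();

    // The author's extracted function: just the steps, no config check.
    void runTests() {
        step1();
        step2();
    }

    void step1() { log.append("step1;"); }
    void step2() { log.append("step2;"); }

    String log() { return log.toString(); }
}

public class PipelineTest {
    public static void main(String[] args) {
        Pipeline p = new Pipeline();
        p.runTests(); // no configuration involved anywhere
        if (!p.log().equals("step1;step2;")) {
            throw new AssertionError("steps did not run in order");
        }
        System.out.println("ok");
    }
}
```

Flipping the config flag in production now can’t break this test, because the test never consults the config in the first place.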

If you wanted to be even more paranoid, you could even write tests for checkConfig().
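That’s cheap to do if checkConfig() reads from something you can inject rather than from global state. A minimal sketch, assuming an injected map and a `feature.enabled` key (both inventions of mine, not the real config):

```java
import java.util.Map;

// Hypothetical sketch: checkConfig() answers "is the feature disabled?"
// based on an injected map instead of a global config file.
class ConfigChecker {
    private final Map<String, String> config;

    ConfigChecker(Map<String, String> config) { this.config = config; }

    boolean checkConfig() {
        // true means "not needed, bail out early"
        return "false".equals(config.get("feature.enabled"));
    }
}

public class ConfigCheckerTest {
    public static void main(String[] args) {
        // Each case pins down one configuration, independent of the real config file.
        if (!new ConfigChecker(Map.of("feature.enabled", "false")).checkConfig()) {
            throw new AssertionError("disabled feature should short-circuit");
        }
        if (new ConfigChecker(Map.of("feature.enabled", "true")).checkConfig()) {
            throw new AssertionError("enabled feature should run the steps");
        }
        System.out.println("ok");
    }
}
```

Each test case supplies its own configuration, so the suite no longer cares what the deployed config file happens to say on Monday.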

On the other hand, I would never write tests for the aggregate function. I have faith that the compiler can run an if and call functions. If it can’t, well, then you have bigger problems.