I’ve been quite vocal in the past about the overall uselessness of most (not all!) unit tests. Mocked tests, I feel, are especially useless: they move nothing but that “percent lines covered” metric. But that’s a rant for another time.

If I don’t think that unit tests (written to chase strict coverage) are useful, do I think that all automated testing is useless? No. A resounding no, in fact.

The way I look at it, 99% of the time you spend writing unit tests would be far better spent writing integration (a.k.a. regression) tests. The problem with unit tests is that they run in complete isolation from the system as a whole. Just because you have passing unit tests does not mean that the system will actually function. This, I believe, is the dirty secret of the code-coverage cult.
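To make that concrete, here’s a sketch (all names hypothetical) of the failure mode: a mocked unit test that passes cheerfully, and bumps coverage, while the code under test is broken against the real dependency.

```python
from unittest import mock

# Hypothetical "real" dependency: the payment gateway insists on integer cents.
class PaymentGateway:
    def charge(self, amount_cents):
        if not isinstance(amount_cents, int):
            raise TypeError("amount must be integer cents")
        return amount_cents > 0

# Code under test has a genuine bug: it passes dollars as a float.
def checkout(gateway, amount_dollars):
    # bug: should be gateway.charge(int(round(amount_dollars * 100)))
    return gateway.charge(amount_dollars)

def test_checkout_mocked():
    # The mock accepts any argument, so this "unit test" passes,
    # even though the real gateway would raise TypeError.
    gateway = mock.Mock()
    gateway.charge.return_value = True
    assert checkout(gateway, 19.99) is True
```

Run `checkout(PaymentGateway(), 19.99)` and it blows up; run the mocked test and everything is green. That gap is exactly what coverage numbers hide.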

Now, on the other hand, if you write integration tests that run against a real, running system, you gain real confidence that the system actually works as designed. If you can at least test the happy cases (and some common fault cases as they might occur in practice), then you can have real faith in the system.
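A minimal sketch of what I mean, with an in-process stand-in for the deployed service (assumption: in real life you’d point these checks at an actual staging or production URL, not spin up the server yourself):

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the deployed service, so this sketch is self-contained.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging
        pass

def start_server():
    # Port 0 asks the OS for a free ephemeral port.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_happy_path(base_url):
    # Happy case: the running system answers with a healthy status.
    with urllib.request.urlopen(f"{base_url}/health") as resp:
        return resp.status == 200 and json.load(resp)["status"] == "ok"

def check_fault_case(base_url):
    # Common fault case: an unknown route should 404, not hang or 500.
    try:
        urllib.request.urlopen(f"{base_url}/nope")
        return False
    except urllib.error.HTTPError as e:
        return e.code == 404
```

The point is that these checks exercise the wiring end to end: routing, serialization, status codes. A mocked test touches none of that.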

This all came up in the context of a discussion about setting up continuous deployment (CD) for a system. CD means that every commit (assuming tests pass) goes directly to production. I could not give a rat’s ass if you have 110% unit test coverage; I would still not want to push to production on that basis. Passing unit tests don’t give me faith that a system works. Sure, the code runs, but “code running” does not a working system make.

Now, if you had integration tests… then I would press the “I Agree” button on making the pipeline CD.

Thoughts?