Test-Driven Development Done Right

A couple of years ago, I had at least two misconceptions about Test-Driven Development: (1) you should write all your tests up-front and (2) after verifying that a test case can fail, you should make it pass right away. To better understand TDD, I got a copy of the book “Growing Object-Oriented Software Guided by Tests” by Freeman and Pryce (now one of my absolute favorites). Although the book does a great job explaining the concepts, it took me ten chapters to admit I had been wrong. Let’s never do that again. :)

Let me walk you through my misconceptions so that you don’t have to repeat my mistakes:

Misconception 1: write all tests up-front. Thinking about potential test cases up-front is not a bad thing. It exercises your imagination, and with some luck many of those cases will still be applicable after the code is written. But don’t waste energy trying to compile an exhaustive list of tests. At least for me, this approach didn’t work: my imagination turned out to be too limited to come anywhere near the final list. But most of all, I wanted to get going and write some actual test code!

You are better off writing a few happy-path test cases. Filling in the test code gets you started on the public interface of your classes. Once the test code starts acting as a “user” of that interface, it becomes obvious whether the API is pleasant or awkward to work with. The tests drive you to improve the interface. Creating the interface will invariably make you think about error cases and about how the API can be abused or misunderstood. You will come up with more test cases and implement those. With some effort, though surprisingly little, you will grow your test suite.
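To make that concrete, here is a minimal sketch of what a first happy-path test might look like. I am assuming Java and JUnit 4 here, and the ShoppingCart class with its addItem and totalPrice methods is invented purely for illustration:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class ShoppingCartTest {

        // Happy-path case: the total is the sum of the added item prices.
        // Writing this first forces a decision about the API
        // (constructor, addItem, totalPrice) before any production code exists.
        @Test
        public void totalsThePricesOfAddedItems() {
            ShoppingCart cart = new ShoppingCart();  // hypothetical class under test
            cart.addItem("apple", 10);
            cart.addItem("pear", 15);
            assertEquals(25, cart.totalPrice());
        }
    }

If those few arrange-act-assert lines already feel clumsy to write, that is the test telling you the real callers of your API will suffer too.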

Misconception 2: fail the test, then make it pass right away. When you have written your test code, filled in just enough production code to get it all to compile, and seen the test fail, it is very tempting to fix everything at once and make the test pass. You can certainly do that. But there are at least two reasons not to.

First, I strongly prefer an incremental approach: fix only the problem the test reports! If the test says “null pointer exception”, I fix exactly that. Run the test again, get the next failure, and fix that one. This is the convenient, even lazy, approach: you just let the test drive you. It also results in minimal increments, which is very helpful if another test case breaks while you are changing the production code.
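As a sketch of what such a minimal increment can look like, continuing the hypothetical ShoppingCart example, with the failing runs narrated in comments:

    import java.util.ArrayList;
    import java.util.List;

    public class ShoppingCart {

        // Run 1 failed with a NullPointerException in addItem(), because
        // this field was never initialized. The minimal fix was the
        // initializer below; nothing else changed in that step.
        private final List<Integer> prices = new ArrayList<>();

        public void addItem(String name, int price) {
            prices.add(price);
        }

        // Run 2 then failed with "expected 25 but was 0" while this method
        // still returned a hard-coded 0. Summing the prices was the next,
        // equally small, step.
        public int totalPrice() {
            int total = 0;
            for (int price : prices) {
                total += price;
            }
            return total;
        }
    }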

Second, fixing the failing test case right away throws away a lot of information in the process. When a test fails, it provides valuable information about what went wrong. If you cannot immediately understand what the problem is, maybe you should improve your test or production code. For example, if you get a “null pointer exception”, perhaps error handling or an assert earlier in the production code could make sure your program never gets into that kind of corrupted state. Alternatively, you could extend your test code with all kinds of helpful diagnostics. The idea is: if it takes you time to understand what went wrong today, imagine how much time will be wasted when the same test case fails in six months. “Growing Object-Oriented Software Guided by Tests” says you should “let the tests talk to you”. There you go: you are test-driven.
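Here is a sketch of the first idea, again with invented names: a guard clause makes the production code fail fast, with a message that explains the broken state instead of letting a null surface as an opaque exception much later:

    import java.util.Objects;

    public class ShoppingCart {

        // Fail fast with an explicit message instead of letting a null
        // propagate and show up as a bare NullPointerException somewhere
        // deeper in the code, long after the real mistake was made.
        public void addItem(String name, int price) {
            Objects.requireNonNull(name, "item name must not be null");
            if (price < 0) {
                throw new IllegalArgumentException(
                        "negative price for " + name + ": " + price);
            }
            // ...store the item as before...
        }
    }

On the test side, the same principle applies: JUnit’s assertEquals accepts a message as its first argument, so assertEquals("total after apple(10) and pear(15)", 25, cart.totalPrice()) fails with a sentence about the domain rather than bare numbers, which is exactly what you will want six months from now.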