ArticleS. UncleBob.
BastardChild

The Bastard Child

I was recently consulting for a client who told me that they had tried TDD, but had trouble with it. The production code kept getting messier and harder to understand. It was getting littered with interface arguments forced upon them by the tests. When they tried to refactor the production code they often found themselves changing dozens and dozens of tests. The burden of maintaining the tests was so great that they finally abandoned TDD. I found this puzzling, because it's not what happens to me, or to my other clients.

Later, in talking to another group at the same company I learned what was going on. That group told me that their company-wide policy was to treat tests as throw-away code. In tests they were allowed to break all the rules. They didn't have to follow their coding standards, they didn't have to maintain good design rules, they didn't have to eliminate duplication. They could just throw any old garbage code into the tests. Indeed, they were encouraged to do this.

My rule has always been to keep the test code to the same level of quality as the production code. Test code is not throw-away code. Test code is not somehow less important than production code. Indeed, test code may be more important than production code, since you can recreate the production code from the tests, but you can't recreate the tests from the production code.

When tests are ugly, maintaining them is hard; and that makes maintaining the production code hard because the tests have to be maintained along with the production code. There is no escape from this. If you want your production code to be easy to maintain, you must keep your tests easy to maintain. And that means that the tests must be kept to the same level of high quality as the production code. The tests need to be well structured, well designed, and well coded. They need to be readable, understandable, and flexible. Duplication must be eliminated from them.

Duplication in tests is particularly insidious. A unit test suite often consists of many very similar test cases, frequently created with copy-paste. It's not uncommon to see dozens of such cases, each subtly different from the next. It is important to identify and eliminate this duplication in tests whenever it appears. For an example of this see TheBowlingGameKata.
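To make this concrete, here is a minimal sketch in Python, in the spirit of TheBowlingGameKata. The `Game` class and its deliberately simplified scoring are assumptions for illustration, not code from the kata itself; the point is how the copy-pasted roll loop gets pulled into one `roll_many` helper.

```python
# Hypothetical Game class, assumed for illustration (scoring is
# simplified: no spare or strike bonuses).
class Game:
    def __init__(self):
        self.rolls = []

    def roll(self, pins):
        self.rolls.append(pins)

    def score(self):
        return sum(self.rolls)

# Duplicated style: every test repeats the same roll loop, each copy
# subtly different from the next.
def test_gutter_game_duplicated():
    g = Game()
    for _ in range(20):
        g.roll(0)
    assert g.score() == 0

def test_all_ones_duplicated():
    g = Game()
    for _ in range(20):
        g.roll(1)
    assert g.score() == 20

# Refactored style: the duplication lives in one well-named helper,
# and each test case reads as a single thought.
def roll_many(game, n, pins):
    for _ in range(n):
        game.roll(pins)

def test_gutter_game():
    g = Game()
    roll_many(g, 20, 0)
    assert g.score() == 0

def test_all_ones():
    g = Game()
    roll_many(g, 20, 1)
    assert g.score() == 20
```

When a new case is needed, the refactored style adds two or three readable lines instead of another near-identical copy of the loop.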

Test code is what we depend upon for design documentation, correctness verification, design support, and so many other things. Our tests support our refactoring. Our tests give us the courage to make changes. Our tests tell us what has gone wrong whenever we break something. Given that we depend on our tests for so much; doesn't it make sense to treat them well?

 Fri, 7 Oct 2005 18:21:18, David Chelimsky, The bastard setup
I've noticed that sometimes, in an effort to minimize duplication in tests, the tests become more difficult to understand. This occurs mostly when part of the setup is in a setUp method and part of it is in the test. Now you have two places to look for context in order to understand the test.

I've seen a couple of blogs and mailing list posts suggesting that setup is the root of all evil in tests, and that to eliminate duplication you should extract method from your test just as you would in the code being tested. I've been experimenting with this and it seems to work quite well in terms of being able to look at a test and understand its context without having to examine the whole file. Anybody else try this? Any thoughts on this?
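One way to sketch the difference, using Python's unittest (the `Account` class and the method names here are made up for illustration, not taken from the discussion above):

```python
import unittest

# Hypothetical domain object, assumed for illustration.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

# Split context: half the setup hides in setUp, so the reader must
# look in two places to understand the test.
class AccountTestWithSetUp(unittest.TestCase):
    def setUp(self):
        self.account = Account(balance=100)

    def test_withdraw(self):
        self.account.withdraw(30)
        self.assertEqual(self.account.balance, 70)

# Extracted creation method: the test names its own context, so it
# can be read top to bottom without consulting the rest of the file.
class AccountTestWithCreationMethod(unittest.TestCase):
    def account_with_balance(self, balance):
        return Account(balance=balance)

    def test_withdraw(self):
        account = self.account_with_balance(100)
        account.withdraw(30)
        self.assertEqual(account.balance, 70)
```

Both versions avoid duplicating the construction of `Account`, but only the second keeps the whole context of the test visible in the test itself.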
 Mon, 10 Oct 2005 04:55:01, Nat,
Test code that is easy to understand is somehow different from production code that is easy to understand. I'm not sure exactly how. I suspect it's that test code is a specification for production code, and so should feel declarative. Production code actually does something, and so is imperative.
 Tue, 31 Jan 2006 10:58:10, Dokta Towell, Hoare and Turing on assertional method of program proving
I ran across the following quote from C.A.R. Hoare's 1980 ACM Turing Award Lecture []

"Just recently, I have discovered that an early advocate of the assertional method of program proving was none other than Alan Turing himself. On June 24, 1950 at a conference in Cambridge, he gave a short talk entitled, "Checking a Large Routine" which explains the idea with great clarity. "How can one check a large routine in the sense of making sure that it's right? In order that the
man who checks may not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows." [p. 6 - right column]

The original Turing paper is online at:

Looks like they're talking about unit tests to me. Am I wrong about this?
Thanks - DrT
 Tue, 31 Jan 2006 18:17:09, Chaz Haws, Setup
David, can you share some resources on the evils of setup? I'm finding that in some of my tests, I'm actually evolving the setup quite separately from the tests, so I'm inclined to agree. I haven't yet found the right balance there myself. I may try some Managed C++ tests for my C# code and see if MI can help me out a bit, but while I'm at it I'd like to see how others are solving it.
 Fri, 3 Feb 2006 13:04:00, Uncle Bob, Turing and Hoare as Test Infected.
I think Turing's and Hoare's comments could just as easily be applied to Design by Contract as well as Unit Testing. The two systems approach the same goal from two very different directions, but are both based upon assertions made by programmers.

I find the discovery (by Larman) that the programmers for the Mercury Space Capsule wrote their unit tests in the morning and made them pass in the afternoon to be a much more compelling foreshadowing of TDD. That team was directly connected to Weinberg and Von Neumann.

It's fun to look back in history and see the foreshadowings of our current position.
 Mon, 27 Mar 2006 05:01:55, Michael Feathers, Setup
David, I avoid setup also. I like to make well-named factory methods that return an object, or a cluster of objects, in a known state.
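For example (the `Order`/`Customer` cluster and the factory name below are invented to illustrate the style, not taken from the comment above):

```python
# Hypothetical domain objects, assumed for illustration.
class Customer:
    def __init__(self, name):
        self.name = name

class Order:
    def __init__(self, customer):
        self.customer = customer
        self.lines = []

    def add_line(self, item, qty):
        self.lines.append((item, qty))

# Well-named factory method: returns a cluster of objects in a known
# state, instead of hiding that state in a setUp method.
def order_with_two_lines_for(customer_name):
    order = Order(Customer(customer_name))
    order.add_line("widget", 2)
    order.add_line("gadget", 1)
    return order

def test_order_line_count():
    order = order_with_two_lines_for("alice")
    assert len(order.lines) == 2
    assert order.customer.name == "alice"
```

The factory's name states the fixture's state, so each test declares its context in one line.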