
Yes For Debuggers


Debuggers get a lot of bad press in the TDD community. I frankly don't understand it.

Debuggers have one really important feature that you can't get anywhere else: the stack trace. It doesn't get fooled by delegates and decorators. It isn't obscured by interfaces and other dependency-management indirection. It gives you the only clear and unadulterated view of the runtime structure of the system -- objects talking to objects.

That doesn't sound like much to you? Try to get that information by tracing through the code or writing tests. You're not going to get the same quality of information in the same period of time. Bang for buck, you can't beat the debugger for a quick answer.
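To make that concrete, here's a minimal Python sketch (the class names are invented for illustration). Stopped inside the innermost call, the debugger's stack shows the delegation chain that a static reading of the source hides:

import pdb

class Service:
    def work(self):
        pdb.set_trace()   # at the (Pdb) prompt, 'where' (or 'w')
                          # lists every live frame on the stack

class LoggingDecorator:
    def __init__(self, wrapped):
        self.wrapped = wrapped
    def work(self):
        self.wrapped.work()   # delegation the source doesn't show at the call site

LoggingDecorator(Service()).work()
# 'where' shows LoggingDecorator.work calling Service.work --
# the actual runtime chain, objects talking to objects.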

One answer I hear is that you shouldn't have to write code for debuggability. At the same time, I hear people saying that writing code for testability is fine. Why not both? I am ready to hear some good alternatives.


 Thu, 29 Dec 2005 00:03:55, Mike, the stack trace
umm... I can get the stack trace from printStackTrace...
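(For instance, the Python analogue is a one-liner -- traceback.print_stack() dumps the live call stack with no debugger attached:

import traceback

def inner():
    # rough analogue of Java's printStackTrace on a fresh Throwable:
    # print the current call stack to stderr, no debugger needed
    traceback.print_stack()

def outer():
    inner()

outer()   # prints the outer -> inner chain
)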
 Thu, 29 Dec 2005 01:16:25, PhlIp, myths of TDD include "no debugging"
When something abuses you long enough, you start to bond with it and defend it. So when people hear that TDDers avoid debugging, they might think we mean "never touch the debugger". So then they trot out the reasons you might debug, and these are typically situations where you can't work in small increments: legacy code, changing a library, interfacing to hardware, etc.

There's no rule "don't debug". The goal is this: Debugging should never be your only option. You always have the option to not debug.

So this leaves you free to debug as much as you like. Test cases make an excellent platform for debugging, and for reviewing live code.
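For example, a small (invented) sketch in Python: a unit test hands the debugger a tiny, repeatable scenario instead of a whole running application:

import unittest

def parse_line(line):
    # deliberately naive: splits on every comma, ignoring quotes
    return line.split(",")

class ParserTest(unittest.TestCase):
    def test_quoted_field_is_not_split(self):
        # This fails against the naive parser above. Run the file under
        # 'python -m pdb' and break here: the test has already built the
        # exact failing scenario for you.
        self.assertEqual(['"a,b"', "c"], parse_line('"a,b",c'))

if __name__ == "__main__":
    unittest.main()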
 Thu, 29 Dec 2005 03:06:58, Dennis van der Stelt, myth or not?
Phlip says there's no such rule as "don't debug". I think he's partially right. Perhaps the whole Agile and TDD community should be clearer about what they mean. When the Agile Manifesto says they value working software over comprehensive documentation, people seem to read it as "Agilists don't write documentation!" :)

Perhaps the same goes for the debugger, although I believe there are people who will first go through all their tests before actually starting up the debugger. Be pragmatic; Agile and TDD are principles, not a set of rules!
 Fri, 30 Dec 2005 08:21:01, David Chelimsky, Disciplined TDD over debugging
I think the idea is not that debuggers themselves are to be avoided; it's that debugging itself should be avoided. Why? At my old company, we'd get bug reports and the project manager would ask me how long a fix would take. I'd tell him with some confidence that it would take somewhere between 2 hours and 6 weeks. I was almost always right. And therein lies the problem: debugging is inherently unpredictable and therefore significantly increases risk on a project. It's much easier to predict the cost of making something new than the cost of debugging.

So we should *avoid* debugging. But that doesn't mean don't debug. And as far as I'm concerned, adding print statements to your code is debugging - and riskier than using a debugger because you're changing code. It's rare, but I've seen new bugs introduced in debugging sessions due to typos. This is not to say that I don't ever add print statements :)

So how do we avoid debugging? Not simply by committing to not debug. But by committing to practices that minimize the need. There's an inverse relationship between the level of discipline with which you apply the rules of TDD and the number of debugging sessions you'll find yourself in. I think *that* is the point.
 Fri, 30 Dec 2005 12:26:35, Uncle Bob, Debuggers are a two edged sword
I have been outspoken about my avoidance of debuggers. My attitude is that every time I must fire up a debugger, I have failed. Perhaps I have failed to make my code so clear that I don't need a debugger to understand it. Perhaps I have failed to work in cycles that are so small that I don't need a debugger to find out what went wrong. Whatever the reason, when I am forced to use a debugger it means that I need to adjust my practices so that I can avoid using a debugger next time.

Having said that, I will use a debugger if I must. A good debugger is an invaluable tool that can help me find out what's going on in my code. Having used that debugger to find a problem, I will then try to figure out why I had the problem in the first place, and then adjust my practices so that it doesn't happen again.

As a result, I almost never use a debugger. I consider this to be a good thing.
 Fri, 30 Dec 2005 13:29:28, Tim Ottinger, What if it's not failure?
Bob, what if it's not YOUR code you're using the debugger on? Maybe someone else didn't make it clear enough, or maybe there's code you've never seen before. Can you adjust the practices of other people so that you never need a debugger to figure out what they're doing? And what if you're on an unfamiliar platform?

So you walk into iPodder, which is written in Python using the WxWindows framework, and written by people you don't know and who may not share a common background. Say you don't know much about bittorrent and RSS feeds. Do you think that you have failed if you need to fire up a debugger? Do you feel that you should be able to simply sit down and write tests first in PyUnit without delay or lost productivity? If you have to fire up the debugger, is it evidence of some personal fault of yours?
 Fri, 30 Dec 2005 16:20:47, Mike, If it's not your code
then of course it's not your "fault", and you may often need the debugger. But I agree with Bob that needing a debugger is a signal, in that case, to ask yourself what it is about the code that makes you need one, how you could avoid that the next time you're writing new code, and whether there's some improvement worth making to the current code to avoid the debugger in the future.
 Fri, 30 Dec 2005 16:24:26, Mike, On an unfamiliar platform
I still prefer to write "spike" tests to explore unknown behaviour, rather than using a debugger extensively. With the tests, I have a permanent record I can go back to, to see how an unfamiliar API method behaves, for example, rather than trusting my increasingly untrustworthy memory about what I saw in a debugger session several weeks ago.
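For example, a couple of spike tests against a corner of Python's standard library; weeks later they still document exactly what was observed:

import unittest

class StrSplitSpike(unittest.TestCase):
    """A permanent record of how str.split behaves."""

    def test_no_argument_collapses_runs_of_whitespace(self):
        self.assertEqual(["a", "b"], "a   b".split())

    def test_explicit_separator_keeps_empty_fields(self):
        self.assertEqual(["a", "", "b"], "a,,b".split(","))

if __name__ == "__main__":
    unittest.main()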
 Fri, 30 Dec 2005 16:30:23, Mike, BTW, Tim, are you trying to get fired from OM?!
 Fri, 30 Dec 2005 16:32:10, Mike, Just kidding!!!
 Mon, 2 Jan 2006 16:45:07, Chiaroscuro, Debugging as a Smell
I just downloaded an IDE to start debugging a piece of code, and I feel like it *is* an admission of failure. I need to debug because my code is not readable. I need to debug because I abstracted early and in the wrong places. This is my little story and my humble suggestion: don't start out by producing DRY code; let it DRY on its own.
 Tue, 3 Jan 2006 18:42:14, Paul Pagel, last resort
I find the debugger to be a last resort. I may misread the need for a debugger, since I believe there is no place for one in the normative agile process, only in exceptional cases. Before reaching for it, I first consider whether:

  • there might be something wrong with the test rather than with the production code.
  • the test tests too much at once.

Most importantly, it means I should look at the design with regard to patterns and principles, since needing a debugger is a symptom of some smell.
 Wed, 4 Jan 2006 08:09:41, Tim Ottinger, I think that's the wrong way to think here
I don't think that's a productive way to think. The debugger is a tool. It's not the only one, but it is a tool. And as with any tool, we need to figure out what it's good for and what it's not good for. It's not a replacement for testing or clear coding, but I never said it was. I said it was good for code spelunking. There are good reasons and bad reasons to do that.

With the debugger, you might find an answer to a question about the nature of an already-existing error, or about a routine, that you could otherwise only find out indirectly and with some effort. It's mighty easy. Now, this ease is also the siren song, no? Might we find it too easy to just fire up the debugger and have it show us our problems? I've not found this to be the case. People who use debuggers don't write worse code than people who don't (though those who write confusing code have to resort to the debugger for issues that shouldn't require it).

Also, it's a good device for exploring code that you did not write and do not understand. It's hardly a personal failing on your part if someone else wrote confusing code, though it could be your flaw (if not your fault) for not knowing the platform API well enough. If you don't know what a routine is doing, then you can find it quite enlightening to walk through it in a debugger and find out what that mysterious API call is returning. The ability to set watches and examine variables is quite helpful, and can save an awful lot of time.
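In Python, for instance, that walk-through might look something like this (the feed-fetching routine is invented for the example):

import pdb

def fetch_feed(url):
    # stand-in for the unfamiliar routine you're exploring
    return {"status": 200, "entries": ["first", "second"]}

# pdb.run stops before the statement executes; at the (Pdb) prompt:
#   step          walk into fetch_feed line by line
#   p url         examine a variable
#   display url   re-show it at every stop, like a watch
#   where         show the call chain you're in
pdb.run('fetch_feed("http://example.com/feed")')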

A debugger is a great alternative to guessing and second-guessing. That kind of uncertainty is a time waster (I can prove this only anecdotally, but you probably can as well). Not knowing is a crippling thing and leads us to make a lot of false starts. We shouldn't be guessing when we can just look.

This of course is second-best, since the better route is to use languages that allow you to do exploration at a command line (ruby (irb), groovy (groovysh), boo (booish), python (python), etc.). But if you MUST use a compiled language, then the debugger can help fill the gaps. There's no reason to assume that the use of a debugger is wrong, unless you are using it to compensate for poor coding practices. If you are doing that, then it's not the use of the debugger that's the problem. Once you're done with your spelunking, you need to fix the real problem.


 Thu, 5 Jan 2006 04:23:47, Tobias, Debugging
Debuggers are just tools -- and IMHO very effective ones. When all your tests pass but the application fails, starting the debugger is one way to track down the problem, and often it's the shortest way. When you've figured out the cause of the problem with the debugger, you just shouldn't forget to add a test that reveals it.

A very common problem in environments like C++ is the use of invalid pointers, causing segfaults. Such things aren't easy to reveal in unit tests. Starting up gdb and doing a backtrace is a matter of minutes to find the problem in the code.

But there's another side to the story. Debugging your failing tests is often a design smell. The tests may be too complex or cover too much of your code, and it's probably time to extract another class or method somewhere.

So maybe my advice would be: if your tests pass but the application fails, run the debugger, find the problem, write a test that reveals the problem, and finally make the code pass the new test. If your tests fail and you don't see the problem, try to write smaller tests covering smaller portions of your code -- if you need to use the debugger here, something may be wrong with your design.
 Thu, 5 Jan 2006 10:15:30, Tim Ottinger, Something You Don't Understand
Well, a debugger is great when there is Something You Don't Understand (SYDU(tm)). Sometimes it's your fault you don't understand. Sometimes it's someone else's. Sometimes it's not your fault, but just your situation.

Yesterday I wrote a test and didn't provide an object (mock or real) where one was expected. My unit test failed, but it was darned hard to tell why -- what was missing. The line of code where the null reference occurred looked pretty simple, but any one of at least four different objects could have been null. It took time. Without a debugger (printf, etc.) it was ugly. With a debugger it became obvious. I didn't understand the test framework I was writing in -- and the SUT wasn't written for debugging, so it had a complex line:
x.object = y[key];
That's not complex looking, until you start to consider the many failure modes. I'll cover that in a separate blog.
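A sketch of those failure modes in Python (x, y, and key come from the line above; the rest is invented):

class Holder:
    def __init__(self):
        self.object = None

def assign(x, y, key):
    # the one statement below hides several distinct failures:
    #   x is None          -> AttributeError (nothing to assign to)
    #   y is None          -> TypeError (nothing to index into)
    #   key not in y       -> KeyError
    #   y[key] is None     -> "succeeds", and blows up somewhere later
    # Stopped here in a debugger, one glance at x, y, and key tells you
    # which case you're in; without it you guess, or add four prints.
    x.object = y[key]

assign(Holder(), {"k": 42}, "k")   # the happy path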

But I didn't understand, and the debugger did in seconds what not-debugger could not do easily at all.
 Thu, 5 Jan 2006 13:10:40, Michael Feathers,
If that's a complex line for a debugger, I don't think I'd want anyone to change it to make debugging easier. The thought scares me to my core (no pun intended).
 Sun, 8 Jan 2006 16:28:18, Guillermo Schwarz, Using the debugger to debug the tests
Debugging a server process takes too long, reproduction is clumsy and can be affected by other people doing their own tests, and 30% of the time it will take too long to reproduce the bug.

If your server fails, write down the steps that make the server fail. Then create a test that shows that the operation fails at the client level. Make the same test run at the server level, to show that it is not a communication problem.

Then split the test into mini tests that show that the mini operations are correct. If one of them should conceptually work but fails, you can debug that mini test to see what went wrong. Usually the information you get from that debugging session is worth a thousand examples.
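A sketch of that splitting in Python (the account operations are invented):

import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount
    def deposit(self, amount):
        self.balance += amount
    def transfer_to(self, other, amount):
        self.withdraw(amount)
        other.deposit(amount)

class TransferTest(unittest.TestCase):
    # One end-to-end test only says "transfer fails somewhere".
    # Mini tests point at the mini operation that's wrong, and the
    # failing one is a small, fast target for a debugging session.

    def test_withdraw_reduces_balance(self):
        account = Account(100)
        account.withdraw(30)
        self.assertEqual(70, account.balance)

    def test_deposit_increases_balance(self):
        account = Account(100)
        account.deposit(30)
        self.assertEqual(130, account.balance)

    def test_transfer_moves_money(self):
        source, target = Account(100), Account(0)
        source.transfer_to(target, 40)
        self.assertEqual(60, source.balance)
        self.assertEqual(40, target.balance)

if __name__ == "__main__":
    unittest.main()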

Debugging for days at a time to stumble on a bug is a huge waste of time, especially if the application is big.

A drawing is worth a thousand words.
An example is worth a thousand drawings.
A test case (of a class) is useful as an example of its use (of that class).

A test is worth a million words.

A debugging session of a failing test that conceptually should work is worth a thousand tests.

A debugging session of a failing test that conceptually should work is worth a billion words.

There are two types of defects: random and systematic. Systematic errors are present every time you write something similar. So to avoid systematic errors:

1. Apply DRY: Don't Repeat Yourself. If you don't repeat the code, you don't have a chance to repeat your mistakes all over the place.
2. Apply code reviews, or even better, Pair Programming. Your systematic errors are probably not the same as those of the people you pair with. Even better if you rotate your partners from time to time.
3. Write tests. Tests may detect your systematic errors. Even better if you can reuse your tests. And it gets better if you debug simple tests that conceptually should work, because you end up with even better tests.

Especially important these days is writing tests for the services you use and create, because most applications use lots of legacy services. Write mocks for those services, make sure the whole system runs under those mock services, then reuse those tests as integration tests. I reuse the application's tests so that when we run against the mock services everything works, because that's quick and easy. Then I switch to the real services, make sure all the service tests pass, and make sure the same tests used for testing the UI against the mock services are used against the real services.
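A sketch of that arrangement in Python (the service and its names are invented): the same test runs against the mock by default, and against the real service when you flip the switch:

import unittest

class MockQuoteService:
    # stands in for the slow, shared legacy service
    def price(self, symbol):
        return {"ACME": 10.0}.get(symbol, 0.0)

def make_service(use_real):
    if use_real:
        raise NotImplementedError("plug in the real legacy client here")
    return MockQuoteService()

class QuoteTest(unittest.TestCase):
    use_real = False   # flip to True to reuse this as an integration test

    def setUp(self):
        self.service = make_service(self.use_real)

    def test_known_symbol_has_a_price(self):
        self.assertEqual(10.0, self.service.price("ACME"))

if __name__ == "__main__":
    unittest.main()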

Cheers,
Guillermo.