ArticleS. MichaelFeathers.
IterationSlop

Do You Have Iteration Slop?


Iteration Slop, what is it?

Iteration Slop is the time you spend before an iteration, preparing for it, and the time you spend after an iteration, finalizing your work. Some forms of iteration slop are traditional. Most teams do story writing and some pre-planning one iteration ahead. This makes sense if that work doesn't occupy everyone on the team. It keeps the team from being idle, although one could argue that there are often high-value things the rest of the team could do while this work is happening.

The worst form of iteration slop is what I call trailer-hitched QA. When your team does trailer-hitched QA, the developers work until the last possible moment of the iteration, and when the iteration ends, they hand off to the QA team. The QA team gets the "final" build and works with it for a while, running more automated and manual tests. Often this takes close to the length of an iteration. At the end of the process, QA gives their approval and the development team looks back at them, in the distance, and says "Great! We really did finish that work!"

Effectively, trailer-hitched QA doubles the amount of time it takes for a team to really know that they finished an iteration. It can work, but it is kind of counterproductive in a process that aims to shorten the time it takes to get feedback.

Reducing iteration slop is hard work, but it is worthwhile. Planning can be shortened, and some of it can even be pulled into an iteration. The QA problem, however, is tougher. The QA team is often seen as the last line of defense before release. After all, the reasoning goes, if they don't catch the bugs, who will? As a result, QA-ing a piece of software becomes an exacting and tension-filled activity, particularly if the quality of the software is already poor. Some older applications, applications without many tests, are like old river beds. The water appears clear, but when you step in, the silt comes up instantly, clouding everything. In code like this, bugs are discovered during QA, but they are often old bugs churned up from the deep. When things like that are possible, it's hard to bite the bullet and attempt to pull QA activities into the iteration. What if something is missed? There's no safety net.

I've seen many teams with this form of post-iteration slop, and the teams that get out of it -- the teams that are able to start pulling a lot of the QA work for an iteration into the iteration -- usually end up reducing the slop slowly as they find their bug count falling. When you notice that you aren't producing as many bugs, it's easier to have the courage to spend a little less time doing post-iteration QA. However, there are some cul-de-sacs that can trap teams. Some teams end up relying on their post-iteration slop. It may not be a conscious thing, just a little bit of decreased diligence. But, unfortunately, often that is enough to completely muddle any sense of done on the team. "Yes, we are done", the team says at the end of the iteration, "we passed all our tests", but the bug backlog silently increases, quality goes down, and the project heads toward ruin.

There are many things that can be done with teams that end up relying on their post-iteration slop. Developers can learn how to write tests as they go and work very carefully to avoid bugs, and QA can concentrate on better techniques, but sometimes the best way of dealing with iteration slop is to just slowly reduce it: slowly decrease the amount of time you spend QA-ing the application outside the iteration. When you do this, QA ends up adopting a new role. They aren't gatekeepers anymore but rather expert resources who work with the developers, in the current iteration, to help them avoid injecting bugs into the code in the first place, and to get them feedback as quickly as possible. The team works clean and confidence increases.
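
To make "write tests as they go" concrete, here is a minimal sketch of the kind of automated test a developer might write alongside the code, so the feedback arrives inside the iteration instead of waiting for a post-iteration QA pass. The OrderDiscount class and its discount rule are invented for illustration; the only point is that checking the behavior becomes part of the iteration's work rather than something trailing behind it.

    // Minimal sketch (JUnit 3 style): a developer-written test that runs
    // inside the iteration, giving feedback right away instead of after a
    // post-iteration QA pass. OrderDiscount and its 10%-over-$100 rule are
    // hypothetical examples, not taken from the article.
    import junit.framework.TestCase;

    public class OrderDiscountTest extends TestCase {

        // Hypothetical production class, inlined to keep the sketch self-contained.
        static class OrderDiscount {
            double percentFor(double orderTotal) {
                return orderTotal >= 100.00 ? 10.0 : 0.0;
            }
        }

        public void testNoDiscountBelowThreshold() {
            assertEquals(0.0, new OrderDiscount().percentFor(99.00), 0.001);
        }

        public void testTenPercentAtOrAboveThreshold() {
            assertEquals(10.0, new OrderDiscount().percentFor(100.00), 0.001);
        }
    }

Tests like these don't replace QA's expertise; they give QA something in-iteration to review and extend, which is what the new role described above amounts to.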

In the end, agility is about many things, but shortening the feedback cycle and finding out about problems when they occur are two of its key components. When you have iteration slop, that feedback loop is far longer than it needs to be. Fortunately, it's something that can be remedied.



 Wed, 24 Aug 2005 06:55:27, Jason Yip, Move the slop forward
I'm not sure what you mean by "slowly decrease the amount of time you spend QA-ing the application outside the iteration". My gut reaction to post-iteration slop, which I've also seen a lot of, is to move the slop forward. Is that what you mean?
 Wed, 24 Aug 2005 08:22:12, MichaelFeathers, Forward into the iteration
Yes, that's what I meant. To me, slop is activity that falls outside the iteration, so the slop goes away when we move the activity into the iteration. There's usually a lot of reluctance toward doing this. Many QA people think that unless you are testing the final build of an iteration, the work is pointless because developers can change what you've worked on during the iteration. But, the fact is, for all iterations except the last in a release, that's true anyway. The only question is when the team will get the feedback... how long will they have to wait for it. Working in-iteration is a big shift.
 Wed, 24 Aug 2005 20:04:11, Ron Jeffries, and now ...
Good stuff, Michael. I look forward to some expansion on different kinds of slop and ways to address it. Thanks!
 Thu, 25 Aug 2005 00:31:30, Steve Donie, Just In Time
My team (Jeremy Miller and I) have been doing scrum-type agile development together since January. We are now starting to use FitNesse and also starting to try to introduce agile practices to other folks in the company working on some legacy code. Your book is going to be helpful in that regard. And this post is right on time - we just spent the day in a meeting trying to decide where we were going to put various things, including expanding stories into 'acceptance test sentences', creating fit tables, creating fixtures, and moving QA closer to our iterations. Great stuff.
 Tue, 13 Sep 2005 01:36:08, Andrew Phillips, Testing is not QA
First, I agree with the gist of your article. However, I don't like your use of the name "QA team", which sounds like a traditional test team. Post-iteration inspection is *not* quality assurance, since you can't "add" quality; it must be built in. I really don't even like the idea of a separate QA team at all, since quality should be something that everyone is involved with.

There are many reasons why it should be the responsibility of the creator of the software to check that it behaves correctly. First, the creator understands the internals and will have a much better understanding of what to look for - post-iteration inspection can only ever hope to find a fraction of the defects. Second, it is easier to detect defects immediately - it is much easier to fix (and test) things when they are fresh in the mind. Third, giving consideration to testing during design and coding can lead to better designs - for example, error conditions are better considered and thought is given to testability (the ease with which defects can be detected) and debuggability (the ease with which defects can be tracked down).

Also, having a test team makes defects (much?) more likely - since finding bugs is the testers' responsibility, many programmers will spend less effort on finding and preventing them. Lastly, many people find testing full-time to be tedious or even demeaning.
 Tue, 4 Oct 2005 07:43:09, Tim Haughton, Continuous QA
Michael, I'm curious to know what Continuous QA would look like. On most projects that I work on, I try to have a daily build that could be released if the iteration was chopped short for some reason. I wonder what QA would be like if, every morning, QA installed the previous day's build and used it all day.