Where do you find problems during refactoring?

I’m in the midst of reading Oren’s latest post about Working Software Over Comprehensive Documentation.  While I generally agree with Oren (and I realize he’s a big enough man that I don’t want to piss him off), I did see one statement in that post that scared the shit out of me.  The following statement was made in the context of a discussion about refactoring to a previously tried and proven-to-fail (for whatever reason) implementation.

And then I would run the tests and they would show that this causes failure in another part, or it would be caught in QA, or I would do the responsible thing and actually get a sense of the system before I start messing with it.

Okay, Oren.  Here’s where I get someone large in stature (not JP or Bellware) between you and me.  What the hell, man?  Sure, you run the tests and they might show the ensuing failures.  Might is the operative word there.  What if there is no test for the implementation that you’ve just switched to?  Well, according to your statement it “…would be caught in QA”.  From that statement, I understand that you, Oren, are claiming that it is okay to defer finding issues that result from refactoring until QA.
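This is exactly why I pin down existing behavior before I touch it.  Here’s a minimal sketch of a characterization test — the function name and its rules are entirely made up for illustration, and I’m using Python’s xUnit-style unittest module just to keep the sketch self-contained:

```python
import unittest

# Hypothetical function about to be refactored; its name and behavior
# are invented for this example, not taken from Oren's post.
def normalize_name(name):
    # Current, known-good behavior: trim whitespace, then title-case.
    return name.strip().title()

class NormalizeNameCharacterization(unittest.TestCase):
    # These tests pin the CURRENT behavior, so swapping in a
    # previously-failed implementation fails here, not in QA.
    def test_trims_surrounding_whitespace(self):
        self.assertEqual(normalize_name("  oren  "), "Oren")

    def test_title_cases_multiword_names(self):
        self.assertEqual(normalize_name("ayende rahien"), "Ayende Rahien")

# Run with: python -m unittest <this module>
```

If the implementation you’re switching to has no test like this around it, the only safety nets left are QA and production — and that’s the whole problem.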

I understand that we are always sending bad, buggy and sometimes plain crap code to QA.  If there were no defects in the code we sent them, what would they have to do all day?  But having them there as a second sober look (the first being the development team) is not carte blanche to send code off to them willy-nilly.  If we allow ourselves, as developers, to accept the thought that “…it will be caught in QA”, we’re standing at the top of a very, very slippery slope.

If it’s okay to say “My tests didn’t catch it so it must be good.  Send it to QA.” when performing refactoring, what will be the next task that it’s acceptable to say this for?  Perhaps when I build that screen and don’t really feel like checking the tab order, I should be allowed to make that same statement.  Now you’re firmly on the slippery slope, sliding headfirst to the bottom where you will be greeted by a giant pile of manure.

We developers are pushing techniques such as TDD, BDD and Continuous Integration through tools like xUnit, ReSharper, CruiseControl.NET and many more.  We push this stuff because we claim to want to ensure that we can build and maintain code at the highest levels of quality possible.  Sometimes, in striving for those lofty peaks of quality, we will have to pause and write some documentation about the code.  We will have to make notes about techniques and past experiences that have failed.  If we don’t, we are doomed to repeat our own history.
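Those notes don’t all have to live in a Word document, either.  Here’s a sketch of recording a failed approach as an executable note — the caching scenario and every name in it are invented for illustration:

```python
import unittest

# Invented example: a simple cache whose earlier (hypothetical)
# implementation once served stale values.
class SimpleCache:
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

class CacheRegressionTests(unittest.TestCase):
    def test_overwrites_are_visible_immediately(self):
        # History note: a previous copy-on-read implementation kept
        # serving the old value after put().  This test records that
        # failure so the approach cannot quietly come back.
        cache = SimpleCache()
        cache.put("answer", 41)
        cache.put("answer", 42)
        self.assertEqual(cache.get("answer"), 42)
```

A failing test with a comment like that is documentation the next developer cannot skip reading.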

Like Oren (this is where I suck up), I’m firmly in the camp of Working Software.  That said, I like to be pragmatic about the tradeoff between Working Software and Comprehensive Documentation.  Note that the tradeoff is with comprehensive documentation.  I’m currently seeing a system where every minute detail must be documented prior to releasing features to development.  With as much paper as is being generated, you’d think that the next generation of developers (namely maintenance) would have no problem figuring out what is going on with the system.  In truth, they could probably figure it out in about the same amount of time given 50% of the documents that we’re generating.  That’s where you need to make the tradeoff.

Are we giving the next developer who looks at this enough to figure things out in an acceptable time frame?  If you can take away some of the documentation and spend that time providing business functionality to the client, everyone will be a lot happier.