Found in 4 comments
by defined
> An objection I hear to this is that you're not just writing tests for yourself, you're writing tests for the others who will need to help maintain your code, perhaps after you're gone. I'm somewhat sympathetic to this, but I would also say that if someone else needs to modify my code, they damn well better first understand it well enough such that they could write tests before changing it (if they deem it necessary). Anything else is just irresponsible.

As someone who has had to fix plenty of legacy code, I have truly appreciated the people who left me at least some working test suites to run - or even just to read - and cursed many others. At the same time, I have generally been handed code bases with tens or hundreds of thousands of lines, some of which had no useful tests.

If it is irresponsible to try to refactor or fix a codebase without first understanding all of it, it may be even more irresponsible to expect that those who follow in our footsteps will be able to do that, even if they are "gods of programming".

The reason this is so was hammered home very strongly in Peter Naur's "Programming as Theory Building"[1].

Unless a code base is trivially small or simple, leaving it without meaningful tests instantly creates legacy code. I'll close with an excerpt from the back cover of "Working Effectively with Legacy Code"[2].

> Is your code easy to change? Can you get nearly instantaneous feedback when you do change it? Do you understand it? If the answer to any of these questions is no, you have legacy code, and it is draining time and money away from your development efforts.
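For what that "nearly instantaneous feedback" can look like in practice, here is a minimal sketch of a characterization test - the technique Feathers's book recommends for code like this. The function and its behavior are hypothetical examples, not from the thread; the point is that you pin down what the code does *today* before touching it.

```python
# Hypothetical legacy function whose exact behavior is undocumented.
def normalize_name(raw):
    parts = raw.strip().split()
    return " ".join(p.capitalize() for p in parts)

# Characterization tests: record the current observed behavior,
# so any refactoring that changes it fails immediately.
assert normalize_name("  ada   lovelace ") == "Ada Lovelace"
assert normalize_name("GRACE HOPPER") == "Grace Hopper"
```

With even a handful of such assertions in place, a maintainer can refactor and get an answer in seconds, instead of having to understand the whole code base first.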



Original thread
by michaelfeathers
> That doesn't even address the hard part.

Fair enough. I did write a book on the other parts, though.

Original thread
by d4mi3n
