Working Code Trumps Everything
Computer programming is such a young discipline that it is still moving very fast; new techniques and practices are discovered regularly. (Unfortunately, in the rush of that rapid progress, many techniques and practices are also lost and have to be rediscovered on a regular basis.) There are so many good things: procedural, object-oriented, and functional programming paradigms; virtual machines and bytecode; virtualization; concurrency-oriented programming; grid programming; containerization; and web services (micro or otherwise).
All of these things are wonderful, and the certainty of more to come doesn't hurt, but in the thrill of the chase for new technologies, I fear that we have lost sight of a common-sense principle: working code trumps everything.
I've been a computer programmer for a while now (I got my first computer 40 years ago). I love to write code, and I love to re-write old code, but I have learned that there is enough work to go around without the gratuitous re-writing of old code. If it works, and if we can be honest with ourselves it usually does, then we should leave it well enough alone.
I realise that this is close to the old adage "don't fix what isn't broken", yet I feel that it's sufficiently different to deserve stating on its own. I'm not saying that we should ignore bad or faulty code just because it mostly works; any measurable error rate in code means that it doesn't work to my satisfaction. I'm talking mostly about legacy code, often written in COBOL and running on a mainframe, although, mind-boggling as it is, even old Java code can now be considered legacy. We tend to look at older systems and insist that they should be re-written in [insert name of trendy programming language here]. This is the time to remember that working code trumps everything. I don't care how buzzword-compliant your latest language is: if I have working, proven code running, then I say leave it alone. Figure out new and interesting ways to interface to it if you have to. Even better would be to just get on with writing some of the boatload of other systems that people are asking us for.
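To make that "interface to it" suggestion concrete, here is a minimal sketch of the adapter approach: new code talks to a small modern interface, and the proven legacy routine is called behind it without ever being modified. Every name here (AccountBalanceService, LegacyLedger, the dummy balance) is invented for illustration, not taken from any real system.

    // The modern interface that new systems code against.
    interface AccountBalanceService {
        long balanceInCents(String accountId);
    }

    // Hypothetical handle to the existing, working system. In practice this
    // might invoke a mainframe transaction, shell out to a batch job, or read
    // a shared database; the legacy code itself is never touched.
    class LegacyLedger {
        long fetchBalance(String accountId) {
            return 123_45; // dummy value, standing in for the proven routine
        }
    }

    // Thin adapter: all the "new and interesting" interfacing lives here,
    // leaving the working legacy code alone.
    class LegacyLedgerAdapter implements AccountBalanceService {
        private final LegacyLedger ledger = new LegacyLedger();

        @Override
        public long balanceInCents(String accountId) {
            return ledger.fetchBalance(accountId);
        }
    }

    public class Demo {
        public static void main(String[] args) {
            AccountBalanceService service = new LegacyLedgerAdapter();
            System.out.println(service.balanceInCents("ACCT-42"));
        }
    }

The design point is that everything new lives in the thin adapter, so the working code keeps working, untouched.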
One of the classic essays in the IT blogosphere is Joel Spolsky's Things You Should Never Do, Part I. Mr. Spolsky regales us with a number of accounts of what he calls the big re-write: how it has sunk or nearly sunk companies in the past, and how it continues to threaten the same for companies today that are unwilling to learn the lessons of history writ large (or at least blogged large).
While it is easy to see that I lean strongly in favor of the no-re-writes rule, others bring carefully considered opinions to the table. David Heinemeier Hansson, or DHH to his friends, writes in the Signal vs. Noise blog about The Big Rewrite, revisited, where he argues that we should not be afraid to replace old, slow, and technologically constrained code and systems. I generally agree with much of what DHH writes, even if he doesn't use any of the technologies that I normally do; his approach to most technology and people in IT is generally well thought out and cogently expressed. So to hear that he was in favor of big re-writes was surprising. But then I read his blog entry and it all made much more sense. The key is that while he initially sounds like he's advocating ripping out old systems and replacing them with newer versions, on a full reading it turns out he built the new system alongside the old one (now renamed the "classic" version) and started building a whole fresh customer base on the new one. That is not the same as a complete re-write; it is writing a new system with a new focus and with lessons learned from the old one. In reality this validates the no-big-re-write rule, because even DHH knew better than to touch code that was still working.
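For what that alongside strategy can look like in practice, here is a hypothetical sketch (not DHH's actual setup, and all names are invented): a trivial router keeps existing customers on the classic system and sends everyone else to the new one, so neither codebase has to be torn open.

    import java.util.Set;

    // Illustration of running a new system alongside the "classic" one:
    // existing customers stay on the proven system, new signups land on
    // the rewrite, and no working code is modified in the process.
    public class CohortRouter {
        // Stand-in for however you identify the classic customer base.
        private final Set<String> classicCustomers;

        public CohortRouter(Set<String> classicCustomers) {
            this.classicCustomers = classicCustomers;
        }

        public String backendFor(String customerId) {
            // Route by cohort; both systems keep running untouched.
            return classicCustomers.contains(customerId)
                    ? "https://classic.example.com"
                    : "https://new.example.com";
        }

        public static void main(String[] args) {
            CohortRouter router = new CohortRouter(Set.of("ACCT-1", "ACCT-2"));
            System.out.println(router.backendFor("ACCT-1"));  // classic system
            System.out.println(router.backendFor("ACCT-99")); // new system
        }
    }

The old system earns its keep for as long as its customers do, which is exactly the point: working code trumps everything.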