In this entry, I’m going to break - in spirit at least - one of my golden rules: I’m going to mention one of my hobbies. This is a slippery slope. Intelligent, interesting people (and, indeed, me) are generally interested in more than one thing in life - no matter the relationship between those things. Some of the people who most elegantly write about programming have, in my opinion, gone off the rails when they’ve made fatuous comparisons between their hobby and computing, of the form ‘programmers like X because it is like programming in the sense Y’. Often the only thing relating X - be it painting, poetry, guns, or red squirrels - to programming is the person writing about it. Good luck to them, I say - but generalising from a sample of one is frequently unenlightening. With that warning to myself - and the reader - ringing in my ears, off I go.
I recently read an article on music that, boiled down to its essentials, talked about the lack of a meaningful relationship between technical virtuosity (roughly ‘how well can you play’) and musical quality (roughly ‘how enjoyable is this piece of music’). Sometimes 3 chords and a 3 note melody played slowly and imperfectly will be exactly what is required; sometimes precise fast scales and subtle melodic variations will do the trick. When a song that should be played slowly, sloppily, and minimally is played fast, precisely, and with extraneous notes - or vice versa - the balance between technical ability and musical quality is lost. Maintaining this balance is a problem many musicians have: as their technical abilities grow over time, it is common to equate technical difficulty with musical quality. Despite my extremely limited musical abilities (it is difficult to dispute one person’s assessment of my playing as ham-fisted), I have on occasion suffered this problem as I have become slightly more proficient: fortunately, since many of my favourite songs are - not to put too fine a point on it - brainless three and a half minute rockers, I generally get brought back down to earth at some point.
Reading this made me realise that a similar balance applies to programming. Sometimes a heavily engineered solution is right for the job; sometimes something much simpler is just the ticket. Let me give an example of each.
First, an example of something heavily engineered. One of my favourite things that I’ve done is extsmail, a traditionally constructed, robust Unix daemon for mail delivery. In extsmail, I cared about every possible detail: the program is carefully decomposed into its constituent parts; every error condition is explicitly detected and dealt with appropriately; the documentation is complete; I audit the code regularly; and I’ve run it through every static analyser I’ve been able to get my hands on. In short, its 1,600 lines of code are probably the highest quality I’ve ever personally written. Given its tiny potential audience, you could well argue that it is grossly over-engineered, but it solves a real problem that I had for years. It performs its task - delivering e-mail to a remote machine via SSH - admirably, and hasn’t crashed in almost 2 years. Anything less than that wouldn’t have been right for what I needed - it has to do the job as near to perfectly as possible, every time. Heavy engineering was the right choice here.
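To give a flavour of what ‘every error condition is explicitly handled’ means in practice - this is an illustrative sketch of my own, not code taken from extsmail - consider something as mundane as writing bytes to a file descriptor. A heavily engineered daemon cannot assume that write(2) succeeds in one call: it has to cope with partial writes and with being interrupted by signals:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Write exactly `len` bytes from `buf` to `fd`, coping with partial
 * writes and interruption by signals. Returns 0 on success, -1 on an
 * unrecoverable error (with errno set by write). */
static int write_all(int fd, const char *buf, size_t len) {
    while (len > 0) {
        ssize_t nw = write(fd, buf, len);
        if (nw == -1) {
            if (errno == EINTR)
                continue;  /* Interrupted by a signal: retry. */
            return -1;     /* Genuine error: report it to the caller. */
        }
        buf += nw;           /* Partial write: advance past the bytes */
        len -= (size_t) nw;  /* that made it out, and go round again. */
    }
    return 0;
}
```

Multiply that level of paranoia across every system call in a program and you get robustness - and also an explanation of why heavily engineered code takes so long to write.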
Second, an example of something simple. A colleague once dismissed a program I’d written with an unrepeatable 4 letter word, saying that it was poorly organised and unreadable. He was right, in one way. The program in question was about 4,000 lines of code, in an area of which both of us had previously been entirely ignorant - the fact that I had to learn a new programming language was one of the lesser challenges in the whole thing - and the result was slapped together in varying stages of incomprehension over roughly 3 or 4 man weeks. I pointed out two things about the result: first, that it showed that what we wanted to do was possible and plausible, when we had both originally had doubts; second, that he’d written precisely zero lines of code in the same period. That code, incidentally, is now part of one of the more widely used pieces of software I’ve been involved with. No user will ever know that it’s the product of someone learning on the job, and that, internally, things could have been done much better. For the job it does, it works reliably enough and - more importantly - it exists. If I’d taken the high road, it still wouldn’t exist now, and nobody would have been able to benefit from it. Quick and dirty was the right choice.
As my tactless colleague shows, there is a bias in programming against imperfect solutions. I can understand where this bias comes from: the worst of all worlds is an inappropriate imperfect solution. But trying to ensure that this doesn’t happen has some odd side-effects. One is a tendency to assume that one language, one tool, one technique should be appropriate for all tasks. Different communities fixate on different things: for the last 10 years, industrial management has often thought Java was the answer; web-site developers, PHP; and academics, strongly typed pure functional languages. In my opinion, they are all wrong today - and, I suspect, they will all be wrong in the future, even as their favourite things change. There is no one size fits all: different tasks demand different approaches. For example:
- If I need to do a sysadmin job across my servers, I’ll use the Unix shell or Python. If it goes wrong, it’s not a huge problem. What is important is having something now. Having it in a form that can be easily altered to suit changing needs is also useful.
- If I’m writing a Unix daemon I’ll use C. I prefer it not to go wrong, but I can tolerate very occasional failures. If it takes a while to create, I can live with that. Having it in a form that can be tweaked a bit is useful, though I don’t expect it to change a great deal in the future.
- If you’re writing software for a plane or a nuclear reactor, I hope you use the strongest statically typed language you can, every static analyser available, and as many formal techniques as you can shake a stick at. Failure is not an option, since the result could be loss of human life; nor is cost a primary concern - if it takes hundreds of man years of effort, it’s probably worth it. Since such software must be precisely specified up-front, there is relatively little need for it to be amenable to change.
As this suggests, when it comes to the quality software needs, there are many shades of grey. There are many people out there who promulgate the need for highly engineered solutions; I’d like to wave a little banner for imperfect software. In many cases, having something, even if it’s not quite perfect, is better than having nothing. In academia, in particular, we often fall into the trap of insisting on heavy engineering everywhere - we know that, given sufficient time and resources, we can produce high quality (albeit not quite perfect) software. The problem is that the time and resources needed are nearly always prohibitive and, often, complete overkill.
No one in their right mind would say that the highly practiced virtuosity of a classical orchestra could replace the raw emotion of a simple folk singer - both have their own merits and their place in the musical landscape. So too should we give imperfect, pragmatic software the respect that it is due.