How can C Programs be so Reliable?

C is, today, a unique programming language. Surprisingly few people can really program in C and yet many of us have quite strong opinions about it. Buffer overflows, stack smashing, integer overflows — C has many well-publicised flaws, and these terms are often bandied about confidently, even by those unfamiliar with C. Personally I shied away from C for a decade, for one reason or another: originally, compilers were expensive (this being the days before free UNIX clones were readily available) and slow; the culture was intimidatory; and, of course, all the C scare stories made me think that a mere mortal programmer such as myself would never be able to write a reliable C program.

Discounting a couple of tiny C modules that I created largely by blindly cutting and pasting from other places, the first C program I wrote was the Converge VM. Two things from this experience surprised me. First, writing C programs turned out not to be that difficult. With hindsight, I should have realised that a youth misspent writing programs in assembler gave me nearly all the mental tools I needed - after all, C is little more than a high-level assembly language. Once one has understood a concept such as pointers (arguably the trickiest concept in low-level languages, having no simple real-world analogy) in one language, one has understood it in every language. Second, the Converge VM hasn’t been riddled with bugs, as I had expected it would be.

In fact, ignoring logic errors that would have happened in any language, only two C-specific errors have thus far caused any real problem in the Converge VM (please note, I’m sure there are lots of bugs lurking - but I’m happy not to have hit too many of them yet). One was a list which wasn’t correctly NULL terminated (a classic C error); that took a while to track down. The other was much more subtle, and took several days, spread over a couple of months, to solve. The Converge garbage collector can conservatively garbage collect arbitrary malloc’d chunks of memory, looking for pointers. In all modern architectures, pointers have to live on word-aligned boundaries. However, malloc’d chunks of memory are often not word-aligned in length. Thus sometimes the garbage collector would try and read the 4 bytes of memory starting at position 4 in a chunk - even if that chunk was only 5 bytes long. In other words, the garbage collector tried to read in 1 byte of proper data and 3 bytes of possibly random stuff in an area of memory it didn’t theoretically have access to. The rare, and subtle, errors this led to were almost impossible to reason about. But let’s be honest - in how many languages can one retrospectively add a garbage collector?
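
A minimal sketch of the scanning pattern described above makes the failure mode concrete - this is illustrative only, not the Converge VM’s actual code:

    #include <stddef.h>
    #include <stdint.h>

    /* Walk a malloc'd chunk in word-sized steps, treating each word as
       a potential pointer (conservative collection). */
    void scan_chunk(char *chunk, size_t len) {
        size_t off;
        /* Pointers can only live on word-aligned boundaries, so step
           through the chunk one word at a time. */
        for (off = 0; off + sizeof(uintptr_t) <= len;
          off += sizeof(uintptr_t)) {
            uintptr_t maybe_ptr = *(uintptr_t *) (chunk + off);
            /* ... check whether maybe_ptr points into the heap ... */
            (void) maybe_ptr;
        }
        /* The buggy version in effect looped while `off < len`: on a
           32-bit machine, a 5-byte chunk then triggers a 4-byte read
           starting at offset 4 - 1 byte of real data and 3 bytes past
           the end of the chunk. */
    }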

My experience with the Converge VM didn’t really fit my previous prejudices. I had implicitly bought into the idea that C programs segfault at random, eat data, and generally act like Vikings on a day trip to Lindisfarne; in contrast, programs written in higher level languages supposedly fail in nice, predictable patterns. Gradually it occurred to me that virtually all of the software that I use on a daily basis - that to which I entrust my most important data - is written in C. And I can’t remember the last time there was a major problem with any of this software - it’s reliable in the sense that it doesn’t crash, and also reliable in the sense that it handles minor failures gracefully. Granted, I am extremely fussy about the software I use (I’ve been an OpenBSD user for 9 years or so, and software doesn’t get much better than that), and there are some obvious reasons as to why it might be so reliable: it’s used by (relatively) large numbers of people, who help shake out bugs; the software has been developed over a long period of time, so previous generations bore the brunt of the bugs; and, if we’re being brutally honest, only fairly competent programmers tend to use C in the first place. But still, the fundamental question remained: why is so much of the software I use in C so reliable?

After a dark period of paper writing, I’ve recently been doing a little bit of C programming. As someone who, at some points, spends far too much time away from home, I have always found reliably sending e-mail to be an issue. For several years I have sent e-mail by piping messages to a sendmail process on a remote machine via ssh. While this solves several problems (e.g. blacklisting), it has the problem that on many networks (particularly wireless networks) a surprising number of network connections get dropped. Checking that each e-mail has been sent is a frustrating process. So, having mulled on its design for a little while, I decided to create a simple utility to robustly send e-mail via ssh. The resulting program - extsmail - has more features than I’d originally expected, but the basic idea is simply to retry sending messages via an external command such as ssh, until the message has been successfully sent. I also wanted the utility to be as frugal with resources as practical, and to be as portable as possible. This inevitably led to extsmail being written in C. I then decided, as an experiment, to try and write this, as far as possible, in the traditional UNIX way: to rely only on features found in all sensible UNIX clones and to be robust against failure. In so doing, I made two observations, new to me, about writing software in C.
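
The heart of the program can be pictured as a simple retry loop. What follows is only a sketch of that idea, not extsmail’s actual code; the “mailhost” name and the exact ssh invocation are assumptions made for the example’s sake:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Keep handing the message to an external command until that
       command reports success. */
    int send_with_retry(const char *msg_path) {
        char cmd[1024];
        /* system() is used for brevity; a real implementation would
           fork/exec and feed the message over a pipe. */
        snprintf(cmd, sizeof(cmd),
          "ssh mailhost /usr/sbin/sendmail -t < '%s'", msg_path);
        for (;;) {
            if (system(cmd) == 0)
                return 0; /* The remote sendmail accepted the message. */
            sleep(60);    /* The connection dropped: wait, then retry. */
        }
    }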

The first observation is semi-obvious. Because software written in C can fail in so many ways, I was much more careful than normal when writing it. In particular, anything involved in manipulating chunks of memory raises the prospect of off-by-one type errors - which are particularly dangerous in C. Whereas in a higher-level language I might be lazy and think “hmm, do I need to subtract 1 from this value when I index into the array? Let’s run it and find out”, in C I thought “OK, let’s sit down and reason about this”. Ironically, the time taken to run-and-discover often seems not to be much different to sit-down-and-think - except the latter is a lot more mentally draining.
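
A hypothetical example, not taken from extsmail, of the kind of off-by-one that demands such care:

    #include <stdlib.h>
    #include <string.h>

    /* The classic C off-by-one: strlen() does not count the
       terminating NUL byte, so the buffer must be one byte longer than
       the length of the string it is to hold. */
    char *copy_string(const char *s) {
        size_t len = strlen(s);
        char *buf = malloc(len + 1); /* Forgetting the "+ 1" means the */
        if (buf == NULL)             /* copy below writes one byte     */
            return NULL;             /* past the end of the chunk.     */
        memcpy(buf, s, len + 1);     /* Copy the string and its NUL.   */
        return buf;
    }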

The second observation is something I had not previously considered. In C there is no exception handling. If, as in the case of extsmail, one wants to be robust against errors, one has to handle all possible error paths oneself. This is extremely painful in one way - a huge proportion (I would guess at least 40%) of extsmail is dedicated to detecting and recovering from errors - although made easier by the fact that UNIX functions always carefully detail how and when they will fail. In other words, when one calls a function like stat in C, the documentation lists all the failure conditions; the user can then easily choose which error conditions he wishes his program to recover from, and which are fatal to further execution (in extsmail, out of memory errors are about the only fatal errors). This is a huge difference in mind-set from exception based languages, where the typical philosophy is to write code as normal, only rarely inserting try ... catch blocks to recover from specific errors (which are only sporadically documented). Java, with its checked exceptions, takes a different approach, telling the user “you must try and catch these specific exceptions when you call this function”.
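
As a sketch of what this looks like in practice - again, not extsmail’s actual code - consider a call to stat in which the failure conditions documented in stat(2) are dealt with case by case:

    #include <sys/stat.h>
    #include <err.h>
    #include <errno.h>

    /* Returns 0 on success and -1 on a recoverable failure; out of
       memory, as in extsmail, is treated as fatal. */
    int try_stat(const char *path, struct stat *sb) {
        if (stat(path, sb) == -1) {
            switch (errno) {
            case ENOENT:   /* The file has gone away: report the      */
            case EACCES:   /* failure and let the caller retry later. */
                return -1;
            case ENOMEM:   /* Out of memory: give up immediately.     */
                err(1, "stat");
            default:
                return -1; /* Group the remaining errors together.    */
            }
        }
        return 0;
    }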

What I realised is that neither exception-based approach is appropriate when one wishes to make software as robust as possible. What one needs is to know exactly which errors / exceptions a function can return / raise, and then deal with each on a case-by-case basis. While it is possible that modern IDEs could (indeed, they may well do, for all I know) automatically show you some of the exceptions that a given function can raise, this can only go so far. Theoretically speaking, sub-classing and polymorphism in OO languages mean that pre-compiled libraries cannot be sure what exceptions a given function call may raise (since subclasses may override methods, which can then raise different exceptions). From a practical point of view, I suspect that many functions would claim to raise so many different exceptions that the user would be overwhelmed; in contrast, the UNIX functions are very aware that they need to minimise the number of errors that they return to the user, either by recovering from internal failure, or by grouping errors. I further suspect that many libraries that rely on exception handling would need to be substantially rewritten to reduce the number of exceptions they raise to a reasonable number. Furthermore, it is the caller of a function who needs to determine which errors are minor and can be recovered from, and which cause more fundamental problems, possibly resulting in the program exiting; checked exceptions, by forcing the caller to deal with certain exceptions, miss the point here.

Henry Spencer said, “Those who don’t understand UNIX are doomed to reinvent it, poorly”. And that’s probably why so many of the programs written in C are more reliable than our prejudices might suggest — the UNIX culture, the oldest and wisest in mainstream computing, has found ways of turning some of C’s limitations and flaws into advantages. As my experience shows, I am yet another person to slowly realise this. All that said, I don’t recommend using C unless much thought has been given to the decision - the resulting software might be reliable, but it will have taken a significant human effort to produce it.
