I love computing, in particular software: since I was young, I have devoted the major portion of my life to it. In return, I have gained access to a never‐ending source of challenges; met many great people; and had my ego smashed on the jagged rocks that are off‐by‐one errors. None of this makes me very well qualified to write an occasional series of articles on Problems with Software, but I will do my best. The spirit in which I write these is a positive one, perhaps akin to a parent admonishing a child for his own long‐term benefit. Of course, I am, and always will be, a student of software, not its parent; searching, learning, and in awe of this ever‐evolving subject. With that in mind, let us begin.
I start with what I consider to be the most common mistake in software: a mistake that no other subject I know of commits so frequently. It is the confusion of problems whose solutions are easy to state with problems whose solutions are easy to realise. A non‐computing example may help explain this. The problem is war: around the world, it ruins the lives of countless people. The easily stated solution is that we should stop people from fighting each other. Regrettably, we know that this solution, while correct, is not worth the space it takes on the page. Telling people they should stop fighting changes nothing, and physically stopping them requires resources far beyond those that can be mustered. None of this is to say that it is not worth tackling the problem of war: but most of us have enough common sense to realise that solutions (for there will surely need to be more than one) will be expensive, long‐term, and come with no guarantee of success.
Alas, in software, such common sense is an uncommon thing. In our subject, unworkable solutions are frequently proposed to deep problems; funding is acquired; manpower deployed; and, after an exhausting death march, failure guaranteed. The problem most frequently invoked is the difficulty of creating good software. This clearly is a problem: too much software is expensive, unreliable, and ill‐suited to the task it was created for. The many easily‐stated solutions proposed have had, at best, a tiny effect; at worst, they have impeded progress by suppressing less exciting, but more realistic, truths. Two concrete examples illustrate this.
The first example is Aspect‐Oriented Programming (AOP). The problem that AOP addresses is this: developing a program is hard because all of its constituent aspects — even those which are in no way related — must be considered together at all times. Imagine a health‐care system which records patient data and has separate modules for doctors and patients. One aspect is server communication; both the doctor and patient modules need to communicate with a central server and cope with certain error conditions. This aspect is jumbled up with, and scattered across, the rest of the program, making it hard to check that all server interactions will be performed as expected. The AOP solution is then simple: rather than developing software monolithically, it should be developed as separate aspects, which can be worked on in isolation, before being composed together to produce the end result.
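To make the proposal concrete, here is a minimal sketch of the idea, with invented function names and a Python decorator standing in for the language‐level weaving that real AOP tools (such as AspectJ) perform: the server‐communication aspect is written once, in one place, and layered onto the doctor and patient code from outside.

```python
import time

# The 'aspect': retry and error handling for talking to the central server,
# written once rather than repeated inside every doctor/patient function.
def with_server_communication(retries=3, delay=0.1):
    def decorate(func):
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 1):
                try:
                    return func(*args, **kwargs)
                except ConnectionError:
                    if attempt == retries:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorate

# The doctor and patient code is then written purely in business terms,
# with the communication aspect composed in from outside.
@with_server_communication()
def fetch_patient_record(patient_id):
    # Imagine a real call to the central server here.
    return {"id": patient_id, "notes": []}

@with_server_communication(retries=5)
def upload_doctor_notes(patient_id, notes):
    # Imagine a real call to the central server here.
    return True

print(fetch_patient_record(42))
```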
There is no doubt that the AOP solution is intuitive and appealing: the jumbled and scattered nature of current software is a curse, with many deleterious effects. I can remember when I first came across AOP. My response then is my response now (though my argument, I hope, is somewhat better stated these days): it can’t work. Furthermore, a simple analogy shows why it can’t work.
Imagine a hill, composed, horizontally, of different rock strata. Strata can
be taken out and put back in, without ever changing the inherent
‘hillness’ of the hill—a hill is still identifiably a hill
even if one stratum is taken out (though it might have a distinct kink in it),
and each stratum makes sense in and of itself, whether it is in the hill or not.
In this analogy, aspects are the strata; for AOP to work, it is necessary to
imagine being able to take a piece of software, pull out an aspect, and have
that aspect be coherent in and of itself. It is a beautiful vision, and one
which would change programming. Let us now take a different analogy. Instead of
a hill, imagine a gravel drive, consisting of hundreds of thousands of tiny
stones of varying colours: grey, honey, black, white, and so on. Even Don
Quixote would blanch if I asked him to extract all the honey‐coloured stones,
put them to one side, and then later to put them all back in the same place.
The unfortunate reality is that software necessarily resembles a gravel
drive—just as different coloured stones exist alongside each other, so do
‘aspects’ of a program. The server communication code in the health care system
is likely, for example, to form parts of nested AND
expressions—there is no general way to pull such an entangled aspect out
(or to have developed it separately in the first place) and expect to be able
to put it back in.
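A small, entirely hypothetical fragment may make the entanglement concrete. The names and logic below are invented for illustration; the point is simply that the communication checks and the medical logic sit in the same nested boolean expressions, so there is no clean seam along which a ‘communication aspect’ could be lifted out and later put back.

```python
class Server:
    def __init__(self):
        self.connected, self.read_only, self.retries = True, False, 2
    def is_connected(self): return self.connected
    def is_read_only(self): return self.read_only
    def send(self, patient_id, notes): return bool(notes)
    def resend(self, patient_id, notes): return True

class Patient:
    def __init__(self, id, consent_given, flagged_for_review, priority):
        self.id, self.consent_given = id, consent_given
        self.flagged_for_review, self.priority = flagged_for_review, priority

def update_patient_record(server, patient, new_notes):
    # Communication conditions and medical conditions share the same nested
    # AND expressions: where does one 'aspect' end and the other begin?
    if server.is_connected() and not server.is_read_only():
        if patient.consent_given and (new_notes or patient.flagged_for_review):
            ok = server.send(patient.id, new_notes)
            if not ok and server.retries > 0 and patient.priority == "urgent":
                return server.resend(patient.id, new_notes)
            return ok
    return False

print(update_patient_record(Server(), Patient(1, True, False, "urgent"), ["bp 120/80"]))
```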
The second example is Model Driven Architecture (MDA). For my sins, I played
a small part in this monstrosity; in my defence I was young, naive, and needed
the money, though such pitiful excuses are unlikely to help me come the day of
reckoning. The problem that MDA aims to address is the following: developing
software is hard because programming languages force one to work at a level of
abstraction that is not much above that of machine code. Indeed, this is a
problem; while a high‐level description of a business’s requirements for
its new system may be a few pages long, the corresponding software may have
tens or hundreds of thousands of lines of code. Relating the mess of a
low‐level program back to the high‐level requirements is a skill
that few people successfully acquire. The MDA solution is then simple:
development should be done with high‐level (mostly UML) diagrams. Common
sense quickly suggests that this approach can never work. Diagrams are
wonderful for expressing static relationships, but, overall, terrible for
expressing most dynamic behaviour. Looping behaviour is elegantly captured by a
textual FOR loop but, as anyone who has ever seen a ‘visual’ programming
language in action will know, it becomes a daunting, unmanageable mess when
represented graphically—and a language which makes looping behaviour difficult
is not very useful.
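For contrast, the textual form being referred to is only a few lines long; the particular loop below is, of course, just an arbitrary illustration in Python.

```python
# Summing 1..10: three short lines of text. A box-and-arrow rendering of the
# same loop needs separate nodes for initialisation, test, increment and body,
# plus all the edges connecting them.
total = 0
for i in range(1, 11):
    total += i
print(total)  # prints 55
```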
What’s most frustrating about AOP and MDA is that while both correctly identify real problems, the solutions they present are appealing in their simplicity yet self‐evidently unviable. Huge amounts of money and time have been invested in both, with little prospect of meaningful reward. Indeed, I am not sure that a good solution could exist for either problem: they seem to me largely intractable and inevitable. Of course, this is not to say that people shouldn’t try to come up with solutions to these and other such problems—if whinging old codgers like me always had our way, we’d still be living in dark, damp caves and laughing at Barry from the cave next door while he works on his silly‐sounding ‘wheel’ concept. Ours is a ‘new’ subject, a vast, unexplored continent, and optimism is a vital component of the makeup of successful settlers. But as vital as optimism is realism—expending precious resources on hopeless causes is a guaranteed way to doom settlement. A little dose of common sense — thinking through a couple of common cases, at the very least — when presented with a wonderful‐sounding solution to a hard problem would do our subject a lot of good. Other subjects seem to manage it, and I don’t see why we in software can’t too.