Predicting the Future of Computing

The British Computer Society recently published a document entitled Grand Challenges in Computing - Research. A grand challenge is billed as the pursuit of ‘a goal that is recognized one or two decades in advance; its achievement is a major milestone in the advance of knowledge or technology, celebrated not only by the researchers themselves but by the wider scientific community and the general public.’ The BCS is a well-respected organization, and the report was edited by Tony Hoare and Robin Milner, two of the most respected people in the field.

Given its pedigree, the failure of this document to deliver on its intriguing premise is perhaps surprising. Yet I cannot honestly say I was surprised by this failure; only afterwards was I surprised by my lack of surprise. Some thought has led me to conclude that two factors lie behind my reaction.

The first is the somewhat predictable nature of the grand challenges. For example, consider the grand challenge ‘scalable ubiquitous computing systems’ - this might sound fascinating, and is certainly an imposing title for those unfamiliar with the subject. But in reality it packages up an interesting, reasonably cutting-edge area of existing work - ubiquitous computing - and tacks on a vague wish (scalability). This is hardly a grand challenge - even I (as someone who doesn’t work in that area) can see that enough work is ongoing to suggest that it’s already some way towards becoming a reality. Then there are vague grand challenges such as ‘dependable systems evolution’, a large part of which appears to involve a few people doing some security analysis on code. This isn’t a grand challenge - the OpenBSD boys have set the standard on this in my opinion, and what’s more their stuff is practical and available now. The grand challenge’s ultimate aim of building a ‘verifying compiler’ to solve all of our security and reliability problems is simply silly, as anyone who has heard of the halting problem would surely know. Then there is the standard cyborg-esque nonsense. I am quite happy to believe that investigating the nematode worm might aid medical science, but ‘In Vivo-in Silico (iViS): the virtual worm, weed and bug’ suggests that such work might influence computer systems; even the almost renowned sci-fi author Kilgore Trout couldn’t have dreamed up such a bizarre idea. It’s the sort of fluff that leads to truly awful, and misleading, terms such as artificial intelligence and genetic algorithms.
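
To make the halting problem point concrete, here is a minimal sketch (in Python; the example and function name are my own, purely illustrative, and appear nowhere in the BCS document). A ‘verifying compiler’ asked to prove that even this three-line loop terminates for every input would, in effect, have to settle the Collatz conjecture; and in the general case, deciding whether arbitrary code terminates is precisely the halting problem.

    def collatz(n):
        # Proving that this loop terminates for every n >= 1 is an open
        # problem (the Collatz conjecture); proving termination of
        # arbitrary loops is undecidable in general.
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
        return n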

All in all, the grand challenges really consist of vague wishes, trivial extrapolations of the likely trajectory of current work, recycled Utopian visions, and one or two ideas so bad that it’s an embarrassment that they made it as far as the drawing board, let alone beyond. There is little that is original in this document; I don’t consider that a problem in and of itself, but little of what is proposed is particularly interesting or likely to significantly affect our lives.

This leads me to my second thought. The grand visions come from some of the field’s most esteemed and established figures, yet the document feels like it should have been presented twenty years ago. Why is this? Well, we all know that computing is a field developing at such an incredible pace that few can keep up with it. I spend a considerable amount of my time attempting to keep abreast of just a few sub-areas, and even then I feel like I’m only ever scratching a small part of an ever-growing surface. Sometimes the prospect of spending the start of my day trawling web sites, mailing lists, personal communications and so on seems almost too much, but I force myself to do it because the only alternative is to fall further behind in the areas I currently feign knowledge of. I wonder, though: is this something that one can keep up for the decades that might constitute one’s professional life? Frankly, I think that for most people (and I’m not excluding myself from this) the answer is probably no. Experience thus becomes just one factor amongst many in a subject whose founders were, in some cases, young enough when they created it to still be alive today - yet even they could not have predicted the future we now inhabit as they worked towards it.

The conclusion I have drawn from this is that it is pointless to predict the future of computing; it is worse still to use one’s own limited knowledge of its current state to attempt to actively dictate that future. We’ve come a long way already by a process that can only be described as Darwinism in its most glorious form. It doesn’t matter if one is the Queen of England: without the omniscience needed to understand the breadth and depth of the subject, I cannot imagine how any small group of people can have sufficient knowledge to better the Darwinism that has already got us so far.

2005-01-31 08:00