Problems with Software 2: Failing to Use the Computing Lever

[This is number 2 in an occasional series ‘Problems with Software’; read the previous article here.]

We all know at least one, and I make no apologies for the picture I now paint. Hunch-backed, tongue out, index fingers locked rigidly into an uncomfortable ‘r’ shape, and with no sense of the appropriate level of force to put into their actions. The neanderthal figures I refer to are, of course, two-fingered typists. In my experience, a typical two-fingered typist will struggle to type 30 Words Per Minute (WPM). An average touch typist will manage at least 70 WPM; and for those prepared to put in a bit of effort, 90-100 WPM is eminently achievable.

Whether high typing speeds are useful depends on the person and the tasks they need to achieve. Agricultural workers, for example, probably won’t see a huge return on investment in typing skills. What astonishes me is that I know people who spend their entire working day in front of a computer, day in, day out, yet who still use this grossly inefficient technique. The slow typists I know easily lose a few hours every week due to their poor technique, yet suggesting that a few days spent learning touch typing would quickly pay off in increased productivity is inevitably met with the reply “I don’t have the time”.

Poor typing technique is a concrete example of a common human tendency — to stop learning as soon as one has a way of performing a task, regardless of whether that is the best way. In typing terms, the slowest typist is perhaps 5 times slower than the fastest: in other areas of human activity, the ratio can be much greater. One of those fields is computing and, by implication, software.

At this point, it is instructive to ask oneself a simple question: what is a computer? The standard answers run along the lines of “a CPU, some RAM, a disk”, “a keyboard and a monitor”, or “an operating system and user software.” While correct in their own way, such answers report the mechanisms involved without considering their purpose. The answer I prefer to this question is more abstract: computers are levers. Archimedes, the first person to explain how levers work, famously said “Give me a place to stand, and I will move the Earth.” When asked to prove this by his King, Archimedes used a lever to move a ship single-handedly, something impossible for even the strongest man to do unaided.

One of the first real uses of computers shows how effective a lever they can be. The semi-programmable British Colossus computer of the mid-1940s was able to perform thousands of comparisons per second, helping crack the seemingly unbreakable German Lorenz cipher. What the code-breakers of Bletchley Park had realised was that a computer could perform simple, repetitive tasks at a speed impossible for humans. Operations that had taken a group of people several weeks to complete could be done by Colossus in half an hour: such was Colossus’s speed that, once operational, it played a significant part in shortening the war.

It is little exaggeration to say that computers have developed into the longest levers available to man. Since Colossus, the rate of progress has been little short of astonishing, and computers have been used to do things that were previously unimaginable: from weather prediction to visual special effects, from DNA sequencing to search engines.

Yet, a lever is only truly effective if used from its end: gripped near the pivot, a lever is, in effect, shortened, and its force magnifying effect reduced. Regrettably, most people grip the computing lever extremely close to the pivot. There are many reasons for this and enumerating them all would take a small book. However, a few examples give an indication of the problem.

  1. Doing tasks manually instead of automatically.

    Computers excel at the simple, repetitive tasks which humans are terrible at. I remember seeing someone change a document which used the American English idiom of placing a comma after e.g. (i.e. “e.g., X”) to the less stilted British English idiom without the comma (i.e. “e.g. X”). The person made the change by scanning the (rather large) document and manually changing each occurrence, a task which took a couple of rather dispiriting hours. Inevitably, in a large document, some occurrences were missed. I fixed the remaining occurrences with 100% accuracy by using the ‘search and replace’ function in my text editor, which took less than 5 seconds (a sketch of this kind of fix appears after this list).

  2. Permanent short-term thinking.

    Whenever I am confronted by a computing task, I ask myself “will I need to do this again?” If yes, I generally spend time working out how to automate the task. Consequently, over the years I have built up a considerable suite of small tools (a few of the more polished are publicly available) and techniques to call upon. Most computing professionals I know have not a single such tool (and shockingly few techniques); their thinking never extends beyond bumbling through the task at hand. A task that takes 30 minutes by hand might take 90 minutes to automate; but as that task recurs every 6 months or so, over the years the automated solution pays for itself many times over (the pay-off arithmetic is sketched after this list).

  3. Using one tool for every task.

    Many people are prepared to invest time in learning one tool, but one tool only; it then becomes the hammer that makes every task encountered look like a nail. While this sometimes works well in the short term, in the medium and long term the effort involved in contorting the task to fit the tool can be huge. Perhaps the most common example is the misuse of spreadsheets to store database-like information (a sketch of the database alternative appears after this list): if I had a pound for every time an individual’s name was spelt differently on separate ‘sheets’ within a single Excel file, I would be a wealthy man.

  4. Not knowing the field and not staying up to date.

    The first hour of my day is generally spent checking computing newsgroups and websites, to ensure that I have some knowledge of the breadth of my subject and of its latest developments. Sometimes, I confess, I wish I didn’t have to do this; but if I didn’t, I would miss out on those surprisingly frequent moments when someone raises a problem and I hear myself saying “I was reading about a new program X a few weeks back which sounds like it might do what you need.”

  5. Not searching for help.

    I learnt the rudiments of programming before I had a modem; whatever problems I encountered had to be solved with the help of the one manual and two books I owned. Memorably, one extremely trivial problem took me nearly a full month to solve, working on my own. Now, my first instinct is to search for the problem on Google; 95% of the time, someone else will have had the same problem, and someone else will have pointed them to a solution. Yet, when I ask someone who presents me with a trivial problem “did you try looking it up on Google?”, 95% of the time the answer is “I didn’t think of doing that.”
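
To make point 1 concrete, here is a minimal sketch of that kind of mechanical fix. It is written in Python purely for illustration (the regular expression and the use of stdin/stdout are my assumptions; any editor with regex search and replace achieves the same effect):

    import re
    import sys

    # Read a document on stdin, convert the American "e.g., X" idiom to
    # the British "e.g. X" idiom, and write the result to stdout. The
    # pattern anchors on the comma, so text already in the British style
    # is left untouched.
    text = sys.stdin.read()
    sys.stdout.write(re.sub(r"e\.g\.,", "e.g.", text))

Saved as, say, fix_eg.py and run as “python fix_eg.py < document.txt”, it fixes the whole document in well under 5 seconds and misses nothing.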
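
The pay-off arithmetic in point 2 is equally easy to make concrete. This trivial sketch uses the illustrative numbers from the text (30 minutes by hand, 90 minutes to automate, a recurrence roughly every 6 months):

    # Net time saved by automating a recurring task, using the numbers
    # from point 2: the automation costs 90 minutes up front and saves
    # 30 minutes on each of the task's twice-yearly occurrences.
    MANUAL_MINS = 30
    AUTOMATION_COST_MINS = 90
    OCCURRENCES_PER_YEAR = 2

    for years in (1, 2, 5, 10):
        saved = years * OCCURRENCES_PER_YEAR * MANUAL_MINS - AUTOMATION_COST_MINS
        print(f"after {years:2} year(s): net saving of {saved} minutes")

The automation breaks even at the third occurrence, 18 months in, and has saved over 8 hours by the 10 year mark: “pays for itself many times over” indeed.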
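
Finally, a minimal sketch of the database alternative from point 3, using Python’s built-in sqlite3 module (the table layout and names are my invention):

    import sqlite3

    # Database-like information belongs in a database: each person's name
    # is stored exactly once and other tables refer to it by ID, so it
    # cannot be spelt differently in two places, unlike names hand-typed
    # onto separate spreadsheet 'sheets'.
    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")
    con.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
    con.execute("""CREATE TABLE payments (person_id INTEGER REFERENCES people(id),
                                          amount INTEGER)""")
    pid = con.execute("INSERT INTO people (name) VALUES ('A. Person')").lastrowid
    con.execute("INSERT INTO payments VALUES (?, ?)", (pid, 42))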

To some extent, all of us will recognise some parts of ourselves in the above, or can pinpoint other areas where we fail to use the computing lever effectively. There is no shame in that: the shame comes in not addressing the problem. Unfortunately, ignorance of the basic use of computers is considered not only acceptable but the norm amongst the majority of computing and software professionals. The resulting loss in efficiency is vast, and a rather sad indictment of our field.

2011-06-07 08:00