Blog

Complete blog archive

Can We Retain the Benefits of Transitive Dependencies Without Undermining Security?

One of life’s great pleasures is trust: having confidence in another person to do the right thing warms the hearts of both parties. Despite the cynicism that we sometimes mistake for profundity, modern society would be impossible without a large degree of trust, including trust between people who don’t know each other.

However, we all know that trust has limits. I have a legal and moral right to leave my home for a week’s holiday, jam the doors wide open, pin a sign to the gate saying “back in a week”, and to expect the contents of my house to be untouched. I would be unwise, though, to trust that everyone will respect my rights. Some people, alas, spend their lives looking for opportunities to abuse others’ trust; some may only act when an “opportunity” confronts them and their willpower buckles. Either way, it is sensible for me to acknowledge this reality and to lock my door.

Just as with people, we place a great degree of trust in software, and different pieces of software place a great degree of trust in each other. I trust that, after I log into my bank account, my web browser won’t transfer my money behind my back; my web browser itself trusts an image processing library to decode arbitrary data from the internet without allowing an attacker to take over the computer; and so on...

Read more (2025-01-28 11:15)

Structured Editing and Incremental Parsing

As someone who has from time to time written on parsing, and even published some parsing research, I get asked questions about parsing fairly regularly, some of which I’m even capable of answering.

One thing that I’ve increasingly been made aware of is a widening interest in “fancier” ways of editing text. In essence, most people who’ve been programming for a while come to realise that having an editor that only thinks of a program as a sequence of UTF-8 characters is rather limiting. I suspect the rapid proliferation of the LSP (Language Server Protocol) has opened even more people’s eyes to what the editing of programs could be.

One long-standing approach to better editing is structured editing (sometimes called “projectional editing”). The basic idea is to have an editor which fully understands the syntactic structure of the language you’re editing. This has various benefits: the editor can give instant feedback about what the next thing the user can type is; it makes semantic-based feedback (e.g. about the static typing of a program) much easier; it makes things like “select this function so I can copy it” 100% accurate, instead of using slightly dodgy heuristics; and so on...

Read more (2024-11-27 12:25)

How I Prepare to Make a Video on Programming

Over time, I’ve recorded a few videos on programming topics, most recently on the Computerphile channel. For example, one on implementing a programming language was released last week; another on implementing an HTTP server came out earlier this year; and you can find a few more from before that if you’re willing to hunt around a bit. I’ve now had a number of questions about how these came into being, and I thought it might be worth setting out a few quick points from my perspective.

Scope

Let me start with something that might seem obvious, but is worth making explicit: what I know best is programming, and that’s the topic of the videos I’m considering in this post. However, “programming” is a very wide area, and my ignorance is vastly greater than my knowledge: there are many things one can do with programming that I know little or nothing about. Ask me about programming vector graphics systems, for example, and you’ll very quickly realise that I know nothing more than a person on the street.

When I talk on a topic, I aim to have an understanding of not just the direct topic of interest, but enough of an understanding about surrounding topics that I won’t misrepresent them. It’s amazing how often people have questions that require a broader understanding than my direct topic. I don’t expect to be able to answer every such question, but I’ve found that I need to know more than one might expect to even be able to understand the question. Some people worry that they might have to say “I don’t know”, but I worry that I’ll answer a question other than the one I was asked and not even realise it! ...

Read more (2024-11-25 13:45)

pizauth: HTTPS redirects

I’ve just released pizauth-1.0.6 which introduces one notable new feature: support for HTTPS redirects. New features sound good, but what does this actually mean? My experience is that the way OAuth2 works is sufficiently counter-intuitive that people often struggle to make sense of what it’s doing.

Here’s a very quick, simplified summary of what OAuth2 and pizauth do. OAuth2 lets you obtain a time-limited token which you can use to access a remote resource (e.g. the ability to send email): pizauth’s job is to get you such a token. If you ask pizauth for a token for a resource and a token isn’t already available, pizauth will create a URL which you then open in your local web browser. That URL will take you to the remote resource (e.g. your email provider), which will then ask you to authenticate yourself (i.e. log in). Once you’ve successfully done so, the remote web page then needs to tell pizauth that you’ve been successful: it does so by redirecting you to an HTTP URL that’s actually pizauth running on your local machine.

This last part – i.e. redirection – tends to surprise people, but somehow the remote resource needs to communicate success (or failure…) to pizauth. Piggybacking off HTTP uses a well-known existing protocol that’s tolerated well by most networks — it’s quite a cunning solution to the problem...
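
To make the redirect step concrete, here is a minimal sketch of the general technique in Python (pizauth itself is not this code; the port number, response text, and parameter handling are illustrative assumptions): a program listens on a localhost port and pulls the authorisation code out of the query string when the browser is redirected back to it.

# A minimal sketch (not pizauth's actual code) of the local redirect trick:
# listen on localhost and extract the authorisation code from the query
# string when the browser is redirected back to us.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The remote resource redirects the browser to something like
        # http://localhost:8765/?code=...&state=..., so the interesting
        # data arrives as query parameters.
        params = parse_qs(urlparse(self.path).query)
        code = params.get("code", ["<missing>"])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab now.")
        print("authorisation code:", code)

# Handle a single redirect and exit; 8765 is an arbitrary example port.
HTTPServer(("127.0.0.1", 8765), RedirectHandler).handle_request()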

Read more (2024-11-10 09:25)

Recording and Processing Spoken Word

What happens if you listen to 60 seconds of your favourite radio station / audiobook and then 60 seconds of a random non-professional podcast? The former will be pleasant to listen to, with good intelligibility — it’s easy to understand everything the speakers say. The latter, however, is often somewhat unpleasant to listen to and has poor intelligibility. To add insult to injury, different episodes of the same podcast will often vary in quality and volume.

Fortunately, it turns out that most of us can get reasonably close to “professional” sounding recordings without much effort or money. In this post I’m going to break this down into two parts: recording and processing. My experience is that most of us eventually work out some recording techniques, but many fewer dive into processing. I’m going to show how one can use widely available command-line tools to process recordings, producing good quality mixes (i.e. the output we release to listeners).

I’m going to show how we can go from recordings that sound like this:

...

Read more (2024-08-21 12:30)

Why the Circular Specification Problem and the Observer Effect Are Distinct

Following “What Factors Explain the Nature of Software?”, a number of people have asked me: “aren’t the circular specification problem and the observer effect the same thing?” I take the blame for not making this clearer previously. In this post, as recompense, I’m going to explicitly explain why I believe these two factors must be considered distinct.

As a quick reminder, in my previous post I defined a triad of factors that I believe go a long way to explaining what the nature of software is:

  1. Software occupies a liminal state between the constraints of the physical world and an anything-goes fantasy world. We frequently mistake the constraints that software faces.

  2. Our ability to specify what a given piece of software should be is limited by the circular specification problem. We nearly always have to fully build the software in order to know precisely what we want it to be.

  3. Software is subject to the observer effect. The act of seeing the software in action changes what we – or more often others – think the software should be, sometimes radically.

...

Read more (2024-05-31 10:10)

What Factors Explain the Nature of Software?

I sometimes find myself asked to give advice on how organisations should go about creating software, but often my advice doesn’t gel with those who sought it. Sometimes that’s because only one answer was ever considered acceptable; sometimes I am ignorant of important wider context and my answer is unworthy of consideration.

But, most often, I believe it’s because both sides have different underlying assumptions about the nature of software. These assumptions are so deeply embedded that most of us rarely, if ever, explicitly think about them. It took me many years to realise I had made such assumptions, and many more to be able to articulate them somewhat coherently.

In this post I’m going to define, as best I can, what I consider to be the triad of interacting factors that best explain the nature of software. These are inevitably high-level, but my experience is that they are just about specific enough to allow one to make reasonable quality predictions about a given piece of software. I’m not claiming these thoughts to be original, nor do I claim software to be unique in having these factors as its fundamentals...

Read more (2024-05-14 11:00)

Some Reflections on Writing Unix Daemons

Unix daemons are programs which run in the background, performing tasks on our behalf indefinitely. Daemons are somewhat mysterious, easily overlooked, programs of a kind few of us have experience in writing.

In this post I’m going to describe some of the things I’ve learnt in the course of writing and maintaining 3 daemons (extsmail, snare, and pizauth) over 15 years. I’m going to start, however, with a brief overview of what daemons are, and how we get them up and running, as it will make the later parts of the post easier to understand.

Background

Let’s start with the most important question: why the term “daemon” and why the weird spelling? The term seems to date from the early 1960s when “operating systems” as we know them today started to take recognisable form. The general term is, quite sensibly, based on Maxwell’s demon, but I assume the “ye olde world” pseudo-antiquarian spelling (i.e. the extra “a”) is the result of people at the time finding it amusing. Unix seems to have used “daemon” as the spelling from early in its history. The spellings “demon” and “daemon” are often used interchangeably, probably because the former is both more familiar and present in people’s spell checkers...

Read more (2024-02-28 11:00)

Faster Shell Startup With Shell Switching

A few days ago Thorsten Ball wrote a post exhorting Unix users to optimise their shell’s startup time. It’s excellent advice and well worth following.

The Unix shell has two major use cases: as an interactive prompt (what we often call “the command line”); and as a “scripting” or “non-interactive command” language. We normally pick one shell (bash, fish, zsh, etc.) and use it for both use cases.

However, we can use different shells for each use case. People don’t normally bother because there is little functional utility in doing so. There is, however, a small performance benefit, which I’m going to look at in this post. I’m going to call the technique I describe here “shell switching” since I’m not aware that it has an existing name...

Read more (2024-01-16 14:50)

Choosing What To Read

Like many people, I spend a lot of time reading. Since there is much more that I could read than there is time for, I have to make choices about what I should read. In this short post I’m going to outline what my reading choices have become and why. My aim isn’t to suggest that anyone else should make the same choices – indeed, they’re intensely personal – but, rather, that thinking about such choices is worthwhile.

Unlike many people, these days I intentionally read very little fiction. Saying that I’m a literary philistine out loud is close to a taboo and for many years I downplayed it. These days, I’m much less worried about this admission, in part because the alternative I have gravitated towards has given me greater personal satisfaction and, I hope, insights: history.

Personally, I find that history contains many more surprises than authors can get away with in fiction. We have a strong sense of what is plausible: this constrains writers, even in supposedly fantastical settings, and readers tend to dislike violations of it. Our sense of what is plausible is in large part defined by our life experience — but most of our lives take place within surprisingly narrow parameters...

Read more (2024-01-04 09:50)

Debugging A Failing Hotkey

On Sunday I tried using hk, but it wasn’t working. That surprised me, because hk is a very simple program that allows one to set temporary hotkeys — and a program I use many times a day. Here’s a toy example of hk:

hk Ctrl+Shift+F8 echo Hello

hk waits until Ctrl+Shift+F8 is pressed — when it is, hk executes the command (which prints “Hello”) and then exits...

Read more (2023-12-13 13:25)

How Often Should We Sharpen Our Tools?

[there was a] lumberjack who said that if his life depended upon his ability to cut down a tree in five minutes he would spend three minutes sharpening his axe.

Often ascribed, though inaccurately, to Abraham Lincoln, this aphorism neatly captures the idea that, in order to efficiently reach a direct goal we sometimes need to spend time on improving indirect factors such as our tools. In computing, we have access to a wide array of tools, but each takes effort to learn, to customise, and even, if necessary, to create. While our life rarely depends on sharpening our tools, the question implicitly posed by the aphorism is no less important for it. Rather than try and grapple with this question abstractly, I’m going to give an example of how and when I’ve chosen to sharpen some of my tools, and the consequences of doing so.

I spend a huge portion of every day programming or writing in a text editor. For at least 15 years I used NEdit, since that was the closest Unix match to the first (non-Unix) editor I had used, StrongED. When I started using it, NEdit felt modern, powerful, and evolving in the right direction — but, for the last 5 or more years of my use, it was unmaintained and gradually suffering from bit rot. I put some effort into keeping it running on OpenBSD, including fixing crashing bugs. This greatly amused my Vim-using colleagues and I was the butt of many jokes — something that didn’t worry me, because I was secretly convinced that I was right not to waste my time on Vim’s bizarre editing style...

Read more (2023-12-04 15:11)

Four Kinds of Optimisation

Premature optimisation might be the root of all evil, but overdue optimisation is the root of all frustration. No matter how fast hardware becomes, we find it easy to write programs which run too slow. Often this is not immediately apparent. Users can go for years without considering a program’s performance to be an issue before it suddenly becomes so — often in the space of a single working day.

I have devoted more of my life to optimisation than I care to think about, and that experience has led me to make two observations:

  1. Human optimism leads us to believe that we can easily know where a program spends most of its time.
  2. Human optimism leads us to believe that we can easily know how to make the slow parts of a program run faster.

...

Read more (2023-11-14 09:30)

Minor Advances in Knowledge Are Still a Worthwhile Goal

Despite a distinct lack of talent, and occasionally long hiatuses, I’ve enjoyed learning to play the electric guitar as an adult. One of the many variables in the feel and sound of a guitar is its strings, but the major manufacturers’ offerings all seemed rather similar to me. I eventually settled on D’Addario’s EXL offerings, mostly so that I didn’t have to think about it again.

Last year, needing to buy a new batch of strings, I idly looked around to see what was now available, and noticed that D’Addario had a newish “NYXL” range — at double the price of my EXLs! Despite balking at the price, and knowing from experience that there is a tendency in music circles to ascribe magic to the mundane, the reviews I read were so positive that I decided to risk buying a single set.

I wasn’t expecting to notice much difference between the EXLs and NYXLs, but as soon as I fitted the latter, I was proven wrong. They sounded better, though that’s very subjective; they also appeared to be staying in tune better, which is more objective. But, gradually and more surprisingly, I realised that the NYXLs were taking perhaps three or more times longer to degrade (guitar strings gradually rust, in no small part thanks to our sweaty mitts)...

Read more (2023-10-03 13:00)

How Hard is it to Adapt a Memory Allocator to CHERI?

CHERI is a set of things (adapted CPUs, adapted operating systems, adapted libraries) that, collectively, aim to stop software bugs becoming security flaws. If you’d like to know more, I gave some of my thinking on CHERI in Two Stories for “What is CHERI?”.

In this post I’m going to flesh out an observation I made in that post, which is that some software needs thoughtful adaption to CHERI if we want to get the security advantages we hope for. Exactly what that thoughtful adaption might look like will vary, probably substantially, between different pieces of software. What, for instance, might it look like for critical, widely used, components? In this post I’m going to look at how memory allocators (henceforth “allocators”), one of software’s most fundamental building blocks, can be adapted to CHERI. If you find this interesting, but want greater depth than I’ll go into here, you might be interested in the paper Picking a CHERI Allocator: Security and Performance Considerations that this post is based upon.

A Simple Allocator

It is a truism that virtually every program needs to dynamically allocate memory. Our collective folklore tells us that allocators like dlmalloc or jemalloc are impressive pieces of software that improve on their predecessors, but very few of us can really explain why. We call malloc, realloc, and free and magically chunks of memory are allocated, resized, or freed on our behalf...

Read more (2023-09-18 10:30)

“Programming” and “Programmers” Mean Different Things to Different People

Late last year on my blog I published “How Might Generative AI Change Programming?”. Although I’m slightly biased, I’m still comfortable with that post’s high-level message. However, there is one important aspect that I think I failed to bring out as explicitly as I should have: different communities’ programmers will be affected in different ways.

One reason I made this mistake is that I had overlooked how different communities’ assumptions about “programming” and “programmer” have diverged. As with most instances of incommensurability, the people involved (e.g. me) often don’t realise that they’re talking about slightly different things.

As a concrete example, a couple of weeks back, the Economist – whose coverage of areas I know about I find to be rather good – published an article with the following claim: ...

Read more (2023-08-23 12:30)

pizauth: First Stable Release

In pizauth: dump and restore I said:

However, there was one feature I couldn’t find a good design for: persisting state to disk. This missing feature had been nagging away at me (and a small, though growing, number of other people) for quite a while: recently I finally alighted upon a design that I thought would work well. pizauth-0.2.2 is the first release to support this. Perhaps this will be the last release before 1.0.0!

I was a little too optimistic. One of pizauth’s configuration option names had long bothered me, and addressing that caused a backwards-incompatible change that forced a 0.3.0 release...

Read more (2023-08-13 08:00)

The Need to Explain

One of my in-built quirks – present, I’m told, from a very young age – is spotting patterns. Whenever I see X, I also see Y; there are 3 As for every B; if this happens, then that is more likely to happen afterwards; and so on. It’s not something I do particularly consciously, and few of my observations even reach the lofty heights of banal. Fortunately, it’s not all wasted effort, and a small number of these observations end up making my life a little better. My train home always leaves from platforms 1 or 8. If I arrive at the station as my train is due to leave, my best bet is not to check the departure boards but to run past platform 8 to see if my train is there and, if it isn’t, run straight to platform 1.

As I’ve grown older, I’ve learnt to keep more of the observations I’ve made to myself. This isn’t because my observations very often offend, but because some people view them as creating a void which must be filled, as soon as possible, by an explanation. The problem with spontaneous explanations on topics beyond our purview is that they are often obviously wrong. One of my other quirks is that I find it difficult not to rebut obviously wrong explanations. This can then cause an endless loop of further explanations and rebuttals, which is rarely useful or enjoyable.

To my surprise, I’ve realised that further attempts at spontaneous explanation become less convincing more often than they become more convincing. One obvious lesson to draw from this is that it is easier to disprove incorrect explanations for a phenomenon than it is to generate a correct explanation. As a researcher, I have learnt that lesson the hard way more than once! ...

Read more (2023-07-18 08:50)

Two Stories for “What is CHERI?”

Some readers might have heard of CHERI. Of those who’ve heard of it, most have probably got the rough idea that it’s a “more secure” CPU — but, after that, I find that most people start to get rather confused when they try and nail down “what is CHERI?”.

Having grappled with this myself, and having helped others grapple with it, I think that the major reason is that the high-level story requires understanding quite a bit of detail — hardware, Operating Systems (OSs), programming languages (and some end user programs) all have to be adapted for CHERI. Most people tend to focus at first solely on the CPU, but one can’t form a coherent story from the CPU alone. Those who persist and piece together the other parts of CHERI can then get swept up in the novelty of the idea, and assume that CHERI magically secures everything it touches. However, CHERI is just a tool and, like all tools, it’s better at some things than others — and, even then, only when used thoughtfully.

In this blog post I’m going to explain just enough of CHERI to put together two coherent stories for what it is and where it might be used. I’m not going to try and cover all the possibilities, since, as you’ll see, this post is long enough as it is! One reason for that is that CHERI contains overlapping concepts that can be combined in more than one way. Because of that, I’ll warn you in advance that there is a fairly hard switch between the two stories in this post — sorry in advance! I’m also not going to be afraid to diverge from CHERI’s terminology, which has a number of historical quirks that I find add to the confusion...

Read more (2023-07-05 09:15)

My Interview with Eelco Visser on Parsing

In my previous post, I mentioned in a footnote an interview I had done with the late Eelco Visser in 2008. The audio is available here, but there isn’t a transcript available. That’s a pity, because, as far as I know, Eelco didn’t do another interview along those lines, and searching through audio is still a difficult thing to do well. So, I thought I’d produce a transcript, as a little way of honouring Eelco’s memory.

In my opinion, the technical material in the interview speaks for itself. The interview process, however, had a partly comic side — neither of us had a clue what we were doing. I had never been an interviewer before (and only did one further interview after this), and I don’t think Eelco had done a technical interview like this. My understanding of parsing was woefully incomplete, and Eelco hadn’t thought about some of the deeper details in several years (he said afterwards that he was a bit “rusty”), though it clearly came back to him very quickly! We recorded the interview in a dingy college room in Cambridge that, perhaps fittingly, looked like it was last decorated during the heyday of parsing research — the 1960s.

Afterwards, I’m fairly certain that we went for a drink together in the college bar. I don’t remember the specific topic we talked about, but it almost certainly would have been like many others we had: he’d talk about deep technical matters while I made stupid jokes, with the two ends of the conversation somehow meeting in the middle from time to time. I unthinkingly expected to have more of those conversations in the future. While that’s no longer possible, I have many happy memories of those we did have...

Read more (2023-05-16 09:30)

Why Split Lexing and Parsing Into Two Separate Phases?

“lex and yacc” are two terms that are used together so often that they can seem indivisible. lex takes a string as input and divides it up into lexemes (roughly speaking “words”) and yacc takes a sequence of lexemes and parses them. Since some parsing approaches such as recursive descent parsing unify these phases, why do lex and yacc split them apart?

Running example

Let’s start with a simple language, which allows us to assign simple expressions to variables:

x = 1;
y = x;

...
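
To give a flavour of what the lexing half does on its own, here is a minimal sketch of a lexer for the running example above (the token names, and the use of Python, are my own assumptions rather than anything taken from the post):

import re

# A sketch of a lexer: turn the input string into a flat list of
# (kind, text) lexemes, which a separate parser would then consume.
TOKEN_SPEC = [
    ("INT",  r"[0-9]+"),
    ("ID",   r"[a-zA-Z_][a-zA-Z_0-9]*"),
    ("EQ",   r"="),
    ("SEMI", r";"),
    ("SKIP", r"\s+"),  # whitespace: matched but thrown away
]
PATTERN = re.compile("|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC))

def lex(s):
    lexemes = []
    for m in PATTERN.finditer(s):
        if m.lastgroup != "SKIP":
            lexemes.append((m.lastgroup, m.group()))
    return lexemes

print(lex("x = 1;\ny = x;"))
# [('ID', 'x'), ('EQ', '='), ('INT', '1'), ('SEMI', ';'),
#  ('ID', 'y'), ('EQ', '='), ('ID', 'x'), ('SEMI', ';')]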

Read more (2023-05-02 10:00)

Displaying My Washing Machine’s Remaining Time With curl, jq, and pizauth

A couple of months ago our washing machine produced a considerable quantity of smoke when I opened its door, which I interpreted as a suggestion that a replacement was needed. After rummaging around the internet for advice, a couple of days later a new Miele washing machine took its place in our home.

As well as boggling my mind at how much better the new machine was than the cheap machine it replaced, I was pleased to discover that Miele make available a third-party API which one can use to interact with the machine. Putting aside any worries about connecting a high-speed rotating device to the internet, I decided to have a poke around. Although the API is more limited in practice than in theory – the API has support for setting values, but those mostly aren’t used – I eventually realised it could help me solve one recurring problem.

Like most washing machines, ours beeps to notify me that it’s finished. However, I can’t always empty it straight away, the beep is obnoxious and repetitive, and it ends up disturbing everyone in the house. I can turn off the beep, but then I don’t know when the machine has finished. Miele’s app can notify me, but it regularly logs me out, and finding out the time remaining is a chore. What I really wanted was a countdown of the remaining time and a notification on my desktop computer. Fortunately, I can do exactly that using basic Unix tools. Here’s what a sped-up version of the result looks like (the middle part is hugely sped up; the end part is sped up slightly less so): ...

Read more (2023-04-11 10:15)

How Big Should a Programming Language Be?

Reading the thought-provoking “Patterns & Abstractions” post reminded me of a long-held opinion I have about programming language design: we have a tendency to keep adding features to a language until it becomes so big that its sheer size makes it difficult to use reliably. Since most of us spend most of our time programming in one language, it can be difficult to see a common trend amongst languages in general.

Language size over time

Let me start with a concrete example. I started using Python around version 1.3, though the first version I really remember was 1.4. Python 1.5 was a real improvement, adding the assert statement, nested packages (“hierarchical modules” in the release notes), and the re regular expression module — things that I suspect nearly every modern Python programmer finds useful. At that point, Python was still a relatively small language, to the extent that you could reasonably expect to store nearly every detail about it (and its implementation!) in your head.

Python 1.6 added support for Unicode — support that was clearly useful, but which turned out to be sufficiently awkward that (somewhat) fixing that awkwardness became a major cause underlying the schism between Python 2 and 3. As Python 1.6 turned to 2.0 and beyond, this formerly small language kept adding features, many of which I find useful (e.g. list comprehensions), but each of which has inevitably made the language bigger...

Read more (2023-03-23 15:00)

Rust’s Two Kinds of ‘Assert’ Make for Better Code

Daniel Lemire’s recent post “runtime asserts are not free” looks at the run-time cost of assert statements in C and shows that a simple assert in a frequently executed loop can cause significant overhead.

My own opinion on assertions has shifted over the years, from “I don’t see the point” to “use them sparingly” to “use them as much as possible”. That last shift is largely due to Rust having two kinds of “assert” statement – assert and debug_assert – which has allowed me to accurately express two different kinds of assertions, largely freeing me from performance worries. If you come from a language that only has one kind of assert statement, this distinction can seem pointless, so in this post I want to briefly explain why it helped shift my thinking.

Background

Let me quickly define what I mean by an “assert”: it’s a programming language statement that checks a property and causes a crash if that property does not hold (conventionally called a “failing assert”). For example, if I have a Python program with a list of people’s ages and calculate the minimum age, I might want to check that the youngest person doesn’t have a negative age: ...
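
A minimal sketch of that kind of check might look as follows (my own illustration, with made-up data, not code taken from the post):

# A sketch of the kind of assert described above: check a property holds
# and crash with a message if it doesn't. The data is made up.
ages = [42, 17, 23, 8]
youngest = min(ages)
assert youngest >= 0, "ages must not be negative"
print("youngest age:", youngest)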

Read more (2023-03-16 16:00)

Scheduling my Electricity Usage

Modern societies function in large part thanks to our ability to use as much electricity as we want, when we want it. As renewable energy (solar, wind) plays a greater part in the electricity generation mix, our opportunities to use “cleaner” energy increase, but renewables are also fickle — sometimes the wind doesn’t blow, and the sun doesn’t shine. Today in the UK, for example, the weather is overcast and still: solar and wind aren’t going to generate much power. In such cases, the electricity we’re using is generated from much “dirtier” sources.

For the last 6 months or so, I’ve been trying to see what I can do to increase the “clean” proportion of electricity I use. I have a lot of limiting factors: I can’t, for example, install solar panels or a wind turbine. But what I can do is change when I use electricity.

My intention is not for me (or my family!) to live like a stylite or troglodyte: personally, I rather like the conveniences and comfort of modern life. Therefore, there are lots of things where my timings are inflexible — I’m not going to turn the oven on to produce a meal at midnight, nor am I going to switch my computer off when there’s a big football match on...

Read more (2023-03-05 09:45)

Why Aren’t Programming Language Specifications Comprehensive?

In Compiled and Interpreted Languages: Two Ways of Saying Tomato I used this little example to show that we can observe differences in how CPython and PyPy execute code:

$ cat diffs.py
print(str(0) is str(0))
$ python3 diffs.py
False
$ pypy diffs.py
True

Although I explained in that post that programming language specifications and implementations are separate things, I didn’t explain why it’s ever considered acceptable for different implementations of a language to produce different output for a given program. In this post I’m going to attempt to correct that oversight, and to do so in a way that I hope is relatively approachable. As that suggests, I will be as informal as possible, since I’ve found that more formal treatments of this subject can obscure where it has a day-to-day impact on programming...

Read more (2023-02-16 10:00)

Distinguishing an Interpreter from a Compiler

In Compiled and Interpreted Languages: Two Ways of Saying Tomato, I showed how any language can be implemented as an interpreter or a compiler. However, I was deliberately vague when defining how one might distinguish an “interpreter” from a “compiler”, in no small part because I couldn’t think of a crisp definition of the two terms. Although I wasn’t quite as vague as “I know it when I see it”, I was uncomfortably close.

It was thus with great interest that I read a comment on the post from a good friend, and language implementation veteran, Mario Wolczko. In a sense, Mario gave two ways of distinguishing compilers from interpreters, but I’m going to quote the one that made my jaw drop:

How to distinguish between a compiler and an interpreter? One way is to look at the time it takes. A compiler should run in time roughly proportional to the size of the source, whereas the runtime of an interpreter is determined by how much work the source program has to do for a given input. For example, suppose the source program reads a number and then has a loop whose trip count is this number. The time it takes to compile this program will be independent of the number, in contrast to the time it takes to interpret the program.
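
To make the example in Mario’s quote concrete, here is a tiny program of the kind he describes (my own illustration, not from the post): the work an interpreter does depends on the number typed in, whereas the time to compile the program does not.

# Read a number, then loop that many times: the trip count -- and hence the
# time an interpreter spends running this program -- depends on the input,
# but compiling it takes the same time whatever number is typed in.
n = int(input("number: "))
total = 0
for i in range(n):
    total += i
print(total)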

...

Read more (2023-01-26 12:00)

try_repeat released

One of the great things about Unix is the ability for users to add small tools that fit in seamlessly with those provided by the base operating system. I’ve written a few such tools over the years.

Today I’m releasing another little Unix tool try_repeat, which tries to run a command n times, exiting early if the command exits with a non-zero exit code. Think of it a bit like the repeat command built into some shells (e.g. zsh) but which “exits as soon as an execution of the command fails”. This is particularly useful when trying to find intermittent failures in a command: I can set an upper bound (i.e. “if the command runs n times successfully, I’ll assume there are no problems”) but be notified if there are any problems earlier on.
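
For illustration, the core behaviour can be sketched in a few lines of Python (this is a sketch of the semantics described above, not try_repeat’s actual implementation):

# A sketch of the behaviour described above: run a command up to n times,
# stopping as soon as an execution exits with a non-zero exit code.
import subprocess
import sys

n, cmd = int(sys.argv[1]), sys.argv[2:]
for i in range(n):
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"failed on run {i + 1} of {n}", file=sys.stderr)
        sys.exit(result.returncode)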

I’ve had a basic version of this command knocking around in my bin/ directory for quite a while. Earlier today I was trying to debug a non-deterministic failure and fiddling around with unwieldy shell commands. Once I rediscovered try_repeat, it allowed me to quickly hunt down the non-deterministic failure, fix it, and feel confident that I’d fixed it...

Read more (2023-01-25 11:45)

Why We Need to Know LR and Recursive Descent Parsing Techniques

A couple of people have asked me for my thoughts on part of an article from Tiark Rompf called (roughly) “Just Write the Parser” which advocates the use of recursive descent parsing over more “formal” approaches to parsing, at least in the context of teaching compilers. I’m fairly sure I read this article a couple of years ago, but it may have been updated, or simply have been discovered anew. Either way, since I admire both Tiark and his work a great deal, it was useful for me to engage with his ideas and compare them to my own.

In this post, I’m going to summarise Tiark’s argument as I see it, and try to explain where I agree and disagree with that summary. I will inevitably rehash many of the arguments from my 2020 “Which Parsing Approach?” post, but I’m going to try to focus on some factors which I now realise were at best buried, and at worst inadequately explained, in that previous post.

Summarising the argument

In essence I believe Tiark is making two points: ...

Read more (2023-01-17 08:00)

Compiled and Interpreted Languages: Two Ways of Saying Tomato

What we might think to be a settled truth often isn’t. I was reminded of this recently after I publicly stated that there is no such thing as a compiled or an interpreted programming language: many people have made clear to me that they believe strongly that languages clearly fall into one of these two categories. Indeed, a simple web search suggests that the vast majority of pages describing compiled and interpreted languages propose a crisp delineation between the two.

I certainly sympathise with the temptation to classify languages in this way, because most programming languages are commonly implemented using one technique or the other. However, experience has taught me that it’s important not to conflate the most common implementation choice with inherent properties of a language.

In this post I’m therefore going to show, using a series of programming language implementations, why languages shouldn’t be classified as compiled or interpreted. Before I do that, I need to start by clarifying the difference between a programming language specification and an implementation of that specification...

Read more (2023-01-10 08:00)

Software Security Research Position

The soft-dev research team is growing again and we have a position open for a Research Associate / Fellow in Software Security — the full position is listed here with a closing date of Jan 25th.

The position is looking at the security of software systems in the context of CHERI, in particular what we can do to make web browsers more secure using CHERI.

There’s considerable flexibility in how we go about tackling this problem, so if you think this sounds potentially interesting, please consider applying. We’re an open-minded group: you might, for example, be a researcher who wants to work on your programming chops; or a programmer who wants to work on your researcher chops. Most importantly, you need to be enthusiastic about software, partly because the rest of us are, but mostly because, with that, you can learn nearly everything else you need on the job. You do need to be eligible to work in the UK, though we are flexible about where you work within the UK...

Read more (2023-01-09 08:00)

How Might Generative AI Change Programming?

The use of AI (Artificial Intelligence) techniques, specifically ML (Machine Learning) and its various sub-fields, is changing many fields and undoubtedly will change more in the coming years. Most of us are at least generally familiar with the idea of using ML to identify patterns in data. More recently Generative AI (“GAI” for the rest of this post), in the form of systems such as ChatGPT and Stable Diffusion, has made itself more widely known. Rather than simply classify new data, GAI can, as the name suggests, generate new outputs that conform to the underlying patterns contained in the model. Existing ML systems, in general, and GAI systems particularly, are almost certainly the harbingers of further advances. This inevitably leads to speculation about “what’s next?”

From my perspective, the obvious question is: how might ML and GAI change programming? In particular, the rapid advances in GAI have led many to assume that we will gradually do away with programming as a human activity. Although it’s rarely spelt out explicitly, this implies that a GAI system can take in a human description (or “specification”) and produce usable software from it. At a small scale, this is already possible, with the best known example being CoPilot.

When it comes to the generation of software, GAI and programming seem surprisingly similar: they both take in a specification and produce software as output. Both fill in gaps in the inevitably incomplete specification in order to make complete software. However, the two take fundamentally different approaches to understanding and filling in those gaps: programming forces humans to use their knowledge and experience to manually fill in the gaps; GAI uses the model it has learnt from existing programs to automatically fill in the gaps. This might seem like a small, perhaps even a pedantic, difference, but I believe it has significant consequences...

Read more (2022-12-15 08:00)

pizauth: differentiating transient from permanent errors

As and when I’ve found some spare moments, I’ve been working towards getting pizauth in a state where I think it’s ready for a stable release. I’m now making what I hope is the final alpha release available.

Since the last alpha release I have been in many more situations where pizauth only has intermittent internet access. Previously pizauth treated a wide category of errors as transient — that is, the sorts of errors that generally resolve themselves given a little bit of time (e.g. a temporary loss of internet access). However, while some errors (e.g. DNS lookup failing) are probably benign, others (e.g. HTTP requests returning non-200 codes) show that something serious has definitely occurred.

The new alpha release of pizauth now differentiates (possibly) transient errors from (definitely) permanent errors. If a permanent error occurs when refreshing an access token, then the token is invalidated, and the error logged. The reason that the user isn’t explicitly informed is that the true error (e.g. “your account is no longer valid”) is generally masked by another more generic error (e.g. “server refused to refresh token”). By invalidating the token, the user will then be asked to reauthenticate, at which point the true error is more likely to be properly reported. In the (hopefully) rare cases where there are persistent permanent refresh errors, users can look through logs to find out what happened...

Read more (2022-12-14 08:00)

November Links

...

Read more (2022-11-30 08:00)

More Evidence for Problems in VM Warmup

Some readers may remember some work I and others were involved in a few years back, where we looked at how programming language VMs (Virtual Machines), mostly those with JIT (Just-In-Time) compilers, warmed up — or, often, didn’t. In this blog post I’m going to give a bit of background to this area, and then look at a newly published paper (not written by me!), which gives further evidence of warmup problems.

Background

VMs start programs running in an interpreter where they observe which portions run frequently. Those portions are then compiled into machine code, which is used in place of the interpreter. The period between starting the program and all of the JIT compilation completing is called the warmup time. Once warmup is complete, the program reaches a steady state of peak performance.
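
One rough way to observe this behaviour (a sketch of the general idea, not the methodology of our work or of the new paper) is to time each in-process iteration of a benchmark and see whether, and when, the timings settle down:

# A sketch of in-process benchmark timing: run the same workload repeatedly,
# recording how long each iteration takes, so we can see whether the times
# ever settle into a steady state.
import time

def workload():
    return sum(i * i for i in range(100_000))

timings = []
for _ in range(200):
    start = time.monotonic()
    workload()
    timings.append(time.monotonic() - start)

for i, t in enumerate(timings):
    print(f"iteration {i}: {t:.6f}s")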

At least, that’s what’s supposed to happen: our work showed that, on widely studied small benchmarks, it often doesn’t. Sometimes programs don’t hit a steady state; sometimes, if they do reach a steady state, it’s slower than what came before; and some programs are inconsistent in whether they hit a steady state or not. A couple of examples hopefully help give you an idea. Here’s an example of a “good” benchmark from our dataset which starts slow and hits a steady state of peak performance: ...

Read more (2022-11-15 08:00)

What is a Research Summer School?

If I say “summer school” to you then you’ll probably think of extra sessions in summer holidays for children; depending on where you grew up, you might expect such sessions to be either exciting non-academic activities or catch-up lessons. What about summer schools for research students (what I’ll call “research summer schools” for the rest of this post, though in research circles they’re unambiguously referred to as just “summer schools”)?

I’ve co-organised four in-person research summer schools, most recently as part of the Programming Language Implementation Summer School (PLISS) series, and spoken at two others, and one thing that I’ve realised is that many people don’t really know what they involve. Indeed, I didn’t fully realise what they are, or could be, even after I’d been involved in several! This post is my brief attempt to pass on some of what I’ve learnt about research summer schools.

Let’s start with the obvious high-level intent: research summer schools are intended to help research students better understand a research area. “Research students” is an umbrella term: the majority of attendees tend to be PhD students, but there is nearly always a mix of people from other stages of life, particularly postdocs and BSc/MSc students, but sometimes including people who are coming at research from a less common career path...

Read more (2022-11-10 08:00)

October Links

...

Read more (2022-11-01 08:00)

pizauth: another alpha release

A few weeks back I made the first alpha release of pizauth, a simple program for requesting, showing, and refreshing OAuth2 access tokens. I’ve been pleasantly surprised at the number of people who’ve tested pizauth since then, mostly successfully. Inevitably, pizauth hasn’t quite worked perfectly for everyone, so I’m making available the next alpha release, 0.1.1. One new feature, and one portability fix, are particularly worth mentioning.

Running pizauth on a remote machine

A valid use case for pizauth is to run on a remote machine (e.g. to download email in a cron job on a server), but for authentication to happen locally. I had assumed, but hadn’t tested, that using pizauth with ssh -L would work. Indeed, it did work, but it required users to work out what port pizauth’s HTTP server (a vital part of the OAuth2 authentication sequence) was listening on. Users could discover the port number by carefully examining the authorisation URL:

https://login.microsoftonline.com/common/oauth2/v2.0/authorize?access_type=offline&code_challenge=thZhkonKVClNmeyg8ahkah9A*£JZoGJ5AGSGjSZBEWw&code_challenge_method=S256&scope=https%3A%2F%2Foutlook.office365.com%2FIMAP.AccessAsUser.All+https%3A%2F%2Foutlook.office365.com%2FSMTP.Send+offline_access&client_id=<your-client-id>&redirect_uri=http%3A%2F%2Flocalhost%3A8765%2F&response_type=code&state=Y5f1CUxNUbw

...

Read more (2022-10-20 08:00)

UML: My Part in its Downfall

In the last year or two there has been a slow but steady trickle of articles attempting to account for UML’s lack of long-term success (if you have time for only one, I suggest Hillel Wayne’s article). As fate would have it, I had some involvement in UML standardisation in the early 2000s, so I saw some of the goings-on from the “inside”. Although I’ve touched on this before, I’ve never written about my experiences in detail because I was worried about offending people who I like. 17 years after the fact, I hope that the likelihood of anyone being offended is fairly low.

In this post I’m going to try and explain some of the factors that I think contributed to UML’s downfall. To some extent this is a historical document, at least of my perspective. But, with the benefit of hindsight, I feel there are general lessons to be drawn about how both group dynamics and standardisation can develop in unfortunate ways. Be forewarned that I only saw part of what was going on (so I will be unaware of possibly important details), I didn’t write a diary (so I will recall things incorrectly), and my recollections are bound to reflect my biases (my ego will inevitably encourage me to relay the story in a way that presents me in a better light than I deserve).

Background

People had long created diagrams to document important aspects of software. Over the course of the late 80s and early 90s, three diagramming styles or, more grandly, “methods”, had started to become popular: the Booch, Jacobson, and Rumbaugh methods. After two merges, these three methods were combined to create UML (hence the “U” standing for “Unified” in UML), released as a standard through the OMG (Object Management Group), a standards body that had previously been best known for the CORBA standard...

Read more (2022-10-03 08:00)

September Links

...

Read more (2022-10-01 08:00)

pizauth, an OAuth2 token requester daemon, in alpha

One Monday a few weeks back, I found myself unable to send emails via my work account. I had received no advance warning and there was no clue in the error message I was seeing as to what the cause might be. I was briefly tempted to view being cut off from work as a good thing before, irritatingly, the sense of duty my parents instilled in me kicked in. After a while I realised that the problem was that my msmtp setup was using basic authorisation – what the rest of us think of as a traditional username and password – to send emails. Not only does my work’s Microsoft Exchange server no longer accept basic authentication, but Microsoft are killing it off for everyone before the end of the year — more people are thus about to experience the same confusion I experienced.

A common alternative to basic authorisation is OAuth2. From the perspective of this post, OAuth2 requires you to use a web browser to authenticate yourself (which might then involve two-factor authentication e.g. via your mobile phone) after which you can obtain time-limited access tokens. You can use access tokens to authenticate yourself to, for example, an IMAP or SMTP server. As irritating as I sometimes find two-factor authentication, there is no doubt that OAuth2 is more secure overall than basic authentication.

In my case, I solved my immediate problem with email-oauth2-proxy, which has a very cunning approach to the problem (it’s a semi-transparent proxy that provides the illusion of basic authentication to user clients while performing OAuth2 authentication on their behalf), but it occasionally stalled and required me to monitor an xterm for authentication links, which doesn’t sit neatly in my email setup. I then tried mailctl but it segfaulted on me (perhaps because of an incompatibility with OpenBSD) and I’m not familiar enough with Haskell to debug such things...

Read more (2022-09-28 08:00)

A Week of Bug Reporting

If you use software regularly, and have your wits about you, you will often realise that you’ve encountered a bug — in other words, that the software has done (or not) something that it shouldn’t (or should have). When this happens, the vast majority of people moan to the nearest bystander about how incompetent the people behind the software are and then carry on with whatever they were doing — often stumbling into the same bug repeatedly. Only a minority of people think of reporting bugs and, if they do think of it, they often fail to do so either because they think it’s too time-consuming or that they might end up looking silly.

It’s unfortunate that so few people report bugs. First, surprisingly few bugs affect a single person: by the time someone does report a bug, it can have affected hundreds or thousands of other people. We can’t all be freeloaders — someone has to take the plunge and report the bug. Second, bugs frequently appear in contexts unfamiliar to the software’s developers. Most obviously, we all have slightly different software installations, and those variations can cause odd effects. Less obviously, users often use software in ways unimagined by its creators, such that some previously untested combinations of operations work incorrectly. Either way, we cannot assume that the software’s developers will encounter the same bugs that we do: in many cases, if we don’t report bugs, the developers simply won’t know they exist.

Reporting bugs in someone else’s software is different from finding and fixing bugs in one’s own code, because we understand those other systems much less well. The sense of confusion I have with bugs in my own code is magnified when using other systems. I’m less confident that I’m actually seeing unexpected behaviour; less sure what the cause might be; and have lower trust than normal in any fix I propose. That means that we must acknowledge an important trade-off: a full understanding of the cause of a bug in an unfamiliar system is often beyond any reasonable investment of time; but if we just say “it doesn’t work”, no-one can be expected to replicate, let alone fix, the bug. Somewhere in the middle is a sweet-spot where the bug reporter invests the minimum amount of time to generate a good enough bug report that a developer can use to fix the bug...

Read more (2022-08-31 08:00)

August Links

...

Read more (2022-08-30 08:00)

Making a Video of a Single Window

I recently wanted to send someone a video of a program doing some interesting things in a single X11 window. Recording the whole desktop is easy (some readers may remember my post on Aeschylus which does just that) but it will include irrelevant (and possibly unwanted) parts of the screen, leading to unnecessarily large files. I couldn’t immediately find a tool which did what I wanted on OpenBSD but through a combination of xwininfo, FFmpeg, and hk I was able to put together exactly what I needed in short order. Even better, I was able to easily post-process the video to shrink its file size, speed it up, and contort it to the dimension requirements of various platforms. Here’s a video straight out of the little script I put together:

...

Read more (2022-08-09 08:00)

Two researcher jobs in soft-dev

The soft-dev research team is growing and there are two open jobs:

Both jobs are looking at the security side of systems in the context of CHERI. Roughly speaking, the CapableVMs position is looking to secure programming language virtual machines, and the Chrompartments position is looking to secure web browsers...

Read more (2022-08-08 08:00)

Another Reorganisation

To every idea there is a season, though some ideas seem to have purpose only under faulty assumptions. In April I decided to rethink how I went about my “informal” writing, which had previously been highly intermittent, rather formal, and interminably long. In When is a Blog a Blog? I renamed my old blog to “essays” and aimed for more frequent, less formal, shorter updates in this blog.

After 4 months it’s time to evaluate how this split is working. I have managed rather more frequent updates, in a less formal style, and they are shorter in length. But – and this is perhaps unsurprising for someone who a school report once pinpointed as prone to “prolixity” – I’d hardly call many of my recent posts short in an absolute sense. Now I’ve met the new boss, I realise that he acts rather similarly to the old boss.

I have thus been forced to acknowledge that it makes little sense to divide my writing into separate “blogs” and “essays”. There is more that unites my writing, for better and worse, than divides it. I have therefore merged all my old “essays” back into my blog. Suddenly my blog archive has grown from 18 entries to 74. At some point I intend creating a separate list of what I consider the posts which are most likely to be of longer-term interest, because there is now a definite divide between more substantial and ephemeral posts...

Read more (2022-08-01 08:00)

July Links

...

Read more (2022-07-31 08:00)

What’s the Most Portable Way to Include Binary Blobs in an Executable?

I recently needed to include an arbitrary blob of data in an executable, in a manner that’s easily ported across platforms. I soon discovered that there are various solutions to including blobs, but finding out what the trade-offs are has been a case of trial and error. In this post I’m going to try and document the portability (or lack thereof…) of the solutions I’ve tried, give a rough idea of performance, and then explain why I’ll probably use a combination of several solutions in the future.

Outlining the problem

Let’s assume that I want to embed an arbitrary null-terminated string into an ELF executable and that I’m OK with that string having the fixed symbol name string_blob. My C program may then look as simple as:

#include <stdio.h>
extern char string_blob[];
int main() {
    printf("%s\n", string_blob);
    return 0;
}
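
One approach that works on more or less any platform (though I’m not claiming it’s the post’s eventual recommendation) is to generate a C source file from the blob and link the resulting object file in. A rough sketch of such a generator:

# A rough sketch of one portable approach: turn an arbitrary blob into a C
# source file defining string_blob, which can then be compiled and linked
# like any other object file.
import sys

def blob_to_c(path):
    with open(path, "rb") as f:
        data = f.read()
    octets = ", ".join(str(b) for b in data)
    # Null-terminate so the main program can treat it as a C string.
    return f"char string_blob[] = {{{octets}, 0}};\n"

if __name__ == "__main__":
    sys.stdout.write(blob_to_c(sys.argv[1]))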

...

Read more (2022-07-25 08:00)

CHERITech22 and PLISS 2022

One of the little pleasures of my job is organising events where people can come together to share knowledge and form new bonds. Unsurprisingly, covid put something of a dampener on in-person events, but I’m happy to announce two upcoming in-person events I’m co-organising:

CHERITech22 is free; PLISS does have a registration fee, though we are able to subsidise some attendees who otherwise might not be able to attend. In both cases you have to fill out a form expressing your interest in attending, and we’ll notify the successful applicants later...

Read more (2022-07-19 08:00)

How I Clean my Glasses

I’ve been wearing glasses since I was 4 years old: ever since, I’ve put them on as soon as I wake up and take them off just before I flop into bed. In all that time, I’ve never broken a pair of glasses, but I still get a new pair every 2 years: sometimes the frame starts to fall apart, but more often the quality of vision decreases. Often I notice increasing problems in night vision, though my last pair degraded in such a way that my eyes kept losing focus even in daylight, which was disconcerting to say the least.

Becoming frustrated, I guessed that the way I clean my glasses might be causing the lenses to degrade more quickly than necessary. I used to just breathe on them, and wipe them with whatever came to hand (usually a shirt). Then I switched to using a dry microfibre cloth which I would wash whenever it started leaving horrible smear marks on the lens. Then I used small quantities of soap and water with a microfibre cloth. Each of these changes seemed to slightly improve the situation, but I wondered if I could do better. The internet seemed mostly full of people telling me to do what I was already doing, which wasn’t what I was hoping for. I eventually realised that one group of people whose profession is based on lenses might have better advice: and, indeed, photographers have developed a number of techniques for cleaning camera lenses.

For the last 12 months I’ve been using an adapted version of what seems to be the “standard” advice for cleaning camera lenses. My lenses still seem as good as new (I can’t see any abrasion at all, nor any problems at night), which suggests this is a decent approach. Caveat emptor: I have not conducted a scientific test, so some of what comes next might be placebo, or the results of good fortune, or possibly even dangerous to your health. With that in mind, here’s how I now clean my glasses (note: I don’t use affiliate links, so you don’t have to worry that I’m trying to make money off you): ...

Read more (2022-07-13 08:00)

June Links

...

Read more (2022-07-01 08:00)

What Metric to Use When Benchmarking?

What is the right metric to use when measuring a program’s performance? There are various possibilities, from memory usage to branch prediction hit rates, but I’m going to pick on two that I see widely used: CPU instructions executed (what modern CPUs call “instructions retired”) and wall-clock time (i.e. “how much time has elapsed in the real world?”). In this post, I’m going to try and compare both, showing why each has fundamental weaknesses, before explaining why I use each in different circumstances.

Measurement instability

Let’s start by benchmarking a very simple program that runs on every Unix OS I have access to:

awk 'function fib(n) { \
  return n <= 1 ? 1 : fib(n-1) + fib(n-2) \
} BEGIN { fib(30) }'

...

Read more (2022-06-30 08:00)

Chance, Luck, and Risk

Every so often, I’m asked to talk about my “career”, generally to people younger than I am. I realised early on that the path I’d taken was so full of unexpected turns that it would be ludicrous to talk about it as if it was the result of hard work and deliberate choices. I therefore chose to emphasise how lucky I’d been. Not only did this have the virtue of modesty but it genuinely seemed the most plausible explanation.

However, luck as an explanation didn’t get me very far when I looked at other people. The first hint was after I’d seen several instances where, for a given group of people, of a similar age, background, and talents, some ended up being more successful than others. What surprised me was how often my gut feeling about who would go on to do well, or not, turned out to be correct — far too often for luck, mine or theirs, to be a convincing explanation. The second hint was when I realised from reading history that some people were successful multiple times in their lives: it didn’t seem plausible to me that all of them had done so purely through luck.

As I currently think of things, there are at least three concepts that interact with each other. At the risk of overloading commonly used terms I think of them as: ...

Read more (2022-06-15 08:00)

What Makes a Good Research Proposal?

Research is rarely quick: it takes time, people, and (in most cases) equipment to complete. To convince those in control of the purse strings that we should be given the necessary resources to tackle a research problem, we create a research proposal. This might be: a written document, a presentation, or a series of conversations; presented to academia or industry; and aimed at convincing a manager or an external funding agency (or even oneself!).

Whatever the situation, I believe that good proposals share much in common. Unfortunately, many guides I’ve seen to creating proposals focus, sometimes very cynically, on the quirks or biases, real and imagined, of funders. In contrast, in this post I’m going to try to focus on the ideal core of a proposal, because that has inherent value in its own right, and also because it can be easily adapted to the specifics of a given funder.

By focussing on this core, I’m hoping that I might help you answer two questions: “do I have an idea that’s worth asking for resources to tackle?” and “how do I present that idea in a way that other people will best understand?” To make my life easy, I’m going to frame this discussion in terms of “readers” and “writers”, though I hope the ideas generalise beyond written proposals, and “funders” as a shorthand for those in control of resources...

Read more (2022-06-08 08:00)

May Links

...

Read more (2022-05-31 08:00)

Multiplicity Choices Are Hard to Model and Change

Programming involves continually making choices, whether we realise we are doing so or not. We hope that each choice advances us towards our intended goal of making a program that does what its users will want it to do. However, because the problems we are tackling are nearly always bigger than our ability to fully comprehend them, we often make choices that we later have to correct.

In my experience, the most common type of correction I have to make is where I realise that a piece of information that I’d localised to one component is also required by another component. Depending on the nature of the components involved (functions, classes, etc.), there are various ways of fixing this. Sometimes I might explicitly pass state through all the paths between the two components; sometimes I might place that state somewhere where it’s directly accessible to both components. Either way, the changes involved are invariably tedious, and often make the program’s structure slightly worse, but are rarely intellectually challenging. It can be frustrating to make such changes in a statically typed language, when one can spend an inordinate amount of time with the program not compiling, but when the program does compile again, there tend to be few if any resulting bugs to fix.

There is, though, a type of correction that makes me break out in a cold sweat: when I have made an incorrect choice about the multiplicity of a piece of information. The changes I have to make are typically far less mechanical, giving me many more opportunities to introduce bugs. In this post I’m going to look at this in more depth in the context of programming languages with static type systems...
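
To make that concrete before going further, here’s a hypothetical C sketch (the names are invented) of the sort of multiplicity change I mean: a struct that assumed a person has exactly one email address having to cope with “zero or more”:

#include <stddef.h>

/* Before: the (incorrect) assumption that a person has exactly one
   email address. */
struct person_v1 {
    char *name;
    char *email;
};

/* After: correcting the multiplicity to "zero or more". Every use of
   .email now has to be rethought -- does it want the first address, all
   of them, or something else? -- rather than just mechanically rewritten. */
struct person_v2 {
    char *name;
    char **emails;      /* pointer to an array of email strings */
    size_t num_emails;
};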

Read more (2022-05-26 08:00)

Using a “Proper” Camera as a Webcam

It’s 2022, so many of us now have 2 years’ experience of online meetings. My experience is that the better I can see and hear someone in an online call, the more natural the conversation feels. Put another way, I think one does a courtesy to the other person by improving the quality of how one appears in a video call.

Unfortunately, most webcams are terrible cameras that take terrible pictures. The market for decent quality webcams seems to be non-existent, with the available choices seeming to sit somewhere between bad and mediocre. Laptop webcams have the additional problem that thin lids mean that the camera is fighting a losing battle against physics that software can only partly overcome.

Fortunately there is a solution: you can use a “proper” camera, of the sort you associate with earnest tourists, as a webcam. In particular, most modern mirrorless cameras from the last few years can do an excellent job if you know what you’re doing...

Read more (2022-05-17 08:00)

Programming Style Influences

If you ever talk to, or read an interview with, a musician, they will inevitably, and quickly, end up talking about their influences. While there is an element of nostalgia to this, perhaps even of acknowledging debts, I see its main purpose as helping spread knowledge about who the most interesting set of musicians are.

In programming, in contrast, we rarely talk about our influences, other than a frequently expressed allegiance to a single programming language. This seems a shame to me, because it denies those who are new to our field helpful pointers to programmers and systems whose style might be a useful influence.

As a modest attempt to rectify this situation, I’m going to show how one particular system, OpenBSD, has had a big influence on my programming style over time. It’s far from the only system that’s influenced me – not to mention various programmers and communities who’ve also been an influence – but I need something concrete to use as an example...

Read more (2022-05-10 08:00)

snare: a Minimalistic GitHub Webhooks Runner

Starting with extsmail I seem to have developed the accidental hobby of occasionally developing new tools in the Unix tradition. In this post I’m going to introduce snare, a minimalistic GitHub webhooks runner daemon for those cases where something like GitHub Actions doesn’t give you sufficient control. I’ve been using snare in production for over 2 years for tasks ranging from running CI on specific hardware, to sending out email diffs, to automatically restarting Unix daemons when their configuration has changed. It’s a simple, but now stable, tool.

What is a webhooks runner? Well, whenever something happens to a GitHub repository – for example, a new PR is raised, or a commit is pushed to a PR – GitHub can send an HTTP request to a specified URL informing it of what happened. Configuring a webhook for a given GitHub repository is relatively simple: go to that repository, then Settings > Webhooks > Add webhook.

snare is a simple program which listens for GitHub webhook events and runs Unix shell commands based on them. You need a server of your own to run snare on. When you add a GitHub webhook, you then need to specify http://yourmachine.com:port/, a secret (in essence, a shared password between GitHub and snare), and then choose which events you wish GitHub to deliver. For example, the default “Just the push event” works well if you want to send email diffs whenever someone pushes commits to a repository...

Read more (2022-05-04 08:00)

April Links

...

Read more (2022-04-30 08:00)

Where do Research Problems Come From?

“Research” is a much broader term than most of us who consider ourselves to be researchers realise. My occasional interactions with people in very different disciplines (e.g. biology) have tended to involve a series of culture shocks to both parties. Even within computing, there are significant differences between sub-disciplines. These seem to mostly go unnoticed or, at least, uncommented on, which is a pity: at the very least it’s worth knowing that the way we do things isn’t the only possible way. One of the most surprising differences, at least to me, is where people expect to find the problems that they then work on.

What is a research problem?

Before I go further, it’s worth defining what I mean by “research problem”. At the most basic, I mean “the problem I’m trying to solve” or “the thing I’m trying to make better”. Research problems tend to come at multiple levels. For example, at the highest level, I want to make software better, but that’s such a vague aim that it’s difficult to convert into meaningful action. What I have to do is find lower-level, more concrete, problems where I can better define what direction I want to head in and what success might look like. For example, I might say “programs written in programming language X are so slow that programmers twiddle their thumbs waiting for programs to run; if I can make programs in language X run twice as fast as they currently do, programmer productivity will increase”.

I’ve actually done two separate things in that small example. Not only have I defined a problem (programs in programming language X run too slowly) but I’ve also motivated why that problem is worth trying to solve (it’s degrading programmer productivity). The requirement to have such a motivation is a common expectation in the parts of computing I work in, but not everywhere. For example, in some areas of mathematics, it’s considered worthwhile to solve a problem irrespective of any possible real-world use. In A Mathematician’s Apology, G. H. Hardy said: ...

Read more (2022-04-28 08:00)

Practising Programming

When we see a world-class musician flawlessly play challenging music, it can be tempting to imagine that they were always able to do so. A moment’s thought makes it obvious that they must have had to spend huge amounts of time practising basic techniques in order to reach that level. What’s less obvious is that they have to continue spending considerable amounts of time simply to maintain that technique, let alone expand it.

In contrast, in programming, we have only a haphazard notion of how one should go about obtaining sufficient technique to become good enough to write good software; and we have almost no notion of continued practice to maintain or expand that technique. Often overtaken by life events – notably, though not only, the requirement to earn money to support a family – many programmers implicitly finish the bulk of their learning early on, and some stop learning entirely.

As someone of rather limited talent, I have long been aware that if I’m not proactive about learning, I will fall even further behind where I would like to be. While some people enjoy practising programming with small puzzles (the programming equivalent of musical scales perhaps?), I am too grumpy to find them enlightening or enjoyable. Instead, I’ve found that real-world tasks that require me to do something new are where I most improve, and most enjoy improving, my programming skills. Fortunately, and at first accidentally, I’ve found that it’s relatively easy for me to continue practising programming on real-world tasks as part of my day-to-day life...

Read more (2022-04-20 08:00)

Making Rust a Better Fit for CHERI and Other Platforms

Recently, Aria Beingessner published an interesting essay “Rust’s Unsafe Pointer Types Need An Overhaul”. There are several parts to this essay, all of which address issues which cause me pain in Rust, and which I would love to see fixed. I want to focus on one issue: Rust’s usize conflates address, data, and object widths into a single static type. While this conflation is unproblematic on many major platforms, it causes problems on others such as CHERI. In my essay from last year on static integer types, I went into some depth on this, and you might find the motivation therein a useful additional explainer to Aria’s essay.

The basic problem with Rust and CHERI is that capabilities – think “an address alongside additional data” – require additional room over traditional pointers, which contain just an address. Modern CHERI capabilities are typically twice the width of what is often called the “native machine word size”. However, Rust’s integer type hierarchy mandates that usize must be wide enough to allow any pointer type to be cast into it without information loss. That means that the obvious way of making Rust work with CHERI is to double the width of usize so that capabilities can be cast to a usize — but this wastes 50% of bits when usize is used to represent sizes or addresses alone.
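
A hedged sketch in C may make the conflation clearer (exact widths depend on the CHERI variant): under pure-capability CHERI C, uintptr_t is capability width, whereas an address-only type such as size_t is not, so casting a pointer through the latter loses information:

#include <stdint.h>
#include <stddef.h>

/* On most platforms uintptr_t and size_t have the same width; under
   pure-capability CHERI C, uintptr_t is capability width (e.g. 128 bits)
   while size_t remains address width (e.g. 64 bits). */
void sketch(char *p) {
    uintptr_t q = (uintptr_t)p; /* keeps the capability, so it can round-trip back to a usable pointer */
    size_t a = (size_t)p;       /* keeps only the address bits; the capability metadata is gone */
    (void)q;
    (void)a;
}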

Aria then makes an intriguing suggestion for how to push Rust in the right direction. The basic idea is to: break the link between pointers and usize so one cannot directly cast between the two; make *mut () the root of the integer/pointer part of the type system (in other words, as well as being a pointer type, *mut () also serves roughly the same purpose as C’s uintptr_t); and to make the address portion of a pointer a usize to allow bit-fiddling operations on that portion. On common platforms (e.g. x86_64) this scheme makes everything work much as it does now, but on systems like CHERI, pointers can now be wider than usize...

Read more (2022-04-13 08:00)

When is a Blog a Blog?

At some point in 2004 I added to my website the sort of thing that most normal people would have called a “blog”. For reasons best forgotten, I called it a “bliki”, before quickly realising that I had no interest in creating anything resembling a wiki. I soon renamed it “technical articles / blog”, an awkward hybrid that reflected my lack of clarity about what I wanted it to be. In late 2012 I gave up and shortened the name to just “blog” — except it never really was a blog, at least not in the sense that other people use that word. Really, what I’ve been doing for the last 18 years is infrequently writing lengthy essays.

“Better late than never” is a phrase that trips off my tongue worryingly easily. In yet another instance, I have finally relabelled my previous “blog” as “essays”. I enjoy writing essays, because they invariably force me to investigate things I’d previously glossed over, and rethink others from scratch. However, that means that essays are hard work — each typically takes about 3 to 5 days of my time. As the years have passed, I’ve found it more difficult both to carve out, and to justify, the time they take — too often, there seems to be something more important to do.

Perhaps the fundamental problem with essays is that, to my mind at least, they come with a contract: they should be on fairly meaty subjects, and have been subject to enough thought that they can stand up to a reasonable level of scrutiny. That makes essays an appropriate medium for getting across mature ideas (or ideas that I think can be matured), but it makes them a poor medium for new, incomplete, or simple ideas...

Read more (2022-04-11 08:00)

Static Integer Types

Over the years I’ve had several conversations with people about static integer types in programming languages. Given the number of subtleties in the design, and pitfalls in the use, of integer types it’s not surprising that most programmers are unaware of at least some of them. However, what is perhaps more surprising is that most programming language designers and implementers aren’t fully aware of some of the subtleties and pitfalls either. That leads to languages and their implementations baking in unintended, and sometimes unfortunate, design decisions. At the end of conversations on this topic, I’ve sometimes said “this stuff is hard and someone should think more about this and write it up”, but that’s an unsatisfying conclusion. So even though I don’t have all the answers – indeed, I’m fairly sure that I don’t know all the questions – I’m going to jot down my current thinking on this subject.

The basics

In this post I’m considering the design of fixed-width integer types in statically typed languages. By “statically typed” I mean types which are enforced at compile-time, whether they are explicitly spelt out in source code or inferred by the compiler. By “fixed width” I mean integers which fit within a number of bits determined at compile-time. These come in two flavours: those where the number of bits is the same on all platforms (e.g. u32 typically means “an unsigned 32-bit integer”); and those where the number of bits is parameterised by the platform (e.g. Rust’s usize might be 16 bits, 32 bits, 64 bits, or bigger depending on the underlying platform). I won’t be considering floating point numbers, arbitrarily-sized integers (i.e. those whose storage size varies at run-time depending on the number they are storing), or the like.

An obvious question – and one that I imagine most new programmers have – is why do we have more than one integer type? Why, on a 64-bit machine, do I not always use 64-bit integers? There are various possible explanations, but to me two stand out. First, we often have to interact with things (from external libraries to the hardware we’re running on) that require integers of certain widths. Second, even though our computers often have large quantities of DRAM, memory is not infinite, and even when there’s plenty of free memory, using it wastefully can slow programs down. For example, imagine that I’m writing a system which records students’ marks out of 10: I only need 4 bits to represent all the valid marks, meaning that I need 16 times less memory relative to a 64-bit integer. If my program records lots of students’ marks, then the memory saving between 4 bits and 64 bits will probably have a noticeable effect on the program’s performance, because the caches in modern CPUs are tiny relative to DRAM. For example, my desktop computer has 32GiB DRAM but the L1 cache (split into two equal-sized sub-parts for instructions and data) is a mere 64KiB (about 0.0002% of the size of DRAM): saving memory is an effective way of improving program performance...
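
As a hypothetical sketch of that marks example in C, a bit-field lets each mark occupy just 4 bits:

/* Each mark out of 10 needs only 4 bits, so two marks can be packed
   together, versus 8 bytes each if every mark were a 64-bit integer.
   (The exact struct size is up to the compiler, but it will be far
   smaller than 16 bytes.) */
struct mark_pair {
    unsigned first  : 4; /* 0..10 fits comfortably into 4 bits (0..15) */
    unsigned second : 4;
};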

Read more (2021-06-22 08:00)

Automatic Video Editing

Amongst the many consequences of COVID-19 has been the suspension of in-person talks: suddenly, people like me have had to think about how to produce prerecorded videos. In this article I’m going to explain what I’ve come to understand about video recording and the “automatic video editing” technique I’ve used for videos such as Virtual Machine Warmup Blows Hot and Cold.

To give you an idea of how unfamiliar I was with video production, 12 months ago I not only had no experience of creating videos but I hadn’t even watched enough videos to form an opinion of what was possible or desirable. However, within a couple of hours of attending my first online conference in June 2020, the challenges of video production had become clear to me. To be blunt, most of the videos I was watching were worse than the equivalent live talk: they were mostly bullet point laden PowerPoint slides with poor quality audio. I’ve long been of the opinion that bullet point presentations are unengaging, as they’re designed for the speaker’s benefit, not the audience’s. However, they become even less engaging when there’s no person to look at and all you can hear is a disembodied, poorly amplified, and fuzzy-sounding mumble. Fortunately for me, I saw a talk by Sam Ainsworth that showed me that a different, rather more engaging, type of talk was possible.

Realising that I’d need to produce videos of my own at some point, I immediately started looking into how one might go about recording videos. Unfortunately, it soon became obvious that common video production tools such as OBS hadn’t been ported to OpenBSD. Undeterred, the following day, I experimented with using FFmpeg to record and mix videos. I’ll go into more detail about FFmpeg later, but at this point it’s enough to know that FFmpeg is a powerful command-line tool that can record and edit videos...

Read more (2021-03-23 08:00)

The Evolution of a Research Paper

As far as I can recall, the first time that I truly tried to read a research paper was when I started my PhD. I don’t remember which paper was the first recipient of my incompetent gaze, but I do remember how completely out of my depth I felt when trying to make sense of it. Over many subsequent papers I tried during that period, I made very little progress in meaningfully digesting any of what I was reading. I quite often found a fog descending on my brain, a curious mix of tiredness and bewilderedness, that left me feeling defeated.

In retrospect, I faced three main issues: I didn’t have a good technique for reading papers; I was trying to read papers that far exceeded my rather narrow technical knowledge; and I wasn’t reliably selecting papers that were well written. To a reasonable extent, sheer repetition tends to fix all of these issues.

However, I eventually realised there was a much deeper issue: I did not understand how papers come into being and certainly had no idea how I should go about writing my own. From my naive perspective, papers seemed to come fully formed as 20 page PDFs, full of dense prose, foreboding formulae, and complex tables. I had little idea how they got to that state and, for whatever reason, nobody I bumped into ever seemed to talk in a meaningful way about the mechanics of writing a paper. Even now, I’m not sure how most people or groups go about writing their papers...

Read more (2021-01-19 08:00)

Automatic Syntax Error Recovery

Programming is the best antidote to arrogance I’ve come across — I make so many errors that I am continually reminded of my own fallibility. Broadly speaking, I think of errors as severe or minor. Severe errors are where I have fundamentally misunderstood something about the system I am creating. Each severe error is a bespoke problem, often requiring custom tooling to understand and fix it. Minor errors, in contrast, are repetitive and quickly fixed. However, they’re also much more numerous than severe errors: even shaving a couple of seconds off of the time it takes a programmer to fix a class of minor errors is worthwhile when you consider how often they occur.

The most minor of minor errors, and also I suspect the most frequent, are syntax errors. They occur for three main reasons: mental sloppiness; inaccurate typing; or an incomplete understanding of the language’s syntax. The latter case is generally part of a brief-ish learning phase we go through and I’m not sure what a good solution for it might look like. The former two cases, however, are extremely common. When I’ve made a small typo, what I want is for the parser in my compiler or IDE to pinpoint the location of the syntax error accurately and then recover from it and continue as if I hadn’t made an error at all. Since compilation is often far from instantaneous, and I often make multiple errors (not just syntax errors), good quality syntax error recovery improves my programming efficiency.
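
To give a flavour of what that can look like – this is a made-up illustration, not the output of any particular tool – consider the input 2 + + 3. A parser with decent error recovery can pinpoint the error and offer repairs that let parsing continue:

2 + + 3
    ^ parsing error here; possible repairs:
      1: delete the second "+"
      2: insert an integer between the two "+"s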

Unfortunately, LR parsers – of which I am particularly fond – have a poor reputation for syntax error recovery. I’m going to show in this article that this isn’t inevitable, and that it’s possible to do surprisingly good automatic syntax error recovery for any LR grammar. If you want to know more details, you might be interested in the paper Lukas Diekmann and I recently published called Don’t Panic! Better, Fewer, Syntax Errors for LR Parsers. The paper also has a fairly brief accompanying talk, if you find that sort of thing helpful:

...

Read more (2020-11-17 08:00)

Stick or Twist?

All of us have points in our lives where we have to decide whether we should continue down our current path or change to another. As a researcher, I often face a constrained version of this problem: should I continue on my current research path or change to another? For a long time I wasn’t aware that I was being faced with such decisions; and, when I did become aware, I wasn’t sure how best to make a decision. Over time I’ve realised that a simple “stick or twist?” heuristic mostly works well for me. I don’t claim that anything in this article is novel, nor do I think I’m describing an approach that’s applicable to every situation — but it might provide some useful food for thought.

Let’s start by dismissing a common heuristic: wait and see. When faced with a hard decision, most of us have a tendency to hope that the underlying problem magically disappears: sometimes we consciously delay making a choice, but often we try and pretend the choice doesn’t exist at all. Although it sounds like a cliché, I’m a firm believer that, in general, not making a decision is equivalent to making a decision. If I dither over whether to buy a shop’s last cake or not, someone else will buy it: I’m then, involuntarily, in exactly the same situation as if I’d decided not to buy the cake. If used too often, “wait and see” turns us into corks on the ocean, entirely at the mercy of events. Except for a small number of exceptionally poor decisions, I’ve found that I personally regret decisions I didn’t take more than decisions I did take.

Next, let’s dismiss the two extreme heuristics for making decisions: never change (which is similar, though with a very different motivation, to “wait and see”) or always change. Because we live in a continually changing world, it is inevitable that even a once-optimal path will become suboptimal over time: not changing guarantees irrelevance in the long run. At the other extreme, change for its own sake means that even when we stumble onto an optimal path, we’ll change off it for no good reason...

Read more (2020-10-07 08:00)

Which Parsing Approach?

We all know that parsing is an important part of designing and implementing programming languages, but it’s the equivalent of Brussels sprouts: good for the diet, but a taste that only a select few enjoy. Unfortunately, I’ve come to realise that our general distaste for parsing is problematic. While many of us think that we’ve absorbed the advances of the 1960s into our collective understanding, I fear that we have regressed, and that we are often making inappropriate decisions about parsing. If that sounds accusatory, I don’t mean it to be: I spent over 20 years assuming that parsing is easy and that I didn’t need to understand it properly in order to use it well. Alas, reality has been a cruel teacher, and in this post I want to share some of the lessons I’ve been forced to slowly learn and acknowledge.

Let’s start with the basics. A grammar encodes the syntax rules for a given language. Parsing is the act of taking in an input (e.g. a source file) and determining if, and how, it corresponds to a grammar. At its most basic level, parsing just says “this input does/doesn’t correspond to the grammar”. That’s rarely useful for programming languages, so we normally execute semantic actions while parsing, allowing us to, for example, build a parse tree that represents the input as a tree. If I have a simple calculator grammar and the input 2-3*4 I might get back a tree that looks like the following:
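
Sketched textually rather than as a diagram, and with the usual precedence rules, the multiplication nests under the subtraction, giving roughly:

minus(int(2), mul(int(3), int(4)))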

...

Read more (2020-09-15 08:00)

Alternative Sources of Advice

With slowly increasing frequency, I am asked for my advice on life or careers, or something similar. Let us put aside worries about this sign of my increasing decrepitude or the poor taste that some people show in those from whom they seek advice. The brutal truth is that most of the advice that I’ve received as an adult – nearly all of which has been well intentioned – has not been right for me. I therefore assume that any advice I might give others would be equally wrong, and so I try and avoid doing so. In this blog post, I will try to enumerate the problems I have with advice as it is commonly given and look at one way that we can do things differently.

In the context of this article, what I mean by “advice” is the guidance given by one adult (generally older) to others (generally younger). Advice can be solicited or unsolicited and it can come through many mediums such as one-on-one conversations, listening to a talk, or reading a book.

A common way that advice is given is as a series of commands such as “you should do X” or “you should never do X”. Commands can be expressed bluntly (“you must always do X”) or diplomatically (“in my opinion it is always wise to do X”); they can be general (“you should travel to as many places as possible”) or specific (“you should avoid person P”); and so on...

Read more (2020-05-06 08:00)

Minimum Times Tend to Mislead When Benchmarking

Most of us want, or need, to benchmark software at some point, either because we’re proactively trying to avoid performance problems, or because we’re trying to narrow down the cause of a specific performance problem. Benchmarking seems like it should be simple, but there are countless ways to go wrong: indeed, as things stand, none of us really knows how to do it well, at least not in general. In this short post, I want to make a simple argument against using the minimum time of a benchmark as a proxy for that benchmark’s performance. Doubtless I am recasting other people’s arguments, poorly, but I’ve seen a resurgence in people advocating for the use of the minimum time recently, so it seems like a good time to make the opposing case.

The basics

There are all sorts of things that one might wish to measure about a program: how much memory it consumes, the number of files it opens, and so on. By far the most common measure is “wall-clock time” – i.e. how much time elapses in the real world while the program is running – and that’s what I’m going to concentrate on exclusively here.

Once upon a time, measuring wall-clock time was fairly simple. You read the system clock, ran a program, read the system clock again, and reported the delta between the two clock measurements. There were some minor annoyances to do with clocks but measurements tended to be fairly predictable and repeatable. The weasel word in that statement is “fairly”. Noise has always been an issue in benchmarking: except in rare circumstances, there are always external factors (e.g. other processes; even the kernel itself) that can interfere with your benchmark’s execution which, collectively, we tend to refer to as noise...
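
A minimal C sketch of that recipe (the clock to use, and its resolution, vary by platform) looks something like:

#include <stdio.h>
#include <time.h>

/* Read the clock, run the thing being benchmarked, read the clock again,
   and report the difference. */
int main(void) {
    struct timespec before, after;
    clock_gettime(CLOCK_MONOTONIC, &before);
    /* ... run the code being benchmarked ... */
    clock_gettime(CLOCK_MONOTONIC, &after);
    double secs = (after.tv_sec - before.tv_sec)
                  + (after.tv_nsec - before.tv_nsec) / 1e9;
    printf("%f seconds elapsed\n", secs);
    return 0;
}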

Read more (2019-08-15 08:00)

A Quick Look at Trait Objects in Rust

Rust is a genuinely interesting programming language: it has a number of features which are without precedent in mainstream languages, and those features combine in surprising and interesting ways. In many cases, it’s a plausible replacement for C: it leads to fairly fast code; and because it doesn’t have a garbage collector, its memory use is more predictable, which can be useful for some programs. I’ve been doing an increasing amount of programming in Rust for the last 3 or 4 years, the biggest thing being the grmtools set of libraries (a good starting place for those is the quickstart guide). However, there isn’t any doubt in my mind that Rust isn’t the easiest language to learn. That’s partly because it’s a fairly big language: in my opinion, language complexity grows exponentially with language size. But it’s also partly because a lot of Rust is unfamiliar. Indeed, my first couple of years in Rust reminded me of my early fumblings with programming: a lot of activity rewarded with only limited progress. However, it’s been worth it: Rust has proven to be a good match for many of the things I want to do.

One of the things that baffled me for quite a long time are Rust’s “trait objects”: they felt like an odd part of the language and I was never quite sure whether I was using them or not, even when I wanted to be. Since I’ve recently had cause to look into them in more detail, I thought it might be helpful to write a few things down, in case anyone else finds my explanation useful. The first part of this blog post covers the basics and the second part takes a look at the performance implications of the way trait objects are implemented in Rust.

The basics

In general, Rust has a very strong preference for static dispatch of function calls, which is where the function matching a call is determined at compile-time. In other words, when I write a function call f(), the compiler statically works out which function f I’m referring to and makes my function call point directly to that function f. This stands in contrast to dynamic dispatch where the function matching a call is only determined at run-time. Dynamic dispatch is common in languages such as Java. Consider a class C which defines a method m which one or more subclasses override. If we have an object o of type C and a function call o.m(), then we have to wait until run-time to determine whether we should call C’s implementation of m or one of its subclasses. Simplifying slightly, static dispatch leads to faster performance while dynamic dispatch gives one more flexibility when structuring a program. These features can combine in a variety of ways: some languages only offer static dispatch; some only offer dynamic dispatch; and some require users to opt-in to dynamic dispatch...
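
Rust implements dynamic dispatch for trait objects via a vtable of function pointers. As a rough analogy in C – a sketch of the mechanism only, not of Rust’s actual layout – the function that runs is only discovered when a pointer is followed at run-time:

#include <stdio.h>

/* Dynamic dispatch, C-style: which area function runs is only known when
   the function pointer is followed at run-time. */
struct shape {
    double (*area)(const struct shape *self);
};

static double unit_square_area(const struct shape *self) { (void)self; return 1.0; }

int main(void) {
    struct shape s = { unit_square_area };
    printf("%f\n", s.area(&s)); /* dispatched through the pointer */
    return 0;
}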

Read more (2019-02-12 08:00)

Why Aren’t More Users More Happy With Our VMs? Part 2

In the first part of this two-part blog, I showed results from an experiment on VM performance that I’d been part of. Although it wasn’t our original intention, we eventually ended up trying to find out how often mainstream programming language VMs warm up as per traditional expectations. If you consider only individual process executions, then about 70% warmup; if you consider (VM, benchmark) pairs, then about 40% warmup consistently; and if you consider (VM, benchmark, machine) triples, then about 12% warmup consistently.

However you choose to look at it, these numbers aren’t pretty. In trying to make sense of them, I’m going to try and provide some answers to the following three questions. First, are there simple explanations for the performance problems we encountered? Second, how did we get into this situation? Third, how can we do better in the future? To answer the first question, I’m going to draw on our paper. In providing suggested answers to the second and third questions, I’m going to make extensive use of my personal opinions and experiences, so please don’t blame anyone else for them!

Are there simple explanations for poor performance?

There are all sorts of factors that might be responsible for odd performance, but, beyond those covered in the first part of this blog post, there are two that seem both particularly likely and relatively easy to test for: garbage collection and JIT compilation. For example, garbage collection is, to state the obvious, memory intensive: it could conceivably have all sorts of odd effects on processor caches and thus performance. Similarly, JIT compilation can happen at any point, and there’s no guarantee that it makes things faster...

Read more (2018-09-19 08:00)

Why Aren’t More Users More Happy With Our VMs? Part 1

Programming language performance is something that nearly everyone cares about at some point, whether they realise it or not. If you’re a programmer, you sometimes have to make your programs run fast; if you’re a user, you might be frustrated at a program that runs slowly, without knowing that it’s caused by a poor programming language implementation. Those of us who work on, or around, programming language Virtual Machines (VMs) tell a good story about performance, but a surprising number of users seem unhappy with the performance of their programs. Sometimes, yes, they’re unrealistic, but are they always so? In this first blog post (based on this paper) of two, I’m going to show that programs running on VMs often don’t follow the simple performance patterns that nearly all of us expected. Perhaps that’s why users aren’t as happy as VM developers think they should be?

A typical claim

Every so often there is a spate of blog posts along the lines of “Language XYZ is faster than C”. Language XYZ varies, of course, whilst C is always… C. Love it or hate it, C is, overall, probably the fastest language there is. At the very least, it’s the bar by which all other languages measure themselves.

For example, a claim made about HotSpot (the standard Java VM, often called just “the JVM”, though there are other Java VMs) is that it makes Java programs as fast as C programs compiled with gcc -O2. At first glance this statement seems reasonable enough: gcc -O2 produces fast code, as, most people agree, does HotSpot. But what’s actually being compared here? A static (or “Ahead of Time”) compiler such as GCC compiles a program once; the resulting executable/binary can be run as many times as you want, even on another machine without GCC. In contrast, VMs such as HotSpot observe a program running and dynamically compile it into machine code. Dynamic compilers are thus subject to a warmup cost: it takes programs running on them a while to reach a steady state of peak performance. VM developers are very keen on the steady state of peak performance and would rather forget that warmup exists: VM performance is universally reported relative to the steady state of peak performance. Thus the claim at the beginning of this paragraph entirely ignores warmup costs...

Read more (2018-09-05 08:00)

What I’ve Learnt So Far About Writing Research Papers

I long flattered myself that I’m only good at one thing: programming. Experience has taught me that I’m not as good at it as I thought – indeed, that there are people much better at it than I – but it’s fair to say that I’m not terrible at it. The problem is that, as a researcher, programming can only ever be part of my job: it’s not enough just to build something interesting, I need to explain it to people too. And explaining it to those further away than my voice can carry requires writing a paper.

There are many people who dismiss paper writing as a distraction from building real systems. I sometimes wondered if it was worthwhile myself but, when you’ve been knocking around as long as I have, the importance of writing things down becomes clear. I’ve lost track of how many conversations I’ve had along these lines: “We saw that problem in System B, and we had a really interesting solution.” “Where can I read about it?” “We never wrote it up.” “Where can I download it then?” “We never released the source code, the binaries are lost, and it runs on hardware that no longer exists.” I encountered this a lot when I was investigating existing language composition systems, with tantalising references ranging from Lisp systems with different macro expansion semantics to novel text editors. Even when the person I was talking to was passionate and knowledgeable about the system in question, I was rarely able to gain the insights I suspect I could have if I’d had a paper to pore over. Doubtless, many of these systems weren’t good enough to worry about them slipping from our memory; but the loss of the best, which may have involved tens or even hundreds of skilled person years of effort, is a tragedy. Writing things up does not guarantee that people in the future will be able to understand everything about the work, but at least they stand a sporting chance. Besides, even if people of the future can’t understand things, writing things down increases the number of people who can understand the work today.

How should one go about writing a paper? When I started my PhD, I had no idea of how one might go about this task, and it took me many years to find an approach that worked for me. In the rest of this blog post, I’m going to try and drill down into some of the things I’ve learnt about writing papers, and I’m going to give some examples of where I’ve learnt from my mistakes. However, I’m deliberately not phrasing anything herein as “advice” — indeed, several parts of my approach seem to run contrary to most of the advice I’ve seen. I don’t see this as a problem: people work in different ways, and if you’re unlucky enough to have a mind similar to mine, you might find something useful in here...

Read more (2018-01-10 08:00)

What Challenges and Trade-Offs do Optimising Compilers Face?

After immersing oneself in a research area for several years, it’s easy to feel like the heir to a way of looking at the world that is criminally underrated: why don’t more people want what I want? This can be a useful motivation, but it can also lead to ignoring the changing context in which most research sits. So, from time to time, I like to ask myself the question: “what if everything I believe about this area is wrong?” It’s a simple question, but hard to answer honestly. Mostly, of course, the only answer my ego will allow me is “I can’t see anything I could be wrong about”. Sometimes I spot a small problem that can be easily fixed. Much more rarely, I’ve been forced to acknowledge that there’s a deep flaw in my thinking, and that the direction I’m heading in is a dead-end. I’ve twice had to face this realisation, each time after a gradual accumulation of unpleasant evidence that I’ve tried to ignore. While abandoning several years work wasn’t easy, it felt a lot easier than trying to flog a horse I now knew to be dead.

All of this means that I’m probably a little better disposed to contrarian thinking than I was when I started as a researcher. And so it is that I’m going to write about some thoughts that were initially inspired by a (somewhat controversial) talk by Daniel Bernstein. The talk itself, entitled “Death of Optimizing Compilers” (slides and audio), was presented at ETAPS 2015 in London. I happened to be present for the talk and have engaged in mild advocacy both immediately afterwards and in the two years since. The problem is that the case against is quite easy to state but the case for rather hard — or, at least, I haven’t managed to frame an argument good enough to convince anyone. But I hope I might now have sharpened my argument enough to make another attempt worthwhile.

Suitably condensed, the talk’s thesis is fairly simple. As software systems become bigger and bigger, an ever smaller percentage of code dominates execution time, especially as computers are asked to crunch ever bigger chunks of data. However, optimising compilers (e.g. gcc -O2 or even a VM like HotSpot) are outclassed by humans on the most performance critical code. Because of that, optimising compilers are operating in a region between a problem that’s not worth solving (most parts of most programs) and a problem they can’t solve (the really performance critical parts). It doesn’t take much thought to realise that this last point has been the source of controversy: how can someone belittle the huge amounts of effort put into optimising compilers? ...

Read more (2017-06-22 08:00)

Fine-grained Language Composition

Programming languages are anti-social beasts: while they’re happy to communicate extensively with the operating system that launched them, they’re reluctant to talk to other programming languages. Most, reluctantly, make an exception for low-level languages, which have a natural mapping onto the low-level system ABI, most notably C. Indeed, used without qualification, the term ‘Foreign Function Interface’ (FFI) is typically assumed to refer to an interface to C, as if it was the only “foreign” programming language in existence.

Programming languages therefore resemble islands: each language defines its own community and culture, and implements its own software. Even if we want to travel to another island, we often find ourselves trapped, and unable to do so. The only real exceptions to this are languages which run on a single Virtual Machine (VM), most commonly a Java VM. These days, many languages have JVM implementations, but it can sometimes seem that they’ve simply swapped their expectations of an FFI from C to Java: non-Java JVM languages don’t seem to talk to each other very much.

We’re so used to this state of affairs that it’s difficult for us to see the problems it creates. Perhaps the most obvious is the huge effort that new languages need to expend on creating libraries which already exist in other languages, an effort far greater than the core language or compiler require. Less obvious is that the initial language used to write a system becomes a strait-jacket: we almost always stick with our original choice, even if a better language comes along, even if no-one is trained in the original language, or even if it simply becomes difficult to run the original language...

Read more (2016-09-21 08:00)

Debugging Layers

A couple of weeks back, I joked to someone that one day I would write a condensed retrospective of all my failed research ideas but that, until I did, people could read my research papers for the long version. It seemed a clever thing to say at the time, and I dutifully slapped myself on the back. I then found myself discussing Converge’s approach to error tracking and reporting with someone to whom I thought it might be vaguely relevant, later pointing my poor victim to a chunk of a paper and – when I later remembered its existence – a blog article. The core idea is simple: generated code can be related to one or more source locations. This means that when a user layers one piece of code atop another, each of those layers shows up in stack traces, making debugging significantly easier. When, with a fresh mind, I read over the paper and the blog article, I realised that it’s hard to extract this, and a few related, albeit minor, ideas out of those bits and pieces: they assume the reader knows more about Converge than is realistic.

Normally I wouldn’t necessarily mind too much if old ideas are a little hard to grasp, but I’ve come to view Converge’s error tracking and reporting as probably the only useful piece of language design I managed in the five or so years I spent on Converge. I therefore found myself deciding that I needed to write a brief retrospective on Converge and this feature so that some other poor soul doesn’t spend five years of their life on a feature that can be explained in ten minutes. The aim of this article is to give enough background to understand the core idea of debugging layered programs, but what is enough depends on your point of view. I’ve therefore tried to aim this article at two audiences: one who just wants to quickly get a sense of the important idea (if that’s you, skim the background material, but don’t worry too much about the details); and another who wants to get a deeper sense of how the important idea really works (if that’s you, it’s worth spending some time getting a reasonable appreciation of the background). I’m sure I won’t fully succeed in satisfying either audience, but hopefully both might find a little something of interest herein.

A quick introduction to Converge

For our purposes, Converge can be thought of as Python with a mildly modified syntax and somewhat different standard library: it’s dynamically typed, has an indentation syntax, and a mostly similar world view. One immediately obvious difference is that programs must have a main function and that printing isn’t a builtin statement or function. Here’s a minimal hello world program: ...

Read more (2015-07-28 08:00)

An Editor for Composed Programs

A few of you might remember a blog post of mine from 2011 titled Parsing: The Solved Problem That Isn’t. My hope with parsing has always been that I can take an off-the-shelf approach and use it without having to understand all the details. In that particular post, I was looking at parsing from the angle of language composition: is it possible to glue together two programming language grammars and parse the result? To cut a long story short, my conclusion was none of the current parsing approaches work well for language composition. I was nervous about going public, because I felt sure that I must have missed something obvious in what is a diffuse area (publications are scattered across different conferences, journals, and over a long period of time). To my relief, a number of parsing experts have read, or heard me talk about, the technical facts from that post without barring me from the parsing community.

When I wrote the original blog post, I did not know if a good solution to the problem I was considering could exist. I ended by wondering if SDE (Syntax Directed Editing) might be the solution. What I’ve discovered is that SDE is a wonderful way of guessing someone’s age: those under 40 or so have rarely heard of SDE; and those above 45 tend to either convulse in fits of laughter when it’s mentioned, become very afraid, or assume I’m an idiot. In other words, those who know of SDE have a strong reaction to it (which, almost invariably, is against SDE). This blog post first gives an overview of SDE before describing a new approach to editing composed programs that we have developed.

Syntax directed editing

Imagine I want to execute a program which adds 2 and 3 times x together. I fire up my trusty text editor, type 2 + 3 * x in, and pass it to a compiler. The compiler first parses the text and creates a parse tree from it, in this case plus(int(2), mul(int(3), var(x))), and then uses that tree to do the rest of the compilation process. In a sense, when programming in a text editor I think in trees; mentally flatten that tree structure into sequences of ASCII characters that I can type into the text editor; and then have the parser recreate the tree structure I had in my head when I wrote the program. The problem with this process is not, as you might be thinking, efficiency—even naive parsing algorithms tend to run fast enough on modern machines. Rather it is that some apparently reasonable programs fail to parse, and the cause of the errors can be obscure, particularly to novice programmers...

Read more (2014-08-20 08:00)

The Bootstrapped Compiler and the Damage Done

These days, it’s not uncommon to hear that a language’s compiler is written in itself. This phrase often confuses non-language implementers: how can a language’s compiler be written in itself? The answer is surprisingly simple: what’s being referred to is the second compiler created for the language. In such cases, the first compiler created is a bootstrapping compiler with just enough functionality to allow a compiler to be written in the language being created. In other words, when creating language NL, a basic compiler is first written in existing language OL, allowing a compiler for NL to be written in NL itself. When the new compiler is completed, the language is said to be bootstrapped, and the bootstrapping compiler is thrown away, leaving only the bootstrapped compiler written in NL. Not every language bootstraps its compiler, but it is increasingly common to do so, especially for those languages which target VMs. Indeed, it’s become sufficiently common that I’ve sometimes seen new languages which haven’t been bootstrapped looked on with slight suspicion (the argument seems to boil down to either or both of “you clearly don’t think much of your language if you’re not prepared to use it to write its own compiler” or “your language is so underpowered, it can’t even be used to write its own compiler”).

There are obvious advantages to creating a bootstrapped compiler. Most notably, it means that the language being designed and implemented is also used early in the implementation process. What might otherwise be abstract musings about the use of the language are made concrete by the need to have the language be good enough to write the bootstrapped compiler in. As the compiler is written, missing or undesirable features in the language are discovered and the language’s design extended and changed as appropriate. Bootstrapping thus tends to see the design and implementation of a language progressing in unison.

Traditionally, bootstrapping is seen as an almost universal good (apart from occasional griping over performance issues, which can be ignored). I have bootstrapped languages myself, and I will do so again (attacks by random buses allowing). However, I have recently become aware of a real problem with bootstrapping: it can, and does, distort language design. In other words, by having a bootstrapped compiler as the first – and, often, for some time the biggest – program in a language, the eventual language design is often unintentionally altered and rarely for the better. I suspect this has two root causes...

Read more (2013-12-04 08:00)

Relative and Absolute Levels

Recently, I’ve seen a number of discussions along the lines of what makes a language declarative or imperative? (see, in chronological order, William Cook, Bob Harper, and Neel Krishnaswami). Trying to get a crisp definition of vague terms often involves endlessly going around circles. Just in case that isn’t bad enough, in this specific case, some believe that their favourite term connotes superiority, thus setting otherwise reasonable people against each other. Intuitively, the two terms are talking about the level of abstraction of a programming language: declarative means that something provides more what and less how; while imperative is more how and only implicitly what. We have a vague feeling that the terms should be precise opposites, but in practice we keep stumbling across examples which are hard to fit precisely into exactly one of the terms. William and Bob are, I believe, both correct in saying that, if the terms declarative or imperative once had a specific meaning, they have long been robbed of it.

I would suggest that what’s really going on is that we’re trying to use the terms to express absolutes when they can only meaningfully express the relative relation of one thing to another. Before tackling the original point directly, let me try and give an example that is less likely to offend those of us immersed in programming languages. Imagine there are two people competing against each other in a 4km cycle race. Jim takes 5 minutes and Tim takes 4 minutes 50. We might say Tim is quick but we would be incorrect. Tim is only quick relative to Jim. When Kim — who can do the race in 4 minutes 30 — comes into the room, we can clearly see that Tim is not quick compared to Kim. Thus, while relative comparisons between individuals are safe, absolute statements are not: we can’t be sure if we will encounter a person who changes our notions of quickest or slowest. Put more carefully, the absolute statements are only correct for the set of individuals we’ve sampled. Relative statements (e.g. Tim is quicker than Jim) remain correct even when we encounter new individuals.

With that in mind, let’s look at a few examples from programming languages: ...

Read more (2013-09-24 08:00)

General Purpose Programming Languages’ Speed of Light

Having recently returned from a week of talks on programming languages, I found myself wondering where general purpose programming languages might go in the future; soon I wondered where they could go. The plain truth about programming languages is that while there have been many small gains in the last 20 years, there have been no major advances. There have been no new paradigms, though some previously obscure paradigms have grown in popularity; I’m not even aware of major new language features (beyond some aspects of static type systems), though different languages combine features in slightly different ways.

The lack of major advances in a given time period may not seem surprising. After all, major advances are like earthquakes: they occur at unpredictable intervals, with little prior warning. Perhaps we are on the cusp of a new discovery and none of us – except, possibly, its authors – are aware of that. That is possible, but I am no longer sure that it’s likely. If it doesn’t happen, then it seems clear to me that we are getting ever closer to reaching general purpose programming languages’ speed of light – the point beyond which they cannot go.

Programming languages cannot grow in size forever

The basis of my argument is simple: ...

Read more (2013-04-09 08:00)

Another Non-Argument in Type Systems

I find most static vs. dynamic debates to be painful (as I’ve said before) and, in general, I try to ignore them. However, every so often a new argument is raised which needs consideration. About 18 months ago, Bob Harper asserted that a dynamic language is a straightjacketed static language that affords less rather than more expressiveness. Part of the underlying argument is correct, but it avoids an important part of the story. I’ve recently seen several restatements of this assertion, so, unfortunately, I feel compelled to write a brief rebuttal explaining why I believe it’s yet another non-argument in the ongoing debate about which class of languages is better.

My firm belief is that statically and dynamically typed languages overlap in some ways, but differ in others. Each has tasks they are better suited to, and neither is inherently better than the other. Bob shows in his article one area in which statically typed languages allow things that dynamically typed languages can’t; in this article, I’m simply going to point out that this leads to a mirror-image where dynamically typed languages allow things that statically typed languages can’t. In that sense, the argument I make here is a restatement of ideas in my dynamically typed languages paper.

Before we can go any further, we need a shared understanding of a concept from statically typed languages such as ML and Haskell (i.e. not Java or its ilk): sum types. Since this concept is unfamiliar to many programmers, I’m going to present a simple definition of the core features; sum types have a lot of extra features (tagging, pattern matching, etc.) that aren’t germane to this article’s argument. The basic idea is that a sum type is a super type that ranges over other types. Whereas in OO languages a subclass explicitly chooses its superclass(es), sum types (the superclass equivalent) choose the types they range over (the subclass equivalent) — the latter have no idea which sum types range over them. I can therefore take the types T1 and T2 and make a sum type S over them. Whenever my program wants to do anything with a value of static type S, I must then use what I’ll call a typecase statement to determine whether the dynamic type of the value is actually a T1 or a T2 (readers familiar with the concept will realise that this is intended to be a very simple form of pattern matching). Writing this out in pseudocode can help visualise the important relationships between these concepts: ...
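
The article’s own pseudocode is elided above; as a rough stand-in only, here is a sketch of the same relationships in Python 3.10+ (my example, not the original’s notation, and Python checks the typecase at run-time rather than compile-time; the names S, T1, and T2 are the hypothetical ones from the paragraph):

from dataclasses import dataclass

@dataclass
class T1:
    x: int

@dataclass
class T2:
    s: str

# The sum type S ranges over T1 and T2; neither T1 nor T2 knows it is part of S.
S = T1 | T2

def describe(v: S) -> str:
    # The "typecase": work out which of the ranged-over types v actually is.
    match v:
        case T1(x):
            return "a T1 holding " + str(x)
        case T2(s):
            return "a T2 holding " + s
        case _:
            raise TypeError("not part of the sum type S")

print(describe(T1(42)))
print(describe(T2("hello")))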

Read more (2012-10-25 08:00)

Server Failover For the Cheap and Forgetful

I’ve long been of the opinion that real computer people should run their own servers. Most obviously, doing so gives you huge flexibility to do unusual things and share them easily with other people. Less obviously, I think it’s hard to be a good programmer unless you’re also a competent sysadmin. And so, for the last 12 years or so, I’ve hosted my own e-mail, web sites, and so on. This has proved to be a good choice: my systems have been more flexible and reliable than my various employers’, for example. I hope that I’m not dogmatic about doing everything myself, so when a better alternative appears, I switch to it. For example, after several years hosting git repositories for different projects myself, I now use github (and its ilk) almost exclusively, because they do a better job of multi-user repositories than I can on my own. But, in general, I prefer to do things myself.

However, there is one undeniable truth when hosting everything yourself: when things go wrong, there’s no-one else to blame. If software crashes, hardware dies, or - and this has happened to me twice - the water company cuts through the power cable, it’s your sites that are down. After the water company cut through the power cable for a second time, I realised that not having my main server running for three days wasn’t very much fun. I eventually managed to route things through a cobbled together backup server, but it was a huge pain. I broke out in a cold sweat every time I wondered how long the next downtime might be: a week? or more? Even worse, I realised, I might not be available at the time to fix things.

I therefore started looking at ways to put in place a reasonable failover system. Crucially, I wanted the most important services on the main server to fail over to a backup until the main server came back to life. I naively assumed there would be a fairly easy way of doing this, but, if there is one, I couldn’t find it at the time (there are now several web-based DNS services which appear to offer this service, though I haven’t tried any of them). Presumably this is why most people (including some surprisingly large organisations) never put into place any sort of failover solution, and those that do tend to have large systems teams behind them. Some of the material I read made my eyes hurt: it involved tweaking vast numbers of options in vast numbers of configuration files. As someone who aims to spend, on average, a few minutes a week on sysadmin duties, I knew this was not a sensible long-term option. I’d inevitably forget all the complicated details, which, from a sysadmin viewpoint, is storing up problems for a rainy day...

Read more (2012-08-02 08:00)

Fast Enough VMs in Fast Enough Time

Updated (February 07 2013): If you enjoy this article, you may also find the more academically-inclined article The Impact of Meta-Tracing on VM Design and Implementation interesting.

[This is a long article because it covers a lot of ground that is little known by the computing mainstream, and easily misunderstood. If you’re familiar with the area, you may find yourself wanting to skip certain explanatory sections.]

If you can stomach the smell, put yourself briefly in the shoes of a programming language designer. What you want to do is create new programming languages, combining new and old ideas into a fresh whole. It sounds like a fun, intellectually demanding job, and occasionally it is. However, we know from experience that languages that exist solely in the mind or on paper are mostly worthless: it is only when they are implemented and we can try them out that we can evaluate them. As well as a language design, therefore, we need a corresponding language implementation...

Read more (2012-02-08 08:00)

Problems with Software 3: Creating Crises Where There Aren’t Any

[This is number 3 in an occasional series ‘Problems with Software’; read the previous article here.]

As an undergraduate in the late 1990s, the part of my course I liked the least was Software Engineering, which started with obviously silly 1970s practices, and went downhill from there. As someone who was already familiar with both programming and, to some extent, the software industry, I pooh-poohed the course; my arrogance was duly rewarded with the lowest mark of any part of my degree, which taught me a useful life lesson.

One previously unfamiliar concept which this dreadful course introduced to me was the ‘software crisis’. First promulgated in the late 1960s and early 1970s, it captured the notion that software was nearly always over-budget, behind schedule, and of low quality. Those with an appreciation of history will know that most of the software technologies (programming languages, operating systems, and so on) we now take for granted took form around this time. As with any new field, this was an exciting time, with no technical precedent: none of its participants would have been able to predict the path forward. Rather, experimentation with new ideas and tools led to successes and failures; the latter received greater attention, leading to the idea of a ‘software crisis’...

Read more (2011-06-28 08:00)

Problems with Software 2: Failing to Use the Computing Lever

[This is number 2 in an occasional series ‘Problems with Software’; read the previous article here.]

We all know at least one, and I make no apologies for the picture I now paint. Hunch-backed, tongue out, index fingers locked rigidly into an uncomfortable ‘r’ shape, and with no sense of the appropriate level of force to put into their actions. The neanderthal figures I refer to are, of course, two-fingered typists. In my experience, a typical two-fingered typist will struggle to type 30 Words Per Minute (WPM). An average touch typist will manage at least 70 WPM; and for those prepared to put in a bit of effort, 90-100 WPM is eminently achievable.

Whether high typing speeds are useful depends on the person and the tasks they need to achieve. Agricultural workers, for example, probably won’t see a huge return on investment in typing skills. What astonishes me is that I know people who spend their entire working day in front of a computer, day in, day out, yet who still use this grossly inefficient technique. The slow typists I know easily lose a few hours every week due to their poor technique — yet suggesting that a few days spent learning touch typing will quickly pay off in increased productivity is inevitably met with a reply of “I don’t have the time”...

Read more (2011-06-07 08:00)

Problems with Software 1: Confusing Problems Whose Solutions Are Easy to State With Problems Whose Solutions Are Easy to Realise

I love computing, in particular software: since I was young, I have devoted the major portion of my life to it. In return, I have gained access to a never-ending source of challenges; met many great people; and had my ego smashed on the jagged rocks that are off-by-one errors. None of this makes me very well qualified to write an occasional series of articles on Problems with Software, but I will do my best. The spirit in which I write these is a positive one, perhaps akin to a parent admonishing a child for his own long-term benefit. Of course, I am, and always will be, a student of software, not its parent; searching, learning, and in awe of this ever-evolving subject. With that in mind, let us begin.

I start with what I consider to be the most common mistake in software; a mistake that no other subject I know of commits so frequently. It is the confusion of problems whose solutions are easy to state with problems whose solutions are easy to realise. A non-computing example may help explain this. The problem is war: around the world, it ruins the lives of countless people. The easily stated solution is that we should stop people from fighting each other. Regrettably, we know that this solution, while correct, is not worth the space it takes on the page. Telling people they should stop fighting changes nothing, and physically stopping them requires resources far beyond those which can be mustered. None of this is to say that it is not worth tackling the problem of war: but most of us have enough common sense to realise that solutions (for there will surely need to be more than one) will be expensive, long-term, and come with no guarantee of success.

Alas, in software, such common sense is an uncommon thing. In our subject, unworkable solutions are frequently proposed to deep problems; funding is acquired; manpower deployed; and, after an exhausting death march, failure guaranteed. The problem most frequently invoked is the difficulty of creating good software. This clearly is a problem: too much software is expensive, unreliable, and ill‐suited to the task it was created for. The many easily‐stated solutions proposed have had, at best, a tiny effect; at worst, they have impeded progress by suppressing less exciting, but more realistic, truths. Two concrete examples exemplify this...

Read more (2011-04-19 08:00)

Parsing: The Solved Problem That Isn’t

Updated (2014-10-24): If you find this article interesting, you may be interested in the follow-up article on an editor for composed programs.

Parsing is the act of taking a stream of characters and deducing if and how they conform to an underlying grammar. For example, the sentence “Bill hits Ben” conforms to the part of the English grammar “noun verb noun”. Parsing concerns itself with uncovering structure; although this gives a partial indication of the meaning of a sentence, the full meaning is only uncovered by later stages of processing. Parseable, but obviously nonsensical, sentences like “Bill evaporates Ben” highlight this (the sentence is still “noun verb noun”, but finding two people who agree on what it means will be a struggle). As humans we naturally parse text all the time, without even thinking about it; indeed, we even have a fairly good ability to parse constructs that we’ve never seen before.

In computing, parsing is also common; while the grammars are synthetic (e.g. of a specific programming language), the overall idea is the same as for human languages. Although different communities have different approaches to the practicalities of parsing - C programmers reach for lex / yacc; functional programmers for parser combinators; others for tools like ANTLR or a Packrat / PEG-based approach - they typically rely on the same underlying area of knowledge...
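
To make the “structure, not meaning” distinction concrete, here is a hedged toy sketch in Python (my own example, not from the article): it checks whether a sentence conforms to the “noun verb noun” grammar used in the first paragraph, and nothing more.

NOUNS = {"Bill", "Ben"}
VERBS = {"hits", "evaporates"}

def conforms(sentence):
    # Parsing in miniature: does the sentence fit the grammar "noun verb noun"?
    words = sentence.split()
    return (len(words) == 3
            and words[0] in NOUNS
            and words[1] in VERBS
            and words[2] in NOUNS)

print(conforms("Bill hits Ben"))        # True
print(conforms("Bill evaporates Ben"))  # True: parseable, but nonsensical
print(conforms("hits Bill Ben"))        # False: does not fit the grammar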

Read more (2011-03-15 08:00)

In Praise of the Imperfect

In this entry, I’m going to break - in spirit at least - one of my golden rules: I’m going to mention one of my hobbies. This is a slippery slope. Intelligent, interesting people (as well as me) are generally interested in more than one thing in one life - no matter the relationship between those things. Some of the people who most elegantly write about programming have, in my opinion, gone off the rails when they’ve made fatuous comparisons between their hobby and computing of the type “programmers like X because it is like programming in the sense Y”. Often the only thing relating X - be it painting, poetry, guns, or red squirrels - to programming is the person writing about it. Good luck to them I say - but generalising from a case of one is frequently unenlightening. With that warning to myself - and the reader - in my ears, off I go.

I recently read an article on music that, boiled down to its essentials, talked about the lack of a meaningful relationship between technical virtuosity (roughly ‘how well can you play’) and musical quality (roughly ‘how enjoyable is this piece of music’). Sometimes 3 chords and a 3-note melody played slowly and imperfectly will be exactly what is required; sometimes precise fast scales and subtle melodic variations will do the trick. When a song that should be played slowly, sloppily, and minimally is played fast, precisely, and with extraneous notes - or vice versa - the balance between technical ability and musical quality is lost. Remembering this balance is a problem many musicians have. As their technical abilities grow over time, it is common to equate technical difficulty with musical quality. Despite my extremely limited musical abilities (it is difficult to dispute one person’s assessment of my playing as ham-fisted), I have on occasion suffered this problem as I have got slightly more proficient: fortunately, since many of my favourite songs are - not to put too fine a point on it - brainless three and a half minute rockers, I generally get brought back down to earth at some point.

Reading this made me realise that a similar balance applies to programming. Sometimes a heavily engineered solution is right for the job; sometimes something much simpler is just the ticket. Let me give an example of each...

Read more (2010-08-25 08:00)

A Modest Attempt to Help Prevent Unnecessary Static / Dynamic Typing Debates

When I was a teenager, the relative worth of different operating systems was a continuous topic of debate. Starting with Amigas vs. Ataris (not that I could afford either), and later PC (meaning DOS / Windows) vs. Acorn (a British computer manufacturer, once fairly successful, but now unknown by anyone under 21), groups of teenage boys would spend countless hours arguing that the other side were, at best, misguided and, at worst, in league with the devil. Boys like to argue, and these were fairly harmless topics, but by the time I’d started university, I was long past the point of wanting to engage in heated debate about such topics.

Several times recently, memories of wasted hours have flooded back to me as I have been an unwilling participant in one of the longest-standing debates in programming languages - statically vs. dynamically typed languages. Frankly, such debates haven’t become any less painful since the passing of my teenage years. The problem in such debates remains the same: both sides over-inflate the advantages of their favoured approach while wilfully maintaining their ignorance of the other. Add to this the common tendency of any side in a debate to assume the worst of their opposition, and you have a situation that resembles Northern Europe in WWI: grim, unrelenting trench warfare, with neither side making any real advances.

To make matters worse, the debate in this particular case is asymmetric. For decades now, the computing intelligentsia have been overwhelmingly in favour of static typing - in academia, this bias is almost total. While the intellectual basis of static typing is both well established and frequently promulgated, the dynamic typing community has been far less intellectually confident. Please note, I’m not saying that dynamic typing has a less firm intellectual basis - simply that it is infrequently articulated. Let me give two simple, but different, examples: ...

Read more (2010-04-07 08:00)

A Proposal for Error Handling

A while ago I wrote about my experience of writing extsmail, and how surprised I was that highly reliable and fault tolerant programs could be written in C. In large part I attributed this to the lack of exceptions in C. In this article, I expand upon this point, consider some of the practical issues with exception-based languages, and present a candidate language design proposal that might partly mitigate the problem. I don’t promise that this is a good design; but it does present some of the issues in a different way than I’ve previously seen and, if it encourages a debate on this issue, that might be use enough.

The two approaches

There are two main approaches to trapping and propagating errors which, to avoid possible ambiguity, I define as follows:

  1. Error checking is where functions return a code denoting success or, otherwise, an error; callers check the error code and either ignore it, perform a specific action based on that code, or propagate it to their caller. Error checking does not require any support from language, compiler, or VM; it depends entirely on programmers following the convention. Languages which use this idiom include C.

  2. Exceptions imply a secondary means of controlling program flow other than function calls and returns. When an exception is raised, this secondary control flow immediately propagates the exception to the function’s caller; if a try block exists, the exception can be caught and handled (or rethrown); if no such block exists, the exception continues propagating up the call stack. Exceptions require certain features in the language, and support from both compiler and VM. Languages which use this idiom include Java, Python, and Converge.

...
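
The excerpt is cut off here, but the difference between the two approaches can be sketched in a few lines of Python (an illustrative analogy of mine, not the article’s own code; the function names are made up):

# Approach 1: error checking. The callee returns an error code alongside the
# result, and the caller must remember to inspect it.
def parse_port_checked(s):
    if s.isdigit():
        return int(s), None
    return None, "not a number: " + s

port, err = parse_port_checked("80x")
if err is not None:
    print("error:", err)    # handle it here, or pass err up to our own caller

# Approach 2: exceptions. Failures propagate up the call stack automatically
# until some caller chooses to catch them.
def parse_port(s):
    if not s.isdigit():
        raise ValueError("not a number: " + s)
    return int(s)

try:
    port = parse_port("80x")
except ValueError as e:
    print("error:", e)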

Read more (2009-12-14 08:00)

The Missing Level of Abstraction?

Levels of abstraction

I feel fairly confident in stating that everyone who is familiar with the details of computing will have encountered the phrases high level of abstraction and low level of abstraction - probably rather often. Abstraction is one of those words which is used rather more frequently than it is considered. In short, an X that is an abstraction of Y implies three things in our context:

  1. That X is at a higher level of abstraction than Y (alternatively one could say that X is more abstract than Y).

  2. That X does not introduce anything fundamentally new over Y; indeed, it may well remove fundamental things, or present them in an easier fashion.

  3. Assuming that one does not need any of the fundamental things in Y that may have been lost in the abstraction X, then X is in some sense easier to use than Y.

Abstractions abound in computing, as they do in life in general. To take a simple example, the first computers were programmed in machine code (zeros and ones); the second generation, in assembly code which easily translated into zeros and ones, but provided an easier syntax for humans; the third generation and beyond, in languages such as C whose translation into machine code became increasingly complex. Though they are often nebulous and, indeed, often rather hard to spot and understand, abstractions are what make modern computing possible; life without them would be like trying to walk from Land’s End to John o’ Groats with one’s eyes pointed downwards and half an inch above the ground...
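
As a small, hedged illustration of the three properties in Python (my example, not the article’s): a list comprehension is an abstraction over an explicit accumulator loop - it introduces nothing fundamentally new, and is easier to use when you don’t need step-by-step control.

# Lower level of abstraction: say *how*, one step at a time.
squares_loop = []
for n in range(10):
    if n % 2 == 0:
        squares_loop.append(n * n)

# Higher level of abstraction: say *what*, and let the language handle the how.
squares = [n * n for n in range(10) if n % 2 == 0]

assert squares == squares_loop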

Read more (2009-09-15 08:00)

Good Programmers are Good Sysadmins are Good Programmers

It is human nature to assume that what is familiar to us, is familiar to all. I can still clearly remember, when I was around 12, going to a friend’s house, and being astonished at the fact that around my dinner plate were three pairs of knives and forks. In fact, petrified would probably be a better word - not only had I never personally seen anything like it, I had never imagined that anyone would, or indeed could, use more than one knife and fork in a single meal.

Of course, my life thus far has not just involved my surprise at other people’s habits; on occasions (less rare than my ego might have preferred), other people have been surprised at mine. Recently a non-computing friend saw my main computer workspace - a Unix setup with 4 xterms displayed - and asked, jokingly, if I was plotting to take over the world (I blame the media for this particular image). I’m so used to my own setup that I no longer think of it as odd but, when suitably prompted, I can see why other people might think so. In comparison to a Windows or Mac machine, full of little visual goodies, and perhaps with only a web browser or word processor loaded, a number of tiny xterms filled with half-executed commands does look odd.

It’s not really surprising that a non-computing person would find my setup odd - after all, I spend a lot of time on computers, so it’s to be expected that some things that I have come to find natural scare casual users. What has surprised me, and continues to surprise me, is how many computing people I come across find my setup odd - sufficiently odd that it attracts comment. Some people are baffled as to why my systems are as they are, some are curious as to how it works, and some people sneer at the way I do things. There is no good answer to the sneer, nor any great reason to answer that person (although, possibly due to a deep character flaw, I find the sneer rather amusing). However the how and why are interesting questions, which raise interesting points, and are more closely integrated than they may first appear...

Read more (2009-03-20 08:00)

How can C Programs be so Reliable?

C is, today, a unique programming language. Surprisingly few people can really program in C and yet many of us have quite strong opinions about it. Buffer overflows, stack smashing, integer overflows — C has many well publicised flaws, and these terms are often bandied about confidently, even by those unfamiliar with C. Personally I shied away from C for a decade, for one reason or another: originally, compilers were expensive (this being the days before free UNIX clones were readily available) and slow; the culture was intimidatory; and, of course, all the C scare stories made me think that a mere mortal programmer such as myself would never be able to write a reliable C program.

Discounting a couple of tiny C modules that I created largely by blindly cutting and pasting from other places, the first C program I wrote was the Converge VM. Two things from this experience surprised me. First, writing C programs turned out not to be that difficult. With hindsight, I should have realised that a youth misspent writing programs in assembler gave me nearly all the mental tools I needed - after all, C is little more than a high-level assembly language. Once one has understood a concept such as pointers (arguably the trickiest concept in low-level languages, having no simple real-world analogy) in one language, one has understood it in every language. Second, the Converge VM hasn’t been riddled with bugs as I expected.

In fact, ignoring logic errors that would have happened in any language, only two C-specific errors have thus far caused any real problem in the Converge VM (please note, I’m sure there are lots of bugs lurking - but I’m happy not to have hit too many of them yet). One was a list which wasn’t correctly NULL terminated (a classic C error); that took a while to track down. The other was much more subtle, and took several days, spread over a couple of months, to solve. The Converge garbage collector can conservatively garbage collect arbitrary malloc’d chunks of memory, looking for pointers. In all modern architectures, pointers have to live on word-aligned boundaries. However, malloc’d chunks of memory are often not word-aligned in length. Thus sometimes the garbage collector would try and read the 4 bytes of memory starting at position 4 in a chunk - even if that chunk was only 5 bytes long. In other words, the garbage collector tried to read in 1 byte of proper data and 3 bytes of possibly random stuff in an area of memory it didn’t theoretically have access to. The rare, and subtle, errors this led to were almost impossible to reason about. But let’s be honest - in how many languages can one retrospectively add a garbage collector? ...

Read more (2008-11-11 08:00)

Free Text Geocoding

I think nearly everyone would agree that maps are useful things; I’ve only met a couple of people who labour under the illusion that they know the way from anywhere to anywhere. Personally I find maps more than interesting - they’re often fascinating. Maps are one thing the British do immensely well (judging by some of the inaccurate doodles I’ve seen abroad). By looking at the detailed Ordnance Survey map of the area where I grew up, I can easily see layers of historical information such as: the Roman road a mile away; the long-abandoned railway lines upon which roads were later built; the now-drained marshes and the canal network later built upon them. Of course, these are fairly standard maps with a well-understood relationship to the real world; sites such as Strange Maps and Mark Easton’s blog show how maps can present data in many other forms. My headmaster at school thought that geography as a subject had lost its way; instead of focusing on rock strata, it should focus on teaching children where places were. At the time, we all thought he was a touch eccentric; in retrospect, he was absolutely right. Huge tracts of politics, economics, and history only make sense in the context of a given place or places. I could go on, but I hope my point is clear: maps are important, and not just for finding our way around.

A big problem with traditional maps is finding things: places, areas, and so on. Even for a medium sized paper map, the size of the index is huge; and the index for a good quality London A-Z is often almost as big as the map itself. Paper map indexes have their own funny little language, trying to squeeze as much data in as possible. However paper indexes have two problems. First, how many times have you forgotten what square you were supposed to look at by the time you get to the prescribed page? Second, indexes can only grow to a certain size before they are unusable.

Computer geocoding

The advent of computer mapping has been a real boon, and not just to map fiends such as me. The first site I used regularly was Street Map. Having the map data for the whole UK was incredibly useful and the search facilities, while crude, were a huge improvement on a paper index. When later sites such as Google Maps became available, I was stunned. Suddenly freely viewable map data (though note I do not use the phrase free map data) for much of the world was coupled with a new way of searching. No paper index, or Streetmap’s cloying need to be told what type of search was being performed (e.g. place name, street name, post code etc.). Instead there is what I call (for want of a better name) free text geocoding: that is, where one types in something in the format used in real life, and the search engine finds the right place automatically. One doesn’t need to tell the search engine that the search is for a postcode or a place-name, or even which country one is searching - it magically does the right thing. Well, of course - not always the right thing. For example, at the time of writing, if I do a search for Penrith in Google Maps UK, I get taken to a suburb of Sydney in Australia. The poor residents of Penrith in the North of England don’t even have their town mentioned as a possible match for the search; one must instead search for something like Penrith Cumbria or Penrith UK. There are a range of related minor infelicities in both Google Maps and Yahoo Maps. However on average, both do an adequate job...

Read more (2008-09-01 08:00)

Extended Backtraces

I can quite distinctly remember when, as a teenager, I realised that I would spend much of my life debugging. I was programming in assembler, where loops are simply branch statements to defined labels. Because loops are so common, one would use the same label for each loop; later loops would redefine the label, meaning that no problems could occur. Being a literal person I used the label loop for all such loops. When one of my programs mysteriously failed, I could not work out why. Eventually I realised that one of my label definitions had been spelt looop (three O’s instead of two) instead of loop, so my loop had branched back to the previous loop in the file. Spotting that took me a couple of days.

Later, I realised that most programming errors fit into two broad categories: the obvious and the subtle. Obvious errors are those whose source can be easily pinpointed (even if fixing the problem takes a while). The subtle are typically those where cause and effect are separated, making identification of the root of the problem difficult (often, when eventually located, such problems are easily fixed). The looop problem above was, to a relative novice programmer, a very subtle problem (years of experience have taught me that looking for daft spellings in my own programs is a good initial target when debugging).

There are, to my mind, two tricks to debugging. The first is to try and turn subtle problems into obvious problems; however subtle problems are typically inherently subtle and unamenable to such treatment. The second is to try and speed up the solving of obvious problems. For me, the main tool for solving obvious problems is the humble backtrace which, when an exception occurs, shows one (in some manner or another) the call stack and, hopefully, what file and line number each entry therein is associated with. Given the following trivial program: ...
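
The article’s own trivial program is elided above; as a hedged stand-in, here is a Python equivalent showing the kind of backtrace being discussed - each frame of the call stack, together with its file and line number.

def a():
    return b()

def b():
    return c()

def c():
    return 1 / 0   # raises ZeroDivisionError three calls deep

a()
# Running this prints a backtrace listing the frames for a(), b(), and c(),
# each annotated with the file name and line number where the call was made.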

Read more (2008-06-02 08:00)

Designing Sane Scoping Rules

If there’s one thing which unites pretty much every post-assembly programming language it is the use of the humble variable. Variables are such a common feature that we tend to take them for granted; perhaps I show my background in assembly by being explicitly aware of them. However where programming languages often differ is in the way that they allow one to reference variables: the dreaded scoping rules. In this article I’m going to outline why Converge has the scoping rules it does.

The first thing I need to do is to outline the problem. Although all my examples are framed in terms of a typical imperative programming language, the underlying concepts all translate fairly directly to functional languages too. Here’s the simplest example possible:

x := 2
...
y := x

...
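
A hedged Python illustration of why scoping rules are “dreaded” (my example, not Converge’s rules): whether an assignment refers to an outer variable or silently creates a new one is a design decision every language has to make.

x = 2

def f():
    x = 3          # does this update the outer x, or create a new local x?

f()
print(x)           # prints 2: in Python, assignment inside f makes x local to f

def g():
    global x       # the escape hatch: explicitly opt in to the outer binding
    x = 3

g()
print(x)           # prints 3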

Read more (2008-03-03 08:00)

Some Lessons Learned from Icon

Updated (October 26 2007): Before reading this article, please be aware that it has been superseded by the paper Experiences with an Icon-like expression evaluation system.

One of Converge’s lesser known influences is Icon. Although it’s a relatively obscure language itself, a surprising number of people have heard of Icon’s direct predecessor SNOBOL. I first heard of Icon through Tim Peters and Python, where Icon was the inspiration for Python’s generator feature (basically, procedures whose execution can be resumed to produce multiple values). However Icon has a much richer, and definitely unique, approach to expression evaluation than any other imperative programming language. When I started working on Converge, I decided to go back to the source, and see how much of Icon’s unique approach I could incorporate into Converge.

This article is a personal look back at Icon’s influence on Converge, and the successes and failures resulting from that influence. It’s quite probable that much of it reflects my slightly superficial understanding of Icon - apart from its innovative approach to expression evaluation, much of the language has an old fashioned feel to it, sometimes appearing like a dynamically typed version of Pascal, and this prevented me from ever wanting to write anything large in it. This isn’t a criticism of Icon as such - it was ahead of its time in many ways - but is merely an observation that modern programmers expect something slightly different from their programming languages. Hopefully, even with the above caveat, this article contains something of value to those interested in programming languages...

Read more (2007-12-10 08:00)

How Difficult is it to Write a Compiler?

Recently I was discussing Converge with someone, and mentioned how little time the core compiler had taken to implement (no compile-time meta-programming, limited error checking, but a functioning compiler nonetheless) - only a few days. The chap I was talking to looked at me and told me that he didn’t believe me. I laughed. Actually, he really didn’t believe me. Initially that surprised me, and I wondered why he might think that I was pulling his leg. But then I thought back, and only a few years ago I would probably have had the same reaction. This article is my attempt to explain his reaction, and why it’s no longer the case.

When I was younger, there were three things in computing that seemed so complex that I had no idea how anyone had ever created them; certainly, I didn’t think I would ever be capable of working on such things. The imposing triumvirate were hardware, operating systems, and programming languages. Although electricity has always baffled me, the hardware people have done an amazing job on interoperability of components in recent years, such that I’ve never felt the need to gain a low-level understanding of how electrons whizz around my systems. Operating systems were partially demystified for me as I moved into the world of open source UNIX-like operating systems (for the record, I’ve been an OpenBSD user for 8 years or so), where one could poke and prod virtually any aspect - although kernels retain an aura of mystery for most of us. Despite the fact that I was familiar with several languages, and had even implemented a primitive assembler language (which, to my surprise, found its way into a real system - my first large-ish programming success), real programming languages remained a mystery to me for much longer. The obvious question is: why?

Programming languages are, for most of us, an integral part of the computing infrastructure. Only a small handful of languages have ever had a wide impact, and by the time a programming language becomes popular it already probably has a decade or more of history behind it. This means that when the average programmer first comes across a new language, it has had substantial development behind it, libraries and documentation, developed its own culture, and undoubtedly had several flaws uncovered; not to mention lots of little platform portability hacks, and often complex optimisations. Faced with this wodge of sheer stuff, most people’s reaction is either to shrug their shoulders and accept it at face value (the majority), try to understand it but not know where to start (a minority), or spot how similar it is to every other language (a tiny handful of people). Because the first two categories massively outnumber the third category, a general feeling - which we’re not necessarily consciously aware of - has developed that programming languages are special, and therefore that only special people can create them (a veiled insult, if ever I saw one). Thus if I suggest to people that, despite having created a fairly fully featured programming language, I’m no more capable a programmer than the next man, they dismiss it as false modesty...

Read more (2007-08-09 08:00)

When Are Macros Useful?

Many of us are used to being told that macros are useful. However few of us really understand why. A large-ish, if gradually dwindling, number of people use C, and its preprocessor, on a regular basis. This gives them access to a crude, dangerous, but surprisingly effective, macro system. However the type of macro system we are told is most useful is that of a LISP-like language. Commentaries, such as this article extolling the virtues of Scheme-like macros, continually reinforce the notion that macros are a programming nirvana. Claims of an order of magnitude improvement in developer time when macros are used are not uncommon. And because most of us do not use, and probably have never used, a language with a real macro system, we tend to be somewhat in awe of the programming intelligentsia who broadcast such messages.

For quite some time I have held a somewhat different opinion. For modern - and I use that word very deliberately - programming languages, I believe that LISP-like macros in their raw form aren’t hugely useful. Some justification for this position is thus in order.

I designed the Converge language which is one of the few modern programming languages with a macro system. Because its macro system is fairly directly inherited from Template Haskell, it’s rather more verbosely referred to as a compile-time meta-programming facility; but really, it’s just a macro system. From a practical point of view, there is only one substantive difference between Scheme-like macros and Converge macros. Scheme explicitly identifies macros, and function calls which reference a macro magically turn into macro calls. In Converge, macros are just normal every-day functions, but the call site of the macro is explicitly identified. From an expressivity perspective, the two approaches can be considered equivalent...

Read more (2007-05-11 08:00)

Filling in a Gap

One of life’s little pleasures is filling in a gap and then making a new connection between the altered object and one of its neighbours. Performing this action on software is just as satisfying as doing the same to any real-world object.

Recently I’ve made a previously lop-sided part of Converge much more internally consistent, and in so doing realised a useful new connection between two language features. Since it’s to do with a unique part of Converge, it makes an interesting little study. In essence, Converge has a macro-esque system that is inverted from the traditional LISP/Scheme style. LISP macros are special constructs, with some ordinary looking function calls actually being macro calls. In Converge (as in Template Haskell), macros are normal functions or expressions; macro calls however are explicitly identified. In the following example, m is a normal function that is intuitively used as a macro by the splice $<...> in main:

func m():
  return [| Sys::println("m") |]

func main(): $<m()>

...

Read more (2007-03-21 08:00)

Are Multicore Processors the Root of a New Software Crisis?

It’s just over a year since I first got my grubby little mitts on a machine with a dual-core processor (the fact that it took me over 12 months to get the rest of the machine fully working with my other equipment is annoying, but irrelevant to this entry). Such processors are gradually finding their way into more and more people’s hands. The era of widespread multi-processor machines is finally upon us - albeit rather later than many of us expected, and in the form of multicores rather than multi-processors.

The advent of such machines is having an odd effect on many in the software community, which is having what amounts to a collective crisis of confidence in its ability to fully utilise such machines. But before exploring this further, let’s take a step back in time.

The software crisis

I remember as an undergraduate being told - in Dickensian tones that suggested This Is The Way It Always Has Been And Always Will Be - that we were in the midst of a software crisis. Software was always behind schedule, over budget, unreliable, lacking features, and generally unusable. To a large extent I bought into this self-flagellating view of the world - after all, I saw software crash all the time...

Read more (2007-01-18 08:00)

The High Risk of Novel Language Features

It is quite fashionable to bemoan the lack of adventure in today’s programming languages. Outside of a few esoteric languages which seem to be different only for the sake of it, most languages in real use today would be recognisable to programming language researchers from the late 1960s. Most such languages have few, if any, genuinely novel language features in them; what differentiates them are seemingly minor, but nevertheless important, things such as syntax, libraries and - most elusively - the general feel. In fact some languages make virtues of this: until recently Python was proud of the fact that it had only one novel language feature, on the basis that it contained only those constructs which had proved themselves in prior languages.

The obvious question is: why don’t different programming languages have novel language features? I think the reason can be most clearly seen by example. When Java appeared it had precisely one novel language feature - checked exceptions, a language feature which has no precedent. Personally when I think of the single worst feature of Java, the feature which most turns me off the language, it is - wait for it - checked exceptions. By trying to enforce good programming practice, they make life so annoying that many people actually use empty catch blocks, which means their code is less reliable than it would have been without checked exceptions. I know that I’m not alone in thinking that this is a fundamentally bad thing. Here you have the quandary: novel language features carry with them an exceedingly high risk. Most novel language features turn out to be one or more of irrelevant, dangerous, or annoying; checked exceptions come under the latter two categories. Novel language features carry with them such a high risk of failure that most sensible language designers avoid them whenever possible.

Earlier, I said that (until recently) Python had only one novel language feature. It’s a small, rarely used feature: the else clause on for and while loops. One uses this as follows: ...
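
The article’s own example is elided; a minimal illustration of the feature (using the usual search-with-break idiom, my code rather than the article’s) looks like this:

def find(needle, haystack):
    for item in haystack:
        if item == needle:
            print("found", needle)
            break
    else:
        # The else clause runs only if the loop finished without hitting break.
        print(needle, "not found")

find(3, [1, 2, 3])   # found 3
find(9, [1, 2, 3])   # 9 not found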

Read more (2006-11-21 08:00)

Evolving DSLs

Recently I have heard several people musing about the difficulties of evolving a DSL. Actually, musing might be putting it rather politely - rather, I have heard several people arguing vehemently that creating a DSL inevitably leads to doom when the requirements for the DSL change. This is a very interesting point, because in my opinion the ability for a DSL to evolve is critical.

Paul Hudak got a lot right in a sequence of papers he published on DSLs in the mid-90s. Specifically he notes that DSLs generally start by tackling a small problem, then need to grow bigger as more aspects of the problem are tackled. [He also noted - and I think he may have been the first to articulate this so eloquently - that DSLs eventually tend to evolve into a badly designed general purpose language, but that’s beyond the scope of this entry.] The way in which this evolution of requirements happens is almost always unpredictable, because it is the act of building and using the DSL that gives users the insight to change their requirements. In my mind, Hudak is entirely correct; I believe that the terms “DSL” and “need to evolve” go hand in hand.

The question one has to ask oneself is thus fairly simple: are DSLs hard to evolve? I think the answer is that, yes, today DSLs are hard to evolve. Why is this? Well, there are two types of DSL in common use. The first is standalone (often called external) DSLs such as Make. The second is integrated (often called internal) DSLs such as those that Hudak talks about, or those that are frequently talked about in conjunction with Ruby. These two types of DSLs are fundamentally different. Standalone DSLs are flexible, but an awful lot of work to create, because in a sense they’re a complete implementation of a mini-programming language. Integrated DSLs aren’t very much work to create, but they tend to have an unhealthy coupling to their host language, which means they have many limitations imposed upon them both in terms of what they can express and how they can express it. The difference between most standalone and integrated DSLs is so severe that I sometimes wonder if the umbrella term DSL is entirely helpful, but that’s an argument for another time...

Read more (2006-10-17 08:00)

More Meta Matters

Every so often, computing theory and practice collide, and one finds that the same answer has been produced by coming at it from the two extreme angles. Alas, it is so difficult to locate the common ground between these two extremes that this is a rare occurrence; but it does happen. This entry is in some senses a follow-up to the metacircularity entry and documents one such occurrence.

Here is the problem I was facing in a nutshell. In the new version of the Converge language I was making much more use of meta-classes to hide the general ickiness of primitive datatypes. A simple example is the File class. When one opens a file for reading or writing, the resultant object needs to have extra space in memory to record the C-level file handle. Other types of objects may similarly need to have an arbitrary extra space to record some low-level information. Many languages attempt to hide this fact altogether, but that tends to rule out some useful things, and makes the language hard to extend. Far better in this case to have a Meta_File class which specifies how new File objects are to be created; users can also add their own meta-classes to create arbitrary types of objects as they see fit.

The Meta_File class (it isn’t actually called that in Converge, but this name makes things easier to explain in this entry) is written in C, and its definition looks like this in hybrid Converge and C code: ...
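
The hybrid Converge/C definition is elided above and not reproduced here. As a loose, hedged analogy only, a Python metaclass can play a similar role: it controls how instances are created and attaches the low-level “extra space” (here an OS file descriptor) to each new object. The names MetaFile and File below are mine, not Converge’s.

import os

class MetaFile(type):
    def __call__(cls, path):
        obj = super().__call__(path)           # create the object as normal
        obj._fd = os.open(path, os.O_RDONLY)   # the hidden low-level handle
        return obj

class File(metaclass=MetaFile):
    def __init__(self, path):
        self.path = path

    def read(self, n=64):
        return os.read(self._fd, n)

f = File(__file__)    # open this very script, read-only
print(f.read(16))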

Read more (2006-08-30 08:00)

Strategies for Dealing With E-mail

E-mail is one of the most useful tools that most of us use - more important than the web for most people. And yet the world of e-mail seems to be split into two extremes: people who receive very little e-mail, and people who receive so much e-mail that they develop a siege mentality. There seem to be surprisingly few people who fit into what, one would imagine, should be the sizeable gap between these two poles. For many people in the latter category, the only solution appears to be to ignore - wilfully or otherwise - e-mail coming in, replying only to those that scream urgency.

For one reason or another, I receive quite a bit of e-mail - not as much as a few people I’ve heard of, but certainly enough. As time has gone by I have developed a few simple techniques that mean that I very rarely feel that my e-mail has got the better of me. Much of what I’m going to say here is not that surprising, but there are a couple of things that I think might be of interest.

My first point is that e-mail is very hard to read “flat”. That is, many people I see have all the e-mail they receive delivered straight into their main inbox. This guarantees doom. Typically one gets all sorts of e-mails: private e-mails, work e-mails, mailing list e-mails, professional e-mails and so on. As I suspect most other people do, I read these different types of e-mails in different mental “modes”: I don’t respond to work e-mails in the same manner that I reply to private e-mails, for example. Context switching is a difficult thing to do in general, and I am particularly bad at it; it can take me several seconds to move to the appropriate mode when moving from a work e-mail to a private e-mail which, when multiplied by many e-mails, can be a significant time soak. Having observed many other people who read their e-mail “flat”, it’s clear to me that it’s not just me who finds this type of context switching difficult - this is, to put it mildly, an unproductive and highly intimidating way of reading e-mail...

Read more (2006-06-26 08:00)

Debugging Driven Development

There are an awful lot of development methodologies floating about these days, running the gamut from agile development to model driven development. For the purposes of this entry, the success (or even the utility) of such development methodologies is irrelevant. What is relevant is what these methodologies promise: follow these rules, and you’ll create better software.

What I’m not sure that any of these methodologies tackle sufficiently is a far more down-to-earth part of development: how to ensure developers stay motivated during the development process. What I mean by this is how can developers be kept interested in the task at hand, so as to ensure that the software they are working on moves forward at a reasonable rate? This is an overlooked factor because it is hard to define and even harder to measure. Programming is generally a long drawn out affair; even small percentage losses in productivity can have fairly large real world effects on timeliness, and on the quality of the delivered product. Since ultimately it is programmers who do the programming, the lack of focus on this area does us all a disservice.

I’ve known for quite some time that I suffer from a habit that is common to many programmers. When I finish a challenging part of a program - after having spent an awful lot of time looking at incorrect output of one form or another - I find myself so surprised that things are working that I then spend a silly amount of time rerunning the program and admiring the correct output. It’s almost like I don’t believe that it’s really correct. When I eventually pull myself away from the programming equivalent of navel gazing, I find myself in a familiar, but unappealing, situation: I have a functioning, but incomplete system, and I need to add the next feature on the list to it. But adding another feature means that everything is going to stop working properly, and it’s going to take an awful lot of work to get back to another point where the system will be working properly. And so I stare and stare at the screen, semi-paralyzed by the thought of breaking my perfectly functioning (if incomplete) system. Of course, eventually I pull myself together, make the first stages of the necessary change for the new feature and then I’m away, only for the cycle to repeat itself when the system reaches its next stable state. I’ve come to realise that all I’m going through in such instances is the programming equivalent of writers block; the fear of setting pen to paper for fear of starting down the wrong track...

Read more (2006-03-26 08:00)

Make Exceptions Work For You

In my experience, most programmers are scared of exceptions. They dread the news that their program has terminated in an unexpected manner, mainly because it implies that debugging is necessary (although perhaps a small part of the reason is because it suggests that their creation has fallen just a smidgen short of perfection).

As such I don’t think there’s anything wrong with a small fear of exceptions. However it seems to be gradually becoming a part of programming culture to ‘do something’ about exceptions, and there’s really only one thing that can be done to them: suppress them. While certain limited classes of exceptions - ‘file not found’ for example - are generally fairly benign, others - an incorrect array index for example - are far more serious. All too often, although the programmer thinks the program will keep running after an exception, it instead hobbles to a slow and painful death.

This mindset seems particularly prevalent amongst Java programmers. First, Java’s horrible notion of checked exceptions positively encourages exceptions to be suppressed, just to try and keep the number of try ... catch constructs to a semi-reasonable level. Second, in multi-threaded Java applications, an exception which reaches the top level in one thread terminates only that thread; this gives the impression that exceptions in multi-threaded programs are somehow less serious than in their single threaded brethren. I have seen many Java programs randomly suppressing some exceptions, and allowing others to spew backtraces to stdout, but continuing on anyway; this seriously undermines my confidence in the programs’ correct running. But the funny thing is that people just say ‘oh, ignore that exception, it doesn’t seem to cause any harm.’ This is a mindset that I find alien - to me, exceptions are always serious, and they always need to be fixed...
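
A hedged Python rendering of the anti-pattern being described (my own example; the original discussion is about Java): the exception is suppressed, nothing obviously breaks, and the program limps on with silently corrupted state.

def record_login(stats, user):
    try:
        stats[user] += 1
    except Exception:
        pass        # "doesn't seem to cause any harm"... but the update is lost

logins = {}
record_login(logins, "alice")   # KeyError swallowed: alice's login never counted
print(logins)                   # {} - the program keeps running on bad data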

Read more (2006-01-29 08:00)

Home Directory Synchronization

My life sometimes feels overly peripatetic. One way in which I feel this pinch is that I regularly use three computers: my desktop at home, my laptop, and my desktop at work. I also have a server at work (on which you are probably reading this) and a server at home. This creates an obvious problem in synchronizing files between machines. When I was only regularly using two machines, I used to use the traditional approach to the problem which was to manually copy files (using scp) from machine to machine. This was a pain, and even with just two machines I occasionally overwrote files with old versions, or went on a trip only to discover I didn’t have the latest version of a particular file on my laptop. With more than two computers the problem becomes disproportionately more difficult.

My solution to this problem is not a novel one, but nor does it seem to be particularly well known. I had the germ of the idea around three years or so ago, and largely got it working before finding that Joey Hess had already eloquently described most of the important steps; I used some of Joey’s ideas to refine my setup. The idea that’s fairly completely described by Joey is ‘use version control to store files in your home directory.’ Version control systems such as CVS are generally used so that multiple developers can work on the same source code and share their changes in a controlled fashion amongst each other. As this implies, on each developer’s machine lies a (largely) identical copy of the shared source code. However there’s no reason to restrict this to being of use only when multiple people are involved. If one has multiple computers, using version control software simply means that each contains an identical copy of shared files.

The benefits of taking this approach are, from my experience, almost impossible to overstate. Not only has my life become significantly easier thanks to a much reduced chance of mistakes, but I’ve also been able to be far more cavalier about moving between new machines, adding new machines to my menagerie, and even simply reinstalling existing machines. Of course for most normal people out there, this won’t be an advantage at all since it fulfils a need you don’t have, and uses a mechanism you won’t want to understand, but if you’re a serious computer user I think you should consider it...

Read more (2005-12-10 08:00)

SREP

In my opinion, what separates the men from the boys when it comes to programming is debugging. It doesn’t matter how good one is: bugs are an inevitable part of a programmer’s life. The difference in the amount of time it takes different people to notice a bug, track down its cause, and provide a fix can be quite amazing. Maybe I am a troglodyte, but I believe that the best advice I have seen on the subject of debugging is from Brian Kernighan, who once said that the best tool for debugging is printf and common sense. Frankly I have never found debuggers of any practical use (apart from when using those languages so crude that one needs a debugger to view a stack trace).

There is however one other technique in my debugging armoury, and it involves the humble grep utility. [For those unfamiliar with grep, it searches through one or more files for matches against a given regular expression]. I use it to hunt for every occurrence of a function name or data-type in a large code base while trying to track down a problem. If I have a hunch about a possible cause, this often enables me to track down the offending calling code far faster than any other mechanism I am aware of. A very handy idiom that I use is the following, which finds every file containing Func and loads it straight into my text editor:

grep -iRl Func . | xargs nedit

...

Read more (2005-10-03 08:00)

Metacircularity

One of the great things about so-called dynamic languages (meaning the Pythons, Converges, and Rubys of this world vs the static Cs, Javas, etc.) is the ability to rummage around the type system at run-time. In Converge, every object has a slot instance_of which tells you the class that any given object is an instance of. This makes sense because classes are normal run-time objects, which is rather different from some static languages like C++, where the concept of a class is essentially ‘compiled out’ and does not exist at run-time. In Converge, if one writes the following code:

Sys.println(1.instance_of)

then at run-time the following is printed: ...
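
For those more familiar with Python than Converge, a rough analogue of this kind of run-time introspection (my example, not the post’s) is:

# Classes are ordinary run-time objects, so any value can be asked
# for its class.
print(type(1))            # <class 'int'>
print((1).__class__)      # the same class object, reached via an attribute
print(type(1).__name__)   # 'int'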

Read more (2005-09-12 08:00)

Timing setjmp, and the Joy of Standards

When designing a C application - the new version of the Converge VM - I had cause to wonder how I should encode exceptions. There are essentially two choices here:

  1. Each function returns a value which, if not NULL, signifies that an error occurred. [In other words, a simple variant of the standard UNIX mechanism, albeit one that doesn’t involve the monstrosity known as errno.]

  2. Using some setjmp / longjmp hackery to emulate the exceptions one finds in many modern OO languages.

The former has the advantage of familiarity, and of forcing the programmer to be very aware of every exceptional condition that can happen in a program. The disadvantage is that it terminally pollutes the API (virtually no function in the API can return a value that is not related to errors, because errors must be percolated back to the top of the call chain), and muddies code by forcing checks at every function call...
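
A minimal sketch of that trade-off (written in Python for brevity, rather than the C the Converge VM is actually written in, and not taken from the post): with error-return values every caller must check and propagate the error, whereas with exceptions intermediate callers stay untouched.

# Style 1: errors encoded in return values; every call site repeats the check.
def parse_header(data):
    if not data:
        return None, "empty input"
    return data[0], None

def parse_message(data):
    header, err = parse_header(data)
    if err is not None:          # error plumbing at every call
        return None, err
    return {"header": header}, None

# Style 2: exceptions (what setjmp/longjmp emulates in C); the error
# propagates automatically to whoever catches it.
def parse_header_exc(data):
    if not data:
        raise ValueError("empty input")
    return data[0]

def parse_message_exc(data):
    return {"header": parse_header_exc(data)}   # no error plumbing needed

print(parse_message("hello"))       # ({'header': 'h'}, None)
print(parse_message_exc("hello"))   # {'header': 'h'}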

Read more (2005-07-21 08:00)

Text is Dead They Say

Martin Fowler has recently published a lengthy but rewarding article on language workbenches. I agree with the majority of what he says, and will follow up on that in due course. However there is one particular thread running through the article with which I disagree, and which I wish to address here. Martin’s thrust is that in the so-called “post-IntelliJ” world that we inhabit, programming via textual files is a relic from a past age.

The notion that we are passing out of the “age of text” (my term, for want of a better one) is one that an increasing number of people appear to be coming to. However, while Martin uses this as one of the central themes of his document, I personally think it does not affect his core argument. Furthermore, I disagree with the basic tenet.

In the scenario Martin outlines, users will increasingly be creating and editing programs via DSLs. Martin’s assumption is that the only practical way of editing DSLs is via dedicated editors that respond to the user’s input in intelligent ways. This implies to him that users are directly editing a representation of an instance of the language’s abstract syntax. I believe that, practically speaking, this implication is correct, if irrelevant. However, I believe the original assumption to be flawed - why should users have to edit DSL code via specialised editors? I suspect that there are two deeply connected reasons behind this: ...

Read more (2005-06-16 08:00)

The Importance of Syntax

For me, programming language syntax may well be the love that can not speak its name. In the modern era we are often conditioned to believe that “syntax doesn’t matter” - that we should only be interested in the underlying concepts that syntax allows us access to. Any particular syntax - sometimes called the concrete syntax - is malleable and can be easily exchanged for another without affecting the underlying meaning - the semantics - of what is being represented. I myself have said more than once that the syntax of various languages is largely irrelevant. Yet I also believe that a language’s syntax is very important, and that getting it right is a large part of making a language useable. Somehow these two conflicting ideas manage to exist in my mind without rendering me incapable of functioning.

If one is being impartial, the first question one surely has to ask is: why would anyone say that syntax is unimportant? After all, syntax is our mechanism for reading and writing programs. Clearly, bad syntaxes can make our tasks much more difficult. I have pondered on this question for quite some time. I think there are two very different reasons which together explain this.

The first, and most obvious, is that syntax can become a very personal issue - based on a person’s individual tastes and experiences - and conversations about different syntaxes invariably descend into pointless, and circular, trivialities. There is thus an implicit social contract that we avoid talking about particular syntaxes, to avoid bogging down more important debate (and to try and avoid setting people against each other). If you’ve ever had the misfortune to step into a ‘tab vs. space’ debate, or something similar, you will appreciate why such topics are generally best avoided. Note that this is essentially a socio-political decision, and I would say that, on the whole, it’s a wise one...

Read more (2005-05-01 08:00)

Predicting the Future of Computing

The British Computer Society recently published a document entitled Grand Challenges in Computing - Research. A grand challenge is billed as the pursuit of ‘a goal that is recognized one or two decades in advance; its achievement is a major milestone in the advance of knowledge or technology, celebrated not only by the researchers themselves but by the wider scientific community and the general public.’ The BCS is a well respected organization, and the report was edited by Tony Hoare and Robin Milner, two of the best respected people in the field.

Given its pedigree, the failure of this document to deliver on its intriguing premise is perhaps surprising. Yet I can not honestly say I was surprised by this failure. Subsequently I was surprised by my lack of surprise. Some thought has led me to think that there are two different factors behind my reaction.

The first is the somewhat predictable nature of the grand challenges. For example, consider the grand challenge ‘scalable ubiquitous computing systems’ - this might sound fascinating, and is certainly an imposing title for those unfamiliar with the subject. But in reality it packages up an interesting, reasonably cutting-edge piece of existing work - ubiquitous computing - and tacks on a vague wish (scalability). This is hardly a grand challenge - even I (as someone who doesn’t work in that area) can see that enough work is ongoing to suggest that it’s already some way towards becoming a reality. Then there are vague grand challenges such as ‘dependable systems evolution’, a large part of which appears to involve a few people doing some security analysis on code. This isn’t a grand challenge either - the OpenBSD boys have set the standard on this in my opinion, and what’s more their stuff is practical and available now. The grand challenge’s ultimate aim of building a ‘verifying compiler’ to solve all of our security and reliability problems is simply silly, as anyone who has heard of the halting problem would surely know. Then there is the standard cyborg-esque nonsense. I am quite happy to believe that investigating the nematode worm might aid medical science, but ‘In Vivo-in Silico (iViS): the virtual worm, weed and bug’ suggests that such work might influence computer systems; even the almost-renowned sci-fi author Kilgore Trout couldn’t have dreamed of such a bizarre idea. It’s the sort of fluff that leads to truly awful, and misleading, terms such as artificial intelligence and genetic algorithms...

Read more (2005-01-31 08:00)

Tail Call Optimization

Lambda the Ultimate might well be the last resort of the programming scoundrel, but it can be quite informative on occasion. Plus there’s always the unintended entertainment value of those who refuse to accept that people who don’t use functional programming very much may still be worthy computer users: it can be genuinely hilarious. It’s as if Tony Hancock’s spirit had risen from the ashes and decided to inhabit the body of a 21st century academically-inclined computer programmer.

However, I digress. A recent thread on dynamic languages on LtU has some comments on tail call optimization (inevitably courtesy in part of some griping from disgruntled functional programmers), which have prompted me to write down some thoughts I’ve had on this subject over the past six months or so.

The first question many people will ask is “what is a tail call?”. Whilst there’s a slightly deeper general principle at work, it’s easiest to see this via a simple example of a recursive function. Let’s take the standard Fibonacci number generator in a simple Python example: ...
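
The excerpt is cut off here; the standard recursive definition the post presumably builds on looks something like the sketch below, shown alongside a tail-call-style rewrite for comparison (neither is necessarily the author’s exact code):

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)    # neither recursive call is in tail position

def fib_tail(n, a=0, b=1):
    # The recursive call is the last thing the function does, so a
    # TCO-capable implementation could reuse the stack frame; CPython
    # does not perform this optimization.
    if n == 0:
        return a
    return fib_tail(n - 1, b, a + b)

print(fib(10), fib_tail(10))   # 55 55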

Read more (2004-12-21 08:00)

Language Orientated Programming

It is a truth universally acknowledged, that a computer scientist in possession of a good idea, must be in want of a catchy phrase to frame it. It appears that ‘language orientated programming’ (originally defined by Martin Ward) is such a phrase, but I’d like to look a little beyond the phrase to get a glimpse of the underlying idea.

Essentially the idea is to allow certain aspects of a program to be expressed via small Domain Specific Languages (DSLs). A DSL should allow a certain concept to be expressed succinctly and naturally - in other words writing something intended for that DSL should be easier in the DSL than writing the same thing in a General Purpose Language (GPL).

Most of us are familiar with DSLs through programs like make, which bite off a small problem domain and (hopefully) provide a good solution for it. Such DSLs tend to be somewhat stand-alone though - one cannot integrate the DSL into a main program very easily. This often leads to bizarre kludges when we wish to integrate two languages - perhaps the classic example is the horror of embedding SQL commands as strings within a program. This obviously means that each SQL command can only be parsed, and so on, at runtime - more than bad enough. But it also means that the SQL command is like a foreign body - it can only interact with the rest of the program through what is typically a verbose and heavyweight interface...
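
A small sketch (mine, not the post’s) of that ‘foreign body’ problem: the SQL below is just a string to the host language, so even a trivial typo only surfaces when the database parses it at run-time, and results come back through a narrow, generic interface.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "alice"))

# 'SELCT' instead of 'SELECT' would compile fine in the host language and
# only fail when the string reaches the database's parser at run-time.
for row in conn.execute("SELECT id, name FROM users"):
    print(row)   # rows come back as plain tuples, e.g. (1, 'alice')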

Read more (2004-11-19 08:00)
