Blog

Complete blog archive

pizauth: HTTPS redirects

I’ve just released pizauth-1.0.6 which introduces one notable new feature: support for HTTPS redirects. New features sound good, but what does this actually mean? My experience is that the way OAuth2 works is sufficiently counter-intuitive that people often struggle to make sense of what it’s doing.

Here’s a very quick, simplified summary of what OAuth2 and pizauth do. OAuth2 lets you obtain a time-limited token which you can use to access a remote resource (e.g. the ability to send email): pizauth’s job is to get you such a token. If you ask pizauth for a token for a resource and a token isn’t already available, pizauth will create a URL which you then open in your local web browser. That URL will take you to the remote resource (e.g. your email provider), which will then ask you to authenticate yourself (i.e. log in). Once you’ve successfully done so, the remote web page then needs to tell pizauth that you’ve been successful: it does so by redirecting you to an HTTP URL that’s actually pizauth running on your local machine.
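Concretely, that final redirect points at a URL on a local port along these lines (the port and parameters here are illustrative, not necessarily pizauth’s exact choices):

http://localhost:8765/?code=<authorisation-code>&state=<state>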

This last part – i.e. redirection – tends to surprise people, but somehow the remote resource needs to communicate success (or failure…) to pizauth. Piggybacking on HTTP means using a well-known existing protocol that most networks tolerate — it’s quite a cunning solution to the problem...

Read more (2024-11-10 09:25)

Recording and Processing Spoken Word

What happens if you listen to 60 seconds of your favourite radio station / audiobook and then 60 seconds of a random non-professional podcast? The former will be pleasant to listen to, with good intelligibility — it’s easy to understand everything the speakers say. The latter, however, is often somewhat unpleasant to listen to and has poor intelligibility. To add insult to injury, different episodes of the same podcast will often vary in quality and volume.

Fortunately, it turns out that most of us can get reasonably close to “professional” sounding recordings without much effort or money. In this post I’m going to break this down into two parts: recording and processing. My experience is that most of us eventually work out some recording techniques, but many fewer dive into processing. I’m going to show how one can use widely available command-line tools to process recordings, producing good quality mixes (i.e. the output we release to listeners).
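To give a flavour of the processing half (the post goes into the details; the filter settings here are illustrative assumptions, not its exact pipeline), a single FFmpeg invocation can already do a lot — a high-pass filter to cut low-frequency rumble, then EBU R128 loudness normalisation:

ffmpeg -i raw.wav -af highpass=f=80,loudnorm=I=-16:TP=-1.5:LRA=11 processed.wav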

I’m going to show how we can go from recordings that sound like this:

...

Read more (2024-08-21 12:30)

Why the Circular Specification Problem and the Observer Effect Are Distinct

Following “What Factors Explain the Nature of Software?”, a number of people have asked me: “aren’t the circular specification problem and the observer effect the same thing?” I take the blame for not making this clearer previously. In this post, as recompense, I’m going to explicitly explain why I believe these two factors must be considered distinct.

As a quick reminder, in my previous post I defined a triad of factors that I believe go a long way to explaining what the nature of software is:

  1. Software occupies a liminal state between the constraints of the physical world and an anything-goes fantasy world. We frequently mistake the constraints that software faces.

  2. Our ability to specify what a given piece of software should be is limited by the circular specification problem. We nearly always have to fully build the software in order to know precisely what we want it to be.

  3. Software is subject to the observer effect. The act of seeing the software in action changes what we – or more often others – think the software should be, sometimes radically.

...

Read more (2024-05-31 10:10)

What Factors Explain the Nature of Software?

I sometimes find myself asked to give advice on how organisations should go about creating software, but often my advice doesn’t gel with those who sought it. Sometimes that’s because only one answer was ever considered acceptable; sometimes I am ignorant of important wider context and my answer is unworthy of consideration.

But, most often, I believe it’s because both sides have different underlying assumptions about the nature of software. These assumptions are so deeply embedded that most of us rarely, if ever, explicitly think about them. It took me many years to realise I had made such assumptions, and many more to be able to articulate them somewhat coherently.

In this post I’m going to define, as best I can, what I consider to be the triad of interacting factors that best define the nature of software. These are inevitably high-level, but my experience is that they are just about specific enough to allow one to make reasonable quality predictions about a given piece of software. I’m not claiming these thoughts to be original, nor do I claim software to be unique in having these factors as their fundamentals...

Read more (2024-05-14 11:00)

Some Reflections on Writing Unix Daemons

Unix daemons are programs which run in the background, performing tasks on our behalf indefinitely. Daemons are somewhat mysterious, easily overlooked programs of a kind few of us have experience writing.

In this post I’m going to describe some of the things I’ve learnt in the course of writing and maintaining 3 daemons (extsmail, snare, and pizauth) over 15 years. I’m going to start, however, with a brief overview of what daemons are, and how we get them up and running, as it will make the later parts of the post easier to understand.

Background

Let’s start with the most important question: why the term “daemon” and why the weird spelling? The term seems to date from the early 1960s when “operating systems” as we know them today started to take recognisable form. The general term is, quite sensibly, based on Maxwell’s demon, but I assume the “ye olde world” pseudo-antiquarian spelling (i.e. the extra “a”) is the result of people at the time finding it amusing. Unix seems to have used “daemon” as the spelling from early in its history. The spellings “demon” and “daemon” are often used interchangeably, probably because the former is both more familiar and present in people’s spell checkers...

Read more (2024-02-28 11:00)

Faster Shell Startup With Shell Switching

A few days ago Thorsten Ball wrote a post exhorting Unix users to optimise their shell’s startup time. It’s excellent advice and well worth following.

The Unix shell has two major use cases: as an interactive prompt (what we often call “the command line”); and as a “scripting” or “non-interactive command” language. We normally pick one shell (bash, fish, zsh, etc.) and use it for both use cases.

However, we can use different shells for each use case. People don’t normally bother because there is little functional utility in doing so. There is, however, a small performance benefit, which I’m going to look at in this post. I’m going to call the technique I describe “shell switching”, since I’m not aware that it has an existing name...
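As a minimal sketch of the idea (the details here are assumptions of mine, not necessarily the post’s exact setup): make a fast-starting POSIX shell your login shell, and have it replace itself with a feature-rich shell only when running interactively. In the fast shell’s startup file:

case "$-" in
    *i*) exec zsh ;;   # interactive: switch to the feature-rich shell
esac                   # non-interactive: stay in the fast shell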

Read more (2024-01-16 14:50)

Choosing What To Read

Like many people, I spend a lot of time reading. Since there is much more that I could read than there is time for, I have to make choices about what I should read. In this short post I’m going to outline what my reading choices have become and why. My aim isn’t to suggest that anyone else should make the same choices – indeed, they’re intensely personal – but, rather, that thinking about such choices is worthwhile.

Unlike many people, these days I intentionally read very little fiction. Saying that I’m a literary philistine out loud is close to a taboo and for many years I downplayed it. These days, I’m much less worried about this admission, in part because the alternative I have gravitated towards has given me greater personal satisfaction and, I hope, insights: history.

Personally, I find that history contains many more surprises than authors can get away with in fiction. We have a strong sense of what is plausible: this constrains writers, even in supposedly fantastical settings, and readers tend to dislike violations of it. Our sense of what is plausible is in large part defined by our life experience — but most of our lives take place within surprisingly narrow parameters...

Read more (2024-01-04 09:50)

Debugging A Failing Hotkey

On Sunday I tried using hk, but it wasn’t working. That surprised me, because hk is a very simple program that allows one to set temporary hotkeys — and a program I use many times a day. Here’s a toy example of hk:

hk Ctrl+Shift+F8 echo Hello

hk waits until Ctrl+Shift+F8 is pressed — when it is, hk executes the command (which prints “Hello”) and then exits...

Read more (2023-12-13 13:25)

How Often Should We Sharpen Our Tools?

[there was a] lumberjack who said that if his life depended upon his ability to cut down a tree in five minutes he would spend three minutes sharpening his axe.

Often ascribed, though inaccurately, to Abraham Lincoln, this aphorism neatly captures the idea that, in order to efficiently reach a direct goal we sometimes need to spend time on improving indirect factors such as our tools. In computing, we have access to a wide array of tools, but each takes effort to learn, to customise, and even, if necessary, to create. While our life rarely depends on sharpening our tools, the question implicitly posed by the aphorism is no less important for it. Rather than try and grapple with this question abstractly, I’m going to give an example of how and when I’ve chosen to sharpen some of my tools, and the consequences of doing so.

I spend a huge portion of every day programming or writing in a text editor. For at least 15 years I used NEdit, since that was the closest Unix match to the first (non-Unix) editor I had used, StrongED. When I started using it, NEdit felt modern, powerful, and evolving in the right direction — but, for the last 5 or more years of my use, it was unmaintained and gradually suffering from bit rot. I put some effort into keeping it running on OpenBSD, including fixing crashing bugs. This greatly amused my Vim-using colleagues and I was the butt of many jokes — something that didn’t worry me, because I was secretly convinced that I was right not to waste my time on Vim’s bizarre editing style...

Read more (2023-12-04 15:11)

Four Kinds of Optimisation

Premature optimisation might be the root of all evil, but overdue optimisation is the root of all frustration. No matter how fast hardware becomes, we find it easy to write programs which run too slow. Often this is not immediately apparent. Users can go for years without considering a program’s performance to be an issue before it suddenly becomes so — often in the space of a single working day.

I have devoted more of my life to optimisation than I care to think about, and that experience has led me to make two observations:

  1. Human optimism leads us to believe that we can easily know where a program spends most of its time.
  2. Human optimism leads us to believe that we can easily know how to make the slow parts of a program run faster.

...

Read more (2023-11-14 09:30)

Minor Advances in Knowledge Are Still a Worthwhile Goal

Despite a distinct lack of talent, and occasionally long hiatuses, I’ve enjoyed learning to play the electric guitar as an adult. One of the many variables in the feel and sound of a guitar is its strings, but the major manufacturers’ offerings all seemed rather similar to me. I eventually settled on D’Addario’s EXL offerings, mostly so that I didn’t have to think about it again.

Last year, needing to buy a new batch of strings, I idly looked around to see what is now available, and noticed that D’Addario had a newish “NYXL” range — at double the price of my EXLs! Despite balking at the price, and knowing from experience that there is a tendency in music circles to ascribe magic to the mundane, the reviews I read were so positive that I decided to risk buying a single set.

I wasn’t expecting to notice much difference between the EXLs and NYXLs, but as soon as I fitted the latter, I was proven wrong. They sounded better, though that’s very subjective; they also appeared to be staying in tune better, which is more objective. But, gradually and more surprisingly, I realised that the NYXLs were taking perhaps three or more times longer to degrade (guitar strings gradually rust, in no small part thanks to our sweaty mitts)...

Read more (2023-10-03 13:00)

How Hard is it to Adapt a Memory Allocator to CHERI?

CHERI is a set of things (adapted CPUs, adapted operating systems, adapted libraries) that, collectively, aim to stop software bugs becoming security flaws. If you’d like to know more, I gave some of my thinking on CHERI in Two Stories for “What is CHERI?”.

In this post I’m going to flesh out an observation I made in that post, which is that some software needs thoughtful adaption to CHERI if we want to get the security advantages we hope for. Exactly what that thoughtful adaption might look like will vary, probably substantially, between different pieces of software. What, for instance, might it look like for critical, widely used, components? In this post I’m going to look at how memory allocators (henceforth “allocators”), one of software’s most fundamental building blocks, can be adapted to CHERI. If you find this interesting, but want greater depth than I’ll go into here, you might be interested in the paper Picking a CHERI Allocator: Security and Performance Considerations that this post is based upon.

A Simple Allocator

It is a truism that virtually every program needs to dynamically allocate memory. Our collective folklore tells us that allocators like dlmalloc or jemalloc are impressive pieces of software that improve on their predecessors, but very few of us can really explain why. We call malloc, realloc, and free, and magically chunks of memory are allocated, resized, or freed on our behalf...

Read more (2023-09-18 10:30)

“Programming” and “Programmers” Mean Different Things to Different People

Late last year on my blog I published “How Might Generative AI Change Programming?”. Although I’m slightly biased, I’m still comfortable with that post’s high-level message. However, there is one important aspect that I think I failed to bring out as explicitly as I should have: different communities’ programmers will be affected in different ways.

One reason I made this mistake is that I had overlooked how different communities’ assumptions about “programming” and “programmer” have diverged. As with most instances of incommensurability, the people involved (e.g. me) often don’t realise that they’re talking about slightly different things.

As a concrete example, a couple of weeks back, the Economist – whose coverage of areas I know about I find to be rather good – published an article with the following claim: ...

Read more (2023-08-23 12:30)

pizauth: First Stable Release

In pizauth: dump and restore I said:

However, there was one feature I couldn’t find a good design for: persisting state to disk. This missing feature had been nagging away at me (and a small, though growing, number of other people) for quite a while: recently I finally alighted upon a design that I thought would work well. pizauth-0.2.2 is the first release to support this. Perhaps this will be the last release before 1.0.0!

I was a little too optimistic. One of pizauth’s configuration option names had long bothered me, and addressing that caused a backwards-incompatible change that forced a 0.3.0 release...

Read more (2023-08-13 08:00)

The Need to Explain

One of my in-built quirks – present, I’m told, from a very young age – is spotting patterns. Whenever I see X, I also see Y; there are 3 As for every B; if this happens, then that is more likely to happen afterwards; and so on. It’s not something I do particularly consciously, and few of my observations even reach the lofty heights of banal. Fortunately, it’s not all wasted effort, and a small number of these observations end up making my life a little better. My train home always leaves from platforms 1 or 8. If I arrive at the station as my train is due to leave, my best bet is not to check the departure boards but to run past platform 8 to see if my train is there and, if it isn’t, run straight to platform 1.

As I’ve grown older, I’ve learnt to keep more of the observations I’ve made to myself. This isn’t because my observations very often offend, but because some people view them as creating a void which must be filled, as soon as possible, by an explanation. The problem with spontaneous explanations on topics beyond our purview is that they are often obviously wrong. One of my other quirks is that I find it difficult not to rebut obviously wrong explanations. This can then cause an endless loop of further explanations and rebuttals, which is rarely useful or enjoyable.

To my surprise, I’ve realised that further attempts at spontaneous explanation become less convincing more often than they become more convincing. One obvious lesson to draw from this is that it is easier to disprove incorrect explanations for a phenomenon than it is to generate a correct explanation. As a researcher, I have learnt that lesson the hard way more than once! ...

Read more (2023-07-18 08:50)

Two Stories for “What is CHERI?”

Some readers might have heard of CHERI. Of those who’ve heard of it, most have probably got the rough idea that it’s a “more secure” CPU — but, after that, I find that most people start to get rather confused when they try and nail down “what is CHERI?”.

Having grappled with this myself, and having helped others grapple with it, I think that the major reason is that the high-level story requires understanding quite a bit of detail — hardware, Operating Systems (OSs), programming languages (and some end user programs) all have to be adapted for CHERI. Most people tend to focus at first solely on the CPU, but one can’t form a coherent story from the CPU alone. Those who persist and piece together the other parts of CHERI can then get swept up in the novelty of the idea, and assume that CHERI magically secures everything it touches. However, CHERI is just a tool and, like all tools, it’s better at some things than others — and, even then, only when used thoughtfully.

In this blog post I’m going to explain just enough of CHERI to put together two coherent stories for what it is and where it might be used. I’m not going to try and cover all the possibilities, since, as you’ll see, this post is long enough as it is! One reason for that is that CHERI contains overlapping concepts that can be combined in more than one way. Because of that, I’ll warn you now that there is a fairly hard switch between the two stories in this post — sorry in advance! I’m also not going to be afraid to diverge from CHERI’s terminology, which has a number of historical quirks that I find add to the confusion...

Read more (2023-07-05 09:15)

My Interview with Eelco Visser on Parsing

In my previous post, I mentioned in a footnote an interview I had done with the late Eelco Visser in 2008. The audio is available here, but there isn’t a transcript. That’s a pity, because, as far as I know, Eelco didn’t do another interview along those lines, and searching through audio is still a difficult thing to do well. So, I thought I’d produce a transcript, as a little way of honouring Eelco’s memory.

In my opinion, the technical material in the interview speaks for itself. The interview process, however, had a partly comic side — neither of us had a clue what we were doing. I had never been an interviewer before (and only did one further interview after this), and I don’t think Eelco had done a technical interview like this. My understanding of parsing was woefully incomplete, and Eelco hadn’t thought about some of the deeper details in several years (he said afterwards that he was a bit “rusty”), though it clearly came back to him very quickly! We recorded the interview in a dingy college room in Cambridge that, perhaps fittingly, looked like it was last decorated during the heyday of parsing research — the 1960s.

Afterwards, I’m fairly certain that we went for a drink together in the college bar. I don’t remember the specific topic we talked about, but it almost certainly would have been like many others we had: he’d talk about deep technical matters while I made stupid jokes, with the two ends of the conversation somehow meeting in the middle from time to time. I unthinkingly expected to have more of those conversations in the future. While that’s no longer possible, I have many happy memories of those we did have...

Read more (2023-05-16 09:30)

Why Split Lexing and Parsing Into Two Separate Phases?

“lex and yacc” are two terms that are used together so often that they can seem indivisible. lex takes a string as input and divides it up into lexemes (roughly speaking “words”) and yacc takes a sequence of lexemes and parses them. Since some parsing approaches such as recursive descent parsing unify these phases, why do lex and yacc split them apart?

Running example

Let’s start with a simple language, which allows us to assign simple expressions to variables:

x = 1;
y = x;
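After lexing, the first line of this program becomes a sequence of lexemes along these lines (the lexeme names are illustrative):

<ID x> <EQ> <INT 1> <SEMI>

yacc’s job is then to check that such sequences conform to the language’s grammar.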

...

Read more (2023-05-02 10:00)

Displaying My Washing Machine’s Remaining Time With curl, jq, and pizauth

A couple of months ago our washing machine produced a considerable quantity of smoke when I opened its door, which I interpreted as a suggestion that a replacement was needed. After rummaging around the internet for advice, a couple of days later a new Miele washing machine took its place in our home.

As well as boggling my mind at how much better the new machine was than the cheap machine it replaced, I was pleased to discover that Miele make available a third party API which one can use to interact with the machine. Putting aside any worries about connecting a high-speed rotating device to the internet, I decided to have a poke around. Although the API is more limited in practice than in theory – the API has support for setting values, but those mostly aren’t used – I eventually realised it could help me solve one recurring problem.

Like most washing machines, ours beeps to notify me that it’s finished. However, I can’t always empty it straight away, the beep is obnoxious and repetitive, and it ends up disturbing everyone in the house. I can turn off the beep, but then I don’t know when the machine has finished. Miele’s app can notify me, but it regularly logs me out, and finding out the time remaining is a chore. What I really wanted was a countdown of the remaining time and a notification on my desktop computer. Fortunately, I can do exactly that on my desktop computer using basic Unix tools. Here’s what a sped-up version of the result looks like (the middle part is hugely sped up; the end part slightly less so): ...
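Under the hood, the plumbing amounts to something like this sketch (the account name, endpoint, and JSON path are assumptions on my part, not necessarily what the post uses): fetch the machine’s state with an OAuth2 token from pizauth and extract the remaining time with jq.

curl -s -H "Authorization: Bearer $(pizauth show miele)" \
    https://api.mcs3.miele.com/v1/devices | jq '.[].state.remainingTime'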

Read more (2023-04-11 10:15)

How Big Should a Programming Language Be?

Reading the thought-provoking “Patterns & Abstractions” post reminded me of a long-held opinion I have about programming language design: we have a tendency to keep adding features to a language until it becomes so big that its sheer size makes it difficult to use reliably. Since most of us spend most of our time programming in one language, it can be difficult to see a common trend amongst languages in general.

Language size over time

Let me start with a concrete example. I started using Python around version 1.3, though the first version I really remember was 1.4. Python 1.5 was a real improvement, adding the assert statement, nested packages (“hierarchical modules” in the release notes), and the re regular expression module — things that I suspect nearly every modern Python programmer finds useful. At that point, Python was still a relatively small language, to the extent that you could reasonably expect to store nearly every detail about it (and its implementation!) in your head.

Python 1.6 added support for Unicode — support that was clearly useful, but which proved sufficiently awkward that (somewhat) fixing that awkwardness turned out to be a major cause underlying the schism between Python 2 and 3. As Python 1.6 turned to 2.0 and beyond, this formerly small language kept adding features, many of which I find useful (e.g. list comprehensions), but each of which has inevitably made the language bigger...

Read more (2023-03-23 15:00)

Rust’s Two Kinds of ‘Assert’ Make for Better Code

Daniel Lemire’s recent post “runtime asserts are not free” looks at the run-time cost of assert statements in C and shows that a simple assert in a frequently executed loop can cause significant overhead.

My own opinion on assertions has shifted over the years, from “I don’t see the point” to “use them sparingly” to “use them as much as possible”. That last shift is largely due to Rust having two kinds of “assert” statement – assert and debug_assert – which has allowed me to accurately express two different kinds of assertions, largely freeing me from performance worries. If you come from a language that only has one kind of assert statement, this distinction can seem pointless, so in this post I want to briefly explain why it helped shift my thinking.

Background

Let me quickly define what I mean by an “assert”: it’s a programming language statement that checks a property and causes a crash if that property does not hold (conventionally called a “failing assert”). For example, if I have a Python program with a list of people’s ages and calculate the minimum age, I might want to check that the youngest person doesn’t have a negative age: ...

Read more (2023-03-16 16:00)

Scheduling my Electricity Usage

Modern societies function in large part thanks to our ability to use as much electricity as we want, when we want it. As renewable energy (solar, wind) plays a greater part in the electricity generation mix, our opportunities to use “cleaner” energy increase, but renewables are also fickle — sometimes the wind doesn’t blow, and the sun doesn’t shine. Today in the UK, for example, the weather is overcast and still: solar and wind aren’t going to generate much power. In such cases, the electricity we’re using is generated from much “dirtier” sources.

For the last 6 months or so, I’ve been trying to see what I can do to increase the “clean” proportion of electricity I use. I have a lot of limiting factors: I can’t, for example, install solar panels or a wind turbine. But what I can do is change when I use electricity.

My intention is not for me (or my family!) to live like a stylite or troglodyte: personally, I rather like the conveniences and comfort of modern life. Therefore, there are lots of things where my timings are inflexible — I’m not going to turn the oven on to produce a meal at midnight, nor am I going to switch my computer off when there’s a big football match on...

Read more (2023-03-05 09:45)

Why Aren’t Programming Language Specifications Comprehensive?

In Compiled and Interpreted Languages: Two Ways of Saying Tomato I used this little example to show that we can observe differences in how CPython and PyPy execute code:

$ cat diffs.py
print(str(0) is str(0))
$ python3 diffs.py
False
$ pypy diffs.py
True

Although I explained in that post that programming language specifications and implementations are separate things, I didn’t explain why it’s ever considered acceptable for different implementations of a language to produce different output for a given program. In this post I’m going to attempt to correct that oversight, and to do so in a way that I hope is relatively approachable. As that suggests, I will be as informal as possible, since I’ve found that more formal treatments of this subject can obscure where it has a day-to-day impact on programming...

Read more (2023-02-16 10:00)

Distinguishing an Interpreter from a Compiler

In Compiled and Interpreted Languages: Two Ways of Saying Tomato, I showed how any language can be implemented as an interpreter or a compiler. However, I was deliberately vague when defining how one might distinguish an “interpreter” from a “compiler”, in no small part because I couldn’t think of a crisp definition of the two terms. Although I wasn’t quite as vague as “I know it when I see it”, I was uncomfortably close.

It was thus with great interest that I read a comment on the post from a good friend, and language implementation veteran, Mario Wolczko. In a sense, Mario gave two ways of distinguishing compilers from interpreters, but I’m going to quote the one that made my jaw drop:

How to distinguish between a compiler and an interpreter? One way is to look at the time it takes. A compiler should run in time roughly proportional to the size of the source, whereas the runtime of an interpreter is determined by how much work the source program has to do for a given input. For example, suppose the source program reads a number and then has a loop whose trip count is this number. The time it takes to compile this program will be independent of the number, in contrast to the time it takes to interpret the program.
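One half of this is easy to observe directly (a sketch — absolute times are machine-dependent): a shell interpreting a loop takes time roughly proportional to the trip count, whereas the time to compile a program containing such a loop would be independent of it.

time sh -c 'i=0; while [ $i -lt 100000 ]; do i=$((i+1)); done'
time sh -c 'i=0; while [ $i -lt 1000000 ]; do i=$((i+1)); done'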

...

Read more (2023-01-26 12:00)

try_repeat released

One of the great things about Unix is the ability for users to add small tools that fit in seamlessly with those provided by the base operating system. I’ve written a few such tools over the years.

Today I’m releasing another little Unix tool, try_repeat, which tries to run a command n times, exiting early if the command exits with a non-zero exit code. Think of it a bit like the repeat command built into some shells (e.g. zsh) but which “exits as soon as an execution of the command fails”. This is particularly useful when trying to find intermittent failures in a command: I can set an upper bound (i.e. “if the command runs n times successfully, I’ll assume there are no problems”) but be notified if there are any problems earlier on.
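In shell terms, try_repeat automates something like the following loop (a sketch of the behaviour just described, not of its actual implementation; ./flaky_command is a placeholder):

n=100
i=0
while [ $i -lt $n ]; do
    ./flaky_command || exit $?   # exit early on the first failure
    i=$((i + 1))
done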

I’ve had a basic version of this command knocking around in my bin/ directory for quite a while. Earlier today I was trying to debug a non-deterministic failure and fiddling around with unwieldy shell commands. Once I rediscovered try_repeat, it allowed me to quickly hunt down the non-deterministic failure, fix it, and feel confident that I’d fixed it...

Read more (2023-01-25 11:45)

Why We Need to Know LR and Recursive Descent Parsing Techniques

A couple of people have asked me for my thoughts on part of an article from Tiark Rompf called (roughly) “Just Write the Parser” which advocates the use of recursive descent parsing over more “formal” approaches to parsing, at least in the context of teaching compilers. I’m fairly sure I read this article a couple of years ago, but it may have been updated, or simply have been discovered anew. Either way, since I admire both Tiark and his work a great deal, it was useful for me to engage with his ideas and compare them to my own.

In this post, I’m going to summarise Tiark’s argument as I see it, and try to explain where I agree and disagree with that summary. I will inevitably rehash many of the arguments from my 2020 “Which Parsing Approach?” post, but I’m going to try to focus on some factors which I now realise were at best buried, and at worst inadequately explained, in that previous post.

Summarising the argument

In essence I believe Tiark is making two points: ...

Read more (2023-01-17 08:00)

Compiled and Interpreted Languages: Two Ways of Saying Tomato

What we might think to be a settled truth often isn’t. I was reminded of this recently after I publicly stated that there is no such thing as a compiled or an interpreted programming language: many people have made clear to me that they believe strongly that languages clearly fall into one of these two categories. Indeed, a simple web search suggests that the vast majority of pages describing compiled and interpreted languages propose a crisp delineation between the two.

I certainly sympathise with the temptation to classify languages in this way, because most programming languages are commonly implemented using one technique or the other. However, experience has taught me that it’s important not to conflate the most common implementation choice with inherent properties of a language.

In this post I’m therefore going to show, using a series of programming language implementations, why languages shouldn’t be classified as compiled or interpreted. Before I do that, I need to start by clarifying the difference between a programming language specification and an implementation of that specification...

Read more (2023-01-10 08:00)

Software Security Research Position

The soft-dev research team is growing again and we have a position open for a Research Associate / Fellow in Software Security — the full position is listed here with a closing date of Jan 25th.

The position is looking at the security of software systems in the context of CHERI, in particular what we can do to make web browsers more secure using CHERI.

There’s considerable flexibility in how we go about tackling this problem, so if you think this sounds potentially interesting, please consider applying. We’re an open-minded group: you might, for example, be a researcher who wants to work on your programming chops; or a programmer who wants to work on your researcher chops. Most importantly, you need to be enthusiastic about software, partly because the rest of us are, but mostly because, with that, you can learn nearly everything else you need on the job. You do need to be eligible to work in the UK, though we are flexible about where you work within the UK...

Read more (2023-01-09 08:00)

How Might Generative AI Change Programming?

The use of AI (Artificial Intelligence) techniques, specifically ML (Machine Learning) and its various sub-fields, is changing many fields and undoubtedly will change more in the coming years. Most of us are at least generally familiar with the idea of using ML to identify patterns in data. More recently Generative AI (“GAI” for the rest of this post), in the form of systems such as ChatGPT and Stable Diffusion, has made itself more widely known. Rather than simply classify new data, GAI can, as the name suggests, generate new outputs that conform to the underlying patterns contained in the model. Existing ML systems, in general, and GAI systems particularly, are almost certainly the harbingers of further advances. This inevitably leads to speculation about “what’s next?”

From my perspective, the obvious question is: how might ML and GAI change programming? In particular, the rapid advances in GAI have led many to assume that we will gradually do away with programming as a human activity. Although it’s rarely spelt out explicitly, this implies that a GAI system can take in a human description (or “specification”) and produce usable software from it. At a small scale, this is already possible, with the best known example being CoPilot.

When it comes to the generation of software, GAI and programming seem surprisingly similar: they both take in a specification and produce software as output. In each case, both fill in gaps in the inevitably incomplete specification in order to make complete software. However, the two take fundamentally different approaches to understanding and filling in gaps in specifications: programming forces humans to use their knowledge and experience to manually fill in the gaps; GAI uses the model it has learnt from existing programs to automatically fill in the gaps. This might seem like a small, perhaps even a pedantic, difference, but I believe it has significant consequences...

Read more (2022-12-15 08:00)

pizauth: differentiating transient from permanent errors

As and when I’ve found some spare moments, I’ve been working towards getting pizauth in a state where I think it’s ready for a stable release. I’m now making what I hope is the final alpha release available.

Since the last alpha release I have been in many more situations where pizauth only has intermittent internet access. Previously pizauth treated a wide category of errors as transient — that is, the sorts of errors that generally resolve themselves given a little bit of time (e.g. a temporary loss of internet access). However, while some errors (e.g. DNS lookup failing) are probably benign, others (e.g. HTTP requests returning non-200 codes) show something serious has definitely occurred.

The new alpha release of pizauth is the first to differentiate (possibly) transient errors from (definitely) permanent errors. If a permanent error occurs when refreshing an access token, then the token is invalidated, and the error logged. The reason that the user isn’t explicitly informed is that the true error (e.g. “your account is no longer valid”) is generally masked by another more generic error (e.g. “server refused to refresh token”). By invalidating the token, the user will then be asked to reauthenticate, at which point the true error is more likely to be properly reported. In the (hopefully) rare cases where there are persistent permanent refresh errors, users can look through logs to find out what happened...

Read more (2022-12-14 08:00)

November Links

...

Read more (2022-11-30 08:00)

More Evidence for Problems in VM Warmup

Some readers may remember some work I and others were involved in a few years back, where we looked at how programming language VMs (Virtual Machines), mostly those with JIT (Just-In-Time) compilers, warmed up — or, often, didn’t. In this blog post I’m going to give a bit of background to this area, and then look at a newly published paper (not written by me!), which gives further evidence of warmup problems.

Background

VMs start programs running in an interpreter, where they observe which portions run frequently. Those portions are then compiled into machine code, which is used in place of the interpreter. The period between starting the program and all of the JIT compilation completing is called the warmup time. Once warmup is complete, the program reaches a steady state of peak performance.

At least, that’s what’s supposed to happen: our work showed that, on widely studied small benchmarks, it often doesn’t. Sometimes programs don’t hit a steady state; sometimes, if they do reach a steady state, it’s slower than what came before; and some programs are inconsistent in whether they hit a steady state or not. A couple of examples hopefully help give you an idea. Here’s an example of a “good” benchmark from our dataset which starts slow and hits a steady state of peak performance: ...

Read more (2022-11-15 08:00)

What is a Research Summer School?

If I say “summer school” to you then you’ll probably think of extra sessions in summer holidays for children; depending on where you grew up, you might expect such sessions to be either exciting non-academic activities or catch-up lessons. What about summer schools for research students (what I’ll call “research summer schools” for the rest of this post, though in research circles they’re unambiguously referred to as just “summer schools”)?

I’ve co-organised four in-person research summer schools, most recently as part of the Programming Language Implementation Summer School (PLISS) series, and spoken at two others. One thing I’ve realised is that many people don’t really know what they involve. Indeed, I didn’t fully realise what they are, or could be, even after I’d been involved in several! This post is my brief attempt to pass on some of what I’ve learnt about research summer schools.

Let’s start with the obvious high-level intent: research summer schools are intended to help research students better understand a research area. “Research students” is an umbrella term: the majority of attendees tend to be PhD students, but there is nearly always a mix of people from other stages of life, particularly postdocs and BSc/MSc students, but sometimes including people who are coming at research from a less common career path...

Read more (2022-11-10 08:00)

October Links

...

Read more (2022-11-01 08:00)

pizauth: another alpha release

A few weeks back I made the first alpha release of pizauth, a simple program for requesting, showing, and refreshing OAuth2 access tokens. I’ve been pleasantly surprised at the number of people who’ve tested pizauth since then, mostly successfully. Inevitably, pizauth hasn’t quite worked perfectly for everyone, so I’m making available the next alpha release, 0.1.1. One new feature, and one portability fix, are particularly worth mentioning.

Running pizauth on a remote machine

A valid use case for pizauth is to run on a remote machine (e.g. to download email in a cron job on a server), but for authentication to happen locally. I had assumed, but hadn’t tested, that using pizauth with ssh -L would work. Indeed, it did work, but it required users to work out what port pizauth’s HTTP server (a vital part of the OAuth2 authentication sequence) was listening on. Users could discover the port number by carefully examining the authorisation URL:

https://login.microsoftonline.com/common/oauth2/v2.0/authorize?access_type=offline&code_challenge=thZhkonKVClNmeyg8ahkah9A*£JZoGJ5AGSGjSZBEWw&code_challenge_method=S256&scope=https%3A%2F%2Foutlook.office365.com%2FIMAP.AccessAsUser.All+https%3A%2F%2Foutlook.office365.com%2FSMTP.Send+offline_access&client_id=<your-client-id>&redirect_uri=http%3A%2F%2Flocalhost%3A8765%2F&response_type=code&state=Y5f1CUxNUbw
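The port number is buried in the percent-encoded redirect_uri (here, 8765). Armed with that, one can forward the local port to the remote pizauth with something like the following (the invocation shape is illustrative):

ssh -L 8765:localhost:8765 user@remote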

...

Read more (2022-10-20 08:00)

UML: My Part in its Downfall

In the last year or two there has been a slow but steady trickle of articles attempting to account for UML’s lack of long-term success (if you have time for only one, I suggest Hillel Wayne’s article). As fate would have it, I had some involvement in UML standardisation in the early 2000s, so I saw some of the goings-on from the “inside”. Although I’ve touched on this before, I’ve never written about my experiences in detail because I was worried about offending people who I like. 17 years after the fact, I hope that the likelihood of anyone being offended is fairly low.

In this post I’m going to try and explain some of the factors that I think contributed to UML’s downfall. To some extent this is a historical document, at least of my perspective. But, with the benefit of hindsight, I feel there are general lessons to be drawn about how both group dynamics and standardisation can develop in unfortunate ways. Be forewarned that I only saw part of what was going on (so I will be unaware of possibly important details), I didn’t write a diary (so I will recall things incorrectly), and my recollections are bound to reflect my biases (my ego will inevitably encourage me to relay the story in a way that presents me in a better light than I deserve).

Background

People had long created diagrams to document important aspects of software. Over the course of the late 80s and early 90s, three diagramming styles or, more grandly, “methods”, had started to become popular: the Booch, Jacobson, and Rumbaugh methods. After two merges, these three methods were combined to create UML (hence the “U” standing for “Unified” in UML), released as a standard through the OMG (Object Management Group), a standards body that had previously been best known for the CORBA standard...

Read more (2022-10-03 08:00)

September Links

...

Read more (2022-10-01 08:00)

pizauth, an OAuth2 token requester daemon, in alpha

One Monday a few weeks back, I found myself unable to send emails via my work account. I had received no advance warning and there was no clue in the error message I was seeing as to what the cause might be. I was briefly tempted to view being cut off from work as a good thing before, irritatingly, the sense of duty my parents instilled in me kicked in. After a while I realised that the problem was that my msmtp setup was using basic authorisation – what the rest of us think of as a traditional username and password – to send emails. Not only does my work’s Microsoft Exchange server no longer accept basic authentication, but Microsoft are killing it off for everyone before the end of the year — more people are thus about to experience the same confusion I experienced.

A common alternative to basic authorisation is OAuth2. From the perspective of this post, OAuth2 requires you to use a web browser to authenticate yourself (which might then involve two-factor authentication e.g. via your mobile phone) after which you can obtain time-limited access tokens. You can use access tokens to authenticate yourself to, for example, an IMAP or SMTP server. As irritating as I sometimes find two-factor authentication, there is no doubt that OAuth2 is more secure overall than basic authentication.

In my case, I solved my immediate problem with email-oauth2-proxy, which has a very cunning approach to the problem (it’s a semi-transparent proxy that provides the illusion of basic authentication to user clients while performing OAuth2 authentication with the remote server) but it occasionally stalled and required me to monitor an xterm for authentication links, which doesn’t sit neatly in my email setup. I then tried mailctl but it segfaulted on me (perhaps because of an incompatibility with OpenBSD) and I’m not familiar enough with Haskell to debug such things...

Read more (2022-09-28 08:00)

A Week of Bug Reporting

If you use software regularly, and have your wits about you, you will often realise that you’ve encountered a bug — in other words, that the software has done something it shouldn’t have (or failed to do something it should have). When this happens, the vast majority of people moan to the nearest bystander about how incompetent the people behind the software are and then carry on with whatever they were doing — often stumbling into the same bug repeatedly. Only a minority of people think of reporting bugs and, if they do think of it, they often fail to do so either because they think it’s too time-consuming or that they might end up looking silly.

It’s unfortunate that so few people report bugs. First, surprisingly few bugs affect only a single person: by the time someone does report a bug, it can have affected hundreds or thousands of other people. We can’t all be freeloaders — someone has to take the plunge and report the bug. Second, bugs frequently appear in contexts unfamiliar to the software’s developers. Most obviously, we all have slightly different software installations, and those variations can cause odd effects. Less obviously, users often use software in ways unimagined by its creators, such that some previously untested combinations of operations work incorrectly. Either way, we cannot assume that the software’s developers will encounter the same bugs that we do: in many cases, if we don’t report bugs, the developers simply won’t know they exist.

Reporting bugs in someone else’s software is different from finding and fixing bugs in one’s own code, because we understand those other systems much less well. The sense of confusion I have with bugs in my own code is magnified when using other systems. I’m less confident that I’m actually seeing unexpected behaviour; less sure what the cause might be; and have lower trust than normal in any fix I propose. That means that we must acknowledge an important trade-off: a full understanding of the cause of a bug in an unfamiliar system is often beyond any reasonable investment of time; but if we just say “it doesn’t work”, no-one can be expected to replicate, let alone fix, the bug. Somewhere in the middle is a sweet-spot where the bug reporter invests the minimum amount of time to generate a good enough bug report that a developer can use to fix the bug...

Read more (2022-08-31 08:00)

August Links

...

Read more (2022-08-30 08:00)

Making a Video of a Single Window

I recently wanted to send someone a video of a program doing some interesting things in a single X11 window. Recording the whole desktop is easy (some readers may remember my post on Aeschylus which does just that) but it will include irrelevant (and possibly unwanted) parts of the screen, leading to unnecessarily large files. I couldn’t immediately find a tool which did what I wanted on OpenBSD but through a combination of xwininfo, FFmpeg, and hk I was able to put together exactly what I needed in short order. Even better, I was able to easily post-process the video to shrink its file size, speed it up, and contort it to the dimension requirements of various platforms. Here’s a video straight out of the little script I put together:

...
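The core of that script is something along these lines (a simplified sketch: geometry parsing is abbreviated and error handling omitted; you click on the window you want to record):

eval "$(xwininfo | awk '
    /Absolute upper-left X/ { print "X=" $NF }
    /Absolute upper-left Y/ { print "Y=" $NF }
    /Width:/  { print "W=" $NF }
    /Height:/ { print "H=" $NF }')"
ffmpeg -f x11grab -video_size "${W}x${H}" -i "$DISPLAY+$X,$Y" window.mp4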

Read more (2022-08-09 08:00)

Two researcher jobs in soft-dev

The soft-dev research team is growing and there are two open jobs:

Both jobs are looking at the security side of systems in the context of CHERI. Roughly speaking, the CapableVMs position is looking to secure programming language virtual machines, and the Chrompartments position is looking to secure web browsers...

Read more (2022-08-08 08:00)

Another Reorganisation

To every idea there is a season, though some ideas seem to have purpose only under faulty assumptions. In April I decided to rethink how I went about my “informal” writing, which had previously been highly intermittent, rather formal, and interminably long. In When is a Blog a Blog? I renamed my old blog to “essays” and aimed for more frequent, less formal, shorter updates in this blog.

After 4 months it’s time to evaluate how this split is working. I have managed rather more frequent updates, in a less formal style, and they are shorter in length. But – and this is perhaps unsurprising for someone who a school report once pinpointed as prone to “prolixity” – I’d hardly call many of my recent posts short in an absolute sense. Now I’ve met the new boss, I realise that he acts rather similarly to the old boss.

I have thus been forced to acknowledge that it makes little sense to divide my writing into separate “blogs” and “essays”. There is more that unites my writing, for better and worse, than divides it. I have therefore merged all my old “essays” back into my blog. Suddenly my blog archive has grown from 18 entries to 74. At some point I intend creating a separate list of what I consider the posts which are most likely to be of longer-term interest, because there is now a definite divide between more substantial and ephemeral posts...

Read more (2022-08-01 08:00)

July Links

...

Read more (2022-07-31 08:00)

What’s the Most Portable Way to Include Binary Blobs in an Executable?

I recently needed to include an arbitrary blob of data in an executable, in a manner that’s easily ported across platforms. I soon discovered that there are various solutions to including blobs, but finding out what the trade-offs are has been a case of trial and error. In this post I’m going to try and document the portability (or lack thereof…) of the solutions I’ve tried, give a rough idea of performance, and then explain why I’ll probably use a combination of several solutions in the future.

Outlining the problem

Let’s assume that I want to embed an arbitrary null-terminated string into an ELF executable and that I’m OK with that string having the fixed symbol name string_blob. My C program may then look as simple as:

#include <stdio.h>
extern char string_blob[];
int main() {
    printf("%s\n", string_blob);
    return 0;
}
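For instance, one widely used approach (a sketch using GNU binutils flags on x86-64 ELF; whether and where this works is exactly the sort of portability question explored here) is to turn the blob into an object file and rename its auto-generated symbol:

printf 'hello world\0' > blob.bin
objcopy -I binary -O elf64-x86-64 -B i386:x86-64 \
    --redefine-sym _binary_blob_bin_start=string_blob blob.bin blob.o
cc main.c blob.o -o main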

...

Read more (2022-07-25 08:00)

CHERITech22 and PLISS 2022

One of the little pleasures of my job is organising events where people can come together to share knowledge and form new bonds. Unsurprisingly, covid put something of a dampener on in-person events, but I’m happy to announce two upcoming in-person events I’m co-organising:

CHERITech22 is free; PLISS does have a registration fee, though we are able to subsidise some attendees who otherwise might not be able to attend. In both cases you have to fill out a form expressing your interest in attending, and we’ll notify the successful applicants later...

Read more (2022-07-19 08:00)

How I Clean my Glasses

I’ve been wearing glasses since I was 4 years old: ever since, I’ve put them on as soon as I wake up and take them off just before I flop into bed. In all that time, I’ve never broken a pair of glasses, but I still get a new pair every 2 years: sometimes the frame starts to fall apart, but more often the quality of vision decreases. Often I notice increasing problems in night vision, though my last pair degraded in such a way that my eyes kept losing focus even in daylight, which was disconcerting to say the least.

Becoming frustrated, I guessed that the way I clean my glasses might be causing the lenses to degrade quicker than necessary. I used to just breathe on them, and wipe them with whatever came to hand (a shirt or whatever). Then I switched to using a dry microfibre cloth which I would wash whenever it started leaving horrible smear marks on the lens. Then I used small quantities of soap and water with a microfibre cloth. Each of these changes seemed to slightly improve the situation, but I wondered if I could do better. The internet seemed mostly full of people telling me to do what I was already doing, which wasn’t what I was hoping for. I eventually realised that one group of people whose profession is based on lenses might have better advice: and, indeed, photographers have developed a number of techniques for cleaning camera lenses.

For the last 12 months I’ve been using an adapted version of what seems to be the “standard” advice for cleaning camera lenses. My lenses still seem as good as new (I can’t see any abrasion at all, nor any problems at night), which suggests this is a decent approach. Caveat emptor: I have not conducted a scientific test, so some of what comes next might be placebo, or the results of good fortune, or possibly even dangerous to your health. With that in mind, here’s how I now clean my glasses (note: I don’t use affiliate links, so you don’t have to worry that I’m trying to make money off you): ...

Read more (2022-07-13 08:00)

June Links

...

Read more (2022-07-01 08:00)

What Metric to Use When Benchmarking?

What is the right metric to use when measuring a program’s performance? There are various possibilities, from memory usage to branch prediction hit rates, but I’m going to pick on two that I see widely used: CPU instructions executed (what modern CPUs call “instructions retired”) and wall-clock time (i.e. “how much time has elapsed in the real world?”). In this post, I’m going to try and compare both, showing why each has fundamental weaknesses, before explaining why I use each in different circumstances.

Measurement instability

Let’s start by benchmarking a very simple program that runs on every Unix OS I have access to:

awk 'function fib(n) { \
  return n <= 1 ? 1: fib(n-1) + fib(n-2) \
} BEGIN { fib(30) }'
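How might each metric be collected for this program? Roughly like so (a sketch: perf is Linux-specific, and other OSes have their own equivalents):

time awk 'function fib(n) { return n <= 1 ? 1 : fib(n-1) + fib(n-2) } BEGIN { fib(30) }'
perf stat -e instructions:u awk 'function fib(n) { return n <= 1 ? 1 : fib(n-1) + fib(n-2) } BEGIN { fib(30) }'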

...

Read more (2022-06-30 08:00)

Chance, Luck, and Risk

Every so often, I’m asked to talk about my “career”, generally to people younger than I am. I realised early on that the path I’d taken was so full of unexpected turns that it would be ludicrous to talk about it as if it was the result of hard work and deliberate choices. I therefore chose to emphasise how lucky I’d been. Not only did this have the virtue of modesty but it genuinely seemed the most plausible explanation.

However, luck as an explanation didn’t get me very far when I looked at other people. The first hint was after I’d seen several instances where, for a given group of people, of a similar age, background, and talents, some ended up being more successful than others. What surprised me was how often my gut feeling about who would go on to do well, or not, turned out to be correct — far too often for luck, mine or theirs, to be a convincing explanation. The second hint was when I realised from reading history that some people were successful multiple times in their lives: it didn’t seem plausible to me that all of them had done so purely through luck.

As I currently think of things, there are at least three concepts that interact with each other. At the risk of overloading commonly used terms I think of them as: ...

Read more (2022-06-15 08:00)

What Makes a Good Research Proposal?

Research is rarely quick: it takes time, people, and (in most cases) equipment to complete. To convince those in control of the purse strings that we should be given the necessary resources to tackle a research problem, we create a research proposal. This might be: a written document, a presentation, or a series of conversations; presented to academia or industry; and aimed at convincing a manager or an external funding agency (or even oneself!).

Whatever the situation, I believe that good proposals share much in common. Unfortunately, many guides I’ve seen to creating proposals focus, sometimes very cynically, on the quirks or biases, real and imagined, of funders. In contrast, in this post I’m going to try to focus on the ideal core of a proposal, because that has inherent value in its own right, and also because it can be easily adapted to the specifics of a given funder.

By focussing on this core, I’m hoping that I might help you answer two questions: “do I have an idea that’s worth asking for resources to tackle?” and “how do I present that idea in a way that other people will best understand?” To make my life easy, I’m going to frame this discussion in terms of “readers” and “writers”, though I hope the ideas generalise beyond written proposals, and “funders” as a shorthand for those in control of resources...

Read more (2022-06-08 08:00)

May Links

...

Read more (2022-05-31 08:00)

Multiplicity Choices Are Hard to Model and Change

Programming involves continually making choices, whether we realise we are doing so or not. We hope that each choice advances us towards our intended goal of making a program that does what its users will want it to do. However, because the problems we are tackling are nearly always bigger than our ability to fully comprehend them, we often make choices that we later have to correct.

In my experience, the most common type of correction I have to make is where I realise that a piece of information that I’d localised to one component is also required by another component. Depending on the nature of the components involved (functions, classes, etc.), there are various ways of fixing this. Sometimes I might explicitly pass state through all the paths between the two components; sometimes I might place that state somewhere where it’s directly accessible to both components. Either way, the changes involved are invariably tedious, and often make the program’s structure slightly worse, but are rarely intellectually challenging. It can be frustrating to make such changes in a statically typed language, when one can spend an inordinate amount of time with the program not compiling, but when the program does compile again, there tend to be few if any resulting bugs to fix.

There is, though, a type of correction that makes me break out in a cold sweat: when I have made an incorrect choice about the multiplicity of a piece of information. The changes I have to make are typically far less mechanical, giving me many more opportunities to introduce bugs. In this post I’m going to look at this in more depth in the context of programming languages with static type systems...
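
To make the multiplicity problem concrete, here is a minimal Rust sketch (the names are invented for illustration; this is not from the post itself):

// We initially decide that a document has exactly one author and bake that
// multiplicity into the type.
struct Document {
    title: String,
    author: String,
}

fn byline(doc: &Document) -> String {
    format!("{} by {}", doc.title, doc.author)
}

// Later we realise a document can have zero or more authors. Unlike merely
// threading extra state through a program, this correction is not mechanical:
// every use of `author` must be rethought. What should a byline do with no
// authors, or with five?
struct DocumentV2 {
    title: String,
    authors: Vec<String>,
}

fn byline_v2(doc: &DocumentV2) -> String {
    match doc.authors.as_slice() {
        [] => doc.title.clone(),
        [a] => format!("{} by {}", doc.title, a),
        [rest @ .., last] => format!("{} by {} and {}", doc.title, rest.join(", "), last),
    }
}

fn main() {
    let d = DocumentV2 {
        title: "Essay".to_string(),
        authors: vec!["A".to_string(), "B".to_string()],
    };
    assert_eq!(byline_v2(&d), "Essay by A and B");
}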

Read more (2022-05-26 08:00)

Using a “Proper” Camera as a Webcam

It’s 2022, so many of us now have 2 years’ experience of online meetings. My experience is that the better I can see and hear someone in an online call, the more natural the conversation feels. Put another way, I think one does the other person a courtesy by improving the quality of how one appears in a video call.

Unfortunately, most webcams are terrible cameras that take terrible pictures. The market for decent quality webcams seems to be non-existent, with the available choices seeming to sit somewhere between bad and mediocre. Laptop webcams have the additional problem that thin lids mean that the camera is fighting a losing battle against physics that software can only partly overcome.

Fortunately there is a solution: you can use a “proper” camera, of the sort you associate with earnest tourists, as a webcam. In particular, most modern mirrorless cameras from the last few years can do an excellent job if you know what you’re doing...

Read more (2022-05-17 08:00)

Programming Style Influences

If you ever talk to, or read an interview with, a musician, they will inevitably, and quickly, end up talking about their influences. While there is an element of nostalgia to this, perhaps even of acknowledging debts, I see its main purpose as helping spread knowledge about who the most interesting musicians are.

In programming, in contrast, we rarely talk about our influences, other than a frequently expressed allegiance to a single programming language. This seems a shame to me, because it denies newcomers to our field helpful pointers to programmers and systems whose style might be a useful influence.

As a modest attempt to rectify this situation, I’m going to show how one particular system, OpenBSD, has had a big influence on my programming style over time. It’s far from the only system that’s influenced me – not to mention various programmers and communities who’ve also been an influence – but I need something concrete to use as an example...

Read more (2022-05-10 08:00)

snare: a Minimalistic GitHub Webhooks Runner

Starting with extsmail I seem to have developed the accidental hobby of occasionally developing new tools in the Unix tradition. In this post I’m going to introduce snare, a minimalistic GitHub webhooks runner daemon for those cases where something like GitHub Actions doesn’t give you sufficient control. I’ve been using snare in production for over 2 years, for tasks ranging from running CI on specific hardware, to sending out email diffs, to automatically restarting Unix daemons when their configuration changes. It’s a simple, but now stable, tool.

What is a webhooks runner? Well, whenever something happens to a GitHub repository – for example, a new PR is raised, or a commit is pushed to a PR – GitHub can send an HTTP request to a specified URL informing it of what happened. Configuring a webhook for a given GitHub repository is relatively simple: go to that repository, then Settings > Webhooks > Add webhook.

snare is a simple program which listens for GitHub webhook events and runs Unix shell commands based on them. You need a server of your own to run snare on. When you add a GitHub webhook, you then need to specify http://yourmachine.com:port/, a secret (in essence, a shared password between GitHub and snare), and then choose which events you wish GitHub to deliver. For example, the default “Just the push event” works well if you want to send email diffs whenever someone pushes commits to a repository...
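
As an illustration of what the secret is for (this is a sketch of the general GitHub webhook protocol, not snare’s actual code): GitHub sends an X-Hub-Signature-256 header containing a hex-encoded HMAC-SHA256 of the request body, keyed by the shared secret, which the receiver recomputes and compares. Assuming the hmac, sha2, and hex crates:

use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// Returns true if `signature` (the hex digest from the X-Hub-Signature-256
// header, minus its "sha256=" prefix) matches the HMAC of `body` under
// `secret`. A production implementation should use a constant-time
// comparison (e.g. Mac::verify_slice) rather than string equality.
fn verify_webhook(secret: &[u8], body: &[u8], signature: &str) -> bool {
    let mut mac = HmacSha256::new_from_slice(secret).expect("any key length is valid for HMAC");
    mac.update(body);
    hex::encode(mac.finalize().into_bytes()) == signature
}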

Read more (2022-05-04 08:00)

April Links

...

Read more (2022-04-30 08:00)

Where do Research Problems Come From?

“Research” is a much broader term than most of us who consider ourselves to be researchers realise. My occasional interactions with people in very different disciplines (e.g. biology) have tended to involve a series of culture shocks to both parties. Even within computing, there are significant differences between sub-disciplines. These seem to mostly go unnoticed or, at least, uncommented on, which is a pity: at the very least it’s worth knowing that the way we do things isn’t the only possible way. One of the most surprising differences, at least to me, is where people expect to find the problems that they then work on.

What is a research problem?

Before I go further, it’s worth defining what I mean by “research problem”. At the most basic, I mean “the problem I’m trying to solve” or “the thing I’m trying to make better”. Research problems tend to come at multiple levels. For example, at the highest level, I want to make software better, but that’s such a vague aim that it’s difficult to convert into meaningful action. What I have to do is find lower-level, more concrete, problems where I can better define what direction I want to head in and what success might look like. For example, I might say “programs written in programming language X are so slow that programmers twiddle their thumbs waiting for programs to run; if I can make programs in language X run twice as fast as they currently do, programmer productivity will increase”.

I’ve actually done two separate things in that small example. Not only have I defined a problem (programs in programming language X run too slowly) but I’ve also motivated why that problem is worth trying to solve (it’s degrading programmer productivity). The requirement to have such a motivation is a common expectation in the parts of computing I work in, but not everywhere. For example, in some areas of mathematics, it’s considered worthwhile to solve a problem irrespective of any possible real-world use. In A Mathematician’s Apology, G. H. Hardy said: ...

Read more (2022-04-28 08:00)

Practising Programming

When we see a world-class musician flawlessly play challenging music, it can be tempting to imagine that they were always able to do so. A moment’s thought makes it obvious that they must have had to spend huge amounts of time practising basic techniques in order to reach that level. What’s less obvious is that they have to continue spending considerable amounts of time simply to maintain that technique, let alone expand it.

In contrast, in programming, we have only a haphazard notion of how one should go about obtaining sufficient technique to become good enough to write good software; and we have almost no notion of continued practice to maintain or expand that technique. Often overtaken by life events – notably, though not only, the requirement to earn money to support a family – many programmers implicitly finish the bulk of their learning early on, and some stop learning entirely.

As someone of rather limited talent, I have long been aware that if I’m not proactive about learning, I will fall even further behind where I would like to be. While some people enjoy practising programming with small puzzles (the programming equivalent of musical scales perhaps?), I am too grumpy to find them enlightening or enjoyable. Instead, I’ve found that real-world tasks that require me to do something new are where I most improve, and most enjoy improving, my programming skills. Fortunately, and at first accidentally, I’ve found that it’s relatively easy for me to continue practising programming on real-world tasks as part of my day-to-day life...

Read more (2022-04-20 08:00)

Making Rust a Better Fit for CHERI and Other Platforms

Recently, Aria Beingessner published an interesting essay “Rust’s Unsafe Pointer Types Need An Overhaul”. There are several parts to this essay, all of which address issues which cause me pain in Rust, and which I would love to see fixed. I want to focus on one issue: Rust’s usize conflates address, data, and object widths into a single static type. While this conflation is unproblematic on many major platforms, it causes problems on others such as CHERI. In my essay from last year on static integer types, I went into some depth on this, and you might find the motivation therein a useful additional explainer to Aria’s essay.

The basic problem with Rust and CHERI is that capabilities – think “an address alongside additional data” – require additional room over traditional pointers, which contain just an address. Modern CHERI capabilities are typically twice the width of what is often called the “native machine word size”. However, Rust’s integer type hierarchy mandates that usize must be wide enough to allow any pointer type to be cast into it without information loss. That means that the obvious way of making Rust work with CHERI is to double the width of usize so that capabilities can be cast to a usize — but this wastes 50% of the bits when a usize is used to represent sizes or addresses alone.

Aria then makes an intriguing suggestion for how to push Rust in the right direction. The basic idea is to: break the link between pointers and usize so one cannot directly cast between the two; make *mut () the root of the integer/pointer part of the type system (in other words, as well as being a pointer type, *mut () also serves roughly the same purpose as C’s uintptr_t); and make the address portion of a pointer a usize to allow bit-fiddling operations on that portion. On common platforms (e.g. x86_64) this scheme makes everything work much as it does now, but on systems like CHERI, pointers can now be wider than usize...
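
Rust has since gained pointer methods in this spirit (the “strict provenance” APIs: addr, with_addr, and map_addr). As a small sketch (mine, not the essay’s) of the style of code this enables, here the bit-fiddling touches only the address portion of a pointer, so on CHERI any capability metadata would be carried over intact:

fn main() {
    let mut x = 0u64;
    let p: *mut u64 = &mut x;
    // u64's alignment guarantees the bottom bit of the address is zero, so
    // it is free to use as a tag.
    assert_eq!(p.addr() & 1, 0);
    let tagged = p.map_addr(|a| a | 1); // set a tag bit in the address only
    let untagged = tagged.map_addr(|a| a & !1); // clear the tag bit again
    assert_eq!(untagged.addr(), p.addr());
    // The result is still a pointer, not a usize, so it remains valid to use.
    unsafe { *untagged = 42 };
    assert_eq!(x, 42);
}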

Read more (2022-04-13 08:00)

When is a Blog a Blog?

At some point in 2004 I added to my website the sort of thing that most normal people would have called a “blog”. For reasons best forgotten, I called it a “bliki”, before quickly realising that I had no interest in creating anything resembling a wiki. I soon renamed it “technical articles / blog”, an awkward hybrid that reflected my lack of clarity about what I wanted it to be. In late 2012 I gave up and shortened the name to just “blog” — except it never really was a blog, at least not in the sense that other people use that word. Really, what I’ve been doing for the last 18 years is infrequently writing lengthy essays.

“Better late than never” is a phrase that trips off my tongue worryingly easily. In yet another instance, I have finally relabelled my previous “blog” as “essays”. I enjoy writing essays, because they invariably force me to investigate things I’d previously glossed over, and rethink others from scratch. However, that means that essays are hard work — each typically takes about 3 to 5 days of my time. As the years have passed, I’ve found it more difficult both to carve out, and to justify, the time they take — too often, there seems to be something more important to do.

Perhaps the fundamental problem with essays is that, to my mind at least, they come with a contract: they should be on fairly meaty subjects, and have been subject to enough thought that they can stand up to a reasonable level of scrutiny. That makes essays an appropriate medium for getting across mature ideas (or ideas that I think can be matured), but it makes them a poor medium for new, incomplete, or simple ideas...

Read more (2022-04-11 08:00)

Static Integer Types

Over the years I’ve had several conversations with people about static integer types in programming languages. Given the number of subtleties in the design, and pitfalls in the use, of integer types, it’s not surprising that most programmers are unaware of at least some of them. However, what is perhaps more surprising is that most programming language designers and implementers aren’t fully aware of some of the subtleties and pitfalls either. That leads to languages and their implementations baking in unintended, and sometimes unfortunate, design decisions. At the end of conversations on this topic, I’ve sometimes said “this stuff is hard and someone should think more about this and write it up”, but it’s an unsatisfying conclusion. So even though I don’t have all the answers – indeed, I’m fairly sure that I don’t know all the questions – I’m going to jot down my current thinking on this subject.

The basics

In this post I’m considering the design of fixed-width integer types in statically typed languages. By “statically typed” I mean types which are enforced at compile-time, whether they are explicitly spelt out in source code or inferred by the compiler. By “fixed width” I mean integers which fit within a number of bits determined at compile-time. These come in two flavours: those where the number of bits is the same on all platforms (e.g. u32 typically means “an unsigned 32-bit integer”); and those where the number of bits is parameterised by the platform (e.g. Rust’s usize might be 16 bits, 32 bits, 64 bits, or bigger, depending on the underlying platform). I won’t be considering floating point numbers, arbitrarily-sized integers (i.e. those whose storage size varies at run-time depending on the number they are storing), or the like.

An obvious question – and one that I imagine most new programmers have – is why do we have more than one integer type? Why, on a 64-bit machine, do I not always use 64-bit integers? There are various possible explanations, but to me two stand out. First, we often have to interact with things (from external libraries to the hardware we’re running on) that require integers of certain widths. Second, even though our computers often have large quantities of DRAM, memory is not infinite, and even when there’s plenty of free memory, using it wastefully can slow programs down. For example, imagine that I’m writing a system which records students’ marks out of 10: I only need 4 bits to represent all the valid marks, meaning that I need a sixteenth of the memory that a 64-bit integer requires. If my program records lots of students’ marks, then the memory saving between 4 bits and 64 bits will probably have a noticeable effect on the program’s performance, because the caches in modern CPUs are tiny relative to DRAM. For example, my desktop computer has 32GiB DRAM but the L1 cache (split into two equal-sized sub-parts for instructions and data) is a mere 64KiB (about 0.0002% of the size of DRAM): saving memory is an effective way of improving program performance...
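
As a quick back-of-the-envelope sketch of that arithmetic (mine, not the post’s): storing a million marks as u64s, as u8s, and packed two-per-byte as 4-bit nibbles:

fn main() {
    const N: usize = 1_000_000;
    let as_u64 = N * std::mem::size_of::<u64>(); // 8,000,000 bytes
    let as_u8 = N * std::mem::size_of::<u8>();   // 1,000,000 bytes
    let packed = N / 2;                          // 500,000 bytes: two 4-bit marks per byte
    println!("u64: {as_u64}B, u8: {as_u8}B, 4-bit packed: {packed}B");
}

The denser representations stand a much better chance of staying in the CPU’s caches as the program runs.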

Read more (2021-06-22 08:00)

Automatic Video Editing

Amongst the many consequences of COVID-19 has been the suspension of in-person talks: suddenly, people like me have had to think about how to produce prerecorded videos. In this article I’m going to explain what I’ve come to understand about video recording and the “automatic video editing” technique I’ve used for videos such as Virtual Machine Warmup Blows Hot and Cold.

To give you an idea of how unfamiliar I was with video production, 12 months ago I not only had no experience of creating videos but I hadn’t even watched enough videos to form an opinion of what was possible or desirable. However, within a couple of hours of attending my first online conference in June 2020, the challenges of video production had become clear to me. To be blunt, most of the videos I was watching were worse than the equivalent live talk: they were mostly bullet point laden PowerPoint slides with poor quality audio. I’ve long been of the opinion that bullet point presentations are unengaging, as they’re designed for the speaker’s benefit, not the audience’s. However, they become even less engaging when there’s no person to look at and all you can hear is a disembodied, poorly amplified, and fuzzy-sounding mumble. Fortunately for me, I saw a talk by Sam Ainsworth that showed me that a different, rather more engaging, type of talk was possible.

Realising that I’d need to produce videos of my own at some point, I immediately started looking into how one might go about recording videos. Unfortunately, it soon became obvious that common video production tools such as OBS hadn’t been ported to OpenBSD. Undeterred, the following day, I experimented with using FFmpeg to record and mix videos. I’ll go into more detail about FFmpeg later, but at this point it’s enough to know that FFmpeg is a powerful command-line tool that can record and edit videos...

Read more (2021-03-23 08:00)

The Evolution of a Research Paper

As far as I can recall, the first time that I truly tried to read a research paper was when I started my PhD. I don’t remember which paper was the first recipient of my incompetent gaze, but I do remember how completely out of my depth I felt when trying to make sense of it. Over many subsequent papers I tried during that period, I made very little progress in meaningfully digesting any of what I was reading. I quite often found a fog descending on my brain, a curious mix of tiredness and bewilderedness, that left me feeling defeated.

In retrospect, I faced three main issues: I didn’t have a good technique for reading papers; I was trying to read papers that far exceeded my rather narrow technical knowledge; and I wasn’t reliably selecting papers that were well written. To a reasonable extent, sheer repetition tends to fix all of these issues.

However, I eventually realised there was a much deeper issue: I did not understand how papers come into being and certainly had no idea how I should go about writing my own. From my naive perspective, papers seemed to come fully formed as 20 page PDFs, full of dense prose, foreboding formulae, and complex tables. I had little idea how they got to that state and, for whatever reason, nobody I bumped into ever seemed to talk in a meaningful way about the mechanics of writing a paper. Even now, I’m not sure how most people or groups go about writing their papers...

Read more (2021-01-19 08:00)

Automatic Syntax Error Recovery

Programming is the best antidote to arrogance I’ve come across — I make so many errors that I am continually reminded of my own fallibility. Broadly speaking, I think of errors as severe or minor. Severe errors are where I have fundamentally misunderstood something about the system I am creating. Each severe error is a bespoke problem, often requiring custom tooling to understand and fix it. Minor errors, in contrast, are repetitive and quickly fixed. However, they’re also much more numerous than severe errors: even shaving a couple of seconds off the time it takes a programmer to fix a class of minor errors is worthwhile when you consider how often they occur.

The most minor of minor errors, and also I suspect the most frequent, are syntax errors. They occur for three main reasons: mental sloppiness; inaccurate typing; or an incomplete understanding of the language’s syntax. The latter case is generally part of a brief-ish learning phase we go through and I’m not sure what a good solution for it might look like. The former two cases, however, are extremely common. When I’ve made a small typo, what I want is for the parser in my compiler or IDE to pinpoint the location of the syntax error accurately and then recover from it and continue as if I hadn’t made an error at all. Since compilation is often far from instantaneous, and I often make multiple errors (not just syntax errors), good quality syntax error recovery improves my programming efficiency.

Unfortunately, LR parsers – of which I am particularly fond – have a poor reputation for syntax error recovery. I’m going to show in this article that this isn’t inevitable, and that it’s possible to do surprisingly good automatic syntax error recovery for any LR grammar. If you want to know more details, you might be interested in the paper Lukas Diekmann and I recently published called Don’t Panic! Better, Fewer, Syntax Errors for LR Parsers. The paper also has a fairly brief accompanying talk, if you find that sort of thing helpful:

...

Read more (2020-11-17 08:00)

Stick or Twist?

All of us have points in our lives where we have to decide whether we should continue down our current path or change to another. As a researcher, I often face a constrained version of this problem: should I continue on my current research path or change to another? For a long time I wasn’t aware that I was being faced with such decisions; and, when I did become aware, I wasn’t sure how best to make a decision. Over time I’ve realised that a simple “stick or twist?” heuristic mostly works well for me. I don’t claim that anything in this article is novel, nor do I think I’m describing an approach that’s applicable to every situation — but it might provide some useful food for thought.

Let’s start by dismissing a common heuristic: wait and see. When faced with a hard decision, most of us have a tendency to hope that the underlying problem magically disappears: sometimes we consciously delay making a choice, but often we try and pretend the choice doesn’t exist at all. Although it sounds like a cliché, I’m a firm believer that, in general, not making a decision is equivalent to making a decision. If I dither over whether to buy a shop’s last cake or not, someone else will buy it: I’m then, involuntarily, in exactly the same situation as if I’d decided not to buy the cake. If used too often, “wait and see” turns us into corks on the ocean, entirely at the mercy of events. Except for a small number of exceptionally poor decisions, I’ve found that I personally regret decisions I didn’t take more than decisions I did take.

Next, let’s dismiss the two extreme heuristics for making decisions: never change (which is similar, though with a very different motivation, to “wait and see”) or always change. Because we live in a continually changing world, it is inevitable that even a once-optimal path will become suboptimal over time: not changing guarantees irrelevance in the long run. At the other extreme, change for its own sake means that even when we stumble onto an optimal path, we’ll change off it for no good reason...

Read more (2020-10-07 08:00)

Which Parsing Approach?

We all know that parsing is an important part of designing and implementing programming languages, but it’s the equivalent of Brussels sprouts: good for the diet, but a taste that only a select few enjoy. Unfortunately, I’ve come to realise that our general distaste for parsing is problematic. While many of us think that we’ve absorbed the advances of the 1960s into our collective understanding, I fear that we have regressed, and that we are often making inappropriate decisions about parsing. If that sounds accusatory, I don’t mean it to be: I spent over 20 years assuming that parsing is easy and that I didn’t need to understand it properly in order to use it well. Alas, reality has been a cruel teacher, and in this post I want to share some of the lessons I’ve been forced to slowly learn and acknowledge.

Let’s start with the basics. A grammar encodes the syntax rules for a given language. Parsing is the act of taking in an input (e.g. a source file) and determining if, and how, it corresponds to a grammar. At its most basic level, parsing just says “this input does/doesn’t correspond to the grammar”. That’s rarely useful for programming languages, so we normally execute semantic actions while parsing, allowing us to, for example, build a parse tree that represents the input as a tree. If I have a simple calculator grammar and the input 2-3*4 I might get back a tree that looks like the following:
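
The original post draws that tree as a diagram; as a rough Rust sketch (mine), the same structure and its evaluation might look like:

// With conventional precedence, `*` binds tighter than `-`, so 2-3*4 parses
// as Sub(2, Mul(3, 4)) rather than Mul(Sub(2, 3), 4).
enum Expr {
    Int(i64),
    Sub(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Int(n) => *n,
        Expr::Sub(l, r) => eval(l) - eval(r),
        Expr::Mul(l, r) => eval(l) * eval(r),
    }
}

fn main() {
    let tree = Expr::Sub(
        Box::new(Expr::Int(2)),
        Box::new(Expr::Mul(Box::new(Expr::Int(3)), Box::new(Expr::Int(4)))),
    );
    assert_eq!(eval(&tree), -10); // 2 - (3 * 4)
}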

...

Read more (2020-09-15 08:00)

Alternative Sources of Advice

With slowly increasing frequency, I am asked for my advice on life or careers, or something similar. Let us put aside worries about this sign of my increasing decrepitude or the poor taste that some people show in those from whom they seek advice. The brutal truth is that most of the advice that I’ve received as an adult – nearly all of which has been well intentioned – has not been right for me. I therefore assume that any advice I might give others would be equally wrong, and so I try and avoid doing so. In this blog post, I will try to enumerate the problems I have with advice as it is commonly given and look at one way that we can do things differently.

In the context of this article, what I mean by “advice” is the guidance given by one adult (generally older) to others (generally younger). Advice can be solicited or unsolicited and it can come through many mediums such as one-on-one conversations, listening to a talk, or reading a book.

A common way that advice is given is as a series of commands such as “you should do X” or “you should never do X”. Commands can be expressed bluntly (“you must always do X”) or diplomatically (“in my opinion it is always wise to do X”); they can be general (“you should travel to as many places as possible”) or specific (“you should avoid person P”); and so on...

Read more (2020-05-06 08:00)

Minimum Times Tend to Mislead When Benchmarking

Most of us want, or need, to benchmark software at some point, either because we’re proactively trying to avoid performance problems, or because we’re trying to narrow down the cause of a specific performance problem. Benchmarking seems like it should be simple, but there are countless ways to go wrong: indeed, as things stand, none of us really knows how to do it well, at least not in general. In this short post, I want to make a simple argument against using the minimum time of a benchmark as a proxy for that benchmark’s performance. Doubtless I am recasting other people’s arguments, poorly, but I’ve seen a resurgence in people advocating for the use of the minimum time recently, so it seems like a good time to make the opposing case.

The basics

There are all sorts of things that one might wish to measure about a program: how much memory it consumes, the number of files it opens, and so on. By far the most common measure is “wall-clock time” – i.e. how much time elapses in the real world while the program is running – and that’s what I’m going to concentrate on exclusively here.

Once upon a time, measuring wall-clock time was fairly simple. You read the system clock, ran a program, read the system clock again, and reported the delta between the two clock measurements. There were some minor annoyances to do with clocks but measurements tended to be fairly predictable and repeatable. The weasel word in that statement is “fairly”. Noise has always been an issue in benchmarking: except in rare circumstances, there are always external factors (e.g. other processes; even the kernel itself) that can interfere with your benchmark’s execution which, collectively, we tend to refer to as noise...
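
Here is a minimal Rust sketch (mine, not the post’s) of that clock-delta loop; keeping every delta, rather than only the smallest, is what lets you see the noise at all:

use std::time::Instant;

// Run `f` `runs` times, returning the wall-clock time of each run in seconds.
fn bench<F: FnMut()>(mut f: F, runs: usize) -> Vec<f64> {
    (0..runs)
        .map(|_| {
            let before = Instant::now();
            f();
            before.elapsed().as_secs_f64()
        })
        .collect()
}

fn main() {
    let mut v: u64 = 0;
    // A stand-in workload; black_box stops the optimiser deleting it.
    let timings = bench(
        || {
            for i in 0..1_000_000u64 {
                v = v.wrapping_add(std::hint::black_box(i));
            }
        },
        30,
    );
    let min = timings.iter().cloned().fold(f64::INFINITY, f64::min);
    let mean = timings.iter().sum::<f64>() / timings.len() as f64;
    println!("min: {min:.6}s mean: {mean:.6}s ({v})");
}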

Read more (2019-08-15 08:00)

A Quick Look at Trait Objects in Rust

Rust is a genuinely interesting programming language: it has a number of features which are without precedent in mainstream languages, and those features combine in surprising and interesting ways. In many cases, it’s a plausible replacement for C: it leads to fairly fast code; and because it doesn’t have a garbage collector, its memory use is more predictable, which can be useful for some programs. I’ve been doing an increasing amount of programming in Rust for the last 3 or 4 years, the biggest thing being the grmtools set of libraries (a good starting place for those is the quickstart guide). However, there isn’t any doubt in my mind that Rust isn’t the easiest language to learn. That’s partly because it’s a fairly big language: in my opinion, language complexity grows exponentially with language size. But it’s also partly because a lot of Rust is unfamiliar. Indeed, my first couple of years in Rust reminded me of my early fumblings with programming: a lot of activity rewarded with only limited progress. However, it’s been worth it: Rust has proven to be a good match for many of the things I want to do.

One of the things that baffled me for quite a long time is Rust’s “trait objects”: they felt like an odd part of the language and I was never quite sure whether I was using them or not, even when I wanted to be. Since I’ve recently had cause to look into them in more detail, I thought it might be helpful to write a few things down, in case anyone else finds my explanation useful. The first part of this blog post covers the basics and the second part takes a look at the performance implications of the way trait objects are implemented in Rust.

The basics

In general, Rust has a very strong preference for static dispatch of function calls, which is where the function matching a call is determined at compile-time. In other words, when I write a function call f(), the compiler statically works out which function f I’m referring to and makes my function call point directly to that function f. This stands in contrast to dynamic dispatch where the function matching a call is only determined at run-time. Dynamic dispatch is common in languages such as Java. Consider a class C which defines a method m which one or more subclasses override. If we have an object o of type C and a function call o.m(), then we have to wait until run-time to determine whether we should call C’s implementation of m or one of its subclasses. Simplifying slightly, static dispatch leads to faster performance while dynamic dispatch gives one more flexibility when structuring a program. These features can combine in a variety of ways: some languages only offer static dispatch; some only offer dynamic dispatch; and some require users to opt-in to dynamic dispatch...
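
As a minimal Rust sketch of the two styles (mine; the post itself goes into much more detail):

trait Greet {
    fn greet(&self) -> String;
}

struct English;
struct French;

impl Greet for English {
    fn greet(&self) -> String { "hello".to_string() }
}

impl Greet for French {
    fn greet(&self) -> String { "bonjour".to_string() }
}

// Static dispatch: monomorphised per concrete type; each call is direct.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Dynamic dispatch: `&dyn Greet` is a trait object; the method to call is
// found at run-time through a vtable.
fn greet_dyn(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    assert_eq!(greet_static(&English), "hello");
    assert_eq!(greet_dyn(&French), "bonjour");
    // Only trait objects let one collection mix concrete types.
    let gs: Vec<Box<dyn Greet>> = vec![Box::new(English), Box::new(French)];
    for g in &gs {
        println!("{}", g.greet());
    }
}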

Read more (2019-02-12 08:00)

Why Aren’t More Users More Happy With Our VMs? Part 2

In the first part of this two-part blog, I showed results from an experiment on VM performance that I’d been part of. Although it wasn’t our original intention, we eventually ended up trying to find out how often mainstream programming language VMs warm up as per traditional expectations. If you consider only individual process executions, then about 70% warmup; if you consider (VM, benchmark) pairs, then about 40% warmup consistently; and if you consider (VM, benchmark, machine) triples, then about 12% warmup consistently.

However you choose to look at it, these numbers aren’t pretty. In trying to make sense of them, I’m going to try and provide some answers to the following three questions. First, are there simple explanations for the performance problems we encountered? Second, how did we get into this situation? Third, how can we do better in the future? To answer the first question, I’m going to draw on our paper. In providing suggested answers to the second and third questions, I’m going to make extensive use of my personal opinions and experiences, so please don’t blame anyone else for them!

Are there simple explanations for poor performance?

There are all sorts of factors that might be responsible for odd performance, but, beyond those covered in the first part of this blog post, there are two that seem both particularly likely and relatively easy to test for: garbage collection and JIT compilation. For example, garbage collection is, to state the obvious, memory intensive: it could conceivably have all sorts of odd effects on processor caches and thus performance. Similarly, JIT compilation can happen at any point, and there’s no guarantee that it makes things faster...

Read more (2018-09-19 08:00)

Why Aren’t More Users More Happy With Our VMs? Part 1

Programming language performance is something that nearly everyone cares about at some point, whether they realise it or not. If you’re a programmer, you sometimes have to make your programs run fast; if you’re a user, you might be frustrated at a program that runs slowly, without knowing that it’s caused by a poor programming language implementation. Those of us who work on, or around, programming language Virtual Machines (VMs) tell a good story about performance, but a surprising number of users seem unhappy with the performance of their programs. Sometimes, yes, they’re unrealistic, but are they always so? In this first blog post (based on this paper) of two, I’m going to show that programs running on VMs often don’t follow the simple performance patterns that nearly all of us expected. Perhaps that’s why users aren’t as happy as VM developers think they should be?

A typical claim

Every so often there is a spate of blog posts along the lines of “Language XYZ is faster than C”. Language XYZ varies, of course, whilst C is always… C. Love it or hate it, C is, overall, probably the fastest language there is. At the very least, it’s the bar by which all other languages measure themselves.

For example, a claim made about HotSpot (the standard Java VM, often called just “the JVM”, though there are other Java VMs) is that it makes Java programs as fast as C programs compiled with gcc -O2. At first glance this statement seems reasonable enough: gcc -O2 produces fast code, as, most people agree, does HotSpot. But what’s actually being compared here? A static (or “Ahead of Time”) compiler such as GCC compiles a program once; the resulting executable/binary can be run as many times as you want, even on another machine without GCC. In contrast, VMs such as HotSpot observe a program running and dynamically compile it into machine code. Dynamic compilers are thus subject to a warmup cost: it takes programs running on them a while to reach a steady state of peak performance. VM developers are very keen on the steady state of peak performance and would rather forget that warmup exists: VM performance is universally reported relative to the steady state of peak performance. Thus the claim at the beginning of this paragraph entirely ignores warmup costs...

Read more (2018-09-05 08:00)

What I’ve Learnt So Far About Writing Research Papers

I long flattered myself that I’m only good at one thing: programming. Experience has taught me that I’m not as good at it as I thought – indeed, that there are people much better at it than I – but it’s fair to say that I’m not terrible at it. The problem is that, as a researcher, programming can only ever be part of my job: it’s not enough just to build something interesting; I need to explain it to people too. And explaining it to those further away than my voice can carry requires writing a paper.

There are many people who dismiss paper writing as a distraction from building real systems. I sometimes wondered if it was worthwhile myself but, when you’ve been knocking around as long as I have, the importance of writing things down becomes clear. I’ve lost track of how many conversations I’ve had along these lines: “We saw that problem in System B, and we had a really interesting solution.” “Where can I read about it?” “We never wrote it up.” “Where can I download it then?” “We never released the source code, the binaries are lost, and it runs on hardware that no longer exists.” I encountered this a lot when I was investigating existing language composition systems, with tantalising references ranging from Lisp systems with different macro expansion semantics to novel text editors. Even when the person I was talking to was passionate and knowledgeable about the system in question, I was rarely able to gain the insights I suspect I could have if I’d had a paper to pore over. Doubtless, many of these systems weren’t good enough to worry about them slipping from our memory; but the loss of the best, which may have involved tens or even hundreds of skilled person years of effort, is a tragedy. Writing things up does not guarantee that people in the future will be able to understand everything about the work, but at least they stand a sporting chance. Besides, even if people of the future can’t understand things, writing things down increases the number of people who can understand the work today.

How should one go about writing a paper? When I started my PhD, I had no idea of how one might go about this task, and it took me many years to find an approach that worked for me. In the rest of this blog post, I’m going to try and drill down into some of the things I’ve learnt about writing papers, and I’m going to give some examples of where I’ve learnt from my mistakes. However, I’m deliberately not phrasing anything herein as “advice” — indeed, several parts of my approach seem to run contrary to most of the advice I’ve seen. I don’t see this as a problem: people work in different ways, and if you’re unlucky enough to have a mind similar to mine, you might find something useful in here...

Read more (2018-01-10 08:00)

What Challenges and Trade-Offs do Optimising Compilers Face?

After immersing oneself in a research area for several years, it’s easy to feel like the heir to a way of looking at the world that is criminally underrated: why don’t more people want what I want? This can be a useful motivation, but it can also lead to ignoring the changing context in which most research sits. So, from time to time, I like to ask myself the question: “what if everything I believe about this area is wrong?” It’s a simple question, but hard to answer honestly. Mostly, of course, the only answer my ego will allow me is “I can’t see anything I could be wrong about”. Sometimes I spot a small problem that can be easily fixed. Much more rarely, I’ve been forced to acknowledge that there’s a deep flaw in my thinking, and that the direction I’m heading in is a dead-end. I’ve twice had to face this realisation, each time after a gradual accumulation of unpleasant evidence that I’d tried to ignore. While abandoning several years’ work wasn’t easy, it felt a lot easier than trying to flog a horse I now knew to be dead.

All of this means that I’m probably a little better disposed to contrarian thinking than I was when I started as a researcher. And so it is that I’m going to write about some thoughts that were initially inspired by a (somewhat controversial) talk by Daniel Bernstein. The talk itself, entitled “Death of Optimizing Compilers” (slides and audio), was presented at ETAPS 2015 in London. I happened to be present for the talk and have engaged in mild advocacy both immediately afterwards and in the two years since. The problem is that the case against is quite easy to state but the case for rather hard — or, at least, I haven’t managed to frame an argument good enough to convince anyone. But I hope I might now have sharpened my argument enough to make another attempt worthwhile.

Suitably condensed, the talk’s thesis is fairly simple. As software systems become bigger and bigger, an ever smaller percentage of code dominates execution time, especially as computers are asked to crunch ever bigger chunks of data. However, optimising compilers (e.g. gcc -O2 or even a VM like HotSpot) are outclassed by humans on the most performance critical code. Because of that, optimising compilers are operating in a region between a problem that’s not worth solving (most parts of most programs) and a problem they can’t solve (the really performance critical parts). It doesn’t take much thought to realise that this last point has been the source of controversy: how can someone belittle the huge amounts of effort put into optimising compilers? ...

Read more (2017-06-22 08:00)