General Purpose Programming Languages' Speed of Light
April 9 2013
Having recently returned from a week of talks on programming languages, I
found myself wondering where general purpose programming languages might go
in the future; soon I wondered where they could go. The plain truth
about programming languages is that while there have been many small gains in
the last 20 years, there have been no major advances. There have been no new
paradigms, though some previously obscure paradigms have grown in popularity;
I'm not even aware of major new language features (beyond some aspects of
static type systems), though different languages combine features in slightly
different ways.
The lack of major advances in a given time period may not seem surprising.
After all, major advances are like earthquakes: they occur at unpredictable
intervals, with little prior warning before their emergence. Perhaps we are
on the cusp of a new discovery and none of us – except, possibly, its
authors – are aware of that. That is possible, but I am no longer sure
that it's likely. If it doesn't happen, then it seems clear to me that we are
getting ever closer to reaching general purpose programming languages' speed of
light—the point beyond which they cannot go.
Programming languages cannot grow in size forever
The basis of my argument is simple:
- Every concept in a programming language imposes a certain cognitive cost.
- Our brain's ability to comprehend new concepts in a language is inversely
proportional to the number of concepts already present.
- At a certain number of concepts, we simply can't understand new ones.
Note that I deliberately phrased the first point to reflect the fact
that some concepts are more complex to learn than others; but that, at a
certain point, the volume of concepts becomes at least as important as the
cumulative complexity because of the inevitable interactions between
features. In other words, if language L1 has 20 features, of
which 6 are complex and 14 simple, the cognitive cost is similar to
L2 which has 3 complex and 17 simple features.
This doesn't sound too bad, until one considers the following:
- The core of virtually every extant programming language is largely the same.
- That core contains many concepts.
- Many of those concepts have a high cognitive cost.
In other words, most programming languages are surprisingly similar at their
core. Before we can differentiate two languages, we
must already have understood features ranging from variables to recursion,
from built-in datatypes to evaluation order.
Programming languages' core
The overly earnest teenage boy that lives within all of us wants to believe that
programming languages are massively different. At any given moment, someone
with a nihilistic bent and an internet connection can find fans of different
programming languages fighting pitched battles where language L1
is argued to be always better than L2. The dreary irony,
of course, is that what L1 and
L2 share in common
is nearly always greater than that which differentiates them. How many people
use programming languages without function calls? without recursion? without
variables? without ...? Nobody does, really. Those languages that are truly unique,
such as stack-based languages, remain locked in obscurity.
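To see just how alien a genuinely different paradigm can feel, consider a toy evaluator for postfix expressions — a minimal Python sketch of the execution model underlying stack-based languages such as Forth or PostScript (the function name and operator set are mine, chosen for illustration):

```python
# A minimal evaluator for a Forth/PostScript-style postfix expression.
# The program is a sequence of tokens: numbers are pushed onto a stack,
# and each operator pops its arguments from that stack.
def eval_rpn(program):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for token in program.split():
        if token in ops:
            b = stack.pop()  # pushed last, so popped first
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(int(token))
    return stack.pop()

# The infix expression (3 + 4) * 5 is written "3 4 + 5 *" in postfix form.
print(eval_rpn("3 4 + 5 *"))  # prints 35
```

There are no variables, no nested expressions, and no operator precedence in such a program: the familiar core is simply absent, which goes some way to explaining the obscurity.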
The reality is that, since the late 1950s, programming languages have
steadily been converging towards a stable core of features. That core has proven itself
sufficiently good to allow us to develop huge pieces of software. However,
it leaves our brain surprisingly little capacity to understand new
features. Try and do too much in one language, and problems accrue. We long
ago developed the technical ability to create programming languages too
complicated for any single person to fully understand. C++ is the standard
(though not the only) example of this, and far be it from me not to pick on an
easy target. As far as I can tell, not a single person on the planet
understands every aspect of C++, not even Stroustrup. This is problematic
because one always lives in fear of looking at a C++ program and finding uses
of the language one doesn't understand. At best, individuals have to
continually look up information to confirm what's going on, which limits
productivity. At worst, people misinterpret programs due to gaps in their
understanding; this introduces new bugs and allows old bugs to hide in plain
sight.
In an ideal world, we would be able to understand programming languages in
a modular fashion. In other words, if I don't use a feature X, I
don't need to know anything about X. Sometimes this works, and
organisational coding standards are one way to try and take advantage of
this. If this modularity property held fully, it would allow us to add as
many modularised features to programming languages as we wanted. However,
programming rarely involves creating a program in isolation from the rest of
the world, particularly in the modern era. Instead, we pull in libraries from
multiple external sources. Even if I don't want to use feature
X, a library I use might; and, if it does, it can easily do so in
a way which forces me to know about
X. Take C++ again: I might
not want to use templates in my C++ programs but most modern C++ libraries do. Even
if a programming language is designed so that its features are modular, the
social use of such a language tends to destroy its modularity. This explains
why it's virtually impossible to be a competent programmer without a
reasonable understanding of nearly every feature in a language.
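The same dynamic is easy to reproduce in any language with optional advanced features. In the Python sketch below (the library and its function are hypothetical, invented for illustration), a user who has never written a generator is nevertheless forced to understand generator semantics, simply because a library hands one back:

```python
# Hypothetical library code: its author finds generators idiomatic.
def read_records(lines):
    """Lazily yield cleaned-up records, one per input line."""
    for line in lines:
        yield line.strip()

# User code: this author may never *write* a generator, but must still
# know that the returned object is lazily evaluated and single-use.
records = read_records(["a\n", "b\n"])
first_pass = list(records)    # consumes the generator...
second_pass = list(records)   # ...so this is unexpectedly empty!
print(first_pass, second_pass)  # prints ['a', 'b'] []
```

The feature has leaked across the library boundary: avoiding it in one's own code is no protection against having to understand it.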
A reasonable question to ask is whether this ability to exceed the human
brain's capacity is a temporary one, due to poor education. As time goes on,
we tend to become better at condensing and passing on knowledge. I have heard
it said, for example, that what might have baffled an early 20th century
physics PhD student is now considered a basic part of an undergraduate
education (since I struggled with physics as soon as it became
semi-mathematical, I am not in a position to judge). Perhaps, given time, we
can teach people the same number of concepts that they know now, but at a
lower cognitive cost.
I do expect some movement along these lines, but I'm not convinced it will
make a fundamental difference. It seems unlikely that programming students of
the future will find it substantially easier to learn recursion or iteration,
to pick two semi-random examples. Every language needs to choose at least one
of these concepts. Of the two, most people find recursion harder to grasp,
but nested iteration doesn't come naturally to many people either. Those of
us who've been programming for many years consistently underestimate how
difficult newcomers find it. Starting programming education from a
young age might change this dynamic, but it's difficult to imagine that
sufficient quantities of competent programmers will move to become teachers,
at least in the short term.
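To make the comparison concrete, here is the same task — flattening arbitrarily nested lists — written both ways in Python (a sketch of my own, not drawn from any particular curriculum). Neither version is conceptually free, which is part of why newcomers struggle with both:

```python
# Recursive version: the nesting of the data maps onto the call stack.
def flatten_rec(x):
    if isinstance(x, list):
        result = []
        for item in x:
            result.extend(flatten_rec(item))
        return result
    return [x]

# Iterative version: the implicit call stack becomes an explicit one,
# trading recursion for careful bookkeeping.
def flatten_iter(x):
    stack, result = [x], []
    while stack:
        item = stack.pop()
        if isinstance(item, list):
            stack.extend(reversed(item))  # preserve left-to-right order
        else:
            result.append(item)
    return result

nested = [1, [2, [3, 4]], 5]
print(flatten_rec(nested))   # prints [1, 2, 3, 4, 5]
print(flatten_iter(nested))  # prints [1, 2, 3, 4, 5]
```

The recursive version demands comfort with self-reference; the iterative one demands manually managing a stack and its ordering. A learner must internalise one mental model or the other, and no amount of better teaching materials makes that cost vanish.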
Better language design
Let's assume, briefly, that you agree that C++ is too complex but are
wondering whether its difficulties are due to bad design rather than size.
Perhaps all I've really done so far is point out that badly designed languages
overload our brains sooner than well designed ones? There is clearly some
truth in this but, again, I'm not sure that it fundamentally changes the
overall picture.
Let's take two statically typed functional languages as an example of more
considered language design: Scala and Haskell. Both languages aim to stop
users shooting themselves in the foot through the use of expressive type
systems. Maybe by constraining users, we can help our brains absorb more
general language concepts? Alas, I am not convinced. Some of the things
Haskell and Scala's type systems can express are astonishing; but I have also
seen each type system baffle world-renowned experts. Type systems, like any
other language feature, do not come for free. Even if we could design a
perfect programming language, I doubt it could be much larger than
such languages are today.
What the above means is that the space for a major advance in general
purpose programming languages is limited. The analogy with the speed of light
works well here too. As an object's speed approaches the speed of light, the
energy required to accelerate it reaches infinity, which is why a normal
object can't ever travel at the speed of light. Worse, the energy
required is non-linear, so the closer you get to the speed of light, constant
increases in energy lead to ever-slower acceleration. Language design is
rather like that. Beyond a certain point, every feature – no matter how
well designed – has a disproportionate cost. It takes longer to
understand, longer to learn how to use idiomatically, and longer to
understand its interactions with other features.
The tooling that surrounds programming languages has changed significantly in
the last two decades. Most developers now use huge IDEs , which offer a variety of ways to understand and
manipulate programs. Verification techniques and tools have made major
advances . And whole new styles of tools now exist. To
take two examples: Valgrind allows one to analyse
dangerous run-time behaviour in a way that can significantly tame the danger
of even unsafe languages such as C; Moose
allows one to gain a reasonable high-level understanding of large programs in
a fraction of the time it takes to wade through source code by hand.
Clearly, such tooling will only increase in quantity and quality as we
move forwards. But will it help us understand programming languages more
effectively? Again, I expect it to help a bit, but not to make a fundamental
difference. Perhaps future editors will have the ability to simplify code
until the user chooses to zoom in (rather like an extreme version of folding).
Ultimately, however, I don't know how we can avoid understanding the detail
of programming languages at some point in the development process.
What are we to do? Give up on general purpose programming language design?
No—for two reasons. First, because a degree of group-think in language
design means that we haven't explored the language design space as well as we
should have yet. For all we know, there could be useful discoveries to be
made in unexplored areas. Second, because the
"general purpose" part of the name is subtly misleading. Traditionally,
every programming language has been expected to be applicable to every
problem. This then leads to the "my language is better than yours"
notion, which is clearly nonsense. Anyone who tells you their
favoured programming language – or even their favoured paradigm –
is always the best is delusional. There is no such thing, and my experience
is that programming language designers are almost universally modest and
realistic about the areas to which their language is best suited.
In a single day, I can do Unix daemon programming in C, virtual
machine development in RPython, system administration in the Unix shell, data
manipulation in Python, and websites in a hodgepodge of languages. Each has
strengths and weaknesses which make it better suited to some situations than
others. People who need certain styles of concurrent programming might favour
Erlang for that, but more conventional languages for sequential processing.
Many use cases are currently uncatered for: languages which target power
efficient execution may become of greater interest in the future. All in all,
I suspect we will see a greater level of customisation of nominally general
purpose languages in the future.
I also suspect we will see greater use of more obviously restricted
languages, which are often called domain specific languages. Rather than try and make features that work well for
all users – and that must still be understood by those for
whom they don't work well – we are likely to have better luck by
focussing on specific groups and making their life easier. For example, I do
not wish to have my programming language cluttered with syntax for
traditional mathematics (beyond the bare essentials of addition and the
like), because I don't use it often enough. A programmer crunching data to
produce statistics, on the other hand, would be much more productive with
such a syntax. My current guess is that we will build such languages by
composing smaller ones, but there are other possible routes.
In many ways, neither of these outcomes is quite as exciting as the notion
of a perfect language. The contemporary emergence of a wide class of
general purpose languages is an acknowledgement that we
can't squeeze every feature we might like into a single language—our
brains simply can't handle the result. Rather than try and move general
purpose languages beyond the speed of light, we'll probably end up with many
different points near it. At the same time, the possibility of restricted
domain specific languages may enhance our productivity in narrower domains.
This is, perhaps, not as immediately exciting a prospect as targeting all
users, but it is its relative modesty that makes it more likely to pay off.
Furthermore, unlike general purpose languages, that journey is nearer its
beginning than its end. We might be reaching the speed of light for general
purpose programming languages, but we've barely started with domain specific
languages.
And, of course, there's always the remote possibility of an unforeseeable
earthquake hitting programming languages. Unlike in the real world, we are
not callous or inhuman for hoping that such an earthquake will hit us, even
though many of us are not sure it will ever come.
Acknowledgements: My thanks to Martin Berger, Carl Friedrich Bolz,
Lukas Diekmann, and Naveneetha Vasudevan for insightful comments on an early
draft of this article. All opinions, errors, and infelicities are my own.
 I don't, because I'm a Unix Luddite at heart, but you
may well think that's my loss.
 Even simple static analysers catch a host of nasty bugs that previously escaped the eyes of even the
best developers. More advanced tools can do even better, though one should
not overstate the power of such tools.
 As an example of the fun to be had exploring unusual language design, I immodestly offer my experiences on integrating an Icon-like expression system into a Python-like
language. If nothing else, it taught me that it is possible to completely
subvert expectations of how standard parts of a programming language
work. The HOPL
series of workshops, and the generally wonderful papers therein, are another
excellent source of programming language design ideas.
 Note that I'm talking about languages with distinct
syntax and semantics; the term is also currently used to describe particular
idioms of library design and use in conventional languages, though I suspect
that will fade in the future.