The LLM-for-software Yo-yo

In late 2022 and early 2023 I published a pair of articles: one outlining my best guess of how LLMs might change programming; and a second adding some needed subtlety about how different kinds of programmers might be affected in different ways. I did my best to extrapolate based on the position I observed us to be in, and what one might reasonably predict could happen.

Since then, the technology has moved forward at pace, the terminology has solidified (I can now safely say “LLM” without further explanation), and the debate has swung wildly back and forth.

When I wrote those posts, nearly every programmer I spoke to thought there was absolutely no chance of an LLM replacing them. Most viewed LLMs as an amusing toy; a minority viewed them as a possible threat to their jobs.

By early 2025 I was almost overwhelmed by public figures stating that programming was about to be fully automated. Several were confident enough to put dates on their predictions. A small number of sober, thoughtful programmers have shown how they've used LLMs to write chunks of software more quickly than they could have managed alone.

Now we have a study showing the opposite: that using LLMs in software development often slows developers down. You might expect me, as I have seen many others do, to hold this up as proof that the human craft of programming has been vindicated. I think that would be a mistake.

Most obviously, the study, even if it is perfect, is just one study: it is rarely wise to draw conclusions before experiments have been repeated many times. Minor changes in the experiment, or its context, might change the results, or our perception of them, in either direction.

Less obviously, surely only the most complacent programmers have not had their use of computers affected by LLMs in some way. For me, LLMs are the search engine I have always wanted. I remember when Google turned web search from a needle-in-a-haystack exercise into something close to pinpoint accuracy. For years, I used search engines far more often than anyone else I knew, and I derived huge benefits from doing so: I could reliably find information that other people claimed didn't exist.

Now I use LLMs to condense down huge tracts of human knowledge, giving me an entry point into areas I am entirely ignorant of. For example, in two quiet mornings, I was able to use an LLM to learn a new-to-me instruction set architecture well enough to write a register allocator. That would have taken me many days in the past. But I still wrote all of the code myself: I hadn't replaced a tool, I had gained a new one. Even if LLMs do not meaningfully progress further, they have usefully, if slightly indirectly, changed how I approach programming.
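The register allocator itself is beside the point here, but for readers unfamiliar with the idea, the core of a simple linear-scan allocator (in the style of Poletto and Sarkar) can be sketched in a few lines. The function name, the interval format, and the spill heuristic below are illustrative assumptions for this sketch, not the code described above:

```python
# Sketch of a linear-scan register allocator: given live intervals for
# virtual registers, assign each to one of a fixed set of physical
# registers, spilling to memory when none are free. The names and the
# (start, end) interval format are illustrative, not from any real ISA.

def linear_scan(intervals, num_regs):
    """intervals: dict mapping vreg name -> (start, end) live range.
    Returns (assignment, spilled): a vreg -> physical register index
    mapping, and the set of vregs spilled to memory."""
    free = list(range(num_regs))
    active = []  # (end, vreg) pairs, kept sorted by interval end
    assignment, spilled = {}, set()
    for vreg, (start, end) in sorted(intervals.items(),
                                     key=lambda kv: kv[1][0]):
        # Expire intervals that ended before this one starts,
        # returning their registers to the free pool.
        while active and active[0][0] < start:
            _, old = active.pop(0)
            free.append(assignment[old])
        if free:
            assignment[vreg] = free.pop()
            active.append((end, vreg))
            active.sort()
        else:
            # No register free: spill whichever interval lives longest,
            # either the new one or the longest-lived active one.
            longest_end, longest = active[-1]
            if longest_end > end:
                assignment[vreg] = assignment.pop(longest)
                spilled.add(longest)
                active[-1] = (end, vreg)
                active.sort()
            else:
                spilled.add(vreg)
    return assignment, spilled
```

With two physical registers and three overlapping live ranges, one vreg ends up spilled; the heuristic of spilling the longest-lived interval is the classic choice, though real allocators weigh spill costs more carefully.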

The current debate thus seems to me akin to discussing a yo-yo while considering it to be in only two states: at the far left or far right of its swing. When we discuss only the extremes, we miss all the action in the middle, and we tend to forget that, unless an external force intercedes, a yo-yo's resting place is in the middle.

2025-07-14 10:30