Home > Blog   e-mail: laurie@tratt.net   twitter: laurencetratt

Multiplicity Choices Are Hard to Model and Change

May 26 2022

Programming involves continually making choices, whether we realise we are doing so or not. We hope that each choice advances us towards our intended goal of making a program that does what its users will want it to do. However, because the problems we are tackling are nearly always bigger than our ability to fully comprehend them, we often make choices that we later have to correct.

In my experience, the most common type of correction I have to make is where I realise that a piece of information that I'd localised to one component is also required by another component. Depending on the nature of the components involved (functions, classes, etc.), there are various ways of fixing this. Sometimes I might explicitly pass state through all the paths between the two components; sometimes I might place that state somewhere where it's directly accessible to both components. Either way, the changes involved are invariably tedious, and often make the program's structure slightly worse, but are rarely intellectually challenging. It can be frustrating to make such changes in a statically typed language [1], when one can spend an inordinate amount of time with the program not compiling, but when the program does compile again, there tend to be few if any resulting bugs to fix.
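As a minimal Rust sketch of the two options, using a hypothetical Config value that two components both need (the names here are invented for illustration):

```rust
use std::cell::RefCell;

struct Config {
    colour: String,
}

// Option 1: thread the shared state explicitly through every call path
// between the two components, however tedious that is.
fn render(config: &Config) -> String {
    format!("colour={}", config.colour)
}
fn page(config: &Config) -> String {
    render(config)
}

// Option 2: put the state somewhere both components can reach directly
// (a thread-local here; a global, or a field on a shared object, would
// serve the same role).
thread_local! {
    static CONFIG: RefCell<String> = RefCell::new("blue".to_string());
}
fn render_global() -> String {
    CONFIG.with(|c| format!("colour={}", c.borrow()))
}

fn main() {
    let cfg = Config { colour: "blue".to_string() };
    println!("{}", page(&cfg));
    println!("{}", render_global());
}
```

Option 1 makes every intermediate function's signature grow; option 2 avoids that at the cost of hidden coupling.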

There is, though, a type of correction that makes me break out in a cold sweat: when I have made an incorrect choice about the multiplicity of a piece of information. The changes I have to make are typically far less mechanical, giving me many more opportunities to introduce bugs. In this post I'm going to look at this in more depth in the context of programming languages with static type systems.

Outlining the problem

Let's start with a simple example. I might have written a system that assumes every customer only has 1 address to ship to, only to later find out that some customers have more than 1 address. At first glance this seems simply to require a mechanical change, for example, changing code that references Address to Vec<Address> or equivalent. However, there are likely to be knock-on effects that require non-mechanical changes. For example, the GUI that displays address(es) will need rethinking; I will probably have to adapt the database schema; and there might be various dynamically typed things (such as SQL statements) that are not easily found. A simple change in requirements can mean that I end up changing huge chunks of the codebase.
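As a sketch of the semi-mechanical part of such a change, with a hypothetical Customer type (all the names are invented for illustration):

```rust
#[derive(Debug, PartialEq)]
struct Address {
    city: String,
}

// Before the requirements change this might have been:
//   struct Customer { name: String, address: Address }
// Afterwards, every use of `customer.address` must decide which of the
// addresses (or all of them!) it actually meant.
struct Customer {
    name: String,
    addresses: Vec<Address>,
}

// One of the non-mechanical knock-on decisions: which address is "the"
// shipping address now? Here we arbitrarily pick the first, and must
// also cope with there being none at all.
fn shipping_city(c: &Customer) -> Option<&str> {
    c.addresses.first().map(|a| a.city.as_str())
}

fn main() {
    let c = Customer {
        name: "Alice".to_string(),
        addresses: vec![Address { city: "London".to_string() }],
    };
    println!("{:?}", shipping_city(&c));
}
```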

My life, though, could soon get worse: I might later find out that some people don't have, or at least we don't know of, any address at all [2]. Now all my code that assumed "1 user == 1 or more addresses" is wrong. Fixing this is much harder than moving from "1 address" to "1 or more addresses", because my language's static type system probably won't give me any leads on what needs fixing. I have moved from a semi-mechanical change to an entirely non-mechanical change, and this is where I start to worry.

How can one get into such a situation? Well, to my mind, there are 4 main kinds of multiplicity [3], which typically have the following static types:

Multiplicity    Typical static type
Exactly 1 T     T
0 or 1 Ts       T or Option<T>
0 or more Ts    [T]
1 or more Ts    [T]

This simple table immediately highlights two problems when making a choice of multiplicities. First, we have to correctly model the problem domain — how many T's could there be? Second, we have to decide how to model that multiplicity in a programming language that might not have an obvious way of expressing it.
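As a sketch of how those typical mappings might look in Rust, with a hypothetical Customer type (the field names are invented for illustration):

```rust
struct Address {
    city: String,
}

struct Customer {
    home: Address,          // exactly 1 T
    work: Option<Address>,  // 0 or 1 Ts
    previous: Vec<Address>, // 0 or more Ts
    shipping: Vec<Address>, // 1 or more Ts: note that the type alone
                            // cannot express the "at least 1" part
}

fn main() {
    let c = Customer {
        home: Address { city: "London".to_string() },
        work: None,
        previous: vec![],
        shipping: vec![Address { city: "York".to_string() }],
    };
    println!("{} shipping address(es)", c.shipping.len());
}
```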

The problem domain

We almost never fully understand what we want a program to do in advance. In particular, there is almost always a gap between domain experts [4] (the people who have a need which can be met by creating a new, or adapting an existing, program) and programmers (the people who write programs). It's easy for the two sides to miscommunicate, particularly over small details such as multiplicities. Approaches like UML were partly designed to help the two sides communicate such matters, but miscommunication is still inevitable.

However, there are other issues that are easily overlooked. If there is more than 1 domain expert, it's almost certain that none of them has complete knowledge of the domain, and indeed there will often be gaps in the domain experts' collective knowledge. More often than is typically acknowledged, domain experts' knowledge is also inaccurate: edge cases are underappreciated, forgotten, or happen so rarely that no current domain expert has seen them occur.

All this means that even the most talented programmers find themselves making inappropriate multiplicity decisions, either because they've misunderstood the domain experts, or the domain experts are wrong. In either case, we then find ourselves in a situation where our program will disappoint unless we correct it to do the right thing.

Modelling multiplicities and ergonomics

Let's assume that we've accurately worked out what the correct multiplicity for a given T is: we then have to work out how to model it in the programming language of our choice. Since most programming languages can't express all reasonable multiplicities, we then have to decide what balance of accuracy and ergonomics we want.

Let's start with the obvious. The "exactly 1 T" multiplicity is easy to model and has good ergonomics.

As soon as we move to "0 or 1 Ts" we start to encounter messiness. In languages with a direct concept of null (e.g. Java), a type T covers both "exactly 1 T" and "0 or 1 Ts": we can't rely on our static types telling us which of these we intended to model, hence the frequency of null pointer exceptions. Languages with option types (e.g. Rust) do allow us to accurately distinguish "exactly 1 T" from "0 or 1 Ts". Ergonomically, option types are inherently messier to use than null, but I tend to feel that it's worth the pain.

However, ergonomics are not a one-off up-front cost: changing T to Option<T> can be much more challenging than first expected. Code paths that expected a T, and thus always succeeded, now have to consider the possibility of there not being a T at all, which means the code path may now fail. The static type system is, in essence, forcing me to manually deal with every place that would, in a non-option-type language, be a null pointer exception: do I throw an error? provide a default? or ... ? In some cases, only a small number of code paths need updating, but sometimes the changes keep rippling outwards.
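A minimal sketch of that ripple, with a hypothetical shipping_label function: before the change it took a plain value and could not fail; afterwards it must decide what a missing value means:

```rust
// Before the multiplicity change this might have been:
//   fn shipping_label(addr: &str) -> String { format!("Ship to: {}", addr) }
// After T becomes Option<T>, every such code path must choose: default?
// error? propagate the Option further outwards?
fn shipping_label(addr: Option<&str>) -> Result<String, String> {
    match addr {
        Some(a) => Ok(format!("Ship to: {}", a)),
        None => Err("no known address".to_string()),
    }
}

fn main() {
    println!("{:?}", shipping_label(Some("1 High St, London")));
    println!("{:?}", shipping_label(None));
}
```

Note that choosing `Result` as the return type here is itself one of those decisions: it pushes the ripple onwards to this function's callers.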

Ironically, I often find that the hardest places to work out what to do are those where the static type system forces me to deal with the possibility of there not being a T but my analysis of the code shows (or, at least, suggests) that at run-time there always will be a T. This leads to a counter-intuitive situation: although the static type system has helpfully told me all the places where changing the multiplicity needs to be considered, I can introduce more run-time errors than I could ever have encountered in a non-option-types language! I doubt that actually happens very often, but I strongly suspect that the number of new bugs I introduce scales exponentially relative to the number of places I have to change before the program starts compiling again.

A different problem comes from multiplicities that the language's static type system can't directly model. For example, Rust doesn't provide an obvious way of differentiating "0 or more Ts" from "1 or more Ts": both multiplicities naturally map onto Vec<T>. If I am unlucky enough to have to correct my code from one of these multiplicities to the other, I am on my own: in my experience, I should gird my loins for lengthy debugging, and a long tail of bugs finding their way into production.

Of course, I can make my own ZeroOrMoreVec<T> and OneOrMoreVec<T> types [5], but I then end up in an ergonomic mess whenever I deal with existing code that expects plain old Vec<T> as inputs or, worse, gives them as outputs (e.g. do I dynamically check the property each time I create one of my new wrapper types? what is the performance cost?). It's very hard to give a general rule about whether the resulting ergonomic pain is worth it or not.
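A minimal sketch of what such a wrapper might look like (OneOrMoreVec here is hypothetical, not a standard library type):

```rust
// A hypothetical "1 or more Ts" wrapper. The invariant is enforced by a
// run-time check at the boundary with code that hands us a plain Vec<T>.
struct OneOrMoreVec<T> {
    v: Vec<T>,
}

impl<T> OneOrMoreVec<T> {
    // Converting from existing code's Vec<T> costs a dynamic check on
    // every conversion: this is part of the ergonomic/performance price.
    fn try_from_vec(v: Vec<T>) -> Option<OneOrMoreVec<T>> {
        if v.is_empty() {
            None
        } else {
            Some(OneOrMoreVec { v })
        }
    }

    // The payoff: unlike Vec::first, this cannot fail.
    fn first(&self) -> &T {
        &self.v[0]
    }
}

fn main() {
    let v = OneOrMoreVec::try_from_vec(vec![1, 2, 3]).unwrap();
    println!("first={}", v.first());
}
```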

The solution I most often fall back on is to augment the programming language's support for expressing a multiplicity with asserts. For example, if I want to express a "1 or more Ts" multiplicity I will sprinkle assert!(v.len() > 0) in many places around the code. If I later have to change that multiplicity, I won't get any additional static warnings from my asserts, but I will at least get an informative failure whenever a code path that hasn't been adapted is about to be executed. That's a lot better than the program spluttering on and falling over in a way that I can't easily trace back to the root cause.
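A minimal sketch of this style, with a hypothetical function whose correctness depends on a "1 or more" multiplicity (not least because averaging zero orders would divide by zero):

```rust
// Sketch: documenting a "1 or more Ts" multiplicity with an assert. It
// gives no static help, but if the multiplicity later changes, any code
// path that has not yet been adapted fails early, close to the cause.
fn average_order_value(order_values: &[u64]) -> u64 {
    assert!(order_values.len() > 0, "expected 1 or more orders");
    order_values.iter().sum::<u64>() / order_values.len() as u64
}

fn main() {
    println!("{}", average_order_value(&[10, 20, 30]));
}
```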

Conclusions

Multiplicities are difficult, especially if we have to change them. Because of this, whenever there is even the slightest doubt as to what the right multiplicity is, we will use the most ergonomic option and hope for the best. In particular, ergonomics heavily bias us towards the "exactly 1 T" multiplicity over all the others. I am no different: I need compelling evidence of the need for another multiplicity in order for me to switch. Because of that, I inevitably use the "exactly 1 T" multiplicity in situations that I later regret.

I've given a few suggestions for ways of ameliorating the problems of multiplicities, but is there anything further we can do? Better, or at least different, programming language design can help us a bit when we have to change a multiplicity. Moving from null-able types to option types is generally an improvement — though it makes some types of code correction far more painful. Dependently typed languages, though not yet widespread, and with trade-offs at scale which I only have a dim understanding of, make it possible to differentiate "0 or more Ts" from "1 or more Ts" and the like. However, programming language design can't remove all the pain — it can ameliorate it at best, and sometimes all it does is move the pain around.

Even with such improvements, I don't think the ergonomics of multiplicities will change much: "exactly one T" will always be much easier to deal with than any other multiplicity. Similarly, I don't believe that the problem of incomplete, inaccurate, or misunderstood domain experts can ever be solved: it's fundamental [6] and best acknowledged as such.

Putting all this together can make it clearer why we often encounter software systems that don't accept the multiplicities we expect: the programmers didn't realise the correct multiplicity early enough; and by the time they did realise, the cost of change was considered too high. Every time I encounter a case in someone else's software where I spot an incorrect multiplicity I try to remember how hard I find it to get this right too!

Acknowledgements: thanks to Edd Barrett, Lukas Diekmann, Andrei Lascu, Dan Luu, and Davin McCall for comments.


Footnotes

[1] Spare a thought for functional programmers in languages where their ability to share state is far more constrained: this problem moves from tedious to... well, whatever word is stronger than "tedious". The more thoughtful programmers I know who write code in such languages freely admit that this is a serious issue but, perfectly reasonably, consider it a reasonable trade-off for the other benefits they gain. Personally, I prefer a mix of "sort-of pure programming" for perhaps 80% of a system and "definitely absolutely very impure" for the other 20%, precisely because I know how often some sort of state sharing is the only practical way for me to evolve a system.
[2] In case this seems impossible, consider those who are homeless, whether by choice or not. This 2000 article on giving a park bench a postcode is an interesting example of the consequences. In this case, as far as I have ever been able to tell, homeless people were supposed to be given an address in health systems of "No fixed abode", which all the health systems supported, but then meant that the people so registered were inaccurately tracked through various health systems (UK postcodes are widely used for approximate identification, as each postcode covers only about 15 properties).
[3] Of course, there are also variations such as "exactly 9 x's or 7–42 x's" but in my experience these are obvious up-front, and rarely require much thought on my part.
[4] In the most extreme case, programmers perceive the domain experts as a vast, unwashed, horde that are referred to as "users" — in such cases it is considered a badge of honour by many programmers not to listen to this group of domain experts at all. Even in those cases where programmers and domain experts work together frequently, diligently, and harmoniously, it is hard for both sides to truly understand each other.
[5] I could automatically generate these with meta-programming – whether C++ style templates or Template Haskell-style compile-time meta-programming – but that wouldn't change the main problem this post is tackling.
[6] This is why, in my opinion, up-front formal methods require a strict superset of the effort involved in programming: you still have to spend exactly the same effort to understand the domain as when programming, including inevitable changes, plus whatever additional overhead the formal methods require.
