At a recent conference, I had one of those 'aha!' moments when some existing pieces fit together to give a new insight. Sometimes, a glimpse of greater clarity sheds light on where we are and where we're going. In this particular case, I'd been wrestling with concurrency issues. All concurrent software needs to share state; if this were not true, it would just be a bag of disjoint activities. Sharing state is where the fun starts. And there is a fourth model! I'll explain...
Stating the Obvious - Stuff Gets Shared
Under the hood, each CPU has a pile of memory in a linear space, albeit often apparently made sparse via a memory management unit. Many commodity CPUs these days have multiple cores; but whether or not this is the case, there are usually many things that a CPU or core is trying to do at the same time. There are various ways to do this - here's a summary...
Model 1: Sharing Mutable Variables Using Locks
The ‘mainstream’ approach to sharing state is simple, flowing directly out of the underlying linear memory model - let the threads of activity on each CPU/core access the same or overlapping regions of the memory.
State in memory needs to be changed, so let’s make sure we don’t get race conditions by putting locks around each shared access (writes and reads alike). Often simple mutual-exclusion locks are used, although a ‘concurrent read / exclusive write’ (CREW) lock allows more underlying concurrency, and so potentially better performance, at the cost of a little extra complexity.
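As a minimal sketch of the CREW idea, Go’s standard sync.RWMutex behaves exactly this way: many readers may hold the lock concurrently, but a writer gets it exclusively. (The Counter type here is my own illustration, not something from the text.)

```go
package main

import (
	"fmt"
	"sync"
)

// Counter is a shared value guarded by a CREW-style lock:
// sync.RWMutex allows many concurrent readers but only one writer.
type Counter struct {
	mu sync.RWMutex
	n  int
}

// Inc takes the exclusive (write) lock.
func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// Value takes the shared (read) lock, so readers don't block each other.
func (c *Counter) Value() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.n
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // prints 100
}
```

With a plain sync.Mutex, those hundred readers would also serialise; the RWMutex only serialises the writes.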
Model 2: Sharing Mutable Variables Using Lock-Free Algorithms
Sometimes (or even often), problems arise from using locks to prevent race conditions. Firstly, there is the overhead of acquiring and releasing the locks. Then there is the reduction in concurrency, because only one thread of activity can proceed for the duration of the lock. Then there is the possibility that two or more threads each grabbing for more than one lock may deadlock. Hmmm.
So, clever people have developed a range of lock-free algorithms that allow mutable state to be shared. The principle is simple: follow each algorithm’s rules during the exchange of information and race conditions won’t happen. This is good. Unfortunately, there is no such thing as a free lunch; lock-free algorithms may perform faster than their lock-based counterparts, or they may indeed be slower. They add complexity, require more self-discipline, and therefore introduce more scope for bugs. And they are not available for every possible situation either. So lock-free algorithms are not a panacea.
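The simplest of these algorithms is the compare-and-swap (CAS) retry loop, which Go exposes via sync/atomic. A sketch (the incrementCAS helper is my own naming):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// incrementCAS is a minimal lock-free update: read the current value,
// compute the new one, and retry if another goroutine raced in between.
func incrementCAS(addr *int64) {
	for {
		old := atomic.LoadInt64(addr)
		if atomic.CompareAndSwapInt64(addr, old, old+1) {
			return // our update won the race
		}
		// Another goroutine changed *addr first; loop and retry.
	}
}

func main() {
	var n int64
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			incrementCAS(&n)
		}()
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt64(&n)) // prints 1000
}
```

No lock is held at any point, so no deadlock is possible; the cost is that under heavy contention the retry loop may spin, which is exactly the “may be slower” caveat above.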
Model 3: Sharing Immutable State
There’s a simple answer to the problem of race conditions. This answer is applicable in quite a wide range of programming languages, but is voiced most loudly by the functional programming community for whom it is a basic principle.
Stop using mutable state. All immutable values can be happily read concurrently by many threads.
This is, in essence, like the ‘C’ part of a CREW lock, but when no concurrent writing can happen, no lock is needed at all. Simples.
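Even outside FP languages, the pattern can be followed by convention. A sketch in Go, where an “update” builds a fresh copy instead of mutating in place (the Config type is my own illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// Config is treated as immutable: once built, it is never modified.
// Any "change" builds a fresh copy, so concurrent readers need no lock.
type Config struct {
	Host string
	Port int
}

// WithPort returns a new Config rather than mutating the receiver.
func (c Config) WithPort(p int) Config {
	c.Port = p // c is a value copy; the original is untouched
	return c
}

func main() {
	base := Config{Host: "example.org", Port: 80}
	updated := base.WithPort(8080)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = base.Port // concurrent reads of immutable data: safe, lock-free
		}()
	}
	wg.Wait()
	fmt.Println(base.Port, updated.Port) // prints 80 8080
}
```

Nothing enforces the immutability here, which is precisely the “tricky with non-FP languages” point: the discipline is the programmer’s, not the compiler’s.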
There is a downside (…why is there always a downside?)! The downside is that pure immutability is tricky in non-FP languages. And FP languages need state monads because there is always progressing state that needs to be modelled. Hmmm, tricky either way.
Model 4: Not Sharing Mutable State
The ‘aha!’ moment I mentioned earlier was from the discovery that there is a fourth model. The other three all have their place. But there is a fourth. Perhaps it’s a well-kept secret: not many languages offer it and it’s rarely taught. I wonder why that is - it actually seems quite obvious and easy once you know.
The fourth model is simple: single transferable ownership of state. This is nothing like the complicated voting system with a similar name - it’s really very simple.
If I have a complicated data structure and you need it, I give it to you. Only its owner is allowed to change it. It only ever has one owner.
The fact that I have to give it implies one important feature: message passing is the way data is exchanged. In part, Go popularised this approach with “Do not communicate by sharing memory; instead, share memory by communicating“ (ref). However, Go doesn’t quite reach the model: it allows references to complex data structures to be aliased - i.e. there can be more than one owner of a given mutable variable. Nothing in Go formally stops you from causing nasty race conditions, but if you follow the channel-based pattern and are careful, you can avoid them.
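The channel-based pattern looks like this in Go. The convention (unenforced by the language, as noted) is that once a pointer has been sent, the sender never touches it again, so at any moment exactly one goroutine owns the structure. (The Report type is my own illustration.)

```go
package main

import "fmt"

// Report is a mutable structure whose ownership we transfer over a
// channel: after sending it, the sender must not touch it again.
type Report struct {
	Lines []string
}

func producer(out chan<- *Report) {
	r := &Report{}
	r.Lines = append(r.Lines, "first")
	out <- r // ownership passes to the receiver here
	// By convention, r is no longer ours; we never read or write it again.
	close(out)
}

func consumer(in <-chan *Report, done chan<- int) {
	for r := range in {
		r.Lines = append(r.Lines, "second") // safe: we are the sole owner
		done <- len(r.Lines)
	}
	close(done)
}

func main() {
	ch := make(chan *Report)
	done := make(chan int)
	go producer(ch)
	go consumer(ch, done)
	for n := range done {
		fmt.Println(n) // prints 2
	}
}
```

Nothing in the code stops producer from writing to r after the send - that restraint is exactly the discipline Go asks of you, and exactly what an alias-free language would enforce.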
If multiple aliases to shared variables are the problem, then one way to prevent race conditions is to prevent aliasing itself - simply and cleanly.
Message passing concurrency has been specified and derived using process algebras such as CSP, CCS and Pi Calculus. Alias-free languages that implement this are very rare. I know of one - Occam, an old and rather-forgotten language. Hmmm, so this means I’ve just learnt something that was well known 30 years ago. Ho hum…