Communicating Process Architectures - CPA2014 Conference Summary

The CPA2014 conference, held in Oxford over the last few days, brought together an interesting group of around three dozen experts in the field of communicating process architectures (CPA). Discussions covered concurrency and parallelism topics, both theoretical and practical, in areas related to CPA.

Also known as process-oriented programming, CPA addresses both concurrency (the natural expression of things happening alongside each other) and parallelism (the physical execution of many parts of a program at the same time) by means of process algebra formalisms, CSP in particular.

Many leading names in the field were there. With the conference hosted by the prestigious Programming Research Group at Oxford University, it was no surprise to meet a large contingent from that group, notably including Bill Roscoe and Bernard Sufrin. Their research colleague Thomas Gibson-Robinson won the conference's best-paper award for his well-presented work achieving super-linear speed-up of the FDR verification tool, which is implemented in Haskell, across a large parallel cluster in the Amazon EC2 cloud. This was achieved using CPA techniques.

Notable amongst the keynotes was Roger Shepherd’s review, “*Parallel Systems from 1979 to 2014*”, which gave clear insight into the drivers for parallel systems and the progress (or sometimes the lack of it) towards constructing useful ones. Roger was one of the lead architects of the Inmos Transputer programme and sees many of the drivers behind that technology re-emerging in modern-day computing. Importantly, because Dennard scaling ended nearly ten years ago and Moore’s “Law” no longer translates into faster single cores, parallel systems will now become increasingly necessary.

The strong theme running through all the papers was communicating process architectures based on process algebras, primarily CSP. Mappings of CSP into various languages were discussed at this conference (and of course at its forerunners in earlier years), including JCSP for Java, PyCSP for Python, and Communicating Scala Objects (CSO) for Scala. Go supports a muted form of CSP interactions directly. The original formally-derived Occam language is perhaps the grandfather of these and still the one to beat: Occam programs compiled with current compilers are notable for their high performance and unrivalled scalability across parallel machines. Ongoing development of Occam is modest but still continues.
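To give a flavour of the idiom all these mappings share, here is a minimal sketch in Go (chosen simply because its channels are built into the language); the process names and values are purely illustrative and not taken from any conference paper. Two processes run alongside each other and interact only by passing values over an unbuffered channel, so every communication is a synchronising rendezvous in the CSP sense.

```go
// A minimal sketch of CSP-style interaction: two independent processes
// that share no state and communicate only by message passing.
package main

import "fmt"

// producer is one "process": it sends the numbers 0..4 down the channel,
// then closes it to signal completion.
func producer(out chan<- int) {
	for i := 0; i < 5; i++ {
		out <- i // blocks until the consumer is ready: a CSP-style rendezvous
	}
	close(out)
}

// consumer is a second process: it receives until the channel is closed.
func consumer(in <-chan int, done chan<- struct{}) {
	for v := range in {
		fmt.Println("received", v)
	}
	done <- struct{}{}
}

func main() {
	ch := make(chan int)        // unbuffered, so every send synchronises with a receive
	done := make(chan struct{}) // used to wait for the consumer to finish

	go producer(ch)
	go consumer(ch, done)
	<-done
}
```

JCSP, PyCSP and CSO express much the same pattern through library-provided channels rather than language syntax.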

A workshop led by Peter Welch explored applying lock-free algorithms as a way to improve JCSP (which is already mature and stable); it involved coding exercises and the application of FDR to verify the correctness of the results.

CSO is far less mature than JCSP, but could transform concurrency in Scala on the JVM. It addresses the serious problems of using native threads on the JVM by providing lightweight, coroutine-like concurrency in the same manner as Occam and Go. It offers a simpler, better-performing and more general (and verifiable) alternative to products such as Akka.
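To illustrate why that lightweight-process model matters, here is a rough Go sketch (Go being the nearest widely available analogue; this is not CSO code, and the figure of 100,000 processes is chosen purely for illustration). Spawning that many communicating processes is routine for a lightweight runtime, whereas allocating one native JVM thread per process at such a scale would be prohibitively expensive.

```go
// An illustrative sketch of the lightweight-process model: launch 100,000
// communicating processes and gather their results.
package main

import "fmt"

func main() {
	const n = 100000
	results := make(chan int, n) // buffered so the senders never wait on the receiver

	// Launch n lightweight processes; each does a trivial piece of work
	// and reports its result over the shared channel.
	for i := 0; i < n; i++ {
		go func(id int) {
			results <- id * 2
		}(i)
	}

	// Gather all n results in the main process.
	sum := 0
	for i := 0; i < n; i++ {
		sum += <-results
	}
	fmt.Println("collected results from", n, "lightweight processes; sum =", sum)
}
```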

Besides covering concurrent software issues, the conference also included hardware co-design. For example, there were papers describing the mapping of Haskell models directly onto bespoke FPGA execution units for various applications.

I have given only an overview here. This sector has always been deeply interesting to me. Efforts to use process algebras to find easier ways to create faster, more scalable concurrent and parallel systems have worked extremely well in the past, but they have tended to face strong headwinds, perhaps because some of the propositions are radical and often because alternatives claim to be more expedient. Over the longer term, it is quite likely that limited-scope concurrency solutions (Node.js, for example) will come and go, whilst ideas with a clearer theoretical basis will stick around longer.

On this basis, you should expect to find process-oriented programming making inroads into your own work.

 