I will try to skip some of the awkward, long-winded introductions that come with the inaugural entry of a new blog. So, to keep things short, my name is Neil, and I’m a software developer.
The subject that has been gnawing at my brain for a while now is concurrency. Not a lot of developers realize it yet, but concurrency is coming to get us all. It’s like a drumbeat in the distance, getting gradually louder and louder until soon we won’t be able to hear anything else.
Why is this? Blame it on the speed of light. Although Moore’s Law still seems to be with us (i.e. the transistor density of chips continues to grow exponentially), there has been a marked slowdown in the growth of clock speeds. CPUs can still churn through ever more instructions per second in aggregate, but the time to execute a single instruction is no longer falling as fast. This means that code written in the traditional way just isn’t getting faster each time you buy a new computer, the way it used to.
But new computers have to perform faster than the ones they replace! It’s a law of the market, just like the one that says public companies have to make more money each year than they did the last. So the chip-makers have turned to other techniques. They have gone multicore: these days when you buy a new computer, it has two or four or even eight CPUs inside. And they’ve added hyperthreading, which allows a single core to run more than one thread at the same time.
For developers, this is a nightmare. It almost doesn’t matter how much we optimize our code any more — that’s just straight-line speed. A program that runs fast only on a single core is like a drag car: put it on a race track with corners, and watch it fly straight into the wall. We have to be like a Formula One car, with the optimal mix of cornering ability and straight-line speed.
Unless we aggressively seek out ways to make our code run in parallel, we’re going to sacrifice 50% or 75% or 87.5% of the available speed of our computers. And that’s scary because parallel code is just so much harder to write. Building Formula One cars is an order of magnitude harder than building drag cars.
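Those percentages are just the fraction of the machine that sits idle when a single-threaded program runs on a two-, four-, or eight-core chip. A quick sketch of the arithmetic (in Python, chosen here purely for illustration):

```python
def wasted_fraction(cores: int) -> float:
    """Fraction of total CPU capacity left idle when a program
    can only keep one core busy."""
    return (cores - 1) / cores

for n in (2, 4, 8):
    print(f"{n} cores: {wasted_fraction(n):.1%} of the machine wasted")
# prints 50.0%, 75.0% and 87.5% respectively
```

And that's the optimistic view: it assumes the program parallelizes perfectly once we do the work, which real programs, with their unavoidable serial sections, never quite manage.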
This is bad news for our industry because there aren’t enough of the really smart people who know how to build concurrent applications. I can see only two solutions to this problem. One is to just accept that development is going to get harder, and that the developers who can’t deal with it should get out. Well, I guess we all know a few developers who we think should take up another profession. Ideally a profession that doesn’t require rational thought. But putting that aside, this isn’t really a solution. There is still more software needed in the world than the software development profession is able to deliver. Besides, I don’t think they will actually go; they’ll just go on producing broken code.
So the second solution is to improve our tools, and give every developer the chance to safely employ concurrency. Today, when we introduce concurrency into an application, we’re very likely to shoot ourselves in the foot with a deadlock or a race condition. Even when we get concurrency right, we can’t build higher level abstractions that allow others to consume our code without understanding its internal structure. Very rarely do we achieve what we set out to do, i.e. make the damn thing go faster. The mainstream languages and tools are failing us.
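To make the foot-shooting concrete, here is a minimal sketch (in Python, purely for illustration) of the classic race condition: several threads doing an unsynchronized read-modify-write on a shared counter, next to the locked version that fixes it.

```python
import threading

ITERATIONS = 100_000
counter = 0
lock = threading.Lock()

def racy_worker():
    global counter
    for _ in range(ITERATIONS):
        tmp = counter      # read...
        counter = tmp + 1  # ...modify-write: another thread can run in
                           # between, so some increments are silently lost

def safe_worker():
    global counter
    for _ in range(ITERATIONS):
        with lock:         # the lock makes the read-modify-write atomic
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(racy_worker))  # often less than 400000, and different every run
print(run(safe_worker))  # always 400000
```

The insidious part is that the racy version will happily pass a casual test run, then drop updates under load in production, which is exactly the kind of failure our current tools do nothing to prevent.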
A seminal paper on this subject, “The Free Lunch is Over” by Herb Sutter, appeared in Dr. Dobb’s Journal on 30 March 2005, and can also be found online. Herb’s analysis is far more detailed than mine, so I encourage you to read it.
Anyway, this (at least until I pick up a new obsession) will be the subject of my blog: why concurrency is hard; why our mainstream languages and programming techniques fail to help us get it right; and what rays of light are on the horizon to bring concurrent programming to the masses.