Tech Tuesday: Concurrency (Conclusion)

One of the things I always realize as I write Tech Tuesdays is just how much there is to potentially know on any given topic. Entire books have been written on concurrency alone, and there is already a long list of research papers published in 2013. Sometimes this realization can be genuinely daunting: any one person can know only a tiny sliver of all available knowledge. But I digress. Today’s post is an attempt to write a conclusion to this mini series on concurrency.

As I pointed out at the end of the previous post on locks and mutexes, sometimes the cure to concurrency problems is worse than the disease. One field in which this is particularly acute is realtime systems, such as the computer in your car. That computer is responsible for a lot of different things, from operating your navigation system to deploying your air bag. One approach would be to use concurrent programming techniques, but that could have some pretty problematic results. Imagine the program for the air bag not being able to execute because it has to wait for some other program to release a needed resource. The consequences could be dramatic.

So car computers tend to use a different approach. They run only a single program which takes predefined turns carrying out activities. For instance, it may check the conditions for air bag deployment every 100 milliseconds. This approach involves a lot of waste, since (hopefully) most of the time the check finds nothing to do. But it is entirely predictable. And in this situation there is a very high premium on predictable program behavior!
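
To make this concrete, here is a minimal sketch of such a fixed-interval loop in C. The function names are hypothetical stand-ins for the real sensor and actuator code, and a real controller would also account for the time the check itself takes:

```c
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical stand-ins for the real sensor and actuator code. */
static bool crash_detected(void) { return false; /* read sensors here */ }
static void deploy_airbag(void)  { /* fire the inflator here */ }

int main(void) {
    for (;;) {
        if (crash_detected())
            deploy_airbag();
        usleep(100 * 1000); /* wait 100 milliseconds until the next check */
    }
}
```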

One way to build such a system is to have a master loop running that calls subroutines one after another (e.g. a subroutine for the air bag, one for the navigation, and so on). What’s critical to making that approach work is that the subroutines return in predictable (and usually very short) time. Another area where this approach is used a lot is user interfaces. For instance, I am typing this in a web browser on Tumblr. As I type, move the mouse, click on things, etc., we could either be jumping back and forth between different programs or executing a loop that invokes little pieces of code one after another to “work off” the user events. In this case the master program is known as an “event loop.” In fact, there have been whole operating systems based on this kind of “cooperative” approach.
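
Here is a rough sketch, again in C, of what such an event loop might look like. The event type, the handlers, and next_event() are all made up for illustration; in a real user interface next_event() would block until the operating system delivers the next keystroke or mouse click:

```c
#include <stdio.h>

typedef enum { EV_KEY, EV_MOUSE, EV_QUIT } event_type;
typedef struct { event_type type; int data; } event;

/* Stub that immediately reports EV_QUIT so the sketch terminates;
   a real UI would wait here for actual input. */
static event next_event(void) { return (event){ EV_QUIT, 0 }; }

static void handle_key(int key)   { printf("key %d\n", key); }
static void handle_mouse(int pos) { printf("mouse %d\n", pos); }

int main(void) {
    for (;;) {  /* the event loop: work off one event at a time */
        event ev = next_event();
        switch (ev.type) {
        case EV_KEY:   handle_key(ev.data);   break;
        case EV_MOUSE: handle_mouse(ev.data); break;
        case EV_QUIT:  return 0;
        }
    }
}
```

Note that just as with the car computer, each handler has to return quickly: a slow handler stalls every event queued behind it, which is why a single misbehaving callback can freeze an entire user interface.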

So why don’t we do everything that way? Because modern hardware has gone in a different direction. Today’s CPUs all have multiple cores, and a core is essentially a separate processor executing its own stream of instructions. So we can’t get by with a single event loop but instead are back to having multiple programs executing concurrently. Because of this, a lot of work is going into developing non-blocking algorithms and data structures. The basic idea is simple: let multiple programs work on the same things by allowing data structures to be modified without blocking (hence no deadlock, no resource starvation, etc.). This turns out to be quite hard to do in practice though. One enabling technology that people have been working on is so-called Software Transactional Memory.
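
As a small taste of the non-blocking style, here is an illustrative sketch using C11 atomics that increments a shared counter with compare-and-swap instead of a lock (this is my own toy example, not a full non-blocking data structure):

```c
#include <stdatomic.h>
#include <stdio.h>

static atomic_int counter = 0;

void increment(void) {
    int old = atomic_load(&counter);
    /* atomic_compare_exchange_weak succeeds only if counter still
       equals old; on failure it reloads old and we simply retry. */
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
        /* another thread got in first; old now holds the fresh value */
    }
}

int main(void) {
    increment();
    printf("%d\n", atomic_load(&counter)); /* prints 1 */
    return 0;
}
```

The nice property here is that a failed attempt means some other thread made progress in the meantime, so the system as a whole never gets stuck the way it can with locks.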

I won’t go into more detail here as this post is already too long, which proves my point from the introduction. So at least for the moment this will be the end of the rather long programming series on Tech Tuesday. We have now covered all nine questions that I originally set out when I compared programming to telling a person what to do. Next Tuesday I will probably run another survey to determine what to write about next, with one option being working through an example based on my interest in neural nets.

Posted: 12th February 2013
Tags: tech tuesday programming concurrency
