Concurrency

Future (circa 2010) Parallel Programming Models
Concurrency
Published: 2015-12-26

Joe Duffy regularly posts amazing material that is well ahead of its time, such as his current blog post series about Midori.

I’d like to call out this particular assertion he made way back in 2010:

[D]evelopers must move towards single-threaded programming models connected through message passing, optionally with provably race-free fine-grained parallelism inside of those single-threaded worlds.

Add “async/await everywhere” and you can sign me up!
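
Java has no async/await, but the rest of that model is easy to sketch with the standard library: treat each “single-threaded world” as a single-threaded executor, and connect the worlds only by passing messages (here, via CompletableFuture). This is a minimal illustration of the shape Duffy describes, not Midori’s actual API; all names are mine.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadedWorlds {
    public static void main(String[] args) throws Exception {
        // Each "world" runs all of its code on one thread, so state owned
        // by a world can never be involved in a data race.
        ExecutorService worldA = Executors.newSingleThreadExecutor();
        ExecutorService worldB = Executors.newSingleThreadExecutor();

        // World A produces a message; world B consumes it on its own
        // thread. The CompletableFuture is the message channel, and
        // get() plays the role of awaiting the final reply.
        CompletableFuture<String> reply = CompletableFuture
            .supplyAsync(() -> "hello from A", worldA)
            .thenApplyAsync(msg -> msg + ", processed by B", worldB);

        System.out.println(reply.get());
        worldA.shutdown();
        worldB.shutdown();
    }
}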

Threading is a Poor Concurrency Programming Model
Concurrency
Published: 2015-06-03

Here’s why I try to avoid thread-based programming models for expressing concurrency:

The opponents of thread-based systems line up several drawbacks. For Ousterhout, who probably published the most well-known rant against threads [Ous96], the extreme difficulty of developing correct concurrent code–even for programming experts–is the most harmful trait of threads. As soon as a multi-threaded system shares a single state between multiple threads, coordination and synchronization become an imperative. Coordination and synchronization require locking primitives, which in turn bring along additional issues. Erroneous locking introduces deadlocks or livelocks and threatens the liveness of the application. Choosing the right locking granularity is also a source of trouble. Locks that are too coarse slow down concurrent code and lead to degraded sequential execution. By contrast, locks that are too fine increase the danger of deadlocks/livelocks and increase locking overhead. Concurrent components based on threads and locks are not composable: given two components that are each thread-safe, a composition of them is not thread-safe per se. For instance, unknowingly introducing circular dependencies between multi-threaded components can cause severe deadlocks.
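
The non-composability point is worth making concrete. Below is a minimal sketch (the Account class and its methods are my own invention, not from the quoted text): each object is thread-safe in isolation, yet wiring two of them into a cycle can deadlock.

class Account {
    private int balance = 100;

    // Thread-safe in isolation: every method holds this object's monitor.
    synchronized void transferTo(Account other, int amount) {
        balance -= amount;
        other.deposit(amount); // acquires other's monitor while holding ours
    }

    synchronized void deposit(int amount) {
        balance += amount;
    }
}

public class DeadlockDemo {
    public static void main(String[] args) {
        Account a = new Account();
        Account b = new Account();
        // Thread 1 repeatedly locks a then b; thread 2 locks b then a.
        // With the wrong interleaving, each holds one monitor and waits
        // forever for the other: a classic lock-ordering deadlock.
        new Thread(() -> { for (int i = 0; i < 1_000_000; i++) a.transferTo(b, 1); }).start();
        new Thread(() -> { for (int i = 0; i < 1_000_000; i++) b.transferTo(a, 1); }).start();
    }
}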

Read more...
Clean Code is Not Threadsafe Code?
Concurrency clean-code thread-safety
Published: 2013-10-05

I am currently reading Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin and I’ve noticed a pattern.

In Listing 2-1, Martin recommends refactoring this code:

private void printGuessStatistics(char candidate, int count)
{
    String number;
    String verb;
    String pluralModifier;
    if (count == 0) {
        number = "no";
        verb = "are";
        pluralModifier = "s";
    } else if (count == 1) {
        number = "1";
        verb = "is";
        pluralModifier = "";
    } else {
        number = Integer.toString(count);
        verb = "are";
        pluralModifier = "s";
    }
    String guessMessage = String.format(
        "There %s %s %s%s", verb, number, candidate, pluralModifier
    );
    print(guessMessage);
}

into this:

Read more...
Reader/Writer Lock Pattern
Concurrency locking
Published: 2010-03-04

A reader-writer lock is a lock that allows multiple concurrent readers but only one writer. It can be significantly more efficient than a standard mutex when reads of your shared memory far outnumber writes.

Reader-writer locks naturally fit together with caches, as caches are only effective if reads far outnumber writes.

Here is a general pattern for using a reader-writer lock with a cache (a Java sketch follows the list):

  1. Acquire a reader lock.
  2. Check the cache for the value. If it exists, save the value and go to step 8.
  3. Upgrade the reader lock to a writer lock.
  4. Check the cache for the value. If it exists, save the value and go to step 7.
  5. Calculate the value (expensive; otherwise we wouldn’t cache it).
  6. Insert the value into the cache.
  7. Release the writer lock.
  8. Release the reader lock.
  9. Return the value.
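
Here is a sketch of the pattern in Java. Note that java.util.concurrent’s ReentrantReadWriteLock does not support upgrading a read lock to a write lock, so step 3 is modeled as “release the read lock, then acquire the write lock”; the gap between those two operations is exactly why the re-check in step 4 is mandatory. The cache type and compute function are placeholders of my own, not a real API.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Function;

public class RwCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key, Function<K, V> compute) {
        lock.readLock().lock();               // step 1: acquire reader lock
        try {
            V value = cache.get(key);         // step 2: check the cache
            if (value != null) {
                return value;                 // steps 8-9: release and return
            }
        } finally {
            lock.readLock().unlock();
        }

        lock.writeLock().lock();              // step 3: reacquire as writer
        try {
            V value = cache.get(key);         // step 4: re-check -- another
            if (value == null) {              // writer may have filled it in
                value = compute.apply(key);   // step 5: expensive computation
                cache.put(key, value);        // step 6: insert into the cache
            }
            return value;                     // step 9: return the value
        } finally {
            lock.writeLock().unlock();        // step 7: release writer lock
        }
    }
}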

We have to check the cache for the value again in step 4 because of the following possibility (assume step 4 doesn’t exist):

Read more...