Sharing Is the Root of All Contention


Quick: What fundamental principle do all of the following have in common?

  • Two children drawing with crayons.
  • Three customers at Starbucks adding cream to their coffee.
  • Four processes running on a CPU core.
  • Five threads updating an object protected by a mutex.
  • Six threads running on different processors updating the same lock-free queue.
  • Seven modules using the same reference-counted shared object.

Answer: In each case, multiple users share a resource that requires coordination because not everyone can use it simultaneously — and such sharing is the root cause of all resulting contention that will degrade everyone's performance. (Note: The only kind of shared resource that doesn't create contention is one that inherently requires no synchronization because all users can use it simultaneously without restriction, which means it doesn't fall into the description above. For example, an immutable object can have its unchanging bits be read by any number of threads or processors at any time without synchronization.)
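To make the immutable case concrete, here is a minimal C++ sketch (the names are mine, not part of the article): once the vector is fully constructed and no thread ever writes to it again, any number of threads may read it concurrently with no locks and no atomics.

    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        // Immutable after construction: never written again, so concurrent
        // reads need no synchronization beyond the happens-before that
        // std::thread creation already provides.
        const std::vector<int> table = {2, 3, 5, 7, 11};

        auto reader = [&table](int id) {
            long sum = 0;
            for (int v : table) sum += v;   // read-only access, no coordination
            std::printf("reader %d saw sum %ld\n", id, sum);
        };

        std::thread t1(reader, 1), t2(reader, 2);
        t1.join();
        t2.join();
    }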

In this article, I'll deliberately focus most of the examples on one important case, namely mutable (writable) shared objects in memory, which are an inherent bottleneck to scalability on multicore systems. But please don't lose sight of the key point, namely that "sharing causes contention" is a general principle that is not limited to shared memory or even to computing.

The Inherent Costs of Sharing

Here's the issue in one sentence: Sharing fundamentally requires waiting and demands answers to expensive questions.

As soon as something is shared in a way that doesn't allow unlimited simultaneous use, we are forced to partially or fully serialize access to it. The word "serialize" should leap out as a bright red flag: it is the inverse and antithesis of "parallelize," and a fundamental enemy of scalability. The overhead demanded by the very act of sharing is inherent in the notion of sharing itself, not only in software; it applies equally to "one child at a time can use the blue crayon," "one process at a time can write to the file," and "one writer at a time can use the shared object."
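As a concrete illustration (a sketch of my own, not code from the article), here is the "one writer at a time" case in C++: the mutex forces the two threads to take turns, so their updates to the shared object are serialized even though we launched them in parallel.

    #include <cstdio>
    #include <mutex>
    #include <thread>

    std::mutex m;
    long shared_value = 0;   // the contended resource

    void writer(int rounds) {
        for (int i = 0; i < rounds; ++i) {
            std::lock_guard<std::mutex> lock(m);  // one thread at a time
            ++shared_value;                       // exclusive use of the resource
        }
    }

    int main() {
        std::thread t1(writer, 100000), t2(writer, 100000);
        t1.join();
        t2.join();
        std::printf("%ld\n", shared_value);  // 200000, but the increments ran serially
    }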

That we're forced to serialize access, in turn, means two things: we are forced to (potentially) wait, and we are forced to start answering expensive questions like "Can I use it now?" Computing the answer to any question in software costs CPU cycles. But these are more expensive questions, because answering them requires agreement at many levels of the system, across software threads and processes and across hardware processors and memory subsystems, and getting that agreement can cost substantial communication and synchronization overhead that isn't even visible in the source code.
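The question "Can I use it now?" has a direct spelling in code. A minimal sketch (mine, assuming a plain std::mutex): even when try_lock succeeds immediately, just asking the question costs a synchronized read-modify-write that all processors must agree on.

    #include <cstdio>
    #include <mutex>

    std::mutex m;

    int main() {
        // Asking "can I use it now?" is itself an atomic operation that
        // the memory system must make visible to every other processor,
        // whether or not anyone else is competing for the lock.
        if (m.try_lock()) {
            std::puts("uncontended: got it, but the question still cost us");
            m.unlock();
        } else {
            std::puts("contended: now we must also wait, retry, or give up");
        }
    }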

A Roadmap

Table 1 helps to map out these costs by breaking down their sources (rows) and how they manifest (columns), and giving examples of each case. The rows categorize the sources into three kinds of sharing overhead, where the first two are fundamental and the third arises when attempts to optimize these costs fail:

  • Blocking progress (fundamental): Sharing requires waiting. Clearly, when one client has exclusive access to the resource, some or all other clients must wait idly, unable to make progress. (The worst case is starvation, where a client is forced to wait indefinitely.)
  • Slowing progress (fundamental): Sharing demands answers to expensive questions. Beyond actual waiting, there is usually a cost for merely asking for coordination, whether or not it turns out to be needed.
  • Wasting progress (secondary): Resolving contention can mean throwing away work. Some protocols perform optimistic execution to mitigate the first two costs and run faster when there is no contention; but when contention does occur, the protocol may have to undo and redo otherwise-useful work, as the sketch after this list illustrates. The more contention we encounter, the more effort we waste.
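
Here is a sketch of that third, optimistic category (my example, not the article's): the new value is computed without holding any lock, then published with a compare-and-swap. When another thread got there first, the CAS fails and the computed value, otherwise useful work, is thrown away and redone.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<long> counter{0};

    void add_optimistically(long delta) {
        long observed = counter.load();
        long desired;
        do {
            // Optimistic work: done before we know it can be published.
            desired = observed + delta;
            // On failure, compare_exchange_weak reloads 'observed'; every
            // failed attempt is wasted work that must be redone.
        } while (!counter.compare_exchange_weak(observed, desired));
    }

    int main() {
        std::thread t1(add_optimistically, 1), t2(add_optimistically, 2);
        t1.join();
        t2.join();
        std::printf("%ld\n", counter.load());   // always 3, however many retries it took
    }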

Table 1: Examples of contention penalties. (Rows: blocking, slowing, and wasting progress; columns: (A) costs visible in source code, (B) costs invisible in software, (C) costs invisible in hardware.)

These penalties also manifest in various ways, shown in the columns. The simplest case is when the potential cost is visible in our source code (column A). For example, just by looking at the code we know that the expression mutex.lock() might cause our thread to wait idly. But each penalty also manifests in invisible ways, both in software and in hardware (columns B and C), particularly on multicore systems.
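One classic example of the invisible hardware cost in column C is false sharing. The sketch below (my example, not the article's) shows two threads that share nothing in the source code, yet contend anyway, because their two counters almost certainly occupy the same cache line.

    #include <cstdio>
    #include <thread>

    struct Counters {
        long a = 0;   // written only by t1
        long b = 0;   // written only by t2, yet very likely on the same cache line as 'a'
    } c;

    int main() {
        std::thread t1([] { for (long i = 0; i < 10000000; ++i) ++c.a; });
        std::thread t2([] { for (long i = 0; i < 10000000; ++i) ++c.b; });
        t1.join();
        t2.join();
        std::printf("%ld %ld\n", c.a, c.b);
        // Padding the struct, e.g. alignas(64) on each member, removes the
        // hardware contention without changing a single visible line of logic.
    }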

