It feels like every few days I run into a trace that follows an unfortunate pattern. The top-level call depends on an external HTTP call, a series of calls to a database, and finally a join operation. This often presents (at least) three problems.
The database queries don’t depend on results from one another and could be executed as a single transaction.
Someone was being silly and set one call to run only after another has completed. We can resolve this by setting the two to run concurrently (provided neither strictly depends on the other).
The external HTTP call is a non-negotiable, blocking operation against a datacenter halfway around the world. We can treat it as totally external to our system, out of our control.
Even if we have a poorly structured implementation, fixing the first two points alone may only yield a small improvement in the overall execution time. The long-running external call on the hot path will be the death of us. This is an example of an operation running smack into Amdahl’s law.
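To make this concrete, here is a minimal `asyncio` sketch of the shape of the trace. The call names and latencies are invented for illustration; the point is that running the independent calls concurrently fixes the first two problems, but total latency stays floored by the slow external call.

```python
import asyncio

# Hypothetical stand-ins for the calls in the trace above; the names
# and timings are made up for illustration.
async def external_http_call():
    await asyncio.sleep(0.50)  # blocking call to a faraway datacenter

async def db_query(i):
    await asyncio.sleep(0.05)  # one of several independent queries

async def handler_serial():
    # Everything on the hot path, one after another: ~0.70 s total.
    await external_http_call()
    for i in range(4):
        await db_query(i)

async def handler_concurrent():
    # Independent calls run concurrently, but total latency is still
    # floored by the 0.50 s external call.
    await asyncio.gather(
        external_http_call(),
        *(db_query(i) for i in range(4)),
    )
```

Concurrency here buys back the database time, but nothing more.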
Paraphrasing Amdahl: The overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used.
I’m certain Amdahl wasn’t the first person to notice this as a general property of systems. I wouldn’t be surprised if someone in Mesopotamia observed it thousands of years ago.
This effect was originally described in the context of computer systems in Gene Amdahl’s Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities, and thus named Amdahl’s Law. We can write

$$ S(s) = \frac{1}{(1 - p) + \frac{p}{s}} $$

to describe the overall speedup of a system as a function of the speedup $s$ on a task which is used for a fraction $p$ of the system’s execution time.
If our improvable task is a fraction $p$ of the total execution time, there is no amount of optimization we can do to speed up the overall system beyond $\frac{1}{1 - p}$ times.
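Amdahl’s bound is easy to sanity-check numerically. A minimal sketch (the function name is mine):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of execution is sped up by s."""
    return 1.0 / ((1.0 - p) + p / s)

# If the improvable task is 25% of execution time, even a massive
# speedup of that part tops out near 1 / (1 - 0.25) ≈ 1.33x overall.
print(amdahl_speedup(0.25, 1_000_000_000))
```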
In A Simple Capacity Model for Massively Parallel Transaction Systems, Neil Gunther described a similar phenomenon in the context of analyzing the scaling of multi-processor transactional DB systems. In this model, relative capacity

$$ C(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)} $$

can be thought of as a function of throughput given $N$ nodes, where $\alpha$ captures contention for shared resources and $\beta$ captures coherency (crosstalk) delay.
It bears mentioning that the parameters of a system are observed empirically. A regression doesn’t care how a slowdown arises in a system, just that it performs relatively better/worse when new nodes are added.
When we deal with distributed systems, we must concern ourselves with the potential for retrograde scaling. Beyond a certain number of nodes, absolute throughput may drop due to coordination overhead across many nodes. If your system is farming out stateless calculations, we may expect the contention and coherency costs to be near zero. The same system, but with serialized writes, may have relatively high costs (contention waiting for a write lock, which grows polynomially with the number of nodes). Notice that when the node count is large, throughput can actually become retrograde when these costs are high. This is very bad; we want to avoid it.
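Retrograde scaling falls straight out of the capacity model. A short sketch, assuming the usual Universal Scalability Law form with a contention parameter `alpha` and a coherency parameter `beta` (the example parameter values are invented):

```python
def usl_capacity(n: int, alpha: float, beta: float) -> float:
    """Relative capacity at n nodes under Gunther's capacity model.

    alpha models contention for shared resources; beta models
    coherency (crosstalk) delay, which grows with n * (n - 1).
    """
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# Stateless farming (alpha ≈ beta ≈ 0): capacity grows ~linearly.
# Serialized writes (nonzero beta): capacity peaks, then drops as
# nodes are added — the retrograde regime.
for n in (10, 30, 100):
    print(n, usl_capacity(n, alpha=0.05, beta=0.001))
```

With these example parameters, capacity peaks around 30 nodes; a 100-node cluster does strictly less work than a 30-node one.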
There are two PSAs here:
Optimizing a single part of a system that contributes little to the overall execution is a fool’s errand. Obvious in theory, not always obvious in application!
Retrograde scaling exists. If we must design systems that require many independent workers, we should prioritize minimizing contention for shared data before adding more workers.