This is the thirteenth video in Part 3 of the Performance-Aware Programming series. Please see the Table of Contents to quickly navigate through the rest of the course as it is updated weekly.
In the previous post we talked about latency, throughput, and dependency chains. There was a lot to cover, so we only touched on dependency chains briefly, and we didn’t really use latency or (reciprocal) throughput to make any predictions about performance. In this post, we’re going to look at dependency chains more closely and see how latency and reciprocal throughput measurements relate to performance.
We’ll start with laundry, but then we’ll move to the CPU to set us up for next week’s post.
Let's start with a more complicated “laundry” example than we had last time. Suppose we have twelve different things we need to do before we can go to bed: