I am new to Parallel Computing, new to writing blog posts, basically, I am, what they call, a “noob”. Through this blog, I plan to document my exploration, interpretation and implementation of parallel computing. I hope this serves as a reference for some student somewhere trying to get the hang of the subject, and excel at it, like I am trying to.
My first post, to get the clichés out of the way, is going to be about the difference between Latency and Throughput. This is a fundamental concept in Computer Science and Parallel Computing, and it is essential that every Computer Science major have a clear picture of the difference. I am going to use GIFs that I made (using Photoshop) to get my message across. (A picture is worth a thousand words, so is a 4-frame GIF worth four thousand? :P)
In layman's terms, latency is the time it takes for a message to traverse a system. Low latency is indicative of high network efficiency.
Throughput is the amount of material or items that traverse the system in a given period of time. It is similar to capacity: the amount of work a computer can get done per unit of time. The throughput of a computer depends on factors like the speed of the CPU, the amount of available memory, the performance of the operating system, the kind of transmission media, etc.
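To make the two definitions concrete, here is a minimal Python sketch. All the numbers are made up purely for illustration: 1000 messages pass through a hypothetical system in 4 seconds, and each message spends 20 ms in transit.

```python
# Hypothetical figures, just to show how each metric is computed.
messages = 1000            # items that traversed the system
total_time_s = 4.0         # wall-clock time for the whole batch
per_message_time_s = 0.020 # time for ONE message to cross end to end

latency_ms = per_message_time_s * 1000  # latency: time per message
throughput = messages / total_time_s    # throughput: messages per second

print(f"Latency: {latency_ms:.0f} ms per message")
print(f"Throughput: {throughput:.0f} messages/second")
```

Note that the two numbers are independent: a system can deliver many messages per second while each individual message still takes a long time to get through.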
An Interesting Analogy
My professor gave us an interesting analogy in class to get the message across and I hope it serves as helpful for everyone as it was for me.
Imagine a group of people who want to travel from one place to another. If we use a sports car to transport one person at a time, each individual gets there faster. Hence, the latency is low: the time per person is small. But to transport the entire group of people it takes much longer; the sports car needs to make n trips, i.e. the throughput of the system is low.
The GIF above shows one person speeding away, thus keeping the latency per person low.
Now if we had a school bus, we could have transported all of those people together in one go. In this case, the throughput would have been high, but the latency, which describes the time each individual person spends in transit, would have gone up. This is shown in the following image.
The GIF shows a bus transporting a bunch of people together, thus increasing the throughput of the system.
We cannot say which is better, a lower latency or a higher throughput. It is application dependent: different applications might prioritize different criteria.
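The car-versus-bus trade-off can be put into numbers. A small Python sketch with made-up figures (a 1-hour trip for the sports car, a 2-hour trip for the bus, 40 people to move, and an assumed empty return leg for the car):

```python
# Hypothetical figures: the sports car does the trip in 1 hour carrying
# 1 person; the slower bus takes 2 hours but carries all 40 at once.
people = 40
car_trip_h = 1.0   # one-way trip time for the car
bus_trip_h = 2.0   # one-way trip time for the bus

# Latency: how long one person spends travelling.
car_latency_h = car_trip_h
bus_latency_h = bus_trip_h

# Throughput: people delivered per hour. The car must shuttle back and
# forth; every trip except the last needs an empty 1-hour return leg.
car_total_h = people * car_trip_h * 2 - car_trip_h
bus_total_h = bus_trip_h

car_throughput = people / car_total_h
bus_throughput = people / bus_total_h

print(f"Car: latency {car_latency_h} h, throughput {car_throughput:.2f} people/h")
print(f"Bus: latency {bus_latency_h} h, throughput {bus_throughput:.2f} people/h")
```

With these (invented) numbers, the car wins on latency (1 h vs 2 h per person) while the bus wins on throughput (20 people/h vs roughly half a person per hour), which is exactly the trade-off in the GIFs.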
Another interesting analogy I found on the internet describes latency and throughput in terms of water flowing through a pipe. Latency depends on the length of the pipe: the shorter the pipe, the sooner water that enters it flows out the other end. Throughput depends on the diameter of the pipe: the wider the pipe, the more water can flow through it at once.
Interested readers can read the original post at: http://www.futurechips.org/thoughts-for-researchers/clarifying-throughput-vs-latency.html