
Network Performance Decoded: A brief look at performance limiters

December 10, 2024
Sumit Singh

Senior Product Manager

Rick Jones

Network Software Engineer


A few months back, we kicked off Network Performance Decoded, a series of whitepapers sharing best practices for network performance and benchmarking. Today, we're releasing the second installment — and this one's about some of those pesky performance limiters you're bound to run into. In this blog, we're giving you the inside scoop on how to tackle these issues head-on.

First up: A Brief Look at Network Performance Limiters

This whitepaper busts the myth that network interface card speed is the be-all and end-all of network performance. Sure, high Mbit/s or Gbit/s sounds great, but packet size and offloading techniques can make a huge difference in how efficiently your network actually runs. Here's a sneak peek:

  • Mbit/s isn't everything: It's not just about raw speed. How you package your data (packet size) seriously impacts throughput and CPU usage.

  • Bigger packets, better results: Larger packets mean less overhead per packet, which translates to better throughput and less strain on your CPU.

  • Offload for a TCP boost: TCP Segmentation Offload (TSO) and Generic Receive Offload (GRO) let your network interface card do some of the heavy lifting, freeing up your CPU and giving your TCP throughput a nice bump — even with smaller packets.

  • Watch out for packet-per-second limits: With smaller packets, throughput is often capped by how many packets per second your system can process, rather than by the link's bitrate.

  • At a constant bitrate, bigger packets are more efficient: Even with a steady bitrate, larger packets mean fewer packets overall, which leads to less CPU overhead and a more efficient network.
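To make the packet-size arithmetic concrete, here's a back-of-the-envelope sketch. The header sizes and the packets-per-second cap below are illustrative assumptions, not measurements from any particular system:

```python
# Back-of-the-envelope model of how packet size interacts with a
# packets-per-second (pps) limit. Header sizes and the pps cap are
# illustrative assumptions, not measured values.

ETH_IP_TCP_OVERHEAD = 14 + 20 + 20  # Ethernet + IPv4 + TCP headers, in bytes


def goodput_gbps(payload_bytes: int, pps_limit: float) -> float:
    """Payload throughput achievable when the system is pps-limited."""
    return payload_bytes * 8 * pps_limit / 1e9


def overhead_fraction(payload_bytes: int) -> float:
    """Share of each wire packet consumed by headers."""
    total = payload_bytes + ETH_IP_TCP_OVERHEAD
    return ETH_IP_TCP_OVERHEAD / total


if __name__ == "__main__":
    pps = 1_000_000  # assume the system tops out at 1M packets/s
    for payload in (100, 1460, 8900):
        print(f"{payload:>5} B payload: "
              f"{goodput_gbps(payload, pps):6.2f} Gbit/s, "
              f"{overhead_fraction(payload):5.1%} header overhead")
```

At the same packet rate, a 1460-byte payload moves over fourteen times more data than a 100-byte one, and a far smaller fraction of each packet is spent on headers — which is exactly why a pps-limited system favors bigger packets.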

Get a handle on these concepts and you'll be able to fine-tune your network for better performance and efficiency, no matter the advertised speed.

Next: A Brief Look at Round Trip Time

This whitepaper dives into TCP Round Trip Time (RTT) — a key network performance metric. You'll learn how it's measured, what can throw it off, and how to use that info to troubleshoot network issues like a pro. We'll show you how the receiving application's behavior can skew RTT measurements, and call out some important nuances to consider. For example, TCP RTT measurements do not include the time TCP may spend resending lost segments — time your applications still experience as latency. Lastly, we'll show how you can use tools like netperf (also included in our PerfKit Benchmarker toolkit) to get an end-to-end picture.
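You can see the receiver-behavior effect for yourself with a toy experiment — a minimal sketch, not netperf: an echo server on localhost whose artificial processing delay (and port numbers) are arbitrary choices for illustration. The application-level "RTT" the client measures includes the time the receiving application spends before replying:

```python
# Minimal sketch: measure application-level round trip time over a
# localhost TCP connection, and show how a slow receiver inflates it.
# The echo server, its artificial delay, and the port numbers are
# illustrative choices, not part of any real benchmarking tool.

import socket
import threading
import time


def echo_server(port, delay, ready):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    data = conn.recv(64)
    time.sleep(delay)        # receiving application is "busy"
    conn.sendall(data)       # reply only after the delay
    conn.close()
    srv.close()


def measure_rtt(port, delay=0.0):
    ready = threading.Event()
    t = threading.Thread(target=echo_server, args=(port, delay, ready))
    t.start()
    ready.wait()
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.monotonic()
    cli.sendall(b"ping")
    cli.recv(64)             # blocks until the server replies
    rtt = time.monotonic() - start
    cli.close()
    t.join()
    return rtt


if __name__ == "__main__":
    fast = measure_rtt(54101)
    slow = measure_rtt(54102, delay=0.05)
    print(f"idle receiver: {fast * 1000:.2f} ms")
    print(f"busy receiver: {slow * 1000:.2f} ms")
```

On a loopback connection the network itself contributes well under a millisecond, so the 50 ms spent by the "busy" receiver dominates the measurement — the same way a slow application can dominate what looks like network latency in production.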

Finally: A Brief Look at Path MTU Discovery

Last but not least, this whitepaper breaks down Path MTU discovery, a process that helps prevent IP fragmentation. Understanding how networks handle packet sizes can help you optimize your network setup, avoid those frustrating fragmentation issues, and troubleshoot effectively. We'll even walk you through common problems — like blocked ICMP messages that cause large packets to be silently dropped, with no notification ever reaching the sender — and how to fix them. Plus, you'll learn the difference between Maximum Transmission Unit (MTU) and Maximum Segment Size (MSS) — knowledge that'll come in handy when you're configuring your network and troubleshooting packet size problems.
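The MTU-to-MSS relationship boils down to simple arithmetic: the MSS is the MTU minus the IP and TCP headers. The sketch below assumes base headers with no IP or TCP options, which is a simplification — real connections often carry TCP options that shrink the usable payload further:

```python
# Quick illustration of the MTU-to-MSS relationship for TCP. Header
# sizes assume no IP options and no TCP options beyond the base
# headers — a simplification for illustration.

IPV4_HEADER = 20  # bytes, base IPv4 header
IPV6_HEADER = 40  # bytes, fixed IPv6 header
TCP_HEADER = 20   # bytes, base TCP header


def mss_for_mtu(mtu: int, ipv6: bool = False) -> int:
    """Largest TCP payload per packet that avoids IP fragmentation."""
    ip_hdr = IPV6_HEADER if ipv6 else IPV4_HEADER
    return mtu - ip_hdr - TCP_HEADER


if __name__ == "__main__":
    print(mss_for_mtu(1500))             # classic 1500-byte Ethernet MTU -> 1460
    print(mss_for_mtu(1500, ipv6=True))  # same MTU over IPv6 -> 1440
    print(mss_for_mtu(1460))             # a 1460-byte MTU -> 1420
```

This is why a path whose MTU is smaller than the sender expects causes trouble: segments sized for a 1500-byte MTU won't fit, and if the ICMP "packet too big" signal is blocked, they simply vanish.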

Stay tuned!

These resources are part of our mission to create an open, collaborative space for network performance benchmarking and troubleshooting. The examples might be from Google Cloud, but the ideas apply no matter where your workloads are running. You can find all our whitepapers (past, present, and future) on our webpage. Keep an eye out for more!
