This document in the Google Cloud Architecture Framework provides recommendations to help you optimize the performance of your networking resources and APIs in Google Cloud.
Network Service Tiers
Network Service Tiers lets you optimize the network cost and performance of your workloads. You can choose from the following tiers:
- Premium Tier uses Google's highly reliable global backbone to help you achieve minimal packet loss and latency. Traffic enters and leaves the Google network at a global edge point of presence (PoP) that's close to your end user. We recommend using Premium Tier as the default tier for optimal performance. Premium Tier supports both regional and global external IP addresses for VMs and load balancers.
- Standard Tier is available only for resources that use regional external IP addresses. Traffic enters and leaves the Google network at an edge PoP that's closest to the Google Cloud location where your workload runs. The pricing for Standard Tier is lower than Premium Tier. Standard Tier is suitable for traffic that isn't sensitive to packet loss and that doesn't have low latency requirements.
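As a hedged sketch (the VM name and zone below are placeholders), you can set the network tier as a project-wide default or per resource with the gcloud CLI:

```shell
# Set Standard Tier as the project-wide default for new resources.
# (Premium Tier is the default if you don't change this setting.)
gcloud compute project-info update --default-network-tier=STANDARD

# Override the tier for a single VM's external IP at creation time.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --network-tier=PREMIUM
```

The project-wide default applies only to resources created after you change it; existing resources keep the tier they were created with.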
You can view the network latency for Standard Tier and Premium Tier for each cloud region in the Network Intelligence Center Performance Dashboard.
Jumbo frames
Virtual Private Cloud (VPC) networks have a default maximum transmission unit (MTU) of 1460 bytes. However, you can configure your VPC networks to support an MTU of up to 8896 bytes (jumbo frames).
With a higher MTU, the network needs fewer packets to send the same amount of data, which reduces the bandwidth consumed by TCP/IP headers and increases the effective bandwidth of the network.
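To see the effect, compare the per-packet header overhead at the two MTU sizes. The sketch below assumes 40 bytes of TCP plus IPv4 headers per packet (20 bytes each, with no options):

```shell
# Payload efficiency = (MTU - header bytes) / MTU, assuming 40 bytes
# of TCP + IPv4 headers per packet and no TCP or IP options.
for mtu in 1460 8896; do
  awk -v mtu="$mtu" \
    'BEGIN { printf "MTU %d: %.2f%% of each packet is payload\n", mtu, 100 * (mtu - 40) / mtu }'
done
```

At the default MTU, about 97.3% of each packet is payload; with jumbo frames, about 99.6% is, and the per-packet processing cost is also amortized over more data.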
For more information about intra-VPC MTU and the maximum MTU of other connections, see the Maximum transmission unit page in the VPC documentation.
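For example, a sketch of creating a VPC network with jumbo frames enabled (the network name is a placeholder; you can also change the MTU of an existing network with `gcloud compute networks update`):

```shell
# Create a custom-mode VPC network with an 8896-byte MTU (jumbo frames).
# "jumbo-net" is a placeholder name.
gcloud compute networks create jumbo-net \
    --subnet-mode=custom \
    --mtu=8896
```
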
VM performance
Compute Engine VMs have a maximum egress bandwidth that depends in part on the machine type. When you choose a machine type, consider how much egress traffic you expect the VM to generate.
The Network bandwidth page contains a discussion and table of network bandwidths for Compute Engine machine types.
If your inter-VM bandwidth requirements are very high, consider VMs that support Tier_1 networking.
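A hedged sketch of enabling Tier_1 networking at VM creation time (the VM name, zone, machine type, and image are placeholders; Tier_1 is available only on certain larger machine series and requires the gVNIC network interface):

```shell
# Create a VM with per-VM Tier_1 networking for higher egress bandwidth.
# Tier_1 requires a supported machine type and a gVNIC-capable image.
gcloud compute instances create high-bandwidth-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-32 \
    --network-interface=nic-type=GVNIC \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1
```
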
Cloud Load Balancing
This section provides best practices to help you optimize the performance of your Cloud Load Balancing instances.
Deploy applications close to your users
Provision your application backends close to the location where you expect user traffic to arrive at the load balancer. The closer your users or client applications are to your workload servers, the lower the network latency between the users and the workload. To minimize latency to clients in different parts of the world, you might have to deploy the backends in multiple regions. For more information, see Best practices for Compute Engine regions selection.
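For example, with a global external load balancer, you might attach managed instance groups from multiple regions to one backend service so that traffic is served from the region closest to each user (the service, group, and region names below are placeholders):

```shell
# Attach backends in two regions to a global backend service.
gcloud compute backend-services add-backend web-backend-service \
    --global \
    --instance-group=web-ig-us \
    --instance-group-region=us-central1

gcloud compute backend-services add-backend web-backend-service \
    --global \
    --instance-group=web-ig-eu \
    --instance-group-region=europe-west1
```
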
Choose an appropriate load balancer type
The type of load balancer that you choose for your application can determine the latency that your users experience. For information about measuring and optimizing application latency for different load balancer types, see Optimizing application latency with load balancing.
Enable caching
To accelerate content serving, enable caching and Cloud CDN as part of your default external HTTP load balancer configuration. Make sure that the backend servers are configured to send the response headers that are necessary for static responses to be cached.
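A minimal sketch, assuming an existing global backend service named `web-backend-service`:

```shell
# Enable Cloud CDN on the load balancer's backend service.
gcloud compute backend-services update web-backend-service \
    --global \
    --enable-cdn

# The backend must also send cacheable response headers, for example:
#   Cache-Control: public, max-age=3600
```
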
Use HTTP when HTTPS isn't necessary
Google automatically encrypts traffic between proxy load balancers and backends at the packet level. Packet-level encryption makes Layer 7 encryption using HTTPS between the load balancer and the backends redundant for most purposes. Consider using HTTP rather than HTTPS or HTTP/2 for traffic between the load balancer and your backends. By using HTTP, you can also reduce the CPU usage of your backend VMs. However, when the backend is an internet network endpoint group (NEG), use HTTPS or HTTP/2 for traffic between the load balancer and the backend. This helps ensure that your traffic is secure on the public internet. For optimal performance, we recommend benchmarking your application's traffic patterns.
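For example, a hedged sketch of setting the load-balancer-to-backend protocol when you create a backend service (the service and health check names are placeholders):

```shell
# Use HTTP between the load balancer and backends; Google already
# encrypts this traffic at the packet level, so HTTPS on this hop is
# often redundant and adds CPU overhead on the backend VMs.
gcloud compute backend-services create web-backend-service \
    --global \
    --protocol=HTTP \
    --health-checks=http-basic-check
```
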
Network Intelligence Center
Google Cloud Network Intelligence Center provides a comprehensive view of the performance of the Google Cloud network across all regions. Network Intelligence Center helps you determine whether latency issues are caused by problems in your project or in the network. You can also use this information to select the regions and zones where you should deploy your workloads to optimize network performance.
Use the following tools provided by Network Intelligence Center to monitor and analyze network performance for your workloads in Google Cloud:
- Performance Dashboard shows latency between Google Cloud regions and between individual regions and locations on the internet. Performance Dashboard can help you determine where to place workloads for best latency and help determine when an application issue might be due to underlying network issues.
- Network Topology shows a visual view of your Virtual Private Cloud (VPC) networks, hybrid connectivity with your on-premises networks, and connectivity to Google-managed services. Network Topology provides real-time operational metrics that you can use to analyze and understand network performance and identify unusual traffic patterns.
- Network Analyzer is an automatic configuration monitoring and diagnostics tool. It verifies VPC network configurations for firewall rules, routes, configuration dependencies, and connectivity for services and applications. It helps you identify network failures, and provides root cause analysis and recommendations. Network Analyzer provides prioritized insights to help you analyze problems with network configuration, such as high utilization of IP addresses in a subnet.
API Gateway and Apigee
This section provides recommendations to help you optimize the performance of the APIs that you deploy in Google Cloud by using API Gateway and Apigee.
API Gateway lets you create and manage APIs for Google Cloud serverless backends, including Cloud Run functions, Cloud Run, and App Engine. These services are managed and scale automatically. However, as the applications deployed on these services scale, you might need to increase the quotas and rate limits for API Gateway.
Apigee provides the following analytics dashboards to help you monitor the performance of your managed APIs:
- API Proxy Performance Dashboard: Monitor API proxy traffic patterns and processing times.
- Target Performance Dashboard: Visualize traffic patterns and performance metrics for API proxy backend targets.
- Cache Performance Dashboard: Monitor performance metrics for Apigee cache, such as average cache-hit rate and average time in cache.
If you use Apigee Integration, consider the system-configuration limits when you build and manage your integrations.
What's next
Review the best practices for optimizing the performance of your compute, storage, database, and analytics resources:
- Optimize compute performance.
- Optimize storage performance.
- Optimize database performance.
- Optimize analytics performance.