This article discusses Google Cloud Platform's networking services and how they compare to traditional data center technologies. It also explores the differences between the two service models and describes Cloud Platform's VPC networking features in detail.
Service model comparison
In a traditional data center, you manage a complex network setup composed of racks of servers, storage devices, multiple layers of switches, routers, load balancers, firewall devices, and more. In addition to these hardware components, you must set up, maintain, and monitor the network's underlying software, as well as detailed device configurations for your environment. And the managerial overhead doesn't end there: you also have to worry about the security and availability of your network, and you must plan out the upgrades and expansions of your network—a lengthy and time-consuming process.
In contrast, Cloud Platform’s VPC networking infrastructure is built around a software-defined networking (SDN) model. This model removes much of the aforementioned maintenance and managerial overhead so that you can more rapidly customize and scale your services to help meet the needs of your growing customer base.
Cloud Platform's VPC networking services are designed to provide:
- High throughput
- Low latency
- Global availability
- Hitless upgrades
- Rapid scale
- Security, management, data privacy, and monitoring
Though Cloud Platform removes the need to manage physical data center networking components yourself, many familiar networking concepts, including subnets, routes, firewall rules, DNS, load balancing, VPN, NAT, and DHCP, remain applicable with Cloud Platform. Your prior knowledge in these areas will reduce the learning curve when dealing with VPC networking and help you drive best practices in your cloud environment.
VPC Network comprises Cloud Platform's fundamental networking technologies, including networks, subnets, IP addresses, routes, firewalls, VPN, and Cloud Router. This foundation allows you to create compute instances and containers that share a single, global VPC network, and can be added to regional subnets. With Cloud Platform, you have the flexibility to offer low-latency services in regions and zones close to your end users and to spread services across multiple failure domains for availability.
On Cloud Platform, a VPC network is a global, private RFC 1918 IP space. Because Cloud Platform's VPC networks are global by default, you no longer need to connect and manage multiple private networks separately to achieve global availability. When using a global VPC network, your Google Compute Engine virtual machine (VM) instances can be addressed within your VPC network by both IP address and name, as your VM instance names are automatically tracked as hostnames by Compute Engine's internal DNS.
You can use subnets within your VPC network to group your instances by region or zone and to control the segmentation of your IP address spaces. You can manage your own subnets using any private RFC 1918 IP range, or you can use an auto mode VPC network, which automatically creates subnets in every Cloud Platform region. IP ranges between subnets do not need to be contiguous and are dynamically expandable.
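As a quick illustration of the RFC 1918 constraint on subnet ranges, the following sketch uses Python's standard `ipaddress` module to check whether a candidate CIDR block falls within one of the three private ranges. The helper name `is_rfc1918` is illustrative, not part of any Cloud Platform API.

```python
import ipaddress

# The three RFC 1918 private ranges usable for VPC subnets.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    """Return True if the given CIDR falls entirely within an RFC 1918 range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(parent) for parent in RFC1918)

# Subnet ranges within a VPC network need not be contiguous:
print(is_rfc1918("10.128.0.0/20"))    # True
print(is_rfc1918("192.168.40.0/24"))  # True
print(is_rfc1918("8.8.8.0/24"))       # False
```

Because subnets only need to be valid private ranges, not adjacent ones, you can carve out non-contiguous blocks per region and expand them independently.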
To help control the isolation between subnets, you can add instance tags to individual VM instances or to instance templates, and then configure your routes and firewall rules to make use of these tags.
Instances support up to two IP addresses: an ephemeral, internal IP address, which is used for communicating within the VPC network, and an optional, external IP address, which is used to communicate outside the VPC network. External IP addresses are ephemeral by default, but can be static as well. For more information about how IP addresses work on Cloud Platform, see IP Addresses.
Cloud Platform provides robust control mechanisms to help your organization-wide network and security administrators safeguard your resources, manage your VPC networks, and authorize access to your external endpoints. On Cloud Platform, you can use identity and access management (IAM) roles, firewalls, and routes to help you implement policies and patterns to control and secure your VPC networks.
Google Cloud IAM provides several networking-related roles, including Network Admin, Network Viewer, Security Admin, and Compute Instance Admin, to clearly separate resource control. For more information about Cloud IAM and identity management on Cloud Platform, see the Management article.
Similar to your data center's DMZ, each VPC network has a firewall that blocks all incoming traffic from outside a VPC network to instances by default, and you can allow access by configuring firewall rules. Unlike traditional DMZs, however, Cloud Platform's firewalls are globally distributed to help avoid problems related to scaling with traffic.
By default, firewall rules are applied to the whole VPC network. However, you can apply a firewall rule to a set of instances by adding a specific tag to instances, and then applying the firewall rule to instances with that tag. You can also use firewall rules to control internal traffic between instances by defining a set of permitted source machines in the rule.
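The tag-based targeting described above can be sketched as a simple matching function. This is a simplified model for illustration, not Cloud Platform's actual firewall implementation; the `FirewallRule` class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FirewallRule:
    # Simplified model: allow traffic on these ports when the target
    # instance carries one of these tags (empty set = whole VPC network).
    allowed_ports: set
    target_tags: set = field(default_factory=set)

def rule_applies(rule: FirewallRule, instance_tags: set) -> bool:
    """A rule with no target tags applies network-wide; otherwise it
    applies only to instances carrying at least one matching tag."""
    return not rule.target_tags or bool(rule.target_tags & instance_tags)

allow_http = FirewallRule(allowed_ports={80, 443}, target_tags={"web"})
print(rule_applies(allow_http, {"web", "prod"}))  # True
print(rule_applies(allow_http, {"db"}))           # False
```

The key behavior to note is the default in the first branch: a rule with no tags is network-wide, so scoping a rule down is an explicit, opt-in act of adding tags on both the rule and the instances.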
Routes, or rules for forwarding traffic, are global resources that are tied to a single VPC network. You can apply routes to one or more instances in the VPC network by including target instance tags when creating the routes. The routes will then be added to any instances that also use those instance tags.
When you initialize Compute Engine or create a new VPC network, certain routes are automatically created by default. You can also add user-created routes to a VPC network as needed. For example, you can add a route that forwards traffic from one instance to another instance within the same VPC network, even across subnets, without requiring external IP addresses.
When you set up subnets within a VPC network, GCP creates routes between the subnets by default. However, unless you are using the default VPC network, you must create firewall rules to allow traffic among instances on your VPC network.
Routes also allow you to implement more advanced networking functions for your VM instances, such as setting up many-to-one NAT and transparent proxies. The default routes are designed so that the VPC network knows how to send traffic to the Internet and to all instances you create.
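Route selection follows the familiar longest-prefix-match principle: when multiple routes cover a destination, the most specific one wins. The sketch below illustrates this with a hypothetical route table (the next-hop names are invented for the example); it is not how Cloud Platform internally evaluates routes.

```python
import ipaddress

# Hypothetical route table: destination CIDR -> next hop.
routes = {
    "0.0.0.0/0":   "default-internet-gateway",
    "10.0.0.0/8":  "vpc-network",
    "10.1.2.0/24": "nat-gateway-instance",
}

def select_route(dest_ip: str) -> str:
    """Pick the matching route with the longest (most specific) prefix."""
    addr = ipaddress.ip_address(dest_ip)
    best_cidr, best_len = None, -1
    for cidr in routes:
        net = ipaddress.ip_network(cidr)
        if addr in net and net.prefixlen > best_len:
            best_cidr, best_len = cidr, net.prefixlen
    return routes[best_cidr]

print(select_route("10.1.2.5"))       # nat-gateway-instance
print(select_route("10.9.9.9"))       # vpc-network
print(select_route("93.184.216.34"))  # default-internet-gateway
```

This is why a user-created route such as the `/24` pointing at a NAT instance can override the broader network route for just that slice of the address space, while all other traffic continues to follow the defaults.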
In traditional data center environments, load balancers can create operational complexity and usually lack global scalability. To support the scalability and availability of your services, you can use Cloud Load Balancing, which distributes your compute resources behind a single global anycast IP. Cloud Load Balancing supports HTTP(S), TCP/SSL, and UDP load balancing, as well as SSL offloading, session affinity, health checks, and logging.
Layer 7 load balancing
HTTP(S) (Layer 7) load balancing automatically balances your traffic across regions, directing the traffic to the nearest instances and away from unhealthy instances. You can also add content-based load balancing to distribute traffic to instances optimized for specific content such as video. To help lower latency and improve performance for your users, you can also combine Cloud CDN, Cloud Platform's content delivery network service, with HTTP(S) load balancing.
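Content-based load balancing boils down to routing requests by URL path. The following sketch shows the idea with a hypothetical URL map; the path prefixes and backend names are invented for illustration and do not reflect Cloud Platform's configuration syntax.

```python
# Hypothetical URL map: path prefixes routed to specialized backend
# services, mirroring content-based HTTP(S) load balancing.
url_map = [
    ("/video/", "video-optimized-backend"),
    ("/static/", "cdn-backed-backend"),
]
DEFAULT_BACKEND = "web-backend"

def pick_backend(path: str) -> str:
    """Route a request to the first backend whose prefix matches the path."""
    for prefix, backend in url_map:
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND

print(pick_backend("/video/intro.mp4"))  # video-optimized-backend
print(pick_backend("/index.html"))       # web-backend
```

In the real service, each backend can be an instance group tuned for its content type (for example, video-serving instances with more egress bandwidth), while the default backend handles everything else.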
Layer 4 load balancing
Network load balancing (Layer 4) is available when TCP/UDP services need additional instances to handle traffic spikes. With network load balancing, you can load balance additional protocols such as SMTP traffic, add session affinity, and inspect packets. SSL proxy provides SSL termination for non-HTTPS traffic with load balancing, while SSL offloading allows you to centrally manage SSL certificates and decryption, moving that processor-intensive work off your instances.
Load balancing is backed by Compute Engine autoscaling, which automatically adds and removes instances from an instance group based on an autoscaling policy. You can choose one of the autoscaler's predefined autoscaling policies, or define a custom policy based on Stackdriver Monitoring metrics.
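A target-utilization autoscaling policy can be approximated with simple arithmetic: scale the group so that the measured average utilization returns to the target. The sketch below is a simplified model of that behavior, not the autoscaler's actual algorithm (which also accounts for cooldown periods and per-instance data).

```python
import math

def recommended_size(current_size: int, current_util: float,
                     target_util: float) -> int:
    """Simplified sketch of a target-utilization policy: resize the
    instance group so average utilization returns to the target."""
    return max(1, math.ceil(current_size * current_util / target_util))

# 4 instances running at 90% CPU against a 60% target:
print(recommended_size(4, 0.90, 0.60))  # 6
```

The same formula shrinks the group when utilization falls below target, which is why autoscaling both absorbs spikes and trims cost during quiet periods.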
Google Cloud DNS
You can use Google Cloud DNS to make your applications and services globally available to users with very high availability and low latency. Cloud DNS scales to large numbers of DNS zones and records so you can reliably create and update millions of DNS records.
If you’re designing a hybrid architecture, in which you will maintain some resources on Cloud Platform and other resources elsewhere, Cloud Platform provides a variety of ways to integrate with your external environment.
The Cloud VPN managed service allows data from your private cloud to feed into your Cloud-Platform-hosted services through secure IPsec tunnels. Each Cloud VPN gateway can sustain up to 1.5 Gbps across its tunnels. For guidance on configuring Cloud VPN, see the VPN Interoperability Guides.
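Because the 1.5 Gbps figure is a per-gateway ceiling, sizing a hybrid link is a matter of dividing your required bandwidth by that capacity. A minimal sketch of that capacity planning (the function name is illustrative, and this ignores per-tunnel limits and protocol overhead):

```python
import math

GATEWAY_CAPACITY_GBPS = 1.5  # each Cloud VPN gateway sustains up to 1.5 Gbps

def gateways_needed(required_gbps: float) -> int:
    """Minimum number of VPN gateways to sustain the required bandwidth."""
    return max(1, math.ceil(required_gbps / GATEWAY_CAPACITY_GBPS))

print(gateways_needed(4.0))  # 3
print(gateways_needed(1.0))  # 1
```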
Google Cloud Router
As you expand your cloud subnets, create additional cloud applications, or make changes to your private cloud subnets, you can use Google Cloud Router to automatically discover and advertise your network changes. Cloud Router supports multiple VPN tunnels through either ECMP routing or a primary/backup setup. Router events, BGP events, and route events appear in Stackdriver Logging, and Cloud Router publishes metrics to Stackdriver Monitoring.
For customers that require an enterprise-grade connection with higher availability and lower latency than their existing connections, Cloud Platform offers Cloud Interconnect. Your network traffic goes directly to Cloud Platform, avoiding unnecessary trips through third-party networks. Cloud Interconnect provides the following services:
- Carrier Peering services, which are provided through a group of service provider partners.
- Direct Peering services. You can peer with Google at any of its 70+ peering locations to exchange high throughput cloud traffic, provided that you can meet Google at its peering location and satisfy Google's peering requirements.
- CDN Interconnect, which allows select providers to establish direct interconnect links with the Google edge network. This edge connection lets you optimize your traffic, allowing you to cache large data files near your users and reduce access latency for frequently updated content.
The egress traffic from a given VM instance is subject to maximum network egress throughput caps. These caps are dependent on the number of cores that the VM instance has. Each core is subject to a 2 Gbps cap for peak performance. Each additional core increases the network cap, up to a theoretical maximum of 16 Gbps for each instance. The actual performance you experience will vary depending on your workload. All caps are meant as maximum possible performance, and not sustained performance.
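The cap arithmetic described above is straightforward to express: 2 Gbps per core, clamped at a 16 Gbps per-instance ceiling. The function below is a sketch of that calculation for planning purposes; as the text notes, these are peak figures, not sustained throughput.

```python
def egress_cap_gbps(vcpus: int, per_core_gbps: float = 2.0,
                    max_gbps: float = 16.0) -> float:
    """Peak egress cap: 2 Gbps per core, up to a 16 Gbps instance ceiling."""
    return min(vcpus * per_core_gbps, max_gbps)

print(egress_cap_gbps(1))   # 2.0
print(egress_cap_gbps(4))   # 8.0
print(egress_cap_gbps(16))  # 16.0  (ceiling reached at 8 cores)
```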
To measure the performance of Cloud Platform and other cloud offerings against the performance of your native data center environment, you can use PerfKit Benchmarker, an open-source benchmarking tool. To benchmark Cloud Platform, you can run PerfKit Benchmarker's predefined benchmark sets, which include network throughput tests. For a simple example in which PerfKit Benchmarker is used to measure network throughput between instances, see Measuring network throughput.
For product-specific pricing details, see the following pages:
- VPC Network: Network Pricing
- Cloud Interconnect: Cloud Interconnect Pricing and Direct Peering Pricing
- Cloud DNS: Cloud DNS Pricing
You can also use the Cloud Platform pricing calculator to model costs for specific scenarios.
Review best practices
Try some hands-on tutorials
Ready to get your hands dirty? Take a look at the Networking 101 and Networking 102 codelabs, which walk you through basic and intermediate networking tasks using Cloud Platform's VPC networking services.
Read about Google's networking stack
For over a decade, Google has invested heavily to innovate in software-defined networking in the cloud for optimal performance and cost benefits. The following articles and research papers provide insight into various aspects of Google's networking stack:
- A Guided Tour of Data-Center Networking (2012)
- B4: Experience with a Globally-Deployed Software Defined WAN (2013)
- Enter the Andromeda zone - Google Cloud Platform’s latest networking stack (2014)
- Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google’s Datacenter Network (2015)
- Maglev: A Fast and Reliable Software Network Load Balancer (2016)