Updated July 18, 2017
Compare the networking services that Azure and Google provide in their respective cloud environments. Networking services provide connectivity across virtual machines, other cloud services, and on-premises servers.
Networks and subnetworks
Both Azure and Cloud Platform provide isolated virtual networks. Azure VNets are regional resources, and Cloud Platform VPC networks are global resources. In both Azure and Cloud Platform, you can create multiple networks within a given subscription or project, and you can further segment these networks into one or more subnetworks. In addition, VMs deployed within a virtual network can communicate with each other without any additional configuration, regardless of the subnetwork they're in.
On Azure, you can shrink or extend the IP range of your subnet only if the subnet does not already contain VMs or services. In contrast, on Cloud Platform, you can shrink or extend your subnet's IP range without affecting the VMs within the subnet.
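The safety condition behind such a resize can be sketched with Python's standard `ipaddress` module (the function name and addresses are illustrative, not part of either platform's API): a proposed range is safe only if it still covers every address already in use.

```python
import ipaddress

def range_covers(proposed_cidr, in_use_ips):
    """Return True if every in-use address falls inside the proposed range."""
    proposed = ipaddress.ip_network(proposed_cidr)
    return all(ipaddress.ip_address(ip) in proposed for ip in in_use_ips)

# Expanding 10.0.1.0/24 to 10.0.0.0/16 keeps existing VMs reachable:
print(range_covers("10.0.0.0/16", ["10.0.1.5", "10.0.1.9"]))  # True
# Shrinking to 10.0.1.0/25 would strand an address in the upper half:
print(range_covers("10.0.1.0/25", ["10.0.1.200"]))            # False
```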
Cloud Platform also allows you to share virtual networks across projects that are grouped within the same Cloud Organization. This capability is referred to as Shared VPC Networking. With Shared VPC, Cloud Organization administrators can give multiple projects permission to use a single, shared virtual network and corresponding networking resources. For more information about Shared VPC, see Shared VPC Networks Overview. Azure does not provide a feature comparable to Shared VPC.
Network interfaces and IP addresses
At a high level, VPC networks and Azure VNets handle IP addresses in similar ways. At launch, all VMs have a private internal IP. You can optionally associate an external IP with your VM, and this IP can be dynamic or static. However, the low-level details of each service differ slightly.
Azure treats network interfaces (NICs) and all IP address types as resources. You associate a public IP address resource with a NIC resource that is associated with a specific VM. Azure supports up to 32 NICs per machine, depending on the machine type. Each NIC supports up to 50 IP configurations, each of which can support a single public IP address and a single private IP address.
By default, your public IP address is ephemeral. If you dissociate a public IP address resource from a NIC resource, the public IP address resource persists, but the IP address itself is dropped. If you associate the public IP address resource with a new VM, a new IP address is attached to the resource.
Your VM's private IP address is also ephemeral. The VM drops its private IP address when you stop or decommission the VM, and picks up a new IP address when you start the VM again.
You can configure your public or private IP address to be static as well. If your IP address is static, it persists even if the IP address resource is dissociated from any NICs.
On Compute Engine, NICs are not treated as separate resources. Instead, they are tied directly to a given VM instance. Compute Engine VM instances support up to eight NICs, depending on machine type. Each NIC supports a single external IP address and a single internal IP address.
As with Azure, Compute Engine gives each VM instance an internal IP address, and you can attach an external IP address that is either ephemeral or static. Both internal and ephemeral external IP addresses are part of your VM instance rather than being separate resources. However, if you convert your external IP address from ephemeral to static, it becomes an independent resource that can be detached from the VM.
In contrast with Azure, your VM instance's internal IP address is reserved for the life of the instance.
Compute Engine also lets you assign a range of IP addresses as aliases to a VM instance's primary IP address. This feature allows you to assign separate IPs to separate services running on the same VM instance, making it particularly useful for use cases involving containerization. In addition, you can set up a secondary CIDR range in your subnet that can be used to assign alias IPs for that subnet.
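As a sketch of the arithmetic involved (the secondary range and service names below are invented for illustration), a secondary CIDR range can be carved into per-service alias ranges with Python's `ipaddress` module:

```python
import ipaddress

# Hypothetical secondary range configured on the subnet for alias IPs.
secondary = ipaddress.ip_network("10.128.0.0/20")

# Carve /24 alias ranges out of the secondary range, one per service
# running on the same VM instance (service names are illustrative).
services = ["frontend", "api", "cache"]
alias_ranges = dict(zip(services, secondary.subnets(new_prefix=24)))

for name, cidr in alias_ranges.items():
    print(f"{name}: {cidr}")
# frontend: 10.128.0.0/24
# api: 10.128.1.0/24
# cache: 10.128.2.0/24
```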
| IP type | Azure | Cloud Platform |
|---|---|---|
| Permanent IP | Public Static IP | Static IP |
| Temporary IP | Public Dynamic IP | Ephemeral IP |
| Internal IP | Private IP | Internal IP |
Instance migration
Azure automatically handles the migration of your VMs when the underlying hardware is being updated or is failing. Though this process is usually seamless, you might occasionally be required to reboot your VM for updates to take effect. In some rare cases, Azure might need to force-reboot your VM.
Similarly, Cloud Platform features live migration, in which Cloud Platform automatically and transparently migrates VM instances when their host hardware needs maintenance or replacement. As with Azure's migration process, live migration is usually seamless but can require a VM instance reboot in some rare cases. Live migration is enabled by default for most machine types. However, machines with GPUs attached cannot use live migration, because GPUs are attached directly to the host hardware. For more information about the live migration feature, see the blog post.
Firewalls
Azure and Compute Engine both allow users to configure stateful firewall policies to selectively allow and deny traffic to networked resources. In both environments, each virtual network blocks all incoming traffic from outside the network by default. You can allow access to a specific resource by applying rules. On Azure, each rule is configured as a network security group (NSG), and on Compute Engine, each rule is configured as a firewall rule.
In both Cloud Platform and Azure, you can associate tags with your NSGs or firewall rules, allowing you to apply the rules to resources that use a given tag. On Azure, you can associate only a single NSG with a given subnet or NIC. In contrast, on Cloud Platform, you can apply multiple firewall rules to a resource, allowing for more granular management of your firewall.
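The tag-matching behavior can be sketched as follows (the rule and instance shapes here are invented for illustration and are not either platform's actual API):

```python
# A packet is allowed if any rule whose target tags intersect the
# instance's tags matches the packet's protocol and port.
RULES = [
    {"name": "allow-http", "target_tags": {"web"},   "protocol": "tcp", "port": 80},
    {"name": "allow-ssh",  "target_tags": {"admin"}, "protocol": "tcp", "port": 22},
]

def is_allowed(instance_tags, protocol, port):
    # Default-deny: incoming traffic is blocked unless some rule allows it.
    return any(
        rule["target_tags"] & instance_tags
        and rule["protocol"] == protocol
        and rule["port"] == port
        for rule in RULES
    )

print(is_allowed({"web"}, "tcp", 80))  # True: allow-http applies
print(is_allowed({"web"}, "tcp", 22))  # False: allow-ssh targets "admin" only
```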
Azure also allows users to configure stateless firewall rules by creating endpoint access control lists (ACLs), which can be applied at the VM level. Cloud Platform does not support stateless firewall rules.
Load balancing
Load balancers distribute incoming traffic across multiple instances. When configured appropriately, load balancers make applications fault-tolerant and increase application availability.
At a high level, Azure's load-balancing components map to those of Cloud Platform as follows:
| Component | Microsoft Azure | Google Cloud Platform |
|---|---|---|
| HTTP load balancing | Application Gateway (can be paired with Traffic Manager for cross-region load balancing) | Compute Engine HTTP(S) load balancer |
| TCP/UDP load balancing | Azure Load Balancer (Internet-facing load balancer) | Network load balancer, TCP proxy load balancer (cross-region) |
| Internal load balancing | Azure Load Balancer (Internal load balancer) | Compute Engine internal load balancer |
| SSL load balancing | Application Gateway (HTTPS traffic) | Compute Engine HTTP(S) load balancer (HTTPS traffic), SSL proxy load balancer (encrypted non-HTTP traffic) |
HTTP(S) load balancing
Azure and Compute Engine provide layer 7 load balancing, which distributes client requests at the application layer to achieve more sophisticated routing than layer 4 load balancing can provide. Azure provides this service through Application Gateway, and Compute Engine provides this service through its HTTP(S) load balancer. These services map to each other as follows:
| Feature | Application Gateway | Compute Engine HTTP(S) Load Balancer |
|---|---|---|
| HTTP load balancing | Yes | Yes |
| Single, global IP across regions | No | Yes (both IPv4 and IPv6) |
| Cross-region load balancing | When paired with Traffic Manager (DNS-based) | Supported natively (IP-based) |
| Content-based load balancing | Yes | Yes |
| Session affinity | Yes (cookie-based) | Yes (cookie-based and IP-based) |
| Logging | Yes | Yes (currently in Alpha) |
| Load distribution | Round robin | CPU utilization or requests per second (RPS) |
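The difference in load-distribution modes can be illustrated with a small sketch (backend names and utilization figures are made up):

```python
import itertools

backends = {"b1": 0.90, "b2": 0.35, "b3": 0.60}  # name -> CPU utilization

# Round robin ignores load and simply cycles through backends in order.
rr = itertools.cycle(backends)
print([next(rr) for _ in range(4)])  # ['b1', 'b2', 'b3', 'b1']

# Utilization-aware selection sends the next request to the least
# loaded backend instead.
def least_utilized(util):
    return min(util, key=util.get)

print(least_utilized(backends))  # 'b2'
```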
Cross-region HTTP(S) load balancing
Application Gateway and Compute Engine both provide ways of achieving cross-region HTTP(S) load balancing.
On Azure, you can configure cross-region HTTP(S) load balancing by placing Traffic Manager, Azure's DNS-based traffic routing service, in front of multiple Application Gateway endpoints. Traffic Manager then routes traffic to the Application Gateway endpoint closest to the end user according to the routing strategy you have chosen.
In contrast with Application Gateway, Compute Engine HTTP(S) load balancing supports cross-region HTTP(S) load balancing natively. When you construct your HTTP(S) load balancer, you set up a global forwarding rule that holds a single, global IP entry point. This rule sends traffic through a target proxy that distributes your traffic to the relevant backends. The HTTP(S) load balancer directs traffic to the backends closest to the end user. Beyond this native support, Compute Engine's global IP-based approach offers significant performance benefits over DNS-based load balancing:
- IP-based load balancing is load-aware. You can configure Compute Engine's global load balancer to distribute traffic based on CPU utilization or requests per second. In contrast, DNS-based load balancing is load-unaware, and must route traffic based on general routing strategies.
- DNS-based load balancing is at the mercy of ISPs. For a DNS change to take effect, the change must be recorded at the ISP level. Because ISPs often cache DNS records for hours at a time, they might not pick up your DNS changes in a timely manner, causing your end users to be routed to an overutilized—or even failing—backend service. Global IP-based load balancing can help you dodge this source of potential issues.
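A toy resolver model (all names, addresses, and TTLs below are invented) shows why DNS caching is a problem for failover: a cached record keeps winning until its TTL expires, even after the authoritative answer changes.

```python
class CachingResolver:
    """Serves cached DNS answers until the record's TTL expires."""

    def __init__(self):
        self.cache = {}  # name -> (ip, expiry_time)

    def resolve(self, name, now, authoritative):
        ip, expiry = self.cache.get(name, (None, 0))
        if now < expiry:
            return ip  # possibly stale answer, served from cache
        ip, ttl = authoritative[name]
        self.cache[name] = (ip, now + ttl)
        return ip

auth = {"app.example.com": ("203.0.113.10", 3600)}  # record with 1h TTL
r = CachingResolver()
print(r.resolve("app.example.com", 0, auth))     # 203.0.113.10
auth["app.example.com"] = ("203.0.113.99", 3600)  # failover at the origin
print(r.resolve("app.example.com", 1800, auth))  # still 203.0.113.10 (cached)
print(r.resolve("app.example.com", 4000, auth))  # 203.0.113.99 (TTL expired)
```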
Session affinity
Session affinity allows you to map a specific client to a specific backend, potentially saving server-side resources.
Application Gateway provides cookie-based session affinity, which associates a client with a specific backend VM by storing a client-side cookie.
Compute Engine's HTTP(S) load balancer also provides cookie-based affinity. In addition, the HTTP(S) load balancer provides IP-based affinity, which forwards all requests from a specific client IP address to the same VM instance.
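IP-based affinity boils down to a stable hash of the client address. A minimal sketch (instance names are placeholders; real load balancers use richer hashing, but the principle is the same):

```python
import hashlib

INSTANCES = ["vm-a", "vm-b", "vm-c"]

def pick_instance(client_ip):
    """Map a client IP to one instance; the same IP always maps the same way."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return INSTANCES[int.from_bytes(digest[:4], "big") % len(INSTANCES)]

# Repeated requests from one client always hit the same instance:
assert pick_instance("198.51.100.7") == pick_instance("198.51.100.7")
print(pick_instance("198.51.100.7"))
```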
On Application Gateway, you must manually add VMs when your load balancer's serving capacity is exceeded. In contrast, Compute Engine's HTTP(S) load balancer offers autoscaling of VM instances based on the load balancer's serving capacity, allowing you to handle traffic overflow without the need for manual intervention. This autoscaling occurs instantly, without any need for prewarming. For more information, see Load Balancing and Scaling.
TCP/UDP load balancing
Both Azure and Compute Engine provide layer 4 load balancing, which distributes client requests within a region at the transport layer. Azure provides this service through the Azure Load Balancer, and Cloud Platform provides this service through the Compute Engine network load balancer.
These services map to each other as follows:
| Feature | Azure Load Balancer | Compute Engine Network Load Balancer |
|---|---|---|
| TCP/UDP load balancing | Yes | Yes |
| Internal load balancing | Yes | Yes |
| Internet-facing load balancing | Yes | Yes |
| Supported application-layer protocols | Any | Any |
| Supported endpoints | Azure VMs (excluding Basic VMs), Cloud Services role instances | Target pools, target VM instances, backend services (internal load balancing only) |
| Default load balancing mode | 5-tuple (source and destination IPs, source and destination ports, protocol type) | 5-tuple (source and destination IPs, source and destination ports, protocol type) |
| Session affinity modes | 2-tuple (source and destination IPs), 3-tuple (source and destination IPs, protocol type) | 2-tuple (source and destination IPs), 3-tuple (source and destination IPs, protocol type) |
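The tuple modes above can be sketched as hash functions over different sets of packet fields (all addresses, ports, and backend names below are illustrative): the default 5-tuple hash can spread a client's connections across backends, while a 2-tuple hash pins all of a client's traffic to one backend, giving session affinity.

```python
BACKENDS = ["vm-1", "vm-2", "vm-3", "vm-4"]

def pick(backends, *key_fields):
    """Map a tuple of packet fields to a backend via a hash."""
    return backends[hash(key_fields) % len(backends)]

src, dst = "198.51.100.7", "203.0.113.20"

# 5-tuple: source/dest IPs, source/dest ports, protocol. New connections
# (different source ports) can land on different backends.
five_a = pick(BACKENDS, src, 40001, dst, 443, "tcp")
five_b = pick(BACKENDS, src, 40002, dst, 443, "tcp")

# 2-tuple: source and destination IPs only. Every connection from this
# client maps to the same backend.
two_a = pick(BACKENDS, src, dst)
two_b = pick(BACKENDS, src, dst)
assert two_a == two_b
```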
Cross-region TCP/UDP load balancing
On Azure, you can improve end-user latency by placing Traffic Manager in front of your Azure Load Balancers, configuring it to dynamically route to their public IP addresses. This arrangement allows you to use a single DNS configuration to route traffic to the VMs that are closest to your users.
On Compute Engine, you can achieve similar results by setting up TCP proxy load balancing or SSL proxy load balancing. These services are similar in design to the Compute Engine HTTP(S) load balancer, but support a wider variety of application-layer protocols. As with the HTTP(S) load balancer, both services allow you to use a single, global IP address to distribute traffic to the VM instances that are closest to your users.
IP-based load balancing offers significant performance advantages over DNS-based load balancing, including load-based traffic distribution and more consistent performance. See the Cross-region HTTP(S) load balancing section for more information.
Costs
Azure Load Balancer charges are included in the pricing of Azure virtual machines.
Application Gateway charges an hourly rate for each gateway and a separate per-GB rate for traffic processed through the gateway.
Compute Engine charges an hourly rate for each forwarding rule and a separate per-GB rate for traffic processed through the load balancer.
DNS
A DNS service translates from human-readable domain names to the IP addresses that servers use to connect with each other. Managed DNS services, such as Azure DNS and Google Cloud DNS, offer scalable managed DNS in the cloud.
Azure DNS and Cloud DNS are very similar. Both support most common DNS record types, as well as anycast-based serving. Neither service currently supports DNSSEC.
Connectivity
Azure's connectivity services compare to those of Cloud Platform as follows:
| Connectivity service | Azure | Cloud Platform |
|---|---|---|
| VPC peering | VNet peering | VPC network peering |
| Virtual private network | Azure VPN Gateways | Cloud VPN |
| Dedicated private connection through carrier partner | ExpressRoute | N/A |
| Dedicated public connection through carrier partner | N/A | Carrier Interconnect |
| Dedicated direct connection | N/A | Direct Peering |
| CDN peering | N/A | CDN Interconnect |
Azure and Cloud Platform both provide a virtual private network (VPN) service. Azure offers Azure VPN Gateway, and Cloud Platform offers Google Cloud VPN as part of Compute Engine. In each service, you create a tunnel from an external network to your internal Azure or Compute Engine virtual network, and then establish a secure connection over that tunnel.
To route traffic on Cloud Platform, you can use Google Cloud Router, which enables dynamic BGP route updates between your Compute Engine network and your non-Google network. Azure offers a similar routing service as a part of the Azure VPN Gateway service.
Virtual network peering
Both Azure and Cloud Platform provide the ability to peer one or more virtual networks. On Azure, this feature is called VNet peering, and on Cloud Platform, this feature is called VPC network peering. Virtual network peering has several advantages over external IP addresses or VPNs, including:
- Lower latency than public IP networking
- Increased security, as service owners can avoid exposing their services to the public Internet with its associated risks
Azure's VNet peering maps to Cloud Platform's VPC network peering as follows:
| Feature | VNet peering | VPC network peering |
|---|---|---|
| Locality restrictions | Peered networks must be in the same region | No restrictions |
| Supported services | VMs, Cloud Services | Compute Engine, App Engine flexible environment |
| Maximum number of peered networks per network | Up to 10 by default, with a maximum of 50 on request | Up to 25 |
| Overlapping IP addresses | Not supported | Not supported |
| Transitive peering | Not supported | Not supported |
On Azure, you can peer between VNets that exist in different subscriptions. Similarly, on Cloud Platform, you can peer between VPC networks that exist in different projects or organizations. To peer VPC networks between projects, you can peer a given VPC network with a Shared VPC network. To peer VPC networks between organizations, you can peer with VPC networks that exist within projects within those organizations.
In some scenarios, an on-premises-to-cloud VPN might not provide the speed or security required by a particular workload. For these situations, Azure and Google, in conjunction with partners, both allow you to lease a network line that has a guaranteed capacity level.
Azure offers ExpressRoute, which allows you to set up a private leased line into Azure by way of a partner carrier facility. Each partner location services a specific region.
Google offers an equivalent service called Cloud Interconnect. As with ExpressRoute, Cloud Interconnect allows you to set up a regional leased line into Cloud Platform by way of a partner facility. However, the line uses public networks.
In addition to carrier peering, Cloud Platform also allows you to connect directly to Google's network rather than through a third-party partner. Azure does not offer this service.
Content delivery network (CDN) peering
Content delivery network (CDN) peering provides a connection between your resources in the cloud and a CDN provider by way of network edge locations. Google provides CDN peering for several CDN providers through its CDN Interconnect service.
Azure does not provide CDN peering as a service. However, Azure features dedicated connections to the Akamai and Verizon CDNs as part of its Azure CDN service.
Costs
Azure and Cloud Platform both charge for VPN services at an hourly rate.
For Cloud Interconnect, the cost of carrier peering is set by the partner leasing the line. For ExpressRoute, Azure offers two billing models:
- Unlimited data, in which you pay a monthly fee for unlimited data transfer.
- Metered data, in which you pay a smaller monthly fee but are also charged for per-GB network egress.
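The break-even point between the two ExpressRoute billing models is simple arithmetic. The figures below are placeholders, not actual Azure rates; check current pricing before drawing conclusions:

```python
# Hypothetical monthly fees and egress rate, for illustration only.
UNLIMITED_MONTHLY = 500.00   # flat fee, unlimited data transfer
METERED_MONTHLY = 300.00     # smaller flat fee
EGRESS_PER_GB = 0.05         # per-GB network egress charge

def metered_cost(egress_gb):
    return METERED_MONTHLY + EGRESS_PER_GB * egress_gb

# Below this egress volume, the metered plan is cheaper.
break_even_gb = (UNLIMITED_MONTHLY - METERED_MONTHLY) / EGRESS_PER_GB
print(break_even_gb)                            # 4000.0 (GB per month)
print(metered_cost(1000) < UNLIMITED_MONTHLY)   # True at 1 TB of egress
```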
Cloud Platform does not charge for direct peering.
As with carrier peering, Cloud Platform's CDN peering costs are set by the peering partner. Cloud Platform does not charge for CDN Interconnect.