Updated June 29, 2016
Compare the networking services that Amazon and Google provide in their respective cloud environments. Networking services provide connectivity across virtual machines, on-premises servers, and other cloud services.
Service model comparison
Amazon Web Services
Because most of Amazon's web services are deployed on Amazon Elastic Compute Cloud (EC2) instances, Amazon's networking services are heavily tied to Amazon EC2. Amazon Web Services (AWS) has two different networking stacks, both of which center on Amazon EC2:
- Elastic Compute Cloud-Classic (EC2-Classic), their original offering.
- Amazon Virtual Private Cloud (VPC), their current offering.
Amazon EC2-Classic launches all instance types into a public, shared network, where each instance has access to the Internet and is assigned a public IP address. This offering has been deprecated since late 2013, and can only be used by accounts created before that date.
Amazon VPC is a newer model, with support for a wider array of networking features. For example, Amazon VPC offers the following upgrades:
- Support for creating private RFC 1918 address spaces and subnetting
- Network access control lists (NACLs)
- Inbound and outbound firewall rules
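The private address spaces that Amazon VPC (and, as described below, Cloud Platform networks) support come from RFC 1918: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. As a rough sketch of what subnetting such a space means, Python's standard `ipaddress` module can divide an illustrative /16 address space into /24 subnets (the CIDR values here are examples, not recommendations):

```python
import ipaddress

# An illustrative VPC address space drawn from the RFC 1918 10.0.0.0/8 range.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets, as you might when placing one subnet
# per availability zone.
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))    # 256 subnets of 256 addresses each
print(subnets[0])      # 10.0.0.0/24
print(vpc.is_private)  # True: the range falls inside RFC 1918 space
```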
Google Cloud Platform
In contrast, Google Cloud Platform treats networking as a global feature that spans all services. Cloud Platform's networking is based on Google’s Andromeda architecture, which can create networking elements at any level in software. This software-defined networking allows Cloud Platform's services to implement networking features that fit their exact needs, such as secure firewalls for virtual machines in Google Compute Engine, fast connections between database nodes in Cloud Bigtable, or fast query results in BigQuery.
When you create virtual machine instances in a Cloud Platform project, Compute Engine automatically connects them to a default internal network. If needed, you can create additional networks as well. As with Amazon VPC, each network is private, and each supports firewall rules, routing, VPNs, private RFC 1918 address spaces, and subnetting.
Most of the networking entities in Cloud Platform, such as load balancers, firewall rules, and routing tables, have global scope. More importantly, networks themselves have a global scope. This means that you can create a single private IP space that is global, without having to connect multiple private networks and manage those spaces separately. Due to this single, global network, your Compute Engine instances can be addressed within your network by both IP address and name.
IP addresses
Cloud Platform and Amazon VPC handle IP addresses in very similar ways. At launch, all instances have an internal IP. You can optionally request an external IP. By default, an external IP is ephemeral, meaning that it is tied to the life of the instance.
You can also request a permanent IP address, called an Elastic IP in Amazon VPC and a static IP in Cloud Platform, to attach to an instance. In both services, a static IP address is yours until you choose to release it, and can be attached to and detached from instances as needed.
Amazon EC2-Classic takes a slightly different approach to IP addresses. When you create an instance in Amazon EC2-Classic, the instance is given an external, ephemeral IP address that is valid only while the machine is running, as well as an internal network IP. You can create an Elastic IP and assign it to the instance at any time. Amazon VPC behaves much the same way, except that assigning an external IP to a new instance is optional.
AWS's terminology for IP types maps to that of Cloud Platform as follows:
| IP type | AWS | Cloud Platform |
|---|---|---|
| Permanent IP | Elastic IP | Static IP |
| Temporary IP | Ephemeral IP | Ephemeral IP |
| Internal IP | Internal IP | Internal IP |
Load balancing
Load balancers distribute incoming traffic across multiple instances. When configured appropriately, load balancers make applications fault-tolerant and increase application availability.
AWS's Elastic Load Balancing (ELB) service allows you to direct traffic to your instances within one or several availability zones in a given region. The ELB service performs regular health checks on each target instance, and redirects traffic if an instance becomes unhealthy. In addition, the ELB service can be integrated with AWS’s Auto Scaling service, adding and removing instances automatically when Auto Scaling scales them up or down.
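Neither provider publishes its balancer internals, but the core behavior described above, rotating traffic across targets while skipping instances that fail health checks, can be sketched in a few lines of Python (the instance names are invented):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy load balancer: rotates through instances, skipping unhealthy ones."""

    def __init__(self, instances):
        self.healthy = dict.fromkeys(instances, True)  # instance -> health flag
        self._ring = cycle(instances)

    def mark(self, instance, healthy):
        """Record the result of a health check."""
        self.healthy[instance] = healthy

    def next_target(self):
        """Return the next healthy instance, or raise if none remain."""
        for _ in range(len(self.healthy)):
            candidate = next(self._ring)
            if self.healthy[candidate]:
                return candidate
        raise RuntimeError("no healthy instances")

lb = RoundRobinBalancer(["i-a", "i-b", "i-c"])
lb.mark("i-b", False)  # a failed health check removes i-b from rotation
targets = [lb.next_target() for _ in range(4)]
print(targets)         # i-b never appears among the chosen targets
```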
When you create an Elastic Load Balancer, AWS provides a DNS name to which you can direct traffic. If you are using Amazon’s Route 53 service, you can point a root domain at the Elastic Load Balancer. Otherwise, you have to use a CNAME record for the Elastic Load Balancer.
Like ELB, Compute Engine's load balancer directs traffic to backend instances in one or many zones. Compute Engine's load balancer also has some additional unique features:
- Compute Engine lets you choose between a network (Layer 4) load balancer, which balances both UDP and TCP traffic regionally, and an HTTP(S) (Layer 7) load balancer, which can balance traffic globally as well as regionally.
- When you provision a Compute Engine load balancer, you're given a single, globally accessible IP address. This IP address remains yours for the lifetime of the load balancer, so you can use it in A records, whitelists, or application configurations.
To summarize, AWS ELB and Compute Engine's load balancer compare as follows:
| Feature | Amazon ELB | Compute Engine load balancer |
|---|---|---|
| Network load balancing | Yes | Yes |
| Support for static IP address | No | Yes |
| Content-based load balancing | No | Yes |
| Cross-region load balancing | No | Yes |
ELB scales up and down in response to traffic. The more traffic that goes through the ELB, the more capacity it adds. The reverse is also true: as traffic falls, the ELB removes capacity. ELB changes capacity by either changing the size of the load balancing resources or changing the number of load balancing resources. For more details about how ELB scales its load balancing capacity, see Best Practices in Evaluating Elastic Load Balancing in the AWS documentation.
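AWS does not publish the ELB scaling algorithm, but the proportional relationship described above can be illustrated with a toy capacity function (the per-unit throughput figure is an invented assumption, not an AWS number):

```python
def required_capacity(requests_per_second, per_unit_rps=1000):
    """Toy capacity model: one load-balancing unit per 1,000 requests/second,
    with a floor of one unit so the balancer never scales to zero."""
    units = -(-requests_per_second // per_unit_rps)  # ceiling division
    return max(1, units)

print(required_capacity(250))   # 1: light traffic needs only the minimum
print(required_capacity(2500))  # 3: capacity grows with traffic
print(required_capacity(500))   # 1: and shrinks again as traffic falls
```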
ELB does not scale instantly, and can take one to seven minutes to respond to changes in traffic. If you expect a sudden spike in traffic, you must request that AWS pre-warm your ELB to a certain traffic level.
Compute Engine's load balancer also scales its capacity up or down based on the traffic being passed through it. However, it responds in real time to the traffic, without a delay or pre-warming.
Both AWS and Google Cloud Platform load balancing services use the same pricing model. Each charges an hourly rate for the load balancer and a separate rate for the amount of traffic that passes through the load balancer.
Maintenance
The host machines on which instances run occasionally need to be removed for maintenance or replacement. On AWS, you must migrate affected instances off these host machines yourself, either by rebooting the instances or by recreating them from instance snapshots.
In contrast, Cloud Platform features live migration, in which Cloud Platform automatically and transparently migrates instances when their host hardware needs maintenance or replacement. For more information, see the Google Cloud Platform blog post about live migration.
Network peering
A peering service allows customers to connect to a cloud service directly over a network. How this is done depends on the type of service.
Amazon's peering services compare to those of Cloud Platform as follows:
| Peering type | AWS | Cloud Platform |
|---|---|---|
| Virtual private network | VPC-VPN | Cloud VPN |
| Carrier peering | Direct Connect | Carrier Interconnect |
| Direct peering | N/A | Direct Peering |
| CDN peering | N/A | CDN Interconnect |
Virtual private network
AWS and Cloud Platform both provide a virtual private network (VPN) service. In each service, you create a tunnel from an external network to your internal Amazon EC2 or Compute Engine network, and then establish a secure connection over that tunnel.
To route traffic on Cloud Platform, you can use Google Cloud Router, which enables dynamic BGP route updates between your Compute Engine network and your non-Google network. AWS offers a similar routing service as a part of the Amazon VPC service.
Carrier peering
In some scenarios, a VPN doesn’t provide the speed or security required by a particular workload. For these situations, Amazon and Google, in conjunction with partners, both allow you to lease a network line that has a guaranteed capacity level.
Amazon offers Direct Connect, which allows you to create a private leased line to AWS from a partner carrier facility. Each partner location services a specific region.
Google offers Carrier Interconnect, which allows you to create a private leased line into Cloud Platform from a partner facility. In contrast with Direct Connect, Carrier Interconnect connects your traffic to Cloud Platform's global network rather than to a specific region.
Direct peering
In addition to carrier peering, Cloud Platform also allows you to connect directly to Google's network rather than through a third-party partner. Amazon does not offer this service.
Content delivery network (CDN) peering
Content delivery network (CDN) peering provides a connection between your resources in the cloud and a CDN provider by way of network edge locations. Google provides CDN peering for several CDN providers through its CDN Interconnect service. Amazon only provides CDN peering for its own CDN service, Amazon CloudFront.
AWS and Cloud Platform both charge for VPN services at an hourly rate.
The cost of carrier peering is set by the partner leasing the line. In addition, AWS charges a fee based on the capacity you have provisioned with the peering partner. Cloud Platform does not charge for carrier peering.
In addition, Cloud Platform does not charge for direct peering.
As with carrier peering, CDN peering costs are set by the peering partner. AWS also charges a fee based on the capacity you have provisioned. Cloud Platform does not charge for CDN Interconnect.
DNS
A DNS service translates human-readable domain names into the IP addresses that servers use to connect to each other. Managed DNS services, such as Amazon Route 53 and Google Cloud DNS, offer scalable, managed DNS in the cloud.
Amazon Route 53 and Cloud DNS are very similar. Both support nearly all DNS record types, as well as anycast-based serving. In addition, each service allows you to register domain names. Neither service currently supports DNSSEC.
Amazon Route 53 supports two kinds of routing that Cloud DNS does not: geography-based routing and latency-based routing. Geography-based routing lets you restrict your content to certain geographic regions of the world. Latency-based routing lets you direct traffic based on the latency measured by the DNS service.
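As a rough sketch of what latency-based routing does, the DNS service answers each query with the record whose measured latency to the client is lowest. The endpoint names and latency figures below are invented for illustration:

```python
# Hypothetical latency measurements from the DNS service to each endpoint,
# in milliseconds. Real services measure these continuously per client region.
measured_latency_ms = {
    "us-east.example.com": 42.0,
    "eu-west.example.com": 18.0,
    "ap-south.example.com": 95.0,
}

def latency_based_answer(latencies):
    """Return the endpoint with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(latency_based_answer(measured_latency_ms))  # eu-west.example.com
```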
To summarize, Amazon Route 53 maps to Cloud DNS as follows:
| Feature | Amazon Route 53 | Cloud DNS |
|---|---|---|
| Zone | Hosted zone | Managed zone |
| Support for most DNS record types | Yes | Yes |
Amazon Route 53 and Cloud DNS both charge based on the number of zones hosted per month and the number of queries per month. Route 53 charges a higher rate for geography-based and latency-based routing queries.
Check out the other Google Cloud Platform for AWS Professionals articles.