Google Cloud Platform for AWS Professionals: Compute

Updated June 29, 2016

This article compares the compute services that Amazon and Google provide in their respective cloud environments. Compute services are typically offered under three service models:

  • Infrastructure as a service (IaaS), in which users have direct, on-demand access to virtual machines, as well as a suite of related services to automate common tasks.
  • Platform as a service (PaaS), in which the machine layer is abstracted away completely, and users interact with resources by way of high-level services and APIs.
  • Containers as a service (CaaS), an IaaS/PaaS hybrid that abstracts away the machine layer but retains much of the flexibility of the IaaS model.

This article focuses on the IaaS and CaaS services offered by Google and Amazon.

IaaS comparison

For IaaS, Amazon Web Services (AWS) offers Amazon Elastic Compute Cloud (EC2), and Google Cloud Platform offers Google Compute Engine. Google and Amazon take similar approaches to their IaaS services: both are fundamental to their respective cloud environment, and almost every type of customer workload runs on them.

At a high level, Amazon EC2's terminology and concepts map to those of Compute Engine as follows:

Feature | Amazon EC2 | Compute Engine
Virtual machines | Instances | Instances
Machine images | Amazon Machine Image | Image
Temporary virtual machines | Spot instances | Preemptible VMs
Firewall | Security groups | Compute Engine firewall rules
Automatic instance scaling | Auto Scaling | Compute Engine autoscaler
Local attached disk | Ephemeral disk | Local SSD
VM import | Supported formats: RAW, OVA, VMDK, and VHD | Supported formats: RAW
Deployment locality | Zonal | Zonal

Virtual machine instances

Compute Engine and Amazon EC2 virtual machine instances share many of the same features. On both services, you can:

  • Create instances from stored disk images
  • Launch and terminate instances on demand
  • Manage your instances without restrictions
  • Tag your instances
  • Install a variety of available operating systems on your instance
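
For example, creating a basic on-demand instance looks roughly like the following on each service. The instance names, zone, key name, and image identifiers are placeholders, and the flags shown are illustrative rather than exhaustive.

# Amazon EC2: launch an instance from an AMI (the AMI ID is a placeholder)
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name my-key

# Compute Engine: create an instance from a public image
gcloud compute instances create my-instance --zone us-central1-a \
    --machine-type n1-standard-1 --image-family debian-8 --image-project debian-cloud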

Machine access

Compute Engine and Amazon EC2 approach machine access in slightly different ways. With Amazon EC2, you must include your own SSH key if you want terminal access to the instance. In contrast, on Compute Engine, you can create the key when you need it, even if your instance is already running. If you choose to use Compute Engine's browser-based SSH terminal, which is available in the Google Cloud Platform Console, you can avoid storing keys on your local machine altogether.
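
For example, the following gcloud command opens an SSH session to a running instance; if you don't already have a key, the tool generates one and adds the public key to the instance's project metadata. The instance name and zone are placeholders.

# Connect to a running instance; a key pair is generated on first use if needed
gcloud compute ssh my-instance --zone us-central1-a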

Instance types

Amazon EC2 and Compute Engine both offer a variety of predefined instance configurations with specific amounts of virtual CPU, RAM, and network. Amazon EC2 refers to these configurations as instance types, and Compute Engine refers to them as machine types. In addition, Compute Engine allows you to depart from the predefined configurations, customizing your instance's CPU and RAM resources to fit your workload.
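
For example, you could request a custom machine type at instance creation time with the --custom-cpu and --custom-memory flags. The values below are arbitrary placeholders; custom machine types are subject to Compute Engine's ratio and rounding constraints.

# Create an instance with 6 vCPUs and 24 GB of RAM instead of a predefined machine type
gcloud compute instances create my-custom-instance --zone us-central1-a \
    --custom-cpu 6 --custom-memory 24GB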

The following table lists the instance types for both services as of May 2016.

Machine type | Amazon EC2 | Compute Engine
Shared core (machines for tasks that don’t require a lot of resources but do have to remain online for long periods of time) | t2.micro - t2.large | f1-micro, g1-small
Standard (machines that provide a balance of compute, network, and memory resources, ideal for many applications) | m3.medium - m3.2xlarge, m4.large - m4.10xlarge | n1-standard-1 - n1-standard-32
High memory (machines for tasks that require more memory relative to virtual CPUs) | r3.large - r3.8xlarge, x1.32xlarge | n1-highmem-2 - n1-highmem-32
High CPU (machines for tasks that require more virtual CPUs relative to memory) | c3.large - c3.8xlarge, c4.large - c4.8xlarge | n1-highcpu-2 - n1-highcpu-32
GPU (machines that come with discrete GPUs) | g2.2xlarge, g2.8xlarge | Add GPUs to most machine types
SSD storage (machines that come with local SSD storage) | i2.xlarge - i2.8xlarge | n1-standard-1 - n1-standard-32, n1-highmem-2 - n1-highmem-32, n1-highcpu-2 - n1-highcpu-32*
Dense storage (machines that come with increased amounts of local HDD storage) | d2.xlarge - d2.8xlarge | N/A

* Though Compute Engine does not provide machine types that exactly match these AWS instance types, attaching SSD local storage to other machine types can accomplish the same thing.

Compute Engine and AWS share several high-level families of instance types, including standard, high memory, high CPU, and shared core. However, Compute Engine does not have specific categories for instances that use local SSD storage—all of Compute Engine's non-shared instance types support the addition of local SSD disks. See Locally attached storage for a more detailed comparison of how each environment implements locally attached SSDs.

Compute Engine does not currently offer an equivalent of the dense-storage instance types, which provide large amounts of locally attached magnetic (HDD) storage.

Temporary instances

Amazon EC2 and Compute Engine both offer temporary instances that become available when resources aren't being fully utilized. These instances—called spot instances in Amazon EC2 and preemptible VMs in Compute Engine—are cheaper than standard instances, but can be reclaimed by their respective compute services with little notice. Due to their ephemeral nature, these instances are most useful when applications have tasks that can be interrupted or that can use, but don't need, increased compute power. Examples of such tasks might include batch processing, rendering, testing, simulation, or web crawling.

Spot instances and preemptible VMs are functionally similar, but have different cost models. Spot instances have two models:

  • Regular spot instances are auctioned via the Spot Market and launched when a bid is accepted. If a user's bid is the current highest bid, Amazon EC2 creates one or more instances. These instances run until the user terminates them or AWS interrupts them.
  • Spot blocks have a fixed price that is less than the regular on-demand rate. However, they can run for a maximum of only six hours at that fixed rate.

Aside from these rules about termination and price, spot instances behave similarly to on-demand Amazon EC2 instances. Each instance supports any Amazon Machine Image (AMI) and instance type, and you have full control over the instance while it’s running.

As with Amazon EC2 spot instances, Compute Engine's preemptible VMs are similar to normal instances and have the same performance characteristics. Preemptible VMs contrast with Amazon EC2 spot instances as follows:

  • Pricing is fixed. Depending on the machine type, preemptible VM prices can be discounted by nearly 80% relative to the on-demand rate.
  • Unlike Amazon EC2's regular spot instances, Compute Engine's preemptible VMs run for a maximum of 24 hours and then are terminated. However, Compute Engine can terminate preemptible VMs sooner depending on its resource needs.
  • If you use a premium operating system with a license fee, you will be charged the full cost of the license while using that preemptible VM.
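
As a rough illustration, a spot request and a preemptible VM can be created as follows. The bid price, instance name, zone, and launch specification file are placeholders.

# Amazon EC2: request one spot instance at a maximum bid price;
# spec.json holds the launch specification (AMI, instance type, and so on)
aws ec2 request-spot-instances --spot-price "0.05" --instance-count 1 \
    --launch-specification file://spec.json

# Compute Engine: create a preemptible VM
gcloud compute instances create my-preemptible-vm --zone us-central1-a --preemptible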

Machine images

Compute Engine and Amazon EC2 both use machine images to create new instances. Amazon calls these images Amazon Machine Images (AMIs), and Compute Engine simply calls them images.

Amazon EC2 and Compute Engine are similar enough that you can use the same workflow for image creation on both platforms. For example, both Amazon EC2 AMIs and Compute Engine images contain an operating system. They also can contain other software, such as web servers or databases. In addition, both services allow you to use images published by third-party vendors or custom images created for private use.

Amazon EC2 and Compute Engine store images in different ways. On AWS, you store images in either Amazon Simple Storage Service (S3) or Amazon Elastic Block Store (EBS). If you create an instance based on an image that is stored in Amazon S3, you will experience higher latency during the creation process than you would with Amazon EBS.

On Cloud Platform, images are stored within Compute Engine. To view available images or to create or import images, you can visit the Cloud Console Images page or use the gcloud command-line tool in the Google Cloud SDK.
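
For example, you can list the images available to your project with a single gcloud command:

# List public and project-private images visible to the current project
gcloud compute images list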

Unlike Amazon EC2, Compute Engine does not have a mechanism for making an image publicly available, nor does it have a community repository of available images to draw from. However, you can share images informally by exporting your images to Google Cloud Storage and making them publicly available.

Amazon's machine images are available only within a specific region. In contrast, Compute Engine's machine images are globally available.

Public images

Amazon EC2 and Compute Engine both provide a variety of public images with commonly used operating systems. On both platforms, if you choose to install a premium image with an operating system that requires a license, you pay a license fee in addition to normal instance costs.

On both services, you can access machine images for most common operating systems. For the complete list of images that are available on Compute Engine, see the public images list.

Amazon EC2 provides support for some operating system images that are not available as public images on Compute Engine:

  • Amazon Linux
  • Windows Server 2003 (Premium)
  • Oracle Linux (Premium)

Custom image import

Amazon EC2 and Compute Engine both provide ways to import existing machine images to their respective environments.

Amazon EC2 provides a service called VM Import/Export. This service supports a number of virtual machine image types, such as RAW, OVA, VMDK, and VHD, as well as a number of operating systems, including varieties of Windows, Red Hat Enterprise Linux, CentOS, Ubuntu, and Debian. To import a virtual machine, you use a command-line tool that bundles the virtual machine image, uploads it to Amazon Simple Storage Service (S3), and then registers it as an AMI.

Similarly, Compute Engine lets you import virtual machine images in RAW format from almost any platform. For example, you can convert AMI or VirtualBox VDI files to RAW format and then import them to Compute Engine. The import process is similar to Amazon's, though it is less automated: it requires more manual effort than VM Import/Export, but in exchange it is more flexible. After you convert your image, you upload it to Cloud Storage, and Compute Engine then makes a private copy of the image to use. For more details about importing an existing image into Compute Engine, see Importing an Existing Image.
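
A minimal sketch of each import path, assuming the image has already been converted to a supported format. Bucket names, file names, and image names are placeholders.

# AWS: upload the image file to Amazon S3, then import it as an AMI;
# containers.json describes the uploaded file's bucket, key, and format
aws s3 cp my-image.vmdk s3://my-import-bucket/my-image.vmdk
aws ec2 import-image --description "My imported image" --disk-containers file://containers.json

# Compute Engine: upload the RAW image (packaged as a .tar.gz) to Cloud Storage,
# then create a Compute Engine image from it
gsutil cp my-image.tar.gz gs://my-import-bucket/my-image.tar.gz
gcloud compute images create my-imported-image --source-uri gs://my-import-bucket/my-image.tar.gz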

If you build your own custom operating systems and plan to run them on Compute Engine, ensure that they meet the hardware support and kernel requirements for custom images.

Apart from the cost of storing an image in Amazon S3 or Cloud Storage, neither AWS nor Cloud Platform charges for its respective import service.

Automatic instance scaling

Both Compute Engine and Amazon EC2 support autoscaling, in which instances are created and removed according to user-defined policies. Autoscaling can be used to maintain a specific number of instances at any given point, or to adjust capacity in response to certain conditions. Autoscaled instances are created from a user-defined template.

Compute Engine and Amazon EC2 implement autoscaling in similar ways:

  • Amazon's Auto Scaling scales instances within an Auto Scaling group. Auto Scaling creates and removes instances according to your chosen scaling plan. Each new instance within the group is created from a launch configuration.
  • Compute Engine's autoscaler scales instances within a managed instance group. The autoscaler creates and removes instances according to an autoscaling policy. Each new instance within the instance group is created from an instance template.

Amazon's Auto Scaling allows for three scaling plans:

  • Manual, in which you manually instruct Auto Scaling to scale up or down.
  • Scheduled, in which you configure Auto Scaling to scale up or down at scheduled times.
  • Dynamic, in which Auto Scaling scales based on a policy. You can create policies based on either Amazon CloudWatch metrics or Amazon Simple Queue Service (SQS) queues.

In contrast, Compute Engine's autoscaler supports only dynamic scaling. You can create policies based on average CPU utilization, HTTP load balancing serving capacity, or Stackdriver Monitoring metrics.
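
A minimal sketch of Compute Engine autoscaling driven by CPU utilization. The template, group, and zone names, the group size, and the thresholds are placeholders.

# Create an instance template that new instances are built from
gcloud compute instance-templates create my-template --machine-type n1-standard-1

# Create a managed instance group that uses the template
gcloud compute instance-groups managed create my-group --zone us-central1-a \
    --template my-template --size 2 --base-instance-name my-vm

# Attach an autoscaling policy that targets 75% average CPU utilization
gcloud compute instance-groups managed set-autoscaling my-group --zone us-central1-a \
    --max-num-replicas 10 --target-cpu-utilization 0.75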

Internal networks

In both Compute Engine and Amazon EC2, new instances are automatically connected to a default internal network. In addition, in both services you can create an alternative network and launch instances into it. For a full comparison of Cloud Platform networking and AWS networking, see the Networking article.

Firewalls

Amazon EC2 and Compute Engine both allow users to configure firewall policies to selectively allow and deny traffic to virtual machine instances. By default, both services block all incoming traffic from outside a network, and users must set a firewall rule for packets to reach an instance.

Amazon EC2 and Amazon Virtual Private Cloud (VPC) use security groups and network access control lists (NACLs) to allow or deny incoming and outgoing traffic. Amazon EC2 security groups secure instances in Amazon EC2-Classic, while Amazon VPC security groups and NACLs secure both instances and network subnets in an Amazon VPC.

Compute Engine uses firewall rules to secure virtual machine instances and networks. You create a rule by specifying the source IP address range, protocol, and ports, or user-defined tags that represent source and target groups of virtual machine instances. However, Compute Engine firewall rules can’t block outbound traffic. To block outbound traffic, use a tool on the instance itself, such as iptables.
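
For example, the following rule allows incoming HTTP traffic from any source to instances tagged http-server on the default network. The rule name, network, and tag are placeholders.

# Allow TCP port 80 from anywhere to instances carrying the http-server tag
gcloud compute firewall-rules create allow-http --network default \
    --allow tcp:80 --source-ranges 0.0.0.0/0 --target-tags http-server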

Block storage

Amazon EC2 and Compute Engine both support networked and locally attached block storage. For a detailed comparison of their block storage services, see Block storage.

Costs

This section compares the pricing models for Compute Engine and Amazon EC2.

On-demand pricing

Compute Engine and Amazon EC2 have similar on-demand pricing models for running instances:

  • Amazon EC2 charges by the hour, rounding up to the nearest hour.
  • Compute Engine charges by the minute, with a minimum charge of ten minutes of usage.

Both services allow you to run your instance indefinitely.

Discount pricing

Compute Engine and Amazon EC2 approach discount pricing in very different ways.

To get discounted pricing in Amazon EC2, you can provision reserved instances. In this model, you must commit to a certain number of instances for either one or three years. In exchange, you receive a lower cost for those instances. A three-year commitment results in a larger discount than a one-year commitment. The more you pay up front, the greater the discount.

With reserved instances, you trade resource flexibility for a lower instance price. Reserved instances are tied to a specific instance type and availability zone at purchase. You can switch availability zones, and you can exchange reserved instances only for different instance types within the same family.

In contrast, Compute Engine uses a sustained-use discount model. In this model, Compute Engine automatically applies discounts to your instances depending on how long the instances are active in a given month. The longer you use an instance in a given month, the greater the discount. Sustained-use discounts can save you as much as 30% off the standard on-demand rate.
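
As a rough worked example, assuming the tiered model in effect at the time of writing, in which each successive quarter of the month is billed at 100%, 80%, 60%, and 40% of the base rate, an instance that runs for an entire month is billed at an effective (100% + 80% + 60% + 40%) / 4 = 70% of the base rate, which corresponds to the maximum 30% discount.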

CaaS comparison

For CaaS, AWS offers Amazon EC2 Container Service (ECS), and Cloud Platform offers Google Container Engine.

Container Engine and Amazon ECS have very similar service models. In each service, you create a cluster of container nodes. Each node is a virtual machine instance, and each runs a node agent to signal its inclusion in the cluster. Each node also runs a container daemon, such as Docker, so that the node can run containerized applications. You create a Docker image that contains both your application files and the instructions for running your application, and then you deploy the application in your cluster.

Amazon ECS is developed and maintained by Amazon. Container Engine is built on Kubernetes, an open-source container management system.

At a high level, Amazon ECS's terminology and concepts map to those of Container Engine as follows:

Feature | Amazon ECS | Container Engine
Cluster nodes | Amazon EC2 Instances | Compute Engine Instances
Supported daemons | Docker | Docker or rkt
Node agent | Amazon ECS Agent | Kubelet
Container group | Task | Pod
Deployment sizing service | Service | Replication Controller
Command line tool | Amazon ECS CLI | kubectl or gcloud
Portability | Runs only on AWS | Runs wherever Kubernetes runs

Platform components

Clusters

In both Amazon ECS and Container Engine, a cluster is a logical grouping of virtual machine nodes. In Amazon ECS, a cluster uses Amazon EC2 instances as nodes. To create a cluster, you simply provide a name for the cluster, and Amazon ECS creates it. However, the cluster is empty by default; you must launch container instances into it before you can launch your application.

In Container Engine, a cluster uses Compute Engine instances as nodes. To create a cluster, you first provide basic configuration details: your desired cluster name, your desired deployment zones, your desired Compute Engine machine types, and your desired cluster size. After you configure your cluster, Container Engine creates the cluster in the requested zone or zones.
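
A minimal cluster-creation sketch for each service. The cluster names, zone, node count, and machine type are placeholders.

# Amazon ECS: create an empty cluster; container instances are launched into it separately
aws ecs create-cluster --cluster-name my-cluster

# Container Engine: create a cluster of Compute Engine nodes in a single zone
gcloud container clusters create my-cluster --zone us-central1-a \
    --num-nodes 3 --machine-type n1-standard-1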

Container groups

Both Amazon ECS and Container Engine group sets of interdependent or related containers into higher-level service units. In Amazon ECS, these units are called tasks and are defined by task definitions. In Container Engine, these units are called pods, and are defined by a PodSpec. In both tasks and pods, containers are colocated and coscheduled, and run in a shared context with a shared IP address and port space.
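
For illustration, a minimal two-container pod can be created by passing a PodSpec to kubectl. The pod, container, and image names are placeholders; both containers share the pod's IP address and port space.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: helper
    image: busybox
    command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF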

Container daemons

Each node machine must run a container daemon to support containerized services. Amazon ECS supports Docker as a container daemon. Container Engine supports both Docker and rkt.

Node agents

In Amazon ECS, each Amazon EC2 node runs an Amazon ECS agent that starts containers on behalf of Amazon ECS. Similarly, in Container Engine, each Container Engine node runs a kubelet that maintains the health and stability of the containers running on the node.

Service discovery

Amazon ECS does not provide a native mechanism for service discovery within a cluster. As a workaround, you can configure a third-party tool, such as Consul, to provide service discovery within a given cluster.

Container Engine clusters enable service discovery by way of the Kubernetes DNS add-on, which is enabled by default. Each Kubernetes service is assigned a virtual IP address that is stable for as long as the service exists. The DNS server watches the Kubernetes API for new services and then creates a set of DNS records for each. These records allow Kubernetes pods to perform name resolution of Kubernetes services automatically.
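
For example, assuming a deployment named my-app already exists in the cluster, exposing it as a Kubernetes service gives it a stable name that other pods can resolve through DNS. The deployment, service, and pod names are placeholders.

# Create a service that fronts the existing deployment
kubectl expose deployment my-app --port=80 --name=my-service

# From inside any pod in the same namespace, the short name resolves directly
# (assuming the pod's image includes nslookup); the fully qualified form is
# my-service.default.svc.cluster.local
kubectl exec some-pod -- nslookup my-service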

Scheduling

Amazon ECS supports only its native scheduler.

Container Engine is built on top of Kubernetes, which is a pluggable architecture. As such, Container Engine is fully compatible with the Kubernetes scheduler, which is open source and can run in any environment that Kubernetes can run in.

Deployment automation

On Amazon ECS, you can create a script to deploy tasks on each node. However, this behavior is not built into the service.

On Container Engine, you can use a DaemonSet to run a copy of a specific pod on every node in a cluster. When nodes are added to the cluster, the DaemonSet automatically schedules the pod onto them; when nodes are removed, the corresponding pods are garbage collected. When you delete the DaemonSet, it cleans up any pods it has created.
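
A minimal DaemonSet sketch that runs one copy of a logging-agent pod on every node. The names and image are placeholders, and the API group shown reflects the Kubernetes releases current at the time of writing.

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: my-registry/log-agent:1.0
EOF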

Disk management

Because Amazon ECS disks are host-specific, a container that relies on a specific disk cannot be moved to a different host.

In contrast, Container Engine can mount disks dynamically on a node and assign the disks to a given pod automatically. You don’t need to run a pod on a specific node to use a specific disk.
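
For illustration, a pod can reference an existing Compute Engine persistent disk directly in its PodSpec, and Kubernetes attaches the disk to whichever node the pod lands on. The disk must already exist in the same zone as the cluster's nodes; the names below are placeholders.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pd-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
EOF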

Identity and access management

Amazon ECS is fully integrated with the AWS Identity and Access Management (IAM) service. Container Engine doesn’t currently support Cloud Platform's IAM service.

Multi-tenancy

In Amazon ECS, you can achieve multi-tenancy by creating separate clusters, and then manually configuring AWS IAM to limit usage on each cluster.

In contrast, Container Engine supports Kubernetes namespaces, which logically partition a single cluster into multiple virtual clusters and provide the scoping needed to support multi-tenancy. Namespaces allow for the following (a minimal example follows this list):

  • Creation of multiple user communities on a single cluster
  • Delegation of authority over partitions of the cluster to trusted users
  • Restriction of the amount of resources each community can consume
  • Restriction of available resources to those resources that are pertinent to a specific user community
  • Isolation of resources used by a given user community from other user communities on the cluster
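
A minimal namespace sketch. The namespace, deployment, and image names are placeholders.

# Create a namespace for one user community
kubectl create namespace team-a

# Resources created with --namespace are scoped to that community
kubectl run my-app --image=nginx --namespace=team-a
kubectl get pods --namespace=team-a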

Health checks

In Amazon ECS, you can detect failures by using Amazon ELB health checks. Amazon ELB performs health checks over HTTP/TCP. To receive health checks, all containerized services—even those that don't otherwise need to listen to a TCP port—must set up an HTTP server. In addition, each service must be bound to an ELB, even if the service does not otherwise need to be load balanced.

In Container Engine, you can perform health checks by using readiness and liveness probes:

  • Readiness probes allow you to check the state of a pod during initialization.
  • Liveness probes allow you to detect and restart pods that are no longer functioning properly.

These probes are included in the Kubernetes API and can be configured as part of a PodSpec.
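
For example, a PodSpec with both probe types might look roughly like the following. The paths, port, and timings are placeholders; the probes here use HTTP checks, though TCP and command-based checks are also supported.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: my-registry/my-app:1.0
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
EOF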

Portability

Because the Amazon ECS agent runs only on Amazon EC2 instances, Amazon ECS configurations can effectively run only on Amazon ECS. In contrast, because Container Engine is built on Kubernetes, Container Engine configurations can run on any Kubernetes installation.

Costs

Amazon charges for the Amazon EC2 instances and Amazon EBS disk volumes you use in your deployment. There is no additional charge for Amazon ECS's container management service.

Cloud Platform charges for the Compute Engine instances and persistent disks you use in your deployment. In addition, Container Engine charges an hourly fee for cluster management. Clusters with five or fewer nodes do not incur this fee. See Container Engine pricing for more information.

What's next?

Check out the other Google Cloud Platform for AWS Professionals articles.
