Updated November 21, 2017
Compare the compute services that Amazon Web Services (AWS) and Google Cloud Platform (GCP) provide in their respective cloud environments. Compute services are typically offered under three service models:
- Infrastructure as a service (IaaS), in which users have direct, on-demand access to virtual machines, as well as a suite of related services to automate common tasks.
- Platform as a service (PaaS), in which the machine layer is abstracted away completely, and users interact with resources by way of high-level services and APIs.
- Functions as a service (FaaS), a serverless computing platform that allows you to run individual functions in response to a variety of triggers.
- Containers as a service (CaaS), an IaaS/PaaS hybrid that abstracts away the machine layer but retains much of the flexibility of the IaaS model.
This article focuses on the IaaS, PaaS, FaaS, and CaaS services offered by GCP and AWS.
For IaaS, AWS offers Amazon Elastic Compute Cloud (EC2), and GCP offers Compute Engine. Google and Amazon take similar approaches to their IaaS services: both are fundamental to their respective cloud environment, and almost every type of customer workload runs on them.
At a high level, Amazon EC2's terminology and concepts map to those of Compute Engine as follows:
| Feature | Amazon EC2 | Compute Engine |
|---|---|---|
| Machine images | Amazon Machine Image | Image |
| Temporary virtual machines | Spot instances | Preemptible VMs |
| Firewall | Security groups | Compute Engine firewall rules |
| Automatic instance scaling | Auto Scaling | Compute Engine autoscaler |
| Locally attached disk | Ephemeral disk | Local SSD |
| VM import | Supported formats: RAW, OVA, VMDK, and VHD | Supported formats: RAW |
Virtual machine instances
Compute Engine and Amazon EC2 virtual machine instances share many of the same features. On both services, you can:
- Create instances from stored disk images
- Launch and terminate instances on demand
- Manage your instances without restrictions
- Tag your instances
- Install a variety of available operating systems on your instance
Compute Engine and Amazon EC2 approach machine access in slightly different ways. With Amazon EC2, you must include your own SSH key if you want terminal access to the instance. In contrast, on Compute Engine, you can create the key when you need it, even if your instance is already running. If you choose to use Compute Engine's browser-based SSH terminal, which is available in the Google Cloud Platform Console, you can avoid storing keys on your local machine altogether.
Amazon EC2 and Compute Engine both offer a variety of predefined instance configurations with specific amounts of virtual CPU, RAM, and network. Amazon EC2 refers to these configurations as instance types, and Compute Engine refers to them as machine types. In addition, Compute Engine allows you to depart from the predefined configurations, customizing your instance's CPU and RAM resources to fit your workload.
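As a rough illustration of how custom machine types constrain your choices, the following sketch checks a proposed vCPU/memory combination against the constraints Compute Engine documents for custom machine types (a vCPU count of 1 or an even number, and between 0.9 GB and 6.5 GB of memory per vCPU). The function name is ours, and the limits may change over time.

```python
def valid_custom_machine_type(vcpus, memory_gb):
    """Check a proposed custom machine type against the documented
    constraints: the vCPU count must be 1 or an even number, and memory
    must fall between 0.9 GB and 6.5 GB per vCPU."""
    if vcpus != 1 and vcpus % 2 != 0:
        return False
    per_vcpu = memory_gb / vcpus
    return 0.9 <= per_vcpu <= 6.5

# A 4-vCPU, 16 GB configuration fits the constraints; 3 vCPUs does not.
print(valid_custom_machine_type(4, 16))   # True
print(valid_custom_machine_type(3, 16))   # False
```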
The following table lists the instance types for both services as of May 2016.
| Machine type | Amazon EC2 | Compute Engine |
|---|---|---|
| Shared core (machines for tasks that don't require a lot of resources but do have to remain online for long periods of time) | t2.micro - t2.large | f1-micro |
| Standard (machines that provide a balance of compute, network, and memory resources, ideal for many applications) | m3.medium - m3.2xlarge, m4.large - m4.10xlarge | n1-standard-1 - n1-standard-32 |
| High memory (machines for tasks that require more memory relative to virtual CPUs) | r3.large - r3.8xlarge | n1-highmem-2 - n1-highmem-32 |
| High CPU (machines for tasks that require more virtual CPUs relative to memory) | c3.large - c3.8xlarge, c4.large - c4.8xlarge | n1-highcpu-2 - n1-highcpu-32 |
| GPU (machines that come with discrete GPUs) | g2.2xlarge | Add GPUs to most machine types |
| SSD storage (machines that come with local SSD storage) | i2.xlarge - i2.8xlarge | n1-standard-1 - n1-standard-32, n1-highmem-2 - n1-highmem-32, n1-highcpu-2 - n1-highcpu-32* |
| Dense storage (machines that come with increased amounts of local HDD storage) | d2.xlarge - d2.8xlarge | N/A |
* Though Compute Engine does not provide machine types that exactly match these AWS instance types, attaching SSD local storage to other machine types can accomplish the same thing.
Compute Engine and AWS share several high-level families of instance types, including standard, high memory, high CPU, and shared core. However, Compute Engine does not have specific categories for instances that use local SSD storage—all of Compute Engine's non-shared instance types support the addition of local SSD disks. See Locally attached storage for a more detailed comparison of how each environment implements locally attached SSDs.
Compute Engine does not currently offer large magnetic storage.
Amazon EC2 and Compute Engine both offer temporary instances that become available when resources aren't being fully utilized. These instances—called spot instances in Amazon EC2 and preemptible VMs in Compute Engine—are cheaper than standard instances, but can be reclaimed by their respective compute services with little notice. Due to their ephemeral nature, these instances are most useful when applications have tasks that can be interrupted or that can use, but don't need, increased compute power. Examples of such tasks might include batch processing, rendering, testing, simulation, or web crawling.
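To make a workload preemption-friendly in the way described above, it helps to structure it as small resumable chunks whose results are persisted as they complete, so that a reclaim only costs the current chunk. A minimal Python sketch of the pattern (all names are ours, and preemption is simulated with a random event rather than a real termination notice):

```python
import random

def run_batch_job(chunks, preempt_chance=0.3, max_attempts=10):
    """Process work in resumable chunks so that an instance reclaim only
    loses the chunk in flight; completed results survive (in practice they
    would live in object storage, not a local list)."""
    done = []
    attempts = 0
    while len(done) < len(chunks) and attempts < max_attempts:
        attempts += 1
        for chunk in chunks[len(done):]:
            if random.random() < preempt_chance:
                break  # instance reclaimed; resume from `done` on a fresh VM
            done.append(chunk * 2)  # stand-in for real work
    return done

# With no preemption, the job completes in one pass:
print(run_batch_job([1, 2, 3], preempt_chance=0.0))  # [2, 4, 6]
```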
Spot instances and preemptible VMs are functionally similar, but have different cost models. Spot instances have two models:
- Regular spot instances are auctioned via the Spot Market and launched when a bid is accepted. If a user's bid is the current highest bid, Amazon EC2 creates one or more instances. These instances run until the user terminates them or AWS interrupts them.
- Spot blocks have a fixed price that is less than the regular on-demand rate. However, they can run for a maximum of only six hours at that fixed rate.
Aside from these rules about termination and price, spot instances behave similarly to on-demand Amazon EC2 instances. Spot instances support most instance types, but only regular Linux or Windows Amazon Machine Images (AMIs); they do not support premium operating systems such as Red Hat Enterprise Linux or Oracle Linux. You have full control over the instance while it's running.
As with Amazon EC2 spot instances, Compute Engine's preemptible VMs are similar to normal instances and have the same performance characteristics. Preemptible VMs contrast with Amazon EC2 spot instances as follows:
- Pricing is fixed. Depending on machine type, preemptible VM prices can be nearly 80% lower than the on-demand rate.
- Unlike Amazon EC2's regular spot instances, Compute Engine's preemptible VMs run for a maximum of 24 hours and then are terminated. However, Compute Engine can terminate preemptible VMs sooner depending on its resource needs.
- If you use a premium operating system with a license fee, you will be charged the full cost of the license while using that preemptible VM.
Compute Engine and Amazon EC2 both use machine images to create new instances. Amazon calls these images Amazon Machine Images (AMIs), and Compute Engine simply calls them images.
Amazon EC2 and Compute Engine are similar enough that you can use the same workflow for image creation on both platforms. For example, both Amazon EC2 AMIs and Compute Engine images contain an operating system. They also can contain other software, such as web servers or databases. In addition, both services allow you to use images published by third-party vendors or custom images created for private use.
Amazon EC2 and Compute Engine store images in different ways. On AWS, you store images in either Amazon Simple Storage Service (S3) or Amazon Elastic Block Store (EBS). If you create an instance based on an image that is stored in Amazon S3, you will experience higher latency during the creation process than you would with Amazon EBS.
On Cloud Platform, images are stored within Compute Engine.
To view available images or to create or import images, you can visit the Cloud Console Images page or use the gcloud command-line tool in the Google Cloud SDK.
Unlike Amazon EC2, Compute Engine does not have a mechanism for making an image publicly available, nor does it have a community repository of available images to draw from. However, you can share images informally by exporting your images to Google Cloud Storage and making them publicly available.
Amazon's machine images are available only within a specific region. In contrast, Compute Engine's machine images are globally available.
Amazon EC2 and Compute Engine both provide a variety of public images with commonly used operating systems. On both platforms, if you choose to install a premium image with an operating system that requires a license, you pay a license fee in addition to normal instance costs.
On both services, you can access machine images for most common operating systems. For the complete list of images that are available on Compute Engine, see the public images list.
Amazon EC2 provides support for some operating system images that are not available as public images on Compute Engine:
- Amazon Linux
- Windows Server 2003 (Premium)
- Oracle Linux (Premium)
Custom image import
Amazon EC2 and Compute Engine both provide ways to import existing machine images to their respective environments.
Amazon EC2 provides a service called VM Import/Export. This service supports a number of virtual machine image types—such as RAW, OVA, VMDK and VHD—as well as a number of operating systems, including varieties of Windows, Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu, and Debian. To import a virtual machine, you use a command-line tool that bundles the virtual machine image and uploads it to Amazon Simple Storage Service (S3) as an AMI.
Similarly, Compute Engine provides the ability to import virtual machine images. This process allows you to import RAW image files from almost any platform. For example, you can convert AMI or VirtualBox VDI files to RAW format and then import them to Compute Engine. The import process is similar to Amazon's, though less automated; the additional manual effort buys you increased flexibility. After you convert your image, you upload it to Cloud Storage, and Compute Engine then makes a private copy of the image to use. For more details about importing an existing image into Compute Engine, see Importing an Existing Image.
If you build your own custom operating systems and plan to run them on Compute Engine, ensure that they meet the hardware support and kernel requirements for custom images.
Apart from the cost of storing an image in Amazon S3 or Cloud Storage, neither AWS nor Cloud Platform charges for its respective import service.
Automatic instance scaling
Both Compute Engine and Amazon EC2 support autoscaling, in which instances are created and removed according to user-defined policies. Autoscaling can be used to maintain a specific number of instances at any given point, or to adjust capacity in response to certain conditions. Autoscaled instances are created from a user-defined template.
Compute Engine and Amazon EC2 implement autoscaling in similar ways:
- Amazon's Auto Scaling scales instances within a group. The Auto Scaler creates and removes instances according to your chosen scaling plan. Each new instance within the group is created from a launch configuration.
- Compute Engine's autoscaler scales instances within a managed instance group. The autoscaler creates and removes instances according to an autoscaling policy. Each new instance within the instance group is created from an instance template.
Amazon's Auto Scaling allows for three scaling plans:
- Manual, in which you manually instruct Auto Scaling to scale up or down.
- Scheduled, in which you configure Auto Scaling to scale up or down at scheduled times.
- Dynamic, in which Auto Scaling scales based on a policy. You can create policies based on either Amazon CloudWatch metrics or Amazon Simple Queue Service (SQS) queues.
In contrast, Compute Engine's autoscaler supports only dynamic scaling. You can create policies based on average CPU utilization, HTTP load balancing serving capacity, or Stackdriver Monitoring metrics.
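Both dynamic-scaling models boil down to sizing the group so that average utilization tracks a target. The following is a minimal sketch of that calculation, a simplification of what either autoscaler actually does (utilization is given in percent to keep the arithmetic exact):

```python
import math

def autoscale_target(current_instances, observed_pct, target_pct):
    """Size the instance group so that average utilization lands near the
    target: desired = ceil(current * observed / target)."""
    desired = math.ceil(current_instances * observed_pct / target_pct)
    return max(desired, 1)  # never scale to zero in this simplified model

# Four instances running at 90% utilization against a 60% target:
print(autoscale_target(4, 90, 60))  # 6
```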
In both Compute Engine and Amazon EC2, new instances are automatically connected to a default internal network. In addition, in both services you can create an alternative network and launch instances into it. For a full comparison of Cloud Platform networking and AWS networking, see the Networking article.
Amazon EC2 and Compute Engine both allow users to configure firewall policies to selectively allow and deny traffic to virtual machine instances. By default, both services block all incoming traffic from outside a network, and users must set a firewall rule for packets to reach an instance.
Amazon EC2 and Amazon Virtual Private Cloud (VPC) use security groups and network access control lists (NACLs) to allow or deny incoming and outgoing traffic. Amazon EC2 security groups secure instances in Amazon EC2-Classic, while Amazon VPC security groups and NACLs secure both instances and network subnets in an Amazon VPC.
Compute Engine uses firewall rules to secure Compute Engine virtual machine instances and networks. You create a rule by specifying the source IP address range, protocol, ports, or user-defined tags that represent source and target groups of virtual machines instances. However, Compute Engine firewall rules can’t block outbound traffic. To do that, you can use a different kind of technology, such as iptables.
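The shared ingress model, deny by default and allow only what a rule matches, can be sketched as a toy rule evaluator. The rule fields below loosely mirror Compute Engine firewall rules, but they are illustrative only, not either platform's API:

```python
from ipaddress import ip_address, ip_network

def allowed(packet, rules):
    """Toy model of ingress filtering as both platforms apply it: traffic is
    denied unless some rule matches the source range, protocol, and port."""
    for rule in rules:
        if (ip_address(packet["src"]) in ip_network(rule["source_range"])
                and packet["protocol"] == rule["protocol"]
                and packet["port"] in rule["ports"]):
            return True
    return False  # default deny

rules = [{"source_range": "0.0.0.0/0", "protocol": "tcp", "ports": {80, 443}}]
print(allowed({"src": "203.0.113.9", "protocol": "tcp", "port": 443}, rules))  # True
print(allowed({"src": "203.0.113.9", "protocol": "tcp", "port": 22}, rules))   # False
```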
Amazon EC2 and Compute Engine both support networked and locally attached block storage. For a detailed comparison of their block storage services, see Block storage.
This section compares the pricing models for Compute Engine and Amazon EC2.
Compute Engine and Amazon EC2 have similar on-demand pricing models for running instances:
- Amazon EC2 charges by the second, with a minimum charge of one minute. Premium operating systems such as Windows or RHEL are charged by the hour, rounded up to the next hour.
- Compute Engine charges by the minute, with a minimum charge of ten minutes of usage.
Both services allow you to run your instance indefinitely.
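The difference between the two billing granularities is easiest to see with a small calculation. A sketch, with units in seconds and the increments and minimums stated above:

```python
def billed_seconds(runtime_seconds, increment, minimum):
    """Round runtime up to the billing increment, subject to the minimum
    charge (all values in seconds)."""
    runtime = max(runtime_seconds, minimum)
    return -(-runtime // increment) * increment  # ceiling division

# 95 seconds of runtime under each model:
print(billed_seconds(95, increment=1, minimum=60))    # EC2-style: 95
print(billed_seconds(95, increment=60, minimum=600))  # Compute Engine-style: 600
```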
Compute Engine and Amazon EC2 approach discount pricing in very different ways.
To get discounted pricing in Amazon EC2, you can provision reserved instances. In this model, you must commit to a certain number of instances for either one or three years. In exchange, you receive a lower cost for those instances. A three-year commitment results in a larger discount than a one-year commitment. The more you pay up front, the greater the discount.
With reserved instances, you trade resource flexibility for a lower instance price. Reserved instances are tied to a specific instance type and availability zone at purchase. You can switch availability zones, and you can exchange reserved instances only for different instance types within the same family.
In contrast, Compute Engine uses a sustained-use discount model. In this model, Compute Engine automatically applies discounts to your instances depending on how long the instances are active in a given month. The longer you use an instance in a given month, the greater the discount. Sustained-use discounts can save you as much as 30% of the standard on-demand rate.
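Under the sustained-use tiers published at the time, each successive quarter of the month was billed at 100%, 80%, 60%, and 40% of the base rate, which is where the maximum discount of roughly 30% comes from. A sketch of the effective price multiplier:

```python
def sustained_use_multiplier(fraction_of_month):
    """Effective price multiplier under the sustained-use tiers published
    at the time: each successive quarter of the month is billed at 100%,
    80%, 60%, and 40% of the base rate. Expects a fraction in (0, 1]."""
    tiers = [1.0, 0.8, 0.6, 0.4]
    billed = 0.0
    for i, rate in enumerate(tiers):
        lower = i * 0.25
        if fraction_of_month <= lower:
            break
        billed += rate * (min(fraction_of_month, lower + 0.25) - lower)
    return billed / fraction_of_month

# Running an instance for the whole month yields the maximum ~30% discount:
print(round(sustained_use_multiplier(1.0), 2))  # 0.7
```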
For FaaS (Functions as a Service), Amazon provides AWS Lambda and GCP provides Google Cloud Functions. AWS Lambda and Cloud Functions have similar service models. Both are serverless computing platforms that allow you to run individual functions in response to a variety of triggers. On both platforms, serverless functions only incur a cost while they are running, creating an economical way to host services with uneven usage patterns.
Service model comparison
AWS Lambda terms and concepts map to those of Cloud Functions as follows:
Launching and scaling
AWS Lambda and Cloud Functions share a number of features. Both offer serverless code execution with a variety of triggers and both automatically scale out when needed. Both offer simple deployment, scaling, and fault recovery. The architecture launches a compute container with a copy of your code and automatically scales the number of instances to handle your request load. No configuration or management is required to scale after the function is deployed.
Google employs a preemptive image-generation architecture that significantly reduces latency for first requests on any new instance. This can be a significant benefit for real-time applications or situations where your application must scale very quickly.
Supported languages and triggers
Lambda has been available longer than Cloud Functions and consequently supports more languages and trigger types. Cloud Functions supports Firebase Triggers, Stackdriver logs, and Cloud Pub/Sub. With these tools, you can trigger a Cloud Function from just about any other GCP service or event.
Because FaaS platforms are designed to be purely transactional, deploying instances on a per-request basis, you cannot rely on them to continuously execute code beyond the initial request. You should design your application to be stateless and short-running. This applies to both Lambda and Cloud Functions: AWS Lambda terminates execution after 5 minutes, and Cloud Functions does so after 9 minutes.
Amazon and Google take slightly different approaches to deploying FaaS. AWS Lambda supports deploying from a zip or jar file or through CloudFormation or S3. In addition to zip files, GCP Cloud Functions can be deployed from a Git repository either in GitHub or Cloud Source Repositories. Git support closely links Cloud Functions to your deployment process. You can even configure automated updates based on webhooks.
AWS Lambda pricing includes a base rate per request plus a variable rate based on RAM allocation and compute time. AWS Lambda bills data transfer at the standard EC2 rate.
Cloud Functions charges a variable rate based on the amount of memory and CPU provisioned in addition to the invocation rate. Like AWS Lambda, Cloud Functions charges standard fees for egress bandwidth and does not charge for ingress traffic.
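Both pricing schemes reduce to the same shape: a flat per-request fee plus a rate on memory-weighted compute time. The sketch below parameterizes the rates rather than hard-coding either provider's prices; the rate values in the example are illustrative only:

```python
def faas_cost(invocations, avg_seconds, memory_gb,
              per_invocation, per_gb_second):
    """Generic FaaS cost model shared by both platforms: a per-request fee
    plus a rate on compute time weighted by provisioned memory. The rates
    are parameters, not either provider's actual prices."""
    compute_gb_seconds = invocations * avg_seconds * memory_gb
    return invocations * per_invocation + compute_gb_seconds * per_gb_second

# 1M invocations of a 200 ms, 256 MB function at illustrative rates:
cost = faas_cost(1_000_000, 0.2, 0.25,
                 per_invocation=0.0000004, per_gb_second=0.0000166)
print(round(cost, 2))
```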
For PaaS, AWS offers AWS Elastic Beanstalk and Cloud Platform offers Google App Engine. Both services support workflows that allow developers to publish applications by pushing their code to a managed platform service. The service manages underlying infrastructure along with automatic scaling and application version control.
Service model comparison
AWS Elastic Beanstalk configures and deploys a set of underlying AWS resources, such as Amazon EC2 instances or Amazon RDS databases, to create an appropriate runtime for your application based on your input configuration.
Google App Engine employs this same model. However, App Engine provides two distinct environments: App Engine standard environment and App Engine flexible environment, each designed with specific use cases in mind.
- In App Engine standard environment, your code is deployed into container instances that run on Google's infrastructure. Because App Engine standard environment does not rely on underlying Compute Engine VM instances, it scales more quickly than App Engine flexible environment. However, App Engine standard environment's customization options are more limited than those of App Engine flexible environment, and you must use a runtime environment from the service's predefined set of runtime environments.
- In App Engine flexible environment, your code is deployed into Docker containers running on Compute Engine VM instances, and these containers are managed by App Engine. You can choose from a larger list of supported runtimes than with App Engine standard environment, and you can create partially or fully customized runtimes. However, during high traffic spikes, your application might scale more slowly than it does on App Engine standard environment. For information about determining which App Engine environment is best for you, see Choosing an App Engine Environment.
Elastic Beanstalk can be integrated with services and features such as:
- Amazon DynamoDB: Fully managed NoSQL database that can store and retrieve any amount of data.
- Amazon RDS: Managed relational database service powered by MySQL, PostgreSQL, MariaDB, Amazon Aurora, Oracle, or Microsoft SQL Server.
- Automatic scaling and load balancing.
- Worker Tier environments with SQS integration allow processing of background and periodic tasks.
- Integration with other AWS services and APIs.
Both Google App Engine environments can use the same set of platform services, such as:
- Google Cloud Datastore: Persistent storage with queries, sorting, and transactions.
- Google Cloud SQL: Relational database powered by MySQL or PostgreSQL (PostgreSQL support currently in beta).
- Automatic scaling and load balancing.
- Asynchronous task queues for performing work outside the scope of a request.
- Scheduled tasks for triggering events at specified times or regular intervals.
- Integration with other Google Cloud Platform services and APIs.
Both AWS Elastic Beanstalk and App Engine operate on a similar set of key components. From a developer's perspective, AWS Elastic Beanstalk consists of the following key components:
- Application version: A named/labeled iteration of deployable code submitted by a developer.
- Environment: A running instance of a specific application version deployed on AWS resources.
- Environment configuration: A collection of parameters and settings that control how Elastic Beanstalk deploys and configures underlying AWS resources for a particular application version associated with a specific environment.
- Application: A logical bucket for a collection of environments, environment configurations, and application versions.
Google App Engine consists of the following key components:
- Version: A named iteration of deployable code submitted by a developer, and a configuration file specifying how to deploy this code on App Engine to create a service.
- Service: An App Engine application is made up of one or more services, which can be configured to use different runtimes and to operate with different performance settings. Each service consists of source code and a configuration file.
- Service configuration file: A file that specifies how URL paths correspond to request handlers and static files. This file also contains information about the application code, such as the application ID and the latest version identifier.
- Instance: An underlying compute resource on which a service is running. For App Engine flexible environment, an instance is a Docker container on a Compute Engine VM instance. For App Engine standard environment, an instance is a container running on Google's infrastructure.
- Application: A logical bucket including one or more services. This bucket can be configured to use different runtimes and to operate with different performance settings.
At a high level, the platforms can be compared as follows:
| Feature | AWS Elastic Beanstalk | Google App Engine standard environment | Google App Engine flexible environment |
|---|---|---|---|
| Supported language runtimes | Java, PHP, .NET, Node.js, Python (2.6, 2.7, 3.4), Ruby, Go | Python 2.7, Java 7, PHP 5.5, Go 1.6 | Python (2.7, 3.5), Java 8, Node.js, Go 1.8, Ruby, PHP (5.6, 7), .NET |
| Free tier available | Yes (based on the free tier of underlying AWS resources) | Yes (28 instance-hours per day) | No |
| Storage options | Amazon S3, Amazon RDS, Amazon EFS, Amazon ElastiCache, Amazon DynamoDB | Cloud Storage, Cloud SQL, Memcache, Cloud Datastore | Cloud Storage, Cloud SQL, Memcache, Cloud Datastore |
| Available locations | US, EMEA, APAC | US, EMEA, APAC | US, EMEA, APAC |
| Application user authentication and authorization | No; must be developed within the application | Yes, with Firebase (multiple identity providers), Google Cloud Identity, OAuth 2.0, and OpenID | Yes, with Firebase (multiple identity providers), Google and G Suite accounts, OAuth 2.0, and OpenID |
| Task and message queues | Yes, using SQS | Yes, using Cloud Pub/Sub and the Task Queue API | Yes, using Cloud Pub/Sub and the Task Queue API |
| Application upgrades and A/B testing | Rolling update with scaling based on backend capacity; blue/green deployment with one-time traffic swap. Weighted round-robin is possible with a DNS-based approach. | Yes, with granular traffic splitting between versions | Yes, with granular traffic splitting between versions |
| Monitoring | Yes: health reporting, instance logs, and environment event stream | Yes, with Stackdriver (request/response and activity logs) and uptime checks | Yes, with Stackdriver (request/response and activity logs), uptime checks, and custom metrics and alerts for underlying Compute Engine resources |
| Networking | Can be placed into a VPC | No network controls; only IP endpoints exposed to the Internet | Can be placed into a VPC network |
| Pricing | Based on the cost of underlying AWS resources | Based on the chosen instance type, per hour | Based on three parameters: vCPU per core hour, memory per GB hour, and persistent disk per GB per month |
| Debugging | Yes, with AWS X-Ray | Yes, with Stackdriver | Yes, with Stackdriver |
AWS Elastic Beanstalk and App Engine flexible environment both allow you to create your own custom runtime. This feature allows you to use other programming languages or use a different version or implementation of the platform's standard runtimes.
AWS Elastic Beanstalk supports customization through custom platforms, which are Amazon Machine Images (AMIs) that contain the binaries needed to run your application. You create custom platforms with Packer, an open source tool for creating identical machine images for multiple platforms from a single source configuration. Using a Packer configuration template, you can customize operating systems, languages, and frameworks, as well as metadata and configuration options that you need to serve your application.
App Engine flexible environment supports customization through custom runtimes. To create a custom runtime, you create a Dockerfile with a base image of your choice, and then add the Docker commands that build your desired runtime environment. Your Dockerfile might include additional components like language interpreters or application servers. You can leverage any software that can service HTTP requests.
In both cases, you are responsible for ensuring that all components are compatible and working with expected performance.
AWS Elastic Beanstalk uses Packer and allows users to control the entire OS image. App Engine flexible environment builds only Docker containers, allowing users to control only the application code and its dependencies. An App Engine flexible environment user cannot control things like the OS kernel version on the underlying VMs.
Both AWS Elastic Beanstalk and App Engine support autoscaling. Autoscaling allows you to automatically increase or decrease the number of running backend instances based on your application's resource usage.
AWS Elastic Beanstalk autoscaling can be configured based on these scaling options:
- Launch configuration allows you to choose scaling triggers and describe parameters for them.
- Manual scaling allows you to set a minimum and maximum instance count, availability zones, and a scaling cool-down period.
- Automatic scaling allows you to set up metrics-based parameters. Supported triggers include network in/out, CPU utilization, disk read/write operations or bytes, latency, request count, and healthy/unhealthy host count.
- Time-based scaling allows you to scale instances in or out (setting a maximum, minimum, and/or desired number of instances) on a recurring schedule or for a planned one-time event.
App Engine standard environment offers three scaling options, which you select and parameterize in the service's configuration file:
- Manual scaling allows you to set the number of instances for the service at startup.
- Basic scaling allows you to set a maximum number of instances and an idle timeout, after which an instance is shut down once it has received its last request.
- Automatic scaling allows you to set minimum and maximum levels for the number of instances, latency, and concurrent connections for a service.
App Engine flexible environment offers two scaling options, also selected in the service's configuration file:
- Manual scaling works the same way that it does in App Engine standard environment.
- Automatic scaling allows you to set a minimum and maximum number of instances, a cool-down period, and target CPU utilization.
Application upgrades and A/B testing
You follow similar steps to deploy or upgrade an application in AWS Elastic Beanstalk and App Engine. You describe your application in a configuration file and then deploy the new version using a command-line tool. AWS Elastic Beanstalk also allows you to upload your application using the AWS Management Console.
On AWS Elastic Beanstalk, performing in-place updates can cause your application to suffer a short outage. To avoid downtime, you can perform a blue/green deployment, in which you deploy the new version to a separate environment and then swap environment URLs. You can serve only one active version per environment. Weighted round-robin is possible with a DNS-based approach.
App Engine allows you to roll out updates to the current version without downtime, creating a new version and switching to it. You can also configure your App Engine application to split traffic based on either the requester's IP address or a cookie. You can serve a number of versions per service per application.
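Cookie- or IP-based splitting works because the same key always hashes to the same point in the weight distribution, so each user consistently sees one version. A minimal sketch of that mechanism, our own simplification rather than App Engine's implementation:

```python
import hashlib

def choose_version(user_key, splits):
    """Deterministic traffic splitting: hash a stable key (a cookie or the
    requester's IP) into [0, 1) and map it onto the configured version
    weights, so a given user always lands on the same version."""
    digest = hashlib.sha256(user_key.encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2**64
    cumulative = 0.0
    for version, weight in splits:
        cumulative += weight
        if point < cumulative:
            return version
    return splits[-1][0]  # guard against floating-point rounding

splits = [("v1", 0.9), ("v2", 0.1)]
# The same key always maps to the same version:
print(choose_version("user-cookie-abc", splits) == choose_version("user-cookie-abc", splits))  # True
```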
For debugging, the AWS Elastic Beanstalk console allows you to run an AWS X-Ray daemon on the instances in your development environment. This daemon gathers data about the requests your application serves and about its calls to other services. You can construct a service map that helps you troubleshoot problems in your application and find possible ways to optimize it.
On Cloud Platform, you can use Stackdriver Debugger to inspect the state of an App Engine application at any code location without using logging statements and without stopping or slowing down your applications. Your users are not impacted during debugging. Using the debugger in production, you can capture the local variables and call-stack and link those variables back to a specific line location in your source code. You can use Stackdriver Debugger to analyze the production state of your application and understand the behavior of your code in production.
Monitoring of AWS Elastic Beanstalk environments in the AWS Management Console consists of basic metrics such as CPU utilization and network in/out. Amazon CloudWatch adds environment health as well as basic EC2 monitoring metrics for the underlying instances. You can also add alarms for each of these metrics. All EC2 instances generate logs that you can use to troubleshoot application problems.
Monitoring of App Engine applications in the Google Cloud Platform Console dashboard provides visibility into parameters such as total requests, CPU and memory utilization, latency, and instance uptime. Stackdriver allows you to create alerting for different parameters and stores logs.
AWS Elastic Beanstalk allows you to manage the connectivity of the AWS resources used by an environment. You can prompt Elastic Beanstalk to attach EC2 VMs to a specific VPC and configure security groups on those instances. This allows Elastic Beanstalk applications to achieve a level of network transparency similar to that of App Engine flexible environment.
Google App Engine standard environment abstracts networking configuration away completely. The only networking-related settings you work with apply to load balancing and to the custom domain name for your application's external address. You can also apply DoS protection by using a dos.yaml file to establish a network "blacklist". When deployed alongside your application, it prompts App Engine to serve an error page in response to requests from the prohibited IP ranges.
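A minimal dos.yaml, for example, lists the subnets to block, each with an optional description; the addresses below are placeholder values:

```yaml
blacklist:
- subnet: 192.0.2.1
  description: a single IP address
- subnet: 192.0.2.0/24
  description: an IPv4 subnet
- subnet: 2001:db8::/32
  description: an IPv6 subnet
```

Once the file is deployed with your application, App Engine serves the error page for requests from the listed ranges.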
Google App Engine flexible environment uses Compute Engine virtual machines (VMs) to host your application in Docker containers. You can:
- Configure VMs managed by App Engine flexible environment to be attached to VPC networks. These networks can reside in the same project as the flexible environment app or be shared across multiple projects.
- Connect VPC networks that contain App Engine flexible environment instances to other destinations by using Cloud VPN.
- Apply firewall rules to App Engine flexible environment instances by using instance tags.
- Map network ports on flexible environment instances to Docker container ports for debugging and profiling.
This flexibility helps when you're building App Engine flexible environment applications that require transparent connectivity to other services, whether those services run on GCP or elsewhere (on-premises or in another cloud).
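As a sketch, these settings live under the network section of the flexible environment's app.yaml; the network name, tag, and port below are placeholder values:

```yaml
network:
  name: my-network            # placeholder VPC network name
  subnetwork_name: my-subnet  # placeholder subnetwork
  instance_tag: flex-app      # tag used to target firewall rules
  forwarded_ports:
    - 8080                    # map this VM port to the container port
```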
AWS does not charge any additional fees for Elastic Beanstalk. You pay only for the underlying EC2 instances and other resources.
Google App Engine standard environment charges per minute of instance uptime. Billing begins when an instance starts and ends fifteen minutes after a manual instance shuts down, or fifteen minutes after a basic instance finishes processing its last request. Idle instances are not billed beyond the maximum number of idle instances set in your performance settings. There is also a free quota: 28 free instance-hours per day for automatic scaling modules and 9 free instance-hours per day for basic and manual scaling modules.
Google App Engine flexible environment charges for the resources of the virtual machine types that you specify. Cost is based on three parameters: vCPU (per core-hour), memory (per GB-hour), and persistent disk (per GB per month). These charges accrue by the minute, with a minimum charge of ten minutes of usage.
For CaaS, AWS offers Amazon EC2 Container Service (ECS), and Cloud Platform offers Google Kubernetes Engine.
Kubernetes Engine and Amazon ECS have very similar service models. In each service, you create a cluster of container nodes. Each node is a virtual machine instance, and each runs a node agent to signal its inclusion in the cluster. Each node also runs a container daemon, such as Docker, so that the node can run containerized applications. You create a Docker image that contains both your application files and the instructions for running your application, and then you deploy the application in your cluster.
Amazon ECS is developed and maintained by Amazon. Kubernetes Engine is built on Kubernetes, an open-source container management system.
At a high level, Amazon ECS's terminology and concepts map to those of Kubernetes Engine as follows:
|Feature||Amazon ECS||Kubernetes Engine|
|Cluster nodes||Amazon EC2 Instances||Compute Engine Instances|
|Supported daemons||Docker||Docker or rkt|
|Node agent||Amazon ECS Agent||Kubelet|
|Deployment sizing service||Service||Replication Controller|
|Command line tool||Amazon ECS CLI||gcloud and kubectl|
|Portability||Runs only on AWS||Runs wherever Kubernetes runs|
In both Amazon ECS and Kubernetes Engine, a cluster is a logical grouping of virtual machine nodes. In Amazon ECS, a cluster uses Amazon EC2 instances as nodes. To create a cluster, you simply provide a name, and Amazon ECS creates the cluster. However, this cluster is empty by default: you must launch container instances into it before you can launch your application.
In Kubernetes Engine, a cluster uses Compute Engine instances as nodes. To create a cluster, you first provide basic configuration details: a cluster name, the deployment zones, the Compute Engine machine types, and the cluster size. Kubernetes Engine then creates the cluster in the requested zone or zones.
Both Amazon ECS and Kubernetes Engine group sets of interdependent or related containers into higher-level service units. In Amazon ECS, these units are called tasks and are defined by task definitions. In Kubernetes Engine, these units are called pods, and are defined by a PodSpec. In both tasks and pods, containers are colocated and coscheduled, and run in a shared context with a shared IP address and port space.
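For illustration, a minimal pod manifest might look like the following; the pod and image names are placeholders. All containers listed under spec.containers share the pod's IP address and port space:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder pod name
spec:
  containers:
  - name: web
    image: gcr.io/my-project/my-app:1.0   # placeholder image
    ports:
    - containerPort: 8080
  - name: sidecar         # colocated, coscheduled helper container
    image: gcr.io/my-project/logger:1.0   # placeholder image
```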
Each node machine must run a container daemon to support containerized services. Amazon ECS supports Docker as a container daemon. Kubernetes Engine supports both Docker and rkt.
In Amazon ECS, each Amazon EC2 node runs an Amazon ECS agent that starts containers on behalf of Amazon ECS. Similarly, in Kubernetes Engine, each Kubernetes Engine node runs a kubelet that maintains the health and stability of the containers running on the node.
Amazon ECS does not provide a native mechanism for service discovery within a cluster. As a workaround, you can configure a third-party tool such as Consul to enable service discovery within a given cluster.
Kubernetes Engine clusters enable service discovery by way of the Kubernetes DNS add-on, which is enabled by default. Each Kubernetes service is assigned a virtual IP address that is stable for as long as the service exists. The DNS server watches the Kubernetes API for new services and then creates a set of DNS records for each. These records allow Kubernetes pods to perform name resolution of Kubernetes services automatically.
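For example, a Service manifest like the following (the names and ports are placeholders) receives a stable virtual IP and a DNS record, so other pods in the cluster can reach it by name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # resolvable in-cluster as my-service
spec:
  selector:
    app: my-app           # routes traffic to pods with this label
  ports:
  - port: 80              # port exposed by the service
    targetPort: 8080      # port the containers listen on
```

Pods in the same namespace can then connect to my-service directly; pods in other namespaces qualify the name with the namespace.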
Amazon ECS supports only its native scheduler.
Kubernetes Engine is built on top of Kubernetes, which has a pluggable scheduling architecture. As such, Kubernetes Engine is fully compatible with the open source Kubernetes scheduler, which can run in any environment that Kubernetes can run in.
On Amazon ECS, you can create a script to deploy tasks on each node. However, this behavior is not built into the service.
On Kubernetes Engine, you can use a DaemonSet to run a copy of a specific pod on every node in a cluster. When nodes are added or removed from the cluster, the DaemonSet automatically copies the pod to the new nodes or garbage collects the old nodes. When you delete the DaemonSet, the DaemonSet cleans up any pods it has created.
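A DaemonSet manifest wraps an ordinary pod template. The following sketch, with placeholder names and image (the apiVersion may differ on older cluster versions), runs one copy of an agent pod on every node:

```yaml
apiVersion: apps/v1       # may differ on older clusters
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent   # must match the selector above
    spec:
      containers:
      - name: agent
        image: gcr.io/my-project/node-agent:1.0   # placeholder image
```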
Because Amazon ECS disks are host-specific, a container that relies on a specific disk cannot be moved to a different host.
In contrast, Kubernetes Engine can mount disks dynamically on a node and assign the disks to a given pod automatically. You don’t need to run a pod on a specific node to use a specific disk.
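As a sketch, a pod can reference a Compute Engine persistent disk directly in its volume specification (the disk and image names below are placeholders); Kubernetes Engine mounts the disk on whichever node the pod is scheduled to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-app
spec:
  containers:
  - name: web
    image: gcr.io/my-project/my-app:1.0   # placeholder image
    volumeMounts:
    - name: data-volume
      mountPath: /data     # where the disk appears in the container
  volumes:
  - name: data-volume
    gcePersistentDisk:
      pdName: my-data-disk # placeholder: an existing persistent disk
      fsType: ext4
```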
Identity and access management
Amazon ECS is fully integrated with the AWS Identity and Access Management (IAM) service. Kubernetes Engine doesn’t currently support Cloud Platform's IAM service.
In Amazon ECS, you can achieve multi-tenancy by creating separate clusters, and then manually configuring AWS IAM to limit usage on each cluster.
In contrast, Kubernetes Engine supports namespaces, which allow users to logically group clusters and provide a scope to support multi-tenant clusters. Namespaces allow for:
- Creation of multiple user communities on a single cluster
- Delegation of authority over partitions of the cluster to trusted users
- Restriction of the amount of resources each community can consume
- Restriction of available resources to those resources that are pertinent to a specific user community
- Isolation of resources used by a given user community from other user communities on the cluster
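For example, you might create a namespace per team and attach a resource quota to it; the names and limits below are placeholder values:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # placeholder namespace name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    cpu: "10"               # total CPU the namespace may request
    memory: 20Gi            # total memory the namespace may request
    pods: "30"              # maximum number of pods
```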
In Amazon ECS, you can detect failures by using Amazon ELB health checks, which Amazon ELB performs over HTTP/TCP. To receive health checks, every containerized service must set up an HTTP server, even if the service does not otherwise need to listen on a TCP port. In addition, each service must be bound to an ELB, even if the service does not otherwise need to be load balanced.
In Kubernetes Engine, you can perform health checks by using readiness and liveness probes:
- Readiness probes allow you to check the state of a pod during initialization.
- Liveness probes allow you to detect and restart pods that are no longer functioning properly.
These probes are included in the Kubernetes API and can be configured as part of a PodSpec.
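In a PodSpec, each probe is declared per container. The following sketch (the image, paths, and ports are placeholders) configures an HTTP readiness probe and liveness probe:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: web
    image: gcr.io/my-project/my-app:1.0   # placeholder image
    readinessProbe:           # gates traffic until the pod is ready
      httpGet:
        path: /ready          # placeholder endpoint
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:            # restarts the container on failure
      httpGet:
        path: /healthz        # placeholder endpoint
        port: 8080
      periodSeconds: 10
```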
Because the Amazon ECS agent runs only on Amazon EC2 instances, Amazon ECS configurations effectively run only on AWS. In contrast, because Kubernetes Engine is built on Kubernetes, Kubernetes Engine configurations can run on any Kubernetes installation.
Amazon charges for the Amazon EC2 instances and Amazon EBS disk volumes you use in your deployment. There is no additional charge for Amazon ECS's container management service.
Cloud Platform charges for the Compute Engine instances and persistent disks you use in your deployment. In addition, Kubernetes Engine charges an hourly fee for cluster management. Clusters with five or fewer nodes do not incur this fee. See Kubernetes Engine pricing for more information.
Check out the other Google Cloud Platform for AWS Professionals articles: