Updated September 28, 2018
This article compares the storage services that Amazon and Google provide in their respective cloud environments. It discusses the following service types:
- Distributed object storage, or redundant key-value stores in which you can store data objects.
- Block storage, or virtual disk volumes that you can attach to virtual machine instances.
- File storage, or network-attached, file-server-based storage.
- Cool storage, or reduced-cost storage services designed for infrequently accessed data, such as backups.
- Cold (archival) storage, or storage services designed to store archival data for compliance or analysis purposes.
This article does not discuss databases or message queues.
Distributed object storage
Amazon Simple Storage Service (S3) and Google Cloud Storage are hosted services for storing and accessing large numbers of binary objects, or blobs, of varying sizes. Each service can be understood as a highly scalable key-value store, where the keys are strings and values are arbitrary binary objects.
This section discusses the Amazon S3 Standard storage class and Cloud Storage Multi-Regional and Regional storage classes. Amazon S3 and Cloud Storage both also offer lower-cost storage classes for infrequently accessed data and archival data. For information on these other storage classes, see the Cool storage and Cold or archival storage sections below.
Service model comparison
Cloud Storage and Amazon S3 have very similar service models. In both services, you store objects in a bucket. Each object within a bucket is identified by a unique key within that bucket, and each object has an associated metadata record. This metadata record contains information such as object size, date of last modification, and media type. If you have the appropriate permissions, you can view and modify some of this metadata. You can also add custom metadata.
Though buckets are key-value stores, the user experience for buckets is designed to be similar, though not identical, to that of filesystems. By convention, object keys are usually paths such as "/foo/bar.txt" or "/foo/subdir/baz.txt". Amazon S3 and Cloud Storage both also provide filesystem-like APIs. For example, the provided list method lists all object keys with a common prefix, not unlike ls -R would on a Unix-like filesystem.
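The prefix-based listing behavior can be sketched with a plain key-value map. This is an illustrative local model of the concept, not either service's actual API:

```python
# Illustrative sketch: a bucket modeled as a flat key-value store, with a
# list operation that filters keys by a common prefix -- roughly what the
# list APIs of Amazon S3 and Cloud Storage do over their object keys.
def list_keys(bucket, prefix):
    """Return all object keys in the bucket that share the given prefix."""
    return sorted(k for k in bucket if k.startswith(prefix))

bucket = {
    "/foo/bar.txt": b"...",
    "/foo/subdir/baz.txt": b"...",
    "/other/qux.txt": b"...",
}

print(list_keys(bucket, "/foo/"))
# ['/foo/bar.txt', '/foo/subdir/baz.txt']
```

In practice you would call each service's own list operation (for example, S3's ListObjectsV2 request with a Prefix parameter) rather than filtering keys client-side.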
In addition to their most obvious use, distributed object storage, both services can be used to host static web content and media.
Amazon S3's features and terminology map to those of Cloud Storage as follows:
| Feature | Amazon S3 | Cloud Storage |
|---------|-----------|---------------|
| Unit of deployment | Bucket | Bucket |
| Deployment identifier | Globally unique key | Globally unique key |
| File system emulation | Limited | Limited |
| Object lifecycle management | Yes | Yes |
| Update notifications | Event notifications | Cloud Pub/Sub notifications for Cloud Storage, Cloud Storage triggers for Cloud Functions, and object change notifications |
| Service classes | Standard, Standard-Infrequent Access, One Zone-Infrequent Access, Amazon Glacier* | Multi-Regional, Regional, Nearline, Coldline |
| Deployment locality | Regional | Regional and multi-regional |
| Pricing | Priced by amount of data stored per month, network egress, and number of common API requests | Priced by amount of data stored per month, network egress, and number of common API requests |

\* Amazon Glacier is a separate product from Amazon S3.
Amazon S3 and Cloud Storage both support object versioning, in which distinct versions of an object with a given key are stored under a distinct version ID. By enabling versioning, you can mitigate the risk of accidental data loss due to an object being overwritten.
In Cloud Storage, you can use preconditions to support conditional updates for PUT and DELETE operations. In a conditional update, the update request will succeed only if the object version being updated matches the object version specified in the request. This mechanism helps prevent the possibility of race conditions during updates. Amazon S3 does not support conditional updates.
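A conditional update can be sketched as a compare-and-swap on an object's generation number. The `Store` class below is a hypothetical local model of the mechanism; in Cloud Storage itself, the precondition is expressed with the `x-goog-if-generation-match` request header rather than a Python call:

```python
# Hypothetical local model of a generation-match precondition, the
# mechanism behind Cloud Storage conditional updates. All names here
# are illustrative, not the real API.
class PreconditionFailed(Exception):
    pass

class Store:
    def __init__(self):
        self._data = {}  # key -> (generation, value)

    def get(self, key):
        return self._data.get(key, (0, None))

    def put(self, key, value, if_generation_match=None):
        current_gen, _ = self._data.get(key, (0, None))
        # The write succeeds only if the caller's view of the object
        # matches the stored generation -- otherwise another writer
        # updated the object first, and this request must fail.
        if if_generation_match is not None and if_generation_match != current_gen:
            raise PreconditionFailed(
                "expected generation %d, found %d" % (if_generation_match, current_gen))
        self._data[key] = (current_gen + 1, value)
        return current_gen + 1

store = Store()
gen = store.put("foo/bar.txt", b"v1")                      # unconditional create
store.put("foo/bar.txt", b"v2", if_generation_match=gen)   # succeeds: no one else wrote
```

A second writer still holding the old generation would get `PreconditionFailed` instead of silently overwriting the newer data, which is how the race condition is avoided.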
Object lifecycle management
Amazon S3 and Cloud Storage both allow you to automate object migration or deletion according to user-specified lifecycle policies.
Update notifications

Amazon S3 and Cloud Storage both allow you to configure your buckets to issue notifications when objects are created, deleted, or updated. In Amazon S3, this feature is called event notifications. Cloud Storage has multiple services that provide similar functionality but differ in the intended endpoint of the notification:
- Cloud Pub/Sub notifications for Cloud Storage
- Cloud Storage triggers for Cloud Functions
- Cloud Storage's native object change notifications
Amazon S3 event notifications support three possible notification endpoints:
- An Amazon Simple Notification Service (SNS) topic.
- An Amazon Simple Queue Service (SQS) queue.
- An AWS Lambda function.
Similarly, GCP's various Cloud Storage notification services support the following endpoints:
- Cloud Pub/Sub notifications for Cloud Storage: a Cloud Pub/Sub topic.
- Cloud Storage triggers: a Cloud Functions function.
- Cloud Storage object change notifications: a target URL, or webhook, that handles the notification payload.
Service level agreement
Amazon and Google both provide uptime guarantees, and have policies in place for crediting customer accounts in the event that these guarantees are not met. Amazon defines the guarantees and policies for Amazon S3 Standard in the standard Amazon S3 service level agreement (SLA). Google defines the guarantees and policies for Cloud Storage in the Cloud Storage SLA.
Costs

Amazon S3 Standard
Amazon S3 Standard is priced by amount of data stored per month and by network egress. Pricing also varies by storage region. Amazon S3 Standard charges for common API requests.
Cloud Storage Multi-regional and Regional
Like Amazon S3 Standard, Cloud Storage Multi-regional and Regional are priced by amount of data stored per month and by network egress, and pricing varies by storage region. Cloud Storage Multi-regional and Regional also charge for common API requests.
Block storage

GCP and Amazon Web Services both offer block storage options as part of their compute services. Google Compute Engine provides persistent disks, and Amazon Elastic Compute Cloud (EC2) provides Elastic Block Store (EBS). Each service has several block storage types that cover a range of price and performance characteristics.
Service model comparison
Compute Engine persistent disks and Amazon EBS are very similar in most ways. In both cases, disk volumes are network-attached, though both Compute Engine and Amazon EC2 also provide the ability to locally attach a disk if necessary. While networked disks have higher operational latency and less throughput than their locally attached counterparts, they have many benefits as well, including built-in redundancy, snapshotting, and ease of disk detachment and reattachment.
Compute Engine persistent disks map to Amazon EBS as follows:
| Feature | Amazon EBS | Compute Engine |
|---------|------------|----------------|
| Volume types | EBS Provisioned IOPS SSD, EBS General Purpose SSD, Throughput Optimized HDD, Cold HDD | Zonal standard persistent disks (HDD), regional persistent disks, zonal SSD persistent disks, regional SSD persistent disks |
| Volume locality rules | Must be in same zone as the instance to which it is attached | Must be in same zone as the instance to which it is attached |
| Volume attachment | Can be attached to only one instance at a time | Read-write volumes: can be attached to only one instance at a time. Read-only volumes: can be attached to multiple instances |
| Attached volumes per instance | Up to 40 | Up to 128 |
| Maximum volume size | 16 TiB | 64 TB |
| Redundancy | Zonal | Zonal or multi-zonal, depending on volume type |
Compute Engine's locally attached disks compare to those of Amazon EC2 as follows:
| Feature | Amazon EC2 | Compute Engine |
|---------|------------|----------------|
| Service name | Instance store (also known as ephemeral store) | Local SSD |
| Volume attachment | Tied to instance type | Can be attached to any non-shared-core instance |
| Device type | Varies by instance type | SSD |
| Attached volumes per instance | Varies by instance type | Up to 8 |
| Storage capacity | Varies by instance type | 375 GB per volume |
Volume attachment and detachment
After creating a disk volume, you can attach the volume to a Compute Engine or Amazon EC2 instance. The instance can then mount and format the disk volume like any other block device. Similarly, you can unmount and detach a volume from an instance, allowing it to be reattached to other instances.
An Amazon EBS volume can be attached to only one Amazon EC2 instance at a time. Compute Engine persistent disks in read/write mode have the same limitation. However, Compute Engine persistent disks in read-only mode can be attached to multiple instances simultaneously.
In Amazon EC2, you can attach up to 40 disk volumes to a Linux instance. In Compute Engine, you can attach up to 128 disk volumes.
Snapshots

Compute Engine and Amazon EBS both allow users to capture and store snapshots of disk volumes. These snapshots can be used to create new volumes at a later time.
In both services, snapshots are differential. The initial snapshot creates a full copy of the volume, but subsequent snapshots only copy the blocks that have changed since the previous snapshot.
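The differential behavior described above can be sketched in a few lines. This is a conceptual model of block-level deltas, not either vendor's actual implementation (which, among other things, also handles deleted blocks and snapshot chains more carefully):

```python
# Conceptual sketch of differential snapshots: the first snapshot copies
# every block; each later snapshot stores only the blocks that changed
# since the previous state.
def take_snapshot(volume, previous_state):
    """volume maps block index -> block bytes. Returns the stored delta
    (or a full copy if there is no previous state)."""
    if previous_state is None:
        return dict(volume)
    return {i: b for i, b in volume.items() if previous_state.get(i) != b}

def restore(snapshots):
    """Replay a chain of snapshots, oldest first, to rebuild the volume."""
    state = {}
    for snap in snapshots:
        state.update(snap)
    return state

vol = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
snap1 = take_snapshot(vol, None)    # full copy: all 3 blocks stored
vol[1] = b"BBBB"                    # one block changes
snap2 = take_snapshot(vol, snap1)   # delta: only block 1 stored
```

Restoring from the chain (`restore([snap1, snap2])`) reproduces the current volume, while `snap2` itself holds only the single changed block.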
Amazon EBS snapshots are available in only one region by default, and must be explicitly copied to other regions if needed. This extra step incurs additional data transfer charges. In contrast, Compute Engine persistent disk snapshots are global, and can be used in any region without additional operations or charges.
Disk performance

For both Compute Engine persistent disks and Amazon EBS, disk performance depends on several factors, including:
- Volume type: Each service offers several distinct volume types. Each has its own set of performance characteristics and limits.
- Available bandwidth: The throughput of a networked volume depends on the network bandwidth available to the Compute Engine or Amazon EC2 instance to which it is attached.
This section discusses additional performance details for each service.
Amazon EBS

Amazon EC2 instance types vary widely in networking performance. Instance types with a small number of cores, such as the T2 instance type, might not have sufficient network capacity to achieve the advertised maximum IOPS or throughput for a given Amazon EBS disk type. See Amazon EC2 Instance Configuration for more information.
In addition, some Amazon EC2 instance types are EBS-optimized, which means that they have a dedicated connection to their attached Amazon EBS volumes. If you use an Amazon EC2 instance type that is not EBS-optimized, you have no guarantees as to how much network capacity will be available between the instance and its EBS volumes at a given time.
Compute Engine persistent disks
Compute Engine allocates throughput on a per-core basis. You get 2 Gbps of network egress per virtual CPU core, with a maximum of 16 Gbps for a single instance. Because Compute Engine has a data redundancy factor of 3.3x, each logical write actually requires 3.3 writes' worth of network bandwidth. As such, machine types with a small number of cores might not have sufficient network capacity to achieve the advertised maximum IOPS or throughput for a given persistent disk type. See Network egress caps for more information.
For each Compute Engine disk type, the total I/O available is related to the total size of the volumes that are connected to a given instance. For example, if you have two 2TB standard persistent disks connected to an instance, your total available I/O comes out to 3072 read IOPS and 6144 write IOPS.
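Assuming the per-GB scaling implied by the example above (0.75 read IOPS and 1.5 write IOPS per GB for standard persistent disks, which is what reproduces the 3072/6144 figures), the arithmetic can be checked as follows. Consult the current Compute Engine documentation for authoritative limits:

```python
# Reproduces the arithmetic in the example above. The per-GB rates are
# the values implied by the 3072/6144 figures for standard persistent
# disks; they are assumptions for illustration, not authoritative limits.
READ_IOPS_PER_GB = 0.75
WRITE_IOPS_PER_GB = 1.5

def standard_pd_iops(total_gb):
    """Available I/O scales with total attached standard PD capacity."""
    return int(total_gb * READ_IOPS_PER_GB), int(total_gb * WRITE_IOPS_PER_GB)

# Two 2 TB (2048 GB) standard persistent disks attached to one instance:
read_iops, write_iops = standard_pd_iops(2 * 2048)
print(read_iops, write_iops)  # 3072 6144
```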
Locally attached disks
In addition to standard networked block storage, Amazon EC2 and Compute Engine both allow users to use disks that are locally attached to the physical machine running the instance. These local disks offer much faster transfer rates. However, unlike networked block storage, they are not redundant and cannot be snapshotted.
On Amazon EC2, local disks are called instance store or ephemeral store. These disks can be either HDD or SSD, depending on the instance type family. The number and size of these disks depends on the specific instance type and is not adjustable.
On Compute Engine, local disks are referred to as local SSD. Local SSDs are, as their name implies, SSD-only, and can be attached to almost any machine type, with the exception of shared-core types like f1-micro and g1-small. Local SSDs have a fixed size of 375 GB per disk, and a maximum of 8 can be attached to a single instance.
Compute Engine migrates local SSDs automatically and seamlessly when their host machines are down for maintenance. See Live Migration for more information.
While Amazon EC2 instance store comes at no additional cost, Compute Engine local SSDs do incur additional expenses. See Local SSD pricing for details.
Costs

Amazon EBS

Amazon EBS volumes are priced per GB per month and, for some volume types, per provisioned IOPS per month. In addition, Amazon EBS pricing varies by region. See Amazon EBS pricing for details.
Compute Engine persistent disks
Compute Engine persistent disks and disk snapshots are priced per GB per month. See Compute Engine Pricing for details.
File storage

GCP and AWS both offer file storage options as part of their compute services. GCP provides Cloud Filestore, and AWS provides Elastic File System (EFS). As of September 2018, Cloud Filestore is in beta.
AWS's file storage services map to those of GCP as follows:
| Feature | Amazon EFS | Cloud Filestore |
|---------|------------|-----------------|
| Tiers/modes | General Purpose and Max I/O. Each supports either Bursting Throughput or Provisioned Throughput mode. | Standard and Premium |
| Encryption | Can be enabled at rest and in transit | Encrypted at rest and in transit by default |
| Max number of mounts per drive | N/A (no published data) | 500 |
| Max file size | 47.9 TB | 16 TB |
| Max write throughput | Depends on file system size and on whether you are using Bursting Throughput mode or Provisioned Throughput mode | Standard: 100 MB/s for storage up to 10 TB, 180 MB/s for storage above 10 TB. Premium: 700 MB/s |
| Max read throughput | Depends on file system size and on whether you are using Bursting Throughput mode or Provisioned Throughput mode | Standard: 100 MB/s for storage up to 10 TB, 120 MB/s for storage above 10 TB. Premium: 350 MB/s |
| Max IOPS | N/A (no published data) | Standard: 5,000; Premium: 30,000 |
| Size limit | N/A (no published data) | 63.9 TB |
| Pricing | Priced by average number of GB used per month and by region of deployment. If using Provisioned Throughput mode, also priced by provisioned MB of throughput. | Priced per allocated GB (billed per second), by region of deployment, and by service tier |

Note: Amazon EFS supports a subset of NFSv4's features. For details, see Amazon EFS Limits.
When using Amazon EFS, you deploy your file system within a specific region. Amazon EC2 instances within that region can then access that file system. Using Amazon VPC with AWS Direct Connect, you can also mount your file system to an on-premises client machine.
When using Cloud Filestore, you deploy instances within a specific zone. You can mount your Cloud Filestore instance to a Compute Engine instance in any GCP zone as long as both instances are in the same network. As of September 2018, Cloud Filestore does not provide a way to mount volumes from outside your GCP environment.
Costs

Amazon EFS

Amazon EFS is priced by average number of GB used per month. If you use the Provisioned Throughput mode, you are also charged by provisioned MB of throughput. Pricing for Amazon EFS also varies according to the region in which your Amazon EFS deployment is located.
Cloud Filestore

Cloud Filestore is priced by the number of GB allocated (billed per second) and by service tier (standard or premium). Pricing also varies according to the region in which your Cloud Filestore instance is deployed.
Cool storage

Cloud Storage and Amazon S3 each offer a reduced-cost storage class for data that does not require the availability of the standard storage tier. Cloud Storage offers Cloud Storage Nearline, and Amazon S3 offers Standard - Infrequent Access (Standard-IA) and One Zone - Infrequent Access (One Zone-IA).
AWS's cool storage services map to those of GCP as follows:
| Feature | Amazon S3 Standard-IA | Amazon S3 One Zone-IA | Cloud Storage Nearline |
|---------|-----------------------|-----------------------|------------------------|
| First-byte latency | Milliseconds (identical to Amazon S3 Standard) | Milliseconds (identical to Amazon S3 Standard) | Milliseconds (identical to Cloud Storage Regional) |
| Minimum storage period | 30 days | 30 days | 30 days |
| SLA | Yes (Amazon S3 SLA) | Yes (Amazon S3 SLA) | Yes (Cloud Storage SLA) |
| Pricing | Priced by amount of data stored per month, network egress, minimum object size, storage period, storage region, and number of common API requests | Priced by amount of data stored per month, network egress, minimum object size, storage period, storage region, and number of common API requests | Priced by amount of data stored per month, network egress, storage period, storage region, and number of common API requests |
Amazon S3 Standard-IA and One Zone-IA
Amazon S3 Standard-IA and One Zone-IA are both priced by amount of data stored per month and by network egress. Pricing also varies by storage region. Because these storage classes have a minimum billable object size of 128 KB, objects smaller than 128 KB are charged as if they were 128 KB in size. If you delete or modify your data before the minimum storage period, you will be charged for the remainder of the period. For example, if you delete an object 5 days after storing the object, you will be charged for the remaining 25 days of storage for that object.
In addition, Amazon S3 Standard-IA and One Zone-IA charge for common API requests, and for data retrieval on a per-GB basis.
Cloud Storage Nearline
Cloud Storage Nearline is priced by amount of data stored per month and by network egress. Pricing also varies by storage region. If you delete or modify your data before the minimum storage period, you will be charged for the remainder of the period. For example, if you delete an object 5 days after storing the object, you will be charged for the remaining 25 days of storage for that object.
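The minimum-storage-period charge works out as a simple proration, sketched below. The 30- and 90-day minimums are the periods this article lists for Nearline and Coldline respectively:

```python
# Sketch of the minimum-storage-period charge described above: deleting
# or modifying an object before the minimum period ends incurs storage
# charges for the remainder of that period.
def early_delete_days_charged(days_stored, minimum_days=30):
    """Days of storage still billed after an early deletion."""
    return max(0, minimum_days - days_stored)

print(early_delete_days_charged(5))      # 25  (Nearline, 30-day minimum)
print(early_delete_days_charged(5, 90))  # 85  (Coldline, 90-day minimum)
```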
Cloud Storage Nearline also charges for common API requests.
For more information about Cloud Storage Nearline pricing, see Cloud Storage Nearline pricing.
Cold or archival data storage
Google and Amazon each offer cold storage options for data that does not need to be accessed regularly or retrieved quickly. Cloud Storage offers an additional class called Cloud Storage Coldline, and Amazon offers Amazon Glacier.
AWS's cold storage services map to those of GCP as follows:
| Feature | Amazon Glacier | Cloud Storage Coldline |
|---------|----------------|------------------------|
| First-byte latency | Minutes to hours | Milliseconds (identical to Cloud Storage Regional) |
| Retrieval types | Expedited, Standard, Bulk. Expedited users can also choose between On-Demand and Provisioned retrieval. | N/A |
| Minimum storage period | 90 days | 90 days |
| SLA | No | Yes (Cloud Storage SLA) |
| Pricing | Priced by amount of data stored per month, size of files stored, retrieval type, number of retrieval requests, network egress, storage region, storage period, and number of common API requests | Priced by amount of data stored per month, network egress, storage region, storage period, and number of common API requests |
Amazon Glacier

Amazon Glacier is priced by the following:
- Amount of data stored per month
- Size of files stored
- Retrieval type
- Number of retrieval requests
- Network egress
- Storage region
- When using the Expedited retrieval type, choice of On-Demand or Provisioned retrieval
If you delete or modify your data before the minimum storage period, you will be charged for the remainder of the period. For example, if you delete an object 5 days after storing the object, you will be charged for the remaining 85 days of storage for that object.
As with other Amazon S3 storage classes, Amazon Glacier also charges for common API requests.
For more information about Amazon Glacier pricing, see Amazon Glacier pricing.
Cloud Storage Coldline
Cloud Storage Coldline is priced by amount of data stored per month and by network egress. Pricing also varies by storage region. If you delete or modify your data before the minimum storage period, you will be charged for the remainder of the period. For example, if you delete an object 5 days after storing the object, you will be charged for the remaining 85 days of storage for that object.
Cloud Storage Coldline also charges for common API requests.
For more information about Cloud Storage Coldline pricing, see Cloud Storage Coldline pricing.
Check out the other GCP for AWS Professionals articles.