Compute Engine offers several storage options for your VM instances. Each of the following storage options has unique price and performance characteristics:
- Persistent Disk volumes provide high-performance and redundant network storage. Each Persistent Disk volume is striped across hundreds of physical disks.
  - By default, VMs use zonal persistent disks, which store your data on volumes located within a single zone, such as us-west1-c.
  - You can also create regional persistent disks, which synchronously replicate data between disks located in two zones and provide protection if a zone becomes unavailable.
- Hyperdisk volumes offer the fastest redundant network storage for Compute Engine, with configurable performance and volumes that can be dynamically resized.
- Local SSDs are physical drives attached directly to the same server as your VM. They can offer better performance, but are ephemeral.
- You can also use Cloud Storage buckets and Filestore with your VMs.
For cost comparisons, see disk pricing. If you are not sure which option to use, the most common solution is to add a persistent disk to your instance.
Introduction
By default, each Compute Engine instance has a single boot disk that contains the operating system. The boot disk data is typically stored on a Persistent Disk (PD) volume. When your applications require additional storage space, you can provision one or more of the following storage volumes to your instance.
To learn more about each storage option, review the following table:
Storage option | Zonal standard PD | Regional standard PD | Zonal balanced PD | Regional balanced PD | Zonal SSD PD | Regional SSD PD | Zonal extreme PD | Hyperdisk Extreme | Local SSDs | Cloud Storage buckets |
---|---|---|---|---|---|---|---|---|---|---|
Storage type | Efficient and reliable block storage | Efficient and reliable block storage with synchronous replication across two zones in a region | Cost-effective and reliable block storage | Cost-effective and reliable block storage with synchronous replication across two zones in a region | Fast and reliable block storage | Fast and reliable block storage with synchronous replication across two zones in a region | Highest performance Persistent Disk block storage option with customizable IOPS | Fastest block storage option with customizable IOPS | High performance local block storage | Affordable object storage |
Minimum capacity per disk | 10 GiB | 200 GiB | 10 GiB | 10 GiB | 10 GiB | 10 GiB | 500 GiB | 64 GiB | 375 GiB | n/a |
Maximum capacity per disk | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 375 GiB | n/a |
Capacity increment | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | Depends on the machine type† | n/a |
Maximum capacity per instance | 257 TiB* | 257 TiB* | 257 TiB* | 257 TiB* | 257 TiB* | 257 TiB* | 257 TiB* | 257 TiB* | 9 TiB | Almost infinite |
Scope of access | Zone | Zone | Zone | Zone | Zone | Zone | Zone | Zone | Instance | Global |
Data redundancy | Zonal | Multi-zonal | Zonal | Multi-zonal | Zonal | Multi-zonal | Zonal | Zonal | None | Regional, dual-regional or multi-regional |
Encryption at rest | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Custom encryption keys | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes |
How-to | Add a standard persistent disk | Add a regional standard persistent disk | Add a balanced persistent disk | Add a regional balanced persistent disk | Add an SSD persistent disk | Add a regional SSD persistent disk | Add an extreme persistent disk | Add a Hyperdisk | Add a local SSD | Connect a bucket |
In addition to the storage options listed in the preceding table, Google Cloud offers the following storage services for your instances:
- File services: You can create a file server or distributed file system on Compute Engine to use as a network file system with NFSv3 and SMB3 capabilities.
- RAM disk: You can mount a RAM disk within VM instance memory to create a block storage volume with low latency and high throughput.
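For example, a RAM disk is typically created with tmpfs. The following is a minimal sketch; the mount point and size are placeholder values:

```sh
# Create a mount point and mount a 4 GiB tmpfs RAM disk (example values).
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk

# Verify the mount.
df -h /mnt/ramdisk
```

Keep in mind that, like local SSDs, a RAM disk is ephemeral: its contents are lost when the instance stops or restarts.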
Block storage resources have different performance characteristics. Consider your storage size and performance requirements when determining the correct block storage type for your VM instances.
For information about performance limits for each disk type, see:
- Persistent Disk performance limits
- Local SSD performance limits
- Hyperdisk Extreme performance limits
Persistent disks created in multi-writer mode have specific IOPS and throughput limits. For details, see performance of persistent disks in multi-writer mode.
Persistent disks
Persistent disks are durable network storage devices that your instances can access like physical disks in a desktop or a server. The data on each persistent disk is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance.
Persistent disks are located independently from your virtual machine (VM) instances, so you can detach or move persistent disks to keep your data even after you delete your instances. Persistent disk performance scales automatically with size, so you can resize your existing persistent disks or add more persistent disks to an instance to meet your performance and storage space requirements.
Persistent disk types
When you configure a persistent disk, you can select one of the following disk types:
- Standard persistent disks (pd-standard)
  - Suitable for large data processing workloads that primarily use sequential I/Os.
  - Backed by standard hard disk drives (HDD).
- Balanced persistent disks (pd-balanced)
  - An alternative to performance (pd-ssd) persistent disks.
  - Balance of performance and cost. For most VM shapes, except very large ones, these disks have the same maximum IOPS as SSD persistent disks and lower IOPS per GB. This disk type offers performance levels suitable for most general-purpose applications at a price point between that of standard and performance (pd-ssd) persistent disks.
  - Backed by solid-state drives (SSD).
- Performance (SSD) persistent disks (pd-ssd)
  - Suitable for enterprise applications and high-performance databases that require lower latency and more IOPS than standard persistent disks provide.
  - Designed for single-digit millisecond latencies; the observed latency is application specific.
  - Backed by solid-state drives (SSD).
- Extreme persistent disks (pd-extreme)
  - Offer consistently high performance for both random access workloads and bulk throughput.
  - Designed for high-end database workloads.
  - Allow you to provision the target IOPS.
  - Backed by solid-state drives (SSD).
  - Available with a limited number of machine types.
If you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is pd-standard.
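For example, you can specify the disk type explicitly when creating a disk with the gcloud CLI. The following is a minimal sketch; the disk name, VM name, size, and zone are placeholder values:

```sh
# Create a 100 GiB SSD persistent disk (pd-ssd) in a specific zone.
gcloud compute disks create my-data-disk \
    --type=pd-ssd \
    --size=100GB \
    --zone=us-west1-c

# Attach the new disk to an existing VM in the same zone.
gcloud compute instances attach-disk my-vm \
    --disk=my-data-disk \
    --zone=us-west1-c
```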
For information about machine type support, refer to the following:
Durability of persistent disks
Disk durability represents the probability of data loss, by design, for a typical disk in a typical year, using a set of assumptions about hardware failures, the likelihood of catastrophic events, isolation practices and engineering processes in Google data centers, and the internal encodings used by each disk type. Persistent disk data loss events are extremely rare and have historically been the result of coordinated hardware failures, software bugs, or a combination of the two. Google also takes many steps to mitigate the industry-wide risk of silent data corruption. Human error by a Google Cloud customer, such as when a customer accidentally deletes a disk, is outside the scope of persistent disk durability.
There is a very small risk of data loss occurring with a regional persistent disk due to its internal data encodings and replication. Regional persistent disks provide twice as many replicas as zonal persistent disks, with their replicas distributed between two zones in the same region, so they provide high availability and can be used for disaster recovery if an entire data center is lost and cannot be recovered (although that has never happened). The additional replicas in a second zone can be accessed immediately if a primary zone becomes unavailable during a long outage.
Note that durability is in the aggregate for each disk type, and does not represent a financially-backed service level agreement (SLA).
The table below shows durability for each disk type's design. 99.999% durability means that with 1,000 disks, you would likely go a hundred years without losing a single one.
Zonal standard persistent disk | Zonal balanced persistent disk | Zonal SSD persistent disk | Zonal extreme persistent disk | Regional standard persistent disk | Regional balanced persistent disk | Regional SSD persistent disk |
---|---|---|---|---|---|---|
Better than 99.99% | Better than 99.999% | Better than 99.999% | Better than 99.9999% | Better than 99.999% | Better than 99.9999% | Better than 99.9999% |
Zonal persistent disks
Ease of use
Compute Engine handles most disk management tasks for you so that you do not need to deal with partitioning, redundant disk arrays, or subvolume management. Generally, you don't need to create larger logical volumes, but you can extend your secondary attached persistent disk capacity to 257 TB per instance and apply these practices to your persistent disks if you want. You can save time and get the best performance if you format your persistent disks with a single file system and no partition tables.
If you need to separate your data into multiple unique volumes, create additional disks rather than dividing your existing disks into multiple partitions.
When you require additional space on your persistent disks, resize your disks rather than repartitioning and formatting.
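For example, a disk can be resized in place with the gcloud CLI, after which you grow the file system rather than repartitioning. The names, size, and device path below are placeholder values and assume an ext4 file system with no partition table:

```sh
# Grow an existing persistent disk to 500 GiB (example values).
gcloud compute disks resize my-data-disk \
    --size=500GB \
    --zone=us-west1-c

# On the VM, extend the file system to use the new capacity,
# assuming ext4 directly on the device with no partition table.
sudo resize2fs /dev/sdb
```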
Performance
Persistent disk performance is predictable and scales linearly with provisioned capacity until the limits for an instance's provisioned vCPUs are reached. For more information about performance scaling limits and optimization, see Configure disks to meet performance requirements.
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD or extreme persistent disks. SSD persistent disks are designed for single-digit millisecond latencies. Observed latency is application specific.
Compute Engine optimizes performance and scaling on persistent disks automatically. You don't need to stripe multiple disks together or pre-warm disks to get the best performance. When you need more disk space or better performance, resize your disks and possibly add more vCPUs to add more storage space, throughput, and IOPS. Persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has.
For boot devices, you can reduce costs by using a standard persistent disk. Small, 10 GB persistent disks can work for basic boot and package management use cases. However, to ensure consistent performance for more general use of the boot device, use a balanced persistent disk as your boot disk.
Each persistent disk write operation contributes to the cumulative network egress traffic for your instance. This means that persistent disk write operations are capped by the network egress cap for your instance.
Reliability
Persistent disks have built-in redundancy to protect your data against equipment failure and to ensure data availability through datacenter maintenance events. Checksums are calculated for all persistent disk operations, so we can ensure that what you read is what you wrote.
Additionally, you can create snapshots of persistent disks to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running instances.
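For example, a snapshot can be created from a disk while it stays attached to a running VM; the disk, zone, and snapshot names below are placeholders:

```sh
# Snapshot a persistent disk that is attached to a running VM.
gcloud compute disks snapshot my-data-disk \
    --zone=us-west1-c \
    --snapshot-names=my-data-disk-snapshot
```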
Multi-writer mode
You can attach an SSD persistent disk in multi-writer mode to up to two N2 VMs simultaneously so that both VMs can read and write to the disk. Persistent disks in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed Network File System (NFS) and similar highly available services. However, persistent disks with multi-writer mode require specialized file systems such as GlusterFS or GFS2. Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage. For more information about the best practices when sharing persistent disks between VMs, see Best practices. If you require a fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.
To enable multi-writer mode for new persistent disks, create a new persistent disk and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API. For more information, see Share persistent disks between VMs.
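A minimal sketch of creating an SSD persistent disk in multi-writer mode and attaching it to two N2 VMs follows; the disk name, VM names, and zone are placeholders:

```sh
# Create an SSD persistent disk with multi-writer mode enabled.
# Depending on your gcloud version, this flag might require the
# beta command group (gcloud beta compute disks create).
gcloud compute disks create my-shared-disk \
    --type=pd-ssd \
    --size=100GB \
    --zone=us-central1-a \
    --multi-writer

# Attach the same disk in read-write mode to two N2 VMs.
gcloud compute instances attach-disk n2-vm-1 --disk=my-shared-disk --zone=us-central1-a
gcloud compute instances attach-disk n2-vm-2 --disk=my-shared-disk --zone=us-central1-a
```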
Persistent disk encryption
Compute Engine automatically encrypts your data before it travels outside of your instance to persistent disk storage space. Each persistent disk remains encrypted either with system-defined keys or with customer-supplied keys. Google distributes persistent disk data across multiple physical disks in a manner that users do not control.
When you delete a persistent disk, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible.
If you want to control the encryption keys that are used to encrypt your data, create your disks with your own encryption keys.
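For example, you can supply a customer-managed encryption key from Cloud KMS when creating a disk. The key resource path below is a placeholder; substitute your own project, key ring, and key names:

```sh
# Create a disk encrypted with a customer-managed Cloud KMS key.
gcloud compute disks create my-encrypted-disk \
    --size=100GB \
    --zone=us-west1-c \
    --kms-key=projects/my-project/locations/us-west1/keyRings/my-ring/cryptoKeys/my-key
```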
Restrictions
- You cannot attach a persistent disk to an instance in another project.
- You can attach a balanced persistent disk to a maximum of 10 VM instances in read-only mode.
- For custom machine types or predefined machine types with a minimum of 1 vCPU, you can attach up to 128 persistent disks.
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.
Most instances can have up to 128 persistent disks and up to 257 TB of total persistent disk space attached. Total persistent disk space for an instance includes the size of the boot persistent disk.
Shared-core machine types are limited to 16 persistent disks and 3 TB of total persistent disk space.
Creating logical volumes larger than 64 TB might require special consideration. For more information about larger logical volume performance, see logical volume size.
Regional persistent disks
Regional persistent disks have storage qualities that are similar to zonal persistent disks. However, regional persistent disks provide durable storage and replication of data between two zones in the same region.
If you are designing robust systems or high availability services on Compute Engine, use regional persistent disks combined with other best practices such as backing up your data using snapshots. Regional persistent disks are also designed to work with regional managed instance groups.
In the unlikely event of a zonal outage, you can usually failover your workload running on regional persistent disks to another zone by using the --force-attach flag. The --force-attach flag lets you attach the regional persistent disk to a standby VM instance even if the disk can't be detached from the original VM due to its unavailability. To learn more, see Regional persistent disk failover. You cannot force attach a zonal persistent disk to an instance.
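A minimal failover sketch, assuming the regional disk and a standby VM already exist; the names, region, and zone are placeholders:

```sh
# Force-attach a regional persistent disk to a standby VM in the
# surviving zone, even if it can't be detached from the original VM.
gcloud compute instances attach-disk standby-vm \
    --disk=my-regional-disk \
    --disk-scope=regional \
    --zone=us-west1-b \
    --force-attach
```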
Performance
Regional persistent disks are designed for workloads that require a lower Recovery Point Objective (RPO) and Recovery Time Objective (RTO) compared to using persistent disk snapshots.
Regional persistent disks are an option when write performance is less critical than data redundancy across multiple zones.
Like zonal persistent disks, regional persistent disks can achieve greater IOPS and throughput performance on instances with a greater number of vCPUs. For more information about this and other limitations, see Configure disks to meet performance requirements.
When you need more disk space or better performance, you can resize your regional disks to add more storage space, throughput, and IOPS.
Reliability
Compute Engine replicates data of your regional persistent disk to the zones you selected when you created your disks. The data of each replica is spread across multiple physical machines within the zone to ensure redundancy.
Similar to zonal persistent disks, you can create snapshots of persistent disks to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running instances.
Restrictions
- You can attach regional Persistent Disk only to VMs that use E2, N1, N2, and N2D machine types.
- You can't use regional Persistent Disk as boot disks.
- When using read-only mode, you can attach a regional balanced Persistent Disk to a maximum of 10 VM instances.
- The minimum size of a regional standard Persistent Disk is 200 GiB.
- You can only increase the size of regional Persistent Disk; you can't decrease its size.
- Regional Persistent Disk volumes have different performance characteristics than zonal Persistent Disk volumes. For more information, see Block storage performance.
- If you create a regional Persistent Disk by cloning a zonal disk, then the two zonal replicas aren't fully in sync at the time of creation. After creation, you can use the regional disk clone within 3 minutes, on average. However, you might need to wait for tens of minutes before the disk reaches a fully replicated state and the recovery point objective (RPO) is close to zero. Learn how to check if your regional Persistent Disk is fully replicated.
Hyperdisks
Google Cloud Hyperdisk Extreme for Compute Engine offers the fastest block storage available. It is suitable for high-end workloads that need the highest throughput and IOPS.
Hyperdisk Extreme volumes let you independently tune the capacity and IOPS for your workloads. Storage size and performance are decoupled from instance type and size, providing more flexibility.
Hyperdisk Extreme volumes are created and managed like Persistent Disk, with the additional ability to set the provisioned IOPS level and change that value at any time. There is no direct migration path from Extreme Persistent Disk to Hyperdisk Extreme volumes. Instead, you can create a snapshot and restore the snapshot to a new Hyperdisk Extreme volume.
For more information about Hyperdisks, see About Hyperdisks.
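A sketch of the snapshot-and-restore path from an extreme persistent disk to a Hyperdisk Extreme volume follows; the disk names, zone, size, and IOPS values are placeholders:

```sh
# Snapshot the existing extreme persistent disk.
gcloud compute disks snapshot my-extreme-disk \
    --zone=us-central1-a \
    --snapshot-names=my-extreme-disk-snapshot

# Restore the snapshot to a new Hyperdisk Extreme volume with
# provisioned IOPS (example values).
gcloud compute disks create my-hyperdisk \
    --type=hyperdisk-extreme \
    --size=500GB \
    --zone=us-central1-a \
    --source-snapshot=my-extreme-disk-snapshot \
    --provisioned-iops=50000
```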
Hyperdisk encryption
Compute Engine automatically encrypts your data upon writing to a Hyperdisk volume.
Data persistence on Hyperdisk
Disk durability represents the probability of data loss, by design, for a typical disk in a typical year. Durability is calculated using a set of assumptions about hardware failures, such as:
- The likelihood of catastrophic events
- Isolation practices
- Engineering processes in Google data centers
- The internal encodings used by each disk type
Hyperdisk Extreme offers greater than 99.9999% durability.
Local SSDs
Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. Each local SSD is 375 GB in size, but you can attach multiple local SSD partitions to your instance, depending on the number of vCPUs.
Create an instance with Local SSDs when you need a fast scratch disk or cache and don't want to use instance memory.
Create an instance with Local SSDs
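A sketch of creating a VM with local SSDs attached over NVMe; the VM name, machine type, zone, and image values are placeholders:

```sh
# Create an N2 VM with two 375 GB local SSD partitions using the
# NVMe interface (example values).
gcloud compute instances create my-local-ssd-vm \
    --machine-type=n2-standard-8 \
    --zone=us-central1-a \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --image-family=debian-12 \
    --image-project=debian-cloud
```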
Performance
Local SSDs are designed to offer very high IOPS and low latency. Unlike persistent disks, you must manage the striping on local SSDs yourself. Combine multiple local SSD partitions into a single logical volume to achieve the best local SSD performance per instance, or format local SSD partitions individually.
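For example, multiple NVMe local SSD partitions can be combined into one RAID 0 logical volume with mdadm. The device paths below are typical for NVMe local SSDs but should be verified on your VM:

```sh
# Combine two NVMe local SSD partitions into a single RAID 0 array.
# Device names vary; check `lsblk` on your VM first.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme0n2

# Format and mount the combined volume.
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/localssd
sudo mount /dev/md0 /mnt/localssd
```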
Local SSD performance depends on which interface you select. Local SSDs are available through both SCSI and NVMe interfaces.
The following table provides an overview of local SSD capacity and estimated performance using NVMe. To reach maximum performance limits with an N1 machine type, use 32 or more vCPUs. To reach maximum performance limits on an N2 and N2D machine type, use 24 or more vCPUs.
Storage space | Partitions | Read IOPS | Write IOPS | Read throughput (MB/s) | Write throughput (MB/s) |
---|---|---|---|---|---|
3 TB | 8 | 680,000 | 360,000 | 2,650 | 1,400 |
6 TB | 16 | 1,600,000 | 800,000 | 6,240 | 3,120 |
9 TB | 24 | 2,400,000 | 1,200,000 | 9,360 | 4,680 |
For more information, see Local SSD performance and Optimizing Local SSD performance.
Local SSD encryption
Compute Engine automatically encrypts your data when it is written to local SSD storage space. You can't use customer-supplied encryption keys with local SSDs.
Data persistence on Local SSDs
Review Local SSD data persistence to learn what events preserve your Local SSD data and what events can cause your Local SSD data to be unrecoverable.
General limitations
- You can create a VM instance with a maximum of 16 or 24 local SSD partitions for 6 TB or 9 TB of local SSD space, respectively, using N1, N2, and N2D machine types.
- For C2, C2D, A2, M1, and M3 machine types, you can create an instance with a maximum of 8 local SSD partitions, for a total of 3 TB local SSD space.
- To reach the maximum IOPS limits, use a VM instance with 32 or more vCPUs.
- Instances with shared-core machine types can't attach local SSD partitions.
- You can't attach Local SSDs to E2, Tau T2D, Tau T2A, and M2 machine types.
Local SSDs and machine types
You can attach Local SSDs to most machine types available on Compute Engine, unless otherwise noted. However, there are constraints around how many local SSDs you can attach based on each machine type:
N1 machine types | Number of local SSD partitions allowed per VM instance |
---|---|
All N1 machine types | 1 to 8, 16, or 24 |
N2 machine types | |
Machine types with 2 to 10 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 |
Machine types with 12 to 20 vCPUs, inclusive | 2, 4, 8, 16, or 24 |
Machine types with 22 to 40 vCPUs, inclusive | 4, 8, 16, or 24 |
Machine types with 42 to 80 vCPUs, inclusive | 8, 16, or 24 |
Machine types with 82 to 128 vCPUs, inclusive | 16 or 24 |
N2D machine types | |
Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 |
Machine types with 32 or 48 vCPUs | 2, 4, 8, 16, or 24 |
Machine types with 64 or 80 vCPUs | 4, 8, 16, or 24 |
Machine types with 96 to 224 vCPUs, inclusive | 8, 16, or 24 |
C2 machine types | |
Machine types with 4 or 8 vCPUs | 1, 2, 4, or 8 |
Machine types with 16 vCPUs | 2, 4, or 8 |
Machine types with 30 vCPUs | 4 or 8 |
Machine types with 60 vCPUs | 8 |
C2D machine types | |
Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, or 8 |
Machine types with 32 vCPUs | 2, 4, or 8 |
Machine types with 56 vCPUs | 4 or 8 |
Machine types with 112 vCPUs | 8 |
A2 machine types | |
a2-highgpu-1g | 1, 2, 4, or 8 |
a2-highgpu-2g | 2, 4, or 8 |
a2-highgpu-4g | 4 or 8 |
a2-highgpu-8g or a2-megagpu-16g | 8 |
G2 machine types | |
g2-standard-4 | 1 |
g2-standard-8 | 1 |
g2-standard-12 | 1 |
g2-standard-16 | 1 |
g2-standard-24 | 2 |
g2-standard-32 | 1 |
g2-standard-48 | 4 |
g2-standard-96 | 8 |
M1 machine types | |
m1-ultramem-40 | Not available |
m1-ultramem-80 | Not available |
m1-megamem-96 | 1 to 8 |
m1-ultramem-160 | Not available |
M3 machine types | |
m3-ultramem-32 | 4 or 8 |
m3-megamem-64 | 4 or 8 |
m3-ultramem-64 | 4 or 8 |
m3-megamem-128 | 8 |
m3-ultramem-128 | 8 |
E2, Tau T2D, Tau T2A, and M2 machine types | These machine types don't support local SSD drives. |
Local SSDs and preemptible VM instances
You can start a preemptible VM instance with a local SSD and Compute Engine charges you discounted spot prices for the local SSD usage. Local SSDs attached to preemptible instances work like normal local SSDs, retain the same data persistence characteristics, and remain attached for the life of the instance.
Compute Engine doesn't charge you for local SSDs if their instances are preempted in the first minute after they start running.
For more information about local SSDs, see Adding local SSDs.
Reserving Local SSDs with committed use discounts
To reserve Local SSD resources in a specific zone, see Reservations of Compute Engine zonal resources.
To receive committed use discounts for Local SSDs in a specific zone, you must create and attach reservations to the commitments that you purchase for those Local SSD resources. For more information, see Attach reservations to commitments.
Cloud Storage buckets
Cloud Storage buckets are the most flexible, scalable, and durable storage option for your VM instances. If your apps don't require the lower latency of Persistent Disks and local SSDs, you can store your data in a Cloud Storage bucket.
Connect your instance to a Cloud Storage bucket when latency and throughput aren't a priority and when you must share data easily between multiple instances or zones.
Performance
The performance of Cloud Storage buckets depends on the storage class that you select and the location of the bucket relative to your instance.
The standard storage class used in the same location as your instance gives performance that is comparable to persistent disks but with higher latency and less consistent throughput characteristics. The standard storage class used in a multiregional location stores your data redundantly across at least two regions within a larger multiregional location.
Nearline and coldline storage classes are primarily for long-term data archival. Unlike the standard storage class, these archival classes have minimum storage durations and read charges. Consequently, they are best for long-term storage of data that is accessed infrequently.
Reliability
All Cloud Storage buckets have built-in redundancy to protect your data against equipment failure and to ensure data availability through datacenter maintenance events. Checksums are calculated for all Cloud Storage operations to help ensure that what you read is what you wrote.
Flexibility
Unlike persistent disks, Cloud Storage buckets aren't restricted to the zone where your instance is located. Additionally, you can read and write data to a bucket from multiple instances simultaneously. For example, you can configure instances in multiple zones to read and write data in the same bucket rather than replicate the data to persistent disks in multiple zones.
Cloud Storage encryption
Compute Engine automatically encrypts your data before it travels outside of your instance to Cloud Storage buckets. You don't need to encrypt files on your instances before you write them to a bucket.
Just like persistent disks, you can encrypt buckets with your own encryption keys.
Writing and reading data from Cloud Storage buckets
Write and read files from Cloud Storage buckets by using the gsutil command-line tool or the Cloud Storage API.
gsutil
By default, the gsutil command-line tool is installed on most VMs that use public images. If your VM doesn't have the gsutil command-line tool, you can install gsutil as part of the Google Cloud CLI.
- In the Google Cloud console, go to the VM instances page.
- In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
- If you have never used gsutil on this instance before, use the gcloud CLI to set up credentials by running gcloud init. Alternatively, if your instance is configured to use a service account with a Cloud Storage scope, you can skip this step.
- Use the gsutil tool to create buckets, write data to buckets, and read data from those buckets. To write or read data from a specific bucket, you must have access to the bucket. You can read data from any bucket that is publicly accessible. Optionally, you can also stream data to Cloud Storage.
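For example, the following gsutil commands create a bucket, upload a file, and read it back. The bucket and file names are placeholders, and bucket names must be globally unique:

```sh
# Create a bucket, upload a file, and read it back with gsutil.
gsutil mb gs://my-example-bucket
gsutil cp ./local-file.txt gs://my-example-bucket/
gsutil cat gs://my-example-bucket/local-file.txt

# List the contents of the bucket.
gsutil ls gs://my-example-bucket
```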
API
If you configured your instance to use a service account with a Cloud Storage scope, you can use the Cloud Storage API to write and read data from Cloud Storage buckets.
- In the Google Cloud console, go to the VM instances page.
- In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
- Install and configure a client library for your preferred language.
- If necessary, follow the code samples to create a Cloud Storage bucket from the instance.
- Follow the code samples to write and read data, and include code in your app that writes or reads a file from a Cloud Storage bucket.
What's next
- Add a persistent disk to your instance
- Add a regional persistent disk to your instance
- Create an instance with Local SSDs
- Create a file server or distributed file system
- Review the quotas for disks
- Mount a RAM disk on your instance