Machine families resource and comparison guide


This document describes the machine families, machine series, and machine types that you can choose from to create a virtual machine (VM) instance with the resources you need. When you create a VM, you select a machine type from a machine family; that machine type determines the resources available to the VM. Each machine family is organized into machine series, and each series offers a set of predefined machine types. For example, within the N2 series in the general-purpose machine family, you can select the n2-standard-4 machine type.
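
For example, with the Compute Engine client library for Python (google-cloud-compute), the machine type is passed as a zonal partial URL when you create the VM. The following minimal sketch assumes placeholder project, zone, and VM names and a public Debian boot image:

```python
from google.cloud import compute_v1


def create_vm(project_id: str, zone: str, vm_name: str) -> None:
    """Create a VM that uses the n2-standard-4 predefined machine type."""
    # Machine types are zonal resources, referenced by a partial URL.
    machine_type = f"zones/{zone}/machineTypes/n2-standard-4"

    # Boot disk created from a public Debian image family.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=10,
        ),
    )

    instance = compute_v1.Instance(
        name=vm_name,
        machine_type=machine_type,
        disks=[boot_disk],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )

    operation = compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # Wait for the insert operation to complete.
```

Choosing a different machine series or size only changes the machine_type value; the rest of the request stays the same.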

All machine series support Spot VMs (and preemptible VMs), with the exception of the M2, M3, and H3 machine series.
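
For the series that support it, Spot capacity is requested through the scheduling settings of the instance. A minimal sketch, applied to an instance definition like the one in the previous example:

```python
from google.cloud import compute_v1

# Spot provisioning is requested through the instance's scheduling settings
# (not supported for the M2, M3, and H3 machine series).
spot_scheduling = compute_v1.Scheduling(
    provisioning_model="SPOT",
    # What Compute Engine does when it preempts the Spot VM: STOP or DELETE.
    instance_termination_action="STOP",
)

# Applied to an instance definition like the one in the earlier sketch:
# instance.scheduling = spot_scheduling
```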

Note: This is a list of Compute Engine machine families. For a detailed explanation of each machine family, see the following pages:
  • General-purpose: best price-performance ratio for a variety of workloads.
  • Storage-optimized: best for workloads that are low in core usage and high in storage density.
  • Compute-optimized: highest performance per core on Compute Engine, optimized for compute-intensive workloads.
  • Memory-optimized: ideal for memory-intensive workloads, offering more memory per core than other machine families, with up to 12 TB of memory.
  • Accelerator-optimized: ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the best option for workloads that require GPUs.

VM terminology

This documentation uses the following terms:

  • Machine family: A curated set of processor and hardware configurations optimized for specific workloads. When you create a VM, you choose a predefined or custom machine type from your preferred machine family.

  • Machine series: Machine families are further classified by series and generation. For example, the N1 series within the general-purpose machine family is the older version of the N2 series. A higher generation or series number usually indicates newer underlying CPU platforms or technologies. For example, the M3 series is the newer generation of the M2 series.

  • Machine type: Every machine series has predefined machine types that provide a set of resources for your VM. If a predefined machine type does not meet your needs, you can also create a custom machine type for some machine series.
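
For example, a custom machine type is referenced by a name that encodes the vCPU count and memory rather than by a predefined size. The following sketch builds that name for an assumed N2 shape with 6 vCPUs and 12 GB of memory; the result replaces the machine_type value used when creating the VM:

```python
def custom_machine_type(zone: str, series: str, vcpus: int, memory_mb: int) -> str:
    """Build the partial URL of a custom machine type, for example n2-custom-6-12288.

    memory_mb must be a multiple of 256 and stay within the series' per-vCPU limits.
    For first-generation N1 custom machine types, the series prefix is omitted
    (for example, custom-6-12288).
    """
    return f"zones/{zone}/machineTypes/{series}-custom-{vcpus}-{memory_mb}"


# A 6-vCPU, 12 GB custom machine type in the N2 series.
machine_type = custom_machine_type("us-central1-a", "n2", 6, 12 * 1024)
```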

The generation and CPU vendor of each machine series are as follows:

  • 3rd generation machine series: C3, Z3, H3, M3, and A3 (Intel); C3D (AMD)
  • 2nd generation machine series: E2, N2, C2, M2, A2, and G2 (Intel); N2D, C2D, T2D, and E2 (AMD); T2A (Arm)
  • 1st generation machine series: N1 and M1 (Intel)

Machine family and series recommendations

The following sections provide machine series recommendations for different types of workloads.

General-purpose workloads
  • E2: Day-to-day computing at a lower cost
    • Low-traffic web servers
    • Back office apps
    • Containerized microservices
    • Microservices
    • Virtual desktops
    • Development and test environments
  • N2, N2D, N1: Balanced price/performance across a wide range of machine types
    • Low to medium traffic web and app servers
    • Containerized microservices
    • Business intelligence apps
    • Virtual desktops
    • CRM applications
    • Data pipelines
  • C3, C3D: Consistently high performance for a variety of workloads
    • High traffic web and app servers
    • Databases
    • In-memory caches
    • Ad servers
    • Game servers
    • Data analytics
    • Media streaming and transcoding
    • CPU-based ML training and inference
  • Tau T2D, Tau T2A: Best per-core performance/cost for scale-out workloads
    • Scale-out workloads
    • Web serving
    • Containerized microservices
    • Media transcoding
    • Large-scale Java applications

Optimized workloads
  • Storage-optimized: Z3 (Preview). Highest block storage to compute ratios for storage-intensive workloads.
    • File servers
    • Flash-optimized databases
    • Scale-out analytics
    • Other databases
  • Compute-optimized: H3, C2, C2D. Ultra high performance for compute-intensive workloads.
    • Compute-bound workloads
    • High-performance web servers
    • Game servers
    • High performance computing (HPC)
    • Media transcoding
    • Modeling and simulation workloads
    • AI/ML
  • Memory-optimized: M3, M2, M1. Highest memory to compute ratios for memory-intensive workloads.
    • Medium to extra-large SAP HANA in-memory databases
    • In-memory data stores, such as Redis
    • Simulation
    • High-performance databases, such as Microsoft SQL Server and MySQL
    • Electronic design automation
  • Accelerator-optimized: A3, A2, G2. Optimized for accelerated high performance computing workloads.
    • Generative AI models, such as the following:
      • Large language models (LLM)
      • Diffusion models
      • Generative adversarial networks (GAN)
    • CUDA-enabled ML training and inference
    • High-performance computing (HPC)
    • Massively parallelized computation
    • BERT natural language processing
    • Deep learning recommendation model (DLRM)
    • Video transcoding
    • Remote visualization workstation

After you create a VM, you can use rightsizing recommendations to optimize resource utilization based on your workload. For more information, see Applying machine type recommendations for VMs.
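
These rightsizing recommendations are also available programmatically. The following sketch uses the Recommender client library for Python (google-cloud-recommender) with the VM machine type recommender; the project and zone values are placeholders:

```python
from google.cloud import recommender_v1


def list_machine_type_recommendations(project_id: str, zone: str) -> None:
    """Print machine type (rightsizing) recommendations for VMs in one zone."""
    client = recommender_v1.RecommenderClient()
    parent = (
        f"projects/{project_id}/locations/{zone}"
        "/recommenders/google.compute.instance.MachineTypeRecommender"
    )
    for recommendation in client.list_recommendations(parent=parent):
        print(recommendation.name, recommendation.description)
```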

General-purpose machine family guide

The general-purpose machine family offers several machine series with the best price-performance ratio for a variety of workloads.

Compute Engine offers general-purpose machine series that run on either x86 or Arm architecture.

x86

  • The E2 machine series has up to 32 virtual cores (vCPUs), up to 128 GB of memory (a maximum of 8 GB per vCPU), and the lowest cost of all machine series. The E2 machine series has a predefined CPU platform, running either an Intel processor or the second generation AMD EPYC™ Rome processor. The processor is selected for you when you create the VM. This machine series provides a variety of compute resources for the lowest price on Compute Engine, especially when paired with committed-use discounts.
  • The N2 machine series has up to 128 vCPUs and 8 GB of memory per vCPU, and is available on the Intel Ice Lake and Intel Cascade Lake CPU platforms.
  • The N2D machine series has up to 224 vCPUs and 8 GB of memory per vCPU, and is available on the second generation AMD EPYC Rome and third generation AMD EPYC Milan platforms.
  • The C3 machine series offers up to 176 vCPUs and 2, 4, or 8 GB of memory per vCPU on the Intel Sapphire Rapids CPU platform and Google's custom Intel Infrastructure Processing Unit (IPU). C3 VMs are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
  • The C3D machine series offers up to 360 vCPUs and 2, 4, or 8 GB of memory per vCPU on the AMD EPYC Genoa CPU platform and Google's custom Intel Infrastructure Processing Unit (IPU). C3D VMs are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
  • The Tau T2D machine series provides an optimized feature set for scaling out. Each VM can have up to 60 vCPUs with 4 GB of memory per vCPU, and is available on third generation AMD EPYC Milan processors. The Tau T2D machine series doesn't use cluster-threading, so a vCPU is equivalent to an entire core.
  • N1 machine series VMs can have up to 96 vCPUs and up to 6.5 GB of memory per vCPU, and are available on the Intel Sandy Bridge, Ivy Bridge, Haswell, Broadwell, and Skylake CPU platforms.
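
To see which predefined machine types a series offers in a given zone, you can list them with the client library and filter on the series prefix. A minimal sketch (the series prefix and zone are illustrative):

```python
from google.cloud import compute_v1


def list_series_machine_types(project_id: str, zone: str, series_prefix: str = "n2-") -> None:
    """Print the predefined machine types of one series that are available in a zone."""
    client = compute_v1.MachineTypesClient()
    for machine_type in client.list(project=project_id, zone=zone):
        if machine_type.name.startswith(series_prefix):
            print(machine_type.name, machine_type.guest_cpus, machine_type.memory_mb)
```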

The E2 and N1 series contain shared-core machine types. These machine types time-share a physical core, which can be a cost-effective method for running small, non-resource-intensive apps.

  • E2: offers the e2-micro, e2-small, and e2-medium shared-core machine types, which have 2 vCPUs available for short periods of bursting.

  • N1: offers the f1-micro and g1-small shared-core machine types, which have up to 1 vCPU available for short periods of bursting.

Arm

  • The Tau T2A machine series is the first machine series in Google Cloud to run on Arm processors. Tau T2A machines are optimized to deliver compelling price-performance. Each VM can have up to 48 vCPUs with 4 GB of memory per vCPU. The Tau T2A machine series runs on a 64-core Ampere Altra processor with an Arm instruction set and an all-core frequency of 3 GHz. Tau T2A machine types support a single NUMA node, and a vCPU is equivalent to an entire core.
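
Creating a Tau T2A VM follows the same pattern as the x86 example earlier on this page; the main differences are the T2A machine type and a boot image built for the Arm architecture. A sketch of the two values that change (the zone and image family shown are assumptions; check availability in your project):

```python
zone = "us-central1-a"  # Tau T2A is available only in select regions and zones.

# An Arm machine type from the Tau T2A series.
machine_type = f"zones/{zone}/machineTypes/t2a-standard-4"

# The boot image must be built for the arm64 architecture.
source_image = "projects/debian-cloud/global/images/family/debian-12-arm64"
```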

Storage-optimized machine family guide

The storage-optimized machine family is ideal for horizontal, scale-out databases, log analytics, data warehouse offerings, and other database workloads. This family offers high-density, high-performance Local SSD.

  • Z3 VMs can have up to 176 vCPUs, 1,408 GB of memory, and 36 TiB of Local SSD. Z3 runs on the Intel Xeon Scalable processor (code name Sapphire Rapids) with DDR5 memory and Titanium offload processors. Z3 brings together the latest compute, networking, and storage innovations into one platform. Z3 VMs are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
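
Local SSD capacity and disk interface differ by machine series (see the comparison later on this page). For series where you request Local SSD explicitly at creation time, each partition is declared as a scratch disk in the instance request; a minimal sketch of one partition, following the creation pattern shown earlier (series such as Z3 that bundle Local SSD with the machine type may not require this):

```python
from google.cloud import compute_v1


def local_ssd_disk(zone: str) -> compute_v1.AttachedDisk:
    """Define one Local SSD (scratch) partition for an instance request."""
    return compute_v1.AttachedDisk(
        type_="SCRATCH",   # Local SSD disks are scratch disks.
        auto_delete=True,  # Local SSD data doesn't outlive the VM.
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            disk_type=f"zones/{zone}/diskTypes/local-ssd",
        ),
    )


# Appended to the disks list of an instance definition:
# instance.disks.append(local_ssd_disk(zone))
```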

Compute-optimized machine family guide

The compute-optimized machine family is optimized for running compute-bound applications by providing the highest performance per core.

  • H3 VMs offer 88 vCPUs and 352 GB of DDR5 memory. H3 VMs run on the Intel Sapphire Rapids CPU platform and Google's custom Intel Infrastructure Processing Unit (IPU). H3 VMs are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance. H3 delivers performance improvements for a wide variety of HPC workloads such as molecular dynamics, computational geoscience, financial risk analysis, weather modeling, frontend and backend EDA, and computational fluid dynamics.
  • C2 VMs offer up to 60 vCPUs, 4 GB of memory per vCPU, and are available on the Intel Cascade Lake CPU platform.
  • C2D VMs offer up to 112 vCPUs, up to 8 GB of memory per vCPU, and are available on the third generation AMD EPYC Milan platform.

Memory-optimized machine family guide

The memory-optimized machine family has machine series that are ideal for OLAP and OLTP SAP workloads, genomic modeling, electronic design automation, and your most memory-intensive HPC workloads. This family offers more memory per core than any other machine family, with up to 12 TB of memory.

  • M1 VMs offer up to 160 vCPUs, 14.9 GB to 24 GB of memory per vCPU, and are available on the Intel Skylake and Broadwell CPU platforms.
  • M2 VMs are available as 6 TB, 9 TB, and 12 TB machine types, and are available on the Intel Cascade Lake CPU platform.
  • M3 VMs offer up to 128 vCPUs, with up to 30.5 GB of memory per vCPU, and are available on the Intel Ice Lake CPU platform.

Accelerator-optimized machine family guide

The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the optimal choice for workloads that require GPUs.

  • A3 VMs offer 208 vCPUs and 1,872 GB of memory, and are available on the Intel Sapphire Rapids CPU platform.
  • A2 VMs offer 12 to 96 vCPUs, up to 1,360 GB of memory, and are available on the Intel Cascade Lake CPU platform.
  • G2 VMs offer 4 to 96 vCPUs, up to 432 GB of memory, and are available on the Intel Cascade Lake CPU platform.
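
For A3, A2, and G2, the GPU count is part of the machine type itself, so creating one of these VMs doesn't require a separate accelerator configuration. A sketch of the fields that differ from the earlier creation example (the zone is illustrative):

```python
from google.cloud import compute_v1

zone = "us-central1-a"  # A2 is available only in select regions and zones.

# A2 machine types bundle NVIDIA A100 GPUs; a2-highgpu-1g includes one GPU.
machine_type = f"zones/{zone}/machineTypes/a2-highgpu-1g"

# VMs with attached GPUs can't live migrate; they stop for host maintenance.
scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")
```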

Machine series comparison

Use the following comparison to evaluate each machine family and determine which one is appropriate for your workload. If, after reviewing this section, you are still unsure which family is best for your workload, start with the general-purpose machine family. For details about all supported processors, see CPU platforms.

Your machine series selection also affects the performance of the disk volumes attached to your VMs.

The following list compares the characteristics of each machine series, from C3 to G2. For each series, it shows the workload type, CPU architecture, CPU platforms, vCPU count (and whether a vCPU is a core or a hardware thread), memory, disk interface types, maximum Local SSD capacity, network interface types, network bandwidth, and maximum number of GPUs.

  • C3: general purpose; x86; Intel Sapphire Rapids; 4 to 176 vCPUs (thread); 8 to 1,408 GB of memory; NVMe; up to 12 TiB of Local SSD; gVNIC; 23 to 100 Gbps; no GPUs
  • C3D: general purpose; x86; AMD EPYC Genoa; 4 to 360 vCPUs (thread); 8 to 2,880 GB of memory; NVMe; up to 12 TiB of Local SSD; gVNIC; 20 to 100 Gbps; no GPUs
  • N2: general purpose; x86; Intel Cascade Lake and Ice Lake; 2 to 128 vCPUs (thread); 2 to 864 GB of memory; SCSI and NVMe; up to 9 TiB of Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; no GPUs
  • N2D: general purpose; x86; AMD EPYC Rome and EPYC Milan; 2 to 224 vCPUs (thread); 2 to 896 GB of memory; SCSI and NVMe; up to 9 TiB of Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; no GPUs
  • N1: general purpose; x86; Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge; 1 to 96 vCPUs (thread); 1.8 to 624 GB of memory; SCSI and NVMe; up to 9 TiB of Local SSD; gVNIC and VirtIO-Net; 2 to 32 Gbps; no GPUs
  • Tau T2D: general purpose; x86; AMD EPYC Milan; 1 to 60 vCPUs (core); 4 to 240 GB of memory; SCSI and NVMe; no Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; no GPUs
  • Tau T2A: general purpose; Arm; Ampere Altra; 1 to 48 vCPUs (core); 4 to 192 GB of memory; NVMe; no Local SSD; gVNIC; 10 to 32 Gbps; no GPUs
  • E2: cost-optimized general purpose; x86; Intel Skylake, Broadwell, and Haswell, or AMD EPYC Rome and EPYC Milan; 0.25 to 32 vCPUs (thread); 1 to 128 GB of memory; SCSI and NVMe; no Local SSD; gVNIC and VirtIO-Net; 1 to 16 Gbps; no GPUs
  • Z3: storage optimized; x86; Intel Sapphire Rapids; 88 or 176 vCPUs (thread); 704 or 1,408 GB of memory; NVMe; up to 36 TiB of Local SSD; gVNIC; 23 to 100 Gbps; no GPUs
  • H3: compute optimized; x86; Intel Sapphire Rapids; 88 vCPUs (core); 352 GB of memory; NVMe; no Local SSD; gVNIC; up to 200 Gbps; no GPUs
  • C2: compute optimized; x86; Intel Cascade Lake; 4 to 60 vCPUs (thread); 16 to 240 GB of memory; SCSI and NVMe; up to 3 TiB of Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; no GPUs
  • C2D: compute optimized; x86; AMD EPYC Milan; 2 to 112 vCPUs (thread); 4 to 896 GB of memory; SCSI and NVMe; up to 3 TiB of Local SSD; gVNIC and VirtIO-Net; 10 to 32 Gbps; no GPUs
  • M3: memory optimized; x86; Intel Ice Lake; 32 to 128 vCPUs (thread); 976 to 3,904 GB of memory; NVMe; up to 3 TiB of Local SSD; gVNIC; 32 Gbps; no GPUs
  • M2: memory optimized; x86; Intel Cascade Lake; 208 to 416 vCPUs (thread); 5,888 to 11,776 GB of memory; SCSI; no Local SSD; gVNIC and VirtIO-Net; 32 Gbps; no GPUs
  • M1: memory optimized; x86; Intel Skylake and Broadwell; 40 to 160 vCPUs (thread); 961 to 3,844 GB of memory; SCSI and NVMe; up to 3 TiB of Local SSD; gVNIC and VirtIO-Net; 32 Gbps; no GPUs
  • N1 with attached GPUs: accelerator optimized; x86; Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge; 1 to 96 vCPUs (thread); 3.75 to 624 GB of memory; SCSI and NVMe; up to 9 TiB of Local SSD; gVNIC and VirtIO-Net; 2 to 32 Gbps; up to 8 GPUs
  • A3: accelerator optimized; x86; Intel Sapphire Rapids; 208 vCPUs (thread); 1,872 GB of memory; NVMe; up to 6 TB of Local SSD; gVNIC; 200 Gbps; 8 GPUs
  • A2: accelerator optimized; x86; Intel Cascade Lake; 12 to 96 vCPUs (thread); 85 to 1,360 GB of memory; SCSI and NVMe; up to 3 TiB of Local SSD; gVNIC and VirtIO-Net; 24 to 100 Gbps; up to 16 GPUs
  • G2: accelerator optimized; x86; Intel Cascade Lake; 4 to 96 vCPUs (thread); 16 to 432 GB of memory; NVMe; up to 3 TiB of Local SSD; gVNIC and VirtIO-Net; 10 to 100 Gbps; up to 8 GPUs

GPUs and VMs

GPUs are used to accelerate workloads, and are supported for N1, A3, A2, and G2 VMs. For VMs that use N1 machine types, you can attach GPUs to the VM during or after VM creation. For VMs that use A3, A2, or G2 machine types, the GPUs are automatically attached when you create the VM. GPUs can't be used with other machine series.
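
For N1 machine types, the GPU is expressed as a guest accelerator on the instance. A minimal sketch that extends the earlier creation example (the accelerator type, machine type, and zone are illustrative; the boot disk and network interface are omitted here):

```python
from google.cloud import compute_v1

zone = "us-central1-a"  # Pick a zone that offers the accelerator type you want.

# One NVIDIA T4 attached to an N1 VM. A3, A2, and G2 VMs get their GPUs from
# the machine type instead and don't need guest_accelerators.
gpu = compute_v1.AcceleratorConfig(
    accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
    accelerator_count=1,
)

instance = compute_v1.Instance(
    name="n1-gpu-vm",
    machine_type=f"zones/{zone}/machineTypes/n1-standard-8",
    guest_accelerators=[gpu],
    # GPU VMs can't live migrate; they stop for host maintenance.
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
)
```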

VMs with lower numbers of GPUs are limited to a maximum number of vCPUs. In general, a higher number of GPUs lets you create VMs with a higher number of vCPUs and memory. For more information, see GPUs on Compute Engine.

What's next