Computing infrastructure in predefined or custom machine sizes to accelerate your cloud transformation.
General purpose (E2, N1, N2, N2D) machines provide a good balance of price and performance
Compute optimized (C2) machines offer high-end vCPU performance for compute-intensive workloads
Memory optimized (M2) machines offer the highest memory and are great for in-memory databases
Accelerator optimized (A2) machines are based on the A100 GPU, for very demanding applications
Integrate Compute Engine with other Google Cloud services such as AI/ML and data analytics.
Scale globally as needed
Make reservations to help ensure your applications have the capacity they need as they scale.
Get more value
Save money just for running Compute Engine workloads with sustained-use discounts, and achieve greater savings when you use committed-use discounts.
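To illustrate how these discounts affect a monthly bill, here is a small Python sketch. The hourly rate and the exact discount percentages are placeholder assumptions for illustration (the 57% figure is the committed-use upper bound cited below; actual sustained-use tiers and committed-use rates vary by machine family and region):

```python
# Illustrative monthly cost comparison for a single VM.
# The hourly rate and discount percentages are assumptions for
# illustration; actual rates vary by machine type and region.

HOURS_PER_MONTH = 730  # average hours in a billing month

def monthly_cost(hourly_rate, discount=0.0):
    """On-demand monthly cost after applying a flat percentage discount."""
    return HOURS_PER_MONTH * hourly_rate * (1 - discount)

on_demand = monthly_cost(0.10)                 # no discount
sustained = monthly_cost(0.10, discount=0.30)  # hypothetical sustained-use tier
committed = monthly_cost(0.10, discount=0.57)  # committed-use upper bound

print(f"on-demand:     ${on_demand:.2f}")
print(f"sustained-use: ${sustained:.2f}")
print(f"committed-use: ${committed:.2f}")
```

Sustained-use discounts apply automatically as usage accumulates during the month, while committed-use discounts require a one- or three-year commitment up front.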
Live migration for VMs
Compute Engine virtual machines can live-migrate between host systems without rebooting, which keeps your applications running even when host systems require maintenance.
OS patch management
Protect your running VMs against defects and vulnerabilities with OS patch management. You can apply operating system patches across a set of VMs, receive patch compliance data across your environments, and automate installation of your OS patches across VMs.
Sole-tenant nodes are physical Compute Engine servers dedicated exclusively for your use. Sole-tenant nodes simplify deployment for bring-your-own-license (BYOL) applications. Sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances.
Quickstart using a Linux VM
Create an instance and connect to that instance using SSH.
Boot disk images
Learn about the public images that you can use to create your VMs, or learn how to create and import your own custom images to Compute Engine.
Using the Compute Engine API through client libraries
Use client libraries to create and manage Compute Engine resources in Go, Python, Java, Node.js, and other languages.
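Whichever language you choose, the client libraries ultimately submit the same instance resource that the Compute Engine REST API accepts. As a rough sketch of that shape, the helper below builds a minimal instance configuration as a plain dictionary; the project, zone, machine type, and image values are placeholders, and a client library would populate the same fields through typed objects before calling its insert method:

```python
def make_instance_config(name, zone, machine_type, image):
    """Build a minimal Compute Engine instance resource body.

    Field names mirror the v1 REST API's Instance resource; the
    values passed in below are illustrative placeholders.
    """
    return {
        "name": name,
        "machineType": f"zones/{zone}/machineTypes/{machine_type}",
        "disks": [{
            "boot": True,
            "autoDelete": True,  # delete the boot disk with the VM
            "initializeParams": {"sourceImage": image},
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }

config = make_instance_config(
    "demo-vm", "us-central1-a", "e2-medium",
    "projects/debian-cloud/global/images/family/debian-12",
)
```

The same structure appears, field for field, in each language's generated client types, which is why moving between the libraries is mostly a matter of syntax.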
Create managed instance groups
Managed instance groups maintain high availability of your applications by proactively keeping your VM instances available.
Virtual Private Cloud (VPC) network overview
A VPC network provides connectivity for your Compute Engine VM instances. Configure your VPC network and firewalls to handle network traffic for your applications.
Identity and Access Management (IAM) overview
Use IAM roles and permissions to manage access to your Compute Engine resources.
Compute Engine resources
Find Compute Engine pricing along with discounts, benchmarks, zonal resources, release notes and more.
Explore what you can build on GCP
Find out how to migrate and modernize workloads on Google’s global, secure, and reliable infrastructure.
Compute Engine provides tools to help you bring your existing applications to the cloud. You can have your applications running on Compute Engine within minutes while your data migrates transparently in the background. Bring your existing applications from your physical servers, VMware vSphere, Amazon EC2, or Azure VMs.
Process petabytes of genomic data in seconds with Compute Engine and our high performance computing solution. Our scalable and flexible infrastructure enables research to continue without disruptions. Competitive pricing and discounts help you stay within budget to convert ideas into discoveries, hypotheses into cures, and inspirations into products.
You can run your Windows-based applications either by bringing your own licenses and running them in Compute Engine sole-tenant nodes or using a license-included image. After you migrate to Google Cloud, optimize or modernize your license usage to achieve your business goals. Take advantage of the many benefits available to virtual machine instances such as reliable storage options, the speed of the Google network, and autoscaling.
| Feature | Description |
| --- | --- |
| Predefined machine types | Compute Engine offers predefined virtual machine configurations for every need, from small general-purpose instances to large memory-optimized instances with up to 11.5 TB of RAM and fast compute-optimized instances with up to 60 vCPUs. |
| Custom machine types | Create a virtual machine with a custom machine type that best fits your workloads. By tailoring a custom machine type to your specific needs, you can realize significant savings. |
| Preemptible VMs | Low-cost, short-term instances designed to run batch jobs and fault-tolerant workloads. Preemptible VMs offer savings of up to 80% while providing the same performance and capabilities as regular VMs. |
| Live migration for VMs | Compute Engine virtual machines can live-migrate between host systems without rebooting, which keeps your applications running even when host systems require maintenance. |
| Persistent disks | Durable, high-performance block storage for your VM instances. You can create persistent disks in HDD or SSD formats, take snapshots, and create new persistent disks from a snapshot. If a VM instance is terminated, its persistent disk retains its data and can be attached to another instance. |
| Local SSD | Compute Engine offers always-encrypted local solid-state drive (SSD) block storage. Local SSDs are physically attached to the server that hosts the virtual machine instance, delivering very high input/output operations per second (IOPS) and very low latency compared to persistent disks. |
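For fault-tolerant batch work, the savings from preemptible VMs are easy to estimate. A small sketch, assuming the 80% upper bound cited above and a placeholder hourly rate (actual preemptible pricing varies by machine type and region):

```python
def batch_job_cost(hourly_rate, hours, preemptible_discount=0.80):
    """Compare on-demand vs. preemptible cost for a batch job.

    The 80% discount is the upper bound cited above; actual
    preemptible pricing varies by machine type and region.
    """
    on_demand = hourly_rate * hours
    preemptible = on_demand * (1 - preemptible_discount)
    return on_demand, preemptible

on_demand, preemptible = batch_job_cost(0.50, hours=100)
print(f"on-demand: ${on_demand:.2f}, preemptible: ${preemptible:.2f}")
# Preemptible VMs can be reclaimed at any time, so the job itself
# must checkpoint and resume to tolerate preemption.
```

The trade-off is availability: because instances can be reclaimed, the workload must be able to restart from a checkpoint rather than depend on any single VM surviving.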
GPUs can be added to accelerate computationally intensive workloads like machine learning, simulation, and virtual workstation applications. Add or remove GPUs to a VM when your workload changes and pay for GPU resources only while you are using them.
Our new A2 VM family is based on the NVIDIA Ampere A100 GPU. You can learn more about the A2 VM family by requesting access to our alpha program.
| Feature | Description |
| --- | --- |
| Global load balancing | Global load-balancing technology helps you distribute incoming requests across pools of instances in multiple regions, so you can achieve maximum performance, throughput, and availability at low cost. |
| Linux and Windows support | Run your choice of OS, including Debian, CentOS, CoreOS, SUSE, Ubuntu, Red Hat Enterprise Linux, FreeBSD, or Windows Server 2008 R2, 2012 R2, and 2016. You can also use a shared image from the Google Cloud community or bring your own. |
| Per-second billing | Google bills in second-level increments. You pay only for the compute time that you use. |
| Commitment savings | With committed-use discounts, you can save up to 57% with no up-front costs or instance-type lock-in. |
| Sustained-use savings | Sustained-use discounts are automatic discounts applied when you run Compute Engine resources for a significant portion of the billing month. |
| Container support | Run, manage, and orchestrate Docker containers on Compute Engine VMs with Google Kubernetes Engine. |
| Reservations | Create reservations for VM instances in a specific zone. Use reservations to ensure that your project has resources for future increases in demand. When you no longer need a reservation, delete it to stop incurring charges. |
| Right-sizing recommendations | Compute Engine provides machine type recommendations to help you optimize the resource utilization of your VM instances. Use these recommendations to resize an instance's machine type so it uses its resources more efficiently. |
| OS patch management | With OS patch management, you can apply OS patches across a set of VMs, receive patch compliance data across your environments, and automate installation of OS patches, all from a centralized location. |
| Placement policies | Use placement policies to control where your instances are placed on the underlying hardware. A spread placement policy improves reliability by placing instances on distinct hardware, reducing the impact of hardware failures. A compact placement policy lowers latency between nodes by placing instances close together within the same network infrastructure. |
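Per-second billing can be sketched as follows. This assumes Compute Engine's one-minute minimum charge before usage switches to one-second increments; the hourly rate is a placeholder:

```python
def vm_charge(hourly_rate, seconds_running):
    """Cost of a VM run under per-second billing.

    Assumes a 60-second minimum charge, after which usage is billed
    in one-second increments; hourly_rate is a placeholder value.
    """
    billable = max(60, seconds_running)  # one-minute minimum
    return hourly_rate * billable / 3600

# A 30-second run is billed as a full minute:
short = vm_charge(0.36, 30)    # 0.36/hr -> 0.0001/sec -> $0.006
# A 10-minute run is billed for exactly 600 seconds:
longer = vm_charge(0.36, 600)  # $0.06
```

The practical effect is that short-lived or autoscaled instances cost close to their actual runtime, rather than being rounded up to the next full hour.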