
NVIDIA and Google Cloud

NVIDIA and Google Cloud deliver accelerator-optimized solutions that address your most demanding workloads, including machine learning, high performance computing, data analytics, graphics, and gaming.


Benefits

The power of NVIDIA-accelerated computing at scale on Google Cloud

Increased performance for diverse workloads

With the latest NVIDIA GPUs on Google Cloud, you can easily provision Compute Engine instances with NVIDIA H100, A100, L4, T4, P100, P4, and V100 GPUs to accelerate a broad set of demanding workloads.
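As a minimal sketch, attaching a GPU happens at instance creation time with a single gcloud command. The instance name, zone, machine type, and image below are illustrative placeholders, not recommendations:

```shell
# Sketch: create a Compute Engine VM with one NVIDIA T4 GPU attached.
# Instance name, zone, machine type, and image are placeholders -- adjust for your project.
gcloud compute instances create my-gpu-instance \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=debian-11 \
    --image-project=debian-cloud
```

GPU instances require --maintenance-policy=TERMINATE because VMs with attached GPUs cannot live-migrate during host maintenance.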

Reduce costs with per-second billing

Google Cloud's per-second pricing means you pay only for what you use, with sustained use discounts of up to 30% per month applied automatically. Save on upfront costs while enjoying the same uptime and scalable performance.

Optimize workloads with custom machine configurations

Optimize your workloads by precisely configuring an instance with the exact ratio of processors, memory, and NVIDIA GPUs you need instead of modifying your workload to fit within limited system configurations. 
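For illustration, a custom machine type lets you choose the vCPU and memory amounts independently of the GPU count. The values in this sketch are placeholders, not sizing guidance:

```shell
# Sketch: custom vCPU/memory configuration with two NVIDIA V100 GPUs attached.
# The CPU, memory, and GPU counts here are illustrative placeholders.
gcloud compute instances create my-custom-gpu-vm \
    --zone=us-central1-a \
    --custom-cpu=12 \
    --custom-memory=78GB \
    --accelerator=type=nvidia-tesla-v100,count=2 \
    --maintenance-policy=TERMINATE
```

This lets you match the instance shape to the workload's actual CPU-to-GPU ratio instead of rounding up to the nearest predefined machine type.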

Key features

NVIDIA technologies on Google Cloud

A3 VMs powered by NVIDIA H100 Tensor Core GPUs

A3 VMs, powered by NVIDIA H100 Tensor Core GPUs, are purpose-built to train and serve especially demanding generative AI workloads and large language models (LLMs). Combining NVIDIA GPUs with Google Cloud’s leading infrastructure technologies delivers massive scale and performance, a major leap forward in supercomputing capabilities.


A2 VMs powered by NVIDIA A100 Tensor Core GPUs

The accelerator-optimized A2 VMs are based on the NVIDIA Ampere A100 Tensor Core GPU. Each A100 GPU offers up to 20x the compute performance of the previous generation. These VMs are designed to deliver acceleration at every scale for AI, data analytics, and high performance computing to tackle the toughest computing challenges.


G2 VMs powered by NVIDIA L4 Tensor Core GPUs

G2 was the industry’s first cloud VM powered by the NVIDIA L4 Tensor Core GPU, and it is purpose-built for large-scale AI inference workloads such as generative AI. G2 delivers cutting-edge performance per dollar for AI inference. As a universal GPU, G2 also offers significant performance improvements for HPC, graphics, and video transcoding workloads.
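Unlike N1 VMs, where GPUs are attached via an --accelerator flag, G2 machine types come with L4 GPUs built in. A minimal sketch, with placeholder instance name and zone:

```shell
# Sketch: create a G2 VM; the g2-standard-8 machine type includes one NVIDIA L4 GPU.
# Instance name and zone are placeholders.
gcloud compute instances create my-g2-inference-vm \
    --zone=us-central1-a \
    --machine-type=g2-standard-8 \
    --maintenance-policy=TERMINATE
```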


Autoscale with Google Kubernetes Engine

Using Google Kubernetes Engine (GKE), you can seamlessly create clusters with NVIDIA GPUs on demand, load balance, and minimize operational costs by automatically scaling GPU resources up or down. With support for Multi-Instance GPU (MIG) on NVIDIA A100 GPUs, GKE can provision right-sized GPU acceleration with finer granularity for multi-user, multi-model AI inference workloads.
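A sketch of a GKE node pool that autoscales A100 capacity and partitions each GPU into MIG slices; the cluster name, zone, scaling bounds, and partition size are illustrative assumptions:

```shell
# Sketch: autoscaling GKE node pool with A100 GPUs partitioned into 1g.5gb MIG slices.
# Cluster name, zone, and scaling bounds are placeholders.
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=a2-highgpu-1g \
    --accelerator=type=nvidia-tesla-a100,count=1,gpu-partition-size=1g.5gb \
    --enable-autoscaling \
    --min-nodes=0 \
    --max-nodes=3
```

With --min-nodes=0, GKE can scale the pool to zero when no pods request GPUs, so you pay for accelerators only while inference workloads are actually running.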

NVIDIA CloudXR™ with RTX Virtual Workstations

NVIDIA CloudXR, a groundbreaking innovation built on NVIDIA RTX™ technology, makes high-quality XR accessible through Google Cloud Marketplace with NVIDIA RTX Virtual Workstation as a virtual machine image (VMI). Users can easily set up, scale, and consume high-quality immersive experiences, streaming XR workflows from the cloud.


Ready to get started? Contact us


Documentation

Technical resources for deploying NVIDIA technologies on Google Cloud

Google Cloud Basics
GPUs on Compute Engine

Compute Engine provides GPUs that you can add to your virtual machine instances. Learn what you can do with GPUs and what types of GPU hardware are available.

Google Cloud Basics
Using GPUs for training models in the cloud

Accelerate the training process for many deep-learning models, like image classification, video analysis, and natural language processing.

Tutorial
GPUs on Google Kubernetes Engine

Learn how to use GPU hardware accelerators in your Google Kubernetes Engine clusters’ nodes.

Google Cloud Basics
Attaching GPUs to Dataproc clusters

Attach GPUs to the master and worker Compute Engine nodes in a Dataproc cluster to accelerate specific workloads, such as machine learning and data processing.
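As a sketch, accelerators are attached to Dataproc nodes at cluster creation; the cluster name, region, and GPU choice below are illustrative placeholders:

```shell
# Sketch: Dataproc cluster with one NVIDIA T4 attached to the master and each worker.
# Cluster name and region are placeholders.
gcloud dataproc clusters create my-gpu-cluster \
    --region=us-central1 \
    --master-accelerator=type=nvidia-tesla-t4,count=1 \
    --worker-accelerator=type=nvidia-tesla-t4,count=1
```

Note that GPU drivers are not installed automatically on Dataproc nodes; an initialization action is typically used to install them at cluster startup.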