
Networking support for AI workloads

February 10, 2025
Ammett Williams

Developer Relations Engineer


At Google Cloud, we strive to make it easy to deploy AI models on our infrastructure. In this blog, we explore how the Cross-Cloud Network solution supports your AI workloads.

Managed and Unmanaged AI options

Google Cloud provides both managed (Vertex AI) and do-it-yourself (DIY) approaches for running AI workloads. 

  • Vertex AI: A fully managed machine learning platform. Vertex AI offers both pre-trained Google models and access to third-party models through Model Garden. As a managed service, Vertex AI handles infrastructure management, allowing you to concentrate on training, tuning, and running inference on your AI models (a minimal sketch of the managed path follows this list).

  • Custom infrastructure deployments: These deployments use various compute, storage, and networking options based on the type of workload you are running. AI Hypercomputer is one way to deploy both HPC workloads that may not require GPUs or TPUs, and AI workloads running on TPUs or GPUs.
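As a quick illustration of the managed path, here is a minimal sketch using the google-cloud-aiplatform Python SDK. The project, region, endpoint ID, and request payload are all placeholders; substitute the values for a model you have deployed (for example, from Model Garden).

```python
# pip install google-cloud-aiplatform
from google.cloud import aiplatform

# Placeholder project and region -- substitute your own values.
aiplatform.init(project="my-project", location="us-central1")

# Reference an existing Vertex AI endpoint (e.g., a model deployed from
# Model Garden) and send it an online prediction request. The endpoint ID
# and instance payload below are placeholders.
endpoint = aiplatform.Endpoint("1234567890123456789")
response = endpoint.predict(instances=[{"prompt": "Say hello"}])
print(response.predictions)
```

Because the service is managed, this is all the code you need; there are no clusters or networks to provision for the model itself.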

Networking for managed AI

With Vertex AI, you don't have to worry about the underlying infrastructure. By default, the service is accessible through public APIs. Enterprises that want private connectivity can choose from Private Service Access, Private Google Access, Private Service Connect endpoints, and Private Service Connect for Google APIs. The right option depends on the specific Vertex AI service you are using. You can learn more in the Accessing Vertex AI from on-premises and multicloud documentation.
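If you go the private route, the same kind of call can be pointed at a Private Service Connect endpoint. The sketch below assumes a PSC endpoint already exists in your VPC, along with a private DNS record that resolves the regional API hostname to the endpoint's internal IP; the resource names are placeholders.

```python
from google.cloud import aiplatform_v1
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

# Assumed setup: a Private Service Connect endpoint in your VPC, plus a
# private DNS zone resolving this regional API name to the endpoint's
# internal IP, so this call never traverses the public internet.
PRIVATE_API_ENDPOINT = "us-central1-aiplatform.googleapis.com"

client = aiplatform_v1.PredictionServiceClient(
    client_options={"api_endpoint": PRIVATE_API_ENDPOINT}
)

# Placeholder endpoint resource name and payload.
name = "projects/my-project/locations/us-central1/endpoints/1234567890123456789"
instance = json_format.ParseDict({"prompt": "Say hello"}, Value())
response = client.predict(endpoint=name, instances=[instance])
print(response.predictions)
```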

Networking AI infrastructure deployments

Let's look at a sample case: an organization has data located in another cloud and would like to deploy an AI cluster with GPUs on Google Cloud.

Given this requirement, you need to analyze the networking across four phases: planning, data ingestion, training, and inference.

  • Planning: This crucial initial phase involves defining your requirements, including the size of the cluster (number of GPUs), the type of GPUs needed, the desired region and zone for deployment, storage, and the anticipated network bandwidth for transfers. This planning informs the subsequent steps. For instance, training large language models like Llama, which have billions of parameters, requires a significantly larger cluster than fine-tuning smaller models.

  • Data ingestion: Since the data is located in another cloud, you need a high-speed connection so that the data can be accessed directly or transferred to a storage option in Google Cloud. To facilitate this, Cross-Cloud Interconnect offers a direct, high-bandwidth connection with a choice of 10 Gbps or 100 Gbps per link (see the back-of-the-envelope estimate after this list). Alternatively, if the data is located on-premises, you can use Cloud Interconnect.

  • Training: Training workloads demand high-bandwidth, low-latency, lossless cluster networking. You can achieve GPU-to-GPU communication that bypasses the host OS kernel with Remote Direct Memory Access (RDMA). Google Cloud networking supports the RDMA over Converged Ethernet (RoCE) protocol in dedicated VPC networks that use the RDMA network profile. Proximity matters: nodes and clusters should be placed as close to each other as possible for best performance.

  • Inference: Inference requires low-latency connectivity to endpoints, which can be exposed through connectivity options such as Network Connectivity Center (NCC), Cloud VPN, VPC Network Peering, and Private Service Connect.
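To make the planning and data ingestion numbers concrete, here is a back-of-the-envelope estimate of transfer time over Cross-Cloud Interconnect. The dataset size, link count, and 80% efficiency factor are illustrative assumptions, not measured values.

```python
def transfer_time_hours(dataset_tb: float, links: int, gbps_per_link: float,
                        efficiency: float = 0.8) -> float:
    """Rough transfer-time estimate; `efficiency` discounts protocol overhead."""
    total_bits = dataset_tb * 8e12                            # terabytes -> bits
    effective_bps = links * gbps_per_link * 1e9 * efficiency  # usable bits/sec
    return total_bits / effective_bps / 3600                  # seconds -> hours

# Illustrative: moving a 500 TB training corpus into Google Cloud.
print(f"2 x 10 Gbps links:  {transfer_time_hours(500, 2, 10):6.1f} h")   # ~69.4 h
print(f"2 x 100 Gbps links: {transfer_time_hours(500, 2, 100):6.1f} h")  # ~ 6.9 h
```

Estimates like this feed directly back into planning: if the transfer window at 10 Gbps is too long, you provision more or faster links before the cluster sits idle waiting for data.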

https://storage.googleapis.com/gweb-cloudblog-publish/images/1-neflowrdma.max-2200x2200.png

In the example above we use:

  • Cross-Cloud Interconnect to connect to Google Cloud, meeting the high-speed connection requirement.

  • RDMA networking with RoCE, since we want to get the most out of our accelerators and have planned our requirements accordingly (a sketch of creating an RDMA-enabled VPC follows this list).

  • Google Kubernetes Engine (GKE) as a compute option on which to deploy our cluster.
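To sketch the RoCE piece, the snippet below creates a VPC with an RDMA network profile using the google-cloud-compute client. This is a sketch under assumptions: the project ID and profile name are placeholders, network profiles are zonal resources, and the network_profile field requires a recent release of the client library.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder

# Assumption: an RDMA network profile (e.g., "<zone>-vpc-roce") is available
# in your chosen zone; list your project's networkProfiles to confirm the name.
network = compute_v1.Network(
    name="rdma-vpc",
    auto_create_subnetworks=False,
    mtu=8896,  # jumbo frames are typical for RoCE fabrics
    network_profile=(
        f"https://www.googleapis.com/compute/v1/projects/{PROJECT}"
        "/global/networkProfiles/us-central1-a-vpc-roce"
    ),
)

client = compute_v1.NetworksClient()
client.insert(project=PROJECT, network_resource=network).result()  # wait for LRO
```

From there, you would attach the RDMA NICs of your GPU nodes to subnets in this VPC; the exact wiring depends on whether you deploy through GKE or directly on Compute Engine.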

Learn more

To learn more about networking for AI workloads, explore the Cross-Cloud Network, AI Hypercomputer, and Vertex AI documentation.

Want to ask a question, find out more, or share a thought? Please connect with me on LinkedIn.
