
Google Cloud and NVIDIA’s enhanced partnership accelerates computing workloads

May 14, 2020
Manish Sainani

Director of Product Management, ML Infrastructure

Companies from startups to multinationals are striving to radically transform the way they solve their data challenges. As they continue to manage increasing volumes of data, these companies are searching for the best tools to help them achieve their goals—without heavy capital expenditures or complex infrastructure management.

Google Cloud and NVIDIA have been collaborating for years to deliver a powerful platform for machine learning (ML), artificial intelligence (AI), and data analytics to help you solve your complex data challenges. Organizations use NVIDIA GPUs on Google Cloud to accelerate machine learning training and inference, analytics, and other high performance computing (HPC) workloads. From virtual machines to open-source frameworks like TensorFlow, we have the tools to help you tackle your most ambitious projects. For instance, Google Cloud’s Dataproc now lets you use NVIDIA GPUs to speed up ML training and development by up to 44 times and reduce costs by 14 times.

To continue to help you meet your goals, we’re excited to announce forthcoming support for the new NVIDIA Ampere architecture and the NVIDIA A100 Tensor Core GPU. The new A100 GPUs on Google Cloud will bring enhanced hardware and software capabilities that enable researchers and innovators to further advance today’s most important AI and HPC applications, from conversational AI and recommender systems to weather simulation research on climate change. We’ll make the A100 GPUs available via Google Compute Engine, Google Kubernetes Engine, and Cloud AI Platform, allowing customers to scale up and out with control, portability, and ease of use.
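As a rough sketch of what requesting GPU capacity on Compute Engine looks like, the Python snippet below uses the Compute Engine API client (google-api-python-client) to create a VM with a guest accelerator attached and a Deep Learning VM boot image. The project ID, zone, machine type, accelerator type, and image family shown are illustrative assumptions, not A100 launch details; the exact A100 instance shapes will be documented when the new GPUs become available.

import googleapiclient.discovery

# Illustrative placeholders, not launch details.
PROJECT = "my-project"   # assumption: your project ID
ZONE = "us-central1-a"   # assumption: a zone that offers the chosen GPU type

compute = googleapiclient.discovery.build("compute", "v1")

instance_body = {
    "name": "gpu-training-vm",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-8",
    "guestAccelerators": [{
        # An existing accelerator type is shown here; A100 type names will follow.
        "acceleratorType": f"zones/{ZONE}/acceleratorTypes/nvidia-tesla-t4",
        "acceleratorCount": 1,
    }],
    # GPU VMs cannot live-migrate, so host maintenance must terminate the VM.
    "scheduling": {"onHostMaintenance": "TERMINATE", "automaticRestart": True},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            # Deep Learning VM image family with NVIDIA drivers preinstalled
            # (assumed family name; pick the one matching your framework).
            "sourceImage": "projects/deeplearning-platform-release/global/images/family/tf2-latest-gpu",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=PROJECT, zone=ZONE, body=instance_body).execute()
print("Started operation:", operation["name"])

The same accelerator request can be expressed through Google Kubernetes Engine node pools or Cloud AI Platform training jobs, which is what makes it possible to scale a workload up or out without rewriting it.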

In addition, Google Cloud’s Deep Learning VM images and Deep Learning Containers will bring pre-built support for NVIDIA’s new generation of libraries to take advantage of A100 GPUs. The Google Cloud, NVIDIA, and TensorFlow teams are partnering to provide built-in support for this new software in all TensorFlow Enterprise versions, so TensorFlow users on Google Cloud can use the new hardware without changing any code or upgrading their TensorFlow versions. 
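To illustrate why no code changes are required, here is a minimal sketch using standard TensorFlow 2.x APIs. Nothing in it is A100-specific, so the same script trains on whichever CPUs or GPUs the Deep Learning VM, container, or TensorFlow Enterprise environment exposes.

import tensorflow as tf

# The same code runs on CPU, one GPU, or many GPUs; newer accelerators are
# picked up automatically once the underlying drivers and libraries support them.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

# MirroredStrategy replicates training across all local GPUs and falls back
# to the CPU when none are present.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Toy data just to make the sketch runnable end to end.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
model.fit(x_train, y_train, epochs=1, batch_size=256)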

Avaya makes customer connections with Google Cloud and NVIDIA

Avaya, a leading global provider of unified communications and collaboration, uses Google Cloud and NVIDIA technology to address customers’ critical business challenges. Avaya Spaces, a born-in-the-cloud video collaboration solution, runs on Google Cloud and is deployed in multiple data centers globally. With COVID-19 changing the way we work, this solution has been especially helpful to organizations as they shift to social distancing and working from home. 

“Moving our video processing over to NVIDIA T4s on Google Cloud opens up new innovation opportunities for our platform. Our direction is to infuse real-time AI capabilities in our user experience to create unique value for our end users,” says Paul Relf, Senior Director of Product Management, Cloud Collaboration at Avaya. “We are heavy users of Google Cloud and the value-added capabilities that are available to us. We are also keenly interested in the new AI capabilities coming from NVIDIA and how we can leverage the combined ecosystem to create better outcomes for our Avaya Spaces users.” 

There is a wide range of use cases for NVIDIA GPUs on Google Cloud, across industries and company sizes. We spoke about some of these AI platform uses, from edge computing to graphics visualization, at NVIDIA’s GTC Digital event. You can check out some of the on-demand sessions we think are particularly interesting below:

If you’re interested in learning more about the new A100 GPUs on Google Cloud, fill out this form and we’ll be in touch.
