Expanding support for AI developers on Hugging Face

Ryan J. Salva
Senior Director, Product Management, Google Cloud
Most people building with AI are in it to change the world, not to twiddle their thumbs. So when inspiration strikes, the last thing anyone wants is to spend hours waiting for the latest AI models to download to their development environment.
That’s why today we’re announcing a deeper partnership between Hugging Face and Google Cloud that:
- reduces Hugging Face model download times through Vertex AI and Google Kubernetes Engine
- offers native support for TPUs on all open models sourced through Hugging Face
- provides a safer experience through Google Cloud’s built-in security capabilities.
We’ll enable faster download times through a new gateway for Hugging Face repositories that will cache Hugging Face models and datasets directly on Google Cloud. Moving forward, developers working with Hugging Face’s open models on Google Cloud should expect download times to take minutes, not hours.
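For a sense of what this looks like in practice, here is a minimal sketch of a typical model download with the huggingface_hub client. The repo name is just an example, and we expect the caching gateway to work transparently behind this standard code path, with no code changes required.

```python
# Minimal sketch: a standard model download with the huggingface_hub client.
# The caching gateway is transparent to this code path; the repo_id below is
# only an example of an open model hosted on Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="google/flan-t5-base",  # example open model
    revision="main",                # pin a revision for reproducible builds
)
print(f"Model files cached at: {local_dir}")
```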
We’re also working with Hugging Face to add native support for TPUs for all open models on the Hugging Face platform. This means that whether developers choose to deploy training and inference workloads on NVIDIA GPUs or on TPUs, they’ll experience the same ease of deployment and support.
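As a rough illustration (not the specific integration path this work will ship), here is a minimal sketch of running an open model on a Cloud TPU VM with JAX/Flax, assuming `jax[tpu]` and `transformers[flax]` are installed; the model name is just an example.

```python
# Minimal sketch: running an open Hugging Face model on a Cloud TPU VM using
# JAX/Flax. Assumes jax[tpu] and transformers[flax] are installed; the actual
# TPU integration for open models may differ from this generic path.
import jax
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

print(jax.devices())  # on a TPU VM this lists TPU devices

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")

# JAX computations, including generation, run on the default backend,
# which is the TPU on a Cloud TPU VM.
inputs = tokenizer("Open models on TPUs:", return_tensors="np")
outputs = model.generate(inputs["input_ids"], max_length=30)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```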
Open models are gaining traction with enterprise developers, who typically work with specific security requirements. To support enterprise developers, we’re working with Hugging Face to bring Google Cloud’s extensive security protocols to all Hugging Face models deployed through Vertex AI. This means that any Hugging Face model on Vertex AI Model Garden will now be scanned and validated with Google Cloud’s leading cybersecurity capabilities powered by our Threat Intelligence platform and Mandiant.
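For illustration only, here is a minimal sketch of uploading and deploying a model with the Vertex AI Python SDK (`google-cloud-aiplatform`). The project, bucket, and serving container image are placeholders, and Model Garden also offers this as a guided deployment in the console; the security scanning described above happens on Google Cloud’s side, not in this code.

```python
# Minimal sketch: serving a model on Vertex AI with the google-cloud-aiplatform
# SDK. Project, bucket, and container image are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="hf-open-model",
    artifact_uri="gs://my-bucket/hf-model-artifacts/",  # exported model files (placeholder)
    serving_container_image_uri=(
        "us-docker.pkg.dev/my-gcp-project/serving/hf-inference:latest"  # placeholder image
    ),
)

endpoint = model.deploy(machine_type="n1-standard-8")
print(endpoint.predict(instances=[{"inputs": "Hello from Vertex AI"}]))
```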
A more open AI
Ultimately, through our robust and diverse AI ecosystem, we’re committed to supporting developers with class-leading AI tools, a choice of AI-optimized infrastructure, and hundreds of models to choose from, including a broad set of open models optimized to run on Google Cloud through Hugging Face.
This expanded partnership with Hugging Face furthers that commitment and will ensure that developers have an optimal experience when serving AI models on Google Cloud, whether they choose a model from Google, a model from one of our many partners, or one of the thousands of open models available on Hugging Face.
You can read more on Hugging Face’s blog.


