
The overwhelmed person’s guide to Google Cloud: week of December 19

December 24, 2024
https://storage.googleapis.com/gweb-cloudblog-publish/images/overwhelmed_persons_guide.max-2600x2600.png
Richard Seroter

Chief Evangelist, Google Cloud

A weekly curation of the most helpful blogs, exciting new features, and useful events coming out of Google Cloud.

Join us at Google Cloud Next: early bird pricing is available through February 14. Register now.

The content in this blog post was originally published last week as a members-only email to the Google Cloud Innovators community. To get this content directly in your inbox (not to mention lots of other benefits), sign up to be an Innovator today.


New and shiny

Four new things to know this week

  • Spanner Graph is now generally available. Get a native graph experience along with full-text search, all powered by Spanner. With this GA release, Spanner Graph adds vector similarity search on top of the features you enjoyed during preview. Read about the full feature set here.
  • Take advantage of Compute Engine reservations in GKE and Vertex AI. Do you want assurances that the cloud hardware you need is available when you need it? Then you probably leverage reservations. GKE now makes it easy to consume Compute Engine reservations in GKE custom compute classes, and Vertex AI offers the use of reservations for training and prediction workloads.
  • Request eSignatures for your Google Drive PDFs. As developers, you might be using files stored in Google Drive to power your apps. If you require eSignatures, we’ve built on our support for Google Docs with new support for Drive-hosted PDFs. Great!
  • Major League Baseball is drafting you to build innovative fan experiences on Google Cloud. Knock it out of the park to win a chance to demo your creation during the Next developer keynote, and even get a trip to the 2025 MLB All-Star Game. Register today to compete.
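On the Spanner Graph item: queries use GQL syntax against a named graph. Here's a minimal sketch of what a graph query looks like; the graph name, labels, and properties below are illustrative placeholders, not a schema from the announcement.

```sql
-- Hypothetical graph "FinGraph" with Person and Account nodes
-- connected by Owns edges.
GRAPH FinGraph
MATCH (p:Person)-[:Owns]->(a:Account)
WHERE p.name = "Alex"
RETURN p.name, a.id
```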

Watch this

Get some cloud migration insights from a banking customer. Google Cloud is for customers of all sizes. In this episode of Serverless Expeditions, Martin is joined by Christian from Commerzbank to talk about what to know when moving to the cloud.

https://storage.googleapis.com/gweb-cloudblog-publish/images/image1_HLX9KwY.max-2000x2000.png

Community cuts

https://storage.googleapis.com/gweb-cloudblog-publish/images/Community_Cuts.max-600x600.png

Every week I round up some of my favorite links from builders around the Google Cloud-iverse. Want to see your blog or video in the next issue? Drop Richard a line!

  • Use GitHub Actions for Google Cloud serverless deployments. You folks like using GitHub Actions for CI/CD, and Darren provides a helpful look at what it takes to wire up a deployment pipeline.
  • Compute Engine is pretty awesome. Whether you’re explicitly using it or not, GCE powers a lot of the Google Cloud experience. Sachin explains why GCE is a “game changer” and how to get started.
  • Ship secure Memorystore clusters using Private Service Connect. It makes good sense to run services on private networks. Derek gives us a detailed look at PSC Connection Policies and what it takes to set them up.
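For the GitHub Actions item above, a deployment pipeline often boils down to an auth step plus a deploy step. A rough sketch of a workflow that deploys to Cloud Run, assuming Workload Identity Federation is already set up (the project, pool, service account, and service names are placeholders):

```yaml
# Hypothetical workflow: deploy a Cloud Run service from source on pushes to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # needed for Workload Identity Federation
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider
          service_account: deployer@my-project.iam.gserviceaccount.com
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: my-service
          region: us-central1
          source: .
```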

Learn and grow

Four ways to build your cloud muscles this week
  • It’s worth your time to go deep on Artifact Registry. Are you storing operating system or application packages for use by your infrastructure and apps? Artifact Registry is a robust managed offering in Google Cloud, and this code lab is an excellent resource for learning it in depth.
  • Can AI make continuous delivery better? Yes, yes it can. A few of our creative field engineers wrote up an interesting post that shows off an open demo tool that adds change summaries, pull request comments, and release notes for code commits.
  • Find performance issues by comparing AlloyDB snapshots. This new functionality makes it simpler to compare point-in-time system metrics between two snapshots. I can imagine this being handy for identifying workload issues! Read through this new guide.
  • How can you serve a giant LLM that won’t fit on a single VM? Using GKE for LLM inference? Good choice. But what if you’re using an open model like Llama 3.1, which requires 750 GB of GPU memory? Read this to learn about multi-host deployment and serving.
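On the Artifact Registry item above: a typical first step is creating a repository and pointing your tooling at it. A quick sketch using the gcloud CLI (the repository, project, and image names are placeholders):

```shell
# Create a Docker-format repository (names and locations are placeholders).
gcloud artifacts repositories create my-repo \
    --repository-format=docker \
    --location=us-central1 \
    --description="App images"

# Configure the local Docker client to authenticate to it.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Tag and push an image.
docker tag my-app:latest us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest
```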

One more thing

Happy birthday to Gemini! We’ve celebrated one year of Gemini, and also found ourselves atop (for now!) the leaderboard rankings with our latest model.


Become an Innovator to stay up-to-date on the latest news, product updates, events, and learning opportunities with Google Cloud.
