Containers & Kubernetes

Google Cloud discounts: Five ways Kubernetes experts get more for less

January 9, 2024
Ameenah Burhan

Solutions Architect

As Kubernetes adoption grows, so do the challenges of managing costs for medium and large-scale environments. The State of Kubernetes Cost Optimization report highlights a compelling trend: “Elite performers take advantage of cloud discounts 16.2x more than Low performers.”

There are probably many reasons for this, but a few likely factors are these teams’ in-house expertise, their proficiency in operating large clusters, and targeted strategies that prioritize cost efficiency, including Spot VMs and committed use discounts (CUDs). In this blog post, we list best practices you can follow to create a cost-effective environment on Google Kubernetes Engine (GKE) and give an overview of the main cloud discounts available to GKE users.

1. Understand your workload demands

Before selecting a cloud discount model for your GKE environment, you need to know how much computing power your applications actually use; otherwise, you may end up overestimating your resource needs. You can do this by setting resource requests and rightsizing your workloads, which helps reduce costs and improve reliability. Alternatively, you can create a VerticalPodAutoscaler (VPA) object to automate the analysis and adjustment of CPU and memory resources for your Pods. Be sure to understand how VPA works before enabling it: it can either provide recommended resource values for manual updates or be configured to update those values for your Pods automatically.
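
Here’s a minimal sketch of the VPA approach, assuming a GKE Standard cluster named my-cluster in us-central1 and a Deployment named my-app (all placeholder names): it enables cluster-level VPA and creates a recommendation-only VPA object.

```
# Enable Vertical Pod Autoscaling on an existing GKE Standard cluster
# (cluster, region, and workload names below are placeholders).
gcloud container clusters update my-cluster \
    --region=us-central1 \
    --enable-vertical-pod-autoscaling

# Create a VPA object in recommendation-only mode ("Off") so it surfaces
# suggested CPU and memory requests without evicting any Pods.
kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"
EOF

# Review the recommendations before rightsizing your resource requests.
kubectl describe vpa my-app-vpa
```

Starting with updateMode "Off" keeps rollout risk low; once you trust the recommendations, you can switch to "Auto" and let VPA apply them for you.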

2. Save up to 45% with GKE Autopilot Mode CUDs

GKE Autopilot shifts the mental model of Kubernetes cost optimization: you pay only for the resources your Pods request. But did you know that you can still take advantage of committed use at the Pod level? You can reduce your GKE Autopilot costs with Kubernetes Engine (Autopilot Mode) committed use discounts. Autopilot Mode CUDs are based on one- and three-year commitments and can save you 20% and 45% off on-demand prices, respectively. These discounts are measured in dollars per hour of equivalent on-demand spend; they do not cover GKE Standard, Spot Pods, or cluster management fees.

3. Save up to 46% off with Flexible CUDs

Flexible CUDs add flexibility to your spending by eliminating the need to restrict your commitments to a single project, region, or machine series. With Flexible CUDs, you receive a 28% discount on the eligible spend covered by a one-year commitment and 46% for a three-year commitment. Because these commitments are spend-based, the discount applies to vCPU and memory usage in any project within a Cloud Billing account, in any region, and on any eligible general-purpose or compute-optimized machine type.

4. Save up to 70% off with resource-based CUDs

For GKE Standard, resource-based CUDs offer up to 37% off on-demand prices for a one-year commitment, and up to 70% for a three-year commitment on memory-optimized workloads. GKE Standard CUDs cover only vCPUs and memory, and GPU commitments are subject to availability constraints. To guarantee that the hardware in your commitment is available, we recommend purchasing commitments with attached reservations.
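
As a rough sketch of what a purchase looks like (the commitment name, region, and resource amounts below are placeholders), a resource-based commitment can be created with gcloud:

```
# Purchase a three-year resource-based commitment for vCPUs and memory
# in a single region; the name and quantities here are illustrative only.
gcloud compute commitments create my-3yr-commitment \
    --region=us-central1 \
    --plan=36-month \
    --resources=vcpu=32,memory=128GB
# To guarantee capacity, reservations can be attached to the commitment
# at purchase time; see the commitments documentation for the exact flags.
```

Once active, a commitment is billed for its full term whether or not the resources are used, so size it against your steady-state baseline rather than your peaks.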

5. Save up to 91% with Spot VMs

Here’s a bold statement: Spot VMs can reduce your compute costs by up to 91%. They offer the same machine types, options, and performance as regular compute instances, but their preemptible nature means they can be terminated at any time. That makes them ideal for stateless applications, short-running batch jobs, and fault-tolerant workloads. If your application is fault tolerant (meaning it can shut down gracefully within 15 seconds and is resilient to possible instance preemptions), then Spot VMs can significantly reduce your costs.
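
For example, here’s a minimal sketch of adding a Spot VM node pool to a GKE Standard cluster (the cluster, pool, and location names are placeholders); the taint keeps workloads that don’t tolerate preemption off the Spot capacity:

```
# Create an autoscaling node pool backed by Spot VMs in an existing
# GKE Standard cluster (all names below are placeholders).
gcloud container node-pools create spot-pool \
    --cluster=my-cluster \
    --region=us-central1 \
    --spot \
    --enable-autoscaling --min-nodes=0 --max-nodes=5 \
    --node-taints=cloud.google.com/gke-spot=true:NoSchedule
# Fault-tolerant workloads then opt in with a matching toleration and the
# node selector cloud.google.com/gke-spot: "true". On Autopilot, you set
# that node selector on the Pod spec instead of creating node pools.
```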

CUDs, meanwhile, offer substantial savings for workloads with steady, predictable usage. To maximize those savings, allocate resources strategically, make sure workloads are appropriately sized, and use optimization tools to help size your commitments. Allocating resources efficiently lets you avoid unnecessary costs while maintaining consistent application performance. Follow the guidelines in this article to realize notable savings on your cloud spend.
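
One such tool is the committed use discount recommender. Here’s a sketch of listing its recommendations with gcloud; the project ID and region are placeholders, and the recommender ID shown is our assumption of the commitment recommender’s name, so verify it against the Recommender documentation:

```
# List committed use discount recommendations for a project and region.
# PROJECT_ID and the region are placeholders; the recommender ID below is
# assumed to be the commitment recommender and may differ in your setup.
gcloud recommender recommendations list \
    --project=PROJECT_ID \
    --location=us-central1 \
    --recommender=google.compute.commitment.UsageCommitmentRecommender
```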

To determine when to use Spot VMs and when to choose a CUD, check out the diagram below.

[Diagram: when to use Spot VMs vs. committed use discounts]
https://storage.googleapis.com/gweb-cloudblog-publish/images/Spot_VMs.max-900x900.png

The State of Kubernetes Cost Optimization report covers all of these techniques in depth. Get your copy now and dive deeper into the insights by exploring the series of blog posts derived from the report.
