Know more, spend less: how GKE cost optimization insights help you optimize Kubernetes
GKE Product Manager
If there’s one thing we’ve learned from talking to Kubernetes users, it’s that optimizing for reliability, performance, and cost efficiency is hard, especially at scale.
That’s why we recently released GKE cost optimization insights in preview: a tab in the Google Cloud Console that automatically surfaces optimization opportunities across your Google Kubernetes Engine clusters and workloads, with minimal friction.
The view shows you, over a selected period of time, the current state of your clusters by exposing the actual used, requested, and allocated resources. For the workloads running on your clusters, it shows actual used and requested resources, as well as the configured limits, so you can make granular, workload-level right-sizing optimizations.
GKE cost optimization insights proved popular with users right out of the gate. For example, Arne Claus, Site Reliability Engineer at hotel search platform Trivago, says: “The new GKE cost optimization insights view helped us to identify cost optimization opportunities at the cluster and workload level and take immediate action. In the first weeks of use, the Trivago team spotted and improved the cost/performance balance of several clusters.”
Today, we’re graduating GKE cost optimization insights from Preview to general availability (GA). Since the preview launch, it has undergone multiple improvements that we believe will help with your day-to-day optimization routines. For instance, we’ve made it easier to spot under-provisioned workloads that could be at risk of instability due to insufficient resource requests.
Now that you have insight into your optimization opportunities, let’s recap which capabilities help the most with reliability, performance, and cost efficiency in GKE, and what resources are available for your teams to get up to speed with GKE cost optimization.
When using a managed Kubernetes service in the public cloud, there are four common pitfalls that lead to inefficient use of Kubernetes clusters:
Culture - Many teams that embrace the public cloud have never worked with a pay-as-you-go service like GKE before, so they’re unfamiliar with how resource allocation and app deployment processes affect their costs. GKE cost optimization insights can help teams better understand such an environment and improve business value by providing insight into how to balance cost, reliability, and performance needs.
Bin packing - The more efficiently you pack apps onto nodes, the more you save. You can do this by ensuring you request the right amount of resources based on your actual utilization. GKE cost optimization insights help you identify bin-packing gaps: look at the gray bar in the cluster view.
App right-sizing - You need to configure appropriate resource requests and workload autoscaling targets for the objects deployed in your cluster. The more accurately you size your Pods’ resource requests, the more reliably your apps will run and, in most cases, the more room you free up in the cluster. With GKE cost optimization insights, you can visualize right-sizing information by looking at the green bar in both the cluster and workload views.
Demand-based downscaling - To save money during low-demand periods such as nighttime, your clusters should scale down with demand. In some cases, however, you can’t scale them down because certain workloads cannot be evicted or because the cluster has been misconfigured.
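To make the bin-packing, right-sizing, and downscaling pitfalls above concrete, here is a minimal sketch of a Deployment manifest. The names and resource values are illustrative, not taken from this post: explicit requests sized from observed utilization let the scheduler pack nodes efficiently, limits cap runaway usage, and the standard cluster-autoscaler safe-to-evict annotation allows the Pod to be moved when nodes are consolidated during low-demand periods.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                 # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        # Lets the cluster autoscaler evict this Pod when it
        # consolidates nodes during low-demand periods.
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/frontend:1.0   # illustrative image
        resources:
          requests:              # sized from observed utilization; drives bin packing
            cpu: 250m
            memory: 256Mi
          limits:                # caps protect the node from runaway usage
            cpu: 500m
            memory: 512Mi
```

Setting requests close to actual usage is what closes the gray-bar (bin-packing) and green-bar (right-sizing) gaps the insights view surfaces.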
GKE cost optimization insights help you better understand and visualize these pitfalls. To solve them, or avoid them from the start, Google Cloud provides ready-made solutions. For example, you can use the new GKE cost optimization insights to help with monitoring and with the cultural shift toward FinOps. If you don’t want to deal with bin packing, you can use the Autopilot mode of operation; setting up node auto-provisioning along with the optimize-utilization autoscaling profile can also improve bin packing. To help with app right-sizing and demand-based downscaling, you can take advantage of the GKE Pod autoscalers: in addition to the classic Horizontal Pod Autoscaler, we also provide a Vertical Pod Autoscaler and a Multidimensional Pod Autoscaler.
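As a sketch of the autoscaling approach, a classic Horizontal Pod Autoscaler manifest might look like the following. The target Deployment name and utilization threshold are illustrative assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa             # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend               # assumes a Deployment with this name exists
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        # Scale out when average CPU usage across Pods exceeds
        # 70% of the requested CPU; scale in when demand drops.
        averageUtilization: 70
```

Because the HPA scales relative to requested CPU, it works best when requests are right-sized first, which is exactly what the insights view helps you verify.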
We’ve written extensively about GKE features such as Autopilot, optimized VM types, node auto-provisioning, Pod autoscalers, and others in our GKE best practices to lessen overprovisioning. This is a great place to learn how to address your newly discovered optimization opportunities.
If you want to dive deeper into the technical details, check out these best practices for running cost-optimized Kubernetes applications on GKE, an exhaustive list of GKE best practices.
And finally, for visual learners, there’s the GKE cost optimization video series on YouTube, where our experts walk you through key concepts of cost optimization step by step.