# Optimize your cloud resources

*Last updated 2025-09-04 UTC.*

Before your peak capacity event occurs, manage and optimize the resources that are used by your Google Cloud workloads. This involves right-sizing resources based on actual usage and demand, using autoscaling for dynamic resource allocation, and reviewing architecture and security recommendations. Both [Cloud Monitoring](/monitoring/docs/monitoring-overview) and [Recommender](/recommender) (Active Assist) can help you identify opportunities to optimize your cloud resources. By using these tools, you can gain insight into resource usage and make informed decisions before your event.

Review Google Cloud best practices
----------------------------------

Many peak capacity event issues can be avoided by following the recommended best practices for the Google Cloud products that you use. Review each product's best-practices guide before your event.

Review scalability
------------------

Autoscaling can ensure that your cloud-based applications have the resources that they need to handle varying workloads, while avoiding overprovisioning and unnecessary costs. Google Cloud offers several product-specific autoscaling options, including the following:

- [Compute Engine managed instance groups (MIGs)](/compute/docs/instance-groups#managed_instance_groups) are groups of VMs that are managed and scaled as a single entity.
  With MIGs, you can define autoscaling policies that specify the minimum and maximum number of VMs to maintain in the group, and the conditions that trigger autoscaling.
- [Google Kubernetes Engine (GKE) autoscaling](/kubernetes-engine/docs/concepts/cluster-autoscaler) dynamically adjusts your cluster resources to match your application's needs. It offers tools that can optimize resource utilization, ensure application performance, and simplify cluster management.
- [Cloud Run](/run/docs/about-instance-autoscaling) offers built-in autoscaling, which automatically adjusts the number of instances based on incoming traffic.

Before your event, we recommend that you scale up manually. Even if you have autoscaling configured, the velocity of event traffic might outpace it. Pre-warm resources ahead of time, including the following:

- Virtual machines
- Caches that you want to pre-load
- Serverless components, to prevent cold starts

| **Note:** Google [Cloud Load Balancing](/load-balancing) doesn't require pre-warming. However, other cloud providers might require load balancer pre-warming. Make sure to check with those providers.

Review Active Assist recommendations
------------------------------------

Active Assist is the portfolio of tools that Google Cloud uses to generate recommendations and insights that help you optimize your Google Cloud projects.
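You can also retrieve these recommendations with the gcloud CLI. The following is a minimal sketch that lists idle-VM recommendations for a single zone; it assumes the Recommender API is enabled for the project, and `PROJECT_ID` and the zone are placeholder values:

```shell
# List idle-VM recommendations for one zone.
# PROJECT_ID and us-central1-a are placeholders; substitute your own values.
gcloud recommender recommendations list \
    --project=PROJECT_ID \
    --location=us-central1-a \
    --recommender=google.compute.instance.IdleResourceRecommender
```

Other recommenders, such as the machine-type (rightsizing) recommender, can be queried the same way by changing the `--recommender` flag.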
For more information, see [What is Active Assist](/recommender/docs/whatis-activeassist).

Review your product versions
----------------------------

Ensure that all of your cloud products and services are up to date with the latest stable version.

Review alerts and dashboards
----------------------------

Proactively identify and address issues by evaluating the alerts and dashboards provided through Google Cloud Observability tools and third-party solutions.

Check your [Google Cloud Observability metrics, logs, and traces](/stackdriver/docs) to gain insight into resource utilization, performance characteristics, and the overall health of your resources. Monitor important metrics that align with system health indicators, such as CPU utilization, memory usage, network traffic, disk I/O, and application response times. Also consider business-specific metrics. By tracking these metrics, you can identify potential bottlenecks, performance issues, and resource constraints. Additionally, you can set up alerts that proactively notify the relevant teams about potential issues or anomalies.

For alerts, focus on critical metrics, set appropriate thresholds to minimize alert fatigue, and ensure timely responses to significant issues. This targeted approach lets you proactively maintain workload reliability. For more information, see the [Alerting overview](/monitoring/alerts).

What's next
-----------

- [Conduct load testing](/support/docs/peak-events/conduct-load-testing)
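As a concrete illustration of the alerting guidance in *Review alerts and dashboards*, the following is a sketch of a Cloud Monitoring `AlertPolicy` (Monitoring API JSON) that fires when a VM's mean CPU utilization stays above 80% for five minutes. The display name and threshold are example values you would tune for your own workload, and a notification channel still needs to be attached:

```json
{
  "displayName": "High CPU utilization (pre-event)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "VM CPU > 80% for 5 minutes",
      "conditionThreshold": {
        "filter": "resource.type = \"gce_instance\" AND metric.type = \"compute.googleapis.com/instance/cpu/utilization\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0.8,
        "duration": "300s",
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "perSeriesAligner": "ALIGN_MEAN"
          }
        ]
      }
    }
  ]
}
```

The `duration` field is what keeps short CPU spikes from paging anyone, which helps minimize the alert fatigue described above.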