# About autoscaling workloads based on metrics

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

*Last updated 2025-09-01 UTC.*

***

This page describes the ways that you can automatically increase or decrease the number of replicas of a given workload using custom, external, or Prometheus metrics.

Why autoscale based on metrics
------------------------------

Consider an application that pulls tasks from a queue and completes them. The application might have a Service-Level Objective (SLO) for the time to process a task, or for the number of tasks pending. If the queue is growing, running more replicas of the workload might help meet the workload's SLO. If the queue is empty or is shrinking more quickly than expected, you could save money by running fewer replicas, while still meeting the workload's SLO.

About custom, Prometheus, and external metrics
----------------------------------------------

You can scale workloads based on custom, Prometheus, or external metrics.

A *custom metric* is reported from your application running in Kubernetes. To learn more, see [Custom and Prometheus metrics](#custom-prometheus-metrics).

Metrics coming from [Managed Service for Prometheus](/stackdriver/docs/managed-prometheus) are considered a type of custom metric.

An *external metric* is reported from an application or service not running on your cluster, but whose performance impacts your Kubernetes application. For example, you can autoscale on any metric in Cloud Monitoring, including metrics from Pub/Sub or Dataflow. Prometheus metrics contain data emitted from your cluster that you can also use as an autoscaling signal. To learn more, see [External metrics](#external-metrics).

### Custom and Prometheus metrics

We recommend that you use Managed Service for Prometheus to create and manage custom metrics. You can use Prometheus Query Language (PromQL) to query all metrics in Monitoring. For more information, see [Horizontal Pod autoscaling for Managed Service for Prometheus](/stackdriver/docs/managed-prometheus/hpa).

Your application can report a custom metric to Monitoring. You can configure Kubernetes to respond to these metrics and scale your workload automatically. For example, you can scale your application based on metrics such as queries per second, writes per second, network performance, latency when communicating with a different application, or other metrics that make sense for your workload. For more information, see [Optimize Pod autoscaling based on metrics](/kubernetes-engine/docs/tutorials/autoscaling-metrics).

### External metrics

If you need to scale your workload based on the performance of an application or service outside of Kubernetes, you can configure an external metric. For example, you might need to increase the capacity of your application to ingest messages from Pub/Sub if the number of undelivered messages is trending upward. The external application needs to export the metric to a Monitoring instance that the cluster can access. The trend of each metric over time causes the Horizontal Pod Autoscaler to change the number of replicas in the workload automatically.
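As a sketch of how this fits together, the following HorizontalPodAutoscaler manifest scales a Deployment on the Pub/Sub undelivered-messages metric. This is a minimal illustration, not taken from this page: it assumes a metrics adapter that exposes Cloud Monitoring metrics through the Kubernetes external metrics API is installed in the cluster, and the Deployment name `pubsub-consumer` and subscription ID `example-subscription` are hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-consumer-hpa     # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-consumer       # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        # Cloud Monitoring metric name, with "/" written as "|" in the
        # form the external metrics adapter expects
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: example-subscription  # hypothetical
      target:
        type: AverageValue
        averageValue: "100"     # target ~100 undelivered messages per replica
```

With a configuration like this, the HorizontalPodAutoscaler adds replicas (up to `maxReplicas`) while the backlog per replica stays above the target value, and removes them again as the backlog drains.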
For more information, see [Optimize Pod autoscaling based on metrics](/kubernetes-engine/docs/tutorials/autoscaling-metrics).

Import metrics to Monitoring
----------------------------

To import metrics to Monitoring, you can either:

- Configure [Managed Service for Prometheus](/stackdriver/docs/managed-prometheus) (recommended), **or**
- Export metrics from the application using the [Cloud Monitoring API](/monitoring/custom-metrics/creating-metrics).

What's next
-----------

- Learn how to [enable horizontal Pod autoscaling for Managed Service for Prometheus](/stackdriver/docs/managed-prometheus/hpa).
- Learn more about [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
- Learn more about [Vertical Pod autoscaling](/kubernetes-engine/docs/concepts/verticalpodautoscaler).