Best practices for Config Connector
===================================

This page explains best practices you should consider when using Config Connector.

Manage API quota limits
-----------------------

If you run into errors indicating that you've exceeded the API quota limit, you might have created too many Config Connector resources of the same Kind under the same [quota project](https://cloud.google.com/docs/quota). When you create a lot of resources, those resources can generate too many API requests to the same API endpoint because of the [reconciliation strategy](/config-connector/docs/concepts/reconciliation) that Config Connector uses.
One way to resolve this issue is to request a quota increase. Besides a quota increase, if you've confirmed that the quota errors are caused by GET requests against the Google Cloud resources managed by your Config Connector resources, consider one of the following options:

- [Increase the reconciliation interval](#quota-reconcile) for your Config Connector resources
- [Split your resources](#quota-split) into multiple projects
- [Switch Config Connector to namespaced mode](#quota-namespaced)

### Increase the reconciliation interval

You can increase the time between reconciliations of a resource so that Config Connector makes fewer requests against API quotas. The recommendation is to set the reconciliation interval to 1 hour. To increase the reconciliation interval, follow the steps in [Configuring the reconciliation interval](/config-connector/docs/concepts/reconciliation#configuring_the_reconciliation_interval).
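As a minimal sketch of that configuration, the interval can be set per resource with the `cnrm.cloud.google.com/reconcile-interval-in-seconds` annotation described on that page; the `PubSubTopic` Kind and the resource name below are placeholders chosen for illustration, not part of the original example:

```
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic  # placeholder name
  annotations:
    # Reconcile roughly once per hour instead of the default interval.
    cnrm.cloud.google.com/reconcile-interval-in-seconds: "3600"
```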
### Split your resources into multiple projects

This approach spreads your Config Connector resources across different projects. It works well when you add new resources, but it can be risky to split existing resources, because you need to delete the existing resources and recreate them under different projects. Deleting resources can cause data loss with some kinds of resources, such as `SpannerInstance` or `BigtableTable` resources. You should back up your data before deleting it.
To split existing Config Connector resources into different projects, complete the following steps:
1. Decide which Config Connector resources you plan to move to different projects.
2. [Delete the Config Connector resources](/config-connector/docs/how-to/managing-deleting-resources#deleting_the_dataset). Make sure that the `cnrm.cloud.google.com/deletion-policy` annotation is not set to `abandon`.
3. Update the `spec.projectRef` field or the `cnrm.cloud.google.com/project-id` annotation in the YAML configuration of the Config Connector resources that you plan to move to the new projects.
4. Grant the IAM service account used by Config Connector the proper permissions on the new projects.
5. Apply the updated YAML configuration to [create the Config Connector resources](/config-connector/docs/how-to/managing-deleting-resources#creating_a_resource).
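As an illustration of step 3, a hypothetical `PubSubTopic` being moved to another project might be updated as follows; `example-topic` and `new-project-id` are placeholders, and resource kinds that expose a `spec.projectRef` field would set that field instead of the annotation:

```
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic  # placeholder name
  annotations:
    # Placeholder ID of the destination project.
    cnrm.cloud.google.com/project-id: new-project-id
```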
### Switch to namespaced mode

| **Note:** If you want to use `requestProjectPolicy` and `billingProject` in `ConfigConnectorContext` to specify the quota project, you don't need to use IAM service accounts owned by different Google Cloud projects, but you should use different Google Cloud projects as the `billingProject` for different `ConfigConnectorContext` objects to mitigate the API quota limit issue.

You can bind IAM service accounts owned by different Google Cloud projects to different namespaces where Config Connector is installed in [namespaced mode](/config-connector/docs/concepts/installation-types#namespaced), and split your resources across those namespaces. To achieve this, complete the following steps:

1. [Configure Config Connector to run in namespaced mode](/config-connector/docs/how-to/advanced-install#configure_to_run_in_namespaced_mode). Create new IAM service accounts from different projects, and bind them to different namespaces by following the [instructions to configure Config Connector for each project](/config-connector/docs/how-to/advanced-install#project-namespace).
2. Grant the new IAM service accounts the proper permissions on the project that contains the resources.
3. Decide which Config Connector resources you plan to move to different namespaces.
4. Update the YAML configuration of those Config Connector resources and set the `cnrm.cloud.google.com/deletion-policy` annotation to `abandon`.
5. Apply the updated YAML configuration to update the Config Connector resources' deletion policy.
6. [Abandon the Config Connector resources](/config-connector/docs/how-to/managing-deleting-resources#deleting_the_dataset).
7. Update the `metadata.namespace` field in the YAML configuration of the Config Connector resources you plan to move to the different namespaces.
8. Apply the updated YAML configuration to [acquire the abandoned resources](/config-connector/docs/how-to/managing-deleting-resources#acquiring_an_existing_resource).

Manage node pools in GKE clusters
---------------------------------

You might experience errors when you create a cluster by applying a `ContainerCluster` resource in Config Connector, and then try to update `nodeConfig` or other node-related fields by applying an updated `ContainerCluster` configuration. These errors occur because fields such as `nodeConfig`, `nodeConfig.labels`, and `nodeConfig.taint` are immutable, which is a technical limitation of the underlying [Google Cloud API](/config-connector/docs/reference/resource-docs/container/containercluster).
If you need to update these fields, you can instead use the [`ContainerNodePool`](/config-connector/docs/reference/resource-docs/container/containernodepool) resource to manage node pools, where these fields are not immutable. To manage node pools with the `ContainerNodePool` resource, you must specify the `cnrm.cloud.google.com/remove-default-node-pool: "true"` annotation. This annotation removes the default node pool that is created during cluster creation. Then, to create separate node pools, specify the `nodeConfig` fields in `ContainerNodePool` instead of in `ContainerCluster`. See the [`ContainerNodePool` resource example](/config-connector/docs/reference/resource-docs/container/containernodepool#basic_node_pool) for reference.

You should set the [`cnrm.cloud.google.com/state-into-spec: absent`](/config-connector/docs/concepts/ignore-unspecified-fields) annotation on both the `ContainerCluster` and `ContainerNodePool` resources. This annotation avoids potential reconciliation errors during the interaction between the Config Connector controller and the underlying APIs.
The following examples show a `ContainerCluster` and a `ContainerNodePool` configuration with these annotations set:

```
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerCluster
metadata:
  name: containercluster-sample
  annotations:
    cnrm.cloud.google.com/remove-default-node-pool: "true"
    cnrm.cloud.google.com/state-into-spec: absent
spec:
  description: A sample cluster.
  location: us-west1
  initialNodeCount: 1
```

```
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerNodePool
metadata:
  labels:
    label-one: "value-one"
  name: containernodepool-sample
  annotations:
    cnrm.cloud.google.com/state-into-spec: absent
spec:
  location: us-west1
  autoscaling:
    minNodeCount: 1
    maxNodeCount: 3
  nodeConfig:
    machineType: n1-standard-1
    preemptible: false
    oauthScopes:
      - "https://www.googleapis.com/auth/logging.write"
      - "https://www.googleapis.com/auth/monitoring"
  clusterRef:
    name: containercluster-sample
```