[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[[["\u003cp\u003eSystem artifacts are located in the Artifact Registry of the admin cluster, and new artifacts should be pushed when bug fixes or patches are needed.\u003c/p\u003e\n"],["\u003cp\u003ePushing container images to the system Artifact Registry requires specific administrator roles: \u003ccode\u003eorganization-system-artifact-management-admin\u003c/code\u003e for the org admin cluster and \u003ccode\u003esystem-artifact-management-admin\u003c/code\u003e for the root admin cluster.\u003c/p\u003e\n"],["\u003cp\u003eBefore pushing, users must install the Distributed Cloud CLI and its \u003ccode\u003edocker-credential-gdcloud\u003c/code\u003e component, sign in with the configured identity provider, export the kubeconfig file, and configure Docker.\u003c/p\u003e\n"],["\u003cp\u003eIf the container image is stored in an S3 bucket, users need the \u003ccode\u003eproject-bucket-object-viewer\u003c/code\u003e role, the bucket's secret name, and must use the \u003ccode\u003es3cmd\u003c/code\u003e CLI tool to download it before pushing.\u003c/p\u003e\n"],["\u003cp\u003eTo push a container image to the system Artifact Registry, users need to load the image, tag it with the registry endpoint, and then push it using Docker commands, replacing placeholders with appropriate values.\u003c/p\u003e\n"]]],[],null,["# Push a container image from one cluster to another\n\nSystem artifacts exist in the Artifact Registry of the admin cluster. Push new system artifacts when the\nsystem shows bugs or outages that you can fix by patching new artifacts.\n\nThis document describes how to push individual artifacts from one cluster to\nanother.\n\nBefore you begin\n----------------\n\nTo get the permissions that you need to access resources in system Artifact Registry projects as an administrator, ask your Security Admin to grant you the following roles depending on the cluster you want to push the container image to:\n\n- **Org admin cluster:** To push the container image to the system Artifact Registry of the org admin cluster, you need the Organization System Artifact Management Admin (`organization-system-artifact-management-admin`) role.\n- **Root admin cluster:** To push the container image to the system Artifact Registry of the root admin cluster, you need the System Artifact Management Admin (`system-artifact-management-admin`) role.\n\nAfter obtaining the necessary permissions, work through the following steps before pushing an image to the system Artifact Registry of the root admin cluster or org admin cluster:\n\n1. Download and install the Distributed Cloud CLI following the instructions of [gdcloud command-line interface (CLI)](/distributed-cloud/hosted/docs/latest/appliance/resources/gdcloud-overview).\n\n2. Install the `docker-credential-gdcloud` component following the instructions of [Install components](/distributed-cloud/hosted/docs/latest/appliance/resources/gdcloud-install#install-components).\n\n gdcloud components install docker-credential-gdcloud\n\n3. [Sign in with the configured identity provider](/distributed-cloud/hosted/docs/latest/appliance/resources/gdcloud-auth).\n\n gdcloud auth login\n\n4. 
Download a container image from an S3 bucket
--------------------------------------------

| **Note:** Refer to this section if the container image to upload is stored in a Simple Storage Service (S3) bucket. Otherwise, if the container image is on your workstation, skip to [Push the image to the system Artifact Registry](#push-the-image-to-the-registry).

To get the permissions that you need to download the container image from the S3
bucket, ask your Security Admin to grant you the Project Bucket Object Viewer
(`project-bucket-object-viewer`) role in the project's namespace.

The Security Admin grants you access by creating a role binding:

    kubectl create rolebinding USER-bor-rb \
        --role=project-bucket-object-viewer \
        --user=USER \
        -n PROJECT_NAMESPACE

Replace the following:

- USER: the account name of the user that requires the role binding.
- PROJECT_NAMESPACE: the project's namespace that contains the S3 bucket.

This role gives you read-only access to the bucket within the project and to
the objects in that bucket.

After obtaining the necessary permissions, work through the following steps to
download the container image from the S3 bucket of the project's namespace:

1. Obtain the secret name of the bucket. The secret name looks like the following example:

       object-storage-key-std-user-ID

   The secret name includes a unique ID value for accessing the bucket.

2. Copy the secret name of the bucket.

3. Get the bucket access credentials and configure the `s3cmd` CLI tool. For a non-interactive alternative to `s3cmd --configure`, see the sketch after this list.

       SECRET_NAME=SECRET_NAME
       ACCESS_KEY=$(kubectl get secret ${SECRET_NAME} -n object-storage-access-keys -o=jsonpath='{.data.access-key-id}' | base64 -d)
       SECRET_KEY=$(kubectl get secret ${SECRET_NAME} -n object-storage-access-keys -o=jsonpath='{.data.secret-access-key}' | base64 -d)
       S3_ENDPOINT=objectstorage.$(kubectl get configmap dnssuffix -n gpc-system -o jsonpath='{.data.dnsSuffix}')

       echo "Access Key: ${ACCESS_KEY}" \
           && echo "Secret Key: ${SECRET_KEY}" \
           && echo "S3 Endpoint: ${S3_ENDPOINT}"

       s3cmd --configure

   Replace SECRET_NAME with the value that you copied in the previous step.

4. Download the container image from the S3 bucket to your workstation:

       s3cmd get s3://BUCKET_NAME/CONTAINER_IMAGE_NAME

   Replace the following:

   - BUCKET_NAME: the name of the S3 bucket that contains the container image.
   - CONTAINER_IMAGE_NAME: the name of the container image file that you want to download from the S3 bucket.
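If you prefer to skip the interactive `s3cmd --configure` prompt, the same
credentials can be passed as `s3cmd` command-line options instead. This is a
minimal sketch, assuming the ACCESS_KEY, SECRET_KEY, and S3_ENDPOINT variables
exported in step 3; listing the bucket first is a quick way to confirm access
before downloading:

    # List the bucket contents to verify the credentials and endpoint.
    # BUCKET_NAME and CONTAINER_IMAGE_NAME are placeholders.
    s3cmd ls s3://BUCKET_NAME \
        --access_key="${ACCESS_KEY}" \
        --secret_key="${SECRET_KEY}" \
        --host="${S3_ENDPOINT}" \
        --host-bucket="${S3_ENDPOINT}"

    # Then download the image file with the same options.
    s3cmd get s3://BUCKET_NAME/CONTAINER_IMAGE_NAME \
        --access_key="${ACCESS_KEY}" \
        --secret_key="${SECRET_KEY}" \
        --host="${S3_ENDPOINT}" \
        --host-bucket="${S3_ENDPOINT}"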
Push the image to the system Artifact Registry
----------------------------------------------

Follow these steps to push the container image file that you have on your
workstation to the system Artifact Registry in the admin cluster:

1. Open the console.

2. Get the path to the system Artifact Registry endpoint of the cluster where you want to push the container image:

       export REGISTRY_ENDPOINT=harbor.$(kubectl get configmap dnssuffix -n gpc-system -o jsonpath='{.data.dnsSuffix}')

3. Load, tag, and push the container image to the system Artifact Registry endpoint of the cluster:

       docker load --input CONTAINER_IMAGE_PATH

       docker tag IMAGE_NAME ${REGISTRY_ENDPOINT}/IMAGE_NAME

       docker push ${REGISTRY_ENDPOINT}/IMAGE_NAME

   Replace the following:

   - CONTAINER_IMAGE_PATH: the path of the container image file in your local file system, for example `oracle_db.tar`.
   - IMAGE_NAME: the name and tag of the loaded image, which `docker load` prints in its `Loaded image:` output.
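To confirm that the push succeeded, you can pull the image back from the
registry and inspect it. This is a minimal sketch, assuming the
REGISTRY_ENDPOINT variable from step 2 and the IMAGE_NAME placeholder from
step 3:

    # Pull the image back from the system Artifact Registry.
    docker pull ${REGISTRY_ENDPOINT}/IMAGE_NAME

    # Print the image ID to confirm the pulled image is present locally.
    docker image inspect ${REGISTRY_ENDPOINT}/IMAGE_NAME --format '{{.Id}}'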