System artifacts live in the Artifact Registry of the admin cluster. Push new system artifacts when the system has bugs or outages that you can fix by deploying patched artifacts.
This document describes how to push individual artifacts from one cluster to another.
Before you begin
To get the permissions that you need to access resources in system Artifact Registry projects as an administrator, ask your Security Admin to grant you the following roles depending on the cluster you want to push the container image to:
- Org admin cluster: To push the container image to the system Artifact Registry of the org admin cluster, you need the Organization System Artifact Management Admin (organization-system-artifact-management-admin) role.
- Root admin cluster: To push the container image to the system Artifact Registry of the root admin cluster, you need the System Artifact Management Admin (system-artifact-management-admin) role.
After obtaining the necessary permissions, work through the following steps before pushing an image to the system Artifact Registry of the root admin cluster or org admin cluster:
Download and install the Distributed Cloud CLI by following the instructions in gdcloud command-line interface (CLI).
Install the docker-credential-gdcloud component by following the instructions in Install components:
gdcloud components install docker-credential-gdcloud
Sign in with the configured identity provider.
gdcloud auth login
Export the kubeconfig file.
gdcloud clusters get-credentials CLUSTER_NAME
Replace CLUSTER_NAME with the name of the cluster.
Configure Docker.
gdcloud auth configure-docker
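The setup steps above rely on several command-line tools being installed. As a minimal sketch (the tool list is an assumption based on the steps in this section), a preflight check can confirm each one is on your PATH before you begin:

```shell
# check_tool reports whether a single command is available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Tools used in the steps in this document; adjust the list for your environment.
for tool in gdcloud kubectl docker s3cmd; do
  check_tool "$tool"
done
```

If any tool reports as missing, install it before continuing with the steps that follow.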
Download a container image from an S3 bucket
To get the permissions that you need to download the container image from the S3 bucket, ask your Security Admin to grant you the Project Bucket Object Viewer (project-bucket-object-viewer) role in the project's namespace.
The Security Admin grants you access by creating a role binding:
kubectl create rolebinding USER-bor-rb \
--role=project-bucket-object-viewer \
--user=USER \
-n PROJECT_NAMESPACE
Replace the following:
- USER: The account name of the user that requires the role binding.
- PROJECT_NAMESPACE: The project's namespace with the S3 bucket.
This grants you read-only access to the bucket within the project and to the objects in that bucket.
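The kubectl create rolebinding command shown above can also be expressed as a declarative manifest that the Security Admin applies with kubectl apply -f. A sketch of that manifest, using the same placeholders as the command (the binding name can be any unique value):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: USER-bor-rb            # any unique name works; this mirrors the command above
  namespace: PROJECT_NAMESPACE # the project's namespace with the S3 bucket
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: project-bucket-object-viewer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: USER                   # the account name of the user that requires the role binding
```
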
After obtaining the necessary permissions, work through the following steps to download the container image from the S3 bucket of the project's namespace:
Obtain the secret name of the bucket. The secret name looks like the following example:
object-storage-key-std-user-ID
The secret name includes a unique ID value used to access the bucket.
Copy the secret name of the bucket.
Get bucket access credentials and configure the s3cmd CLI tool.
SECRET_NAME=SECRET_NAME
ACCESS_KEY=$(kubectl get secret ${SECRET_NAME} -n object-storage-access-keys -o=jsonpath='{.data.access-key-id}' | base64 -d)
SECRET_KEY=$(kubectl get secret ${SECRET_NAME} -n object-storage-access-keys -o=jsonpath='{.data.secret-access-key}' | base64 -d)
S3_ENDPOINT=objectstorage.$(kubectl get configmap dnssuffix -n gpc-system -o jsonpath='{.data.dnsSuffix}')
echo "Access Key: ${ACCESS_KEY}" \
  && echo "Secret Key: ${SECRET_KEY}" \
  && echo "S3 Endpoint: ${S3_ENDPOINT}"
s3cmd --configure
Replace SECRET_NAME with the value that you copied in the previous step.
Download the container image from the S3 bucket to your workstation.
s3cmd get s3://BUCKET_NAME/CONTAINER_IMAGE_NAME
Replace the following:
- BUCKET_NAME: The name of the S3 bucket that has the container image.
- CONTAINER_IMAGE_NAME: The name of the container image file that you want to download from the S3 bucket.
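After the download completes, you may want to confirm that the file arrived intact by comparing its SHA-256 checksum against one provided by whoever produced the image file. The following is a self-contained sketch of that checksum step; it creates a stand-in file rather than using a real download:

```shell
# Stand-in for the downloaded container image file, so the sketch runs anywhere.
IMAGE_FILE=$(mktemp)
printf 'fake image bytes' > "$IMAGE_FILE"

# Compute the SHA-256 checksum of the downloaded file.
ACTUAL_SUM=$(sha256sum "$IMAGE_FILE" | awk '{print $1}')
echo "sha256: ${ACTUAL_SUM}"

# Compare ACTUAL_SUM against the checksum published with the image file.
rm -f "$IMAGE_FILE"
```
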
Push the image to the system Artifact Registry
Follow these steps to push the container image file from your workstation to the system Artifact Registry in the admin cluster:
Open the console.
Get the path to the system Artifact Registry endpoint of the cluster where you want to push the container image.
export REGISTRY_ENDPOINT=harbor.$(kubectl get configmap dnssuffix -n gpc-system -o jsonpath='{.data.dnsSuffix}')
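The export command above builds the registry host by prefixing harbor. to the zone's DNS suffix. With a hypothetical suffix (the real value comes from the dnssuffix configmap), the result looks like this:

```shell
# Hypothetical DNS suffix, standing in for the value stored in the dnssuffix configmap.
DNS_SUFFIX="org-1.zone1.example.com"

REGISTRY_ENDPOINT="harbor.${DNS_SUFFIX}"
echo "${REGISTRY_ENDPOINT}"
# prints harbor.org-1.zone1.example.com
```
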
Load, tag, and push the container image to the system Artifact Registry endpoint of the cluster.
docker load --input CONTAINER_IMAGE_PATH
docker tag IMAGE_NAME ${REGISTRY_ENDPOINT}/IMAGE_NAME
docker push ${REGISTRY_ENDPOINT}/IMAGE_NAME
Replace the following:
- CONTAINER_IMAGE_PATH: The path of the container image file in your local file system, for example, oracle_db.tar.
- IMAGE_NAME: The name of the image that the docker load command prints after loading the file.
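The tag and push targets are just the registry endpoint joined to the image name. A small sketch of that string construction follows, with the docker calls shown as comments since they need a live registry; the endpoint and image name here are hypothetical:

```shell
# Join the registry endpoint and image name into the fully qualified reference.
target_ref() {
  echo "$1/$2"
}

REGISTRY_ENDPOINT="harbor.org-1.zone1.example.com"  # hypothetical endpoint
REF=$(target_ref "$REGISTRY_ENDPOINT" "oracle_db:latest")
echo "$REF"

# The push sequence from the step above would then be:
#   docker load --input oracle_db.tar
#   docker tag oracle_db:latest "$REF"
#   docker push "$REF"
```
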