Store artifacts in Artifact Registry

This page describes how to configure Cloud Build to store built artifacts in an Artifact Registry repository.

Before you begin

  1. If the target repository does not exist in Artifact Registry, create a new repository.
  2. If Cloud Build and your repository are in different projects, or if you run builds with a user-specified service account, grant the Artifact Registry Writer role to the build service account in the project that contains the repository.

    The default Cloud Build service account has permissions to upload and download from repositories in the same project.
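
    You can grant the role with gcloud. This is a sketch, assuming you are granting it to the default Cloud Build service account; REPOSITORY_PROJECT_ID and PROJECT_NUMBER are placeholders for the project that contains the repository and the project number of the build project:

    gcloud projects add-iam-policy-binding REPOSITORY_PROJECT_ID \
        --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
        --role="roles/artifactregistry.writer"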

Configuring a Docker build

After you have granted permissions to the target repository, you are ready to configure your build.

To configure your build:

  1. In your build config file, add the step to build and tag the image.

    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: [ 'build', '-t', '${_LOCATION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}', '.' ]
    images:
    - '${_LOCATION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}'
    

    This snippet uses Cloud Build substitutions. This approach is useful if you want to use the same build config file to push images to repositories for different environments, such as testing, staging, or production.

    • ${_LOCATION}, ${_REPOSITORY}, and ${_IMAGE} are user-defined substitutions for the repository location, repository name, and image name. You specify values for these variables at build time.
    • $PROJECT_ID is a default substitution that Cloud Build resolves with the Google Cloud project ID for the build.

      • If you run the gcloud builds submit command, Cloud Build uses the active project ID in the gcloud session.
      • If you use a build trigger, Cloud Build uses the ID of the project where Cloud Build is running.

      Alternatively, you can use a user-defined substitution instead of $PROJECT_ID so that you can specify a project ID at build time.

  2. When you are ready to run the build, specify values for the user-defined substitutions. For example, this command substitutes:

    • us-east1 for the repository location
    • my-repo for the repository name
    • my-image for the image name
    gcloud builds submit --config=cloudbuild.yaml \
      --substitutions=_LOCATION="us-east1",_REPOSITORY="my-repo",_IMAGE="my-image" .
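
    For builds run by a trigger, you can also tag the image with the commit that produced it by using the default $COMMIT_SHA substitution. This is a sketch of the same step with a commit tag added:

    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: [ 'build', '-t', '${_LOCATION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}:$COMMIT_SHA', '.' ]
    images:
    - '${_LOCATION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}:$COMMIT_SHA'

    Note that $COMMIT_SHA is only populated for trigger-based builds; for gcloud builds submit you would pass a tag through another user-defined substitution.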
    

Configuring a Java build

After you have granted permissions to the target repository, you are ready to configure your build. The following instructions describe configuring your build to upload a Java package to a Maven repository.

To configure your build:

  1. Set up authentication for Maven. Ensure that you specify the correct target project and repository in your pom.xml file.
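
    Maven authentication for Artifact Registry typically uses the Artifact Registry Maven wagon. This is a sketch of the pom.xml entries involved; the repository id, the LOCATION, PROJECT_ID, and REPOSITORY placeholders, and the extension version are illustrative, so check the current wagon version before using it:

    <distributionManagement>
      <repository>
        <id>artifact-registry</id>
        <url>artifactregistry://LOCATION-maven.pkg.dev/PROJECT_ID/REPOSITORY</url>
      </repository>
    </distributionManagement>
    <build>
      <extensions>
        <extension>
          <groupId>com.google.cloud.artifactregistry</groupId>
          <artifactId>artifactregistry-maven-wagon</artifactId>
          <version>2.2.0</version>
        </extension>
      </extensions>
    </build>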

  2. In your Cloud Build build config file, add the step to upload the package with Maven:

    steps:
    - name: gcr.io/cloud-builders/mvn
      args: ['deploy']
    

Configuring a Node.js build

After you have granted permissions to the target repository, you are ready to configure your build. The following instructions describe configuring your build to upload a Node.js package to an npm repository.

To configure your build:

  1. Add your Artifact Registry repository to the .npmrc file in your Node.js project. The file is located in the directory with your package.json file.

    @SCOPE:registry=https://LOCATION-npm.pkg.dev/PROJECT_ID/REPOSITORY
    //LOCATION-npm.pkg.dev/PROJECT_ID/REPOSITORY:always-auth=true
    
    • SCOPE is the name of the npm scope to associate with the repository. Using scopes ensures that you always publish and install packages from the correct repository.
    • PROJECT_ID is your Google Cloud project ID.
    • LOCATION is the regional or multi-regional location of the repository.
    • REPOSITORY is the name of the repository.
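
    For example, with hypothetical values, a scope my-scope and a repository my-repo in the project my-project in us-east1, the file would contain:

    @my-scope:registry=https://us-east1-npm.pkg.dev/my-project/my-repo
    //us-east1-npm.pkg.dev/my-project/my-repo:always-auth=true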
  2. Add a script to the package.json file in your project that refreshes the access token for authentication with the repository.

    "scripts": {
     "artifactregistry-login": "npx google-artifactregistry-auth"
    }
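
    The package name in package.json must carry the same scope that your .npmrc maps to the repository. A minimal sketch, assuming a hypothetical scope my-scope and package name my-package:

    {
      "name": "@my-scope/my-package",
      "version": "1.0.0",
      "scripts": {
        "artifactregistry-login": "npx google-artifactregistry-auth"
      }
    }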
    
  3. In your build config file, add the step to upload the package to the repository.

    steps:
    - name: gcr.io/cloud-builders/npm
      args: ['run', 'artifactregistry-login']
    - name: gcr.io/cloud-builders/npm
      args: ['publish', '${_PACKAGE}']
    

    ${_PACKAGE} is a Cloud Build substitution that represents your Node.js project directory. You specify the directory when you run the build.

    For example, this command uploads the package from a directory named src:

    gcloud builds submit --config=cloudbuild.yaml \
        --substitutions=_PACKAGE="src" .
    

Configuring a Python build

After you have granted permissions to the target repository, you are ready to configure your build. The following instructions describe configuring your build to upload a Python package to a Python repository and install the package using pip.

To build and containerize a Python application and then push it to a Docker repository, see Building Python applications in the Cloud Build documentation.

To configure your build:

  1. In the directory with your Cloud Build build config file, create a file named requirements.txt with the following dependencies:

    twine
    keyrings.google-artifactregistry-auth
    
  2. To upload a Python package to your Python repository in your build, add the following steps to your build config file:

    steps:
    - name: python
      entrypoint: pip
      args: ["install", "-r", "requirements.txt", "--user"]
    - name: python
      entrypoint: python
      args:
      - '-m'
      - 'twine'
      - 'upload'
      - '--repository-url'
      - 'https://${_LOCATION}-python.pkg.dev/$PROJECT_ID/${_REPOSITORY}/'
      - 'dist/*'
    

    In this snippet, the first step installs Twine and the Artifact Registry keyring backend. The second step uploads your built Python files in the dist subdirectory. Adjust the paths to requirements.txt and your built Python files if they don't match the snippet.
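
    The upload step assumes that dist contains built distributions. If your build also needs to produce them, a preceding step can run the standard PyPA build frontend. This is a sketch, assuming your project has a pyproject.toml (or setup.py) at the repository root:

    steps:
    - name: python
      entrypoint: pip
      args: ["install", "build", "--user"]
    - name: python
      entrypoint: python
      args: ["-m", "build"]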

    The repository path includes Cloud Build substitutions. This approach is useful if you want to use the same build config file to upload packages to repositories for different environments, such as testing, staging, or production.

    • ${_LOCATION} and ${_REPOSITORY} are user-defined substitutions for the repository location and repository name. You specify values for these variables at build time.
    • $PROJECT_ID is a default substitution that Cloud Build resolves with the Google Cloud project ID for the build.

      • If you run the gcloud builds submit command, Cloud Build uses the active project ID in the gcloud session.
      • If you use a build trigger, Cloud Build uses the ID of the project where Cloud Build is running.

      Alternatively, you can use a user-defined substitution instead of $PROJECT_ID so that you can specify a project ID at build time.

  3. To install the package from the Python repository, add a step to your build config file that runs the pip install command.

      steps:
      - name: python
        entrypoint: pip
        args:
        - 'install'
        - '--index-url'
        - 'https://${_LOCATION}-python.pkg.dev/$PROJECT_ID/${_REPOSITORY}/simple/'
        - '${_PACKAGE}'
        - '--verbose'
    

    This snippet includes an additional ${_PACKAGE} substitution for the package name.

  4. When you are ready to run the build, specify values for the user-defined substitutions. For example, this command substitutes:

    • us-east1 for the repository location
    • my-repo for the repository name
    • my-package for the package name
    gcloud builds submit --config=cloudbuild.yaml \
      --substitutions=_LOCATION="us-east1",_REPOSITORY="my-repo",_PACKAGE="my-package" .
    
