If your build produces artifacts such as container images, binaries, or tarballs, you can store them in Container Registry, Cloud Storage, or a private third-party repository.
This page explains how to store container images in Container Registry and non-container artifacts in Cloud Storage.
Storing images in Container Registry
To store a container image in Container Registry after your build completes, add an images field in your build config file, pointing to one or more images:
YAML
images: [[IMAGE_NAME], [IMAGE_NAME], ...]
Where [IMAGE_NAME] is the name of the image you want to store, such as gcr.io/myproject/myimage.
JSON
{
    "images": [
        "[IMAGE_NAME]",
        "[IMAGE_NAME]",
        "..."
    ]
}
Where [IMAGE_NAME] is the name of the image you want to store, such as gcr.io/myproject/myimage.
The following example builds a Docker image named myimage and stores it in gcr.io/myproject/:
YAML
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
images: ['gcr.io/myproject/myimage']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "build",
                "-t",
                "gcr.io/myproject/myimage",
                "."
            ]
        }
    ],
    "images": [
        "gcr.io/myproject/myimage"
    ]
}
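Before submitting a config like the one above, you can sanity-check it locally. The sketch below (an illustration, not part of Cloud Build) parses the JSON form with Python's standard library and confirms that every image listed in the images field is actually tagged by a build step, since Cloud Build has nothing to push otherwise:

```python
import json

# The JSON build config from the example above, inlined for illustration.
config_text = """
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": ["build", "-t", "gcr.io/myproject/myimage", "."]
        }
    ],
    "images": ["gcr.io/myproject/myimage"]
}
"""

config = json.loads(config_text)

# Collect every argument passed to any step; an image named in `images`
# should appear as a tag argument in at least one build step.
tagged = {arg for step in config["steps"] for arg in step.get("args", [])}
missing = [img for img in config["images"] if img not in tagged]
print(missing)  # [] means every listed image is produced by a step
```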
To store the image in Container Registry as part of your build flow, add a Docker build step and pass arguments to invoke the push command:
YAML
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', '[IMAGE_NAME]']
Where [IMAGE_NAME] is the name of the image you want to store in Container Registry, such as gcr.io/myproject/myimage.
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "push",
                "[IMAGE_NAME]"
            ]
        }
    ]
}
Where [IMAGE_NAME] is the name of the image you want to store in Container Registry, such as gcr.io/myproject/myimage.
The following example builds a Docker image named myimage and stores the image by adding a second Docker build step that pushes the image to Container Registry:
YAML
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/myproject/myimage']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "build",
                "-t",
                "gcr.io/myproject/myimage",
                "."
            ]
        },
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "push",
                "gcr.io/myproject/myimage"
            ]
        }
    ]
}
The difference between using the images field and the Docker push command is that if you use the images field, the stored image is displayed in the build results. This includes the Build description page for a build in the Cloud Console, the results of Build.get(), and the results of gcloud builds list. However, if you use the Docker push command to store the built image, the image is not displayed in the build results.
To store an image as part of your build flow and to display the image in the build results, use both the Docker push command and the images field in your build config:
YAML
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', '[IMAGE_NAME]']
images: ['[IMAGE_NAME]']
Where [IMAGE_NAME] is the name of the image, such as gcr.io/myproject/myimage.
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "push",
                "[IMAGE_NAME]"
            ]
        }
    ],
    "images": [
        "[IMAGE_NAME]"
    ]
}
Where [IMAGE_NAME] is the name of the image, such as gcr.io/myproject/myimage.
The following example builds a Docker image named myimage and stores the image both as part of the build flow and after the build completes:
YAML
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/myproject/myimage']
images: ['gcr.io/myproject/myimage']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "build",
                "-t",
                "gcr.io/myproject/myimage",
                "."
            ]
        },
        {
            "name": "gcr.io/cloud-builders/docker",
            "args": [
                "push",
                "gcr.io/myproject/myimage"
            ]
        }
    ],
    "images": [
        "gcr.io/myproject/myimage"
    ]
}
Storing artifacts in Cloud Storage
To store non-container artifacts in Cloud Storage, add an artifacts field in your build config file with the location of the bucket to store the artifact and the path to one or more artifacts:
YAML
artifacts:
  objects:
    location: [STORAGE_LOCATION]
    paths: [[ARTIFACT_PATH], [ARTIFACT_PATH], ...]
Where:
- [STORAGE_LOCATION]: a Cloud Storage bucket or a folder within the bucket where Cloud Build must store the artifact, such as gs://mybucket or gs://mybucket/some/folder.
- [ARTIFACT_PATH]: the path to one or more artifacts.
JSON
{
    "artifacts": {
        "objects": {
            "location": "[STORAGE_LOCATION]",
            "paths": [
                "[ARTIFACT_PATH]",
                "[ARTIFACT_PATH]",
                "..."
            ]
        }
    }
}
Where:
- [STORAGE_LOCATION]: a Cloud Storage bucket or a folder within the bucket where Cloud Build must store the artifact, such as gs://mybucket or gs://mybucket/some/folder.
- [ARTIFACT_PATH]: the path to one or more artifacts.
You can specify only one bucket to upload the artifacts to, and you must be the owner of the bucket. You can specify a valid directory path within the bucket.
You can upload any number of artifacts, but you can specify only up to one hundred artifact paths.
If you upload an artifact to a bucket that already contains an artifact with the same name, the new artifact replaces the existing one. If you don't want the new artifact to replace an existing artifact with the same name, enable Object Versioning for your bucket.
After the build completes successfully, you can find the upload results in the JSON manifest file located at [STORAGE_LOCATION]/artifacts-$BUILD_ID.json.
The JSON manifest file has the following fields:
- location: the location in Cloud Storage where an artifact is stored, in the form gs://[STORAGE_LOCATION]/[FILE_NAME]#[GENERATION_NUMBER]. You can use the generation number to uniquely identify a version of the data in the Cloud Storage bucket.
- file_hash: the hash type and the value. The hash type is always 2, which indicates that an MD5 hash was performed.
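Because the generation number follows a # separator, a manifest location value splits cleanly with ordinary string handling. The sketch below uses made-up sample values to illustrate:

```python
# Sample `location` value as it might appear in the upload manifest
# (bucket, path, and generation number are illustrative).
location = "gs://mybucket/some/folder/HelloWorld.class#1560468220000000"

# The object path and generation number are separated by '#'.
object_path, _, generation = location.partition("#")
print(object_path)  # gs://mybucket/some/folder/HelloWorld.class
print(generation)   # 1560468220000000
```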
Storing artifacts in Cloud Build Artifacts
Cloud Build Artifacts enables you to store artifacts in a scalable and integrated repository service. Cloud Build Artifacts is available as an alpha release and supports Maven and npm packages during the alpha period. You can manage repository access with IAM and interact with repositories via gcloud, the GCP Console, and native package tools. Cloud Build Artifacts integrates with Cloud Build and other CI/CD systems. To join the alpha group, complete the sign-up form.
Artifacts examples
The following examples show how you can use the artifacts field in a build config file. In all of these examples, replace [VALUES_IN_BRACKETS] with the appropriate values.
Uploading files and folders
The build config file below uploads HelloWorld.class to gs://[STORAGE_LOCATION]/:
YAML
steps:
- name: 'gcr.io/cloud-builders/javac'
  args: ['HelloWorld.java']
artifacts:
  objects:
    location: 'gs://[STORAGE_LOCATION]/'
    paths: ['HelloWorld.class']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/javac",
            "args": [
                "HelloWorld.java"
            ]
        }
    ],
    "artifacts": {
        "objects": {
            "location": "gs://[STORAGE_LOCATION]/",
            "paths": [
                "HelloWorld.class"
            ]
        }
    }
}
To upload more than one artifact, specify the path to each artifact separated by a comma. The following example uploads HelloWorld.java, HelloWorld.class, and cloudbuild.yaml to gs://[STORAGE_LOCATION]/:
YAML
steps:
- name: 'gcr.io/cloud-builders/javac'
  args: ['HelloWorld.java']
artifacts:
  objects:
    location: 'gs://[STORAGE_LOCATION]/'
    paths: ['HelloWorld.java', 'HelloWorld.class', 'cloudbuild.yaml']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/javac",
            "args": [
                "HelloWorld.java"
            ]
        }
    ],
    "artifacts": {
        "objects": {
            "location": "gs://[STORAGE_LOCATION]/",
            "paths": [
                "HelloWorld.java",
                "HelloWorld.class",
                "cloudbuild.yaml"
            ]
        }
    }
}
You can also upload the artifacts to a valid directory path in the bucket. The following example uploads HelloWorld.java and HelloWorld.class to gs://[BUCKET_NAME]/[FOLDER_NAME]:
YAML
steps:
- name: 'gcr.io/cloud-builders/javac'
  args: ['HelloWorld.java']
artifacts:
  objects:
    location: 'gs://[BUCKET_NAME]/[FOLDER_NAME]'
    paths: ['HelloWorld.java', 'HelloWorld.class']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/javac",
            "args": [
                "HelloWorld.java"
            ]
        }
    ],
    "artifacts": {
        "objects": {
            "location": "gs://[BUCKET_NAME]/[FOLDER_NAME]",
            "paths": [
                "HelloWorld.java",
                "HelloWorld.class"
            ]
        }
    }
}
Using wildcard characters to upload more than one artifact
When uploading multiple artifacts, you can use gsutil wildcard characters in paths to specify multiple files.
The following example takes as an argument a file named classes, which contains the names of the .java files to compile. It then uploads any .class file to the specified Cloud Storage bucket:
YAML
steps:
- name: 'gcr.io/cloud-builders/javac'
  args: ['@classes']
artifacts:
  objects:
    location: 'gs://[STORAGE_LOCATION]/'
    paths: ['*.class']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/javac",
            "args": [
                "@classes"
            ]
        }
    ],
    "artifacts": {
        "objects": {
            "location": "gs://[STORAGE_LOCATION]/",
            "paths": [
                "*.class"
            ]
        }
    }
}
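For single-path-component patterns like *.class, gsutil wildcards behave like shell globbing. The sketch below uses Python's fnmatch to illustrate which files such a pattern would select; the workspace file names are made up:

```python
from fnmatch import fnmatch

# Hypothetical files in the build workspace after the javac step runs.
workspace = ["HelloWorld.java", "HelloWorld.class", "Util.class", "notes.txt"]

# '*.class' matches any file name ending in .class, as a shell glob would.
selected = [f for f in workspace if fnmatch(f, "*.class")]
print(selected)  # ['HelloWorld.class', 'Util.class']
```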
Using substitution variables in the bucket location
You can use substitution variables to specify a folder within the Cloud Storage bucket if the folder already exists within the bucket. The build will return an error if the folder does not exist.
The example below uploads the artifacts to a Cloud Storage path that includes the name of your Google Cloud project from which the build was run (such as gs://mybucket/myproject/):
YAML
steps:
- name: 'gcr.io/cloud-builders/javac'
  args: ['@classes']
artifacts:
  objects:
    location: 'gs://[BUCKET_NAME]/$PROJECT_ID'
    paths: ['helloworld.class']
JSON
{
    "steps": [
        {
            "name": "gcr.io/cloud-builders/javac",
            "args": [
                "@classes"
            ]
        }
    ],
    "artifacts": {
        "objects": {
            "location": "gs://[BUCKET_NAME]/$PROJECT_ID",
            "paths": [
                "helloworld.class"
            ]
        }
    }
}
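Cloud Build resolves $PROJECT_ID before the upload happens, so the final bucket path is the location string with the substitution applied. The sketch below previews that resolution with a plain string template; the bucket and project ID are made up:

```python
from string import Template

# Location as written in the build config; $PROJECT_ID is a Cloud Build
# substitution variable resolved at build time.
location = Template("gs://mybucket/$PROJECT_ID")

# Hypothetical project ID, standing in for the value Cloud Build supplies.
resolved = location.substitute(PROJECT_ID="myproject")
print(resolved)  # gs://mybucket/myproject
```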
What's next
- Learn how to build, test, and deploy artifacts.
- Learn how to start a build manually and using triggers.