Build process overview

When you deploy your function's source code to Cloud Functions, that source is stored in a Cloud Storage bucket. Cloud Build then automatically builds your code into a container image and pushes that image to an image registry. Cloud Functions accesses this image when it needs to run the container to execute your function.

The process of building the image is entirely automatic and requires no direct input from you. All of the resources used in the build process run in your own user project.

Executing the build process within your project means that:

  • You have direct access to all build logs.

  • There is no preset build-time quota, although Cloud Build does have its own default concurrency quota.

  • You can view the current container image and the previously deployed container images, both of which are stored in Artifact Registry.

  • Cloud Storage is used directly in your project: the source code for your functions is stored in a bucket within your project (see the listing example after this list). Note the following:

    • If you're using default encryption, this bucket is named gcf-sources-PROJECT_NUMBER-REGION.
    • If you're protecting your data with CMEK, the bucket is named gcf-sources-PROJECT_NUMBER-REGION-CMEK_KEY_HASH.
    • The bucket has no retention period.
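
For example, you can list the contents of this source bucket with the gcloud CLI. This is a minimal sketch; the exact bucket name follows the patterns described above, and PROJECT_NUMBER and REGION are placeholders for your own values:

# List the objects in the Cloud Functions source bucket.
gcloud storage ls gs://gcf-sources-PROJECT_NUMBER-REGION/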

Characteristics of the build process

The build process has the following characteristics:

  • The Cloud Build API must be enabled for your project.

    To enable the API manually, click the link above, select your project from the drop-down menu, and follow the prompts to enable the API. Alternatively, you can enable it from the command line, as shown in the sketch after this list.

  • Because the entire build process takes place within the context of your project, your project is subject to the pricing of the resources it uses:

    • For Cloud Build pricing, see the Pricing page. This process uses the default instance size of Cloud Build, as these instances are pre-warmed and are available more quickly. Cloud Build does provide a free tier: review the pricing document for further details.

    • For Cloud Storage pricing, see the Pricing page. Cloud Storage does provide a free tier: review the pricing document for further details.

    • For Artifact Registry pricing, see the Pricing page.

    • For Container Registry (deprecated) pricing, see the Pricing page.

  • Because the build process is subject to billing, your project must have a Cloud Billing Account attached to it.
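
If you prefer the command line, you can enable the Cloud Build API and check the billing state with the gcloud CLI. This is a minimal sketch; PROJECT_ID is a placeholder for your own project ID:

# Enable the Cloud Build API for your project.
gcloud services enable cloudbuild.googleapis.com --project=PROJECT_ID

# Confirm that a Cloud Billing account is attached (the output includes billingEnabled).
gcloud billing projects describe PROJECT_ID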

View your build image logs

A key benefit of running the image build process in your user project is access to the build logs. You can view them with either the gcloud CLI or the Google Cloud console; in both cases the logs are available through Cloud Logging.

gcloud

  1. Deploy your function using the gcloud functions deploy command.

  2. The URL of the logs is shown as part of the response in your terminal window. For example:

    Deploying function (may take a while - up to 2 minutes)...⠹
    **For Cloud Build Stackdriver Logs**, visit:
    https://console.cloud.google.com/logs/viewer?project=&advancedFilter=resource.type%
    3Dbuild%0Aresource.labels.build_id%3D38d5b662-2315-45dd-8aa2-
    380d50d4f5e8%0AlogName%3Dprojects%2F%
    2Flogs%2Fcloudbuild
    Deploying function (may take a while - up to 2 minutes)...done.
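
You can also retrieve the same build logs directly from the command line with the Cloud Build CLI, without opening the URL. This is a minimal sketch; PROJECT_ID and BUILD_ID are placeholders:

# List recent builds in your project to find the build ID of interest.
gcloud builds list --project=PROJECT_ID --limit=5

# Print the log for a specific build.
gcloud builds log BUILD_ID --project=PROJECT_ID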
    

Google Cloud console

  1. From the Cloud Functions Overview page, click the name of the function you're investigating.
  2. Click the Details tab.
  3. In the General Information pane, click the Container build log link to open the Logs explorer pane.
  4. Click any row to view the details of that build log entry. If the entry is an error associated with a file, the details include the file's name, line, and column.
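
The same entries can also be queried from the command line through Cloud Logging. This is a minimal sketch; the filter and limit are illustrative and PROJECT_ID is a placeholder:

# Read recent Cloud Build log entries for the project.
gcloud logging read 'resource.type="build"' --project=PROJECT_ID --limit=10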

Image registry

Cloud Functions (2nd gen) exclusively uses Artifact Registry to store the images built from your function source code. Images are stored in a repository named REGION-docker.pkg.dev/PROJECT_ID/gcf-artifacts.

Cloud Functions (1st gen) uses Artifact Registry by default. Container Registry is being deprecated.
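
To inspect the images built from your function source, you can list the contents of this repository with the gcloud CLI. This is a minimal sketch; REGION and PROJECT_ID are placeholders for your own values:

# List the container images that Cloud Functions has stored for your project.
gcloud artifacts docker images list REGION-docker.pkg.dev/PROJECT_ID/gcf-artifacts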

Your Artifact Registry repository must be in the same project as your function. You can create or update an Artifact Registry-based function as follows:

gcloud

For Customer managed Artifact Registry, run the following command:

gcloud functions deploy FUNCTION \
  --docker-repository=REPOSITORY \
  [FLAGS...]

Replace the following:

  • FUNCTION: The name of the function.
  • REPOSITORY: The fully qualified Artifact Registry repository name, in the following format: projects/PROJECT_NAME/locations/LOCATION/repositories/REPOSITORY.
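
For example, you might first create a Docker-format repository and then deploy against it. This is a minimal sketch; the repository, project, region, function name, runtime, and trigger shown here are placeholder assumptions, not required values:

# Create a Docker-format Artifact Registry repository for function images.
gcloud artifacts repositories create my-gcf-repo \
  --repository-format=docker \
  --location=us-central1

# Deploy the function so that its image is pushed to that repository.
gcloud functions deploy my-function \
  --docker-repository=projects/my-project/locations/us-central1/repositories/my-gcf-repo \
  --runtime=nodejs20 \
  --trigger-http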

For Google-managed Artifact Registry, use:

gcloud functions deploy FUNCTION \
  --docker-registry=artifact-registry \
  [FLAGS...]
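
After the deployment completes, you can confirm the registry configuration by describing the function. This is a minimal sketch; the output fields vary by function generation:

# Inspect the deployed function's configuration, including its registry settings.
gcloud functions describe FUNCTION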

Google Cloud console

  1. Go to the Cloud Functions page in the Google Cloud console:
    Go to the Cloud Functions page

  2. Click the name of the function for which you want to use Artifact Registry.

  3. Click Edit.

  4. Click Runtime, build... to expand the advanced configuration options.

  5. Click Security and Image Repo from the menu bar to open the security tab.

  6. Under Image repository, select one of the following, depending on which type of Artifact Registry you are using:

    • Customer managed Artifact Registry. Use this option if you set up your own Docker repository.
    • Google managed Artifact Registry. Use this option if you want to use a Google-managed Docker repository instead of setting up your own.
  7. For Customer managed Artifact Registry, use the Artifact registry drop-down to select the Artifact Registry repository you want, or follow the prompts and create a new one.

  8. Click Next.

  9. Click Deploy.

For detailed pricing information, see Cloud Functions Pricing.

Secure your build with private pools

To allow your functions to use dependencies (for example, npm packages), Cloud Build has unrestricted internet access during the build process by default. If you have set up a VPC Service Controls (VPC SC) perimeter and want to limit the build's access to only the dependencies stored inside the perimeter, you can use the Cloud Build private worker pools feature.

In general, follow these steps to set up your private pool:

  1. Create your private worker pool; a minimal gcloud sketch for this step appears at the end of the gcloud tab below. See Creating and managing private pools.
  2. Configure your VPC Service Controls perimeter. See Using VPC Service Controls.

  3. If your private worker pool is in a different project than your function, you need to grant the Cloud Functions Service Agent Service Account (service-FUNCTION_PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com) the cloudbuild.workerPoolUser role so that the Cloud Build service can access the worker pool.

    gcloud projects add-iam-policy-binding PRIVATE_POOL_PROJECT_ID \
        --member serviceAccount:service-FUNCTION_PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
        --role roles/cloudbuild.workerPoolUser
    

    where FUNCTION_PROJECT_NUMBER is the number of the project where the function runs and PRIVATE_POOL_PROJECT_ID is the ID of the project in which the worker pool is located. See Running builds in a private pool for more information.

  4. Deploy your function to build using a private pool:

gcloud

gcloud functions deploy FUNCTION_NAME \
  --runtime RUNTIME \
  --build-worker-pool PRIVATE_POOL_NAME \
  [FLAGS...]

where FUNCTION_NAME is the name of the function, RUNTIME is the runtime you are using, and PRIVATE_POOL_NAME is the name of your pool.

To stop using a given private pool and instead use the default Cloud Build pool, use the --clear-build-worker-pool flag when re-deploying.

gcloud functions deploy FUNCTION_NAME \
  --runtime RUNTIME \
  --clear-build-worker-pool \
  [FLAGS...]

where FUNCTION_NAME is the name of the function and RUNTIME is the runtime you are using.
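
The private pool itself (step 1 above) can also be created from the command line with the Cloud Build CLI. This is a minimal sketch; POOL_NAME and REGION are placeholders, and additional options such as network peering depend on your setup:

# Create a private worker pool in the region where your builds run.
gcloud builds worker-pools create POOL_NAME \
  --region=REGION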

Google Cloud console

  1. From the Cloud Functions overview page, select Create function.

  2. In the Runtime, build... section, click the Build tab and enter the full resource name of your private pool in the Build worker pools text box.

For more information, see Run builds in a private pool.