Managed services for each phase
At Google Cloud, we provide fully managed tools, from code
and build to deploy and run, with the above standards and
best practices implemented by default.
Securing your software supply chain requires establishing,
verifying, and maintaining a chain of trust that captures
the provenance of your code and ensures that what you’re
running in production is what you intended. At Google, we
accomplish this via attestations that are generated and
checked throughout the software development and deployment
process, enabling a level of ambient security through things
like code review, verified code provenance, and policy
enforcement. Together, these processes help us minimize
software supply chain risk while improving developer
productivity.
At the base, we have common secure infrastructure services
like identity and access management and audit logging. Next,
we secure your software supply chain with a way to define,
check, and enforce attestations across your software
delivery lifecycle.
Let’s look more closely at how to achieve ambient security
in your development process through policies and provenance
on Google Cloud.
Phase 1: Code
Securing your software supply chain begins when your
developers start designing your application and writing
code. This includes both first-party software and open
source components, each of which comes with its own risks.
Open source software and dependencies
Open source empowers developers to build things faster so
organizations can be more nimble and productive. But open
source software is not perfect by any means, and while our
industry depends on it, we often have very little insight
into its dependencies and the varying levels of risk that
come with it. For most enterprises, the risk primarily
results from either vulnerabilities or licenses.
The open source software, packages, base images, and other
artifacts you depend on form the foundation of your “chain
of trust.”
For example, consider that your organization is building
software “a.” This diagram shows the chain of trust; in
other words, the number of implicit dependencies in your
project. In the diagram, “b” through “h” are direct
dependencies and “i” through “m” are indirect dependencies.
Now consider that there is a vulnerability deep down in the
dependency tree. The problem can show up across many
components very quickly. Moreover, dependencies change quite
frequently: on an average day, 40,000 npm packages see a
change in their dependencies.
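The fan-out described above can be sketched as a simple graph walk; the package names and adjacency list below are illustrative, mirroring the “a” through “m” labels in the diagram:

```python
# Sketch: walking a transitive dependency graph, assuming a simple
# adjacency-list representation (names "a".."m" mirror the diagram).
from collections import deque

DEPS = {
    "a": ["b", "c", "d", "e", "f", "g", "h"],  # direct dependencies
    "b": ["i"], "c": ["j"], "d": [], "e": ["k"],
    "f": [], "g": ["l", "m"], "h": [],
    "i": [], "j": [], "k": [], "l": [], "m": [],
}

def transitive_deps(root, graph):
    """Return every package reachable from root, direct or indirect."""
    seen, queue = set(), deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(graph.get(pkg, []))
    return seen

# A vulnerability in "l" (deep in the tree) still affects "a":
affected = transitive_deps("a", DEPS)
print("l" in affected)  # True
```

This is the core of what tools like Open Source Insights compute for you at scale, across real package ecosystems.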
Open Source Insights
is a tool built by Google Cloud that provides a transitive
dependency graph so you can see your dependencies and their
dependencies, all down the dependency tree. Open Source
Insights is continuously updated with security advisories,
licensing information, and other security data across
multiple languages all in one place. When used in
conjunction with Open Source Scorecards, which provide a
risk score for open source projects, Open Source Insights
helps your developers make better choices across the
millions of open source packages available.
To address this concern, it is key to treat your
dependencies as code. As these dependencies move toward the
end of the supply chain, they become harder to inspect. To
secure your dependencies, we recommend starting with the
following:
- Use tools like
Open Source Insights
and OSS Scorecards to get a better understanding of your
dependencies.
- Scan and verify all code, packages, and base images
through an automated process that is a key part of your
CI/CD pipeline.
- Control how people access these dependencies. It is
critical to tightly control the repositories for both
first-party and open source code, with constraints around
thorough code review and audit requirements.
We will cover the build and deploy processes in more detail
further on, but it’s also important to verify the provenance
of the build, leverage a secure build environment, and
ensure that the images are signed and subsequently validated
at deploy time.
There are also a number of
safe coding practices
developers can employ:
- Automate testing
- Use memory-safe software languages
- Mandate code reviews
- Ensure commit authenticity
- Identify malicious code early
- Avoid exposing sensitive information
- Require logging and build output
- Leverage license management
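As one concrete example of avoiding exposed sensitive information, a lightweight pre-commit check might scan source text for secret-like patterns. The patterns below are illustrative; real scanners use much richer rule sets:

```python
# Sketch: a minimal pre-commit style check for exposed secrets, in the
# spirit of "avoid exposing sensitive information". The patterns are
# illustrative; production scanners use far richer rule sets.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(source):
    """Return a list of suspicious matches found in the source text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits

code = 'API_KEY = "abc123"\nprint("hello")\n'
print(find_secrets(code))  # ['API_KEY = "abc123"']
```

A check like this runs well as a pre-commit hook or an early CI step, so secrets are caught before they ever reach the repository.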
Phase 2: Build
The next step in securing your software supply chain
involves establishing a secure build environment at scale.
The build process, in essence, starts with importing your
source code, potentially in one of many languages, from a
repository and executing builds to meet the specifications
laid out in your config files.
Cloud providers like Google give you access to an
up-to-date managed build environment that lets you build
images at any scale you need.
As you go through the build process, there are a number of
things to think about:
- Are your secrets secure during and after the build
process?
- Who has access to your build environments?
- What about relatively new attack vectors or exfiltration
risks?
To develop a secure build environment, you should start
with your secrets. They are critical and relatively easy to
secure. Start by ensuring that your secrets are never
stored in plaintext and, as far as possible, are not part
of your build. Instead, you should ensure
they are encrypted and your builds are parameterized to
refer to external secrets to use as needed. This also
simplifies periodic rotation of secrets and minimizes the
impact of any leaks.
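As a minimal sketch of this pattern, the build config below carries only a reference to a secret, and the value is resolved from an external store at build time. The store, the reference syntax, and the helper names here are illustrative, not any particular secret manager’s API:

```python
# Sketch: resolving a secret at build time from an external store rather
# than embedding it in the build config. SECRET_STORE is a stand-in for
# a real secret manager reached over an authenticated API.
SECRET_STORE = {"projects/demo/secrets/api-key": "s3cr3t"}  # illustrative

def resolve_secret(ref):
    """Look up a secret by reference; the build config only carries the ref."""
    try:
        return SECRET_STORE[ref]
    except KeyError:
        raise RuntimeError(f"unknown secret reference: {ref}")

# The build is parameterized: the config names a reference, not a value.
build_config = {"env": {"API_KEY": "${secret:projects/demo/secrets/api-key}"}}

def expand_env(env):
    """Expand ${secret:...} references; pass other values through unchanged."""
    out = {}
    for name, value in env.items():
        if value.startswith("${secret:") and value.endswith("}"):
            out[name] = resolve_secret(value[len("${secret:"):-1])
        else:
            out[name] = value
    return out

print(expand_env(build_config["env"]))  # {'API_KEY': 's3cr3t'}
```

Because the config holds only the reference, rotating the secret means updating the store, not every build definition that uses it.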
The next step is to set up permissions for your build.
There are various users and service accounts involved in
your build process. For instance, some users may need to be
able to manage secrets, while others may need to manage the
build process by adding or modifying steps, and still others
may just need to view logs.
As you do this, it is important to follow these best
practices:
- The most important is the principle of least privilege.
Implement fine-grained permissions to give users and
service accounts the precise permissions they need to
effectively do their jobs.
- Make sure you know how users and service accounts
interact and have a clear understanding of the chain of
responsibility from setting up a build to executing it to
the downstream effects of the build.
Next, as you scale up this process, establish boundaries
around your build to the extent possible and then use
automation to scale up through config as code and
parameterization. This allows you to audit any changes to
your build process effectively. In addition, make sure you
meet compliance needs through approval gating for sensitive
builds and deployments, pull requests for infrastructure
changes, and regular human-driven reviews of audit logs.
Finally, make sure the network suits your
needs. In most cases, it is best to host your own source
code in private networks behind firewalls. Google Cloud
gives you access to features like Cloud Build Private Pools,
a locked-down serverless build environment within your own
private network perimeter, and features like VPC Service
Controls to prevent exfiltration of your intellectual
property.
While IAM is both a must-have and a logical starting point,
it is not foolproof. Leaky credentials represent a serious
security risk, so to reduce your reliance on IAM you can
switch to an attestation-based system that’s less
error-prone. Google uses a system called binary
authorization, which allows only trusted workloads to be
deployed.
The binary authorization service establishes, verifies, and
maintains a chain of trust via attestations and policy
checks throughout the process. Essentially, binary
authorization generates cryptographic
signatures—attestations—as code and other artifacts move
toward production, and then before deployment these
attestations are checked based on policies.
When using Google Cloud Build, a set of attestations is
captured and added to your overall chain of trust. For
example, attestations are generated for which tasks were
run, what build tools and processes were used, and so on.
Notably, Cloud Build helps you achieve SLSA Level 1 by
capturing the source of the build configuration, which can
be used to validate that the build was scripted. Scripted
builds are more secure than manual builds and are required
for SLSA Level 1. In addition, your build’s provenance and
other attestations can be looked up using the container
image digest, which serves as a unique cryptographic
fingerprint for any image and is also required for SLSA
Level 1.
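The sign-then-verify flow can be sketched as follows. Real attestations use asymmetric signatures (for example, keys held in a KMS); the stdlib HMAC below is only a stand-in to show an attestation bound to an image digest, and all names are illustrative:

```python
# Sketch: attestations keyed by image digest. Binary authorization uses
# asymmetric signatures in practice; HMAC here is a stdlib stand-in to
# illustrate sign-then-verify, not the actual scheme.
import hashlib, hmac, json

SIGNING_KEY = b"attestor-key"  # illustrative; a real attestor holds a private key

def image_digest(image_bytes):
    """Content-addressed identity: the sha256 digest of the image."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def attest(digest, predicate):
    """Sign a claim (e.g. 'built by a trusted builder') bound to a digest."""
    payload = json.dumps({"digest": digest, "predicate": predicate}).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify(attestation):
    """Recompute the signature over the payload and compare."""
    expected = hmac.new(SIGNING_KEY, attestation["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

digest = image_digest(b"...container image contents...")
att = attest(digest, "built-by-trusted-builder")
print(verify(att))  # True: the deploy-time check passes for this exact image
```

Because the claim is bound to the digest, the attestation is useless for any other image, even one with the same tag.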
Phase 3: Package
Once your build is complete, you have a container image
that is almost ready for production. It is essential to have
a secure location to store your images that can prevent
tampering with existing images and uploads of unauthorized
images. Your artifact repository will likely need to hold
images for both first-party and open source builds, as well
as the language packages your applications use.
Google Cloud’s Artifact Registry
provides you with such a repository. Artifact Registry is a
single place for your organization to manage both container
images and language packages (such as Maven and npm).
It is fully integrated with Google Cloud’s tooling and
runtimes and comes with support for native artifact
protocols. This makes it simple to integrate with your CI/CD
tooling as you work to set up automated pipelines.
Similar to the build step, it is essential to ensure access
permissions to Artifact Registry are well thought through
and follow the principles of least privilege. Beyond
restricting unauthorized access, the package repository can
provide a lot more value. Artifact Registry for instance
includes vulnerability scanning to scan your images and
ensure they are safe to deploy. This service scans images
against a constantly refreshed and updated vulnerability
database to evaluate against new threats and can alert you
when a vulnerability is found.
This step generates additional metadata, including an
attestation for whether an artifact’s vulnerability results
meet certain security thresholds. This information is then
stored in our analysis service, which structures and
organizes the artifact’s metadata, making it readily
accessible to binary authorization. You can use this to
automatically prevent risky images from being deployed to
Google Kubernetes Engine (GKE).
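A severity-threshold policy of this kind can be sketched as follows; the field names and severity scale below are illustrative, not the scanner’s actual output format:

```python
# Sketch: gating deployment on vulnerability scan results with a simple
# severity-threshold policy. Field names are illustrative.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def meets_policy(findings, max_allowed="MEDIUM"):
    """Allow deployment only if no finding exceeds the allowed severity."""
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in findings)

scan = [
    {"cve": "CVE-2024-0001", "severity": "LOW"},
    {"cve": "CVE-2024-0002", "severity": "HIGH"},
]
print(meets_policy(scan))          # False: a HIGH finding blocks deploy
print(meets_policy(scan, "HIGH"))  # True under a looser policy
```

The pass/fail result of a check like this is exactly the kind of metadata that gets recorded as an attestation for binary authorization to consume.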
Phase 4 and 5: Deploy and run
The final two phases of the software supply chain
are deploy and run. While these are two separate steps, it
makes sense to think about them together as a way to ensure
that only authorized builds make it to production.
At Google, we’ve developed best practices for determining
what kind of builds should be authorized. This starts with
ensuring the integrity of your supply chain so that it
produces only artifacts that you can trust. Next, it
includes vulnerability management as part of the software
delivery lifecycle. Finally, we put those two pieces
together to enforce workflows based on policies for
integrity and vulnerability scanning.
When you get to this stage, you have already been
through the code, build, and package phases; attestations
captured along the supply chain can be verified for
authenticity by binary authorization. In enforcement mode,
an image is deployed only when the attestations meet your
organization’s policies, and in audit mode, policy
violations are logged and trigger alerts. You can also use
binary authorization to restrict builds from running
unless they were built using the sanctioned Cloud Build
process. Binary authorization ensures that only properly
reviewed and authorized code gets deployed.
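The difference between enforcement and audit mode can be sketched as a simple admission check; the required attestation names and log format here are hypothetical:

```python
# Sketch: enforcement vs. audit mode for a deploy-time policy check.
# The policy, attestation names, and log format are illustrative.
import logging

logging.basicConfig(level=logging.WARNING)

REQUIRED_ATTESTATIONS = {"built-by-cloud-build", "vuln-scan-passed"}

def admit(image_attestations, mode="enforce"):
    """Return True if the image may be deployed under the given mode."""
    missing = REQUIRED_ATTESTATIONS - set(image_attestations)
    if not missing:
        return True
    if mode == "audit":
        logging.warning("policy violation (audit): missing %s", sorted(missing))
        return True   # audit mode logs and alerts but does not block
    return False      # enforcement mode blocks the deployment

print(admit({"built-by-cloud-build", "vuln-scan-passed"}))  # True
print(admit({"built-by-cloud-build"}))                      # False
print(admit({"built-by-cloud-build"}, mode="audit"))        # True (logged)
```

Audit mode is useful for rolling out a new policy safely: you see what would have been blocked before you turn on enforcement.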
Deploying your images to a trusted runtime
environment is essential. Our managed Kubernetes platform,
GKE, takes a security-first approach to cluster management.
GKE takes care of many of the cluster security concerns
that you would otherwise need to manage yourself. Automatic
cluster upgrades allow you to
keep your Kubernetes patched and up-to-date automatically
using release channels. Secure boot, shielded nodes, and
integrity checks ensure that your node’s kernel and cluster
components haven’t been modified and are running what you
intend and that malicious nodes can’t join your cluster.
Finally, confidential computing allows you to run clusters
with nodes whose memory is encrypted so that data can be
kept confidential even while it’s being processed. Couple
that with data encryption while at rest and in transit over
the network, and GKE provides a very secure, private, and
confidential environment to run your containerized
applications.
Beyond this, GKE also enables better
security for your applications through certificate
management for your load balancers, workload identity, and
advanced network capabilities that give you a powerful way
to configure and secure the ingress into your cluster. GKE also
offers sandboxed environments to run untrusted applications
while protecting the rest of your workloads.
With GKE Autopilot, GKE’s security best practices and
features are automatically implemented, further reducing the
attack surface and minimizing misconfiguration that can lead
to security issues.
Of course, the need for verification doesn’t stop
at deployment. Binary authorization also supports
continuous validation, enabling continued conformance to
the defined policy even after deployment. If a running
application falls out of conformance with an existing or
newly added policy, an alert is created and logged, giving
you confidence that what you’re running in production is
exactly what you intended.
Along with ensuring integrity, another aspect of supply
chain security is ensuring that any vulnerabilities are
found quickly and patched. Attackers have evolved to
actively insert vulnerabilities into upstream projects.
Vulnerability management and defect detection should be
incorporated throughout all stages of the software delivery
Once the code is ready for deployment, use a CI/CD
pipeline and take advantage of the many tools available to
do a comprehensive scan of the source code and the generated
artifacts. These tools include static analyzers, fuzzing
tools, and various types of vulnerability scanners.
After you’ve deployed your workload to production, and
while it’s running in production and serving your users,
it’s necessary to monitor emerging threats and have plans
for taking immediate remediation action.
To recap, securing a software supply chain is all about
taking best practices like SLSA and using trusted managed
services that help you implement these best practices.
It is essential to:
- Start with your code and dependencies and ensure you can
trust them.
- Protect your build system and use attestations to verify
all necessary build steps were followed.
- Make sure all your packages and artifacts are trusted
and cannot be tampered with.
- Enforce controls over who can deploy what and maintain
an audit trail. Use binary authorization to validate
attestations for every artifact you want to deploy.
- Run your applications in a trusted environment and
ensure no one can tamper with them while they are running.
Keep a watch for any newly discovered vulnerabilities so
you can protect your deployment.
At Google, we build in best practices for each step along
this journey into our product portfolio so you have a
trusted foundation to build upon.