
Deploy OWASP Dependency-Track to Google Cloud

Author: @dedickinson, Published: 2021-06-11

Duncan Dickinson | Customer Engineer | Google

Contributed by Google employees.

In this tutorial, you deploy Dependency-Track to Google Cloud and use it to alert you to vulnerabilities in a small Python demonstration system.

The OWASP Dependency-Track project is a component analysis platform for tracking dependencies, their licenses, and associated vulnerabilities. Dependency-Track is a useful tool as you build out your software supply chain.

Dependency-Track accepts software bills of materials (SBOMs) in CycloneDX format, which you can provide either on an ad-hoc basis or as part of your deployment system. This kind of system is useful in a number of scenarios:

  • Software vendors can provide you SBOMs when they deliver a software project.
  • Teams building and deploying software can submit SBOMs when new versions are deployed.
  • You can manually list dependencies for legacy systems.

Using Dependency-Track helps you to monitor and respond to vulnerabilities in components in your systems. Using components with known vulnerabilities is one of the top 10 web application security risks identified by the Open Web Application Security Project (OWASP). If you have an inventory of components in use across your environment, then you can use resources such as the National Vulnerability Database to determine whether you have vulnerable components, and respond according to your organization's processes.
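The core idea can be sketched in a few lines of Python: given an inventory of package URLs (purls) from your SBOMs and a feed of known-vulnerable versions, flag the intersection. The purls and advisory entries below are illustrative only, not real vulnerability data; a real feed would come from a source such as the National Vulnerability Database.

```python
# Sketch: match an SBOM-style component inventory against a
# (hypothetical) set of known-vulnerable package versions.
inventory = [
    "pkg:pypi/flask@1.1.2",
    "pkg:pypi/django@1.2",
]

# Illustrative advisory data only.
known_vulnerable = {
    "pkg:pypi/django@1.2": ["CVE-2011-4137"],
}

def find_vulnerable(purls, advisories):
    """Return {purl: [advisory IDs]} for components with known issues."""
    return {p: advisories[p] for p in purls if p in advisories}

print(find_vulnerable(inventory, known_vulnerable))
```

Dependency-Track automates exactly this kind of matching, continuously and at scale, against curated vulnerability data sources.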

You run the commands in this tutorial in Cloud Shell. This tutorial assumes that you are comfortable running commands in a Linux command shell.

This tutorial takes approximately 2-4 hours to complete.

Architecture overview

The following diagram illustrates the architecture of the solution described in this tutorial:

Architecture overview.

  • The Dependency-Track Frontend and API Service components are hosted as GKE pods.
  • Cloud Load Balancing manages traffic to the GKE pods.
  • Artifact Registry hosts the container images.
  • The GKE instance operates as a private cluster, so Cloud NAT handles outbound requests (primarily Dependency-Track downloading its various data sources).
  • A PostgreSQL Cloud SQL database holds Dependency-Track data.
  • Secret Manager securely stores database passwords.

Objectives

  1. Generate a bill of materials (BOM) for a basic project.
  2. Set up Artifact Registry and prepare the Dependency-Track images.
  3. Deploy Dependency-Track to Google Kubernetes Engine.
  4. Upload a BOM and integrate Cloud Build.

Costs

This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine, Cloud SQL, Artifact Registry, Container Analysis, Compute Engine, Cloud Load Balancing, Cloud NAT, Secret Manager, Cloud Build, and Cloud Storage.

Use the pricing calculator to generate a cost estimate based on your projected usage.

Before you begin

You need access to a domain for which you can create two subdomains, one for the frontend and one for the API server. If you do not have a domain, you can register one with Google Domains.

Set up your Google Cloud project

To complete this tutorial, you need a Google Cloud project with billing enabled. We recommend that you create a new project specifically for this tutorial.

You must have the project owner role for the project.

  1. In Cloud Shell, set a project ID environment variable, replacing [YOUR_PROJECT_ID] with your Google Cloud project ID:

    export GCP_PROJECT_ID=[YOUR_PROJECT_ID]
    
  2. Set the working project for the gcloud environment:

    gcloud config set project $GCP_PROJECT_ID
    
  3. Set an environment variable for your Google Cloud region:

    export GCP_REGION=us-central1
    

Get the tutorial files

  1. Clone the tutorial repository:

    git clone https://github.com/GoogleCloudPlatform/community.git
    
  2. Go to the tutorial directory:

    cd community/tutorials/deploy-dependency-track
    

Install pip and poetry packages

  1. Upgrade pip:

    pip3 install --upgrade pip
    
  2. Install poetry, a package for dependency management:

    python3 -m pip install poetry --user
    
  3. Add the installation directories to your PATH variable so that you can run the installed software:

    export PATH=$PATH:$HOME/.local/bin
    

Generate a software bill of materials

The demonstration project has no functional code; its purpose is to include the flask library and a very old version of the django library, to demonstrate the presence of a vulnerability.

WARNING: The demonstration project includes a very old version of Django with known vulnerabilities. Do not try to run the project. It is only set up to demonstrate Dependency-Track's ability to report on vulnerabilities.

The CycloneDX project defines a schema for software bills of materials (SBOMs), as well as providing tools that you can use with various programming languages and CI/CD tools.

The Python version (cyclonedx-bom) is included as a development dependency of the demonstration project.

  1. Go to the demonstration project directory:

    cd demo-project
    
  2. Install the demonstration project with poetry:

    poetry install
    
  3. Show the project's dependencies:

    poetry show --tree
    

    The following is an excerpt of the dependency graph output:

    django 1.2 A high-level Python Web framework that encourages rapid development and clean, pragmatic design.
    flask 1.1.2 A simple framework for building complex web applications.
    ├── click >=5.1
    ├── itsdangerous >=0.24
    ├── jinja2 >=2.10.1
    │   └── markupsafe >=0.23 
    └── werkzeug >=0.15
    
  4. Use poetry to generate a requirements.txt file:

    poetry export --without-hashes > requirements.txt
    
  5. Generate a CycloneDX BOM in JSON format:

    poetry run cyclonedx-py -j
    

    cyclonedx-py processes the requirements.txt file to produce the bom.json file.

  6. View the bom.json file in a text editor.

    The following is an excerpt that shows the details for the flask component, including its name, publisher, version, licenses, and package URL (purl):

    {
        "description": "A simple framework for building complex web applications.",
        "hashes": [
            {
                "alg": "MD5",
                "content": "1811ab52f277d5eccfa3d7127afd7f92"
            },
            {
                "alg": "SHA-256",
                "content": "8a4fdd8936eba2512e9c85df320a37e694c93945b33ef33c89946a340a238557"
            }
        ],
        "licenses": [
            {
                "license": {
                    "name": "BSD-3-Clause"
                }
            }
        ],
        "modified": false,
        "name": "flask",
        "publisher": "Armin Ronacher",
        "purl": "pkg:pypi/flask@1.1.2",
        "type": "library",
        "version": "1.1.2"
    }
    
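If you want to inspect the BOM programmatically rather than in an editor, a short Python sketch can pull out each component's name, version, and purl. The snippet below inlines a minimal CycloneDX-style document matching the excerpt above; for the real file, you would load `bom.json` with `json.load` instead.

```python
import json

# Minimal CycloneDX-style document matching the excerpt above.
# For the real file: bom = json.load(open("bom.json"))
bom = json.loads("""
{
  "components": [
    {"name": "flask", "version": "1.1.2", "purl": "pkg:pypi/flask@1.1.2"},
    {"name": "django", "version": "1.2", "purl": "pkg:pypi/django@1.2"}
  ]
}
""")

# List each component's name, version, and package URL.
for component in bom["components"]:
    print(f'{component["name"]} {component["version"]} -> {component["purl"]}')
```

This is the same component inventory that Dependency-Track builds when you upload the BOM later in the tutorial.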

Prepare the Dependency-Track images

In this section, you work with two images:

  • The frontend image provides the web-based user interface.
  • The apiserver image provides an OpenAPI-based interface that is used by the frontend and when interacting with Dependency-Track from other systems (such as submitting a BOM).

In this tutorial, you use the Artifact Registry service to store container images, and you use the Container Analysis service to scan the images for vulnerabilities.

Because Artifact Registry and Container Analysis have associated costs, you could choose to use the images directly from Docker Hub instead of using these services. However, there are advantages to using these services in a production system:

  • You have a copy of the images local to your project. This protects your environment from changes to the images, and the images remain available if Docker Hub becomes unavailable.
  • Container Analysis provides automatic vulnerability scanning on images, which you can use as part of a broader approach to monitoring for vulnerabilities.

You pull the required images from Docker Hub and push them to your repository, using a specific version number for each image instead of using latest. The image indicated by latest changes, which can cause issues such as broken integrations. Though the instructions in this section use versions that were current at the time of writing, you should check whether a newer version is available.

  1. Enable the APIs:

    gcloud services enable artifactregistry.googleapis.com \
                           containerscanning.googleapis.com
    
  2. Set the default location for image storage:

    gcloud config set artifacts/location $GCP_REGION
    
  3. Configure the dependency-track image repository:

    gcloud artifacts repositories create dependency-track \
      --repository-format=docker \
      --location=$GCP_REGION
    
    export GCP_REGISTRY=$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/dependency-track
    
  4. Configure Docker with the required authentication, so that you can push images to the repository:

    gcloud auth configure-docker $GCP_REGION-docker.pkg.dev
    
  5. Pull the Dependency-Track API server image from Docker Hub, and push the image to your repository:

    docker pull docker.io/dependencytrack/apiserver:4.2.1
    docker tag docker.io/dependencytrack/apiserver:4.2.1 $GCP_REGISTRY/apiserver:4.2.1
    docker push $GCP_REGISTRY/apiserver:4.2.1
    
  6. Pull the Dependency-Track Front End (UI) image from Docker Hub, and push the image to your repository:

    docker pull docker.io/dependencytrack/frontend:1.2.0
    docker tag docker.io/dependencytrack/frontend:1.2.0 $GCP_REGISTRY/frontend:1.2.0
    docker push $GCP_REGISTRY/frontend:1.2.0  
    
  7. Check your image collection:

    gcloud artifacts docker images list $GCP_REGISTRY
    

    You can use this command at any time to see what images you have stored with Artifact Registry.

Deploy to Google Kubernetes Engine and Cloud SQL

In this section, you configure the system to run on Google Kubernetes Engine (GKE) and use a Cloud SQL PostgreSQL database.

Enable services and set up the environment

  1. Enable the Compute Engine, GKE, Cloud SQL, Cloud SQL Admin, Secret Manager, and Service Networking services:

    gcloud services enable compute.googleapis.com \
                           container.googleapis.com \
                           sql-component.googleapis.com \
                           sqladmin.googleapis.com \
                           secretmanager.googleapis.com \
                           servicenetworking.googleapis.com
    
  2. Designate a default region for Compute Engine, which runs the GKE worker nodes:

    gcloud config set compute/region $GCP_REGION
    
  3. Set environment variables for your domains, replacing the placeholder values:

    export DT_DOMAIN_API=[YOUR_DOMAIN_NAME_FOR_THE_API_SERVER]
    export DT_APISERVER=https://$DT_DOMAIN_API
    export DT_DOMAIN_UI=[YOUR_DOMAIN_NAME_FOR_THE_FRONTEND]
    
    • DT_DOMAIN_API is the domain name for the API server (e.g., api.example.com).
    • DT_APISERVER is the full URL of the API server (e.g., https://api.example.com).
    • DT_DOMAIN_UI is the domain name for the frontend (e.g., ui.example.com).

Create TLS certificates

In this section, you create TLS certificates for the API and user interface endpoints. You could do this using a GKE ManagedCertificate resource, but defining the TLS certificates outside of Kubernetes allows you to transfer them as needed. This isn't an important requirement for a tutorial environment, but you should consider this advantage for your production environment.

The provisioning of the certificates can take a long time, so it's best to get these started early.

  1. Create the TLS certificate for the API server:

    gcloud compute ssl-certificates create dependency-track-cert-api \
      --description="Certificate for the Dependency-Track API" \
      --domains=$DT_DOMAIN_API \
      --global
    
  2. Create the TLS certificate for the frontend:

    gcloud compute ssl-certificates create dependency-track-cert-ui \
      --description="Certificate for the Dependency-Track UI" \
      --domains=$DT_DOMAIN_UI \
      --global
    
  3. Check the progress:

    gcloud compute ssl-certificates list
    

    You can check the progress at any time with this command.

While waiting for the certificates to be provisioned, you can continue with the next sections of the tutorial. Certificate provisioning only completes after the certificates are attached to a load balancer.

Create external IP addresses

  1. Create two external IP addresses:

    gcloud compute addresses create dependency-track-ip-api --global
    gcloud compute addresses create dependency-track-ip-ui --global
    
  2. Set two environment variables, one for each of the external IP addresses:

    export DT_IP_API=$(gcloud compute addresses describe dependency-track-ip-api \
      --global --format="value(address)")
    
    export DT_IP_UI=$(gcloud compute addresses describe dependency-track-ip-ui \
      --global --format="value(address)")
    
  3. Check the addresses:

    echo "IP address for $DT_DOMAIN_API: $DT_IP_API"
    echo "IP address for $DT_DOMAIN_UI: $DT_IP_UI"
    

Configure your domains

Add your domain names and the IP addresses to your DNS system. DNS entries can take up to 48 hours to propagate.

For this tutorial, you need to create two subdomains, one for the frontend and one for the API server.

To configure your domains, create an A record with a TTL (time to live) of 1 hour, using the subdomain in the record's Name field and the IP address in the record's Data field. The Google Domains site provides a guide to resource records, and your hosting service should offer similar guidance.

For example, if you use the api subdomain for the Dependency-Track API server and dt subdomain for the Dependency-Track user interface, the two resource records should be configured as follows for your domain:

Name | Type | TTL | Data
-----|------|-----|--------
api  | A    | 1hr | 1.2.3.4
dt   | A    | 1hr | 1.2.3.5

Be sure to use the actual IP addresses that you created, not the 1.2.3.4 and 1.2.3.5 examples.

With the settings above for the example domain example.com, the following domain names would be available:

  • api.example.com will resolve to 1.2.3.4.
  • dt.example.com will resolve to 1.2.3.5.

Set up a VPC network for the GKE cluster

In this section, you create a VPC network for the private GKE cluster and enable private service access, which allows you to create a Cloud SQL instance without a public IP address.

  1. Create the VPC network:

    gcloud compute networks create dependency-track \
      --description="A demo VPC network for hosting Dependency-Track" \
      --subnet-mode=custom
    
  2. Reserve an IP address:

    gcloud compute addresses create google-managed-services-dependency-track \
      --global \
      --purpose=VPC_PEERING \
      --prefix-length=20 \
      --network=dependency-track
    
  3. Connect to the service with VPC peering:

    gcloud services vpc-peerings connect \
      --service=servicenetworking.googleapis.com \
      --ranges=google-managed-services-dependency-track \
      --network=dependency-track \
      --project=$GCP_PROJECT_ID
    

Create the GKE cluster

In this section, you create a private GKE cluster, but the Kubernetes control plane is available on a public endpoint. For this tutorial, this allows you to access the cluster with kubectl from Cloud Shell. In a production environment, you should typically limit access to the control plane. For details, see Access to cluster endpoints.

  1. Create the GKE cluster:

    gcloud container clusters create-auto dependency-track \
      --region=$GCP_REGION \
      --create-subnetwork="name=dependency-track-subnet" \
      --network=dependency-track \
      --no-enable-master-authorized-networks \
      --enable-private-nodes
    
  2. Set up kubectl with the correct credentials:

    gcloud container clusters get-credentials dependency-track --region $GCP_REGION
    
  3. Check the client and server versions:

    kubectl version
    

    If this command returns details for the client and server, kubectl was able to connect to the GKE cluster. If the command returns Unable to connect to the server, then see Configuring cluster access for kubectl for help.

Set up Cloud NAT

The GKE nodes need outbound internet access so that the Dependency-Track system can download its required databases. This requires Cloud NAT for network address translation.

  1. Create a router:

    gcloud compute routers create dependency-track-nat-router \
      --network dependency-track \
      --region $GCP_REGION
    
  2. Add a Cloud NAT gateway:

    gcloud compute routers nats create dependency-track-nat \
      --router=dependency-track-nat-router \
      --auto-allocate-nat-external-ips \
      --nat-all-subnet-ip-ranges \
      --region $GCP_REGION \
      --enable-logging
    

Create a Kubernetes namespace

  1. Create a dependency-track Kubernetes namespace:

    kubectl create namespace dependency-track
    
  2. Switch the context to the new namespace:

    kubectl config set-context --current --namespace=dependency-track
    

Deploy the Dependency-Track frontend

The deployment process uses the kustomize functionality built into the kubectl package.

The various deployment files are in the deploy directory.

The envsubst command is used to process environment variables in the deployment files. The required package (gettext-base) is already installed in Cloud Shell.

To deploy the frontend workload to the GKE cluster, run the following commands from the base directory of the tutorial:

cd deploy/frontend
cat kustomization.base.yaml | envsubst >kustomization.yaml
kubectl apply -k .

Set up a service account for database access

The API server needs to access a database. In this section, you use GKE Workload Identity to create a service account for database access. The Cloud SQL Auth Proxy pod uses this service account to connect to the Cloud SQL PostgreSQL instance that you create in the next section.

  1. Create a Kubernetes service account:

    kubectl create serviceaccount dependency-track
    
  2. Create a Google Cloud IAM service account:

    gcloud iam service-accounts create dependency-track
    
  3. Align the Kubernetes service account to the IAM service account:

    gcloud iam service-accounts add-iam-policy-binding \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:$GCP_PROJECT_ID.svc.id.goog[dependency-track/dependency-track]" \
      dependency-track@$GCP_PROJECT_ID.iam.gserviceaccount.com
    
  4. Align the IAM service account to the Kubernetes service account:

    kubectl annotate serviceaccount \
      --namespace dependency-track \
      dependency-track \
      iam.gke.io/gcp-service-account=dependency-track@$GCP_PROJECT_ID.iam.gserviceaccount.com
    
  5. Grant the cloudsql.client role to the IAM service account so that SQL Auth Proxy can connect to the database:

    gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
      --role roles/cloudsql.client  \
      --member "serviceAccount:dependency-track@$GCP_PROJECT_ID.iam.gserviceaccount.com"
    

Set up a Cloud SQL instance using PostgreSQL

  1. Generate a random password for each database account and store it in Secret Manager:

    cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1 | \
      gcloud secrets create dependency-track-postgres-admin \
      --data-file=-
    
    cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1 | \
      gcloud secrets create dependency-track-postgres-user \
      --data-file=-
    
  2. Set a variable for the database instance name:

    export DT_DB_INSTANCE=dependency-track
    
  3. Create the Cloud SQL instance:

    gcloud beta sql instances create $DT_DB_INSTANCE \
      --region=$GCP_REGION \
      --no-assign-ip \
      --network=projects/$GCP_PROJECT_ID/global/networks/dependency-track \
      --database-version=POSTGRES_13 \
      --tier=db-g1-small \
      --storage-auto-increase \
      --root-password=$(gcloud secrets versions access 1 --secret=dependency-track-postgres-admin)          
    

    Creating a new Cloud SQL instance can take several minutes.

    At the time of writing, the private IP address feature for Cloud SQL is in preview, so it requires the gcloud beta sql instances create command.

  4. Set up a database user:

    gcloud sql users create dependency-track-user \
      --instance=$DT_DB_INSTANCE \
      --password=$(gcloud secrets versions access 1 --secret=dependency-track-postgres-user)
    
  5. Create the database:

    gcloud sql databases create dependency-track \
      --instance=$DT_DB_INSTANCE
    
  6. Set a variable for the connection details:

    export DT_DB_CONNECTION=$(gcloud sql instances describe $DT_DB_INSTANCE --format="value(connectionName)")
    
  7. Set the database password as a Kubernetes secret for the API server:

    kubectl create secret generic dependency-track-postgres-user-password \
      --from-literal ALPINE_DATABASE_PASSWORD=$(gcloud secrets versions access 1 --secret=dependency-track-postgres-user) 
    
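The /dev/urandom pipeline in step 1 produces a 30-character alphanumeric password. An equivalent sketch using Python's standard secrets module, shown only to illustrate what the pipeline does:

```python
import secrets
import string

def generate_password(length=30):
    """Generate a random alphanumeric password, equivalent to the
    /dev/urandom | tr | fold | head pipeline used above."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
print(password)
```

Whichever tool you use, the important properties are the same: the password is generated from a cryptographically secure random source and never written to disk before it reaches Secret Manager.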

Deploy and start the API server

To deploy the API server workload to the GKE cluster, run the following commands:

cd ../api
cat kustomization.base.yaml | envsubst >kustomization.yaml
kubectl apply -k .

Though the kubectl apply command returns quickly, the GKE cluster needs to resize for the new workload, which can take a few minutes. Also, the API server loads a large amount of vulnerability data and can take up to 30 minutes to be ready.

To check that the required pods have been deployed, run the following command:

kubectl get pods -w -l app=dependency-track-apiserver

When the pod's status is listed as Running with 2/2 containers ready, exit by pressing Ctrl+C.

To track the progress of the API server's data load, open a separate Cloud Shell terminal and check the logs with this command:

kubectl logs -f dependency-track-apiserver-0 dependency-track-apiserver

Using Dependency-Track

When the API server has finished loading data and the TLS certificates are provisioned, you can visit the API server site.

  • /api/version is the service version.
  • /api/swagger.json is the OpenAPI definition.

When you access the frontend, enter admin for both the initial username and password. You're then prompted to set a new password.

For more information, see Dependency-Track's initial startup document.

Upload a BOM with the frontend user interface

  1. In the frontend user interface, go to the Projects screen and click + Create Project.

    New project screen

  2. Use demo-project for the project name, set Classifier to Application, and click Create.

    New project dialog

  3. Click the new project.

  4. In the project screen, click the Components tab.

  5. Use the following command in the demo-project directory to download a copy of bom.json:

    cloudshell download bom.json
    
  6. Click Upload BOM and select the bom.json file for upload.

    Uploading the BOM

    When you return to the project screen you should see the components listed. If not, click the refresh button.

  7. Explore the information.

    Project screen with components listed

    The django component has a high risk score and seems to be the source of several issues.

  8. Click the django link.

    You're taken to the overview page for the component, where you see that django 1.2 has many known vulnerabilities.

    Project screen with components listed

  9. Click the Vulnerabilities tab to go to the listing for the known component vulnerabilities.

    You can then click through to each vulnerability (such as "CVE-2011-4137") to get further details about the vulnerability.

    Project screen with components listed

Upload a BOM from the terminal

In a production system, you will more often upload a BOM through the API rather than through the graphical user interface.

  1. In the frontend user interface, go to the Administration screen, select Access Management, and then select Teams.

  2. Click the Automation team to view the team's configuration.

    The Teams listing screen

  3. Add the PROJECT_CREATION_UPLOAD permission to the Automation team.

    The permissions listing for the team

  4. Copy the API key that is displayed for the Automation team, and set the API key as a variable in your terminal:

    export DT_API_KEY=[YOUR_API_KEY]
    
  5. Generate the XML version of the BOM:

    poetry install
    poetry export --without-hashes > requirements.txt
    poetry run cyclonedx-py
    
  6. Upload the BOM:

    poetry run ./bom-loader.py --url $DT_APISERVER --api-key=$DT_API_KEY
    

    The bom-loader.py script performs the following steps:

    1. Reads the project name and version from the pyproject.toml file.
    2. Loads the BOM (bom.xml).
    3. Packages the information and submits it to the Dependency-Track API server.
  7. Go to your Dependency-Track frontend and open the Projects tab.

    You should see demo-project with version 0.1.0:

    Project listing with demo_project version 0.1.0

    As before, you can select the project and explore the dependencies.
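The upload that bom-loader.py performs can be sketched as follows. This is an illustrative outline, not the actual script: it base64-encodes the BOM and submits it with the team's API key. The PUT /api/v1/bom payload shape shown here is an assumption based on the Dependency-Track REST API; verify it against your server's /api/swagger.json definition.

```python
import base64
import json
import urllib.request

def build_bom_payload(project_name, project_version, bom_bytes):
    """Package a BOM for submission to the Dependency-Track API.
    autoCreate asks the server to create the project if it's missing,
    which is why the team needs the PROJECT_CREATION_UPLOAD permission."""
    return {
        "projectName": project_name,
        "projectVersion": project_version,
        "autoCreate": True,
        "bom": base64.b64encode(bom_bytes).decode("ascii"),
    }

def upload_bom(api_server, api_key, payload):
    # Live request; requires a reachable API server and a valid API key.
    request = urllib.request.Request(
        api_server + "/api/v1/bom",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="PUT",
    )
    with urllib.request.urlopen(request) as resp:
        return resp.status

payload = build_bom_payload("demo-project", "0.1.0", b"<bom/>")
print(payload["projectName"], payload["projectVersion"])
```

In the actual script, the project name and version come from pyproject.toml and the BOM bytes are read from bom.xml.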

Upload a BOM with Cloud Build

In this section, you set up Cloud Build and see how you can submit a BOM as part of your CI/CD workflow.

Cloud Build can use secrets stored in Secret Manager. This is extremely useful for automating build environments, because Secret Manager provides a central place for holding sensitive information such as keys and passwords. Builds can quickly access required secrets without needing command-line parameters, and it's easier to rotate keys (such as the Dependency-Track API key) without reconfiguring every build.

  1. Enable the Cloud Build and Cloud Storage APIs:

    gcloud services enable cloudbuild.googleapis.com storage-component.googleapis.com
    
  2. Create a new repository called builders:

    gcloud artifacts repositories create builders \
      --repository-format=docker \
      --location=$GCP_REGION
    
  3. In the demo-project directory, submit the Poetry image to Cloud Build:

    gcloud builds submit support/poetry-image \
      --tag ${GCP_REGION}-docker.pkg.dev/${GCP_PROJECT_ID}/builders/poetry:1
    

    The Cloud Build job creates the image and stores it in the builders repository. This image provides Python with Poetry ready to go.

  4. Create a Cloud Storage bucket:

    gsutil mb gs://${GCP_PROJECT_ID}-build
    

    The cloudbuild.yaml file contains an artifacts section that stores the generated bom.xml in this Cloud Storage bucket:

    artifacts:
      objects:
        location: gs://${PROJECT_ID}-build/$BUILD_ID
        paths: ["bom.xml"]
    
  5. Add the API key as a secret:

    printf $DT_API_KEY | gcloud secrets create dependency-track-api-key --data-file -
    
  6. Get the unique Google Cloud project number:

    export GCP_PROJECT_NUM=$(gcloud projects describe ${GCP_PROJECT_ID} --format 'value(projectNumber)')
    
  7. Give Cloud Build the ability to read the secret by granting the secretAccessor role to the Cloud Build service account:

    gcloud secrets add-iam-policy-binding dependency-track-api-key  \
      --member serviceAccount:${GCP_PROJECT_NUM}@cloudbuild.gserviceaccount.com \
      --role roles/secretmanager.secretAccessor
    
  8. To clear the resources that you added in previous sections of this tutorial in Dependency-Track, go to the Dependency-Track frontend, select the project from the list, click View Details in the project screen, and click the Delete button.

    The View Details link is used to open the display to delete the project

  9. Submit the build:

    gcloud builds submit --config cloudbuild.yaml --substitutions=_DT_APISERVER=$DT_APISERVER . 
    

    The build starts and pushes the generated BOM to Dependency-Track.

  10. List all projects:

    curl --location --request GET \
      "$DT_APISERVER/api/v1/project" \
      --header "x-api-key: $DT_API_KEY" | jq
    
  11. Check basic project details:

    curl --location --request GET \
      "$DT_APISERVER/api/v1/project/lookup?name=demo-project&version=0.1.0" \
      --header "x-api-key: $DT_API_KEY" | jq
    

You can visit the API site to access the OpenAPI definition for further API information. The address is of the form https://[DT_DOMAIN_API]/api/swagger.json.
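The curl commands above can also be scripted. A Python sketch equivalent to the project lookup, using only the standard library (the example.com domain is a placeholder; the live call requires your real API server and key):

```python
import json
import urllib.parse
import urllib.request

def lookup_url(api_server, name, version):
    """Build the project lookup URL used by the curl example above."""
    query = urllib.parse.urlencode({"name": name, "version": version})
    return f"{api_server}/api/v1/project/lookup?{query}"

def lookup_project(api_server, api_key, name, version):
    # Live request; requires a reachable API server and a valid API key.
    request = urllib.request.Request(
        lookup_url(api_server, name, version),
        headers={"X-Api-Key": api_key},
    )
    with urllib.request.urlopen(request) as resp:
        return json.load(resp)

print(lookup_url("https://api.example.com", "demo-project", "0.1.0"))
```

Scripting the API like this is the basis for richer automation, such as polling a project's metrics after each build.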

Troubleshooting

Resolving TLS errors

When you visit the frontend or API server, you might get a TLS error such as ERR_SSL_VERSION_OR_CIPHER_MISMATCH.

If this occurs, check the TLS certificate status:

gcloud compute ssl-certificates list

Both of the certificates must be listed as ACTIVE. If you see FAILED_NOT_VISIBLE, then wait a while for the certificate to be provisioned to the load balancer. This can take up to an hour.

You can open a new terminal and set a watch on the certificate listing:

watch -n 30 gcloud compute ssl-certificates list

For more information, see Troubleshooting SSL certificates.

Checking GKE connectivity

You can check outbound connectivity from the cluster by starting a shell in a temporary pod:

kubectl run connectivity-test --rm -it --image=busybox -- sh

Then, from within the pod's shell, fetch an external page:

wget -O - http://www.example.com

Checking the database

If you need to check the database using the psql tool, start by reviewing Connecting using the Cloud SQL Auth Proxy.

If the psql client is not installed, you can install it:

sudo apt install postgresql-client

  1. Start Cloud SQL Auth Proxy in your GKE cluster:

    kubectl run proxy --port 5432 --serviceaccount=dependency-track \
      --image=gcr.io/cloudsql-docker/gce-proxy:1.22.0 -- /cloud_sql_proxy \
        -instances=$DT_DB_CONNECTION=tcp:5432 \
        -ip_address_types=PRIVATE
    
  2. Set up port forwarding on the PostgreSQL port:

    kubectl port-forward pod/proxy 5432:5432
    
  3. Open a new terminal and connect to the PostgreSQL instance:

    psql "host=127.0.0.1 sslmode=disable dbname=dependency-track user=dependency-track-user"
    
  4. Delete the pod when you're done:

    kubectl delete pod/proxy
    

Considerations for a production system

Take some time to explore Dependency-Track and consider how and where you might use it. The scenarios provided in this tutorial (manual upload, command-line upload, and CI/CD integration) are good starting points for including dependency tracking as part of managing your software supply chain.

If Dependency-Track is right for your organization, it's important to consider a production approach to setting up the system.

Though this tutorial demonstrates several aspects of setup and configuration, you need to consider additional aspects for a production implementation, including the following:

  • Keep your container images up to date: You got a copy of the Dependency-Track images from Docker Hub and set them up in Artifact Registry with container scanning. Make sure to track new releases of Dependency-Track and also consider how to keep your container images updated.
  • Consider your GKE deployment: This tutorial provides a set of good practices, such as workload identity, Cloud SQL Auth Proxy, and private clusters. For a production instance, you must also plan out aspects such as network setup and how you serve the client and API to your organization. Don't forget to review access to cluster endpoints to determine whether public access to the Kubernetes control plane can be disabled.
  • Review all permissions: Some of the tutorial configuration needs tightening up for a long-term production service. For example, review the Cloud SQL user, because it has very broad access that you can reduce.
  • Review the Cloud SQL configuration: Consider aspects such as automated backups and the resources (memory and vCPU) of the underlying virtual machine.
  • Set up access: Consider how users will access the system. Dependency-Track supports OIDC, which can save you from managing authentication in the Dependency-Track service. You can also explore Identity-Aware Proxy for remote access to the system.
  • Rotate API keys regularly: Dependency-Track uses API keys for access to the API Server (such as from Cloud Build). Ensure that these keys are rotated regularly.
  • Use security and operations services: Consider tools such as Cloud Armor and Google Cloud's operations suite for the ongoing security and operation of your system.

Having a model to track dependencies is a great first step. Configuring the system to notify you when a vulnerability pops up is even better. Check out the Dependency-Track notifications document for options. The webhooks model is a useful approach to automating responses. Also consider your processes and how your organization will respond when a vulnerability is reported.

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the project.

  1. In the Cloud Console, go to the Projects page.
  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Remove the two DNS entries that you created in the "Configure your domains" section.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.