Connect a PACS to the Cloud Healthcare API

This page explains how to use the open source Cloud Healthcare API DICOM adapter on Google Kubernetes Engine (GKE) to complete the following tasks:

  • Connect a Picture Archiving and Communication System (PACS) to the Cloud Healthcare API.
  • Import DICOM data from the PACS to a DICOM store in the Cloud Healthcare API.

This guide provides a simple way to set up a prototype using Google Kubernetes Engine and a Compute Engine virtual machine (VM). The Compute Engine VM simulates the on-premises PACS. For more detailed information, see the DICOM adapter README.

DICOM adapter overview

The adapter consists of two primary components: the import adapter and the export adapter. This guide shows how to use the import adapter to store DICOM images in a DICOM store.

Use the DICOM adapter to translate data between traditional protocols and RESTful protocols. For example, you can translate from the C-STORE format to the STOW-RS format.
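To sketch what that translation means in practice, the STOW-RS call the adapter makes on your behalf corresponds to a DICOMweb request like the one below. All resource names (PROJECT_ID, LOCATION, DATASET_ID, DICOM_STORE_ID) are placeholders, and the curl line is commented out so it is not run accidentally:

```shell
# Sketch, using assumed placeholder resource names: the adapter turns an
# incoming C-STORE into a DICOMweb STOW-RS request against this endpoint.
STOW_URL="https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb/studies"

# An equivalent manual upload of a single instance.dcm (uncomment to run):
# curl -X POST \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   -H "Content-Type: application/dicom" \
#   --data-binary @instance.dcm \
#   "${STOW_URL}"
```

The PACS itself never sees this request; it only speaks DIMSE (C-STORE) to the adapter.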


Costs

This guide uses billable components of Google Cloud, including the following:

  • Cloud Healthcare API
  • Google Kubernetes Engine
  • Compute Engine

Use the Pricing Calculator to generate a cost estimate based on your projected usage. New Cloud Platform users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Enable the Cloud Healthcare API, Google Kubernetes Engine, and Container Registry APIs.

    Enable the APIs

  5. Wait for the GKE API and related services to be enabled. This can take several minutes.
  6. Create a DICOM store if you haven't already.
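If you prefer the command line, a dataset and DICOM store can also be created with the gcloud tool. The IDs and location below are assumed example values — substitute your own:

```shell
# Assumed example values; replace with your own IDs and location.
LOCATION=us-central1
DATASET_ID=dicom-adapter-dataset
DICOM_STORE_ID=dicom-adapter-store

# Uncomment to create the resources:
# gcloud healthcare datasets create "${DATASET_ID}" --location="${LOCATION}"
# gcloud healthcare dicom-stores create "${DICOM_STORE_ID}" \
#     --dataset="${DATASET_ID}" --location="${LOCATION}"
```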

Choose a shell

To complete this guide, use Cloud Shell or your local shell.

Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes preinstalled with the following tools, which you use in this guide:

  • gcloud tool: provides the primary command-line interface for Google Cloud
  • kubectl: provides the command-line interface for running commands against GKE clusters

To open Cloud Shell or configure your local shell, complete the following steps:

Cloud Shell

  1. Go to Google Cloud Console.

    Google Cloud Console

  2. From the top-right corner of the console, click the Activate Google Cloud Shell button:

A Cloud Shell session opens inside a frame at the bottom of the console. You use this shell to run gcloud and kubectl commands.

Local shell

If you prefer to use your local shell, you must install the Cloud SDK, which includes the gcloud tool and the kubectl tool. See Installing Cloud SDK for instructions.

Deploy the adapter using Google Kubernetes Engine

The import adapter and export adapter are containerized applications staged in pre-built Docker images in Container Registry. In this guide, you deploy the dicom-import-adapter image to run on a GKE cluster.

Grant the Compute Engine service account permissions

Follow the instructions in Creating and enabling service accounts for instances to grant the Compute Engine default service account the roles/healthcare.dicomEditor role. For more information, see DICOM store roles.
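As a hedged sketch, the grant can also be made from the shell. The Compute Engine default service account is named after your project number; the PROJECT_NUMBER value below is an assumed placeholder:

```shell
# Sketch: grant roles/healthcare.dicomEditor to the Compute Engine default
# service account. PROJECT_NUMBER is an assumed placeholder value.
PROJECT_NUMBER=123456789012
MEMBER="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

# Uncomment to apply the binding to your project:
# gcloud projects add-iam-policy-binding PROJECT_ID \
#     --member="${MEMBER}" \
#     --role="roles/healthcare.dicomEditor"
```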

Create the cluster

To create a cluster in GKE named dicom-adapter, run the gcloud container clusters create command:

gcloud container clusters create dicom-adapter \
    --zone=COMPUTE_ZONE \
    --scopes=https://www.googleapis.com/auth/cloud-healthcare

COMPUTE_ZONE is the zone where your cluster is deployed. A zone is an approximate regional location where your clusters and their resources are deployed. For example, us-west1-a is a zone in the us-west1 region. If you've set a default zone using the gcloud config set compute/zone command, the value of the flag in the previous command overrides the default.

The output is similar to the following:

Creating cluster dicom-adapter in COMPUTE_ZONE... Cluster is being health-checked (master is healthy)...done.
Created [].
To inspect the contents of your cluster, go to:
kubeconfig entry generated for dicom-adapter.
NAME           LOCATION      MASTER_VERSION   MASTER_IP        MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
dicom-adapter  COMPUTE_ZONE  1.18.16-gke.502  123.456.789.012  n1-standard-1  1.18.16-gke.502  3          RUNNING

Configure the Deployment

When deploying an application to GKE, you define properties of the Deployment using a Deployment manifest file, which is typically a YAML file. For information on Deployment manifest files, see Creating Deployments.

Using a text editor, create a Deployment manifest file for the import adapter called dicom_adapter.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dicom-adapter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dicom-adapter
  template:
    metadata:
      labels:
        app: dicom-adapter
    spec:
      containers:
        - name: dicom-import-adapter
          # Replace ADAPTER_VERSION with the import adapter release you want
          # to deploy. See the DICOM adapter README for available versions.
          image: gcr.io/cloud-healthcare-containers/healthcare-api-dicom-dicomweb-adapter-import:ADAPTER_VERSION
          ports:
            - containerPort: 2575
              protocol: TCP
              name: "port"
          args:
            - "--dimse_aet=IMPORTADAPTER"
            - "--dimse_port=2575"
            - "--dicomweb_address=https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb"

Replace the following:

  • PROJECT_ID: project ID
  • LOCATION: location of the dataset
  • DATASET_ID: ID for the parent dataset of your DICOM store
  • DICOM_STORE_ID: ID for the DICOM store to which you're importing DICOM data
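As a sketch of how those placeholders fit together, the --dicomweb_address value is assembled from them like this (the example values are assumptions):

```shell
# Assumed example values; replace with your own.
PROJECT_ID=my-project
LOCATION=us-central1
DATASET_ID=my-dataset
DICOM_STORE_ID=my-dicom-store

# The DICOMweb base path of the target DICOM store.
DICOMWEB_ADDRESS="https://healthcare.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/datasets/${DATASET_ID}/dicomStores/${DICOM_STORE_ID}/dicomWeb"
echo "${DICOMWEB_ADDRESS}"
```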

Configure the Service

To make the DICOM adapter accessible to applications outside of the GKE cluster (such as a PACS), you must configure an internal load balancer. The load balancer lets you internally expose the DIMSE port (in this guide, port 2575).

Create a Service manifest file to configure load balancing. In the directory where you created the Deployment manifest file, use a text editor to create a file called dicom_adapter_load_balancer.yaml with the following content. You create and deploy the Service manifest file in Deploying the Service and the internal load balancer.

apiVersion: v1
kind: Service
metadata:
  name: dicom-adapter-load-balancer
  # The "Internal" annotation results in a load balancer that can only
  # be accessed from within the VPC that the Kubernetes cluster is in.
  # You can remove this annotation to get an externally accessible load balancer.
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  ports:
    - port: 2575
      targetPort: 2575
      protocol: TCP
      name: port
  selector:
    app: dicom-adapter
  type: LoadBalancer

Deploy the Deployment

To deploy the adapter to a GKE cluster, run the following command in the directory containing the dicom_adapter.yaml Deployment manifest file:

kubectl apply -f dicom_adapter.yaml

The output is the following:

deployment.apps/dicom-adapter created

Inspect the Deployment

After you create the Deployment, use the kubectl tool to inspect it.

To get detailed information about the Deployment, run the following command:

kubectl describe deployment dicom-adapter

To view the Pod created by the Deployment, run the following command:

kubectl get pods -l app=dicom-adapter

To get information about the created Pod, run the following command using the name of the Pod returned from the previous command:

kubectl describe pod POD_NAME

If the Deployment was successful, the last part of the output from the previous command contains the following information. The adapter is ready to serve requests when the dicom-import-adapter container has the Started value in the Reason column.

  Type    Reason     Age    From                                                   Message
  ----    ------     ----   ----                                                   -------
  Normal  Scheduled  3m33s  default-scheduler                                      Successfully assigned default/dicom-adapter-69d579778-qrm7n to gke-dicom-adapter-default-pool-6f6e0dcd-9cdd
  Normal  Pulling    3m31s  kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Pulling image ""
  Normal  Pulled     3m10s  kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Successfully pulled image ""
  Normal  Created    3m7s   kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Created container dicom-import-adapter
  Normal  Started    3m7s   kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Started container dicom-import-adapter
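As an alternative to scanning the events by hand, you can wait for the rollout to report readiness. This is a sketch; kubectl rollout status is a standard command, and the timeout value is an arbitrary choice:

```shell
# Sketch: block until the Deployment's Pods are available (or time out).
DEPLOYMENT=dicom-adapter
# Uncomment to run against your cluster:
# kubectl rollout status "deployment/${DEPLOYMENT}" --timeout=120s
```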

Deploy the Service and the internal load balancer

To create the internal load balancer, in the directory containing the dicom_adapter_load_balancer.yaml Service manifest file, run the following command:

kubectl apply -f dicom_adapter_load_balancer.yaml

The output is the following:

service/dicom-adapter-load-balancer created

Inspect the Service

After creating the Service, inspect it to verify that it has been configured correctly.

To inspect the load balancer, run the following command:

kubectl describe service dicom-adapter-load-balancer

The output is similar to the following:

Name:                     dicom-adapter-load-balancer
Namespace:                default
Labels:                   <none>
Annotations:              cloud.google.com/load-balancer-type: Internal
Selector:                 app=dicom-adapter
Type:                     LoadBalancer
LoadBalancer Ingress:
Port:                     port  2575/TCP
TargetPort:               2575/TCP
NodePort:                 port  30440/TCP
Session Affinity:         None
External Traffic Policy:  Cluster

  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  1m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m    service-controller  Ensured load balancer

The LoadBalancer Ingress IP address might take up to a minute to populate. Save the LoadBalancer Ingress IP address. You use it to access the Service from outside the cluster in the next section.
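If you prefer to capture the address from the shell rather than copying it from the describe output, a sketch like the following uses a JSONPath query (uncomment the kubectl line to run it against your cluster):

```shell
# Sketch: extract the LoadBalancer Ingress IP once it has been populated.
SERVICE=dicom-adapter-load-balancer
JSONPATH='{.status.loadBalancer.ingress[0].ip}'
# Uncomment to run against your cluster:
# LB_IP=$(kubectl get service "${SERVICE}" --output "jsonpath=${JSONPATH}")
# echo "${LB_IP}"
```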

Create a Compute Engine virtual machine

To simulate your on-premises PACS, create a Compute Engine VM to send requests to the DICOM adapter. Because you deployed an internal load balancer, the VM you create and the existing GKE cluster must be in the same region and use the same VPC network.

Complete the following steps to create a Linux virtual machine instance in Compute Engine:


Console

  1. In the Cloud Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click Create instance.

  3. Choose a Region and Zone for the instance that match the zone you selected when you created the cluster. For example, if you used us-central1-a for the COMPUTE_ZONE when you created the cluster, select us-central1 (Iowa) for the Region and us-central1-a for the Zone.

  4. In the Boot disk section, click Change to configure your boot disk.

  5. On the Public images tab, choose version 9 of the Debian operating system.

  6. Click Select.

  7. In the Firewall section, select Allow HTTP traffic.

  8. Click Create to create the instance.


gcloud

Run the gcloud compute instances create command. The command uses the http-server tag to allow HTTP traffic.

gcloud compute instances create INSTANCE_NAME \
   --project=PROJECT_ID \
   --zone=COMPUTE_ZONE \
   --image-family=debian-9 \
   --image-project=debian-cloud \
   --tags=http-server

Replace the following:

  • INSTANCE_NAME: name of the VM
  • PROJECT_ID: project ID
  • COMPUTE_ZONE: zone that you selected when you created the cluster

The output is similar to the following:

Created [].
NAME           ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
INSTANCE_NAME  COMPUTE_ZONE  n1-standard-1               INTERNAL_IP  EXTERNAL_IP  RUNNING

Allow a short time for the instance to start up. After the instance starts, it's listed on the VM Instances page with a green status icon.

Connect to the VM

To connect to the VM, complete the following steps:


Console

  1. In the Cloud Console, go to the VM Instances page.

    Go to the VM Instances page

  2. In the list of virtual machine instances, click SSH in the row of the instance that you created.


gcloud

Run the gcloud compute ssh command:

gcloud compute ssh INSTANCE_NAME \
    --project PROJECT_ID \
    --zone COMPUTE_ZONE

Replace the following:

  • INSTANCE_NAME: name of the VM
  • PROJECT_ID: project ID
  • COMPUTE_ZONE: zone of the VM

You now have a terminal window for interacting with your Linux instance.

Import DICOM images to the DICOM store

There are multiple software options available that you can use to send DICOM images over a network. In the following steps, you use the DCMTK DICOM toolkit.

To import DICOM images to your DICOM store, complete the following steps from the VM you created in the previous section:

  1. Install the DCMTK DICOM toolkit software:

    sudo apt-get install -y dcmtk
  2. Save the DICOM image to the VM. For example, if the DICOM image is stored in a Cloud Storage bucket, run the following command to download it to your current working directory:

    gsutil cp gs://BUCKET/DCM_FILE .

    To use a DICOM image made freely available by Google Cloud from the gcs-public-data--healthcare-tcia-lidc-idri dataset, run the following command:

    gsutil -u PROJECT_ID cp gs://gcs-public-data--healthcare-tcia-lidc-idri/dicom/ .
  3. Run the dcmsend command, which is available through the DCMTK DICOM toolkit. When you run the command, set the Application Entity (AE) Title to IMPORTADAPTER. You can optionally add the --verbose flag to show processing details. The port used in this guide is 2575.

    dcmsend --verbose PEER 2575 DCM_FILE_IN -aec IMPORTADAPTER

    Replace the following:

    • PEER: LoadBalancer Ingress IP address that was returned when you inspected the Service
    • DCM_FILE_IN: path to the DICOM image on the VM

    When running dcmsend with a single DICOM image, the output is the following:

    I: checking input files ...
    I: starting association #1
    I: initializing network ...
    I: negotiating network association ...
    I: Requesting Association
    I: Association Accepted (Max Send PDV: 16366)
    I: sending SOP instances ...
    I: Sending C-STORE Request (MsgID 1, MR)
    I: Received C-STORE Response (Success)
    I: Releasing Association
    I: Status Summary
    I: --------------
    I: Number of associations   : 1
    I: Number of pres. contexts : 1
    I: Number of SOP instances  : 1
    I: - sent to the peer       : 1
    I:   * with status SUCCESS  : 1
  4. To verify that the DICOM image was successfully imported to your DICOM store, search for instances in the DICOM store and ensure that the new DICOM image is in the store.
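One hedged way to perform that search from the shell is a DICOMweb SearchForInstances request. The resource names below are placeholders, and the curl line is commented out so it is not run accidentally:

```shell
# Sketch, using assumed placeholder resource names: list the instances in
# the DICOM store to confirm the upload arrived.
SEARCH_URL="https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb/instances"

# Uncomment to run:
# curl -X GET \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   "${SEARCH_URL}"
```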

After completing this section, you have successfully deployed the DICOM adapter to GKE and sent a DICOM image from a PACS instance through the adapter to the Cloud Healthcare API.


GKE troubleshooting

If the DICOM adapter encounters a failure after you deploy it to GKE, follow the steps in Troubleshooting issues with deployed workloads.

Adapter troubleshooting

The import and export adapters generate logs that you can use to diagnose any issues. When you run an adapter using GKE, the logs are stored in Cloud Logging. To view the logs, complete the following steps using either the Google Cloud Console or the kubectl tool:


Console

  1. Go to the GKE Workloads dashboard in Cloud Console.

    Go to the GKE Workloads dashboard

  2. Select the dicom-adapter workload.

  3. In the Deployment details page, click Container logs.


kubectl

To see all Pods running in your cluster, run the following command:

kubectl get pods

Look for the Pod whose name begins with dicom-adapter.

To get the Pod's logs, run the following command:

kubectl logs POD_NAME

If you missed any of the steps in this guide, the dcmsend command might fail to upload images. To investigate this issue, re-run the command with the -d flag (for "debug"). The flag prints a more verbose log of actions, including messages that provide information about the failure.

Permission and authorization troubleshooting

The following sections describe errors that can occur in dcmsend when permissions or authorizations are incorrectly configured.

Peer aborted association error

The following issue occurs when network traffic cannot flow from the PACS to port 2575 of the load balancer:

cannot send SOP instance: Peer aborted Association (or never connected)

To resolve this issue, ensure that the PACS VM and the GKE cluster are running in the same VPC network. If they are not running in the same VPC network, check the following:

  • The load balancer is not configured as "internal."
  • There are no firewall rules blocking connections to port 2575.
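If a firewall rule is the culprit, a sketch like the following creates one that opens the DIMSE port. The rule name is an assumption, and the command targets the default network — adjust both for your setup:

```shell
# Sketch (assumed rule name and network): allow TCP traffic to port 2575.
RULE_NAME=allow-dimse-2575
# Uncomment to create the rule in the default VPC network:
# gcloud compute firewall-rules create "${RULE_NAME}" \
#     --network=default \
#     --allow=tcp:2575
```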

This error can also happen when either the load balancer service or the adapter Pod are not correctly set up in the GKE cluster. To ensure that they are correctly set up, review Inspecting the Deployment and Inspecting the Service in this guide.

Required APIs not enabled error

The following issue occurs when the Cloud Healthcare API has not been enabled in the project where the GKE cluster with the adapter is running:

LO [Http_403, PERMISSION_DENIED, Cloud Healthcare API has not been u]

To resolve this issue, ensure all needed APIs are enabled by following the instructions in Before you begin.

Insufficient scope error

The following issue occurs when the GKE cluster where the adapter is running does not have the correct scope value set:

LO [Http_403, PERMISSION_DENIED, Request had insufficient authentica]

To resolve this issue, update the GKE cluster or create a new cluster with the following flag:

--scopes=https://www.googleapis.com/auth/cloud-healthcare
DICOM store permission denied error

The following error occurs when the service account used by the GKE cluster where the adapter is running does not have the roles/healthcare.dicomEditor role:

LO [Http_403, PERMISSION_DENIED, Permission healthcare.dicomStores.d]

To resolve this issue, follow the instructions in Granting the Compute Engine service account permissions.

What's next

After configuring the prototype in this guide, you can start using Cloud VPN to encrypt traffic between your PACS and the Cloud Healthcare API.