Connecting a PACS to the Cloud Healthcare API

This page explains how to use the open source Cloud Healthcare API DICOM adapter on Google Kubernetes Engine (GKE) to connect a Picture Archiving and Communication System (PACS) to the Cloud Healthcare API and import DICOM data.

This guide provides a simple way to set up a prototype using Google Kubernetes Engine and a Compute Engine virtual machine (VM). The Compute Engine VM simulates the on-premises PACS. For more detailed information, see the DICOM adapter README in GitHub.


The adapter can import and export data. This guide shows how to import DICOM images to a DICOM store.

Use the DICOM adapter to translate data between traditional protocols and RESTful protocols. For example, you can translate from the C-STORE format to the STOW-RS format.
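Conceptually, the adapter receives each image over a DIMSE C-STORE association and re-sends it as a STOW-RS HTTP request, whose body is a multipart/related message containing the DICOM bytes. The following Python sketch shows only the STOW-RS packaging step; the function name and boundary value are illustrative, and the real adapter additionally handles transfer syntaxes, authentication, and error mapping:

```python
def build_stow_rs_body(dicom_blobs, boundary="DICOMwebBoundary"):
    """Wrap raw DICOM files (as received via C-STORE) into a STOW-RS
    multipart/related request body.

    dicom_blobs: list of bytes objects, each one complete DICOM file.
    Returns (content_type_header_value, body_bytes).
    """
    parts = []
    for blob in dicom_blobs:
        # Each part carries one DICOM instance with its own part headers.
        parts.append(
            f"--{boundary}\r\nContent-Type: application/dicom\r\n\r\n".encode()
            + blob
            + b"\r\n"
        )
    # The closing delimiter ends the multipart message.
    body = b"".join(parts) + f"--{boundary}--".encode()
    content_type = f'multipart/related; type="application/dicom"; boundary={boundary}'
    return content_type, body
```

A STOW-RS client would POST this body to the store's `/dicomWeb/studies` path with the returned `Content-Type` header.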


This tutorial uses billable components of Google Cloud, including:

  • Cloud Healthcare API
  • Google Kubernetes Engine
  • Compute Engine

Use the Pricing Calculator to generate a cost estimate based on your projected usage. New Cloud Platform users might be eligible for a free trial.

Before you begin

Before you can set up the DICOM adapter, you must choose or create a Google Cloud project and enable the required APIs by completing the following steps:

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Enable the Cloud Healthcare API, Google Kubernetes Engine, and Container Registry APIs.

    Enable the APIs

  5. Wait for the GKE API and related services to be enabled. This can take several minutes.

Choosing a shell

To complete this tutorial, you can use Cloud Shell or your local shell.

Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes preinstalled with the gcloud tool and the kubectl tool. The gcloud tool provides the primary command-line interface for GCP. kubectl provides the command-line interface for running commands against GKE clusters.

If you prefer using your local shell, you must install the Cloud SDK, which includes the gcloud tool and the kubectl tool.

To open Cloud Shell or configure your local shell, complete the following steps:

Cloud Shell

To launch Cloud Shell, complete the following steps:

  1. Go to Google Cloud Console.

    Google Cloud Console

  2. From the top-right corner of the console, click the Activate Google Cloud Shell button:

A Cloud Shell session opens inside a frame at the bottom of the console. You use this shell to run gcloud and kubectl commands.

Local shell

To install the gcloud tool and the kubectl tool, install and initialize the latest Cloud SDK version.

Creating a DICOM store

Before using the DICOM adapter, you must create a DICOM store if you haven't already.

Deploying the adapter using Google Kubernetes Engine

The import adapter and export adapter are containerized applications staged as pre-built Docker images in Container Registry. You can deploy these images to run on a GKE cluster.

Creating the cluster

To create a cluster in GKE named dicom-adapter, run the gcloud container clusters create command:

gcloud container clusters create dicom-adapter \
    --zone=COMPUTE_ZONE \
    --scopes=https://www.googleapis.com/auth/cloud-healthcare


  • COMPUTE_ZONE is the zone in which your cluster is deployed. A zone is an approximate regional location in which your clusters and their resources are deployed. For example, us-west1-a is a zone in the us-west1 region. If you've set a default zone using gcloud config set compute/zone, the value of this flag overrides the default.

If successful, the command returns the following response:

Creating cluster dicom-adapter in COMPUTE_ZONE... Cluster is being health-checked (master is healthy)...done.
Created [].
To inspect the contents of your cluster, go to:
kubeconfig entry generated for dicom-adapter.
NAME           LOCATION      MASTER_VERSION   MASTER_IP        MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
dicom-adapter  COMPUTE_ZONE  1.18.16-gke.502  123.456.789.012  n1-standard-1  1.18.16-gke.502  3          RUNNING

Configuring the Deployment

When deploying an application to GKE, you define properties of the Deployment using a Deployment manifest file, which is typically a YAML file.

Using a text editor, create a Deployment manifest file for the import adapter called dicom_adapter.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dicom-adapter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dicom-adapter
  template:
    metadata:
      labels:
        app: dicom-adapter
    spec:
      containers:
        - name: dicom-import-adapter
          image: gcr.io/cloud-healthcare-containers/healthcare-api-dicom-dicomweb-adapter-import:0.2.24
          ports:
            - containerPort: 2575
              protocol: TCP
              name: "port"
          args:
            - "--dimse_aet=IMPORTADAPTER"
            - "--dimse_port=2575"
            - "--dicomweb_address=https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb"


  • PROJECT_ID is the ID for the Google Cloud project containing your DICOM store.
  • LOCATION is the location where your DICOM store is located.
  • DATASET_ID is the ID for the parent dataset of your DICOM store.
  • DICOM_STORE_ID is the ID for the DICOM store to which you're importing DICOM data.

Configuring the Service

To make the DICOM adapter accessible to applications outside of the GKE cluster (such as a PACS), you must configure an internal load balancer. The load balancer lets you internally expose the DIMSE port (in this guide, port 2575).

Create a Service manifest file to configure internal load balancing. In the directory where you created the Deployment manifest file, use a text editor to create a file called dicom_adapter_load_balancer.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: dicom-adapter-load-balancer
  annotations:
    # The "Internal" annotation results in a load balancer that can only
    # be accessed from within the VPC that the Kubernetes cluster is in.
    # You can remove this annotation to get an externally accessible load balancer.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  ports:
  - port: 2575
    targetPort: 2575
    protocol: TCP
    name: port
  selector:
    app: dicom-adapter
  type: LoadBalancer

Deploying the Deployment

To deploy the adapter to a GKE cluster, in the directory containing the dicom_adapter.yaml Deployment manifest file, run the following command:

kubectl apply -f dicom_adapter.yaml

If successful, the command returns the following output:

deployment.apps/dicom-adapter created

Inspecting the Deployment

After you create the Deployment, you can use the kubectl tool to inspect it.

To get detailed information about the Deployment, run the following command:

kubectl describe deployment dicom-adapter

To list the Pod created by the Deployment, run the following command:

kubectl get pods -l app=dicom-adapter

To get information about the created Pod:

kubectl describe pod POD_NAME

If the Deployment was successful, the last part of the output from the previous command should contain the following information:

  Type    Reason     Age    From                                                   Message
  ----    ------     ----   ----                                                   -------
  Normal  Scheduled  3m33s  default-scheduler                                      Successfully assigned default/dicom-adapter-69d579778-qrm7n to gke-dicom-adapter-default-pool-6f6e0dcd-9cdd
  Normal  Pulling    3m31s  kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Pulling image ""
  Normal  Pulled     3m10s  kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Successfully pulled image ""
  Normal  Created    3m7s   kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Created container dicom-import-adapter
  Normal  Started    3m7s   kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Started container dicom-import-adapter

Deploying the Service and creating the internal load balancer

To create the internal load balancer, in the directory containing the dicom_adapter_load_balancer.yaml Service manifest file, run the following command:

kubectl apply -f dicom_adapter_load_balancer.yaml

If successful, the command returns the following output:

service/dicom-adapter-load-balancer created

Inspecting the Service

After creating the Service, inspect it to verify that it has been configured successfully.

To inspect the internal load balancer, run the following command:

kubectl describe service dicom-adapter-load-balancer

If successful, the command returns output similar to the following:

Name:                     dicom-adapter-load-balancer
Namespace:                default
Labels:                   <none>
Annotations:              cloud.google.com/load-balancer-type: Internal
Selector:                 app=dicom-adapter
Type:                     LoadBalancer
LoadBalancer Ingress:
Port:                     port  2575/TCP
TargetPort:               2575/TCP
NodePort:                 port  30440/TCP
Session Affinity:         None
External Traffic Policy:  Cluster

  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  1m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m    service-controller  Ensured load balancer

The LoadBalancer Ingress IP address might take up to a minute to populate. Copy the LoadBalancer Ingress IP address, because you will use it and port 2575 to access the Service from outside the cluster in the next section.
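Before pointing a PACS at the adapter, you can sanity-check that the DIMSE port is reachable from a machine inside the same VPC. A minimal Python sketch of such a TCP reachability check (the function name is illustrative; run it with the LoadBalancer Ingress IP address as the host):

```python
import socket

def check_dimse_reachable(host: str, port: int = 2575, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    This only verifies network reachability of the DIMSE port, not that
    a full DICOM association can be negotiated.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the check fails from the VM you create in the next section, verify that the VM and cluster share the same VPC network and region.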

Creating a Compute Engine virtual machine

To simulate your on-premises PACS, create a Compute Engine VM that you'll use to send requests to the DICOM adapter. Because you deployed an internal load balancer, the VM that you create and the existing GKE cluster must be in the same region and use the same VPC network.

The following steps show how to create a Linux virtual machine instance in Compute Engine:


  1. In the Cloud Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click Create instance.

  3. Choose a Region and Zone for the instance that matches the zone you selected when you created the cluster. For example, if you used us-central1-a for the COMPUTE_ZONE when you created the cluster, then in the instance creation screen, select us-central1 (Iowa) for the Region and us-central1-a for the Zone.

  4. In the Boot disk section, click Change to begin configuring your boot disk.

  5. On the Public images tab, choose version 9 of the Debian operating system.

  6. Click Select.

  7. In the Firewall section, select Allow HTTP traffic.

  8. Click Create to create the instance.


To create a compute instance, run the gcloud compute instances create command with the following options:

  • The COMPUTE_ZONE that you selected when you created the cluster
  • The http-server tag to allow HTTP traffic
gcloud compute instances create INSTANCE_NAME \
    --project=PROJECT_ID \
    --zone=COMPUTE_ZONE \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --tags=http-server

The output is similar to the following sample:

Created [].
INSTANCE_NAME  COMPUTE_ZONE           n1-standard-1               INTERNAL_IP  EXTERNAL_IP    RUNNING

Allow a short time for the instance to start up. After the instance is started, it's listed on the VM Instances page with a green status icon.

By default, the instance uses the same default VPC network as the cluster, so traffic can be sent from the instance to the cluster.

To connect to the instance, complete the following steps:


  1. In the Cloud Console, go to the VM Instances page.

    Go to the VM Instances page

  2. In the list of virtual machine instances, click SSH in the row of the instance that you created.


To connect to the instance, run the gcloud compute ssh command:

gcloud compute ssh INSTANCE_NAME \
    --project PROJECT_ID \
    --zone COMPUTE_ZONE

You now have a terminal window for interacting with your Linux instance.

Importing DICOM images to the DICOM store

There are several software options available that you can use to send DICOM images over a network. In the following sections, you use the DCMTK DICOM toolkit.

To import DICOM images to your DICOM store, complete the following steps from the VM you created in the previous section:

  1. Install the DCMTK DICOM toolkit software:

    sudo apt install dcmtk
  2. Upload the DICOM image you want to import into your DICOM store to the VM. For example, if the DICOM image is stored in a Cloud Storage bucket, you would run the following command to download it to your current working directory:

    gsutil cp gs://BUCKET/DCM_FILE .

    To use a DICOM image made freely available by Google Cloud from the gcs-public-data--healthcare-tcia-lidc-idri dataset, run the following command:

    gsutil -u PROJECT_ID cp gs://gcs-public-data--healthcare-tcia-lidc-idri/dicom/ .
  3. Run the dcmsend command, which is available through the DCMTK DICOM toolkit. When you run the command, set the Application Entity (AE) Title to IMPORTADAPTER. You can optionally add the --verbose flag to show processing details. Before sending the request, make the following replacements:

    • PEER: The LoadBalancer Ingress IP address that was returned when you inspected the Service.
    • 2575: The DIMSE port that you exposed through the internal load balancer.
    • DCM_FILE_IN: The path on your filesystem to the DICOM image you are uploading.
    dcmsend --verbose PEER 2575 DCM_FILE_IN -aec IMPORTADAPTER

    If the request is successful, the terminal shows the following output when running dcmsend with a single DICOM image:

    I: checking input files ...
    I: starting association #1
    I: initializing network ...
    I: negotiating network association ...
    I: Requesting Association
    I: Association Accepted (Max Send PDV: 16366)
    I: sending SOP instances ...
    I: Sending C-STORE Request (MsgID 1, MR)
    I: Received C-STORE Response (Success)
    I: Releasing Association
    I: Status Summary
    I: --------------
    I: Number of associations   : 1
    I: Number of pres. contexts : 1
    I: Number of SOP instances  : 1
    I: - sent to the peer       : 1
    I:   * with status SUCCESS  : 1
  4. To verify that the DICOM image was successfully imported to your DICOM store, search for instances in the DICOM store and ensure that the new DICOM image is in the store.

After completing this section, you have successfully deployed the DICOM adapter to GKE and sent a DICOM image from a simulated PACS through the adapter to the Cloud Healthcare API.
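The verification step above searches the DICOM store over DICOMweb. The QIDO-RS endpoint for searching instances in a Cloud Healthcare API DICOM store follows a fixed URL pattern; the helper below assembles it (the function name is illustrative, and the placeholder arguments are yours to replace):

```python
def qido_instances_url(project_id: str, location: str,
                       dataset_id: str, dicom_store_id: str) -> str:
    """Build the QIDO-RS URL for searching all instances in a
    Cloud Healthcare API DICOM store."""
    return (
        "https://healthcare.googleapis.com/v1"
        f"/projects/{project_id}/locations/{location}"
        f"/datasets/{dataset_id}/dicomStores/{dicom_store_id}"
        "/dicomWeb/instances"
    )
```

An authenticated GET request to this URL returns a JSON array of matching instances; an entry for the image you sent confirms the import succeeded.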

Troubleshooting adapter failures

If the DICOM adapter encounters a failure after you deploy it to GKE, follow the steps in Troubleshooting issues with deployed workloads.

The import and export adapters generate logs that you can use to diagnose any issues. When you run an adapter using GKE, the logs are stored in Cloud Logging. To view the logs, complete the following steps using either the Google Cloud Console or the kubectl tool:


  1. Visit the GKE Workloads dashboard in Cloud Console.

    Visit the GKE Workloads dashboard

  2. Select the dicom-adapter workload.

  3. In the Deployment details page, click Container logs.


To see all Pods running in your cluster, run the following command:

kubectl get pods

Look for the Pod whose name begins with dicom-adapter.

To get the Pod's logs, run the following command:

kubectl logs POD_NAME

What's next

After configuring the prototype in this guide, you can start using Cloud VPN to encrypt traffic between your PACS and the Cloud Healthcare API.