This page explains how to use the open source Cloud Healthcare API DICOM adapter on Google Kubernetes Engine (GKE) to connect a Picture Archiving and Communication System (PACS) to the Cloud Healthcare API and import DICOM data.
This guide provides a simple way to set up a prototype using Google Kubernetes Engine and a Compute Engine virtual machine (VM). The Compute Engine VM simulates the on-premises PACS. For more detailed information, see the DICOM adapter README on GitHub.
The DICOM adapter provides a set of components that can translate data between traditional DICOM DIMSE protocols (such as C-STORE) and RESTful DICOMweb protocols (such as STOW-RS). The adapter consists of two primary components: the import adapter and the export adapter. In this guide, you use the import adapter to import DICOM images to a DICOM store.
This tutorial uses billable components of Google Cloud, including:
- Cloud Healthcare API
- Google Kubernetes Engine
- Compute Engine
Before you begin
Before you can set up the DICOM adapter, you must choose or create a Google Cloud project and enable the required APIs by completing the following steps:
Sign in to your Google Account.
If you don't already have one, sign up for a new account.
In the Cloud Console, on the project selector page, select or create a Cloud project.
Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- Enable the Cloud Healthcare API, Google Kubernetes Engine, and Container Registry APIs.
- Wait for the GKE API and related services to be enabled. This can take several minutes.
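The required APIs can also be enabled from the command line; a sketch using the gcloud tool, assuming gcloud is already configured for your project:

```shell
# Enable the Cloud Healthcare, GKE, and Container Registry APIs
gcloud services enable \
  healthcare.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com
```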
Choosing a shell
To complete this tutorial, you can use Cloud Shell or your local shell.
Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes preinstalled with the gcloud tool and the kubectl tool. The gcloud tool provides the primary command-line interface for Google Cloud, and kubectl provides the command-line interface for running commands against GKE clusters.
If you prefer using your local shell, you must install the Cloud SDK, which includes the gcloud tool, and the kubectl tool.
To open Cloud Shell or configure your local shell, complete the following steps:
To launch Cloud Shell, complete the following steps:
Go to Google Cloud Console.
From the top-right corner of the console, click the Activate Google Cloud Shell button:
A Cloud Shell session opens inside a frame at the bottom of the console. You use this shell to run gcloud and kubectl commands.
To install the gcloud tool and the kubectl tool, install and initialize the latest Cloud SDK version.
Creating a DICOM store
Before using the DICOM adapter, you must create a DICOM store if you haven't already.
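If you still need one, both the parent dataset and the DICOM store can be created with the gcloud tool; a minimal sketch, where DATASET_ID, DICOM_STORE_ID, and LOCATION are placeholders you replace with your own values:

```shell
# Create the parent dataset (skip if it already exists)
gcloud healthcare datasets create DATASET_ID \
  --location=LOCATION

# Create the DICOM store inside that dataset
gcloud healthcare dicom-stores create DICOM_STORE_ID \
  --dataset=DATASET_ID \
  --location=LOCATION
```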
Deploying the adapter using Google Kubernetes Engine
The import adapter and export adapter are containerized applications staged in a pre-built Docker image in Container Registry. You can deploy these images to run on a GKE cluster.
Creating the cluster
To create a cluster in GKE named dicom-adapter, run the gcloud container clusters create command:
gcloud container clusters create dicom-adapter \
  --zone=COMPUTE_ZONE \
  --scopes=https://www.googleapis.com/auth/cloud-healthcare
- COMPUTE_ZONE is the zone in which your cluster is deployed. A zone is an approximate regional location in which your clusters and their resources are deployed. For example, us-west1-a is a zone in the us-west1 region. If you've set a default zone using gcloud config set compute/zone, the value of this flag overrides the default.
If successful, the command returns the following response:
Creating cluster dicom-adapter in COMPUTE_ZONE...
Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/PROJECT_ID/zones/COMPUTE_ZONE/clusters/dicom-adapter].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/COMPUTE_ZONE/dicom-adapter?project=PROJECT_ID
kubeconfig entry generated for dicom-adapter.
NAME           LOCATION      MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION   NUM_NODES  STATUS
dicom-adapter  COMPUTE_ZONE  1.15.12-gke.2   123.456.789.012  n1-standard-1  1.15.12-gke.2  3          RUNNING
Configuring the Deployment
When deploying an application to GKE, you define properties of the Deployment using a Deployment manifest file, which is typically a YAML file.
Using a text editor, create a Deployment manifest file for the import adapter named dicom_adapter.yaml with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dicom-adapter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dicom-adapter
    spec:
      containers:
        - name: dicom-import-adapter
          image: gcr.io/cloud-healthcare-containers/healthcare-api-dicom-dicomweb-adapter-import:0.2.1
          ports:
            - containerPort: 2575
              protocol: TCP
              name: "port"
          args:
            - "--dimse_aet=IMPORTADAPTER"
            - "--dimse_port=2575"
            - "--dicomweb_address=https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb"
- PROJECT_ID is the ID for the Google Cloud project containing your DICOM store.
- LOCATION is the location where your DICOM store is located.
- DATASET_ID is the ID for the parent dataset of your DICOM store.
- DICOM_STORE_ID is the ID for the DICOM store to which you're importing DICOM data.
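To sanity-check the substitutions before editing the manifest, you can assemble the --dicomweb_address value in the shell first; a minimal sketch using hypothetical values (my-project, us-central1, and so on are placeholders for illustration only):

```shell
# Hypothetical example values -- substitute your own
PROJECT_ID=my-project
LOCATION=us-central1
DATASET_ID=my-dataset
DICOM_STORE_ID=my-dicom-store

# Assemble the fully substituted --dicomweb_address value for the manifest
DICOMWEB_ADDRESS="https://healthcare.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/datasets/${DATASET_ID}/dicomStores/${DICOM_STORE_ID}/dicomWeb"
echo "${DICOMWEB_ADDRESS}"
```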
Configuring the Service
To make the DICOM adapter accessible to applications outside of the GKE cluster (such as a PACS), you must configure an internal load balancer. The load balancer lets you internally expose the DIMSE port (in this guide, port 2575).
Create a Service manifest file to configure internal load balancing. In the directory where you created the Deployment manifest file, use a text editor to create a file called dicom_adapter_load_balancer.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: dicom-adapter-load-balancer
  # The "Internal" annotation will result in a load balancer that can only
  # be accessed from within the VPC the Kubernetes cluster is in.
  # You can remove this annotation to get an externally accessible load balancer.
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  ports:
    - port: 2575
      targetPort: 2575
      protocol: TCP
      name: port
  selector:
    app: dicom-adapter
  type: LoadBalancer
Deploying the Deployment
To deploy the adapter to a GKE cluster, in the directory containing the dicom_adapter.yaml Deployment manifest file, run the following command:
kubectl apply -f dicom_adapter.yaml
If successful, the command returns the following output:
deployment.extensions "dicom-adapter" created
Inspecting the Deployment
After you create the Deployment, you can use the
kubectl tool to inspect it.
To get detailed information about the Deployment, run the following command:
kubectl describe deployment dicom-adapter
To list the Pod created by the Deployment, run the following command:
kubectl get pods -l app=dicom-adapter
To get information about the created Pod:
kubectl describe pod POD_NAME
If the Deployment was successful, the last part of the output from the previous command should contain the following information:
Events:
  Type    Reason     Age    From                                                   Message
  ----    ------     ----   ----                                                   -------
  Normal  Scheduled  3m33s  default-scheduler                                      Successfully assigned default/dicom-adapter-69d579778-qrm7n to gke-dicom-adapter-default-pool-6f6e0dcd-9cdd
  Normal  Pulling    3m31s  kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Pulling image "gcr.io/cloud-healthcare-containers/healthcare-api-dicom-dicomweb-adapter-import:0.2.1"
  Normal  Pulled     3m10s  kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Successfully pulled image "gcr.io/cloud-healthcare-containers/healthcare-api-dicom-dicomweb-adapter-import:0.2.1"
  Normal  Created    3m7s   kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Created container dicom-import-adapter
  Normal  Started    3m7s   kubelet, gke-dicom-adapter-default-pool-6f6e0dcd-9cdd  Started container dicom-import-adapter
Deploying the Service and creating the internal load balancer
To create the internal load balancer, in the directory containing the
dicom_adapter_load_balancer.yaml Service manifest file, run the following command:
kubectl apply -f dicom_adapter_load_balancer.yaml
If successful, the command returns the following output:
service "dicom-adapter-load-balancer" created
Inspecting the Service
After creating the Service, inspect it to verify that it has been configured successfully.
To inspect the internal load balancer, run the following command:
kubectl describe service dicom-adapter-load-balancer
If successful, the command returns output similar to the following:
Name:                     dicom-adapter-load-balancer
Namespace:                default
Labels:                   <none>
Annotations:              cloud.google.com/load-balancer-type: Internal
Selector:                 app=dicom-adapter
Type:                     LoadBalancer
IP:                       198.51.100.1
LoadBalancer Ingress:     203.0.113.1
Port:                     port  2575/TCP
TargetPort:               2575/TCP
NodePort:                 port  30440/TCP
Endpoints:                192.0.2.1:2575
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age  From                Message
  ----    ------                ---  ----                -------
  Normal  EnsuringLoadBalancer  1m   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m   service-controller  Ensured load balancer
The LoadBalancer Ingress IP address might take up to a minute to populate. Note the LoadBalancer Ingress IP address, because you will use it and port 2575 to access the Service from outside the cluster in the next section.
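If you prefer to script this step, the ingress IP can also be read directly; a sketch using kubectl's JSONPath output (the Service name matches the manifest above):

```shell
# Print only the internal load balancer's ingress IP address
kubectl get service dicom-adapter-load-balancer \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```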
Creating a Compute Engine virtual machine
To simulate your on-premises PACS, create a Compute Engine VM that you'll use to send requests to the DICOM adapter. Because you deployed an internal load balancer, the VM you create and the existing GKE cluster must be in the same region and use the same VPC network.
The following steps show how to create a Linux virtual machine instance in Compute Engine:
In the Cloud Console, go to the VM Instances page.
Click Create instance.
Choose a Region and Zone for the instance that matches the zone you selected when you created the cluster. For example, if you used us-central1-a for the COMPUTE_ZONE when you created the cluster, then in the instance creation screen, select us-central1 (Iowa) for the Region and us-central1-a for the Zone.
In the Boot disk section, click Change to begin configuring your boot disk.
On the Public images tab, choose version 9 of the Debian operating system.
In the Firewall section, select Allow HTTP traffic.
Click Create to create the instance.
To create a compute instance, run the gcloud compute instances create command with the following options:
- The COMPUTE_ZONE that you selected when you created the cluster
- The http-server tag to allow HTTP traffic
gcloud compute instances create INSTANCE_NAME \
  --project=PROJECT_ID \
  --zone=COMPUTE_ZONE \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --tags=http-server
The output is similar to the following sample:
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/COMPUTE_ZONE/instances/INSTANCE_NAME].
NAME           ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
INSTANCE_NAME  COMPUTE_ZONE  n1-standard-1               INTERNAL_IP  EXTERNAL_IP  RUNNING
Allow a short time for the instance to start up. After the instance is started, it's listed on the VM Instances page with a green status icon.
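You can also poll the instance state from the shell instead of watching the console; a sketch, where the --format expression extracts only the status field:

```shell
# Prints RUNNING once the instance has started
gcloud compute instances describe INSTANCE_NAME \
  --project=PROJECT_ID \
  --zone=COMPUTE_ZONE \
  --format='value(status)'
```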
By default, the instance uses the default VPC network that the cluster uses, which means that traffic can be sent from the instance to the cluster.
To connect to the instance, complete the following steps:
In the Cloud Console, go to the VM Instances page.
In the list of virtual machine instances, click SSH in the row of the instance that you created.
To connect to the instance, run the
gcloud compute ssh command:
gcloud compute ssh INSTANCE_NAME \
  --project PROJECT_ID \
  --zone COMPUTE_ZONE
You now have a terminal window for interacting with your Linux instance.
Importing DICOM images to the DICOM store
There are several software options available that you can use to send DICOM images over a network. In the following sections, you use the DCMTK DICOM toolkit.
To import DICOM images to your DICOM store, complete the following steps from the VM you created in the previous section:
Install the DCMTK DICOM toolkit software:
sudo apt install dcmtk
Upload the DICOM image you want to import into your DICOM store to the VM. For example, if the DICOM image is stored in a Cloud Storage bucket, you would run the following command to download it to your current working directory:
gsutil cp gs://BUCKET/DCM_FILE .
To use a DICOM image made freely available by Google Cloud from the gcs-public-data--healthcare-tcia-lidc-idri dataset, run the following command:
gsutil -u PROJECT_ID cp gs://gcs-public-data--healthcare-tcia-lidc-idri/dicom/126.96.36.199.4.1.145188.8.131.52.6279.6001.100036212881370097961774473021/184.108.40.206.4.1.145220.127.116.11.6279.6001.130765375502800983459674173881/18.104.22.168.4.1.14522.214.171.124.6279.6001.100395847981751414562031366859.dcm .
Import the DICOM image to the DICOM store using the dcmsend command, which is available through the DCMTK DICOM toolkit. When you run the command, set the Application Entity (AE) Title to IMPORTADAPTER. You can optionally add the --verbose flag to show processing details. Before sending the request, make the following replacements:
- PEER: The LoadBalancer Ingress IP address that was returned when you inspected the Service.
- PORT: The port used is 2575, the port exposed by the internal load balancer.
- DCM_FILE_IN: The path on your filesystem to the DICOM image you are uploading.
dcmsend --verbose PEER 2575 DCM_FILE_IN -aec IMPORTADAPTER
If the request is successful, the terminal shows the following output when running dcmsend with a single DICOM image:
I: checking input files ...
I: starting association #1
I: initializing network ...
I: negotiating network association ...
I: Requesting Association
I: Association Accepted (Max Send PDV: 16366)
I: sending SOP instances ...
I: Sending C-STORE Request (MsgID 1, MR)
I: Received C-STORE Response (Success)
I: Releasing Association
I:
I: Status Summary
I: --------------
I: Number of associations   : 1
I: Number of pres. contexts : 1
I: Number of SOP instances  : 1
I: - sent to the peer       : 1
I: * with status SUCCESS    : 1
To verify that the DICOM image was successfully imported to your DICOM store, search for instances in the DICOM store and ensure that the new DICOM image is in the store.
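One way to run that search is a DICOMweb SearchForInstances request over REST; a sketch using curl with an access token from gcloud, where the placeholders are the same values used in the Deployment manifest:

```shell
# List the instances in the DICOM store; the imported image's
# SOP Instance UID should appear in the JSON response
curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb/instances"
```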
After completing this section, you have successfully deployed the DICOM adapter to GKE and sent a DICOM image from a PACS instance through the adapter and to the Cloud Healthcare API.
Troubleshooting adapter failures
If the DICOM adapter encounters a failure after you deploy it to GKE, follow the steps in Troubleshooting issues with deployed workloads.
The import and export adapters generate logs that you can use to diagnose any issues. When you run an adapter using GKE, the logs are stored in Cloud Logging. To view the logs, complete the following steps using either the Google Cloud Console or the kubectl tool:
Visit the GKE Workloads dashboard in Cloud Console.
In the Deployment details page, click Container logs.
To see all Pods running in your cluster, run the following command:
kubectl get pods
Look for the Pod whose name begins with dicom-adapter.
To get the Pod's logs, run the following command:
kubectl logs POD_NAME
After configuring the prototype in this guide, you can start using Cloud VPN to encrypt traffic between your PACS and the Cloud Healthcare API.