Using the Cloud Healthcare API MLLP adapter

Before beginning this guide, familiarize yourself with the conceptual documentation on MLLP and the Google Cloud Platform MLLP adapter. The conceptual documentation provides an overview of MLLP, how care systems can send and receive messages to and from the Cloud Healthcare API over an MLLP connection, and the basics of MLLP security.

The Google Cloud Healthcare and Life Sciences team provides an open source MLLP adapter hosted on GitHub. The adapter can run in three environments:

  • Locally/on-premises
  • In a Compute Engine virtual machine (VM) instance
  • In a container on Google Kubernetes Engine

This tutorial provides instructions for running the MLLP adapter locally/on-premises and in a container on GKE, both with and without Cloud VPN.

Objectives

After completing this tutorial, you'll know how to:

  • Run the MLLP adapter locally and send HL7v2 messages through it to the Cloud Healthcare API.
  • Deploy the MLLP adapter to a GKE cluster and send HL7v2 messages to it through an internal load balancer.
  • Configure Cloud VPN and send HL7v2 messages from an "on-premises" instance to the adapter over a secure connection.

Costs

This tutorial uses billable components of GCP, including:

  • Google Kubernetes Engine
  • Compute Engine
  • Cloud VPN
  • Cloud Pub/Sub

Use the Pricing Calculator to generate a cost estimate based on your projected usage. New Cloud Platform users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have an account, sign up here for a new one.

  2. Select or create a Google Cloud Platform project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your Google Cloud Platform project.

    Learn how to enable billing

  4. Enable the Cloud Healthcare API, Google Kubernetes Engine, Container Registry, and Cloud Pub/Sub APIs. (For a gcloud alternative, see the sketch after this list.)

    Enable the APIs

  5. Install and initialize the Cloud SDK.
  6. Tip: Need a command prompt? You can use the Google Cloud Shell. The Google Cloud Shell is a command line environment that already includes Docker, kubectl, and the Google Cloud SDK, so you don't need to install them.

  7. Download and install Docker.
  8. If you are only testing the adapter locally, you do not need to complete any more steps and can continue to Creating a dataset. If you are deploying the adapter to GKE, complete the following steps as well:

  9. Download and install the kubectl command-line tool.
  10. If you are new to GKE, you should complete the quickstart, in which you'll enable the GKE API and learn how the product works.
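
If you prefer the command line, you can enable the required APIs with the gcloud services enable command. The following is a sketch; the service names assume the standard API identifiers for the Cloud Healthcare API, Google Kubernetes Engine, Container Registry, and Cloud Pub/Sub:

gcloud services enable healthcare.googleapis.com \
    container.googleapis.com \
    containerregistry.googleapis.com \
    pubsub.googleapis.com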

Creating a dataset

If you haven't already created a Cloud Healthcare API dataset, do so using either the Google Cloud Platform Console or the gcloud command-line tool.

Console

To create a dataset:
  1. In the GCP Console, go to the Datasets page.

    Go to the Datasets page

  2. Click Create dataset.
  3. Choose a dataset identifier that's unique in your project and region. If the identifier is not unique, the dataset creation fails.
  4. Choose the region where the dataset permanently resides and then click Create.
The new dataset appears in the list.

gcloud

To create a dataset, run the gcloud beta healthcare datasets create command:

gcloud beta healthcare datasets create DATASET_ID \
    --location=REGION

If the request is successful, the command returns the following output:

Create request issued for: [DATASET_ID]
Waiting for operation [OPERATION_ID] to complete...done.
Created dataset [DATASET_ID].

Creating a Cloud Pub/Sub topic and subscription

As described in Configuring Cloud Pub/Sub notifications, you need to configure a Cloud Pub/Sub topic with your HL7v2 store to receive notifications when messages are ingested.

To create a topic:

Console

  1. Go to the Cloud Pub/Sub Topics page in the GCP Console.

    Go to the Cloud Pub/Sub topics page

  2. Click Create Topic.

  3. Enter a topic name with the URI:

    projects/PROJECT_ID/topics/TOPIC_NAME

    where PROJECT_ID is your GCP project ID.

  4. Click Create.

gcloud

Run the gcloud pubsub topics create command:

gcloud pubsub topics create projects/PROJECT_ID/topics/TOPIC_NAME

If the request is successful, the command returns the following output:

Created topic [projects/PROJECT_ID/topics/TOPIC_NAME].

To create a subscription:

Console

  1. Go to the Cloud Pub/Sub Topics page in the GCP Console.

    Go to the Cloud Pub/Sub topics page

  2. Click your project's topic.

  3. Click Create Subscription.

  4. Enter a subscription name:

    projects/PROJECT_ID/subscriptions/SUBSCRIPTION_NAME

    Leave Delivery Type set to Pull.

  5. Click Create.

gcloud

Run the gcloud pubsub subscriptions create command:

gcloud pubsub subscriptions create SUBSCRIPTION_NAME --topic=TOPIC_NAME

If the request is successful, the command returns the following output:

Created subscription [projects/PROJECT_ID/subscriptions/SUBSCRIPTION_NAME].

Creating an HL7v2 store configured with a Cloud Pub/Sub topic

Create an HL7v2 store and configure it with a Cloud Pub/Sub topic using either the GCP Console or the gcloud command-line tool. Note that, to create an HL7v2 store, you need to have already created a dataset. For the purposes of this tutorial, use the same project for your HL7v2 store and for the Cloud Pub/Sub topic.

Console

  1. Go to the Healthcare Datasets page in the GCP Console.

    Go to the Healthcare Datasets page

  2. Select the dataset where you want to create the HL7v2 store.

  3. Click Create data store.

  4. Enter a name of your choice that's unique in your dataset. If the name is not unique, the HL7v2 store creation fails.

  5. Select HL7v2.

  6. Enter a Cloud Pub/Sub topic name with the same URI that you used when creating the Cloud Pub/Sub topic:

    projects/PROJECT_ID/topics/TOPIC_NAME

  7. Click Create.

The new HL7v2 store appears in the list.

gcloud

To create an HL7v2 store with a Cloud Pub/Sub topic, run the gcloud beta healthcare hl7v2-stores create command and supply the same URI that you used when creating the Cloud Pub/Sub topic:

gcloud beta healthcare hl7v2-stores create HL7V2_STORE_ID \
    --location=REGION \
    --dataset=DATASET_ID \
    --pubsub-topic=projects/PROJECT_ID/topics/PUBSUB_TOPIC

The command returns the following output:

Created hl7v2Store [HL7V2_STORE_ID].

Configuring Cloud Pub/Sub permissions

To send notifications to Cloud Pub/Sub when an HL7v2 message is ingested, you need to configure Cloud Pub/Sub permissions on the Cloud Healthcare API. This step needs to be done only once per project.

You can use the GCP Console or the gcloud command-line tool to add the required pubsub.publisher role to your project's service account:

Console

  1. Make sure that you have enabled the Cloud Healthcare API.
  2. On the Cloud IAM page in the GCP Console, verify that the role Healthcare Service Agent appears in the Role column for the relevant project service account. (Look for the project service account that ends in @gcp-sa-healthcare.iam.gserviceaccount.com.)
  3. In the Inheritance column for that role, click the pencil icon. The Edit permissions pane opens.
  4. Click Add another role, then search for the Pub/Sub Publisher role.
  5. Select the role, then click Save. The pubsub.publisher role is then added to the service account.

gcloud

To add the service account permissions, run the gcloud projects add-iam-policy-binding command. To find the PROJECT_ID and PROJECT_NUMBER, refer to Identifying projects.

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-healthcare.iam.gserviceaccount.com \
    --role=roles/pubsub.publisher

Pulling the pre-built Docker image

The MLLP adapter is a containerized application staged in a pre-built Docker image in Container Registry. To pull the latest version of the image, run the following command:

docker pull gcr.io/cloud-healthcare-containers/mllp-adapter:latest
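
To confirm that the image is now available locally, you can list it with Docker:

docker images gcr.io/cloud-healthcare-containers/mllp-adapter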

Testing the MLLP adapter locally

To test the adapter locally, complete the following steps:

  1. Run the following command on the same machine where you pulled the pre-built Docker image:

    docker run \
        --network=host \
        gcr.io/cloud-healthcare-containers/mllp-adapter \
        /usr/mllp_adapter/mllp_adapter \
        --hl7_v2_project_id=PROJECT_ID \
        --hl7_v2_location_id=REGION \
        --hl7_v2_dataset_id=DATASET_ID \
        --hl7_v2_store_id=HL7V2_STORE_ID \
        --export_stats=false \
        --receiver_ip=127.0.0.1 \
        --pubsub_project_id=PROJECT_ID \
        --pubsub_subscription=PUBSUB_SUBSCRIPTION \
        --api_addr_prefix=https://healthcare.googleapis.com:443/v1beta1 \
        --logtostderr
    

    where:

    • PROJECT_ID is the ID for the GCP project containing your HL7v2 store.
    • REGION is the region where your HL7v2 store is located.
    • DATASET_ID is the ID for the parent dataset of your HL7v2 store.
    • HL7V2_STORE_ID is the ID for the HL7v2 store to which you are sending HL7v2 messages.
    • PROJECT_ID is the ID for the GCP project containing the Cloud Pub/Sub topic.
    • PUBSUB_SUBSCRIPTION is the name of the subscription associated with your Cloud Pub/Sub topic.

    After you run the command, the adapter runs on your local machine, listening on IP address 127.0.0.1 and port 2575.

  2. The adapter runs as a foreground process, so to continue with testing, open a different terminal on your local machine.

  3. Install Netcat:

    sudo apt install netcat
    
  4. Download the hl7v2-mllp-sample.txt file and save it to your local machine. In the same directory where you downloaded the file, run the following command to start sending HL7v2 messages to your HL7v2 store:

    echo -n -e "\x0b$(cat hl7v2-mllp-sample.txt)\x1c\x0d" | nc localhost 2575
    

    After running the command, the message will be sent through the MLLP adapter to your HL7v2 store. If the message was successfully ingested into the HL7v2 store, the command returns the following output:

    MSA|AA|20150503223000|ILITY|FROM_APP|FROM_FACILITY|20190312162410||ACK|f4c59243-19c2-4373-bea0-39c1b2ba616b|P|2.5
    

    This output indicates that the HL7v2 store responded with an AA (Application Accept) response type, meaning that the message was validated and successfully ingested.

  5. You can also verify that the message was successfully sent by opening the terminal where you ran the adapter. The output should look like the following:

     I0213 00:00:00.000000       1 healthapiclient.go:164] Dialing connection to https://healthcare.googleapis.com:443/v1beta1
     I0213 00:00:00.000000       1 mllpreceiver.go:107] Accepted connection from [::1]:40394
     I0213 00:00:00.000000       1 healthapiclient.go:183] Sending message of size 319.
     I0213 00:00:00.000000       1 healthapiclient.go:205] Message was successfully sent.
    
  6. Run the gcloud pubsub subscriptions pull command to view the message published to the Cloud Pub/Sub topic:

    gcloud pubsub subscriptions pull --auto-ack PUBSUB_SUBSCRIPTION
    

    The command returns the following output about the ingested HL7v2 message:

    ┌---------------------------------------------------------------------------------------------------------------|-----------------|---------------┐
    |                                                               DATA                                            |    MESSAGE_ID   |   ATTRIBUTES  |
    ├---------------------------------------------------------------------------------------------------------------|-----------------|---------------|
    | projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/HL7V2_MESSAGE_ID | 123456789012345 | msgType=ADT   |
    └---------------------------------------------------------------------------------------------------------------|-----------------|---------------┘
    
  7. You can also list the messages in your HL7v2 store to see if the message was added:

    curl command

    curl -X GET \
         -H "Authorization: Bearer "$(gcloud auth print-access-token) \
         -H "Content-Type: application/json; charset=utf-8" \
         "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages"
    

    If the request is successful, the server returns a 200 OK HTTP status code and the message's ID in a resource path:

    200 OK
    {
      "messages": [
        "projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/MESSAGE_ID"
      ]
    }
    

    PowerShell

    $cred = gcloud auth print-access-token
    $headers = @{ Authorization = "Bearer $cred" }
    
    Invoke-WebRequest `
      -Method Get `
      -Headers $headers `
      -ContentType: "application/json; charset=utf-8" `
      -Uri "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages" | Select-Object -Expand Content
    

    If the request is successful, the server returns a 200 OK HTTP status code and the message's ID in a resource path:

    200 OK
    {
      "messages": [
        "projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/MESSAGE_ID"
      ]
    }
    

After verifying that you can run the MLLP adapter on your local machine and send HL7v2 messages to the Cloud Healthcare API, you can continue to the next section on Deploying the MLLP adapter to Google Kubernetes Engine.

Deploying the MLLP adapter to Google Kubernetes Engine

When transmitting HL7v2 messages over MLLP from your care center, one possible configuration is to send the messages to an adapter that is deployed in GCP and can forward them to the Cloud Healthcare API.

The MLLP adapter runs as a stateless application on a GKE cluster. A GKE cluster is a managed group of VM instances for running containerized applications. Stateless applications are applications which do not store data or application state to the cluster or to persistent storage. Instead, data and application state stay with the client, which makes stateless applications more scalable.

GKE uses the Deployment controller to deploy stateless applications as uniform, non-unique Pods. Deployments manage the desired state of your application: how many Pods should run your application, what version of the container image should run, what the Pods should be labelled, and so on. The desired state can be changed dynamically through updates to the Deployment's Pod specification.
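
For example, once the Deployment described later in this tutorial exists, you could change its desired state by scaling the number of replicated Pods. The following is a sketch that uses the Deployment name from this tutorial's manifest:

kubectl scale deployment mllp-adapter-deployment --replicas=2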

At the same time that you deploy the adapter, you create a Service controller that allows you to connect the adapter to the Cloud Healthcare API using internal load balancing.

Adding Cloud Pub/Sub API permissions to the GKE service account

As stated in the GKE documentation on Authenticating to Cloud Platform with service accounts, each node in a container cluster is a Compute Engine instance. Therefore, when the MLLP adapter runs on a container cluster, it automatically inherits the scopes of the Compute Engine instances to which it is deployed.

GCP automatically creates a service account named "Compute Engine default service account" and GKE associates this service account with the nodes that GKE creates. Depending on how your project is configured, the default service account might or might not have permissions to use other Cloud Platform APIs. GKE also assigns some limited access scopes to Compute Engine instances.

For best results, don't authenticate to other GCP services (such as Cloud Pub/Sub) from Pods running on GKE by updating the default service account's permissions or assigning more access scopes to Compute Engine instances. Instead, create your own service accounts.

You have to grant the necessary Cloud Pub/Sub permissions to the container cluster, but you also have the option of granting permissions to write metrics to Stackdriver Monitoring.

Create a new service account that contains only the scopes that the container cluster requires. To create the service account, complete the following steps:

  1. In the GCP Console, go to the Create service account key page.

    Go to the Create Service Account Key page
  2. From the Service account list, select New service account.
  3. In the Service account name field, enter a name.
  4. From the Role list, select the following roles:

    • Pub/Sub > Pub/Sub Subscriber
    • Cloud Healthcare > Healthcare HL7v2 Message Ingest
    • (Optional) Monitoring > Monitoring Metric Writer
  5. Click Create.

After the service account is created, a JSON key file containing the service account's credentials is downloaded to your computer. You will use this service account, specified with the --service-account flag during cluster creation, so that the MLLP adapter can authenticate to the Cloud Healthcare API, Cloud Pub/Sub API, and Stackdriver Monitoring API.
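
If you prefer gcloud to the GCP Console, the following sketch creates a service account, grants it the roles listed above, and downloads a JSON key. The service account name mllp-adapter-gke is an example, and the role IDs (roles/pubsub.subscriber, roles/healthcare.hl7V2Ingest, and optionally roles/monitoring.metricWriter) are assumed to correspond to the roles named in the console:

gcloud iam service-accounts create mllp-adapter-gke \
    --display-name="MLLP adapter GKE service account"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:mllp-adapter-gke@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/pubsub.subscriber

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:mllp-adapter-gke@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/healthcare.hl7V2Ingest

gcloud iam service-accounts keys create mllp-adapter-key.json \
    --iam-account=mllp-adapter-gke@PROJECT_ID.iam.gserviceaccount.com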

Creating the cluster

Create the cluster in GKE by running the gcloud container clusters create command:

gcloud container clusters create mllp-adapter --zone=COMPUTE_ZONE --service-account CLIENT_EMAIL

where:

  • COMPUTE_ZONE is the zone in which your cluster is deployed. A zone is an approximate regional location in which your clusters and their resources live. For example, us-west1-a is a zone in the us-west1 region. If you've set a default zone previously using gcloud config set compute/zone, the value of this flag will override that default.
  • CLIENT_EMAIL is the identifier for the service account. You can find this in the service account key file in the "client_email": field. It takes the format SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com.

The command returns output similar to the following:

Creating cluster mllp-adapter in COMPUTE_ZONE...
Cluster is being configured...
Cluster is being deployed...
Cluster is being health-checked...
Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/PROJECT_ID/zones/COMPUTE_ZONE/clusters/mllp-adapter].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/COMPUTE_ZONE/mllp-adapter?project=PROJECT_ID
kubeconfig entry generated for mllp-adapter.
NAME          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
mllp-adapter  COMPUTE_ZONE   1.11.7-gke.4    203.0.113.1    n1-standard-1  1.11.7-gke.4  3          RUNNING

After creating the cluster, GKE creates three Compute Engine VM instances. You can verify this by listing the instances with the following command:

gcloud compute instances list

Configuring the Deployment

When deploying an application to GKE, you define properties of the Deployment using a Deployment manifest file, which is typically a YAML file. (For an example, see Creating a Deployment).

Open a separate terminal and, using a text editor, create a Deployment manifest file called mllp_adapter.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mllp-adapter-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mllp-adapter
    spec:
      containers:
        - name: mllp-adapter
          imagePullPolicy: Always
          image: gcr.io/cloud-healthcare-containers/mllp-adapter
          ports:
            - containerPort: 2575
              protocol: TCP
              name: "port"
          command:
            - "/usr/mllp_adapter/mllp_adapter"
            - "--port=2575"
            - "--hl7_v2_project_id=PROJECT_ID"
            - "--hl7_v2_location_id=REGION"
            - "--hl7_v2_dataset_id=DATASET_ID"
            - "--hl7_v2_store_id=HL7V2_STORE_ID"
            - "--api_addr_prefix=https://healthcare.googleapis.com:443/v1beta1"
            - "--logtostderr"
            - "--receiver_ip=RECEIVER_IP"

where:

  • PROJECT_ID is the ID for the GCP project containing your HL7v2 store.
  • REGION is the region where your HL7v2 store is located.
  • DATASET_ID is the ID for the parent dataset of your HL7v2 store.
  • HL7V2_STORE_ID is the ID for the HL7v2 store to which you are sending HL7v2 messages.
  • RECEIVER_IP is the IP address for the originating care center. If you are testing from a local machine that is acting as the care center, set this to 0.0.0.0.

The Deployment has the following properties:

  • spec: replicas: is the number of replicated Pods that the Deployment manages.
  • spec: template: metadata: labels: is the label given to each Pod, which the Deployment uses to manage the Pods.
  • spec: template: spec: is the Pod specification, which defines how each Pod should run. spec: containers includes the name of the container to run in each Pod and the container image that should run.

For more information about the Deployment specification, see the Deployment API reference.

Configuring the Service

To make the MLLP adapter accessible to applications outside of the cluster (such as a care center), you must configure an internal load balancer.

If you haven't configured a VPN, then applications can access the MLLP adapter through the internal load balancer as long as the applications use the same VPC network and are located in the same GCP region. For example, suppose that the cluster where the MLLP adapter runs is located in the us-central1-a zone and you need to make the adapter accessible to a Compute Engine VM instance running in the same region on the same VPC network. You could do so by adding an internal load balancer to the cluster's Service resource.

In the same directory where you created the Deployment manifest file, use the text editor to create a Service manifest file called mllp_adapter_service.yaml with the following content. This file is responsible for configuring internal load balancing:

apiVersion: v1
kind: Service
metadata:
  name: mllp-adapter-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
  - name: port
    port: 2575
    targetPort: 2575
    protocol: TCP
  selector:
    app: mllp-adapter

The Service has the following properties:

  • metadata: name: is the name you choose for the Service. In this case, it's mllp-adapter-service.
  • metadata: annotations: is an annotation that specifies that an internal load balancer is to be configured.
  • spec: type: is the type of load balancer.
  • ports: port: is used to specify the port on which the service can receive traffic from other services in the same cluster. The default MLLP port of 2575 is used.
  • ports: targetPort: is used to specify the port on each Pod where the service is actually running.
  • spec: selector: app: specifies the Pods that the Service targets.

Although it's possible to specify an IP address for the load balancer (using the loadBalancerIP field), the load balancer can generate its own IP address to which you can send messages. For now, allow the cluster to generate the IP address, which you'll use later on in this tutorial.

For more information on internal load balancing, see the GKE documentation.

For more information about the Service specification, see the Service API reference.

Deploying the Deployment

In the directory containing the mllp_adapter.yaml Deployment manifest file, run the following command to deploy the adapter to a GKE cluster:

kubectl apply -f mllp_adapter.yaml

The command returns the following output:

deployment.extensions "mllp-adapter-deployment" created

Inspecting the Deployment

After you create the Deployment, you can use the kubectl tool to inspect it.

To get detailed information about the Deployment, run the following command:

kubectl describe deployment mllp-adapter

To list the Pod created by the Deployment, run the following command:

kubectl get pods -l app=mllp-adapter

To get information about the created Pod:

kubectl describe pod POD_NAME

If the Deployment was successful, the last part of the output from the above command should contain the following information:

Events:
  Type    Reason     Age   From                                                  Message
  ----    ------     ----  ----                                                  -------
  Normal  Scheduled  1m    default-scheduler                                     Successfully assigned default/mllp-adapter-deployment-85b46f8-zxw68 to gke-mllp-adapter-default-pool-9c42852d-95sn
  Normal  Pulling    1m    kubelet, gke-mllp-adapter-default-pool-9c42852d-95sn  pulling image "gcr.io/cloud-healthcare-containers/mllp-adapter"
  Normal  Pulled     1m    kubelet, gke-mllp-adapter-default-pool-9c42852d-95sn  Successfully pulled image "gcr.io/cloud-healthcare-containers/mllp-adapter"
  Normal  Created    1m    kubelet, gke-mllp-adapter-default-pool-9c42852d-95sn  Created container
  Normal  Started    1m    kubelet, gke-mllp-adapter-default-pool-9c42852d-95sn  Started container

Deploying the Service and creating the internal load balancer

In the directory containing the mllp_adapter_service.yaml Service manifest file, run the following command to create the internal load balancer:

kubectl apply -f mllp_adapter_service.yaml

The command returns the following output:

service "mllp-adapter-service" created

Inspecting the Service

After creating the Service, inspect it to verify that it has been configured successfully.

To inspect the internal load balancer, run the following command:

kubectl describe service mllp-adapter-service

The command's output is similar to the following:

Name:                     mllp-adapter-service
Namespace:                default
Labels:                   <none>
Annotations:              cloud.google.com/load-balancer-type=Internal
                          kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/load-balancer-type":"Internal"},"name":"mllp-adapter-service","namespa...
Selector:                 app=mllp-adapter
Type:                     LoadBalancer
IP:                       203.0.113.1
LoadBalancer Ingress:     203.0.113.1
Port:                     port  2575/TCP
TargetPort:               2575/TCP
NodePort:                 port  30660/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  1m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m    service-controller  Ensured load balancer

Take note of the LoadBalancer Ingress IP address. You will use this IP address and the 2575 port to access the Service from outside the cluster in the next step.
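
If you want to capture this IP address in a script rather than copying it from the output, you can query the Service with a JSONPath expression, for example:

kubectl get service mllp-adapter-service \
    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'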

Creating a Compute Engine VM and sending messages

Whereas earlier in this tutorial you tested the MLLP adapter locally and sent HL7v2 messages to your HL7v2 store, you will now send messages from a Compute Engine VM to the MLLP adapter running on GKE. The messages will then be forwarded to an HL7v2 store.

To send requests from the new instance to the GKE cluster, the instance and the existing instances must be in the same region and use the same VPC network.

At the end of this section, you will list the notifications published to your Cloud Pub/Sub topic and will list the HL7v2 messages in your HL7v2 store. The Compute Engine VM instance will need to be granted permissions to perform these tasks. Before creating the instance, create a new service account with the required permissions:

  1. In the GCP Console, go to the Create service account key page.

    Go to the Create Service Account Key page
  2. From the Service account list, select New service account.
  3. In the Service account name field, enter a name.
  4. From the Role list, select the following two roles:

    • Pub/Sub > Pub/Sub Subscriber
    • Cloud Healthcare > Healthcare HL7v2 Message Consumer
  5. Click Create.

The following steps show how to create a Linux virtual machine instance in Compute Engine using the GCP Console:

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click Create instance.
  3. Choose a Region and Zone for the instance that matches the zone you selected when you created the cluster. For example, if you used us-central1-a for the COMPUTE_ZONE when you created the cluster, then in the instance creation screen select us-central1 (Iowa) for the Region and us-central1-a for the Zone.
  4. In the Boot disk section, click Change to begin configuring your boot disk.
  5. On the OS images tab, choose Debian 9.
  6. Click Select.
  7. In the Identity and API access section, select the service account you just created.
  8. In the Firewall section, select Allow HTTP traffic.
  9. Click Create to create the instance.

Allow a short time for the instance to start up. Once ready, it will be listed on the VM Instances page with a green status icon.

By default, the instance uses the same default VPC network that the cluster uses, which means that traffic can be sent from the instance to the cluster.
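
As an alternative to the console steps above, the following gcloud sketch creates an equivalent instance. The instance name mllp-sender, the image family, and the http-server network tag are examples; CLIENT_EMAIL is the email address of the service account you just created, and the cloud-platform scope lets the service account's IAM roles govern API access:

gcloud compute instances create mllp-sender \
    --project=PROJECT_ID \
    --zone=COMPUTE_ZONE \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --service-account=CLIENT_EMAIL \
    --scopes=cloud-platform \
    --tags=http-server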

To connect to the instance, complete the following steps:

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. In the list of virtual machine instances, click SSH in the row of the instance that you just created.

You now have a terminal window for interacting with your Linux instance.

  1. In the terminal window, install Netcat:

    sudo apt install netcat
    
  2. Download the hl7v2-mllp-sample.txt file and save it to the instance. In the same directory where you downloaded the file, run the following command to start sending HL7v2 messages through the MLLP adapter to your HL7v2 store. Use the value of LoadBalancer Ingress that was displayed when you inspected the Service.

    echo -n -e "\x0b$(cat hl7v2-mllp-sample.txt)\x1c\x0d" | nc LOAD_BALANCER_INGRESS_IP_ADDRESS 2575
    

    After running the command, the message will be sent through the MLLP adapter to your HL7v2 store. If the message was successfully ingested into the HL7v2 store, the command returns the following output:

    MSA|AA|20150503223000|ILITY|FROM_APP|FROM_FACILITY|20190312162410||ACK|f4c59243-19c2-4373-bea0-39c1b2ba616b|P|2.5
    

    This output indicates that the HL7v2 store responded with an AA (Application Accept) response type, meaning that the message was validated and successfully ingested.

  3. Run the gcloud pubsub subscriptions pull command to view the message published to the Cloud Pub/Sub topic:

    gcloud pubsub subscriptions pull --auto-ack PUBSUB_SUBSCRIPTION
    

    The command returns the following output about the ingested HL7v2 message:

    ┌---------------------------------------------------------------------------------------------------------------|-----------------|---------------┐
    |                                                               DATA                                            |    MESSAGE_ID   |   ATTRIBUTES  |
    ├---------------------------------------------------------------------------------------------------------------|-----------------|---------------|
    | projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/HL7V2_MESSAGE_ID | 123456789012345 | msgType=ADT   |
    └---------------------------------------------------------------------------------------------------------------|-----------------|---------------┘
    
  4. You can also list the messages in your HL7v2 store to see if the message was added:

    curl command

    curl -X GET \
         -H "Authorization: Bearer "$(gcloud auth print-access-token) \
         -H "Content-Type: application/json; charset=utf-8" \
         "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages"
    

    If the request is successful, the server returns a 200 OK HTTP status code and the message's ID in a resource path:

    200 OK
    {
      "messages": [
        "projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/MESSAGE_ID"
      ]
    }
    

    PowerShell

    $cred = gcloud auth print-access-token
    $headers = @{ Authorization = "Bearer $cred" }
    
    Invoke-WebRequest `
      -Method Get `
      -Headers $headers `
      -ContentType: "application/json; charset=utf-8" `
      -Uri "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages" | Select-Object -Expand Content
    

    If the request is successful, the server returns a 200 OK HTTP status code and the message's ID in a resource path:

    200 OK
    {
      "messages": [
        "projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/MESSAGE_ID"
      ]
    }
    

After completing this section, you have successfully deployed the MLLP adapter to GKE and sent an HL7v2 message from a remote instance through the adapter and to the Cloud Healthcare API.

In the rest of this guide, you will learn how to configure a VPN between a Compute Engine instance (which acts as an "on-premises" instance) and the adapter to ensure that the transmitted HL7v2 messages are securely encrypted.

Configuring a VPN

Using a VPN allows you to extend the private network on which you send HL7v2 messages across a public network, such as the internet. By using a VPN, you can send messages from your care center through the MLLP adapter and to GCP, and the systems in this flow will act as though they were on a single private network.

You can secure your MLLP connection with a VPN in more than one way. This tutorial shows how to use Cloud VPN.

Configuring Cloud VPN

Cloud VPN securely connects your on-premises network to your GCP Virtual Private Cloud (VPC) network through an IPsec VPN connection. Traffic traveling between the two networks is encrypted by one VPN gateway, then decrypted by the other VPN gateway. This protects your data as it travels over the internet or over a care center network.

In this guide, each VPN gateway you configure is located on a different custom network and subnet in a different GCP region.

The VPN gateway configured in us-central1 acts as the Cloud VPN gateway on the GCP side, while the VPN gateway configured in europe-west1 simulates your "on-premises" gateway.

Naming and addressing reference

For reference, this guide uses the following naming and IP addressing:

GCP side

  • Network name: cloud-vpn-network
  • Subnet name: subnet-us-central-10-0-1
  • Region: us-central1
  • Subnet range: 10.0.1.0/24
  • External IP address name: cloud-vpn-ip
  • VPN gateway name: vpn-us-central
  • VPN tunnel name: vpn-us-central-tunnel-1

"On-premises" side

  • Network name: on-prem-vpn-network
  • Subnet name: subnet-europe-west-10-0-2
  • Region: europe-west1
  • Subnet range: 10.0.2.0/24
  • External IP address name: on-prem-vpn-ip
  • VPN gateway name: vpn-europe-west
  • VPN tunnel name: vpn-europe-west-tunnel-1

Creating custom VPC networks and subnets

The first step in configuring Cloud VPN is to create two VPC networks. One network, called on-prem-vpn-network, is configured in the "on-premises" environment and runs on a Compute Engine VM instance called on-prem-instance. The other network, called cloud-vpn-network, is what the GKE cluster running the MLLP adapter uses. You will connect to the on-prem-instance VM and send HL7v2 messages to the MLLP adapter running under the cloud-vpn-network network through the MLLP adapter's internal load balancer.

Create two custom VPC networks and their subnets:

  1. Create the first VPC network, cloud-vpn-network:

    gcloud compute networks create cloud-vpn-network \
       --project=PROJECT_ID \
       --subnet-mode=custom
    
  2. Create the subnet-us-central-10-0-1 subnet for the cloud-vpn-network network:

    gcloud compute networks subnets create subnet-us-central-10-0-1 \
       --project=PROJECT_ID \
       --region=us-central1 \
       --network=cloud-vpn-network \
       --range=10.0.1.0/24
    
  3. Create the on-prem-vpn-network VPC network:

    gcloud compute networks create on-prem-vpn-network \
       --project=PROJECT_ID \
       --subnet-mode=custom
    
  4. Create the subnet-europe-west-10-0-2 subnet for the on-prem-vpn-network VPC network:

    gcloud compute networks subnets create subnet-europe-west-10-0-2 \
       --project=PROJECT_ID \
       --region=europe-west1 \
       --network=on-prem-vpn-network \
       --range=10.0.2.0/24
    
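You can verify that both networks and their subnets were created by listing them:

gcloud compute networks list --project=PROJECT_ID

gcloud compute networks subnets list --project=PROJECT_ID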

Creating an external IP address

Before creating the VPN gateways, reserve an external IP address for each gateway:

  1. Reserve a regional external (static) IP address for the cloud-vpn-ip address:

    gcloud compute addresses create cloud-vpn-ip \
       --project=PROJECT_ID \
       --region=us-central1
    
  2. Reserve a regional external (static) IP address for on-prem-vpn-ip address:

    gcloud compute addresses create on-prem-vpn-ip \
       --project=PROJECT_ID \
       --region=europe-west1
    
  3. Retrieve and make note of the external IP addresses so that you can use them to configure the VPN gateways in the next section:

    Cloud VPN IP address:

    gcloud compute addresses describe cloud-vpn-ip  \
       --project PROJECT_ID \
       --region us-central1 \
       --format='flattened(address)'
    

    "On-premises" VPN IP address:

    gcloud compute addresses describe on-prem-vpn-ip \
       --project PROJECT_ID \
       --region europe-west1 \
       --format='flattened(address)'
    

    The commands return output similar to the following:

    address: 203.0.113.1
    

Creating the VPN gateways, tunnels, and routes

Before starting this section, create a cryptographically strong pre-shared key (shared secret) by following the instructions in Generating a strong pre-shared key. Save this key so that you can use it when creating tunnels into your VPNs.
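
For example, one common way to generate such a key is with openssl; this is a sketch, and any method from the linked instructions works:

openssl rand -base64 24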

Complete the following steps to create the VPN gateway, tunnel, and route for the Cloud VPN:

  1. Create the target VPN gateway object:

    gcloud compute target-vpn-gateways create vpn-us-central \
       --project PROJECT_ID \
       --region us-central1 \
       --network cloud-vpn-network
    
  2. Create three forwarding rules. This step instructs GCP to send ESP (IPsec), UDP 500, and UDP 4500 traffic to the gateway. Replace the CLOUD_VPN_EXTERNAL_ADDRESS variable with the value from the Cloud VPN IP address in the previous section:

    ESP (IPsec):

    gcloud compute forwarding-rules create vpn-us-central-rule-esp \
        --project PROJECT_ID \
        --region us-central1 \
        --address CLOUD_VPN_EXTERNAL_ADDRESS \
        --ip-protocol ESP \
        --target-vpn-gateway vpn-us-central
    

    UDP 500:

    gcloud compute forwarding-rules create vpn-us-central-rule-udp500 \
        --project PROJECT_ID \
        --region us-central1 \
        --address CLOUD_VPN_EXTERNAL_ADDRESS \
        --ip-protocol UDP \
        --ports 500 \
        --target-vpn-gateway vpn-us-central
    

    UDP 4500:

    gcloud compute forwarding-rules create vpn-us-central-rule-udp4500 \
        --project PROJECT_ID \
        --region us-central1 \
        --address CLOUD_VPN_EXTERNAL_ADDRESS \
        --ip-protocol UDP \
        --ports 4500 \
        --target-vpn-gateway vpn-us-central
    
  3. Create a tunnel into the Cloud VPN gateway. Replace ON_PREM_VPN_IP with the value from the "On-premises" VPN IP address in the previous section.

    gcloud compute vpn-tunnels create vpn-us-central-tunnel-1 \
        --project PROJECT_ID \
        --region us-central1 \
        --peer-address ON_PREM_VPN_IP \
        --shared-secret SHARED_SECRET \
        --ike-version 2 \
        --local-traffic-selector 0.0.0.0/0 \
        --target-vpn-gateway vpn-us-central
    
  4. Create a route. This step automatically creates a static route to 10.0.2.0/24.

    gcloud compute routes create "vpn-us-central-tunnel-1-route-1" \
       --project PROJECT_ID \
       --network "cloud-vpn-network" \
       --next-hop-vpn-tunnel "vpn-us-central-tunnel-1" \
       --next-hop-vpn-tunnel-region "us-central1" \
       --destination-range "10.0.2.0/24"
    

Complete the following steps to create the VPN gateway, tunnel, and route for the "on-premises" VPN:

  1. Create the target VPN gateway object:

    gcloud compute target-vpn-gateways create "vpn-europe-west" \
       --project PROJECT_ID \
       --region "europe-west1" \
       --network "on-prem-vpn-network"
    
  2. Create three forwarding rules. This step instructs GCP to send ESP (IPsec), UDP 500, and UDP 4500 traffic to the gateway. Replace the ON_PREMISES_VPN_EXTERNAL_ADDRESS variable with the value from the "On-premises" VPN IP address in the previous section:

    ESP (IPsec):

    gcloud compute forwarding-rules create vpn-europe-west-rule-esp \
        --project PROJECT_ID \
        --region europe-west1 \
        --address ON_PREMISES_VPN_EXTERNAL_ADDRESS \
        --ip-protocol ESP \
        --target-vpn-gateway vpn-europe-west
    

    UDP 500:

    gcloud compute forwarding-rules create vpn-europe-west-rule-udp500 \
        --project PROJECT_ID \
        --region europe-west1 \
        --address ON_PREMISES_VPN_EXTERNAL_ADDRESS \
        --ip-protocol UDP \
        --ports 500 \
        --target-vpn-gateway vpn-europe-west
    

    UDP 4500:

    gcloud compute forwarding-rules create vpn-europe-west-rule-udp4500 \
         --project PROJECT_ID \
         --region europe-west1 \
         --address ON_PREMISES_VPN_EXTERNAL_ADDRESS \
         --ip-protocol UDP \
         --ports 4500 \
         --target-vpn-gateway vpn-europe-west
    
  3. Create a tunnel into the "on-premises" gateway:

    gcloud compute vpn-tunnels create vpn-europe-west-tunnel-1 \
       --project PROJECT_ID \
       --region europe-west1 \
       --peer-address CLOUD_VPN_IP \
       --shared-secret SHARED_SECRET \
       --ike-version 2 \
       --local-traffic-selector 0.0.0.0/0 \
       --target-vpn-gateway vpn-europe-west
    
  4. Create a route. This step automatically creates a static route to 10.0.1.0/24.

    gcloud compute routes create "vpn-europe-west-tunnel-1-route-1" \
       --project PROJECT_ID \
       --network "on-prem-vpn-network" \
       --next-hop-vpn-tunnel "vpn-europe-west-tunnel-1" \
       --next-hop-vpn-tunnel-region "europe-west1" \
       --destination-range "10.0.1.0/24"
    

You've now created the Cloud VPN and "on-premises" gateways and initiated their tunnels. The VPN gateways will not connect until you've created firewall rules to allow traffic through the tunnel between them.

Creating firewall rules

You must create firewall rules for both sides of the VPN tunnel. These rules allow all TCP, UDP, and ICMP traffic to ingress from the subnet on one side of the VPN tunnel to the other.

  1. Create the firewall rules for the Cloud VPN subnet:

    gcloud compute firewall-rules create allow-tcp-udp-icmp-cloud-vpn \
       --project=PROJECT_ID \
       --direction=INGRESS \
       --priority=1000 \
       --network=cloud-vpn-network \
       --action=ALLOW \
       --rules=tcp,udp,icmp \
       --source-ranges=10.0.2.0/24
    
  2. Create the firewall rules for the "on-premises" subnet:

    gcloud compute firewall-rules create allow-tcp-udp-icmp-on-prem-vpn \
       --project=PROJECT_ID \
       --direction=INGRESS \
       --priority=1000 \
       --network=on-prem-vpn-network \
       --action=ALLOW \
       --rules=tcp,udp,icmp \
       --source-ranges=10.0.1.0/24
    
  3. Create a firewall rule that lets you SSH into the VM instance on port 22:

    gcloud compute firewall-rules create on-prem-vpn-allow-ssh \
       --project=PROJECT_ID \
       --direction=INGRESS \
       --priority=1000 \
       --network=on-prem-vpn-network \
       --action=ALLOW \
       --rules=tcp:22 \
       --source-ranges=0.0.0.0/0
    

Checking the status of the VPN tunnel

To verify that your tunnel is up:

  1. Go to the VPN page in the GCP Console.

    Go to the VPN page

  2. Click the Google VPN Tunnels tab.

  3. In the Status field for each tunnel, look for a green check mark and the word "Established." If these items are there, your gateways have negotiated a tunnel. If no mark appears after a few minutes, see Troubleshooting.

    For additional logging information related to your VPN tunnels, see Checking VPN Logs on the Troubleshooting page. For example, you can view metrics about dropped packets, tunnel status, received bytes, and sent bytes.
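
You can also check the tunnel status from the command line, for example with the following sketch (the status and detailedStatus output fields are assumed from the vpn-tunnels resource):

gcloud compute vpn-tunnels describe vpn-us-central-tunnel-1 \
    --project PROJECT_ID \
    --region us-central1 \
    --format='flattened(status,detailedStatus)'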

Now that you've successfully configured Cloud VPN with the necessary gateways, tunnels, and firewall rules, you can create a secure connection between the "on-premises" VM instance and the MLLP adapter running on GKE.

Combining deployment to GKE and Cloud VPN

Whereas earlier in this tutorial you tested the MLLP adapter locally and sent HL7v2 messages over a non-VPN connection to the MLLP adapter, you will now send messages from a Compute Engine VM over a secure connection using Cloud VPN to the MLLP adapter running on GKE. The messages will then be forwarded to an HL7v2 store.

Re-creating the deployment

First, re-create the deployment on GKE so that the cluster uses the settings you configured in Configuring Cloud VPN:

  1. Run the gcloud container clusters delete command to delete the mllp-adapter cluster you previously created. Enter the same COMPUTE_ZONE value that you used when you created the cluster.

    gcloud container clusters delete mllp-adapter --zone=COMPUTE_ZONE
    
  2. Follow the same steps again in Deploying the MLLP adapter to Google Kubernetes Engine, but when you create the cluster in GKE, add the cloud-vpn-network network and the subnet-us-central-10-0-1 subnet that you created in Creating custom VPC networks and subnets.

    The command to create the cluster in GKE should look like this:

    gcloud container clusters create mllp-adapter \
       --zone=COMPUTE_ZONE \
       --service-account=CLIENT_EMAIL \
       --network=cloud-vpn-network \
       --subnetwork=subnet-us-central-10-0-1
    

    where:

    • COMPUTE_ZONE is the zone in which your cluster is deployed. When you configured Cloud VPN in the previous section, you set the "GCP side" network to use us-central1. This "GCP side" network is what the GKE cluster will run on. Use any of the following zones in us-central1: us-central1-c, us-central1-a, us-central1-f, us-central1-b.
    • CLIENT_EMAIL is the identifier for the service account. You can find this in the service account key file in the "client_email": field. It takes the format SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com.

Creating a new Compute Engine VM with network settings

The following steps show how to create a Linux virtual machine instance in Compute Engine using the GCP Console. Unlike the Compute Engine VM you created previously, this VM will use the "'on-premises' side" network settings to communicate with the GKE cluster over a VPN.

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click Create instance.
  3. Choose a Region and Zone for the instance that matches the "'on-premises' side" network settings: europe-west1 (Belgium) for the Region and europe-west1-b for the Zone.
  4. In the Boot disk section, click Change to begin configuring your boot disk.
  5. On the OS images tab, choose Debian 9.
  6. Click Select.
  7. In the Identity and API access section, select the service account you just created.
  8. In the Firewall section, select Allow HTTP traffic.
  9. Expand the Management, security, disks, networking, sole tenancy section.
  10. Under Network interfaces in the Networking tab, specify the network details for the "'on-premises' side" network settings:
    • In the Network field, select on-prem-vpn-network.
    • In the Subnetwork field, select subnet-europe-west-10-0-2 (10.0.2.0/24).
  11. Click Create to create the instance.

Allow a short time for the instance to start up. Once ready, it will be listed on the VM Instances page with a green status icon.

To connect to the instance, complete the following steps:

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. In the list of virtual machine instances, click SSH in the row of the instance that you just created.

You now have a terminal window for interacting with your Linux instance.

  1. In the terminal window, install Netcat:

    sudo apt install netcat
    
  2. Download the hl7v2-mllp-sample.txt file and save it to the instance. In the same directory where you downloaded the file, run the following command to start sending HL7v2 messages through the MLLP adapter to your HL7v2 store. Use the value of LoadBalancer Ingress that was displayed when you inspected the Service.

    echo -n -e "\x0b$(cat hl7v2-mllp-sample.txt)\x1c\x0d" | nc LOAD_BALANCER_INGRESS_IP_ADDRESS 2575
    

    After running the command, the message will be sent through the MLLP adapter to your HL7v2 store. If the message was successfully ingested into the HL7v2 store, the command returns the following output:

    MSA|AA|20150503223000|ILITY|FROM_APP|FROM_FACILITY|20190312162410||ACK|f4c59243-19c2-4373-bea0-39c1b2ba616b|P|2.5
    

    This output indicates that the HL7v2 store responded with an AA (Application Accept) response type, meaning that the message was validated and successfully ingested.

  3. Run the gcloud pubsub subscriptions pull command to view the message published to the Cloud Pub/Sub topic:

    gcloud pubsub subscriptions pull --auto-ack PUBSUB_SUBSCRIPTION
    

    The command returns the following output about the ingested HL7v2 message:

    ┌---------------------------------------------------------------------------------------------------------------|-----------------|---------------┐
    |                                                               DATA                                            |    MESSAGE_ID   |   ATTRIBUTES  |
    ├---------------------------------------------------------------------------------------------------------------|-----------------|---------------|
    | projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/HL7V2_MESSAGE_ID | 123456789012345 | msgType=ADT   |
    └---------------------------------------------------------------------------------------------------------------|-----------------|---------------┘
    
  4. You can also list the messages in your HL7v2 store to see if the message was added:

    curl command

    curl -X GET \
         -H "Authorization: Bearer "$(gcloud auth print-access-token) \
         -H "Content-Type: application/json; charset=utf-8" \
         "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages"
    

    If the request is successful, the server returns a 200 OK HTTP status code and the message's ID in a resource path:

    200 OK
    {
      "messages": [
        "projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/MESSAGE_ID"
      ]
    }
    

    PowerShell

    $cred = gcloud auth print-access-token
    $headers = @{ Authorization = "Bearer $cred" }
    
    Invoke-WebRequest `
      -Method Get `
      -Headers $headers `
      -ContentType: "application/json; charset=utf-8" `
      -Uri "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages" | Select-Object -Expand Content
    

    If the request is successful, the server returns a 200 OK HTTP status code and the message's ID in a resource path:

    200 OK
    {
      "messages": [
        "projects/PROJECT_ID/locations/REGION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages/MESSAGE_ID"
      ]
    }
    

After completing this section, you have successfully deployed the MLLP adapter to GKE and, over a VPN, securely sent an HL7v2 message from an "on-premises" instance through the adapter and to the Cloud Healthcare API.

Troubleshooting

Adapter failures

After deploying the MLLP adapter to GKE, the adapter encounters a failure.
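
As a first diagnostic step, you can inspect the adapter's container logs using the label applied by the Deployment. This is a general Kubernetes technique rather than a step from this tutorial:

kubectl logs -l app=mllp-adapter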

Connection refused error when running locally

When testing the MLLP adapter locally, you encounter the error Connection refused.

  • This error can occur for some macOS users. Instead of using the --network=host flag, use -p 2575:2575. Also, instead of setting --receiver_ip=127.0.0.1, set --receiver_ip=0.0.0.0. The command should look like this:

    docker run \
      -p 2575:2575 \
      gcr.io/cloud-healthcare-containers/mllp-adapter \
      /usr/mllp_adapter/mllp_adapter \
      --hl7_v2_project_id=PROJECT_ID \
      --hl7_v2_location_id=REGION \
      --hl7_v2_dataset_id=DATASET_ID \
      --hl7_v2_store_id=HL7V2_STORE_ID \
      --export_stats=false \
      --receiver_ip=0.0.0.0 \
      --pubsub_project_id=PROJECT_ID \
      --pubsub_subscription=PUBSUB_SUBSCRIPTION \
      --api_addr_prefix=https://healthcare.googleapis.com:443/v1beta1 \
      --logtostderr
    

could not find default credentials error when running locally

When testing the MLLP adapter locally, you encounter the error healthapiclient.NewHL7V2Client: oauth2google.DefaultTokenSource: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information..

This error occurs when the adapter cannot find your GCP credentials. To fix the error, try one of the following methods:

  • Follow the instructions in Setting up authentication for server-to-server production applications to check that the adapter is finding your GCP credentials.
  • Install and initialize the Cloud SDK to access the gcloud command-line tool. Make sure that you authorize the Cloud SDK on the machine where you are testing the adapter locally. Then, re-run the command to run the adapter locally, but add the -v ~/.config:/root/.config flag. This flag grants the adapter access to your gcloud tool credentials. The command should look like this:

    docker run \
      --network=host \
      -v ~/.config:/root/.config \
      gcr.io/cloud-healthcare-containers/mllp-adapter \
      /usr/mllp_adapter/mllp_adapter \
      --hl7_v2_project_id=PROJECT_ID \
      --hl7_v2_location_id=REGION \
      --hl7_v2_dataset_id=DATASET_ID \
      --hl7_v2_store_id=HL7V2_STORE_ID \
      --export_stats=false \
      --receiver_ip=0.0.0.0 \
      --pubsub_project_id=PROJECT_ID \
      --pubsub_subscription=PUBSUB_SUBSCRIPTION \
      --api_addr_prefix=https://healthcare.googleapis.com:443/v1beta1 \
      --logtostderr
    