Quickstart using the gcloud command-line tool

This page shows you how to use the Cloud Healthcare API with the gcloud command-line tool to complete the following tasks:

  1. Create a Cloud Healthcare API dataset.
  2. Create one of the following data stores inside the dataset:
    • Digital Imaging and Communications in Medicine (DICOM) store
    • Fast Healthcare Interoperability Resources (FHIR) store
    • Health Level Seven International Version 2 (HL7v2) store
  3. Store and inspect a particular type of medical data in the DICOM, FHIR, or HL7v2 store.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Enable the Cloud Healthcare API.

    Enable the API

  5. Install gsutil, a tool that enables you to access Cloud Storage from the command line over HTTPS.
  6. Based on how you are using the gcloud command-line tool, complete one of the following steps:
    • If you are using Cloud Shell, go to Google Cloud Console and then click the Activate Cloud Shell button at the top of the console window.

      Go to Google Cloud Console

      A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. It can take a few seconds for the shell session to initialize.

    • If you are using a Compute Engine virtual machine, open the virtual machine's terminal window.
    • If you are using the gcloud tool on your machine, install and initialize the Cloud SDK.

Creating a dataset

Datasets are the basic containers that hold healthcare data in Google Cloud. To use the Cloud Healthcare API, you must create at least one dataset.

To create a dataset, use the gcloud healthcare datasets create command:

gcloud healthcare datasets create DATASET_ID \
    --location=LOCATION

Replace the following:

  • DATASET_ID: an identifier for the dataset. The dataset ID must be unique within the location. The dataset ID can be any Unicode string of 1-256 characters consisting of numbers, letters, underscores, dashes, and periods.
  • LOCATION: the Google Cloud location in which to create the dataset. Use us-central1, us-west2, us-east4, europe-west2, europe-west4, europe-west6, northamerica-northeast1, southamerica-east1, asia-east2, asia-northeast1, asia-southeast1, australia-southeast1, or us. To use the default region for the project, omit the --location option.

The output is the following:

Created dataset [DATASET_ID].
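
The ID rules above (1 through 256 characters consisting of letters, numbers, underscores, dashes, and periods) can be checked locally before calling the API. The following is a minimal sketch in Python that assumes ASCII letters and digits; the API itself remains the authority on which IDs it accepts:

```python
import re

# Dataset IDs: 1-256 characters of letters, numbers, underscores,
# dashes, and periods (per the rules above). ASCII-only approximation.
DATASET_ID_RE = re.compile(r"[A-Za-z0-9_.-]{1,256}")

def is_valid_dataset_id(dataset_id: str) -> bool:
    """Return True if dataset_id satisfies the documented character rules."""
    return DATASET_ID_RE.fullmatch(dataset_id) is not None

print(is_valid_dataset_id("my-dataset_1.0"))  # True
print(is_valid_dataset_id(""))                # False: too short
print(is_valid_dataset_id("bad id"))          # False: contains a space
```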

Storing and viewing DICOM, FHIR, and HL7v2 data

To complete this quickstart, choose one of the following sections:

Storing and viewing DICOM instances

This section shows how to complete the following tasks:

  1. Create a DICOM store.
  2. Import a DICOM instance from Cloud Storage into the DICOM store.
  3. (Optional) View the DICOM instance's metadata.

The Cloud Healthcare API implements the DICOMweb standard to store and access medical imaging data.

  1. DICOM stores exist inside datasets and contain DICOM instances. To create a DICOM store, use the gcloud healthcare dicom-stores create command:

    gcloud healthcare dicom-stores create DICOM_STORE_ID \
     --dataset=DATASET_ID \
     --location=LOCATION
    

    Replace the following:

    • DICOM_STORE_ID: an identifier for the DICOM store. The DICOM store ID must be unique in the dataset. The DICOM store ID can be any Unicode string from 1 through 256 characters consisting of numbers, letters, underscores, dashes, and periods.
    • DATASET_ID: the name of the DICOM store's parent dataset.
    • LOCATION: the location of the parent dataset.

    The output is the following:

    Created dicomStore [DICOM_STORE_ID].
    
  2. Sample DICOM data is available in the gs://gcs-public-data--healthcare-nih-chest-xray Cloud Storage bucket.

    Import the gs://gcs-public-data--healthcare-nih-chest-xray/dicom/00000001_000.dcm DICOM instance using the gcloud healthcare dicom-stores import command:

    gcloud healthcare dicom-stores import gcs DICOM_STORE_ID \
     --dataset=DATASET_ID \
     --location=LOCATION \
     --gcs-uri=gs://gcs-public-data--healthcare-nih-chest-xray/dicom/00000001_000.dcm
    

    Replace the following:

    • DICOM_STORE_ID: the identifier for the DICOM store.
    • DATASET_ID: the name of the DICOM store's parent dataset.
    • LOCATION: the location of the parent dataset.

    The output is the following:

    Request issued for: [DICOM_STORE_ID]
    Waiting for operation [projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/operations/OPERATION_ID] to complete...done.
    name: projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID
    
  3. (Optional) The gcloud tool does not support DICOMweb transactions, such as viewing or retrieving instances. Instead, you can use the DICOMweb command-line tool from Google. The DICOMweb command-line tool runs using Python. For information on how to set up Python on Google Cloud, see Setting up a Python development environment.

    After setting up Python, you can install the tool using pip:

    pip install https://github.com/GoogleCloudPlatform/healthcare-api-dicomweb-cli/archive/v1.0.zip
    

    To view the instance's metadata, run the following command using the DICOMweb command-line tool:

    dcmweb \
      https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/dicomStores/DICOM_STORE_ID/dicomWeb \
      search instances
    

    The output is the following:

    [
      {
        "00080016": {
          "Value": [
            "1.2.840.10008.5.1.4.1.1.7"
          ],
          "vr": "UI"
        },
        "00080018": {
          "Value": [
            "1.3.6.1.4.1.11129.5.5.153751009835107614666834563294684339746480"
          ],
          "vr": "UI"
        },
        "00080060": {
          "Value": [
            "DX"
          ],
          "vr": "CS"
        },
        "00100020": {
          "Value": [
            "1"
          ],
          "vr": "LO"
        },
        "00100040": {
          "Value": [
            "M"
          ],
          "vr": "CS"
        },
        "0020000D": {
          "Value": [
            "1.3.6.1.4.1.11129.5.5.111396399361969898205364400549799252857604"
          ],
          "vr": "UI"
        },
        "0020000E": {
          "Value": [
            "1.3.6.1.4.1.11129.5.5.195628213694300498946760767481291263511724"
          ],
          "vr": "UI"
        },
        "00280010": {
          "Value": [
            1024
          ],
          "vr": "US"
        },
        "00280011": {
          "Value": [
            1024
          ],
          "vr": "US"
        },
        "00280100": {
          "Value": [
            8
          ],
          "vr": "US"
        }
      }
    ]
    

Now that you've stored a DICOM instance in the Cloud Healthcare API, continue to What's next for information on next steps, such as how to search for or retrieve DICOM images.
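
The search output above uses the DICOM JSON model, in which each attribute is keyed by its eight-digit tag (for example, 00080060 is Modality and 00280010 is Rows). The following is a minimal Python sketch of reading a few values from such a response; the trimmed sample mirrors the output above:

```python
import json

# A trimmed DICOM JSON attribute set, mirroring the search response above.
response = json.loads("""
[{"00080060": {"Value": ["DX"], "vr": "CS"},
  "00280010": {"Value": [1024], "vr": "US"},
  "00280011": {"Value": [1024], "vr": "US"}}]
""")

def first_value(instance, tag):
    """Return the first value of a DICOM attribute, or None if absent."""
    values = instance.get(tag, {}).get("Value", [])
    return values[0] if values else None

instance = response[0]
print("Modality:", first_value(instance, "00080060"))       # DX
print("Rows:", first_value(instance, "00280010"))           # 1024
print("Columns:", first_value(instance, "00280011"))        # 1024
```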

Storing and viewing FHIR resources

This section shows how to complete the following tasks:

  1. Create a FHIR store.
  2. Create a Cloud Storage bucket and copy a FHIR resource file to the bucket.
  3. Import the FHIR resource from the Cloud Storage bucket into the FHIR store.

To complete this quickstart, follow these steps:

  1. FHIR stores exist inside datasets and contain FHIR resources. To create a FHIR store, use the gcloud healthcare fhir-stores create command:

    gcloud healthcare fhir-stores create FHIR_STORE_ID \
     --dataset=DATASET_ID \
     --location=LOCATION \
     --version=STU3
    

    Replace the following:

    • FHIR_STORE_ID: an identifier for the FHIR store. The FHIR store ID must be unique in the dataset. The FHIR store ID can be any Unicode string from 1 through 256 characters consisting of numbers, letters, underscores, dashes, and periods.
    • DATASET_ID: the name of the FHIR store's parent dataset.
    • LOCATION: the location of the parent dataset.
    • The --version flag sets the version of the FHIR store. The available versions are DSTU2, STU3, and R4. For this quickstart, use STU3.

    The output is the following:

    Created fhirStore [FHIR_STORE_ID].
    
  2. Save the sample JSON FHIR resource file to your machine. The file contains basic data for a Patient resource and an Encounter that the patient had.

  3. If you don't already have a Cloud Storage bucket that you want to use to store the sample FHIR resource file, create a new bucket using the gsutil mb command:

    gsutil mb gs://BUCKET
    

    Replace BUCKET with your own globally unique bucket name.

    The output is the following:

    Creating gs://BUCKET/...
    

    If the bucket name you chose is already in use, either by you or someone else, the command returns the following message:

    Creating gs://BUCKET/...
    ServiceException: 409 Bucket BUCKET already exists.
    

    If the bucket name is already in use, try again with a different bucket name.

  4. Copy the sample JSON FHIR resource file to the bucket using the gsutil cp command:

    gsutil cp resources.ndjson gs://BUCKET
    

    The output is the following:

    Copying file://resources.ndjson [Content-Type=application/octet-stream]...
    / [1 files][  860.0 B/  860.0 B]
    Operation completed over 1 objects/860.0 B.
    
  5. After copying the FHIR resource file to the bucket, import the FHIR resource using the gcloud healthcare fhir-stores import command:

    gcloud healthcare fhir-stores import gcs FHIR_STORE_ID \
     --dataset=DATASET_ID \
     --location=LOCATION \
     --gcs-uri=gs://BUCKET/resources.ndjson \
     --content-structure=RESOURCE
    

    Replace the following:

    • FHIR_STORE_ID: the identifier for the FHIR store.
    • DATASET_ID: the name of the FHIR store's parent dataset.
    • LOCATION: the location of the parent dataset.
    • BUCKET: the name of the Cloud Storage bucket that contains the FHIR resource file.

    The output is the following:

    Request issued for: [FHIR_STORE_ID]
    Waiting for operation [projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/operations/OPERATION_ID] to complete...done.
    name: projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID
    version: STU3
    

Now that you've stored a FHIR resource in the Cloud Healthcare API, continue to What's next for information on next steps, such as how to view and search for FHIR resources in the FHIR store.
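
The import file used in this section is newline-delimited JSON (NDJSON): one complete FHIR resource per line, which is what the --content-structure=RESOURCE flag describes. The following is a minimal Python sketch of building such a file; the Patient and Encounter resources below are illustrative placeholders, not the contents of the sample file:

```python
import json

# Each line of an NDJSON import file is one complete FHIR resource.
# These minimal resources are illustrative, not the sample file's data.
resources = [
    {"resourceType": "Patient", "id": "example-patient",
     "gender": "female", "birthDate": "1970-01-01"},
    {"resourceType": "Encounter", "id": "example-encounter",
     "status": "finished",
     "subject": {"reference": "Patient/example-patient"}},
]

with open("resources.ndjson", "w") as f:
    for resource in resources:
        # NDJSON requires each JSON object to occupy a single line.
        f.write(json.dumps(resource) + "\n")

# Verify: the file round-trips back to the same resources.
with open("resources.ndjson") as f:
    assert [json.loads(line) for line in f] == resources
```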

Storing and viewing HL7v2 messages

This section shows how to complete the following tasks:

  1. Create an HL7v2 store.
  2. Create a Cloud Storage bucket and copy an HL7v2 message to the bucket.
  3. Import the HL7v2 message from the Cloud Storage bucket into the HL7v2 store.

The HL7v2 implementation in the Cloud Healthcare API aligns with the HL7v2 standard.

  1. HL7v2 stores exist inside datasets and contain HL7v2 messages. To create an HL7v2 store, use the gcloud healthcare hl7v2-stores create command:

    gcloud healthcare hl7v2-stores create HL7V2_STORE_ID \
     --dataset=DATASET_ID \
     --location=LOCATION
    

    Replace the following:

    • HL7V2_STORE_ID: an identifier for the HL7v2 store. The HL7v2 store ID must be unique in the dataset. The HL7v2 store ID can be any Unicode string from 1 through 256 characters consisting of numbers, letters, underscores, dashes, and periods.
    • DATASET_ID: the name of the HL7v2 store's parent dataset.
    • LOCATION: the location of the parent dataset.

    The output is the following:

    Created hl7v2Store [HL7V2_STORE_ID].
    
  2. Save the sample HL7v2 message file to your machine. The message contains the following basic information, which is base64-encoded in the data field of the sample file:

    MSH|^~\&|A|SEND_FACILITY|A|A|20180101000000||TYPE^A|20180101000000|T|0.0|||AA||00|ASCII
    EVN|A00|20180101040000
    PID||14^111^^^^MRN|11111111^^^^MRN~1111111111^^^^ORGNMBR
    
  3. If you don't already have a Cloud Storage bucket that you want to use to store the sample HL7v2 message, create a new bucket using the gsutil mb command:

    gsutil mb gs://BUCKET
    

    Replace BUCKET with your own globally unique bucket name.

    The output is the following:

    Creating gs://BUCKET/...
    

    If the bucket name you chose is already in use, either by you or someone else, the command returns the following:

    Creating gs://BUCKET/...
    ServiceException: 409 Bucket BUCKET already exists.
    

    If the bucket name is already in use, try again with a different bucket name.

  4. Copy the sample HL7v2 message to the bucket using the gsutil cp command:

    gsutil cp hl7v2-sample-import.ndjson gs://BUCKET
    

    The output is the following:

    Copying file://hl7v2-sample-import.ndjson [Content-Type=application/octet-stream]...
    / [1 files][  241.0 B/  241.0 B]
    Operation completed over 1 objects/241.0 B.
    
  5. After copying the HL7v2 file to the bucket, import the HL7v2 message using the gcloud beta healthcare hl7v2-stores import command:

    gcloud beta healthcare hl7v2-stores import gcs HL7V2_STORE_ID \
     --dataset=DATASET_ID \
     --location=LOCATION \
     --gcs-uri=gs://BUCKET/hl7v2-sample-import.ndjson
    

    Replace the following:

    • HL7V2_STORE_ID: the identifier for the HL7v2 store.
    • DATASET_ID: the name of the HL7v2 store's parent dataset.
    • LOCATION: the location of the parent dataset.
    • BUCKET: the name of the Cloud Storage bucket that contains the HL7v2 file.

    The output is the following:

    Request issued for: [HL7V2_STORE_ID]
    Waiting for operation [projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/operations/OPERATION_ID] to complete...done.
    name: projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID
    

Now that you've stored an HL7v2 message in the Cloud Healthcare API, continue to What's next for information on next steps, such as how to view the contents of the HL7v2 message in its store.
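
The sample import file stores each raw HL7v2 message base64-encoded in a data field. The following is a minimal Python sketch of producing that encoding from the segments shown in step 2; the single-field JSON line is an assumption about the import file's structure, so check the sample file for the exact format:

```python
import base64
import json

# The raw HL7v2 message from the sample above; segments are separated
# by carriage returns in the HL7v2 wire format.
message = "\r".join([
    r"MSH|^~\&|A|SEND_FACILITY|A|A|20180101000000||TYPE^A|20180101000000|T|0.0|||AA||00|ASCII",
    "EVN|A00|20180101040000",
    "PID||14^111^^^^MRN|11111111^^^^MRN~1111111111^^^^ORGNMBR",
])

# Base64-encode the message for the import file's data field.
encoded = base64.b64encode(message.encode("ascii")).decode("ascii")
# Assumed NDJSON line shape: one JSON object with a "data" field per message.
line = json.dumps({"data": encoded})

# Decoding recovers the original message.
assert base64.b64decode(encoded).decode("ascii") == message
```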

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this quickstart, clean up the resources you created on Google Cloud.

If you created a new project for this quickstart, follow the steps in Delete the project. Otherwise, follow the steps in Delete the dataset.

Delete the project

  1. In the Cloud Console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the dataset

If you no longer need the dataset created in this quickstart, you can delete it. Deleting a dataset permanently deletes the dataset and any FHIR, HL7v2, or DICOM stores it contains.

  1. To delete a dataset, use the gcloud healthcare datasets delete command:

    gcloud healthcare datasets delete DATASET_ID \
     --location=LOCATION \
     --project=PROJECT_ID
    
  2. To confirm, type Y.

The output is the following:

Deleted dataset [DATASET_ID].


What's next

See the following sections for general information on the Cloud Healthcare API and how to perform tasks using the Cloud Console or curl and Windows PowerShell.

DICOM

Continue to the DICOM guide for more on working with DICOM data.

See the DICOM conformance statement for information on how the Cloud Healthcare API implements the DICOMweb standard.

FHIR

Continue to the FHIR guide for more on working with FHIR resources.

See the FHIR conformance statement for information on how the Cloud Healthcare API implements the FHIR standard.

HL7v2

Continue to the HL7v2 guide for more on working with HL7v2 messages.