Method: hl7V2Stores.import

Full name: projects.locations.datasets.hl7V2Stores.import

Import messages to the HL7v2 store by loading data from the specified sources. This method is optimized to load large quantities of data using import semantics that ignore some HL7v2 store configuration options and are not suitable for all use cases. It is primarily intended to load data into an empty HL7v2 store that is not being used by other clients.

An existing message will be overwritten if a duplicate message is imported. A duplicate message is a message with the same raw bytes as a message that already exists in this HL7v2 store. When a message is overwritten, its labels will also be overwritten.

The import operation is idempotent unless the input data contains multiple valid messages with the same raw bytes but different labels. In that case, after the import completes, the store contains exactly one message with those raw bytes but there is no ordering guarantee on which version of the labels it has. The operation result counters do not count duplicated raw bytes as an error and count one success for each message in the input, which might result in a success count larger than the number of messages in the HL7v2 store.

If some messages fail to import, for example due to parsing errors, successfully imported messages are not rolled back.

This method returns an Operation that can be used to track the status of the import by calling operations.get.

Immediate fatal errors appear in the error field; errors are also logged to Cloud Logging (see Viewing error logs in Cloud Logging). Otherwise, when the operation finishes, a response of type ImportMessagesResponse is returned in the response field. The metadata field type for this operation is OperationMetadata.
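For reference, the following sketch (not part of the API definition itself) shows one way to start an import and track the returned Operation with the google-api-python-client discovery client; the project, location, dataset, store, and bucket names are placeholders.

import time

from googleapiclient import discovery

client = discovery.build("healthcare", "v1")

# Placeholder resource name of the target HL7v2 store.
hl7v2_store_name = (
    "projects/my-project/locations/us-central1/"
    "datasets/my-dataset/hl7V2Stores/my-hl7v2-store"
)

# Placeholder Cloud Storage location of the messages to import.
body = {"gcsSource": {"uri": "gs://my-bucket/hl7v2-messages/*"}}

# Start the import; the call returns a long-running Operation.
operation = (
    client.projects()
    .locations()
    .datasets()
    .hl7V2Stores()
    .import_(name=hl7v2_store_name, body=body)
    .execute()
)

# Track the import by polling operations.get until done is true.
while not operation.get("done"):
    time.sleep(5)
    operation = (
        client.projects()
        .locations()
        .datasets()
        .operations()
        .get(name=operation["name"])
        .execute()
    )

if "error" in operation:
    print("Import failed:", operation["error"])
else:
    print("ImportMessagesResponse:", operation.get("response"))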

HTTP request

POST https://healthcare.googleapis.com/v1/{name=projects/*/locations/*/datasets/*/hl7V2Stores/*}:import

The URL uses gRPC Transcoding syntax.
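As an illustration of the transcoded URL, the sketch below issues the raw POST with google-auth's AuthorizedSession; the resource name and bucket path are placeholders, and Application Default Credentials are assumed.

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Obtain Application Default Credentials with a supported scope.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-healthcare"]
)
session = AuthorizedSession(credentials)

# Placeholder resource name substituted into the transcoded URL.
name = (
    "projects/my-project/locations/us-central1/"
    "datasets/my-dataset/hl7V2Stores/my-hl7v2-store"
)
url = f"https://healthcare.googleapis.com/v1/{name}:import"

# The request body carries the union field source (here, gcsSource).
body = {"gcsSource": {"uri": "gs://my-bucket/hl7v2-messages/*"}}

response = session.post(url, json=body)
response.raise_for_status()
print(response.json())  # a long-running Operation resource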

Path parameters

Parameters
name

string

Required. The name of the target HL7v2 store, in the format projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/hl7V2Stores/{hl7v2_store_id}

Authorization requires the following IAM permission on the specified resource name:

  • healthcare.hl7v2Stores.import

Request body

The request body contains data with the following structure:

JSON representation
{
  // Union field source can be only one of the following:
  "gcsSource": {
    object(GcsSource)
  }
  // End of list of possible types for union field source.
}
Fields

Union field source. Specifies the import source and configuration.

To enable the Cloud Healthcare API to read from resources in your project, such as Cloud Storage buckets, you must give the consumer Cloud Healthcare API service account the proper permissions. The service account is: service-{PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com. The PROJECT_NUMBER identifies the project that contains the target HL7v2 store. To get the project number, go to the Cloud Console Dashboard.

source can be only one of the following:

gcsSource

object(GcsSource)

Cloud Storage source data location and import configuration.

The Cloud Healthcare Service Agent requires the roles/storage.objectViewer Cloud IAM role on the Cloud Storage location.
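One way to grant that role on the source bucket is sketched below with the google-cloud-storage client; the bucket name and project number are placeholders.

from google.cloud import storage

# Placeholder: number of the project that contains the target HL7v2 store.
project_number = "123456789012"
service_agent = (
    f"serviceAccount:service-{project_number}"
    "@gcp-sa-healthcare.iam.gserviceaccount.com"
)

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder source bucket

# Add roles/storage.objectViewer for the Cloud Healthcare Service Agent.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {service_agent}}
)
bucket.set_iam_policy(policy)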

Response body

If successful, the response body contains an instance of Operation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-healthcare
  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GcsSource

Specifies the configuration for importing data from Cloud Storage.

JSON representation
{
  "uri": string
}
Fields
uri

string

Points to a Cloud Storage URI containing file(s) to import.

The URI must be in the following format: gs://{bucket_id}/{object_id}. The URI can include wildcards in object_id and thus identify multiple files. Supported wildcards:

  • * to match 0 or more non-separator characters
  • ** to match 0 or more characters (including separators). Must be used at the end of a path and with no other wildcards in the path. Can also be used with a file extension (such as .ndjson), which imports all files with the extension in the specified directory and its sub-directories. For example, gs://my-bucket/my-directory/**.ndjson imports all files with .ndjson extensions in my-directory/ and its sub-directories.
  • ? to match 1 character

Files matching the wildcard are expected to contain content only, no metadata.
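To make the wildcard rules concrete, the short sketch below builds a few gcsSource bodies; the bucket, directory, and .hl7 extension are illustrative placeholders.

# Each URI selects a different set of objects under the placeholder bucket.
uris = {
    # *: .hl7 objects directly under my-directory/ only (no sub-directories).
    "single level": "gs://my-bucket/my-directory/*.hl7",
    # **: .hl7 objects in my-directory/ and all of its sub-directories.
    "recursive": "gs://my-bucket/my-directory/**.hl7",
    # ?: objects whose names differ by exactly one character, e.g. msg-1.hl7.
    "one character": "gs://my-bucket/my-directory/msg-?.hl7",
}

bodies = [{"gcsSource": {"uri": uri}} for uri in uris.values()]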