Method: fhirStores.export

Full name: projects.locations.datasets.fhirStores.export

Export resources from the FHIR store to the specified destination.

This method returns an Operation that can be used to track the status of the export by calling operations.get.

Immediate fatal errors appear in the error field; errors are also logged to Stackdriver (see Viewing logs). Otherwise, when the operation finishes, a detailed response of type ExportResourcesResponse is returned in the response field. The metadata field type for this operation is OperationMetadata.
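
The track-until-done flow can be sketched generically. Here `get_operation` is a hypothetical stand-in for a call to operations.get; the field names (`done`, `error`, `response`) follow the standard long-running Operation shape described above.

```python
import time

def wait_for_operation(get_operation, poll_interval=5.0, timeout=300.0):
    """Poll a long-running Operation until it completes.

    `get_operation` is any zero-argument callable returning the Operation
    as a dict, e.g. a wrapper around operations.get.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = get_operation()
        if op.get("done"):
            if "error" in op:
                # Immediate fatal errors surface in the error field.
                raise RuntimeError("Export failed: %s" % op["error"])
            # On success this is an ExportResourcesResponse.
            return op.get("response", {})
        time.sleep(poll_interval)
    raise TimeoutError("Operation did not finish before the deadline")
```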

HTTP request


The URL uses gRPC Transcoding syntax.

Path parameters



name

The name of the FHIR store to export resources from. The name should be in the format projects/{projectId}/locations/{locationId}/datasets/{datasetId}/fhirStores/{fhirStoreId}.

Authorization requires the following Google IAM permission on the specified resource name:

  • healthcare.fhirStores.export
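
As a sketch, the resource name can be assembled and sanity-checked client-side (a local helper, not part of the API):

```python
import re

# Shape of a fully qualified FHIR store resource name.
_NAME_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/datasets/[^/]+/fhirStores/[^/]+$"
)

def fhir_store_name(project_id, location_id, dataset_id, fhir_store_id):
    """Build the resource name in the format the export method expects."""
    name = ("projects/%s/locations/%s/datasets/%s/fhirStores/%s"
            % (project_id, location_id, dataset_id, fhir_store_id))
    if not _NAME_RE.match(name):
        raise ValueError("malformed resource name: %s" % name)
    return name
```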

Request body

The request body contains data with the following structure:

JSON representation

  {
    // Union field destination can be only one of the following:
    "gcsDestination": {
      object(GcsDestination)
    },
    "bigqueryDestination": {
      object(BigQueryDestination)
    }
    // End of list of possible types for union field destination.
  }

Union field destination. The output destination of the export.

To enable the Cloud Healthcare API to write to resources in your project, such as Cloud Storage buckets, you must give the consumer Cloud Healthcare API service account the proper permissions. The service account is: service-{PROJECT_NUMBER}

The PROJECT_NUMBER identifies the project that contains the destination Cloud Storage bucket. To get the project number, go to the GCP Console Dashboard.

destination can be only one of the following:



gcsDestination

The Cloud Storage output destination.

The Cloud Storage location requires the roles/storage.objectAdmin Cloud IAM role.

The exported outputs are organized by FHIR resource type. The server creates one object per resource type. Each object contains newline-delimited JSON, and each line is a FHIR resource.
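
Since each exported object is newline-delimited JSON with one FHIR resource per line, reading it back locally is straightforward (a sketch; how you fetch the object from Cloud Storage is up to you):

```python
import json

def read_ndjson(lines):
    """Parse newline-delimited JSON: one FHIR resource per non-empty line."""
    return [json.loads(line) for line in lines if line.strip()]
```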



bigqueryDestination

The BigQuery output destination.

The BigQuery location requires two IAM roles: roles/bigquery.dataEditor and roles/bigquery.jobUser.

The output will be one BigQuery table per resource type.
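
Putting the destination fields together, a request body targeting BigQuery might be assembled like this (all identifiers are placeholders; the schemaConfig and force fields are described later in this reference):

```python
def bigquery_export_body(project_id, bq_dataset_id,
                         schema_type="ANALYTICS", force=False):
    """Build a fhirStores.export request body for a BigQuery destination."""
    return {
        "bigqueryDestination": {
            # datasetUri uses the bq://projectId.bqDatasetId format.
            "datasetUri": "bq://%s.%s" % (project_id, bq_dataset_id),
            "schemaConfig": {"schemaType": schema_type},
            "force": force,
        }
    }
```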

Response body

If successful, the response body contains an instance of Operation.

Authorization Scopes

Requires one of the following OAuth scopes:


For more information, see the Authentication Overview.


GcsDestination

The configuration for exporting to Cloud Storage.

JSON representation

  {
    "uriPrefix": string
  }


uriPrefix

string

URI for a Cloud Storage directory where result files should be written (in the format gs://{bucket-id}/{path/to/destination/dir}). If there is no trailing slash, the service will append one when composing the object path. The user is responsible for creating the Cloud Storage bucket referenced in uriPrefix.
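
The trailing-slash behavior can be mimicked client-side, e.g. when predicting output object paths (a sketch):

```python
def normalize_uri_prefix(uri_prefix):
    """Append a trailing slash if missing, mirroring the service's behavior."""
    if not uri_prefix.startswith("gs://"):
        raise ValueError("uriPrefix must start with gs://")
    return uri_prefix if uri_prefix.endswith("/") else uri_prefix + "/"
```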


BigQueryDestination

The configuration for exporting to BigQuery.

JSON representation

  {
    "datasetUri": string,
    "schemaConfig": {
      object(SchemaConfig)
    },
    "force": boolean
  }


datasetUri

string

BigQuery URI to a dataset, up to 2000 characters long, in the format bq://projectId.bqDatasetId.
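
A quick client-side check of the datasetUri shape and length limit can be sketched like this (the service performs its own validation; this is only an illustration):

```python
import re

def validate_dataset_uri(uri):
    """Check the bq://projectId.bqDatasetId format and the 2000-char limit."""
    if len(uri) > 2000:
        raise ValueError("datasetUri exceeds 2000 characters")
    if not re.match(r"^bq://[^.]+\.[^.]+$", uri):
        raise ValueError("malformed datasetUri: %s" % uri)
    return uri
```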



schemaConfig

The configuration for the exported BigQuery schema.



force

boolean

If this flag is TRUE, all tables will be deleted from the dataset before the new exported tables are written. If the flag is not set and the destination dataset contains tables, the export call returns an error.


SchemaConfig

Configuration for the FHIR BigQuery schema. Determines how the server generates the schema.

JSON representation

  {
    "schemaType": enum(SchemaType),
    "recursiveStructureDepth": string
  }


schemaType

enum(SchemaType)

Specifies the output schema type. If unspecified, the default is LOSSLESS.


recursiveStructureDepth

string (int64 format)

The depth for all recursive structures in the output analytics schema. For example, concept in the CodeSystem resource is a recursive structure; when the depth is 2, the CodeSystem table will have a column called concept.concept but not concept.concept.concept. If not specified or set to 0, the server will use the default value 2. The maximum depth allowed is 5.
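
The depth rule can be illustrated by generating the column names produced for a recursive field (a local illustration, not API code; the depth semantics follow the concept.concept example above):

```python
def recursive_columns(field, depth):
    """Column names produced for a recursive structure at a given depth.

    E.g. depth 2 for "concept" yields concept and concept.concept,
    but not concept.concept.concept.
    """
    if not 0 < depth <= 5:
        raise ValueError("depth must be between 1 and 5")
    return [".".join([field] * n) for n in range(1, depth + 1)]
```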


SchemaType

An enum consisting of the supported output schema types.

  • SCHEMA_TYPE_UNSPECIFIED — No schema type specified. Same as LOSSLESS.
  • LOSSLESS — A data-driven schema generated from the fields present in the FHIR data being exported, with no additional simplification.
  • ANALYTICS — Analytics schema defined by the FHIR community.