This page describes common error scenarios and the troubleshooting steps to resolve them.

Connectivity and networking error scenarios

If your service experiences connectivity or networking issues, check the scenarios in this section to see if one of them is causing the problem.

Service creation fails due to constraint to restrict VPC peering

Do not set the org-policy constraint to restrict VPC peering. Specifying constraints/compute.restrictVpcPeering causes your creation request to fail with an INVALID_ARGUMENT error. If you must set the constraint, use the following command to allow under:folders/270204312590:

gcloud resource-manager org-policies allow compute.restrictVpcPeering under:folders/270204312590 --organization ORGANIZATION_ID

For more information, see Organization policy constraints.

Cross-project deployment fails where service account may not exist

To create a Dataproc Metastore service that is accessible in a network belonging to a different project than the one the service belongs to, you must grant roles/metastore.serviceAgent to the service project's Dataproc Metastore service agent (service-SERVICE_PROJECT_NUMBER@gcp-sa-metastore.iam.gserviceaccount.com) in the network project's IAM policy.

gcloud projects add-iam-policy-binding NETWORK_PROJECT_ID \
    --role "roles/metastore.serviceAgent" \
    --member "serviceAccount:service-SERVICE_PROJECT_NUMBER@gcp-sa-metastore.iam.gserviceaccount.com"

For more information, see Setting up a cross-project deployment.

Private IP is required for network connectivity

Dataproc Metastore uses private IP only, so no public IP is exposed. This means that only VMs on the provided Virtual Private Cloud (VPC) network or on-premises (connected through Cloud VPN or Cloud Interconnect) can access the Dataproc Metastore service.

For more information, see Accessing a service.
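As a quick check, you can retrieve the service's private endpoint with the gcloud tool; SERVICE_ID and LOCATION below are placeholders for your own service name and region:

```shell
# Look up the private endpoint of a Dataproc Metastore service.
# SERVICE_ID and LOCATION are placeholders for your own values.
gcloud metastore services describe SERVICE_ID \
    --location=LOCATION \
    --format="value(endpointUri)"
```

The returned endpoint resolves only from VMs on the attached VPC network or from on-premises hosts connected through Cloud VPN or Cloud Interconnect; it is not reachable from the public internet.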

Connection error caused by resources provisioned in shared VPC networks

If your Dataproc Metastore service uses a network belonging to a different project, and the Compute Engine API is protected by a service perimeter, then the metastore project and the network project must be in the same perimeter.

To add existing Dataproc Metastore projects to the perimeter, follow the instructions in Updating a service perimeter.
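As a sketch, assuming your perimeter is managed with Access Context Manager, both projects can be added to the same perimeter with a command like the following (PERIMETER_NAME, POLICY_ID, and the project numbers are placeholders):

```shell
# Add the metastore project and the network project to one perimeter.
gcloud access-context-manager perimeters update PERIMETER_NAME \
    --policy=POLICY_ID \
    --add-resources=projects/METASTORE_PROJECT_NUMBER,projects/NETWORK_PROJECT_NUMBER
```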

For more information, see VPC Service Controls with Dataproc Metastore.

Allocated IP range is exhausted

The provided VPC network may run out of available RFC 1918 addresses required by Dataproc Metastore services. If that happens, Dataproc Metastore attempts to reserve private IP address ranges outside of the RFC 1918 ranges for service creation. See Valid ranges in the VPC network documentation for a list of supported non-RFC 1918 private ranges.

Non-RFC 1918 private IP addresses used in Dataproc Metastore may conflict with a range in an on-premises network that is connected to the provided VPC network. To check the list of RFC 1918 and non-RFC 1918 private IP addresses reserved by Dataproc Metastore:

gcloud compute addresses list \
    --project NETWORK_PROJECT_ID \
    --filter="purpose:VPC_PEERING AND name ~ cluster|resourcegroup"

If you find a conflict and cannot mitigate it by reconfiguring the on-premises network, delete the offending Dataproc Metastore service, then wait two hours before re-creating it.
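The delete-and-recreate step looks like the following sketch; SERVICE_ID, LOCATION, and NETWORK_NAME are placeholders, and the two-hour wait allows the peered range to be released:

```shell
# Delete the service whose reserved range conflicts with on-premises addresses.
gcloud metastore services delete SERVICE_ID --location=LOCATION

# After about two hours, re-create the service on the same network.
gcloud metastore services create SERVICE_ID \
    --location=LOCATION \
    --network=NETWORK_NAME
```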

For more information, see IP address range exhaustion.

Operation timeout error scenarios

The following error scenarios result in an unresponsive service or operation timeouts.

Using Audit logs to troubleshoot operation timeouts

To troubleshoot service operation timeouts, you can use the Logs Explorer in the Cloud Console to retrieve your audit log entries for your Cloud project.

In the Query builder pane, select Audited Resource or audited_resource as the Google Cloud resource type, followed by Dataproc Metastore or metastore.googleapis.com as the service. Selecting a method is optional.

For more information, see Viewing logs.
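The same audit log entries can be retrieved from the command line with a filter equivalent to the Query builder selections above; PROJECT_ID is a placeholder:

```shell
# Fetch recent Dataproc Metastore audit log entries for a project.
gcloud logging read \
    'resource.type="audited_resource" AND protoPayload.serviceName="metastore.googleapis.com"' \
    --project=PROJECT_ID \
    --limit=10
```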

Import and export error scenarios

This section outlines some common issues you may run into when using import and export for Dataproc Metastore.

Import fails because the Hive versions do not match

When you import metadata, the Hive metastore and Dataproc Metastore versions must be compatible. Your import may fail if the two do not match. For more information, see version policy.
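You can confirm the Hive version of an existing service before importing; SERVICE_ID and LOCATION are placeholders:

```shell
# Show the Hive metastore version a Dataproc Metastore service runs.
gcloud metastore services describe SERVICE_ID \
    --location=LOCATION \
    --format="value(hiveMetastoreConfig.version)"
```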

Import fails because there are missing Hive table files

When preparing an Avro import, there must be a file for each Hive table, even if the table is empty; otherwise the import fails.
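Before starting the import, you can list the staged files and verify that every table has one; BUCKET_NAME and IMPORT_FOLDER are placeholders for your own bucket and folder:

```shell
# Recursively list the Avro files staged for import; check that each
# Hive table has a corresponding file, including empty tables.
gcloud storage ls gs://BUCKET_NAME/IMPORT_FOLDER/**
```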

Service agent or user does not have the right permissions

The Dataproc Metastore service agent (service-SERVICE_PROJECT_NUMBER@gcp-sa-metastore.iam.gserviceaccount.com) and the user importing the metadata must have the following read permission on the Cloud Storage resource used for the import:

  • For MySQL, they must have storage.objects.get permission on the Cloud Storage object (SQL dump file) used for the import.

  • For Avro, they must have storage.objects.get permission on the Cloud Storage bucket used for the import.

For exports, the Dataproc Metastore service agent (service-SERVICE_PROJECT_NUMBER@gcp-sa-metastore.iam.gserviceaccount.com) and the user creating the export must have storage.objects.create permission on the bucket.
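One way to grant the read access is through predefined Cloud Storage roles: roles/storage.objectViewer contains storage.objects.get, and roles/storage.objectCreator contains storage.objects.create. A sketch for an import bucket (BUCKET_NAME, SERVICE_PROJECT_NUMBER, and USER_EMAIL are placeholders):

```shell
# Grant read access on the import bucket to the service agent.
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member="serviceAccount:service-SERVICE_PROJECT_NUMBER@gcp-sa-metastore.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# Grant the same read access to the user running the import.
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member="user:USER_EMAIL" \
    --role="roles/storage.objectViewer"
```

For exports, grant roles/storage.objectCreator on the destination bucket instead.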

Job fails because the database file is too big

If your database file is too large, it can take more than one hour to complete the import or export process. You can try giving the job more time to complete or using a smaller file size.

Backup and restore error scenarios

This section outlines some common issues you may run into when using backup and restore for Dataproc Metastore.

Unable to create a new backup for a service

If there are already 7 backups in a service, then you must first manually delete a backup before creating a new one. You can delete existing backups from the Backup/Restore tab.
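You can also list and delete backups from the command line; SERVICE_ID, LOCATION, and BACKUP_ID are placeholders:

```shell
# List the existing backups of a service.
gcloud metastore backups list --service=SERVICE_ID --location=LOCATION

# Delete one backup to free a slot before creating a new one.
gcloud metastore backups delete BACKUP_ID \
    --service=SERVICE_ID \
    --location=LOCATION
```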

User does not have the right permissions

To back up metadata, you must be granted an IAM role containing the metastore.backups.create IAM permission.

To restore metadata, you must be granted an IAM role containing the metastore.services.restore and metastore.backups.use IAM permissions.
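As one option (an assumption to verify against the Dataproc Metastore IAM roles reference), the roles/metastore.admin predefined role contains these permissions and can be granted like this; PROJECT_ID and USER_EMAIL are placeholders:

```shell
# Grant a role that includes the backup and restore permissions.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/metastore.admin"
```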

Troubleshooting gcloud command-line tool issues

If you run into an issue where a gcloud tool command is unavailable, or if the command behaves differently from how it is documented, try updating the gcloud SDK:

gcloud components update