Frequently asked questions and troubleshooting

The following are common issues that can occur when interacting with the Cloud Asset API and how to handle them.

Why does my request have invalid authentication credentials?

If you haven't set up the OAuth header properly, making a call will return the following error:

{
  "error": {
    "code": 401,
    "message": "Request had invalid authentication credentials. Expected
               OAuth 2 access token, login cookie or other valid
               authentication credential. See
               https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "status": "UNAUTHENTICATED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.DebugInfo",
        "detail": "Authentication error: 2"
      }
    ]
  }
}

To address this issue, repeat your initial setup steps to verify that authentication is configured correctly.
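
Each request must carry a fresh OAuth 2 access token. The gcurl command used later in this document is commonly defined as a curl alias; a minimal sketch (the alias definition is illustrative, and the token comes from your active gcloud credentials):

```shell
# Wrap curl so every request sends a current OAuth 2 access token
# obtained from the active gcloud credentials.
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'

# Usage (PROJECT_NUMBER is a placeholder):
#   gcurl "https://cloudasset.googleapis.com/v1/projects/PROJECT_NUMBER/assets"
```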

Why do I not have permission to use the Cloud Asset API?

An error is returned if you don't have permission to export assets or get the history on an organization, project, or folder.

For example, if you don't have permission, running the following command:

gcurl -d '{"outputConfig":{"gcsDestination": \
{"uri":"gs://YOUR_BUCKET/NEW_FILE"}}}' \
https://cloudasset.googleapis.com/v1/projects/PROJECT_NUMBER:exportAssets

returns the following error:

{
 "error": {
  "code": 403,
  "message": "The caller does not have permission",
  "status": "PERMISSION_DENIED",
  "details": [
   {
    "@type": "type.googleapis.com/google.rpc.DebugInfo",
    "detail": "[ORIGINAL ERROR] generic::permission_denied: Request
    denied by Cloud IAM."
   }
  ]
 }
}

To address this issue, request access from your project, folder, or organization admin. Depending on the assets you are trying to export or get history for, you'll need one of the following roles or other roles that include the required Cloud Asset API permissions:

  • roles/cloudasset.viewer
  • roles/cloudasset.owner

For more information on roles and permissions, see Understanding roles.

For more information on access control options for Cloud Asset APIs, see Access Control.
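
As a sketch, a project admin could grant one of these roles with the gcloud CLI (PROJECT_ID and USER_EMAIL are placeholders):

```shell
# Role to grant; use roles/cloudasset.owner for broader access.
ROLE="roles/cloudasset.viewer"

# Grant the role to a user at the project level.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="${ROLE}"
```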

Why are my export to Cloud Storage commands failing?

If the Cloud Storage bucket you use to store exported data isn't in the Cloud Asset API-enabled project you're running the export from, performing the request will result in the following permission denied error:

    {
     "error": {
      "code": 7,
      "message": "Failed to write to: YOUR_BUCKET/FILE"
     }
    }
    

To address this issue, do one of the following:

  • Use a Cloud Storage bucket that belongs to the Cloud Asset API-enabled project you're running the export from.
  • Grant the service-PROJECT_NUMBER@gcp-sa-cloudasset.iam.gserviceaccount.com service account the roles/storage.admin role, where PROJECT_NUMBER is the project number of the Cloud Asset API-enabled project you're running the export from.
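
The service account name is derived from the project number; a sketch of the grant using gsutil (the project number and bucket name are placeholders):

```shell
# Project number of the Cloud Asset API-enabled project (placeholder value).
PROJECT_NUMBER="123456789012"
# Cloud Asset API service agent for that project.
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-cloudasset.iam.gserviceaccount.com"

# Grant the service agent roles/storage.admin on the destination bucket.
gsutil iam ch "serviceAccount:${SERVICE_ACCOUNT}:roles/storage.admin" gs://YOUR_BUCKET
```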

Why is the Cloud Asset API result stale?

Data freshness in the Cloud Asset API is on a best-effort basis. While almost all asset updates will be available to clients in minutes, in rare cases it's possible the result of the Cloud Asset API methods won't include the most recent asset updates.

Why are temporary files output after running ExportAssets?

The ExportAssets operation might create temporary files in the output folder. Don't remove these temporary files while the operation is in progress. Once the operation is complete, the temporary files are removed automatically.

If the temporary files remain, you can safely remove them after the ExportAssets operation is complete.

What if my request URL is too long for BatchGetAssetsHistory?

The BatchGetAssetsHistory method is an HTTP GET action that sends all request data in a length-limited URL. As a result, an error occurs if the request is too long.

To work around this, the client code should use HTTP POST to send the request, with the Content-Type header set to application/x-www-form-urlencoded and an X-HTTP-Method-Override: GET HTTP header. See Long Request URLs for more information.

The following is an example request for BatchGetAssetsHistory using HTTP POST:

curl -X POST -H "X-HTTP-Method-Override: GET" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -H "Authorization: Bearer ACCESS_TOKEN" \
     -d 'assetNames=ASSET_NAME&contentType=1&readTimeWindow.startTime=2018-09-01T09:00:00Z' \
     https://cloudasset.googleapis.com/v1/projects/PROJECT_NUMBER:batchGetAssetsHistory

Why is my Cloud SDK or Cloud Shell credential rejected?

If a request without a user project is sent to cloudasset.googleapis.com from the Cloud SDK or Cloud Shell, you'll see an error message like the following:

Your application has authenticated using end user credentials from the
Cloud SDK or Cloud Shell which are not supported by the
cloudasset.googleapis.com. We recommend that most server applications
use service accounts instead. For more information about service accounts
and how to use them in your application, see
https://cloud.google.com/docs/authentication/.

To fix this issue, set the user project to the project ID of a project that has the Cloud Asset API enabled. You can do this by specifying the X-Goog-User-Project HTTP header in the request.

If you're using curl, this can be done by adding the following parameter:

-H 'X-Goog-User-Project: PROJECT_ID'

If you're using the gcloud tool, specify the --billing-project PROJECT_ID flag along with the gcloud asset command, or use the following command:

gcloud config set billing/quota_project PROJECT_ID

How do I export assets to BigQuery tables that don't belong to the current project?

When you call the ExportAssets API from the Cloud Asset API-enabled project (A), it uses the service-PROJECT_A_NUMBER@gcp-sa-cloudasset.iam.gserviceaccount.com service account to write to the destination BigQuery table. To output to a BigQuery table in another project (B):

In project B's Identity and Access Management (IAM) policy, grant the service account (service-PROJECT_A_NUMBER@gcp-sa-cloudasset.iam.gserviceaccount.com) the roles/bigquery.user and roles/bigquery.dataEditor roles.
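
As a sketch with the gcloud CLI (PROJECT_A_NUMBER and PROJECT_B_ID are placeholders):

```shell
# Project number of the Cloud Asset API-enabled project A (placeholder value).
PROJECT_A_NUMBER="123456789012"
SERVICE_ACCOUNT="service-${PROJECT_A_NUMBER}@gcp-sa-cloudasset.iam.gserviceaccount.com"

# Grant both required BigQuery roles on destination project B.
for ROLE in roles/bigquery.user roles/bigquery.dataEditor; do
  gcloud projects add-iam-policy-binding PROJECT_B_ID \
    --member="serviceAccount:${SERVICE_ACCOUNT}" \
    --role="${ROLE}"
done
```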

Why do I see different ancestors for the same assets?

When calling the Cloud Asset API to get different metadata types, such as RESOURCE metadata and IAM POLICY metadata for the same asset, it is possible that the ancestors field is inconsistent across content types. This is because there are different data ingestion schedules for each content type, and until the ingestion process is complete, they may be inconsistent. Check the update_time field to ensure that the asset has the most up-to-date information.

Please contact us if the inconsistency lasts for more than 24 hours.

How frequently should I call the ExportAssets API?

We recommend calling the ExportAssets API for the same organization, folder, or project sequentially; for example, issue the next call only after the previous one completes. If you want to capture asset updates in real time, consider using real-time notifications.
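
ExportAssets is a long-running operation, and its response includes an operation name you can check before issuing the next export. A sketch (the operation name is a placeholder copied from the export response):

```shell
# Operation name returned by the ExportAssets call (placeholder value).
OPERATION="projects/PROJECT_NUMBER/operations/ExportAssets/OPERATION_ID"

# Describe the operation; wait until it reports done: true
# before starting the next export.
gcloud asset operations describe "${OPERATION}"
```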

Why do I receive duplicate asset updates?

After setting up real-time notifications, you might receive duplicate asset updates in your Pub/Sub topic. This is caused by automatic delivery retries: Pub/Sub guarantees at-least-once delivery, which means a message can be delivered more than once.

Why didn't I receive notifications for project deletions?

When you shut down a project, you have 30 days to undo the operation. The deleted field in the notification is not set until the project is permanently deleted. To monitor projects that are pending deletion, you can set up a feed with a condition on the project's lifecycleState, for example: temporal_asset.asset.resource.data.lifecycleState == "DELETE_REQUESTED".
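
A sketch of such a feed created with the gcloud CLI (the feed ID, project, and topic names are placeholders):

```shell
# CEL condition that matches projects scheduled for deletion.
CONDITION='temporal_asset.asset.resource.data.lifecycleState == "DELETE_REQUESTED"'

# Create a feed on project resources that publishes matching updates to Pub/Sub.
gcloud asset feeds create FEED_ID \
  --project=PROJECT_ID \
  --asset-types="cloudresourcemanager.googleapis.com/Project" \
  --content-type=resource \
  --pubsub-topic="projects/PROJECT_ID/topics/TOPIC_ID" \
  --condition-expression="${CONDITION}"
```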