Troubleshooting

This page describes troubleshooting methods for common errors you may encounter while using Cloud Storage.

See the Google Cloud Status Dashboard for information about regional or global incidents affecting Google Cloud services such as Cloud Storage.

Logging raw requests

When using tools such as gcloud or the Cloud Storage client libraries, much of the request and response information is handled by the tool. However, it is sometimes useful to see details to aid in troubleshooting or when posting questions to forums such as Stack Overflow. Use the following instructions to return request and response headers for your tool:

Console

Viewing request and response information depends on the browser you're using to access the Google Cloud console. For the Google Chrome browser:

  1. Click Chrome's main menu button.

  2. Select More Tools.

  3. Click Developer Tools.

  4. In the pane that appears, click the Network tab.

Command line

gcloud

Use global debugging flags in your request. For example:

gcloud storage ls gs://my-bucket/my-object --log-http --verbosity=debug

gsutil

Use the global -D flag in your request. For example:

gsutil -D ls gs://my-bucket/my-object

Client libraries

C++

  • Set the environment variable CLOUD_STORAGE_ENABLE_TRACING=http to get the full HTTP traffic.

  • Set the environment variable CLOUD_STORAGE_ENABLE_CLOG=yes to get logging of each RPC.

C#

Add a logger via ApplicationContext.RegisterLogger, and set logging options on the HttpClient message handler. For more information, see the FAQ entry.

Go

Set the environment variable GODEBUG=http2debug=1. For more information, see the Go package net/http.

If you want to log the request body as well, use a custom HTTP client.

Java

  1. Create a file named "logging.properties" with the following contents:

    # Properties file which configures the operation of the JDK logging facility.
    # The system will look for this config file to be specified as a system property:
    # -Djava.util.logging.config.file=${project_loc:googleplus-simple-cmdline-sample}/logging.properties
    
    # Set up the console handler (uncomment "level" to show more fine-grained messages)
    handlers = java.util.logging.ConsoleHandler
    java.util.logging.ConsoleHandler.level = CONFIG
    
    # Set up logging of HTTP requests and responses (uncomment "level" to show)
    com.google.api.client.http.level = CONFIG
  2. Use logging.properties with Maven:

    mvn -Djava.util.logging.config.file=path/to/logging.properties insert_command

For more information, see Pluggable HTTP Transport.

Node.js

Set the environment variable NODE_DEBUG=https before calling the Node script.

PHP

Provide your own HTTP handler to the client using httpHandler and set up middleware to log the request and response.

Python

Use the logging module. For example:

import logging
import http.client

logging.basicConfig(level=logging.DEBUG)
http.client.HTTPConnection.debuglevel = 5

Ruby

At the top of your .rb file after require "google/cloud/storage", add the following:

Google::Apis.logger.level = Logger::DEBUG

Adding custom headers

Adding custom headers to requests is a common tool for debugging purposes, such as for enabling debug headers or for tracing a request. The following example shows how to set request headers for different Cloud Storage tools:

Command line

gcloud

Use the --additional-headers flag, which is available for most commands. For example:

gcloud storage objects describe gs://my-bucket/my-object --additional-headers=HEADER_NAME=HEADER_VALUE

Where HEADER_NAME and HEADER_VALUE define the header you are adding to the request.

gsutil

Use the global -h flag in your request. For example:

gsutil -h "HEADER_NAME:HEADER_VALUE" stat gs://my-bucket/my-object

Where HEADER_NAME and HEADER_VALUE define the header you are adding to the request.

Client libraries

C++

namespace gcs = google::cloud::storage;
gcs::Client client = ...;
client.AnyFunction(... args ..., gcs::CustomHeader("header-name", "value"));

C#

The following sample adds a custom header to every request made by the client library.

using Google.Cloud.Storage.V1;

var client = StorageClient.Create();
client.Service.HttpClient.DefaultRequestHeaders.Add("custom-header", "custom-value");

var buckets = client.ListBuckets("my-project-id");
foreach (var bucket in buckets)
{
  Console.WriteLine(bucket.Name);
}

Go

Adding custom headers to requests made by the Go client library requires wrapping the client's transport with a custom RoundTripper. The following example sends a debug header and logs the corresponding response header:

package main

import (
  "context"
  "io/ioutil"
  "log"
  "net/http"

  "cloud.google.com/go/storage"
  "google.golang.org/api/option"
  raw "google.golang.org/api/storage/v1"
  htransport "google.golang.org/api/transport/http"
)

func main() {

  ctx := context.Background()

  // Standard way to initialize client:
  // client, err := storage.NewClient(ctx)
  // if err != nil {
  //      // handle error
  // }

  // Instead, create a custom http.Client.
  base := http.DefaultTransport
  trans, err := htransport.NewTransport(ctx, base,
    option.WithScopes(raw.DevstorageFullControlScope),
    option.WithUserAgent("custom-user-agent"))
  if err != nil {
    // Handle error.
  }
  c := http.Client{Transport: trans}

  // Add RoundTripper to the created HTTP client.
  c.Transport = withDebugHeader{c.Transport}

  // Supply this client to storage.NewClient
  client, err := storage.NewClient(ctx, option.WithHTTPClient(&c))
  if err != nil {
    // Handle error.
  }

  // Use client to make a request.
}

type withDebugHeader struct {
  rt http.RoundTripper
}

func (wdh withDebugHeader) RoundTrip(r *http.Request) (*http.Response, error) {
  headerName := "X-Custom-Header"
  r.Header.Add(headerName, "value")
  resp, err := wdh.rt.RoundTrip(r)
  if err == nil {
    log.Printf("Resp Header: %+v, ", resp.Header.Get(headerName))
  } else {
    log.Printf("Error: %+v", err)
  }
  return resp, err
}

Java

import com.google.api.gax.rpc.FixedHeaderProvider;
import com.google.api.gax.rpc.HeaderProvider;
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.io.IOException;
import java.nio.ByteBuffer;
import static java.nio.charset.StandardCharsets.UTF_8;

public class Example {

  public static void main(String[] args) throws IOException {
    HeaderProvider headerProvider =
            FixedHeaderProvider.create("custom-header", "custom-value");
    Storage storage = StorageOptions.getDefaultInstance()
            .toBuilder()
            .setHeaderProvider(headerProvider)
            .build().getService();
    String bucketName = "example-bucket";
    String blobName = "test-custom-header";

    // Use client with custom header
    BlobInfo blob = BlobInfo.newBuilder(bucketName, blobName).build();
    byte[] stringBytes;
    try (WriteChannel writer = storage.writer(blob)) {
      stringBytes = "hello world".getBytes(UTF_8);
      writer.write(ByteBuffer.wrap(stringBytes));
    }
  }
}

Node.js

const {Storage} = require('@google-cloud/storage');

const storage = new Storage();

storage.interceptors.push({
  request: requestConfig => {
    Object.assign(requestConfig.headers, {
      'X-Custom-Header': 'value',
    });
    return requestConfig;
  },
});

PHP

All method calls that trigger HTTP requests accept an optional $restOptions argument as their last argument. You can provide custom headers on a per-request basis or on a per-client basis.

use Google\Cloud\Storage\StorageClient;

$client = new StorageClient([
   'restOptions' => [
       'headers' => [
           'x-foo' => 'bat'
       ]
   ]
]);

$bucket = $client->bucket('my-bucket');

$bucket->info([
   'restOptions' => [
       'headers' => [
           'x-foo' => 'bar'
       ]
   ]
]);

Python

Adding custom headers to requests made by the Python client library is not currently supported.

Ruby

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

storage.add_custom_headers({ 'X-Custom-Header' => 'value' })

Error codes

The following are common HTTP status codes you may encounter.

301: Moved Permanently

Issue: I'm setting up a static website, and accessing a directory path returns an empty object and a 301 HTTP response code.

Solution: If your browser downloads a zero byte object and you get a 301 HTTP response code when accessing a directory, such as http://www.example.com/dir/, your bucket most likely contains an empty object of that name. To check that this is the case and fix the issue:

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. Click the Activate Cloud Shell button at the top of the Google Cloud console. Activate Cloud Shell
  3. Run gcloud storage ls --recursive gs://www.example.com/dir/. If the output includes http://www.example.com/dir/, you have an empty object at that location.
  4. Remove the empty object with the command: gcloud storage rm gs://www.example.com/dir/

You can now access http://www.example.com/dir/ and have it return that directory's index.html file instead of the empty object.
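
The check in step 3 boils down to spotting object names that end in a slash. The following sketch is purely illustrative (the listing values are hypothetical, and in practice step 3's gcloud command performs this check for you):

```python
def find_placeholder_objects(object_names):
    """Return names that look like zero-byte "folder" placeholder objects.

    In Cloud Storage's flat namespace, an object literally named "dir/"
    shadows the directory path and causes the empty 301 response above.
    """
    return [name for name in object_names if name.endswith("/")]

# Example listing, as might be returned by a recursive bucket listing:
listing = ["dir/", "dir/index.html", "dir/logo.png"]
print(find_placeholder_objects(listing))  # ['dir/']
```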

400: Bad Request

Issue: While performing a resumable upload, I received this error and the message Failed to parse Content-Range header.

Solution: The value you used in your Content-Range header is invalid. For example, Content-Range: */* is invalid and instead should be specified as Content-Range: bytes */*. If you receive this error, your current resumable upload is no longer active, and you must start a new resumable upload.
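
The required format is bytes FIRST-LAST/TOTAL, where any unknown part is written as *. The helper below is a minimal illustrative sketch (not part of any client library) that builds valid header values:

```python
def content_range(first_byte=None, last_byte=None, total_size=None):
    """Build a Content-Range value for a resumable upload request.

    Pass first_byte/last_byte for the chunk being sent, or leave them
    as None for a status query; total_size is None until it is known.
    """
    if first_byte is None:
        byte_part = "*"
    else:
        byte_part = f"{first_byte}-{last_byte}"
    size_part = "*" if total_size is None else str(total_size)
    return f"bytes {byte_part}/{size_part}"

print(content_range())              # bytes */*  (valid status query)
print(content_range(0, 255, 1000))  # bytes 0-255/1000
```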

401: Unauthorized

Issue: Requests to a public bucket, made directly or via Cloud CDN, are failing with an HTTP 401: Unauthorized and an Authentication Required response.

Solution: Check that your client, or any intermediate proxy, is not adding an Authorization header to requests to Cloud Storage. Any request with an Authorization header, even if empty, is validated as if it were an authentication attempt.
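
If you control the client or proxy, one defensive fix is to drop the Authorization header from requests that are meant to be anonymous. A hedged sketch of that filtering logic (the header values are hypothetical):

```python
def strip_authorization(headers):
    """Return a copy of the request headers without any Authorization header.

    HTTP header names are case-insensitive, so compare in lowercase.
    """
    return {k: v for k, v in headers.items() if k.lower() != "authorization"}

# Even an empty Authorization header triggers authentication validation:
headers = {"Authorization": "", "Accept": "*/*"}
print(strip_authorization(headers))  # {'Accept': '*/*'}
```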

403: Account Disabled

Issue: I tried to create a bucket but got a 403 Account Disabled error.

Solution: This error indicates that you have not yet turned on billing for the associated project. For steps for enabling billing, see Enable billing for a project.

If billing is turned on and you continue to receive this error message, you can reach out to support with your project ID and a description of your problem.

403: Forbidden

Issue: I should have permission to access a certain bucket or object, but when I attempt to do so, I get a 403 - Forbidden error with a message that is similar to: example@email.com does not have storage.objects.get access to the Google Cloud Storage object.

Solution: You are missing an IAM permission for the bucket or object that is required to complete the request. If you expect to be able to make the request but cannot, perform the following checks:

  1. Is the grantee referenced in the error message the one you expected? If the error message refers to an unexpected email address or to "Anonymous caller", then your request is not using the credentials you intended. This could be because the tool you are using to make the request was set up with the credentials from another alias or entity, or it could be because the request is being made on your behalf by a service account.

  2. Is the permission referenced in the error message the one you thought you needed? If the permission is unexpected, it's likely because the tool you're using requires additional access in order to complete your request. For example, in order to bulk delete objects in a bucket, gcloud must first construct a list of the objects to delete. This portion of the bulk delete action requires the storage.objects.list permission, which might be surprising, given that the goal is object deletion, which normally requires only the storage.objects.delete permission. If this is the cause of your error message, make sure you're granted IAM roles that include the additional permissions.

  3. Are you granted the IAM role on the intended resource or parent resource? For example, if you're granted the Storage Object Viewer role for a project and you're trying to download an object, make sure the object is in a bucket that's in the project; you might inadvertently have the Storage Object Viewer permission for a different project.
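
One way to narrow down a 403 is to compare the permissions a request needs against the ones your credentials actually hold; with the Python client library, the granted set can come from the bucket's test_iam_permissions call. The helper and example values below are purely illustrative:

```python
def missing_permissions(required, granted):
    """Return the required permissions the caller does not hold."""
    return sorted(set(required) - set(granted))

# Bulk-deleting objects needs list as well as delete (see check 2 above).
required = ["storage.objects.list", "storage.objects.delete"]

# With the google-cloud-storage library, the granted set could come from:
#   granted = client.bucket("my-bucket").test_iam_permissions(required)
granted = ["storage.objects.delete"]  # hypothetical response

print(missing_permissions(required, granted))  # ['storage.objects.list']
```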

403: Forbidden

Issue: I am downloading my content from storage.cloud.google.com, and I receive a 403: Forbidden error when I use the browser to navigate to the object using the URL:

https://storage.cloud.google.com/BUCKET_NAME/OBJECT_NAME

Solution: Using storage.cloud.google.com to download objects is known as authenticated browser downloads, which uses cookie-based authentication. If you have configured Data Access audit logs in Cloud Audit Logs to track access to objects, one of the restrictions of that feature is that authenticated browser downloads cannot be used to download a tracked object, unless the object is publicly readable. Attempting to use an authenticated browser download for non-public objects results in a 403 response. This restriction exists to prevent phishing for Google IDs, which are used for cookie-based authentication.

To avoid this issue, do one of the following:

  • Use direct API calls, which support unauthenticated downloads, instead of using authenticated browser downloads.
  • Disable the Cloud Storage Data Access audit logs that are tracking access to the affected objects. Be aware that Data Access audit logs are set at or above the project level and can be enabled simultaneously at multiple levels.
  • Set exemptions to exclude specific users from Data Access audit log tracking, which allows those users to perform authenticated browser downloads.
  • Make affected objects publicly readable, by granting read permission to either allUsers or allAuthenticatedUsers. Data Access audit logs do not record access to public objects.

409: Conflict

Issue: I tried to create a bucket but received the following error:

409 Conflict. Sorry, that name is not available. Please try a different one.

Solution: The bucket name you tried to use (for example, gs://cats or gs://dogs) is already taken. Cloud Storage uses a single global namespace, so no two buckets can have the same name. Choose a name that is not in use.
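
Because bucket names are globally unique, a common pattern is to append a random suffix to a readable prefix. A minimal sketch (the prefix is a placeholder, and the suffix length is an arbitrary choice):

```python
import uuid

def unique_bucket_name(prefix):
    """Append a short random token so the name is unlikely to collide."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

name = unique_bucket_name("my-app-assets")
print(name)  # e.g. my-app-assets-3f9c01ab
```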

429: Too Many Requests

Issue: My requests are being rejected with a 429 Too Many Requests error.

Solution: You are hitting a limit on the number of requests Cloud Storage allows for a given resource. See the Cloud Storage quotas for a discussion of limits in Cloud Storage. If your workload consists of thousands of requests per second to a bucket, see Request rate and access distribution guidelines for best practices, including ramping up your workload gradually and avoiding sequential filenames.
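
Ramping up gradually usually means retrying rejected requests with truncated exponential backoff plus random jitter, which is the strategy the CLIs and client libraries apply automatically. A minimal sketch of the delay schedule (the base and cap values are illustrative defaults):

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=32.0):
    """Yield truncated exponential backoff delays with jitter, in seconds."""
    for attempt in range(max_retries):
        yield min(cap, base * 2 ** attempt) + random.uniform(0, 1)

# The first few delays fall roughly in [1, 2), [2, 3), [4, 5), ...
print([round(d, 2) for d in backoff_delays(3)])
```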

429: Too Many Requests

Issue: My requests are being rejected with the following error:

429 Too Many Requests. This workload is drawing too much egress bandwidth
from Cloud Storage and has exceeded the InternetEgressBandwidth Quota.
Please reduce the rate of request or contact Google Cloud Support if you want to
increase your bandwidth quota.

Issue: My requests are being rejected with the following error:

429 Too Many Requests. This workload is drawing too much egress bandwidth
from Cloud Storage and has exceeded the
MultiregionInternetEgressBandwidth Quota. Please reduce the rate of request or
contact Google Cloud Support if you want to increase your bandwidth quota.

Solution: Egress to the internet could be limited based on your project's history. Contact Google Cloud Support to request an increase to your bandwidth quota.

Diagnosing Google Cloud console errors

Issue: When using the Google Cloud console to perform an operation, I get a generic error message. For example, I see an error message when trying to delete a bucket, but I don't see details for why the operation failed.

Solution: Use the Google Cloud console's notifications to see detailed information about the failed operation:

  1. Click the Notifications button in the Google Cloud console header.

    Notifications

    A dropdown displays the most recent operations performed by the Google Cloud console.

  2. Click the item you want to find out more about.

    A page opens up and displays detailed information about the operation.

  3. Click on each row to expand the detailed error information.

    Below is an example of error information for a failed bucket deletion operation, which explains that a bucket retention policy prevented the deletion of the bucket.

    Bucket deletion error details

Folders

Issue: I deleted some objects in my bucket, and now the folder that contained them does not appear in the Google Cloud console.

Solution: While the Google Cloud console displays your bucket's contents as if there was a directory structure, folders do not fundamentally exist in Cloud Storage. As a result, when you remove all objects with a common prefix from a bucket, the folder icon representing that group of objects no longer appears in the Google Cloud console.

Static website errors

The following are common issues that you may encounter when setting up a bucket to host a static website.

HTTPS serving

Issue: I want to serve my content over HTTPS without using a load balancer.

Solution: You can serve static content through HTTPS using direct URIs such as https://storage.googleapis.com/my-bucket/my-object. Additional options exist for serving your content through a custom domain over SSL.

Domain verification

Issue: I can't verify my domain.

Solution: Normally, the verification process in Search Console directs you to upload a file to your domain, but you may not have a way to do this without first having an associated bucket, which you can only create after you have performed domain verification.

In this case, verify ownership using the Domain name provider verification method. See Ownership verification for steps to accomplish this. This verification can be done before the bucket is created.

Inaccessible page

Issue: I get an Access denied error message for a web page served by my website.

Solution: Check that the object is shared publicly. If it is not, see Making Data Public for instructions on how to do this.

If you previously uploaded and shared an object but then upload a new version of it, you must reshare the object publicly, because the public permission is not carried over to the new version.

Permission update failed

Issue: I get an error when I attempt to make my data public.

Solution: Make sure that you have the setIamPolicy permission for your object or bucket. This permission is granted, for example, in the Storage Admin role. If you have the setIamPolicy permission and you still get an error, your bucket might be subject to public access prevention, which does not allow access to allUsers or allAuthenticatedUsers. Public access prevention might be set on the bucket directly, or it might be enforced through an organization policy that is set at a higher level.

Content download

Issue: I am prompted to download my page's content, instead of being able to view it in my browser.

Solution: If you specify a MainPageSuffix as an object that does not have a web content type, then instead of serving the page, site visitors are prompted to download the content. To resolve this issue, update the content-type metadata entry to a suitable value, such as text/html. See Editing object metadata for instructions on how to do this.
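
You can usually derive a suitable content type from the object's filename; Python's standard mimetypes module covers the common web types. An illustrative helper, not tied to any Cloud Storage API:

```python
import mimetypes

def web_content_type(object_name, default="application/octet-stream"):
    """Guess a Content-Type for a static-website object from its name."""
    ctype, _ = mimetypes.guess_type(object_name)
    return ctype or default

print(web_content_type("index.html"))       # text/html
print(web_content_type("file.unknownext"))  # application/octet-stream
```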

Latency

The following are common latency issues you might encounter. In addition, the Google Cloud Status Dashboard provides information about regional or global incidents affecting Google Cloud services such as Cloud Storage.

Upload or download latency

Issue: I'm seeing increased latency when uploading or downloading.

Solution: Use the gsutil perfdiag command to run performance diagnostics from the affected environment. Consider the following common causes of upload and download latency:

  • CPU or memory constraints: The affected environment's operating system should have tooling to measure local resource consumption such as CPU usage and memory usage.

  • Disk IO constraints: As part of the gsutil perfdiag command, use the rthru_file and wthru_file tests to gauge the performance impact caused by local disk IO.

  • Geographical distance: Performance can be impacted by the physical separation of your Cloud Storage bucket and affected environment, particularly in cross-continental cases. Testing with a bucket located in the same region as your affected environment can identify the extent to which geographic separation is contributing to your latency.

    • If applicable, the affected environment's DNS resolver should use the EDNS(0) protocol so that requests from the environment are routed through an appropriate Google Front End.

CLI or client library latency

Issue: I'm seeing increased latency when accessing Cloud Storage with gcloud storage, gsutil, or one of the client libraries.

Solution: The CLIs and the client libraries automatically retry requests when it's useful to do so, and this behavior can effectively increase latency as seen from the end user. Use the Cloud Monitoring metric storage.googleapis.com/api/request_count to see whether Cloud Storage is consistently serving a retryable response code, such as 429 or 5xx.
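
When reading that metric, it helps to separate retryable response codes from terminal ones; Cloud Storage's retry guidance treats 408, 429, and 5xx responses as retryable. A small classification sketch:

```python
def is_retryable(status_code):
    """True for response codes that the CLIs and client libraries retry."""
    return status_code in (408, 429) or 500 <= status_code <= 599

print([code for code in (200, 404, 408, 429, 503) if is_retryable(code)])
# [408, 429, 503]
```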

Proxy servers

Issue: I'm connecting through a proxy server. What do I need to do?

Solution: To access Cloud Storage through a proxy server, you must allow access to these domains:

  • accounts.google.com for creating OAuth2 authentication tokens
  • oauth2.googleapis.com for performing OAuth2 token exchanges
  • *.googleapis.com for storage requests
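
The required hosts can be expressed as simple patterns. The following sketch checks a hostname against that list using the standard fnmatch module; it is illustrative only and does not reflect how any particular proxy product is configured:

```python
from fnmatch import fnmatch

ALLOWED_PATTERNS = [
    "accounts.google.com",
    "oauth2.googleapis.com",
    "*.googleapis.com",
]

def proxy_allows(host):
    """True if the hostname matches one of the required domain patterns."""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_PATTERNS)

print(proxy_allows("storage.googleapis.com"))  # True
print(proxy_allows("example.com"))             # False
```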

If your proxy server or security policy doesn't support whitelisting by domain and instead requires whitelisting by IP network block, we strongly recommend that you configure your proxy server for all Google IP address ranges. You can find the address ranges by querying WHOIS data at ARIN. As a best practice, you should periodically review your proxy settings to ensure they match Google's IP addresses.

We do not recommend configuring your proxy with individual IP addresses you obtain from one-time lookups of oauth2.googleapis.com and storage.googleapis.com. Because Google services are exposed via DNS names that map to a large number of IP addresses that can change over time, configuring your proxy based on a one-time lookup may lead to failures to connect to Cloud Storage.

If your requests are being routed through a proxy server, you may need to check with your network administrator to ensure that the Authorization header containing your credentials is not stripped out by the proxy. Without the Authorization header, your requests are rejected and you receive a MissingSecurityHeader error.

What's next