perfdiag - Run performance diagnostic

gsutil perfdiag [-i in.json]
gsutil perfdiag [-o out.json] [-n objects] [-c processes] [-k threads]
                [-p parallelism type] [-y slices] [-s size] [-d directory]
                [-t tests] url...

The perfdiag command runs a suite of diagnostic tests for a given Google Cloud Storage bucket.

The 'url' parameter must name an existing bucket (e.g. gs://foo) to which the user has write permission. Several test files will be uploaded to and downloaded from this bucket. All test files will be deleted at the completion of the diagnostic if it finishes successfully.

gsutil performance can be impacted by many factors at the client, server, and in-between, such as: CPU speed; available memory; the access path to the local disk; network bandwidth; contention and error rates along the path between gsutil and Google; operating system buffering configuration; and firewalls and other network elements. The perfdiag command is provided so that customers can run a known measurement suite when troubleshooting performance problems.

Providing Diagnostic Output To Google Cloud Storage Team

If the Google Cloud Storage Team asks you to run a performance diagnostic, use the following command and email the resulting output file (output.json) to the team:

gsutil perfdiag -o output.json gs://your-bucket


Options

-n Sets the number of objects to use when downloading and uploading files during tests. Defaults to 5.
-c Sets the number of processes to use while running throughput experiments. The default value is 1.

-k Sets the number of threads per process to use while running throughput experiments. Each process will receive an equal number of threads. The default value is 1.

Note: All specified threads and processes will be created, but may not be saturated with work if too few objects (specified with -n) and too few components (specified with -y) are specified.
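The note above can be sketched numerically. This helper is purely an illustration of the saturation idea, not code taken from gsutil; the per-parallelism-type work counts follow the descriptions of fan, slice, and both given below:

```python
def max_busy_workers(objects, slices, processes, threads, parallelism="fan"):
    """Illustrative estimate (not from gsutil source) of how many of the
    processes * threads workers can be kept busy at once."""
    workers = processes * threads
    if parallelism == "fan":
        units = objects            # one thread per object
    elif parallelism == "slice":
        units = slices             # one object at a time, split into slices
    else:                          # "both"
        units = objects * slices   # all slices of all objects in flight
    return min(workers, units)

# With 2 processes x 4 threads but only -n 3 objects under "fan",
# at most 3 of the 8 workers have work at any moment.
print(max_busy_workers(objects=3, slices=4, processes=2, threads=4))  # 3
```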


-p Sets the type of parallelism to be used (only applicable when threads or processes are specified and threads * processes > 1). The default is to use fan. Must be one of the following:

fan Use one thread per object. This is akin to using gsutil -m cp, with sliced object download / parallel composite upload disabled.
slice Use Y (specified with -y) threads for each object, transferring one object at a time. This is akin to using sliced object download / parallel composite upload, without -m. Sliced uploads are not supported for s3.
both Use Y (specified with -y) threads for each object, transferring multiple objects at a time. This is akin to simultaneously using sliced object download / parallel composite upload and gsutil -m cp. Sliced uploads are not supported for s3.
-y Sets the number of slices to divide each file/object into while transferring data. Only applicable with the slice (or both) parallelism type. The default is 4 slices.
-s Sets the size (in bytes) for each of the N (set with -n) objects used in the read and write throughput tests. The default is 1 MiB. This can also be specified using byte suffixes such as 500K or 1M. Note: these values are interpreted as multiples of 1024 (K=1024, M=1024*1024, etc.) Note: If rthru_file or wthru_file are performed, N (set with -n) times as much disk space as specified will be required for the operation.
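The suffix arithmetic described above (K and M interpreted as multiples of 1024) can be sketched as follows; the parser itself is a hypothetical helper, not part of gsutil:

```python
def parse_size(value):
    """Interpret byte suffixes as multiples of 1024, as the -s option
    documentation describes. Hypothetical helper, not part of gsutil."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1].upper()
    if suffix in units:
        return int(value[:-1]) * units[suffix]
    return int(value)  # plain byte count, no suffix

print(parse_size("500K"))  # 512000
print(parse_size("1M"))    # 1048576
```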
-d Sets the directory to store temporary local files in. If not specified, a default temporary directory will be used.

-t Sets the list of diagnostic tests to perform. The default is to run the lat, rthru, and wthru diagnostic tests. Must be a comma-separated list containing one or more of the following:

lat For N (set with -n) objects, write the object, retrieve its metadata, read the object, and finally delete the object. Record the latency of each operation.
list Write N (set with -n) objects to the bucket, record how long it takes for the eventually consistent listing call to return the N objects in its result, delete the N objects, then record how long it takes listing to stop returning the N objects.
rthru Runs N (set with -n) read operations, with at most C (set with -c) reads outstanding at any given time.
rthru_file The same as rthru, but simultaneously writes data to the disk, to gauge the performance impact of the local disk on downloads.
wthru Runs N (set with -n) write operations, with at most C (set with -c) writes outstanding at any given time.
wthru_file The same as wthru, but simultaneously reads data from the disk, to gauge the performance impact of the local disk on uploads.

-m Adds metadata to the result JSON file. Multiple -m values can be specified. Example:

gsutil perfdiag -m "key1:val1" -m "key2:val2" gs://bucketname

Each metadata key will be added to the top-level "metadata" dictionary in the output JSON file.
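Assuming only what is stated above (each key lands in a top-level "metadata" dictionary in the output JSON), the file can be inspected with a short Python sketch; the JSON fragment below is illustrative, not a full perfdiag output:

```python
import json

# Illustrative fragment of a perfdiag output file; only the top-level
# "metadata" dictionary is documented, the surrounding structure is assumed.
raw = '{"metadata": {"key1": "val1", "key2": "val2"}}'
output = json.loads(raw)

for key, val in sorted(output["metadata"].items()):
    print(f"{key} = {val}")
```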

-o Writes the results of the diagnostic to an output file. The output is a JSON file containing system information and performance diagnostic results. The file can be read and reported later using the -i option.
-i Reads the JSON output file created using the -o option and prints a formatted description of the results.

Measuring Availability

The perfdiag command ignores the boto num_retries configuration parameter. Instead, it always retries on HTTP errors in the 500 range and keeps track of how many 500 errors were encountered during the test. The availability measurement is reported at the end of the test.
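As an illustration of how such an availability figure could be derived from the error counts described above (the exact formula perfdiag uses is not specified here, so this is an assumption):

```python
def availability_pct(total_requests, http_5xx_errors):
    """One plausible availability formula (an assumption, not taken from
    perfdiag's source): the share of requests that did not return a 5xx."""
    return 100.0 * (total_requests - http_5xx_errors) / total_requests

# 3 errors in the 500 range out of 1000 requests -> 99.7% availability.
print(round(availability_pct(1000, 3), 2))  # 99.7
```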

Note that HTTP responses are only recorded when the request was made in a single process. When using multiple processes or threads, read and write throughput measurements are performed in an external process, so the availability numbers reported won't include the throughput measurements.


Note: The perfdiag command collects system information. It collects your IP address, executes DNS queries to Google servers and collects the results, and collects network statistics information from the output of netstat -s. It will also attempt to connect to your proxy server if you have one configured. None of this information will be sent to Google unless you choose to send it.
