A client for Cloud Storage - Unified object storage.
Here's a simple usage example of the Java Storage client. This example shows how to create a Storage object.
Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of("bucket", "blob_name");
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
Blob blob = storage.create(blobInfo, "Hello, Cloud Storage!".getBytes(UTF_8));
This second example shows how to update an object's content if the object exists.
Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of("bucket", "blob_name");
Blob blob = storage.get(blobId);
if (blob != null) {
byte[] prevContent = blob.getContent();
System.out.println(new String(prevContent, UTF_8));
WritableByteChannel channel = blob.writer();
channel.write(ByteBuffer.wrap("Updated content".getBytes(UTF_8)));
channel.close();
}
For more detailed code examples, see the sample library.
When using google-cloud from outside of App Engine or Compute Engine, you must specify a project ID and provide credentials.
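For example, a minimal sketch (the project ID and key file path are placeholders; GoogleCredentials is from the com.google.auth.oauth2 package):
Storage storage = StorageOptions.newBuilder()
    .setProjectId("my-project-id") // placeholder project ID
    .setCredentials(GoogleCredentials.fromStream(new FileInputStream("/path/to/key.json"))) // placeholder key file
    .build()
    .getService();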
Operations in this library are generally thread-safe, except for the use of BlobReadChannel and BlobWriteChannel.
The GCS Java client library includes support for accessing GCS via gRPC. When using GCS from Google Compute Engine (GCE), this library can enable higher total throughput across large workloads that run on hundreds or thousands of VMs.
At present, GCS gRPC is GA with Allowlist. To access this API, contact the Google Cloud Storage gRPC team at gcs-grpc-contact@google.com with a list of GCS buckets you would like to allowlist. Note that while the service is GA (with Allowlist), the client library features remain experimental and subject to change without notice. The methods to create, list, query, and delete HMAC keys and notifications are unavailable over the gRPC transport.
This example shows how to enable gRPC with Direct Google Access, which is only supported on Google Compute Engine.
StorageOptions options = StorageOptions.grpc().setAttemptDirectPath(true).build();
try (Storage storage = options.getService()) {
BlobId blobId = BlobId.of("bucket", "blob_name");
Blob blob = storage.get(blobId);
if (blob != null) {
byte[] prevContent = blob.getContent();
System.out.println(new String(prevContent, UTF_8));
WritableByteChannel channel = blob.writer();
channel.write(ByteBuffer.wrap("Updated content".getBytes(UTF_8)));
channel.close();
}
}
This example shows how to enable gRPC.
StorageOptions options = StorageOptions.grpc().build();
try (Storage storage = options.getService()) {
BlobId blobId = BlobId.of("bucket", "blob_name");
Blob blob = storage.get(blobId);
if (blob != null) {
byte[] prevContent = blob.getContent();
System.out.println(new String(prevContent, UTF_8));
WritableByteChannel channel = blob.writer();
channel.write(ByteBuffer.wrap("Updated content".getBytes(UTF_8)));
channel.close();
}
}
See Also: Google Cloud Storage
Classes
Acl
Access Control List for buckets or blobs. See Also: About Access Control Lists
Acl.Builder
Builder for Acl objects.
Acl.Domain
Class for ACL Domain entities.
Acl.Entity
Base class for Access Control List entities.
Acl.Group
Class for ACL Group entities.
Acl.Project
Class for ACL Project entities.
Acl.Project.ProjectRole
Acl.RawEntity
Acl.Role
Acl.User
Class for ACL User entities.
BidiBlobWriteSessionConfig
Blob
An object in Google Cloud Storage. A Blob object includes the BlobId instance, the set of properties inherited from the BlobInfo class and the Storage instance. The class provides methods to perform operations on the object. Reading a property value does not issue any RPC calls. The object content is not stored within the Blob instance. Operations that access the content issue one or multiple RPC calls, depending on the content size.
Objects of this class are immutable. Operations that modify the blob like #update and #copyTo return a new object. Any changes to the object in Google Cloud Storage made after creation of the Blob are not visible in the Blob. To get a Blob object with the most recent information use #reload.
Example of getting the content of the object in Google Cloud Storage:
BlobId blobId = BlobId.of(bucketName, blobName);
Blob blob = storage.get(blobId);
long size = blob.getSize(); // no RPC call is required
byte[] content = blob.getContent(); // one or multiple RPC calls will be issued
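For example, a brief sketch of refreshing and then updating a blob's metadata (reusing the storage and blob variables above; the content type is illustrative):
Blob latest = blob.reload(); // fetches the current metadata and returns a new Blob
Blob updated = latest.toBuilder().setContentType("text/plain").build().update();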
Blob.BlobSourceOption
Class for specifying blob source options when Blob methods are used.
Blob.Builder
Builder for Blob.
BlobId
Google Storage Object identifier. A BlobId object includes the name of the containing bucket, the blob's name and possibly the blob's generation. If #getGeneration() is null the identifier refers to the latest blob's generation.
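For illustration, a BlobId can be created with or without a generation (the bucket, name, and generation below are placeholders):
BlobId latest = BlobId.of("bucket", "blob_name");          // refers to the latest generation
BlobId pinned = BlobId.of("bucket", "blob_name", 1234L);   // refers to a specific generation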
BlobInfo
Information about an object in Google Cloud Storage. A BlobInfo object includes the BlobId instance and the set of properties, such as the blob's access control configuration, user provided metadata, the CRC32C checksum, etc. Instances of this class are used to create a new object in Google Cloud Storage or update the properties of an existing object. To deal with existing Storage objects the API includes the Blob class which extends BlobInfo and declares methods to perform operations on the object. Neither BlobInfo nor Blob instances keep the object content, just the object properties.
Example of using BlobInfo to create an object in Google Cloud Storage:
BlobId blobId = BlobId.of(bucketName, blobName);
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
Blob blob = storage.create(blobInfo, "Hello, world".getBytes(StandardCharsets.UTF_8));
See Also: Concepts and Terminology
BlobInfo.Builder
Builder for BlobInfo.
BlobInfo.CustomerEncryption
Objects of this class hold information on the customer-supplied encryption key, if the blob is encrypted using such a key.
BlobInfo.ImmutableEmptyMap<K,V>
This class is meant for internal use only. Users are discouraged from using this class.
BlobInfo.Retention
Defines a blob's Retention policy. Can only be used on objects in a retention-enabled bucket.
BlobInfo.Retention.Builder
BlobInfo.Retention.Mode
BlobWriteSessionConfig
A sealed internal implementation only class which provides the means of configuring a BlobWriteSession.
A BlobWriteSessionConfig will be used to configure all BlobWriteSessions produced by an instance of Storage.
See Also: Storage#blobWriteSession(BlobInfo, BlobWriteOption...), BlobWriteSessionConfigs, GrpcStorageOptions.Builder#setBlobWriteSessionConfig(BlobWriteSessionConfig)
BlobWriteSessionConfigs
Factory class to select and construct BlobWriteSessionConfigs.
There are several strategies which can be used to upload a Blob to Google Cloud Storage. This class provides factories which allow you to select the appropriate strategy for your workload.
Strategy | Factory Method(s) | Description | Transport(s) Supported | Considerations | Retry Support | Cloud Storage API used |
---|---|---|---|---|---|---|
Default (Chunk based upload) | #getDefault() | Buffer up to a configurable amount of bytes in memory, write to Cloud Storage when full or close. Buffer size is configurable via DefaultBlobWriteSessionConfig#withChunkSize(int) | gRPC | The network will only be used for the following operations: | Each chunk is retried up to the limitations specified in StorageOptions#getRetrySettings() | Resumable Upload |
Buffer to disk then upload | #bufferToDiskThenUpload(Path), #bufferToDiskThenUpload(Collection<Path>) | Buffer bytes to a temporary file on disk. On close() upload the entire file's contents to Cloud Storage. Delete the temporary file. | gRPC | | Upload the file in the fewest number of RPCs possible, retrying within the limitations specified in StorageOptions#getRetrySettings() | Resumable Upload |
Journal to disk while uploading | #journaling(Collection<Path>) | Create a Resumable Upload Session; before transmitting bytes to Cloud Storage, write to a recovery file on disk. If the stream to Cloud Storage is interrupted with a retryable error, query the offset of the Resumable Upload Session, then open the recovery file from the offset and transmit the bytes to Cloud Storage. | gRPC | | Opening the stream for upload will be retried up to the limitations specified in StorageOptions#getRetrySettings(). All bytes are buffered to disk and allow for recovery from any arbitrary offset. | Resumable Upload |
Parallel Composite Upload | #parallelCompositeUpload() | Break the stream of bytes into smaller part objects, uploading each part in parallel. Then compose the parts together to make the ultimate object. | gRPC | | | |
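As a rough configuration sketch (not a definitive recipe), a strategy from the table above can be wired into a gRPC-transport client and used through Storage#blobWriteSession; the bucket, object name, and chunk size below are illustrative:
StorageOptions options = StorageOptions.grpc()
    .setBlobWriteSessionConfig(BlobWriteSessionConfigs.getDefault().withChunkSize(16 * 1024 * 1024))
    .build();
try (Storage storage = options.getService()) {
  BlobInfo info = BlobInfo.newBuilder(BlobId.of("bucket", "blob_name")).build();
  BlobWriteSession session = storage.blobWriteSession(info);
  try (WritableByteChannel channel = session.open()) {
    channel.write(ByteBuffer.wrap("Hello, Cloud Storage!".getBytes(UTF_8)));
  }
  BlobInfo written = session.getResult().get(); // resolves once the upload is finalized
}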
Bucket
A Google Cloud Storage bucket.
Objects of this class are immutable. Operations that modify the bucket like #update return a new object. To get a Bucket object with the most recent information use #reload. Bucket adds a layer of service-related functionality over BucketInfo.
Bucket.BlobTargetOption
Class for specifying blob target options when Bucket methods are used.
Bucket.BlobWriteOption
Class for specifying blob write options when Bucket methods are used.
Bucket.BucketSourceOption
Class for specifying bucket source options when Bucket methods are used.
Bucket.Builder
Builder for Bucket.
BucketInfo
Google Storage bucket metadata; See Also: Concepts and Terminology
BucketInfo.AgeDeleteRule (deprecated)
Deprecated. Use a LifecycleRule with a DeleteLifecycleAction and use LifecycleCondition.Builder.setAge instead.
For example, new DeleteLifecycleAction(1) is equivalent to new LifecycleRule(LifecycleAction.newDeleteAction(), LifecycleCondition.newBuilder().setAge(1).build()).
Delete rule class that sets a Time To Live for blobs in the bucket. See Also: Object Lifecycle Management
BucketInfo.Autoclass
Configuration for the Autoclass settings of a bucket. See Also: https://cloud.google.com/storage/docs/autoclass
BucketInfo.Autoclass.Builder
BucketInfo.Builder
Builder for BucketInfo.
BucketInfo.CreatedBeforeDeleteRule (deprecated)
Deprecated. Use a LifecycleRule with an action DeleteLifecycleAction and a condition LifecycleCondition.Builder.setCreatedBefore instead.
Delete rule class for blobs in the bucket that have been created before a certain date. See Also: Object Lifecycle Management
BucketInfo.CustomPlacementConfig
The bucket's custom placement configuration for Custom Dual Regions. If using a custom placement configuration, location is also required.
BucketInfo.CustomPlacementConfig.Builder
BucketInfo.DeleteRule (deprecated)
Deprecated. Use a LifecycleRule with a DeleteLifecycleAction and a LifecycleCondition which is equivalent to a subclass of DeleteRule instead.
Base class for a bucket's delete rules. Allows configuring automatic deletion of blobs and blob versions. See Also: Object Lifecycle Management
BucketInfo.HierarchicalNamespace
The bucket's hierarchical namespace (Folders) configuration. Enable this to use HNS.
BucketInfo.HierarchicalNamespace.Builder
BucketInfo.IamConfiguration
The Bucket's IAM Configuration. See Also: public-access-prevention, uniform bucket-level access
BucketInfo.IamConfiguration.Builder
Builder for IamConfiguration
BucketInfo.IsLiveDeleteRule (deprecated)
Deprecated. Use a LifecycleRule with a DeleteLifecycleAction and a condition LifecycleCondition.Builder.setIsLive instead.
Delete rule class to distinguish between live and archived blobs. See Also: Object Lifecycle Management
BucketInfo.LifecycleRule
Lifecycle rule for a bucket. Allows supported Actions, such as deleting and changing storage class, to be executed when certain Conditions are met.
Versions 1.50.0-1.111.2 of this library don’t support the CustomTimeBefore, DaysSinceCustomTime, DaysSinceNoncurrentTime and NoncurrentTimeBefore lifecycle conditions. To read GCS objects with those lifecycle conditions, update your Java client library to the latest version. See Also: Object Lifecycle Management
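For example, a rule that deletes objects older than 30 days could be expressed roughly as follows (the 30-day age is illustrative):
BucketInfo.LifecycleRule rule = new BucketInfo.LifecycleRule(
    BucketInfo.LifecycleRule.LifecycleAction.newDeleteAction(),
    BucketInfo.LifecycleRule.LifecycleCondition.newBuilder().setAge(30).build());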
BucketInfo.LifecycleRule.AbortIncompleteMPUAction
BucketInfo.LifecycleRule.DeleteLifecycleAction
BucketInfo.LifecycleRule.LifecycleAction
Base class for the Action to take when a Lifecycle Condition is met. Supported Actions are expressed as subclasses of this class, accessed by static factory methods.
BucketInfo.LifecycleRule.LifecycleCondition
Condition for a Lifecycle rule, specifies under what criteria an Action should be executed. See Also: Object Lifecycle Management
BucketInfo.LifecycleRule.LifecycleCondition.Builder
Builder for LifecycleCondition.
BucketInfo.LifecycleRule.SetStorageClassLifecycleAction
BucketInfo.Logging
The bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.
BucketInfo.Logging.Builder
BucketInfo.NumNewerVersionsDeleteRule (deprecated)
Deprecated. Use a LifecycleRule with a DeleteLifecycleAction and a condition LifecycleCondition.Builder.setNumberOfNewerVersions instead.
Delete rule class for versioned blobs. Specifies when to delete a blob's version according to the number of available newer versions for that blob. See Also: Object Lifecycle Management
BucketInfo.ObjectRetention
BucketInfo.ObjectRetention.Builder
BucketInfo.ObjectRetention.Mode
BucketInfo.SoftDeletePolicy
The bucket's soft delete policy. If this policy is set, any deleted objects will be soft-deleted according to the time specified in the policy.
BucketInfo.SoftDeletePolicy.Builder
BufferToDiskThenUpload
There are scenarios in which disk space is more plentiful than memory space. This new BlobWriteSessionConfig allows augmenting an instance of storage to produce BlobWriteSessions which will buffer to disk rather than holding things in memory.
Once the file on disk is closed, the entire file will then be uploaded to GCS. See Also: BlobWriteSessionConfigs#bufferToDiskThenUpload(Path), Storage#blobWriteSession(BlobInfo, BlobWriteOption...), BlobWriteSessionConfigs#bufferToDiskThenUpload(Collection), GrpcStorageOptions.Builder#setBlobWriteSessionConfig(BlobWriteSessionConfig)
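A minimal configuration sketch, assuming a writable temporary directory at the illustrative path below:
StorageOptions options = StorageOptions.grpc()
    .setBlobWriteSessionConfig(BlobWriteSessionConfigs.bufferToDiskThenUpload(Paths.get("/tmp/gcs-buffers"))) // java.nio.file.Paths
    .build();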
CanonicalExtensionHeadersSerializer
Canonical extension header serializer. See Also: Canonical Extension Headers
CopyWriter
Google Storage blob copy writer. A CopyWriter object allows copying both a blob's data and information. To override the source blob's information supply a BlobInfo to the CopyRequest using either Storage.CopyRequest.Builder#setTarget(BlobInfo, Storage.BlobTargetOption...) or Storage.CopyRequest.Builder#setTarget(BlobInfo, Iterable).
This class holds the result of a copy request. If the source and destination blobs share the same location and storage class the copy is completed in one RPC call; otherwise one or more #copyChunk calls are necessary to complete the copy. In addition, CopyWriter#getResult() can be used to automatically complete the copy and return information on the newly created blob. See Also: Rewrite
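For instance, a copy that overrides the target's content type and then materializes the result might be sketched as follows (bucket and object names are placeholders):
Storage.CopyRequest request = Storage.CopyRequest.newBuilder()
    .setSource(BlobId.of("bucket", "source_blob"))
    .setTarget(BlobInfo.newBuilder(BlobId.of("bucket", "target_blob")).setContentType("text/plain").build())
    .build();
CopyWriter copyWriter = storage.copy(request);
Blob copied = copyWriter.getResult(); // completes any remaining chunk copies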
Cors
Cross-Origin Resource Sharing (CORS) configuration for a bucket. See Also: Cross-Origin Resource Sharing (CORS)
Cors.Builder
CORS configuration builder.
Cors.Origin
Class for a CORS origin.
DefaultBlobWriteSessionConfig
Default Configuration to represent uploading to Google Cloud Storage in a chunked manner.
Perform a resumable upload, uploading at most chunkSize bytes each PUT.
Configuration of chunk size can be performed via DefaultBlobWriteSessionConfig#withChunkSize(int).
An instance of this class will provide a BlobWriteSession that is logically equivalent to the following:
Storage storage = ...;
WriteChannel writeChannel = storage.writer(BlobInfo, BlobWriteOption);
writeChannel.setChunkSize(chunkSize);
GrpcStorageOptions
GrpcStorageOptions.Builder
GrpcStorageOptions.GrpcStorageDefaults
GrpcStorageOptions.GrpcStorageFactory
Internal implementation detail, only public to allow for java.io.Serializable compatibility in com.google.cloud.ServiceOptions.
To access an instance of this class instead use GrpcStorageOptions.defaults().getDefaultServiceFactory(). See Also: GrpcStorageOptions.GrpcStorageDefaults#getDefaultServiceFactory(), GrpcStorageOptions#defaults()
GrpcStorageOptions.GrpcStorageRpcFactory
Internal implementation detail, only public to allow for java.io.Serializable compatibility in com.google.cloud.ServiceOptions.
To access an instance of this class instead use GrpcStorageOptions.defaults().getDefaultRpcFactory(). See Also: GrpcStorageOptions.GrpcStorageDefaults#getDefaultRpcFactory(), GrpcStorageOptions#defaults()
HmacKey
HMAC key for a service account.
HmacKey.Builder
Builder for HmacKey objects.
HmacKey.HmacKeyMetadata
The metadata for a service account HMAC key. This class holds all data associated with an HMAC key other than the secret key.
HmacKey.HmacKeyMetadata.Builder
Builder for HmacKeyMetadata objects.
HttpCopyWriter
HttpMethod
Http method supported by Storage service.
HttpStorageOptions
HttpStorageOptions.Builder
HttpStorageOptions.HttpStorageDefaults
HttpStorageOptions.HttpStorageFactory
Internal implementation detail, only public to allow for java.io.Serializable.
To access an instance of this class instead use HttpStorageOptions.defaults().getDefaultServiceFactory(). See Also: HttpStorageOptions#defaults(), HttpStorageDefaults#getDefaultServiceFactory()
HttpStorageOptions.HttpStorageRpcFactory
Internal implementation detail, only public to allow for java.io.Serializable.
To access an instance of this class instead use HttpStorageOptions.defaults().getDefaultRpcFactory(). See Also: HttpStorageDefaults#getDefaultRpcFactory(), HttpStorageOptions#defaults()
JournalingBlobWriteSessionConfig
There are scenarios in which disk space is more plentiful than memory space. This new BlobWriteSessionConfig allows augmenting an instance of storage to produce BlobWriteSessions which will buffer to disk rather than holding things in memory.
If we have disk available we can checkpoint the contents of an object to disk before transmitting to GCS. The checkpointed data on disk allows arbitrary rewind in the case of failure but allows the upload to happen as soon as the checkpoint ack is complete.
Due to the details of how Resumable Upload Sessions are implemented in the GCS gRPC API this is possible. However, this approach will not work with the HTTP transport's Resumable Upload Session spec. See Also: Storage#blobWriteSession(BlobInfo, BlobWriteOption...), GrpcStorageOptions.Builder#setBlobWriteSessionConfig(BlobWriteSessionConfig)
Notification
The class representing Pub/Sub notifications for Cloud Storage. See pubsub-notifications for details.
Notification.Builder
Builder for Notification.
NotificationInfo
The class representing Pub/Sub notification metadata for Cloud Storage.
NotificationInfo.Builder
Builder for NotificationInfo.
NotificationInfo.BuilderImpl
Builder for NotificationInfo.
Option<O>
Base class for Storage operation options.
ParallelCompositeUploadBlobWriteSessionConfig
Immutable config builder to configure BlobWriteSession instances to perform Parallel Composite Uploads.
Parallel Composite Uploads can yield higher throughput when uploading large objects. However, there are some things which must be kept in mind when choosing to use this strategy.
- Performing parallel composite uploads costs more money. Class A operations are performed to create each part and to perform each compose. If a storage tier other than STANDARD is used, early deletion fees apply to deletion of the parts. An illustrative example: upload a 5GiB object using 64MiB as the max size per part.
  - 80 Parts will be created (Class A)
  - 3 compose calls will be performed (Class A)
  - Delete 80 Parts along with 2 intermediary Compose objects (Free tier as long as STANDARD class)
- The service account/credentials used to perform the parallel composite upload require storage.objects.delete in order to clean up the temporary part and intermediary compose objects. To handle part and intermediary compose object deletion out of band, passing PartCleanupStrategy#never() to ParallelCompositeUploadBlobWriteSessionConfig#withPartCleanupStrategy(PartCleanupStrategy) will prevent automatic cleanup.
- Please see the Parallel composite uploads documentation for a more in-depth explanation of the limitations of Parallel composite uploads.
- A failed upload can leave part and intermediary compose objects behind which will count as storage usage, and you will be billed for it. By default, if an upload fails, an attempt to clean up the part and intermediary compose objects will be made. However, if the program were to crash there is no means for the client to perform the cleanup. Every part and intermediary compose object will be created with a name which ends in .part. An Object Lifecycle Management rule can be set up on your bucket to automatically clean up objects with the suffix after some period of time. See Object Lifecycle Management for full details and a guide on how to set up a Delete rule with a suffix match condition.
- Using parallel composite uploads is not a one-size-fits-all solution. They have very real overhead until uploading a large enough object. The inflection point is dependent upon many factors, and there is no one-size-fits-all value. You will need to experiment with your deployment and workload to determine if parallel composite uploads are useful to you. In general, if your object sizes are smaller than several hundred megabytes it is unlikely parallel composite uploads will be beneficial to overall throughput.
See Also: BlobWriteSessionConfigs#parallelCompositeUpload(), Storage#blobWriteSession(BlobInfo, BlobWriteOption...), GrpcStorageOptions.Builder#setBlobWriteSessionConfig(BlobWriteSessionConfig), https://cloud.google.com/storage/docs/parallel-composite-uploads
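A configuration sketch reflecting the notes above, leaving part cleanup to an out-of-band lifecycle rule by passing PartCleanupStrategy#never():
StorageOptions options = StorageOptions.grpc()
    .setBlobWriteSessionConfig(
        BlobWriteSessionConfigs.parallelCompositeUpload()
            .withPartCleanupStrategy(ParallelCompositeUploadBlobWriteSessionConfig.PartCleanupStrategy.never())) // clean up parts out of band
    .build();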
ParallelCompositeUploadBlobWriteSessionConfig.BufferAllocationStrategy
A strategy which dictates how buffers are to be used for individual parts. The chosen strategy will apply to all instances of BlobWriteSession created from a single instance of Storage. See Also: #withBufferAllocationStrategy(BufferAllocationStrategy)
ParallelCompositeUploadBlobWriteSessionConfig.ExecutorSupplier
Class which will be used to supply an Executor where work will be submitted when performing a parallel composite upload. See Also: #withExecutorSupplier(ExecutorSupplier)
ParallelCompositeUploadBlobWriteSessionConfig.PartCleanupStrategy
A cleanup strategy which will dictate what cleanup operations are performed automatically when performing a parallel composite upload. See Also: #withPartCleanupStrategy(PartCleanupStrategy)
ParallelCompositeUploadBlobWriteSessionConfig.PartMetadataFieldDecorator
A Decorator which is used to manipulate metadata fields, specifically on the part objects created in a Parallel Composite Upload See Also: #withPartMetadataFieldDecorator(PartMetadataFieldDecorator)
ParallelCompositeUploadBlobWriteSessionConfig.PartNamingStrategy
A naming strategy which will be used to generate a name for a part or intermediary compose object. See Also: #withPartNamingStrategy(PartNamingStrategy)
PostPolicyV4
Presigned V4 post policy. Instances of PostPolicyV4 include a URL and a map of fields that can be specified in an HTML form to submit a POST request to upload an object.
See POST Object for details of uploading by using HTML forms.
See Storage#generateSignedPostPolicyV4(BlobInfo, long, TimeUnit, PostPolicyV4.PostFieldsV4, PostPolicyV4.PostConditionsV4, Storage.PostPolicyV4Option...) for an example of usage.
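A hedged sketch of generating a policy for a browser form upload (the bucket, object name, and 7-day expiration are placeholders):
PostPolicyV4 policy = storage.generateSignedPostPolicyV4(
    BlobInfo.newBuilder(BlobId.of("bucket", "blob_name")).build(),
    7, TimeUnit.DAYS);
String url = policy.getUrl();                    // form action URL
Map<String, String> fields = policy.getFields(); // hidden form fields to include in the POST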
PostPolicyV4.ConditionV4
Class for a specific POST policy document condition. See Also: Policy document
PostPolicyV4.PostConditionsV4
A helper class for specifying conditions in a V4 POST Policy document. Used in: Storage#generateSignedPostPolicyV4(BlobInfo, long, TimeUnit, PostPolicyV4.PostFieldsV4, PostPolicyV4.PostConditionsV4, Storage.PostPolicyV4Option...). See Also: Policy document
PostPolicyV4.PostConditionsV4.Builder
PostPolicyV4.PostFieldsV4
A helper class to define fields to be specified in a V4 POST request. Instances of this class help to construct PostPolicyV4 objects. Used in: Storage#generateSignedPostPolicyV4(BlobInfo, long, TimeUnit, PostPolicyV4.PostFieldsV4, PostPolicyV4.PostConditionsV4, Storage.PostPolicyV4Option...).
See Also: POST Object Form fields
PostPolicyV4.PostFieldsV4.Builder
PostPolicyV4.PostPolicyV4Document
Class for a V4 POST Policy document. Used by Storage to construct PostPolicyV4 objects.
See Also: Policy document
Rpo
Enums for the Recovery Point Objective (RPO) of dual-region buckets, which determines how fast data is replicated between regions. See Also: https://cloud.google.com/storage/docs/turbo-replication
ServiceAccount
A service account, with its specified scopes, authorized for this instance. See Also: Authenticating from Google Cloud Storage
SignatureInfo
Signature Info holds payload components of the string that requires signing. See Also: Components
SignatureInfo.Builder
Storage.BlobGetOption
Class for specifying blob get options.
Storage.BlobListOption
Class for specifying blob list options.
Storage.BlobRestoreOption
Class for specifying blob restore options.
Storage.BlobSourceOption
Class for specifying blob source options.
Storage.BlobTargetOption
Class for specifying blob target options.
Storage.BlobWriteOption
Class for specifying blob write options.
Storage.BucketGetOption
Class for specifying bucket get options.
Storage.BucketListOption
Class for specifying bucket list options.
Storage.BucketSourceOption
Class for specifying bucket source options.
Storage.BucketTargetOption
Class for specifying bucket target options.
Storage.ComposeRequest
A class to contain all information needed for a Google Cloud Storage Compose operation. See Also: Compose Operation
Storage.ComposeRequest.Builder
Storage.ComposeRequest.SourceBlob
Class for Compose source blobs.
Storage.CopyRequest
A class to contain all information needed for a Google Cloud Storage Copy operation.
Storage.CopyRequest.Builder
Storage.CreateHmacKeyOption
Class for specifying createHmacKey options
Storage.DeleteHmacKeyOption
Class for specifying deleteHmacKey options
Storage.GetHmacKeyOption
Class for specifying getHmacKey options
Storage.ListHmacKeysOption
Class for specifying listHmacKeys options
Storage.PostPolicyV4Option
Class for specifying Post Policy V4 options.
Storage.SignUrlOption
Class for specifying signed URL options.
Storage.UpdateHmacKeyOption
Class for specifying updateHmacKey options
StorageBatch
A batch of operations to be submitted to Google Cloud Storage using a single RPC request.
Example of using a batch request to delete, update and get a blob:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
public void success(Boolean result) {
// deleted successfully
}
public void error(StorageException exception) {
// delete failed
}
});
batch.update(BlobInfo.newBuilder(secondBlob).setContentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(secondBlob);
batch.submit();
Blob blob = result.get(); // returns get result or throws StorageException
StorageClass
Enums for the storage classes. See https://cloud.google.com/storage/docs/storage-classes for details.
StorageOptions
StorageOptions.Builder
StorageOptions.DefaultStorageFactory (deprecated)
Deprecated. Use HttpStorageFactory
StorageOptions.DefaultStorageRpcFactory (deprecated)
Deprecated. Use HttpStorageRpcFactory
StorageRoles
IAM roles specific to Storage. An overview of the permissions available to Storage and the capabilities they grant can be found in the Google Cloud Storage IAM documentation.
Interfaces
BlobWriteSession
A session to write an object to Google Cloud Storage.
A session can only write a single version of an object. If writing multiple versions of an object, a new session must be created each time.
Provides an API that allows writing to and retrieving the resulting BlobInfo after write finalization.
The underlying implementation is dictated based upon the specified BlobWriteSessionConfig provided at StorageOptions creation time. See Also: BlobWriteSessionConfigs, GrpcStorageOptions.Builder#setBlobWriteSessionConfig(BlobWriteSessionConfig), BlobWriteSessionConfig
Storage
An interface for Google Cloud Storage. See Also: Google Cloud Storage
StorageFactory
An interface for Storage factories.
StorageRetryStrategy
A factory class which is used to provide access to ResultRetryAlgorithm for idempotent and non-idempotent calls made via Storage. Before Storage performs an operation it will determine if the operation is idempotent and select the appropriate ResultRetryAlgorithm to use for that invocation. See Also: #getDefaultStorageRetryStrategy(), #getUniformStorageRetryStrategy()
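For example, a client could be configured with the uniform strategy roughly as follows:
StorageOptions options = StorageOptions.newBuilder()
    .setStorageRetryStrategy(StorageRetryStrategy.getUniformStorageRetryStrategy()) // retry idempotent and non-idempotent calls alike
    .build();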
Enums
Acl.Entity.Type
BucketInfo.DeleteRule.Type
BucketInfo.PublicAccessPrevention
Public Access Prevention enum with expected values. See Also: public-access-prevention
HmacKey.HmacKeyState
NotificationInfo.EventType
NotificationInfo.PayloadFormat
PostPolicyV4.ConditionV4Type
Storage.BlobField
Storage.BucketField
Storage.PredefinedAcl
Storage.UriScheme
TransportCompatibility.Transport
Enum representing the transports com.google.cloud.storage classes have implementations for.
Exceptions
ParallelCompositeUploadException
An exception which provides access to created objects during a Parallel Composite Upload that did not finish successfully.
This exception can occur when calling any method on the java.nio.channels.WritableByteChannel returned from BlobWriteSession#open(), in which case it will be the cause of a StorageException.
Similarly, this exception will be the cause of a java.util.concurrent.CancellationException thrown by BlobWriteSession#getResult().
StorageBatchResult<T>
This class holds a single result of a batch call to Cloud Storage.
StorageException
Storage service exception. See Also: Google Cloud Storage error codes
Annotation Types
TransportCompatibility
Annotation which is used to convey which Cloud Storage API a class or method has compatibility with.
Not all operations are compatible with all transports.