Reference documentation and code samples for the Cloud Storage API class Google::Cloud::Storage::Bucket.
Bucket
Represents a Storage bucket. Belongs to a Project and has many Files.
Inherits
- Object
Example
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext"
Methods
#acl
def acl() -> Bucket::Acl
The Acl instance used to control access to the bucket.
A bucket has owners, writers, and readers. Permissions can be granted to an individual user's email address or a group's email address, or via a predefined permissions list.
Grant access to a user by prepending "user-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
email = "heidi@example.net"
bucket.acl.add_reader "user-#{email}"
Grant access to a group by prepending "group-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
email = "authors@example.net"
bucket.acl.add_reader "group-#{email}"
Or, grant access via a predefined permissions list:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.acl.public!
#api_url
def api_url() -> String
A URL that can be used to access the bucket using the REST API.
- (String)
#autoclass
def autoclass() -> Google::Apis::StorageV1::Bucket::Autoclass
The Autoclass configuration of the bucket.
- (Google::Apis::StorageV1::Bucket::Autoclass)
#autoclass_enabled
def autoclass_enabled() -> Boolean
Whether Autoclass is enabled for the bucket.
- (Boolean)
#autoclass_enabled=
def autoclass_enabled=(toggle)
Updates the bucket's Autoclass configuration. Autoclass defines the default storage class for objects in the bucket and automatically up- or down-grades the storage class of objects based on access patterns. Accepted values are true and false.
For more information, see Storage Classes.
- toggle (Boolean) — Whether to enable Autoclass for the bucket.
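Enable Autoclass on an existing bucket (a minimal sketch; the bucket name is illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.autoclass_enabled = true
bucket.autoclass_enabled #=> true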
#autoclass_terminal_storage_class
def autoclass_terminal_storage_class() -> String
The terminal storage class of the bucket's Autoclass configuration.
- (String)
#autoclass_terminal_storage_class_update_time
def autoclass_terminal_storage_class_update_time() -> DateTime
The time at which the Autoclass terminal storage class was last updated.
- (DateTime)
#autoclass_toggle_time
def autoclass_toggle_time() -> DateTime
The time at which Autoclass was last toggled for the bucket.
- (DateTime)
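Reading the Autoclass attributes together (a sketch; the returned values are illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.autoclass_enabled                #=> true
bucket.autoclass_terminal_storage_class #=> "NEARLINE"
bucket.autoclass_toggle_time            #=> DateTime of the last toggle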
#combine
def combine(sources, destination, acl: nil, encryption_key: nil, if_source_generation_match: nil, if_generation_match: nil, if_metageneration_match: nil) -> Google::Cloud::Storage::File
Concatenates a list of existing files in the bucket into a new file in the bucket. There is a limit (currently 32) to the number of files that can be composed in a single operation.
To compose files encrypted with a customer-supplied encryption key, use the encryption_key option. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the same key.
- sources (Array<String, Google::Cloud::Storage::File>) — The list of source file names or objects that will be concatenated into a single file.
- destination (String) — The name of the new file.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to this file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- encryption_key (String, nil) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the source files, if one was used. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the key.
- if_source_generation_match (Array<Integer>) (defaults to: nil) — Makes the operation conditional on whether the source files' current generations match the given values. The list must match sources item-to-item.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current metageneration matches the given value.
- (file) — A block yielding a delegate file object for setting the properties of the destination file.
- (Google::Cloud::Storage::File) — The new file.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"] new_file = bucket.compose sources, "path/to/new-file.ext"
Set the properties of the new file in a block:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"] new_file = bucket.compose sources, "path/to/new-file.ext" do |f| f.cache_control = "private, max-age=0, no-cache" f.content_disposition = "inline; filename=filename.ext" f.content_encoding = "deflate" f.content_language = "de" f.content_type = "application/json" end
Specify the generation of source files (but skip retrieval):
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file_1 = bucket.file "path/to/my-file-1.ext", generation: 1490390259479000, skip_lookup: true file_2 = bucket.file "path/to/my-file-2.ext", generation: 1490310974144000, skip_lookup: true new_file = bucket.compose [file_1, file_2], "path/to/new-file.ext"
#compose
def compose(sources, destination, acl: nil, encryption_key: nil, if_source_generation_match: nil, if_generation_match: nil, if_metageneration_match: nil) -> Google::Cloud::Storage::File
Concatenates a list of existing files in the bucket into a new file in the bucket. There is a limit (currently 32) to the number of files that can be composed in a single operation.
To compose files encrypted with a customer-supplied encryption key, use the encryption_key option. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the same key.
- sources (Array<String, Google::Cloud::Storage::File>) — The list of source file names or objects that will be concatenated into a single file.
- destination (String) — The name of the new file.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to this file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- encryption_key (String, nil) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the source files, if one was used. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the key.
- if_source_generation_match (Array<Integer>) (defaults to: nil) — Makes the operation conditional on whether the source files' current generations match the given values. The list must match sources item-to-item.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current metageneration matches the given value.
- (file) — A block yielding a delegate file object for setting the properties of the destination file.
- (Google::Cloud::Storage::File) — The new file.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"] new_file = bucket.compose sources, "path/to/new-file.ext"
Set the properties of the new file in a block:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"] new_file = bucket.compose sources, "path/to/new-file.ext" do |f| f.cache_control = "private, max-age=0, no-cache" f.content_disposition = "inline; filename=filename.ext" f.content_encoding = "deflate" f.content_language = "de" f.content_type = "application/json" end
Specify the generation of source files (but skip retrieval):
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file_1 = bucket.file "path/to/my-file-1.ext", generation: 1490390259479000, skip_lookup: true file_2 = bucket.file "path/to/my-file-2.ext", generation: 1490310974144000, skip_lookup: true new_file = bucket.compose [file_1, file_2], "path/to/new-file.ext"
#compose_file
def compose_file(sources, destination, acl: nil, encryption_key: nil, if_source_generation_match: nil, if_generation_match: nil, if_metageneration_match: nil) -> Google::Cloud::Storage::File
Concatenates a list of existing files in the bucket into a new file in the bucket. There is a limit (currently 32) to the number of files that can be composed in a single operation.
To compose files encrypted with a customer-supplied encryption key, use the encryption_key option. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the same key.
- sources (Array<String, Google::Cloud::Storage::File>) — The list of source file names or objects that will be concatenated into a single file.
- destination (String) — The name of the new file.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to this file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- encryption_key (String, nil) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the source files, if one was used. All source files must have been encrypted with the same key, and the resulting destination file will also be encrypted with the key.
- if_source_generation_match (Array<Integer>) (defaults to: nil) — Makes the operation conditional on whether the source files' current generations match the given values. The list must match sources item-to-item.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current metageneration matches the given value.
- (file) — A block yielding a delegate file object for setting the properties of the destination file.
- (Google::Cloud::Storage::File) — The new file.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"] new_file = bucket.compose sources, "path/to/new-file.ext"
Set the properties of the new file in a block:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" sources = ["path/to/my-file-1.ext", "path/to/my-file-2.ext"] new_file = bucket.compose sources, "path/to/new-file.ext" do |f| f.cache_control = "private, max-age=0, no-cache" f.content_disposition = "inline; filename=filename.ext" f.content_encoding = "deflate" f.content_language = "de" f.content_type = "application/json" end
Specify the generation of source files (but skip retrieval):
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file_1 = bucket.file "path/to/my-file-1.ext", generation: 1490390259479000, skip_lookup: true file_2 = bucket.file "path/to/my-file-2.ext", generation: 1490310974144000, skip_lookup: true new_file = bucket.compose [file_1, file_2], "path/to/new-file.ext"
#cors
def cors() { |cors| ... } -> Bucket::Cors
Returns the current CORS configuration for a static website served from the bucket.
The return value is a frozen (unmodifiable) array of hashes containing the attributes specified for the Bucket resource field cors.
This method also accepts a block for updating the bucket's CORS rules. See Cors for details.
- (cors) — a block for setting CORS rules
- cors (Bucket::Cors) — the object accepting CORS rules
- (Bucket::Cors) — The frozen builder object.
Retrieving the bucket's CORS rules.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.cors.size #=> 2 rule = bucket.cors.first rule.origin #=> ["http://example.org"] rule.methods #=> ["GET","POST","DELETE"] rule.headers #=> ["X-My-Custom-Header"] rule.max_age #=> 3600
Updating the bucket's CORS rules inside a block.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.update do |b| b.cors do |c| c.add_rule ["http://example.org", "https://example.org"], "*", headers: ["X-My-Custom-Header"], max_age: 3600 end end
#create_file
def create_file(file, path = nil, acl: nil, cache_control: nil, content_disposition: nil, content_encoding: nil, content_language: nil, content_type: nil, custom_time: nil, checksum: nil, crc32c: nil, md5: nil, metadata: nil, storage_class: nil, encryption_key: nil, kms_key: nil, temporary_hold: nil, event_based_hold: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil) -> Google::Cloud::Storage::File
Creates a new File object by providing a path to a local file (or any File-like object such as StringIO) to upload, along with the path at which to store it in the bucket.
Customer-supplied encryption keys
By default, Google Cloud Storage manages server-side encryption keys on your behalf. However, a customer-supplied encryption key can be provided with the encryption_key option. If given, the same key must be provided to subsequently download or copy the file. If you use customer-supplied encryption keys, you must securely manage your keys and ensure that they are not lost. Also, please note that file metadata is not encrypted, with the exception of the CRC32C checksum and MD5 hash. The names of files and buckets are also not encrypted, and you can read or update the metadata of an encrypted file without providing the encryption key.
- file (String, ::File) — Path of the file on the filesystem to upload. Can be a File object, or File-like object such as StringIO. (If the object does not have a path, a path argument must also be provided.)
- path (String) — Path to store the file in Google Cloud Storage.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to this file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- cache_control (String) (defaults to: nil) — The Cache-Control response header to be returned when the file is downloaded.
- content_disposition (String) (defaults to: nil) — The Content-Disposition response header to be returned when the file is downloaded.
- content_encoding (String) (defaults to: nil) — The Content-Encoding response header to be returned when the file is downloaded. For example, content_encoding: "gzip" can indicate to clients that the uploaded data is gzip-compressed. However, there is no check to guarantee the specified Content-Encoding has actually been applied to the file data, and incorrectly specifying the file's encoding could lead to unintended behavior on subsequent download requests.
- content_language (String) (defaults to: nil) — The Content-Language response header to be returned when the file is downloaded.
- content_type (String) (defaults to: nil) — The Content-Type response header to be returned when the file is downloaded.
- custom_time (DateTime) (defaults to: nil) — A custom time specified by the user for the file. Once set, custom_time can't be unset, and it can only be changed to a time in the future. If custom_time must be unset, you must either perform a rewrite operation, or upload the data again and create a new file.
- checksum (Symbol, nil) (defaults to: nil) — The type of checksum for the client to automatically calculate and send with the create request to verify the integrity of the object. If provided, Cloud Storage will only create the file if the value calculated by the client matches the value calculated by the service. Acceptable values are:
  - md5 — Calculate and provide a checksum using the MD5 hash.
  - crc32c — Calculate and provide a checksum using the CRC32c hash.
  - all — Calculate and provide checksums for all available verifications.
  Optional. The default is nil. Do not provide if also providing a corresponding crc32c or md5 argument. See Validation for more information.
- crc32c (String) (defaults to: nil) — The CRC32c checksum of the file data, as described in RFC 4960, Appendix B. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :crc32c or checksum: :all argument. See Validation for more information.
- md5 (String) (defaults to: nil) — The MD5 hash of the file data. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :md5 or checksum: :all argument. See Validation for more information.
- metadata (Hash) (defaults to: nil) — A hash of custom, user-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers.
- storage_class (Symbol, String) (defaults to: nil) — Storage class of the file. Determines how the file is stored and determines the SLA and the cost of storage. Accepted values include :standard, :nearline, :coldline, and :archive, as well as the equivalent strings returned by #storage_class. :multi_regional, :regional, and :durable_reduced_availability are accepted legacy storage classes. For more information, see Storage Classes and Per-Object Storage Class. The default value is the default storage class for the bucket.
- encryption_key (String) (defaults to: nil) — Optional. A customer-supplied, AES-256 encryption key that will be used to encrypt the file. Do not provide if kms_key is used.
- kms_key (String) (defaults to: nil) — Optional. Resource name of the Cloud KMS key, of the form projects/my-prj/locations/kr-loc/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the file. The KMS key ring must use the same location as the bucket. The Service Account associated with your project requires access to this encryption key. Do not provide if encryption_key is used.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
- (ArgumentError)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.create_file "path/to/local.file.ext"
Specifying a destination path:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.create_file "path/to/local.file.ext", "destination/path/file.ext"
Providing a customer-supplied encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # Key generation shown for example purposes only. Write your own. cipher = OpenSSL::Cipher.new "aes-256-cfb" cipher.encrypt key = cipher.random_key bucket.create_file "path/to/local.file.ext", "destination/path/file.ext", encryption_key: key # Store your key and hash securely for later use. file = bucket.file "destination/path/file.ext", encryption_key: key
Providing a customer-managed Cloud KMS encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # KMS key ring must use the same location as the bucket. kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" bucket.create_file "path/to/local.file.ext", "destination/path/file.ext", kms_key: kms_key_name file = bucket.file "destination/path/file.ext" file.kms_key #=> kms_key_name
Create a file with gzip-encoded data.
require "zlib" require "google/cloud/storage" storage = Google::Cloud::Storage.new gz = StringIO.new "" z = Zlib::GzipWriter.new gz z.write "Hello world!" z.close data = StringIO.new gz.string bucket = storage.bucket "my-bucket" bucket.create_file data, "path/to/gzipped.txt", content_encoding: "gzip" file = bucket.file "path/to/gzipped.txt" # The downloaded data is decompressed by default. file.download "path/to/downloaded/hello.txt" # The downloaded data remains compressed with skip_decompress. file.download "path/to/downloaded/gzipped.txt", skip_decompress: true
#create_notification
def create_notification(topic, custom_attrs: nil, event_types: nil, prefix: nil, payload: nil) -> Google::Cloud::Storage::Notification
Creates a new Pub/Sub notification subscription for the bucket.
- topic (String) — The name of the Cloud PubSub topic to which the notification subscription will publish.
- custom_attrs (Hash(String => String)) (defaults to: nil) — The custom attributes for the notification. An optional list of additional attributes to attach to each Cloud Pub/Sub message published for the notification subscription.
- event_types (Symbol, String, Array<Symbol, String>) (defaults to: nil) — The event types for the notification subscription. If provided, messages will only be sent for the listed event types. If empty, messages will be sent for all event types. Acceptable values are:
  - :finalize — Sent when a new object (or a new generation of an existing object) is successfully created in the bucket. This includes copying or rewriting an existing object. A failed upload does not trigger this event.
  - :update — Sent when the metadata of an existing object changes.
  - :delete — Sent when an object has been permanently deleted. This includes objects that are overwritten or are deleted as part of the bucket's lifecycle configuration. For buckets with object versioning enabled, this is not sent when an object is archived (see OBJECT_ARCHIVE), even if archival occurs via the File#delete method.
  - :archive — Only sent when the bucket has enabled object versioning. This event indicates that the live version of an object has become an archived version, either because it was archived or because it was overwritten by the upload of an object of the same name.
- prefix (String) (defaults to: nil) — The file name prefix for the notification subscription. If provided, the notification will only be applied to file names that begin with this prefix.
- payload (Symbol, String, Boolean) (defaults to: nil) — The desired content of the Pub/Sub message payload. Acceptable values are:
  - :json or true — The Pub/Sub message payload will be a UTF-8 string containing the resource representation of the file's metadata.
  - :none or false — No payload is included with the notification.
  The default value is :json.
require "google/cloud/pubsub" require "google/cloud/storage" pubsub = Google::Cloud::Pubsub.new storage = Google::Cloud::Storage.new topic = pubsub.create_topic "my-topic" topic.policy do |p| p.add "roles/pubsub.publisher", "serviceAccount:#{storage.service_account_email}" end bucket = storage.bucket "my-bucket" notification = bucket.create_notification topic.name
#created_at
def created_at() -> DateTime
Creation time of the bucket.
- (DateTime)
#data_locations
def data_locations()
#default_acl
def default_acl() -> Bucket::DefaultAcl
The DefaultAcl instance used to control access to the bucket's files.
A bucket's files have owners, writers, and readers. Permissions can be granted to an individual user's email address or a group's email address, or via a predefined permissions list.
Grant access to a user by prepending "user-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
email = "heidi@example.net"
bucket.default_acl.add_reader "user-#{email}"
Grant access to a group by prepending "group-" to an email:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-todo-app"
email = "authors@example.net"
bucket.default_acl.add_reader "group-#{email}"
Or, grant access via a predefined permissions list:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.default_acl.public!
#default_event_based_hold=
def default_event_based_hold=(new_default_event_based_hold)
Updates the default event-based hold field for the bucket. This field controls the initial state of the event_based_hold field for newly-created files in the bucket.
See File#event_based_hold? and File#set_event_based_hold!.
To pass metageneration preconditions, call this method within a block passed to #update.
- new_default_event_based_hold (Boolean) — The default event-based hold field for the bucket.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.update do |b| b.retention_period = 2592000 # 30 days in seconds b.default_event_based_hold = true end file = bucket.create_file "path/to/local.file.ext" file.event_based_hold? # true file.delete # raises Google::Cloud::PermissionDeniedError file.release_event_based_hold! # The end of the retention period is calculated from the time that # the event-based hold was released. file.retention_expires_at
#default_event_based_hold?
def default_event_based_hold?() -> Boolean
Whether the event_based_hold field for newly-created files in the bucket will be initially set to true. See #default_event_based_hold=, File#event_based_hold?, and File#set_event_based_hold!.
- (Boolean) — Returns true if the event_based_hold field for newly-created files in the bucket will be initially set to true, otherwise false.
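Pairing the getter with #default_event_based_hold= (a sketch assuming the default was not previously set):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.default_event_based_hold? #=> false
bucket.default_event_based_hold = true
bucket.default_event_based_hold? #=> true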
#default_kms_key
def default_kms_key() -> String, nil
The Cloud KMS encryption key that will be used to protect files.
For example: projects/a/locations/b/keyRings/c/cryptoKeys/d
- (String, nil) — A Cloud KMS encryption key, or nil if none has been configured.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # KMS key ring must use the same location as the bucket. kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" bucket.default_kms_key = kms_key_name bucket.default_kms_key #=> kms_key_name
#default_kms_key=
def default_kms_key=(new_default_kms_key)
Set the Cloud KMS encryption key that will be used to protect files.
For example: projects/a/locations/b/keyRings/c/cryptoKeys/d
To pass metageneration preconditions, call this method within a block passed to #update.
- new_default_kms_key (String, nil) — New Cloud KMS key name, or nil to delete the Cloud KMS encryption key.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # KMS key ring must use the same location as the bucket. kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" bucket.default_kms_key = kms_key_name
Delete the default Cloud KMS encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.default_kms_key = nil
#delete
def delete(if_metageneration_match: nil, if_metageneration_not_match: nil) -> Boolean
Permanently deletes the bucket. The bucket must be empty before it can be deleted.
The API call to delete the bucket may be retried under certain conditions. See Google::Cloud#storage to control this behavior.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the bucket's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the bucket's current metageneration does not match the given value.
- (Boolean) — Returns true if the bucket was deleted.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.delete
#exists?
def exists?() -> Boolean
Determines whether the bucket exists in the Storage service.
- (Boolean) — true if the bucket exists in the Storage service.
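For example, #exists? can verify a lazily-loaded reference; a sketch assuming the bucket was retrieved with skip_lookup: true to defer the initial API call:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "my-bucket", skip_lookup: true
bucket.exists? #=> true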
#file
def file(path, generation: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, skip_lookup: nil, encryption_key: nil, soft_deleted: nil) -> Google::Cloud::Storage::File, nil
Retrieves a file matching the path.
If a customer-supplied encryption key was used with #create_file, the encryption_key option must be provided or else the file's CRC32C checksum and MD5 hash will not be returned.
- path (String) — Name (path) of the file.
- generation (Integer) (defaults to: nil) — When present, selects a specific revision of this object. Default is the latest version.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
- skip_lookup (Boolean) (defaults to: nil) — Optionally create a File object without verifying the file resource exists on the Storage service. Calls made on this object will raise errors if the file resource does not exist. Default is false.
- encryption_key (String) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the file, if one was provided to #create_file. (Not used if skip_lookup is also set.)
- soft_deleted (Boolean) (defaults to: nil) — Optional. If true, only soft-deleted object versions will be listed. The default is false.
- (Google::Cloud::Storage::File, nil) — Returns nil if the file does not exist.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" puts file.name
#files
def files(prefix: nil, delimiter: nil, token: nil, max: nil, versions: nil, match_glob: nil, include_folders_as_prefixes: nil, soft_deleted: nil) -> Array<Google::Cloud::Storage::File>
Retrieves a list of files matching the criteria.
- prefix (String) (defaults to: nil) — Filter results to files whose names begin with this prefix.
- delimiter (String) (defaults to: nil) — Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.
- token (String) (defaults to: nil) — A previously-returned page token representing part of the larger set of results to view.
- match_glob (String) (defaults to: nil) — A glob pattern used to filter results returned in items (e.g. foo*bar). The string value must be UTF-8 encoded. See: https://cloud.google.com/storage/docs/json_api/v1/objects/list#list-object-glob
- max (Integer) (defaults to: nil) — Maximum number of items plus prefixes to return. As duplicate prefixes are omitted, fewer total results may be returned than requested. The default value of this parameter is 1,000 items.
- versions (Boolean) (defaults to: nil) — If true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.
- include_folders_as_prefixes (Boolean) (defaults to: nil) — If true, will also include folders and managed folders, besides objects, in the returned prefixes. Only applicable if delimiter is set to '/'.
- soft_deleted (Boolean) (defaults to: nil) — If true, only soft-deleted object versions will be listed. The default is false.
- (Array<Google::Cloud::Storage::File>) — (See File::List)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" files = bucket.files files.each do |file| puts file.name end
Retrieve all files: (See File::List#all)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" files = bucket.files files.all do |file| puts file.name end
#find_file
def find_file(path, generation: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, skip_lookup: nil, encryption_key: nil, soft_deleted: nil) -> Google::Cloud::Storage::File, nil
Retrieves a file matching the path.
If a customer-supplied encryption key was used with #create_file, the encryption_key option must be provided or else the file's CRC32C checksum and MD5 hash will not be returned.
- path (String) — Name (path) of the file.
- generation (Integer) (defaults to: nil) — When present, selects a specific revision of this object. Default is the latest version.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
- skip_lookup (Boolean) (defaults to: nil) — Optionally create a File object without verifying the file resource exists on the Storage service. Calls made on this object will raise errors if the file resource does not exist. Default is false.
- encryption_key (String) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the file, if one was provided to #create_file. (Not used if skip_lookup is also set.)
- soft_deleted (Boolean) (defaults to: nil) — Optional. If true, only soft-deleted object versions will be listed. The default is false.
- (Google::Cloud::Storage::File, nil) — Returns nil if the file does not exist.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" puts file.name
#find_files
def find_files(prefix: nil, delimiter: nil, token: nil, max: nil, versions: nil, match_glob: nil, include_folders_as_prefixes: nil, soft_deleted: nil) -> Array<Google::Cloud::Storage::File>
Retrieves a list of files matching the criteria.
- prefix (String) (defaults to: nil) — Filter results to files whose names begin with this prefix.
- delimiter (String) (defaults to: nil) — Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.
- token (String) (defaults to: nil) — A previously-returned page token representing part of the larger set of results to view.
- match_glob (String) (defaults to: nil) — A glob pattern used to filter results returned in items (e.g. foo*bar). The string value must be UTF-8 encoded. See: https://cloud.google.com/storage/docs/json_api/v1/objects/list#list-object-glob
- max (Integer) (defaults to: nil) — Maximum number of items plus prefixes to return. As duplicate prefixes are omitted, fewer total results may be returned than requested. The default value of this parameter is 1,000 items.
- versions (Boolean) (defaults to: nil) — If true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.
- include_folders_as_prefixes (Boolean) (defaults to: nil) — If true, will also include folders and managed folders, besides objects, in the returned prefixes. Only applicable if delimiter is set to '/'.
- soft_deleted (Boolean) (defaults to: nil) — If true, only soft-deleted object versions will be listed. The default is false.
- (Array<Google::Cloud::Storage::File>) — (See File::List)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" files = bucket.files files.each do |file| puts file.name end
Retrieve all files: (See File::List#all)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" files = bucket.files files.all do |file| puts file.name end
#find_notification
def find_notification(id) -> Google::Cloud::Storage::Notification, nil
Retrieves a Pub/Sub notification subscription for the bucket.
- id (String) — The Notification ID.
- (Google::Cloud::Storage::Notification, nil) — Returns nil if the notification does not exist
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" notification = bucket.notification "1" puts notification.id
#find_notifications
def find_notifications() -> Array<Google::Cloud::Storage::Notification>
Retrieves the entire list of Pub/Sub notification subscriptions for the bucket.
- (Array<Google::Cloud::Storage::Notification>)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" notifications = bucket.notifications notifications.each do |notification| puts notification.id end
#generate_signed_post_policy_v4
def generate_signed_post_policy_v4(path, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil, expires: nil, fields: nil, conditions: nil, scheme: "https", virtual_hosted_style: nil, bucket_bound_hostname: nil) -> PostObject
Generate a PostObject that includes the fields and URL to upload objects via HTML forms. The resulting PostObject is based on a policy document created from the method arguments. This policy provides authorization to ensure that the HTML form can upload files into the bucket. See Signatures - Policy document.

Generating a PostObject requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to generate_signed_post_policy_v4.
A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
- path (String) — Path to the file in Google Cloud Storage.
- issuer (String) (defaults to: nil) — Service Account's Client Email.
- client_email (String) (defaults to: nil) — Service Account's Client Email.
- signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key. When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.
- expires (Integer) (defaults to: nil) — The number of seconds until the URL expires. The default is 604800 (7 days).
- fields (Hash{String => String}) (defaults to: nil) — User-supplied form fields such as acl, cache-control, success_action_status, and success_action_redirect. Optional. See Upload an object with HTML forms - Form fields.
- conditions (Array<Hash{String => String}|Array<String>>) (defaults to: nil) — An array of policy conditions that every upload must satisfy. For example: [["eq", "$Content-Type", "image/jpeg"]]. Optional. See Signatures - Policy document.
scheme (String) (defaults to: "https") — The URL scheme. The default value is
HTTPS
. -
virtual_hosted_style (Boolean) (defaults to: nil) — Whether to use a virtual hosted-style
hostname, which adds the bucket into the host portion of the URI rather
than the path, e.g.
https://mybucket.storage.googleapis.com/...
. The default value offalse
uses the form ofhttps://storage.googleapis.com/mybucket
. -
bucket_bound_hostname (String) (defaults to: nil) — Use a bucket-bound hostname, which
replaces the
storage.googleapis.com
host with the name of aCNAME
bucket, e.g. a bucket namedgcs-subdomain.my.domain.tld
, or a Google Cloud Load Balancer which routes to a bucket you own, e.g.my-load-balancer-domain.tld
.
- (PostObject) — An object containing the URL, fields, and values needed to upload files via HTML forms.
- (SignedUrlUnavailable) — If the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" conditions = [["starts-with", "$acl","public"]] post = bucket.generate_signed_post_policy_v4 "avatars/heidi/400x400.png", expires: 10, conditions: conditions post.url #=> "https://storage.googleapis.com/my-todo-app/" post.fields["key"] #=> "my-todo-app/avatars/heidi/400x400.png" post.fields["policy"] #=> "ABC...XYZ" post.fields["x-goog-algorithm"] #=> "GOOG4-RSA-SHA256" post.fields["x-goog-credential"] #=> "cred@pid.iam.gserviceaccount.com/20200123/auto/storage/goog4_request" post.fields["x-goog-date"] #=> "20200128T000000Z" post.fields["x-goog-signature"] #=> "4893a0e...cd82"
Using Cloud IAMCredentials signBlob to create the signature:
require "google/cloud/storage" require "google/apis/iamcredentials_v1" require "googleauth" # Issuer is the service account email that the Signed URL will be signed with # and any permission granted in the Signed URL must be granted to the # Google Service Account. issuer = "service-account@project-id.iam.gserviceaccount.com" # Create a lambda that accepts the string_to_sign signer = lambda do |string_to_sign| IAMCredentials = Google::Apis::IamcredentialsV1 iam_client = IAMCredentials::IAMCredentialsService.new # Get the environment configured authorization scopes = ["https://www.googleapis.com/auth/iam"] iam_client.authorization = Google::Auth.get_application_default scopes request = Google::Apis::IamcredentialsV1::SignBlobRequest.new( payload: string_to_sign ) resource = "projects/-/serviceAccounts/#{issuer}" response = iam_client.sign_service_account_blob resource, request response.signed_blob end storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" conditions = [["starts-with", "$acl","public"]] post = bucket.generate_signed_post_policy_v4 "avatars/heidi/400x400.png", expires: 10, conditions: conditions, issuer: issuer, signer: signer post.url #=> "https://storage.googleapis.com/my-todo-app/" post.fields["key"] #=> "my-todo-app/avatars/heidi/400x400.png" post.fields["policy"] #=> "ABC...XYZ" post.fields["x-goog-algorithm"] #=> "GOOG4-RSA-SHA256" post.fields["x-goog-credential"] #=> "cred@pid.iam.gserviceaccount.com/20200123/auto/storage/goog4_request" post.fields["x-goog-date"] #=> "20200128T000000Z" post.fields["x-goog-signature"] #=> "4893a0e...cd82"
#hierarchical_namespace
def hierarchical_namespace() -> Google::Apis::StorageV1::Bucket::HierarchicalNamespace
The bucket's hierarchical namespace (Folders) configuration. This value can be modified by calling #hierarchical_namespace=.
- (Google::Apis::StorageV1::Bucket::HierarchicalNamespace)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.hierarchical_namespace
#hierarchical_namespace=
def hierarchical_namespace=(new_hierarchical_namespace)
Sets the value of Hierarchical Namespace (Folders) for the bucket. This can only be enabled at bucket create time. If this is enabled, Uniform Bucket-Level Access must also be enabled. This value can be queried by calling #hierarchical_namespace.
- new_hierarchical_namespace (Google::Apis::StorageV1::Bucket::HierarchicalNamespace, Hash(String => String)) — The bucket's new Hierarchical Namespace Configuration.
Enable Hierarchical Namespace using the HierarchicalNamespace class:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" hierarchical_namespace = Google::Apis::StorageV1::Bucket::HierarchicalNamespace.new hierarchical_namespace.enabled = true bucket.hierarchical_namespace = hierarchical_namespace
Disable Hierarchical Namespace using Hash:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" hierarchical_namespace = { enabled: false } bucket.hierarchical_namespace = hierarchical_namespace
#id
def id() -> String
The ID of the bucket.
- (String)
#kind
def kind() -> String
The kind of item this is. For buckets, this is always storage#bucket.
- (String)
#labels
def labels() -> Hash(String => String)
A hash of user-provided labels. The hash is frozen and changes are not allowed.
- (Hash(String => String))
#labels=
def labels=(labels)
Updates the hash of user-provided labels.
To pass metageneration preconditions, call this method within a block passed to #update.
- labels (Hash(String => String)) — The user-provided labels.
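Replacing the labels hash (a minimal sketch; label keys and values are illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.labels = { "env" => "production", "team" => "storage" }
bucket.labels["env"] #=> "production"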
#lifecycle
def lifecycle() { |lifecycle| ... } -> Bucket::Lifecycle
Returns the current Object Lifecycle Management rules configuration for the bucket.
This method also accepts a block for updating the bucket's Object Lifecycle Management rules. See Lifecycle for details.
- (lifecycle) — a block for setting Object Lifecycle Management rules
- lifecycle (Bucket::Lifecycle) — the object accepting Object Lifecycle Management rules
- (Bucket::Lifecycle) — The frozen builder object.
Retrieving a bucket's lifecycle management rules.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.lifecycle.size #=> 2 rule = bucket.lifecycle.first rule.action #=> "SetStorageClass" rule.storage_class #=> "COLDLINE" rule.age #=> 10 rule.matches_storage_class #=> ["STANDARD", "NEARLINE"]
Updating the bucket's lifecycle management rules in a block.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.update do |b| b.lifecycle do |l| # Remove the last rule from the array l.pop # Remove rules with the given condition l.delete_if do |r| r.matches_storage_class.include? "NEARLINE" end # Update rules l.each do |r| r.age = 90 if r.action == "Delete" end # Add a rule l.add_set_storage_class_rule "COLDLINE", age: 10 end end
#location
def location() -> String
The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.
- (String)
#location_type
def location_type() -> String
The bucket's location type. Location type defines the geographic placement of the bucket's data and affects cost, performance, and availability. There are three possible values:
- region — Lowest latency within a single region
- multi-region — Highest availability across largest area
- dual-region — High availability and low latency across 2 regions
- (String) — The location type code: "region", "multi-region", or "dual-region"
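For example (a sketch; the returned values depend on how the bucket was created):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.location #=> "US"
bucket.location_type #=> "multi-region"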
#lock_retention_policy!
def lock_retention_policy!() -> Boolean
PERMANENTLY locks the retention policy (see #retention_period=) on the bucket if one exists. The policy is transitioned to a locked state in which its duration cannot be reduced.
Locked policies can be extended in duration by setting #retention_period= to a higher value. Such an extension is permanent, and it cannot later be reduced. The extended duration will apply retroactively to all files currently in the bucket.
This method also creates a lien on the resourcemanager.projects.delete permission for the project containing the bucket.

The bucket's metageneration value is required for the lock policy API call. Attempting to call this method on a bucket that was loaded with the skip_lookup: true option will result in an error.
- (Boolean) — Returns true if the lock operation is successful.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.retention_period = 2592000 # 30 days in seconds bucket.lock_retention_policy! bucket.retention_policy_locked? # true file = bucket.create_file "path/to/local.file.ext" file.delete # raises Google::Cloud::PermissionDeniedError # Locked policies can be extended in duration bucket.retention_period = 7776000 # 90 days in seconds
#logging_bucket
def logging_bucket() -> String
The destination bucket name for the bucket's logs.
- (String)
#logging_bucket=
def logging_bucket=(logging_bucket)
Updates the destination bucket for the bucket's logs.
To pass metageneration preconditions, call this method within a block passed to #update.
- logging_bucket (String) — The bucket to hold the logging output
#logging_prefix
def logging_prefix() -> String
The logging object prefix for the bucket's logs.
- (String)
#logging_prefix=
def logging_prefix=(logging_prefix)
Updates the logging object prefix. This prefix will be used to create log object names for the bucket. It can be at most 900 characters and must be a valid object name. By default, the object prefix is the name of the bucket for which the logs are enabled.
To pass metageneration preconditions, call this method within a block passed to #update.
- logging_prefix (String) — The logging object prefix.
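Configuring both logging fields inside an #update block saves them in a single request; a sketch (the log bucket name is illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.update do |b|
  b.logging_bucket = "my-logs-bucket"
  b.logging_prefix = "access-log"
end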
#metageneration
def metageneration() -> Integer
The metadata generation of the bucket.
- (Integer) — The metageneration.
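The metageneration value pairs with the if_metageneration_match preconditions accepted by other methods in this class; a sketch guarding #delete against concurrent metadata changes:

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# Fails if another client has updated the bucket's metadata in the meantime.
bucket.delete if_metageneration_match: bucket.metageneration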
#name
def name() -> String
The name of the bucket.
- (String)
#new_file
def new_file(file, path = nil, acl: nil, cache_control: nil, content_disposition: nil, content_encoding: nil, content_language: nil, content_type: nil, custom_time: nil, checksum: nil, crc32c: nil, md5: nil, metadata: nil, storage_class: nil, encryption_key: nil, kms_key: nil, temporary_hold: nil, event_based_hold: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil) -> Google::Cloud::Storage::File
Creates a new File object by providing a path to a local file (or any File-like object such as StringIO) to upload, along with the path at which to store it in the bucket.
Customer-supplied encryption keys
By default, Google Cloud Storage manages server-side encryption keys on your behalf. However, a customer-supplied encryption key can be provided with the encryption_key option. If given, the same key must be provided to subsequently download or copy the file. If you use customer-supplied encryption keys, you must securely manage your keys and ensure that they are not lost. Also, please note that file metadata is not encrypted, with the exception of the CRC32C checksum and MD5 hash. The names of files and buckets are also not encrypted, and you can read or update the metadata of an encrypted file without providing the encryption key.
- file (String, ::File) — Path of the file on the filesystem to upload. Can be a File object, or File-like object such as StringIO. (If the object does not have a path, a path argument must also be provided.)
- path (String) — Path to store the file in Google Cloud Storage.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to this file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- cache_control (String) (defaults to: nil) — The Cache-Control response header to be returned when the file is downloaded.
- content_disposition (String) (defaults to: nil) — The Content-Disposition response header to be returned when the file is downloaded.
- content_encoding (String) (defaults to: nil) — The Content-Encoding response header to be returned when the file is downloaded. For example, content_encoding: "gzip" can indicate to clients that the uploaded data is gzip-compressed. However, there is no check to guarantee the specified Content-Encoding has actually been applied to the file data, and incorrectly specifying the file's encoding could lead to unintended behavior on subsequent download requests.
- content_language (String) (defaults to: nil) — The Content-Language response header to be returned when the file is downloaded.
- content_type (String) (defaults to: nil) — The Content-Type response header to be returned when the file is downloaded.
- custom_time (DateTime) (defaults to: nil) — A custom time specified by the user for the file. Once set, custom_time can't be unset, and it can only be changed to a time in the future. If custom_time must be unset, you must either perform a rewrite operation, or upload the data again and create a new file.
- checksum (Symbol, nil) (defaults to: nil) — The type of checksum for the client to automatically calculate and send with the create request to verify the integrity of the object. If provided, Cloud Storage will only create the file if the value calculated by the client matches the value calculated by the service. Acceptable values are:
  - md5 — Calculate and provide a checksum using the MD5 hash.
  - crc32c — Calculate and provide a checksum using the CRC32c hash.
  - all — Calculate and provide checksums for all available verifications.
  Optional. The default is nil. Do not provide if also providing a corresponding crc32c or md5 argument. See Validation for more information.
- crc32c (String) (defaults to: nil) — The CRC32c checksum of the file data, as described in RFC 4960, Appendix B. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :crc32c or checksum: :all argument. See Validation for more information.
- md5 (String) (defaults to: nil) — The MD5 hash of the file data. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :md5 or checksum: :all argument. See Validation for more information.
- metadata (Hash) (defaults to: nil) — A hash of custom, user-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers.
- storage_class (Symbol, String) (defaults to: nil) — Storage class of the file. Determines how the file is stored and determines the SLA and the cost of storage. Accepted values include :standard, :nearline, :coldline, and :archive, as well as the equivalent strings returned by #storage_class. :multi_regional, :regional, and :durable_reduced_availability are accepted legacy storage classes. For more information, see Storage Classes and Per-Object Storage Class. The default value is the default storage class for the bucket.
- encryption_key (String) (defaults to: nil) — Optional. A customer-supplied, AES-256 encryption key that will be used to encrypt the file. Do not provide if kms_key is used.
- kms_key (String) (defaults to: nil) — Optional. Resource name of the Cloud KMS key, of the form projects/my-prj/locations/kr-loc/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the file. The KMS key ring must use the same location as the bucket. The Service Account associated with your project requires access to this encryption key. Do not provide if encryption_key is used.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
- (ArgumentError)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.create_file "path/to/local.file.ext"
Specifying a destination path:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.create_file "path/to/local.file.ext", "destination/path/file.ext"
Providing a customer-supplied encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # Key generation shown for example purposes only. Write your own. cipher = OpenSSL::Cipher.new "aes-256-cfb" cipher.encrypt key = cipher.random_key bucket.create_file "path/to/local.file.ext", "destination/path/file.ext", encryption_key: key # Store your key and hash securely for later use. file = bucket.file "destination/path/file.ext", encryption_key: key
Providing a customer-managed Cloud KMS encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # KMS key ring must use the same location as the bucket. kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" bucket.create_file "path/to/local.file.ext", "destination/path/file.ext", kms_key: kms_key_name file = bucket.file "destination/path/file.ext" file.kms_key #=> kms_key_name
Create a file with gzip-encoded data.
require "zlib" require "google/cloud/storage" storage = Google::Cloud::Storage.new gz = StringIO.new "" z = Zlib::GzipWriter.new gz z.write "Hello world!" z.close data = StringIO.new gz.string bucket = storage.bucket "my-bucket" bucket.create_file data, "path/to/gzipped.txt", content_encoding: "gzip" file = bucket.file "path/to/gzipped.txt" # The downloaded data is decompressed by default. file.download "path/to/downloaded/hello.txt" # The downloaded data remains compressed with skip_decompress. file.download "path/to/downloaded/gzipped.txt", skip_decompress: true
#new_notification
def new_notification(topic, custom_attrs: nil, event_types: nil, prefix: nil, payload: nil) -> Google::Cloud::Storage::Notification
Creates a new Pub/Sub notification subscription for the bucket.
- topic (String) — The name of the Cloud PubSub topic to which the notification subscription will publish.
- custom_attrs (Hash(String => String)) (defaults to: nil) — The custom attributes for the notification. An optional list of additional attributes to attach to each Cloud Pub/Sub message published for the notification subscription.
- event_types (Symbol, String, Array<Symbol, String>) (defaults to: nil) — The event types for the notification subscription. If provided, messages will only be sent for the listed event types. If empty, messages will be sent for all event types. Acceptable values are:
  - :finalize — Sent when a new object (or a new generation of an existing object) is successfully created in the bucket. This includes copying or rewriting an existing object. A failed upload does not trigger this event.
  - :update — Sent when the metadata of an existing object changes.
  - :delete — Sent when an object has been permanently deleted. This includes objects that are overwritten or are deleted as part of the bucket's lifecycle configuration. For buckets with object versioning enabled, this is not sent when an object is archived (see OBJECT_ARCHIVE), even if archival occurs via the File#delete method.
  - :archive — Only sent when the bucket has enabled object versioning. This event indicates that the live version of an object has become an archived version, either because it was archived or because it was overwritten by the upload of an object of the same name.
- prefix (String) (defaults to: nil) — The file name prefix for the notification subscription. If provided, the notification will only be applied to file names that begin with this prefix.
- payload (Symbol, String, Boolean) (defaults to: nil) — The desired content of the Pub/Sub message payload. Acceptable values are:
  - :json or true — The Pub/Sub message payload will be a UTF-8 string containing the resource representation of the file's metadata.
  - :none or false — No payload is included with the notification.
  The default value is :json.
require "google/cloud/pubsub" require "google/cloud/storage" pubsub = Google::Cloud::Pubsub.new storage = Google::Cloud::Storage.new topic = pubsub.create_topic "my-topic" topic.policy do |p| p.add "roles/pubsub.publisher", "serviceAccount:#{storage.service_account_email}" end bucket = storage.bucket "my-bucket" notification = bucket.create_notification topic.name
#notification
def notification(id) -> Google::Cloud::Storage::Notification, nil
Retrieves a Pub/Sub notification subscription for the bucket.
- id (String) — The Notification ID.
- (Google::Cloud::Storage::Notification, nil) — Returns nil if the notification does not exist.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" notification = bucket.notification "1" puts notification.id
#notifications
def notifications() -> Array<Google::Cloud::Storage::Notification>
Retrieves the entire list of Pub/Sub notification subscriptions for the bucket.
- (Array<Google::Cloud::Storage::Notification>)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" notifications = bucket.notifications notifications.each do |notification| puts notification.id end
#object_retention
def object_retention() -> Google::Apis::StorageV1::Bucket::ObjectRetention
The object retention configuration of the bucket
- (Google::Apis::StorageV1::Bucket::ObjectRetention)
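A minimal sketch of inspecting the configuration (object retention must have been enabled when the bucket was created; otherwise this is likely nil):
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

retention = bucket.object_retention
puts retention.mode if retention # e.g. "Enabled"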
#policy
def policy(force: nil, requested_policy_version: nil) { |policy| ... } -> Policy
Gets and updates the Cloud IAM access control policy for this bucket.
- force (Boolean) (defaults to: nil) — [Deprecated] Force the latest policy to be retrieved from the Storage service when true. Deprecated because the latest policy is now always retrieved. The default is nil.
- requested_policy_version (Integer) (defaults to: nil) — The requested syntax schema version of the policy. Optional. If 1, nil, or not provided, a PolicyV1 object is returned, which provides PolicyV1#roles and related helpers but does not provide a bindings method. If 3 is provided, a PolicyV3 object is returned, which provides PolicyV3#bindings but does not provide a roles method or related helpers. A higher version indicates that the policy contains role bindings with the newer syntax schema that is unsupported by earlier versions. The following requested policy versions are valid:
  - 1 — The first version of Cloud IAM policy schema. Supports binding one role to one or more members. Does not support conditional bindings.
  - 3 — Introduces the condition field in the role binding, which further constrains the role binding via context-based and attribute-based rules. See Understanding policies and Overview of Cloud IAM Conditions for more information.
- (policy) — A block for updating the policy. The latest policy will be read from the service and passed to the block. After the block completes, the modified policy will be written to the service.
- policy (Policy) — the current Cloud IAM Policy for this bucket
- (Policy) — the current Cloud IAM Policy for this bucket
Retrieving a Policy that is implicitly version 1:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy policy.version # 1 puts policy.roles["roles/storage.objectViewer"]
Retrieving a version 3 Policy using requested_policy_version:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy requested_policy_version: 3 policy.version # 3 puts policy.bindings.find do |b| b[:role] == "roles/storage.objectViewer" end
Updating a Policy that is implicitly version 1:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.policy do |p| p.version # the value is 1 p.remove "roles/storage.admin", "user:owner@example.com" p.add "roles/storage.admin", "user:newowner@example.com" p.roles["roles/storage.objectViewer"] = ["allUsers"] end
Updating a Policy from version 1 to version 3 by adding a condition:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.uniform_bucket_level_access = true bucket.policy requested_policy_version: 3 do |p| p.version # the value is 1 p.version = 3 # Must be explicitly set to opt-in to support for conditions. expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")" p.bindings.insert({ role: "roles/storage.admin", members: ["user:owner@example.com"], condition: { title: "my-condition", description: "description of condition", expression: expr } }) end
Updating a version 3 Policy:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.uniform_bucket_level_access? # true bucket.policy requested_policy_version: 3 do |p| p.version = 3 # Must be explicitly set to opt-in to support for conditions. expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")" p.bindings.insert({ role: "roles/storage.admin", members: ["user:owner@example.com"], condition: { title: "my-condition", description: "description of condition", expression: expr } }) end
#policy=
def policy=(new_policy) -> Policy
Updates the Cloud IAM access control policy for this bucket. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.
You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.
- new_policy (Policy) — a new or modified Cloud IAM Policy for this bucket
- (Policy) — The policy returned by the API update operation.
Updating a Policy that is implicitly version 1:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy policy.version # 1 policy.remove "roles/storage.admin", "user:owner@example.com" policy.add "roles/storage.admin", "user:newowner@example.com" policy.roles["roles/storage.objectViewer"] = ["allUsers"] policy = bucket.update_policy policy
Updating a Policy from version 1 to version 3 by adding a condition:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy requested_policy_version: 3 policy.version # 1 policy.version = 3 expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")" policy.bindings.insert({ role: "roles/storage.admin", members: ["user:owner@example.com"], condition: { title: "my-condition", description: "description of condition", expression: expr } }) policy = bucket.update_policy policy
Updating a version 3 Policy:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy requested_policy_version: 3 policy.version # 3 indicates an existing binding with a condition. expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")" policy.bindings.insert({ role: "roles/storage.admin", members: ["user:owner@example.com"], condition: { title: "my-condition", description: "description of condition", expression: expr } }) policy = bucket.update_policy policy
#policy_only=
def policy_only=(new_policy_only)
Deprecated. Use #uniform_bucket_level_access= instead.
#policy_only?
def policy_only?() -> Boolean
Deprecated. Use #uniform_bucket_level_access? instead.
- (Boolean)
#policy_only_locked_at
def policy_only_locked_at()
Deprecated. Use #uniform_bucket_level_access_locked_at instead.
#post_object
def post_object(path, policy: nil, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil) -> PostObject
Generate a PostObject that includes the fields and URL to upload objects via HTML forms.
Generating a PostObject requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to post_object.
A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
- path (String) — Path to the file in Google Cloud Storage.
- policy (Hash) (defaults to: nil) — The security policy that describes what can and cannot be uploaded in the form. When provided, the PostObject fields will include a signature based on the JSON representation of this hash and the same policy in Base64 format. If you do not provide a security policy, requests are considered to be anonymous and will only work with buckets that have granted WRITE or FULL_CONTROL permission to anonymous users. See Policy Document for more information.
- issuer (String) (defaults to: nil) — Service Account's Client Email.
- client_email (String) (defaults to: nil) — Service Account's Client Email.
- signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key. When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.
- (PostObject) — An object containing the URL, fields, and values needed to upload files via HTML forms.
- (SignedUrlUnavailable) — If the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" post = bucket.post_object "avatars/heidi/400x400.png" post.url #=> "https://storage.googleapis.com" post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png" post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com" post.fields[:signature] #=> "ABC...XYZ=" post.fields[:policy] #=> "ABC...XYZ="
Using a policy to define the upload authorization:
require "google/cloud/storage" storage = Google::Cloud::Storage.new policy = { expiration: (Time.now + 3600).iso8601, conditions: [ ["starts-with", "$key", ""], {acl: "bucket-owner-read"}, {bucket: "travel-maps"}, {success_action_redirect: "http://example.com/success.html"}, ["eq", "$Content-Type", "image/jpeg"], ["content-length-range", 0, 1000000] ] } bucket = storage.bucket "my-todo-app" post = bucket.post_object "avatars/heidi/400x400.png", policy: policy post.url #=> "https://storage.googleapis.com" post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png" post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com" post.fields[:signature] #=> "ABC...XYZ=" post.fields[:policy] #=> "ABC...XYZ="
Using the issuer and signing_key options:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" key = OpenSSL::PKey::RSA.new post = bucket.post_object "avatars/heidi/400x400.png", issuer: "service-account@gcloud.com", signing_key: key post.url #=> "https://storage.googleapis.com" post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png" post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com" post.fields[:signature] #=> "ABC...XYZ=" post.fields[:policy] #=> "ABC...XYZ="
Using Cloud IAMCredentials signBlob to create the signature:
require "google/cloud/storage" require "google/apis/iamcredentials_v1" require "googleauth" # Issuer is the service account email that the Signed URL will be signed with # and any permission granted in the Signed URL must be granted to the # Google Service Account. issuer = "service-account@project-id.iam.gserviceaccount.com" # Create a lambda that accepts the string_to_sign signer = lambda do |string_to_sign| IAMCredentials = Google::Apis::IamcredentialsV1 iam_client = IAMCredentials::IAMCredentialsService.new # Get the environment configured authorization scopes = ["https://www.googleapis.com/auth/iam"] iam_client.authorization = Google::Auth.get_application_default scopes request = Google::Apis::IamcredentialsV1::SignBlobRequest.new( payload: string_to_sign ) resource = "projects/-/serviceAccounts/#{issuer}" response = iam_client.sign_service_account_blob resource, request response.signed_blob end storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" post = bucket.post_object "avatars/heidi/400x400.png", issuer: issuer, signer: signer post.url #=> "https://storage.googleapis.com" post.fields[:key] #=> "my-todo-app/avatars/heidi/400x400.png" post.fields[:GoogleAccessId] #=> "0123456789@gserviceaccount.com" post.fields[:signature] #=> "ABC...XYZ=" post.fields[:policy] #=> "ABC...XYZ="
#public_access_prevention
def public_access_prevention() -> String, nil
The value for Public Access Prevention in the bucket's IAM configuration. Currently, inherited and enforced are supported. When set to enforced, Public Access Prevention is enforced in the bucket's IAM configuration. This value can be modified by calling #public_access_prevention=.
- (String, nil) — Currently, inherited and enforced are supported. Returns nil if the bucket has no IAM configuration.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.public_access_prevention = :enforced bucket.public_access_prevention #=> "enforced"
#public_access_prevention=
def public_access_prevention=(new_public_access_prevention)
Sets the value for Public Access Prevention in the bucket's IAM configuration. This value can be queried by calling #public_access_prevention.
- new_public_access_prevention (Symbol, String) — The bucket's new Public Access Prevention configuration. Currently, inherited and enforced are supported. When set to enforced, Public Access Prevention is enforced in the bucket's IAM configuration.
Set Public Access Prevention to enforced:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.public_access_prevention = :enforced bucket.public_access_prevention #=> "enforced"
Set Public Access Prevention to inherited:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.public_access_prevention = :inherited bucket.public_access_prevention #=> "inherited"
#public_access_prevention_enforced?
def public_access_prevention_enforced?() -> Boolean
Whether the bucket's file IAM configuration enforces Public Access Prevention. The default is false. This value can be modified by calling #public_access_prevention=.
- (Boolean) — Returns false if the bucket has no IAM configuration or if Public Access Prevention is not enforced in the IAM configuration. Returns true if Public Access Prevention is enforced in the IAM configuration.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.public_access_prevention = :enforced bucket.public_access_prevention_enforced? # true
#public_access_prevention_inherited?
def public_access_prevention_inherited?() -> Boolean
Whether the value for Public Access Prevention in the bucket's IAM configuration is inherited. The default is false. This value can be modified by calling #public_access_prevention=.
- (Boolean) — Returns false if the bucket has no IAM configuration or if Public Access Prevention is not inherited in the IAM configuration. Returns true if Public Access Prevention is inherited in the IAM configuration.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.public_access_prevention = :inherited bucket.public_access_prevention_inherited? # true
#public_access_prevention_unspecified?
def public_access_prevention_unspecified?() -> Boolean
Whether the value for Public Access Prevention in the bucket's IAM configuration is inherited. Alias of #public_access_prevention_inherited? (this value was formerly named unspecified). The default is false. This value can be modified by calling #public_access_prevention=.
- (Boolean) — Returns false if the bucket has no IAM configuration or if Public Access Prevention is not inherited in the IAM configuration. Returns true if Public Access Prevention is inherited in the IAM configuration.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.public_access_prevention = :inherited bucket.public_access_prevention_unspecified? # true
#refresh!
def refresh!()
Reloads the bucket with current data from the Storage service.
#reload!
def reload!()
Reloads the bucket with current data from the Storage service.
#requester_pays
def requester_pays() -> Boolean, nil
Indicates that a client accessing the bucket or a file it contains must assume the transit costs related to the access. The requester must pass the user_project option to Project#bucket and Project#buckets to indicate the project to which the access costs should be billed.
- (Boolean, nil) — Returns true if requester pays is enabled for the bucket.
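When requester pays is enabled, callers from other projects must identify a billing project; a sketch (the bucket name is illustrative, and passing user_project: true bills the caller's own default project):
require "google/cloud/storage"

storage = Google::Cloud::Storage.new

# user_project: true bills this client's own project for access costs.
bucket = storage.bucket "other-project-bucket", user_project: true
bucket.requester_pays #=> true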
#requester_pays=
def requester_pays=(new_requester_pays)
Enables requester pays for the bucket. If enabled, a client accessing the bucket or a file it contains must assume the transit costs related to the access. The requester must pass the user_project option to Project#bucket and Project#buckets to indicate the project to which the access costs should be billed.
To pass metageneration preconditions, call this method within a block passed to #update.
- new_requester_pays (Boolean) — When set to true, requester pays is enabled for the bucket.
Enable requester pays for a bucket:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.requester_pays = true # API call # Other projects must now provide `user_project` option when calling # Project#bucket or Project#buckets to access this bucket.
#requester_pays?
def requester_pays?() -> Boolean, nil
Indicates that a client accessing the bucket or a file it contains must assume the transit costs related to the access. The requester must pass the user_project option to Project#bucket and Project#buckets to indicate the project to which the access costs should be billed.
- (Boolean, nil) — Returns true if requester pays is enabled for the bucket.
#restore_file
def restore_file(file_path, generation, copy_source_acl: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, projection: nil, user_project: nil, fields: nil, options: {}) -> Google::Cloud::Storage::File
Restores a soft-deleted object.
- file_path (String) — Name of the file.
- generation (Fixnum) — Selects a specific revision of this object.
- copy_source_acl (Boolean) (defaults to: nil) — If true, copies the source file's ACL; otherwise, uses the bucket's default file ACL. The default is false.
- if_generation_match (Fixnum) (defaults to: nil) — Makes the operation conditional on whether the file's one live generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Fixnum) (defaults to: nil) — Makes the operation conditional on whether none of the file's live generations match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Fixnum) (defaults to: nil) — Makes the operation conditional on whether the file's one live metageneration matches the given value.
- if_metageneration_not_match (Fixnum) (defaults to: nil) — Makes the operation conditional on whether none of the object's live metagenerations match the given value.
- projection (String) (defaults to: nil) — Set of properties to return. Defaults to full.
- user_project (String) (defaults to: nil) — The project to be billed for this request. Required for Requester Pays buckets.
- fields (String) (defaults to: nil) — Selector specifying which fields to include in a partial response.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.restore_file "path/of/file", <generation-of-the-file>
#retention_effective_at
def retention_effective_at() -> DateTime, nil
The time from which the retention policy was effective. Whenever a retention policy is created or extended, GCS updates the effective date of the policy. The effective date signals the date starting from which objects were guaranteed to be retained for the full duration of the policy.
This field is updated when the retention policy is created or modified, including extension of a locked policy.
- (DateTime, nil) — The effective date of the bucket's retention policy, if a policy exists.
#retention_period
def retention_period() -> Integer, nil
The period of time (in seconds) that files in the bucket must be retained, and cannot be deleted, overwritten, or archived. The value must be between 0 and 100 years (in seconds).
See also: #retention_period=, #retention_effective_at, and #retention_policy_locked?.
- (Integer, nil) — The retention period defined in seconds, if a retention policy exists for the bucket.
#retention_period=
def retention_period=(new_retention_period)
The period of time (in seconds) that files in the bucket must be retained, and cannot be deleted, overwritten, or archived. Passing a valid Integer value will add a new retention policy to the bucket if none exists. Passing nil will remove the retention policy from the bucket if it exists, unless the policy is locked.
Locked policies can be extended in duration by using this method to set a higher value. Such an extension is permanent, and it cannot later be reduced. The extended duration will apply retroactively to all files currently in the bucket.
See also: #lock_retention_policy!, #retention_period, #retention_effective_at, and #retention_policy_locked?.
To pass metageneration preconditions, call this method within a block passed to #update.
- new_retention_period (Integer, nil) — The retention period defined in seconds. The value must be between 0 and 100 years (in seconds), or nil.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.retention_period = 2592000 # 30 days in seconds file = bucket.create_file "path/to/local.file.ext" file.delete # raises Google::Cloud::PermissionDeniedError
#retention_policy_locked?
def retention_policy_locked?() -> Boolean
Whether the bucket's file retention policy is locked and its retention period cannot be reduced. See #retention_period= and #lock_retention_policy!.
This value can only be set to true by calling #lock_retention_policy!.
- (Boolean) — Returns false if there is no retention policy or if the retention policy is unlocked and the retention period can be reduced. Returns true if the retention policy is locked and the retention period cannot be reduced.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.retention_period = 2592000 # 30 days in seconds bucket.lock_retention_policy! bucket.retention_policy_locked? # true file = bucket.create_file "path/to/local.file.ext" file.delete # raises Google::Cloud::PermissionDeniedError
#rpo
def rpo() -> String, nil
Recovery Point Objective (RPO) is another attribute of a bucket; it measures how long it takes for a set of updates to be asynchronously copied to the other region. Currently, DEFAULT and ASYNC_TURBO are supported. When set to ASYNC_TURBO, Turbo Replication is enabled for a bucket. DEFAULT is used to reset the RPO on an existing bucket with RPO set to ASYNC_TURBO.
This value can be modified by calling #rpo=.
- (String, nil) — Currently, DEFAULT and ASYNC_TURBO are supported. Returns nil if the bucket has no RPO.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.rpo = :DEFAULT bucket.rpo #=> "DEFAULT"
#rpo=
def rpo=(new_rpo)
Sets the value for Recovery Point Objective (RPO) in the bucket. This value can be queried by calling #rpo.
- new_rpo (Symbol, String) — The bucket's new Recovery Point Objective metadata. Currently, DEFAULT and ASYNC_TURBO are supported. When set to ASYNC_TURBO, Turbo Replication is enabled for a bucket.
Set RPO to DEFAULT:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.rpo = :DEFAULT bucket.rpo #=> "DEFAULT"
Set RPO to ASYNC_TURBO:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.rpo = :ASYNC_TURBO bucket.rpo #=> "ASYNC_TURBO"
#signed_url
def signed_url(path = nil, method: "GET", expires: nil, content_type: nil, content_md5: nil, headers: nil, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil, query: nil, scheme: "HTTPS", virtual_hosted_style: nil, bucket_bound_hostname: nil, version: nil) -> String
Generates a signed URL. See Signed URLs for more information.
Generating a signed URL requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to signed_url.
A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
- path (String, nil) — Path to the file in Google Cloud Storage, or nil to generate a URL for listing all files in the bucket.
- method (String) (defaults to: "GET") — The HTTP verb to be used with the signed URL. Signed URLs can be used with GET, HEAD, PUT, and DELETE requests. Default is GET.
- expires (Integer) (defaults to: nil) — The number of seconds until the URL expires. If the version is :v2, the default is 300 (5 minutes). If the version is :v4, the default is 604800 (7 days).
- content_type (String) (defaults to: nil) — When provided, the client (browser) must send this value in the HTTP header, e.g. text/plain. This param is not used if the version is :v4.
- content_md5 (String) (defaults to: nil) — The MD5 digest value in base64. If you provide this in the string, the client (usually a browser) must provide this HTTP header with this same value in its request. This param is not used if the version is :v4.
- headers (Hash) (defaults to: nil) — Google extension headers (custom HTTP headers that begin with x-goog-) that must be included in requests that use the signed URL.
- issuer (String) (defaults to: nil) — Service Account's Client Email.
- client_email (String) (defaults to: nil) — Service Account's Client Email.
- signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key. When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.
- query (Hash) (defaults to: nil) — Query string parameters to include in the signed URL. The given parameters are not verified by the signature. Parameters such as response-content-disposition and response-content-type can alter the behavior of the response when using the URL, but only when the file resource is missing the corresponding values. (These values can be permanently set using File#content_disposition= and File#content_type=.)
- scheme (String) (defaults to: "HTTPS") — The URL scheme. The default value is HTTPS.
- virtual_hosted_style (Boolean) (defaults to: nil) — Whether to use a virtual hosted-style hostname, which adds the bucket into the host portion of the URI rather than the path, e.g. https://mybucket.storage.googleapis.com/.... For V4 signing, this also sets the host header in the canonicalized extension headers to the virtual hosted-style host, unless that header is supplied via the headers param. The default value of false uses the form of https://storage.googleapis.com/mybucket.
- bucket_bound_hostname (String) (defaults to: nil) — Use a bucket-bound hostname, which replaces the storage.googleapis.com host with the name of a CNAME bucket, e.g. a bucket named gcs-subdomain.my.domain.tld, or a Google Cloud Load Balancer which routes to a bucket you own, e.g. my-load-balancer-domain.tld.
- version (Symbol, String) (defaults to: nil) — The version of the signed credential to create. Must be one of :v2 or :v4. The default value is :v2.
- (String) — The signed URL.
- (SignedUrlUnavailable) — If the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" shared_url = bucket.signed_url "avatars/heidi/400x400.png"
Using the expires and version options:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" shared_url = bucket.signed_url "avatars/heidi/400x400.png", expires: 300, # 5 minutes from now version: :v4
Using the issuer and signing_key options:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" key = OpenSSL::PKey::RSA.new "-----BEGIN PRIVATE KEY-----\n..." shared_url = bucket.signed_url "avatars/heidi/400x400.png", issuer: "service-account@gcloud.com", signing_key: key
Using Cloud IAMCredentials signBlob to create the signature:
require "google/cloud/storage" require "google/apis/iamcredentials_v1" require "googleauth" # Issuer is the service account email that the Signed URL will be signed with # and any permission granted in the Signed URL must be granted to the # Google Service Account. issuer = "service-account@project-id.iam.gserviceaccount.com" # Create a lambda that accepts the string_to_sign signer = lambda do |string_to_sign| IAMCredentials = Google::Apis::IamcredentialsV1 iam_client = IAMCredentials::IAMCredentialsService.new # Get the environment configured authorization scopes = ["https://www.googleapis.com/auth/iam"] iam_client.authorization = Google::Auth.get_application_default scopes request = Google::Apis::IamcredentialsV1::SignBlobRequest.new( payload: string_to_sign ) resource = "projects/-/serviceAccounts/#{issuer}" response = iam_client.sign_service_account_blob resource, request response.signed_blob end storage = Google::Cloud::Storage.new bucket_name = "my-todo-app" file_path = "avatars/heidi/400x400.png" url = storage.signed_url bucket_name, file_path, method: "GET", issuer: issuer, signer: signer
Using the headers option:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" shared_url = bucket.signed_url "avatars/heidi/400x400.png", headers: { "x-goog-acl" => "private", "x-goog-meta-foo" => "bar,baz" }
Generating a signed URL for resumable upload:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" url = bucket.signed_url "avatars/heidi/400x400.png", method: "POST", content_type: "image/png", headers: { "x-goog-resumable" => "start" } # Send the `x-goog-resumable:start` header and the content type # with the resumable upload POST request.
Omitting path for a URL to list all files in the bucket:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" list_files_url = bucket.signed_url version: :v4
#soft_delete_policy
def soft_delete_policy() -> Google::Apis::StorageV1::Bucket::SoftDeletePolicy
The bucket's soft delete policy. If this policy is set, any deleted objects will be soft-deleted according to the time specified in the policy. This value can be modified by calling #soft_delete_policy=.
- (Google::Apis::StorageV1::Bucket::SoftDeletePolicy) — The default retention policy is for 7 days.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.soft_delete_policy
#soft_delete_policy=
def soft_delete_policy=(new_soft_delete_policy)
Sets the value for Soft Delete Policy in the bucket. This value can be queried by calling #soft_delete_policy.
- new_soft_delete_policy (Google::Apis::StorageV1::Bucket::SoftDeletePolicy, Hash(String => String)) — The bucket's new Soft Delete Policy.
Set Soft Delete Policy to 10 days using SoftDeletePolicy class:
require "google/cloud/storage" require "date" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" soft_delete_policy = Google::Apis::StorageV1::Bucket::SoftDeletePolicy.new soft_delete_policy.retention_duration_seconds = 10*24*60*60 bucket.soft_delete_policy = soft_delete_policy
Set Soft Delete Policy to 5 days using Hash:
require "google/cloud/storage" require "date" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" soft_delete_policy = { retention_duration_seconds: 432000 } bucket.soft_delete_policy = soft_delete_policy
#storage_class
def storage_class() -> String
The bucket's storage class. This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Values include STANDARD, NEARLINE, COLDLINE, and ARCHIVE. REGIONAL, MULTI_REGIONAL, and DURABLE_REDUCED_AVAILABILITY are supported as legacy storage classes.
- (String)
#storage_class=
def storage_class=(new_storage_class)
Updates the bucket's storage class. This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Accepted values include :standard, :nearline, :coldline, and :archive, as well as the equivalent strings returned by #storage_class. :multi_regional, :regional, and :durable_reduced_availability are accepted as legacy storage classes. For more information, see Storage Classes.
To pass metageneration preconditions, call this method within a block passed to #update.
- new_storage_class (Symbol, String) — Storage class of the bucket.
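A minimal sketch of changing the bucket's default storage class (existing objects keep their current class; only new objects default to the new class):
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.storage_class = :nearline
bucket.storage_class #=> "NEARLINE"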
#test_permissions
def test_permissions(*permissions) -> Array<String>
Tests the specified permissions against the Cloud IAM access control policy.
- permissions (String, Array<String>) — The set of permissions against which to check access. Permissions must be of the format storage.resource.capability, where resource is one of buckets or objects.
- (Array<String>) — The permissions held by the caller.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" permissions = bucket.test_permissions "storage.buckets.get", "storage.buckets.delete" permissions.include? "storage.buckets.get" #=> true permissions.include? "storage.buckets.delete" #=> false
#uniform_bucket_level_access=
def uniform_bucket_level_access=(new_uniform_bucket_level_access)
Sets whether uniform bucket-level access is enabled for this bucket. When this is enabled, access to the bucket will be configured through IAM, and legacy ACL policies will not work. When it is first enabled, #uniform_bucket_level_access_locked_at will be set by the API automatically. The uniform bucket-level access can then be disabled until the time specified, after which it will become immutable and calls to change it will fail. If uniform bucket-level access is enabled, calls to access legacy ACL information will fail.
Before enabling uniform bucket-level access please review uniform bucket-level access.
To pass metageneration preconditions, call this method within a block passed to #update.
- new_uniform_bucket_level_access (Boolean) — When set to true, uniform bucket-level access is enabled in the bucket's IAM configuration.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.uniform_bucket_level_access = true bucket.uniform_bucket_level_access? # true bucket.default_acl.public! # Google::Cloud::InvalidArgumentError # The deadline for disabling uniform bucket-level access. puts bucket.uniform_bucket_level_access_locked_at
#uniform_bucket_level_access?
def uniform_bucket_level_access?() -> Boolean
Whether the bucket's file IAM configuration enables uniform bucket-level access. The default is false. This value can be modified by calling #uniform_bucket_level_access=.
- (Boolean) — Returns false if the bucket has no IAM configuration or if uniform bucket-level access is not enabled in the IAM configuration. Returns true if uniform bucket-level access is enabled in the IAM configuration.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.uniform_bucket_level_access = true bucket.uniform_bucket_level_access? # true
#uniform_bucket_level_access_locked_at
def uniform_bucket_level_access_locked_at() -> DateTime, nil
The deadline time for disabling uniform bucket-level access by calling #uniform_bucket_level_access=.
After the locked time the uniform bucket-level access setting cannot be changed from true to false.
Corresponds to the property locked_time.
- (DateTime, nil) — The deadline time for changing #uniform_bucket_level_access= from true to false, or nil if #uniform_bucket_level_access? is false.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.uniform_bucket_level_access = true # The deadline for disabling uniform bucket-level access. puts bucket.uniform_bucket_level_access_locked_at
#update
def update(if_metageneration_match: nil, if_metageneration_not_match: nil)
Updates the bucket with changes made in the given block in a single PATCH request. The following attributes may be set: #cors, #logging_bucket=, #logging_prefix=, #versioning=, #website_main=, #website_404=, and #requester_pays=.
In addition, the #cors configuration accessible in the block is completely mutable and will be included in the request. (See Cors)
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the bucket's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the bucket's current metageneration does not match the given value.
- (bucket) — a block yielding a delegate object for updating the bucket
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.update do |b| b.website_main = "index.html" b.website_404 = "not_found.html" b.cors[0].methods = ["GET","POST","DELETE"] b.cors[1].headers << "X-Another-Custom-Header" end
New CORS rules can also be added in a nested block:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.update do |b| b.cors do |c| c.add_rule ["http://example.org", "https://example.org"], "*", headers: ["X-My-Custom-Header"], max_age: 300 end end
With an if_metageneration_match precondition:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" bucket.update if_metageneration_match: 6 do |b| b.website_main = "index.html" end
#update_autoclass
def update_autoclass(autoclass_attributes)
Updates all attributes of the bucket's autoclass configuration in a single call. It accepts the attributes as a Hash in the following format:
{ enabled: true, terminal_storage_class: "ARCHIVE" }
The terminal_storage_class field is optional and defaults to NEARLINE. Valid terminal_storage_class values are NEARLINE and ARCHIVE.
- autoclass_attributes (Hash(String => String))
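A minimal sketch using the hash format shown above:
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# Enable Autoclass and have objects eventually transition to ARCHIVE.
bucket.update_autoclass({ enabled: true, terminal_storage_class: "ARCHIVE" })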
#update_policy
def update_policy(new_policy) -> Policy
Updates the Cloud IAM access control policy for this bucket. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.
You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.
- new_policy (Policy) — a new or modified Cloud IAM Policy for this bucket
- (Policy) — The policy returned by the API update operation.
Updating a Policy that is implicitly version 1:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy policy.version # 1 policy.remove "roles/storage.admin", "user:owner@example.com" policy.add "roles/storage.admin", "user:newowner@example.com" policy.roles["roles/storage.objectViewer"] = ["allUsers"] policy = bucket.update_policy policy
Updating a Policy from version 1 to version 3 by adding a condition:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy requested_policy_version: 3 policy.version # 1 policy.version = 3 expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")" policy.bindings.insert({ role: "roles/storage.admin", members: ["user:owner@example.com"], condition: { title: "my-condition", description: "description of condition", expression: expr } }) policy = bucket.update_policy policy
Updating a version 3 Policy:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" policy = bucket.policy requested_policy_version: 3 policy.version # 3 indicates an existing binding with a condition. expr = "resource.name.startsWith(\"projects/_/buckets/bucket-name/objects/prefix-a-\")" policy.bindings.insert({ role: "roles/storage.admin", members: ["user:owner@example.com"], condition: { title: "my-condition", description: "description of condition", expression: expr } }) policy = bucket.update_policy policy
#upload_file
def upload_file(file, path = nil, acl: nil, cache_control: nil, content_disposition: nil, content_encoding: nil, content_language: nil, content_type: nil, custom_time: nil, checksum: nil, crc32c: nil, md5: nil, metadata: nil, storage_class: nil, encryption_key: nil, kms_key: nil, temporary_hold: nil, event_based_hold: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil) -> Google::Cloud::Storage::File
Creates a new File object by providing a path to a local file (or any File-like object such as StringIO) to upload, along with the path at which to store it in the bucket.
Customer-supplied encryption keys
By default, Google Cloud Storage manages server-side encryption keys
on your behalf. However, a customer-supplied encryption key
can be provided with the encryption_key
option. If given, the same
key must be provided to subsequently download or copy the file. If you
use customer-supplied encryption keys, you must securely manage your
keys and ensure that they are not lost. Also, please note that file
metadata is not encrypted, with the exception of the CRC32C checksum
and MD5 hash. The names of files and buckets are also not encrypted,
and you can read or update the metadata of an encrypted file without
providing the encryption key.
- file (String, ::File) — Path of the file on the filesystem to upload. Can be a File object, or a File-like object such as StringIO. (If the object does not have a path, a path argument must also be provided.)
- path (String) — Path to store the file in Google Cloud Storage.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to this file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- cache_control (String) (defaults to: nil) — The Cache-Control response header to be returned when the file is downloaded.
- content_disposition (String) (defaults to: nil) — The Content-Disposition response header to be returned when the file is downloaded.
- content_encoding (String) (defaults to: nil) — The Content-Encoding response header to be returned when the file is downloaded. For example, content_encoding: "gzip" can indicate to clients that the uploaded data is gzip-compressed. However, there is no check to guarantee the specified Content-Encoding has actually been applied to the file data, and incorrectly specifying the file's encoding could lead to unintended behavior on subsequent download requests.
- content_language (String) (defaults to: nil) — The Content-Language response header to be returned when the file is downloaded.
- content_type (String) (defaults to: nil) — The Content-Type response header to be returned when the file is downloaded.
- custom_time (DateTime) (defaults to: nil) — A custom time specified by the user for the file. Once set, custom_time can't be unset, and it can only be changed to a time in the future. If custom_time must be unset, you must either perform a rewrite operation, or upload the data again and create a new file.
- checksum (Symbol, nil) (defaults to: nil) — The type of checksum for the client to automatically calculate and send with the create request to verify the integrity of the object. If provided, Cloud Storage will only create the file if the value calculated by the client matches the value calculated by the service. Acceptable values are:
  - md5 — Calculate and provide a checksum using the MD5 hash.
  - crc32c — Calculate and provide a checksum using the CRC32c hash.
  - all — Calculate and provide checksums for all available verifications.
  Optional. The default is nil. Do not provide if also providing a corresponding crc32c or md5 argument. See Validation for more information.
- crc32c (String) (defaults to: nil) — The CRC32c checksum of the file data, as described in RFC 4960, Appendix B. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :crc32c or checksum: :all argument. See Validation for more information.
- md5 (String) (defaults to: nil) — The MD5 hash of the file data. If provided, Cloud Storage will only create the file if the value matches the value calculated by the service. Do not provide if also providing a checksum: :md5 or checksum: :all argument. See Validation for more information.
- metadata (Hash) (defaults to: nil) — A hash of custom, user-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers.
- storage_class (Symbol, String) (defaults to: nil) — Storage class of the file. Determines how the file is stored and determines the SLA and the cost of storage. Accepted values include :standard, :nearline, :coldline, and :archive, as well as the equivalent strings returned by #storage_class. :multi_regional, :regional, and :durable_reduced_availability are accepted legacy storage classes. For more information, see Storage Classes and Per-Object Storage Class. The default value is the default storage class for the bucket.
- encryption_key (String) (defaults to: nil) — Optional. A customer-supplied, AES-256 encryption key that will be used to encrypt the file. Do not provide if kms_key is used.
- kms_key (String) (defaults to: nil) — Optional. Resource name of the Cloud KMS key, of the form projects/my-prj/locations/kr-loc/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the file. The KMS key ring must use the same location as the bucket. The Service Account associated with your project requires access to this encryption key. Do not provide if encryption_key is used.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
Raises
- (ArgumentError)
Examples
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.create_file "path/to/local.file.ext"
Specifying a destination path:
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext"
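Applying a predefined ACL and a per-object storage class on upload, a minimal sketch combining options documented above (the paths and values are illustrative):
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# "public_read" grants allUsers READER access; :nearline overrides the
# bucket's default storage class for this object only.
bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext",
                   acl: "public_read",
                   storage_class: :nearline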
Providing a customer-supplied encryption key:
require "openssl"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# Key generation shown for example purposes only. Write your own.
cipher = OpenSSL::Cipher.new "aes-256-cfb"
cipher.encrypt
key = cipher.random_key

bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext",
                   encryption_key: key

# Store your key and hash securely for later use.
file = bucket.file "destination/path/file.ext",
                   encryption_key: key
Providing a customer-managed Cloud KMS encryption key:
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# KMS key ring must use the same location as the bucket.
kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"

bucket.create_file "path/to/local.file.ext",
                   "destination/path/file.ext",
                   kms_key: kms_key_name

file = bucket.file "destination/path/file.ext"
file.kms_key #=> kms_key_name
Create a file with gzip-encoded data.
require "zlib" require "google/cloud/storage" storage = Google::Cloud::Storage.new gz = StringIO.new "" z = Zlib::GzipWriter.new gz z.write "Hello world!" z.close data = StringIO.new gz.string bucket = storage.bucket "my-bucket" bucket.create_file data, "path/to/gzipped.txt", content_encoding: "gzip" file = bucket.file "path/to/gzipped.txt" # The downloaded data is decompressed by default. file.download "path/to/downloaded/hello.txt" # The downloaded data remains compressed with skip_decompress. file.download "path/to/downloaded/gzipped.txt", skip_decompress: true
#user_project
def user_project()
A boolean value or a project ID string to indicate the project to be billed for operations on the bucket and its files. If this attribute is set to true, transit costs for operations on the bucket will be billed to the current project for this client. (See Project#project for the ID of the current project.) If this attribute is set to a project ID, and that project is authorized for the currently authenticated service account, transit costs will be billed to that project. This attribute is required with requester pays-enabled buckets. The default is nil.
In general, this attribute should be set when first retrieving the bucket by providing the user_project option to Project#bucket.
See also #requester_pays= and #requester_pays.
Setting a non-default project:
require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "other-project-bucket",
                        user_project: true
files = bucket.files # Billed to current project

bucket.user_project = "my-other-project"
files = bucket.files # Billed to "my-other-project"
#user_project=
def user_project=(value)
A boolean value or a project ID string to indicate the project to be billed for operations on the bucket and its files. If this attribute is set to true, transit costs for operations on the bucket will be billed to the current project for this client. (See Project#project for the ID of the current project.) If this attribute is set to a project ID, and that project is authorized for the currently authenticated service account, transit costs will be billed to that project. This attribute is required with requester pays-enabled buckets. The default is nil.
In general, this attribute should be set when first retrieving the bucket by providing the user_project option to Project#bucket.
See also #requester_pays= and #requester_pays.
Setting a non-default project:
require "google/cloud/storage"

storage = Google::Cloud::Storage.new

bucket = storage.bucket "other-project-bucket",
                        user_project: true
files = bucket.files # Billed to current project

bucket.user_project = "my-other-project"
files = bucket.files # Billed to "my-other-project"
#versioning=
def versioning=(new_versioning)
Updates whether Object Versioning is enabled for the bucket.
To pass metageneration preconditions, call this method within a block passed to #update.
- new_versioning (Boolean) — true if versioning is to be enabled for the bucket.
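A minimal sketch of enabling versioning (the bucket name is illustrative; the precondition form assumes #update accepts an if_metageneration_match option and that #metageneration returns the bucket's current metageneration, per their documentation):
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# Simple toggle, no preconditions.
bucket.versioning = true
bucket.versioning? #=> true

# To guard against concurrent metadata changes, set the flag inside a
# block passed to #update with a metageneration precondition.
bucket.update(if_metageneration_match: bucket.metageneration) do |b|
  b.versioning = true
end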
#versioning?
def versioning?() -> Boolean
Whether Object Versioning is enabled for the bucket.
- (Boolean)
#website_404
def website_404() -> String
The page returned from a static website served from the bucket when a site visitor requests a resource that does not exist.
- (String)
#website_404=
def website_404=(website_404)
Updates the page returned from a static website served from the bucket when a site visitor requests a resource that does not exist.
To pass metageneration preconditions, call this method within a block passed to #update.
- website_404 (String) — The page to return when a site visitor requests a resource that does not exist.
#website_main
def website_main() -> String
The main page suffix for a static website. If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.
- (String) — The main page suffix.
#website_main=
def website_main=(website_main)
Updates the main page suffix for a static website.
To pass metageneration preconditions, call this method within a block passed to #update.
- website_main (String) — The main page suffix.
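A minimal sketch of configuring both static-website pages together (the bucket and page names are illustrative); batching the setters in a block passed to #update issues a single request and is also where metageneration preconditions can be supplied:
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

# Serve index.html for directory requests and 404.html for missing objects.
bucket.update do |b|
  b.website_main = "index.html"
  b.website_404  = "404.html"
end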