Reference documentation and code samples for the Cloud Storage API class Google::Cloud::Storage::File.
File
Represents a File (Object) that belongs to a Bucket. Files (Objects) are the individual pieces of data that you store in Google Cloud Storage. A file can be up to 5 TB in size. Files have two components: data and metadata. The data component is the data from an external file or other data source that you want to store in Google Cloud Storage. The metadata component is a collection of name-value pairs that describe various qualities of the data.
Inherits
- Object
Examples
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.download "path/to/downloaded/file.ext"
Download a public file with an unauthenticated client:
require "google/cloud/storage"
storage = Google::Cloud::Storage.anonymous
bucket = storage.bucket "public-bucket", skip_lookup: true
file = bucket.file "path/to/public-file.ext", skip_lookup: true
downloaded = file.download
downloaded.rewind
downloaded.read #=> "Hello world!"
Methods
.from_gs_url
def self.from_gs_url(gs_url) -> Hash(String => String)
Fetches the bucket name and file path from a gs:// URL.
- (Hash(String => String))
- (ArgumentError)
Fetch the bucket name and file path from a gs:// URL:
require "google/cloud/storage"
gs_url = "gs://my-todo-app/avatars/heidi.jpeg"
file = Google::Cloud::Storage::File
file.from_gs_url gs_url
#=> {"bucket_name"=>"my-todo-app", "file_path"=>"avatars/heidi.jpeg"}
Fetch the bucket name, file path, and other query params from a gs:// URL:
require "google/cloud/storage"
gs_url = "gs://my-todo-app/test_sub_folder/heidi.jpeg?params1=test1&params2=test2"
file = Google::Cloud::Storage::File
file.from_gs_url gs_url
#=> {"bucket_name"=>"my-todo-app",
#    "file_path"=>"test_sub_folder/heidi.jpeg",
#    "options"=>{"params1"=>"test1", "params2"=>"test2"}}
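The mapping above can be sketched with Ruby's standard URI library. This is a hypothetical reimplementation for illustration, not the client library's internals:

```ruby
require "uri"
require "cgi"

# Parse a gs:// URL into its bucket, path, and query components.
# Hypothetical helper; the client library's own parsing may differ.
def parse_gs_url gs_url
  uri = URI.parse gs_url
  raise ArgumentError, "not a gs:// URL" unless uri.scheme == "gs"
  result = {
    "bucket_name" => uri.host,
    "file_path"   => uri.path.sub(%r{\A/}, "")
  }
  if uri.query
    # CGI.parse returns arrays of values; keep the first of each.
    result["options"] = CGI.parse(uri.query).transform_values(&:first)
  end
  result
end

parse_gs_url "gs://my-todo-app/avatars/heidi.jpeg"
#=> {"bucket_name"=>"my-todo-app", "file_path"=>"avatars/heidi.jpeg"}
```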
#acl
def acl() -> File::Acl
The Acl instance used to control access to the file.
A file has owners, writers, and readers. Permissions can be granted to an individual user's email address, a group's email address, as well as many predefined lists.
Grant access to a user by prepending "user-" to an email:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"
email = "heidi@example.net"
file.acl.add_reader "user-#{email}"
Grant access to a group by prepending "group-" to an email:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"
email = "authors@example.net"
file.acl.add_reader "group-#{email}"
Or, grant access via a predefined permissions list:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"
file.acl.public!
#api_url
def api_url() -> String
A URL that can be used to access the file using the REST API.
- (String)
#bucket
def bucket() -> String
The name of the Bucket containing this file.
- (String)
#cache_control
def cache_control() -> String
The Cache-Control directive for the file data. If omitted, and the file is accessible to all anonymous users, the default will be public, max-age=3600.
- (String)
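The value follows standard HTTP Cache-Control syntax, so the default above splits into a value-less public directive plus a max-age directive. A plain-Ruby parsing sketch, unrelated to the client library:

```ruby
# Split a Cache-Control header value into a directive hash.
# Value-less directives (like "public") map to true.
def parse_cache_control header
  header.split(",").map(&:strip).to_h do |directive|
    name, value = directive.split("=", 2)
    [name, value || true]
  end
end

parse_cache_control "public, max-age=3600"
#=> {"public"=>true, "max-age"=>"3600"}
```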
#cache_control=
def cache_control=(cache_control)
Updates the Cache-Control directive for the file data. If omitted, and the file is accessible to all anonymous users, the default will be public, max-age=3600.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- cache_control (String) — The Cache-Control directive.
#content_disposition
def content_disposition() -> String
The Content-Disposition of the file data.
- (String)
#content_disposition=
def content_disposition=(content_disposition)
Updates the Content-Disposition of the file data.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- content_disposition (String) — The Content-Disposition of the file.
#content_encoding
def content_encoding() -> String
The Content-Encoding of the file data.
- (String)
#content_encoding=
def content_encoding=(content_encoding)
Updates the Content-Encoding of the file data.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- content_encoding (String) — The Content-Encoding of the file.
#content_language
def content_language() -> String
The Content-Language of the file data.
- (String)
#content_language=
def content_language=(content_language)
Updates the Content-Language of the file data.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- content_language (String) — The Content-Language of the file.
#content_type
def content_type() -> String
The Content-Type of the file data.
- (String)
#content_type=
def content_type=(content_type)
Updates the Content-Type of the file data.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- content_type (String) — The Content-Type of the file.
#copy
def copy(dest_bucket_or_path, dest_path = nil, acl: nil, generation: nil, encryption_key: nil, force_copy_metadata: nil) -> Google::Cloud::Storage::File
Copies the file to a new location. Metadata excluding ACL from the source object will be copied to the destination object unless a block is provided.
If an optional block for updating is provided, only the updates made in this block will appear in the destination object, and other metadata fields in the destination object will not be copied. To copy the other source file metadata fields while updating destination fields in a block, use the force_copy_metadata: true flag, and the client library will copy metadata from source metadata into the copy request.
If a customer-supplied encryption key was used with Bucket#create_file, the encryption_key option must be provided.
- dest_bucket_or_path (String) — Either the bucket to copy the file to, or the path to copy the file to in the current bucket.
- dest_path (String) — If a bucket was provided in the first parameter, this contains the path to copy the file to in the given bucket.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to the new file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- generation (Integer) (defaults to: nil) — Select a specific revision of the file to copy. The default is the latest version.
- encryption_key (String) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the file, if one was provided to Bucket#create_file.
- force_copy_metadata (Boolean) (defaults to: nil) — Optional. If true and if updates are made in a block, the following fields will be copied from the source file to the destination file (except when changed by updates): cache_control, content_disposition, content_encoding, content_language, content_type, metadata. If nil or false, only the updates made in the yielded block will be applied to the destination object. The default is nil.
- (file) — a block yielding a delegate object for updating
The file can be copied to a new path in the current bucket:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.copy "path/to/destination/file.ext"
The file can also be copied to a different bucket:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.copy "new-destination-bucket", "path/to/destination/file.ext"
The file can also be copied by specifying a generation:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.copy "copy/of/previous/generation/file.ext", generation: 123456
The file can be modified during copying:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.copy "new-destination-bucket", "path/to/destination/file.ext" do |f|
  f.metadata["copied_from"] = "#{file.bucket}/#{file.name}"
end
#crc32c
def crc32c() -> String
The CRC32c checksum of the data, as described in RFC 4960, Appendix B. Encoded using base64 in big-endian byte order.
- (String)
#created_at
def created_at() -> DateTime
Creation time of the file.
- (DateTime)
#custom_time
def custom_time() -> DateTime, nil
A custom time specified by the user for the file, or nil.
- (DateTime, nil)
#custom_time=
def custom_time=(custom_time)
Updates the custom time specified by the user for the file. Once set, custom_time can't be unset, and it can only be changed to a time in the future. If custom_time must be unset, you must either perform a rewrite operation, or upload the data again and create a new file.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- custom_time (DateTime) — A custom time specified by the user for the file.
#delete
def delete(generation: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil) -> Boolean
Permanently deletes the file.
Raises PermissionDeniedError if the object is subject to an active retention policy or hold. (See #retention_expires_at, Bucket#retention_period, #temporary_hold? and #event_based_hold?.)
- generation (Boolean, Integer) (defaults to: nil) — Specify a version of the file to delete. When true, it will delete the version returned by #generation. The default behavior is to delete the latest version of the file (regardless of the version to which the file is set, which is the version returned by #generation.)
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
- (Boolean) — Returns true if the file was deleted.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.delete
The file's generation can be used by passing true:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.delete generation: true
A generation can also be specified:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.delete generation: 123456
#download
def download(path = nil, verify: :md5, encryption_key: nil, range: nil, skip_decompress: nil) -> ::File, StringIO
Downloads the file's contents to a local file or a File-like object.
By default, the download is verified by calculating the MD5 digest.
If a customer-supplied encryption key was used with Bucket#create_file, the encryption_key option must be provided.
- path (String, ::File) — The path on the local file system to write the data to. The path provided must be writable. Can also be a File object, or a File-like object such as StringIO. If a file object is given, the object will be written to, not the filesystem. If omitted, a new StringIO instance will be written to and returned. Optional.
- verify (Symbol) (defaults to: :md5) — The verification algorithm used to ensure the downloaded file contents are correct. Default is :md5. Acceptable values are:
  - md5 — Verify file content match using the MD5 hash.
  - crc32c — Verify file content match using the CRC32c hash.
  - all — Perform all available file content verification.
  - none — Don't perform file content verification.
- encryption_key (String) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to encrypt the file, if one was provided to Bucket#create_file.
- range (Range, String) (defaults to: nil) — Optional. The byte range of the file's contents to download, or a string header value. Provide this to perform a partial download. When a range is provided, no verification is performed regardless of the verify parameter's value.
- skip_decompress (Boolean) (defaults to: nil) — Optional. If true, the data for a Storage object returning a Content-Encoding: gzip response header will not be automatically decompressed by this client library. The default is nil. Note that all requests by this client library send the Accept-Encoding: gzip header, so decompressive transcoding is not performed in the Storage service. (See Transcoding of gzip-compressed files.)
- (::File, StringIO) — Returns a file object representing the file data. This will ordinarily be a ::File object referencing the local file system. However, if the argument to path is nil, a StringIO instance will be returned. If the argument to path is a File-like object, then that object will be returned.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.download "path/to/downloaded/file.ext"
Use the CRC32c digest by passing :crc32c.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.download "path/to/downloaded/file.ext", verify: :crc32c
Use the MD5 and CRC32c digests by passing :all.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.download "path/to/downloaded/file.ext", verify: :all
Disable the download verification by passing :none.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.download "path/to/downloaded/file.ext", verify: :none
Download to an in-memory StringIO object.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
downloaded = file.download
downloaded.rewind
downloaded.read #=> "Hello world!"
Download a public file with an unauthenticated client.
require "google/cloud/storage"
storage = Google::Cloud::Storage.anonymous
bucket = storage.bucket "public-bucket", skip_lookup: true
file = bucket.file "path/to/public-file.ext", skip_lookup: true
downloaded = file.download
downloaded.rewind
downloaded.read #=> "Hello world!"
Upload and download gzip-encoded file data.
require "zlib"
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
gz = StringIO.new ""
z = Zlib::GzipWriter.new gz
z.write "Hello world!"
z.close
data = StringIO.new gz.string
bucket = storage.bucket "my-bucket"
bucket.create_file data, "path/to/gzipped.txt", content_encoding: "gzip"
file = bucket.file "path/to/gzipped.txt"
# The downloaded data is decompressed by default.
file.download "path/to/downloaded/hello.txt"
# The downloaded data remains compressed with skip_decompress.
file.download "path/to/downloaded/gzipped.txt", skip_decompress: true
Partially download.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
downloaded = file.download range: 6..10
downloaded.rewind
downloaded.read #=> "world"
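A Range argument like 6..10 corresponds to an HTTP Range request header with inclusive byte offsets. The mapping can be sketched as follows (a hypothetical helper, not the library's internals):

```ruby
# Convert a Ruby Range of byte offsets into an HTTP Range header value.
# Both endpoints are inclusive, matching "bytes=first-last" semantics.
def range_header byte_range
  last = byte_range.exclude_end? ? byte_range.last - 1 : byte_range.last
  "bytes=#{byte_range.first}-#{last}"
end

range_header 6..10   #=> "bytes=6-10"
range_header 0...100 #=> "bytes=0-99"
```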
#encryption_key_sha256
def encryption_key_sha256() -> String, nil
An RFC 4648 Base64-encoded string of the SHA256 hash of the customer-supplied encryption key. You can use this SHA256 hash to uniquely identify the AES-256 encryption key required to decrypt this file.
- (String, nil) — The encoded SHA256 hash, or nil if there is no customer-supplied encryption key for this file.
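You can compute the same identifier locally from a candidate key to check whether it matches the file's recorded hash. A standard-library sketch; the key below is a made-up placeholder, not a real encryption key:

```ruby
require "digest"
require "base64"

# A hypothetical 32-byte AES-256 key; in practice this would be the
# customer-supplied key originally passed to Bucket#create_file.
key = "0" * 32

# Base64-encode the SHA256 digest of the key, as the service does.
key_sha256 = Base64.strict_encode64 Digest::SHA256.digest(key)
# Compare key_sha256 with file.encryption_key_sha256 to identify the key.
```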
#etag
def etag() -> String
HTTP 1.1 Entity tag for the file.
- (String)
#event_based_hold?
def event_based_hold?() -> Boolean
Whether there is an event-based hold on the file. An event-based hold will be enforced on the file as long as this property is true, even if the bucket-level retention policy would normally allow deletion. Removing the event-based hold extends the retention duration of the file to the current date plus the bucket-level policy duration. Removing the event-based hold represents that a retention-related event has occurred, and thus the retention clock ticks from the moment of the event as opposed to the creation date of the object. The default value is configured at the bucket level (which defaults to false), and is assigned to newly-created objects.
See #set_event_based_hold!, #release_event_based_hold!, Bucket#default_event_based_hold? and Bucket#default_event_based_hold=.
If a bucket's retention policy duration is modified after the event-based hold flag is unset, the updated retention duration applies retroactively to objects that previously had event-based holds. For example:
- If the bucket's [unlocked] retention policy is removed, objects with event-based holds may be deleted immediately after the hold is removed (the duration of a nonexistent policy for the purpose of event-based holds is considered to be zero).
- If the bucket's [unlocked] policy is reduced, objects with previously released event-based holds will have their retention expiration dates reduced accordingly.
- If the bucket's policy is extended, objects with previously released event-based holds will have their retention expiration dates extended accordingly. However, note that objects with event-based holds released prior to the effective date of the new policy may have already been deleted by the user.
- (Boolean) — Returns true if there is an event-based hold on the file, otherwise false.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
bucket.retention_period = 2592000 # 30 days in seconds
file = bucket.file "path/to/my-file.ext"
file.event_based_hold? #=> false
file.set_event_based_hold!
file.delete # raises Google::Cloud::PermissionDeniedError
file.release_event_based_hold!
# The end of the retention period is calculated from the time that
# the event-based hold was released.
file.retention_expires_at
#exists?
def exists?() -> Boolean
Determines whether the file exists in the Storage service.
- (Boolean)
#generation
def generation() -> Fixnum
The content generation of this file. Used for object versioning.
- (Fixnum)
#generations
def generations() -> Array<Google::Cloud::Storage::File>
Retrieves a list of versioned files for the current object.
Useful for listing archived versions of the file, restoring the live version of the file to an older version, or deleting an archived version. You can turn versioning on or off for a bucket at any time with Bucket#versioning=. Turning versioning off leaves existing file versions in place and causes the bucket to stop accumulating new archived object versions. (See Bucket#versioning? and #generation)
- (Array<Google::Cloud::Storage::File>) — (See List)
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.generation #=> 1234567890
file.generations.each do |versioned_file|
  versioned_file.generation
end
#hard_delete_time
def hard_delete_time() -> DateTime, nil
The time when the file will be permanently deleted.
- (DateTime, nil) — A DateTime representing the time at which the file will be permanently deleted, or nil if the file is not soft deleted.
#id
def id() -> String
The ID of the file.
- (String)
#kind
def kind() -> String
The kind of item this is. For files, this is always storage#object.
- (String)
#kms_key
def kms_key() -> String, nil
The Cloud KMS encryption key that was used to protect the file, or nil if none has been configured.
- (String, nil) — A Cloud KMS encryption key, or nil if none has been configured.
#md5
def md5() -> String
MD5 hash of the data; encoded using base64.
- (String)
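The stored value is the standard base64-encoded MD5 digest, so it can be reproduced locally from the original data to verify integrity. A plain-Ruby sketch using only the standard library:

```ruby
require "digest"

data = "Hello world!"
# Compute the base64-encoded MD5 digest, matching the file's md5 format.
local_md5 = Digest::MD5.base64digest data
# local_md5 can be compared against file.md5 to verify integrity.
```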
#media_url
def media_url() -> String
A URL that can be used to download the file using the REST API.
- (String)
#metadata
def metadata() -> Hash(String => String)
A hash of custom, user-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers.
- (Hash(String => String))
#metadata=
def metadata=(metadata)
Updates the hash of custom, user-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- metadata (Hash(String => String)) — The user-provided metadata, in key/value pairs.
#metageneration
def metageneration() -> Fixnum
The version of the metadata for this file at this generation. Used for preconditions and for detecting changes in metadata. A metageneration number is only meaningful in the context of a particular generation of a particular file.
- (Fixnum)
#name
def name() -> String
The name of this file.
- (String)
#public_url
def public_url(protocol: :https) -> String
Public URL to access the file. If the file is not public, requests to the URL will return an error. (See Acl#public! and Bucket::DefaultAcl#public!) To share a file that is not public see #signed_url.
- protocol (String) (defaults to: :https) — The protocol to use for the URL. Default is HTTPS.
- (String)
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"
public_url = file.public_url
Generate the URL with a protocol other than HTTPS:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"
public_url = file.public_url protocol: "http"
#refresh!
def refresh!(generation: nil)
Reloads the file with current data from the Storage service.
- generation (Boolean, Integer) (defaults to: nil) — Specify a version of the file to reload with. When true, it will reload the version returned by #generation. The default behavior is to reload with the latest version of the file (regardless of the version to which the file is set, which is the version returned by #generation.)
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.reload!
The file's generation can be used by passing true:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext", generation: 123456
file.reload! generation: true
A generation can also be specified:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext", generation: 123456
file.reload! generation: 123457
#release_event_based_hold!
def release_event_based_hold!()
Sets the event-based hold property of the file to false. Removing the event-based hold extends the retention duration of the file to the current date plus the bucket-level policy duration. Removing the event-based hold represents that a retention-related event has occurred, and thus the retention clock ticks from the moment of the event as opposed to the creation date of the object. The default value is configured at the bucket level (which defaults to false), and is assigned to newly-created objects.
See #event_based_hold?, #set_event_based_hold!, Bucket#default_event_based_hold? and Bucket#default_event_based_hold=.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
bucket.retention_period = 2592000 # 30 days in seconds
file = bucket.file "path/to/my-file.ext"
file.event_based_hold? #=> false
file.set_event_based_hold!
file.delete # raises Google::Cloud::PermissionDeniedError
file.release_event_based_hold!
# The end of the retention period is calculated from the time that
# the event-based hold was released.
file.retention_expires_at
#release_temporary_hold!
def release_temporary_hold!()
Sets the temporary hold property of the file to false. This property is used to enforce a temporary hold on a file. While it is set to true, the file is protected against deletion and overwrites. Once removed, the file's retention_expires_at date is not changed. The default value is false.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.temporary_hold? #=> false
file.set_temporary_hold!
file.delete # raises Google::Cloud::PermissionDeniedError
file.release_temporary_hold!
file.delete
#reload!
def reload!(generation: nil)
Reloads the file with current data from the Storage service.
- generation (Boolean, Integer) (defaults to: nil) — Specify a version of the file to reload with. When true, it will reload the version returned by #generation. The default behavior is to reload with the latest version of the file (regardless of the version to which the file is set, which is the version returned by #generation.)
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
file.reload!
The file's generation can be used by passing true:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext", generation: 123456
file.reload! generation: true
A generation can also be specified:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext", generation: 123456
file.reload! generation: 123457
#retention
def retention() -> Google::Apis::StorageV1::Object::Retention
A collection of object-level retention parameters. The full list of available options is outlined in the JSON API docs.
- (Google::Apis::StorageV1::Object::Retention)
#retention=
def retention=(new_retention_attributes)
Updates the retention parameters of the file. It accepts a Hash of attributes in the following format:
{ mode: 'Locked|Unlocked', retain_until_time: '2023-12-19T03:22:23+00:00' }
- new_retention_attributes (Hash(String => String))
Update retention parameters for the File / Object
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "avatars/heidi/400x400.png"
retention_params = {
  mode: "Unlocked",
  retain_until_time: "2023-12-19T03:22:23+00:00".to_datetime
}
file.retention = retention_params
Update retention parameters for the File / Object with override enabled
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "avatars/heidi/400x400.png"
retention_params = {
  mode: "Unlocked",
  retain_until_time: "2023-12-19T03:22:23+00:00".to_datetime,
  override_unlocked_retention: true
}
file.retention = retention_params
#retention_expires_at
def retention_expires_at() -> DateTime, nil
The retention expiration time of the file. This field is indirectly mutable when the retention policy applicable to the object changes. The date represents the earliest time that the object could be deleted, assuming no temporary hold is set. (See #temporary_hold?.) It is provided when both of the following are true:
- There is a retention policy on the bucket.
- The eventBasedHold flag is unset on the object.
Note that it can be provided even when #temporary_hold? is true (so that the user can reason about policy without having to first unset the temporary hold).
- (DateTime, nil) — A DateTime representing the earliest time at which the object can be deleted, or nil if there are no restrictions on deleting the object.
#retention_mode
def retention_mode() -> String
Mode of the object-level retention configuration. Valid values are 'Locked' and 'Unlocked'.
- (String)
#retention_retain_until_time
def retention_retain_until_time() -> DateTime
The earliest time in RFC 3339 UTC "Zulu" format that the object can be deleted or replaced.
- (DateTime)
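The RFC 3339 "Zulu" format noted above parses directly with Ruby's standard DateTime class, which can be handy when comparing the returned value against a locally constructed time:

```ruby
require "date"

# Parse an RFC 3339 timestamp, as returned by retention_retain_until_time.
retain_until = DateTime.rfc3339 "2023-12-19T03:22:23+00:00"
retain_until.year #=> 2023
```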
#rewrite
def rewrite(dest_bucket_or_path, dest_path = nil, acl: nil, generation: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, if_source_generation_match: nil, if_source_generation_not_match: nil, if_source_metageneration_match: nil, if_source_metageneration_not_match: nil, encryption_key: nil, new_encryption_key: nil, new_kms_key: nil, force_copy_metadata: nil) -> Google::Cloud::Storage::File
Rewrites the file to a new location. Or the same location can be provided to rewrite the file in place. Metadata from the source object will be copied to the destination object unless a block is provided.
If an optional block for updating is provided, only the updates made in this block will appear in the destination object, and other metadata fields in the destination object will not be copied. To copy the other source file metadata fields while updating destination fields in a block, use the force_copy_metadata: true flag, and the client library will copy metadata from source metadata into the copy request.
If a customer-supplied encryption key was used with Bucket#create_file, the encryption_key option must be provided. Unlike #copy, separate encryption keys are used to read (encryption_key) and to write (new_encryption_key) file contents.
- dest_bucket_or_path (String) — Either the bucket to rewrite the file to, or the path to rewrite the file to in the current bucket.
- dest_path (String) — If a bucket was provided in the first parameter, this contains the path to rewrite the file to in the given bucket.
- acl (String) (defaults to: nil) — A predefined set of access controls to apply to the new file. Acceptable values are:
  - auth, auth_read, authenticated, authenticated_read, authenticatedRead — File owner gets OWNER access, and allAuthenticatedUsers get READER access.
  - owner_full, bucketOwnerFullControl — File owner gets OWNER access, and project team owners get OWNER access.
  - owner_read, bucketOwnerRead — File owner gets OWNER access, and project team owners get READER access.
  - private — File owner gets OWNER access.
  - project_private, projectPrivate — File owner gets OWNER access, and project team members get access according to their roles.
  - public, public_read, publicRead — File owner gets OWNER access, and allUsers get READER access.
- generation (Integer) (defaults to: nil) — Select a specific revision of the file to rewrite. The default is the latest version.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the destination file's current metageneration does not match the given value.
- if_source_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the source object's current generation matches the given value.
- if_source_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the source object's current generation does not match the given value.
- if_source_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the source object's current metageneration matches the given value.
- if_source_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the source object's current metageneration does not match the given value.
- encryption_key (String) (defaults to: nil) — Optional. The customer-supplied, AES-256 encryption key used to decrypt the file, if the existing file is encrypted.
- new_encryption_key (String, nil) (defaults to: nil) — Optional. The new customer-supplied, AES-256 encryption key with which to encrypt the file. If not provided, the rewritten file will be encrypted using the default server-side encryption, or the new_kms_key if one is provided. Do not provide if new_kms_key is used.
- new_kms_key (String) (defaults to: nil) — Optional. Resource name of the Cloud KMS key, of the form projects/my-prj/locations/kr-loc/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the file. The KMS key ring must use the same location as the bucket. The Service Account associated with your project requires access to this encryption key. Do not provide if new_encryption_key is used.
- force_copy_metadata (Boolean) (defaults to: nil) — Optional. If true and if updates are made in a block, the following fields will be copied from the source file to the destination file (except when changed by updates): cache_control, content_disposition, content_encoding, content_language, content_type, metadata. If nil or false, only the updates made in the yielded block will be applied to the destination object. The default is nil.
- (file) — a block yielding a delegate object for updating
The file can be rewritten to a new path in the bucket:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.rewrite "path/to/destination/file.ext"
The file can also be rewritten to a different bucket:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.rewrite "new-destination-bucket", "path/to/destination/file.ext"
The file can also be rewritten by specifying a generation:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.rewrite "copy/of/previous/generation/file.ext", generation: 123456
The file can be modified during rewriting:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.rewrite "new-destination-bucket", "path/to/destination/file.ext" do |f| f.metadata["rewritten_from"] = "#{file.bucket}/#{file.name}" end
Rewriting with a customer-supplied encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # Old key was stored securely for later use. old_key = "y\x03\"\x0E\xB6\xD3\x9B\x0E\xAB*\x19\xFAv\xDEY\xBEI..." # Key generation shown for example purposes only. Write your own. cipher = OpenSSL::Cipher.new "aes-256-cfb" cipher.encrypt new_key = cipher.random_key file = bucket.file "path/to/my-file.ext", encryption_key: old_key file.rewrite "new-destination-bucket", "path/to/destination/file.ext", encryption_key: old_key, new_encryption_key: new_key do |f| f.metadata["rewritten_from"] = "#{file.bucket}/#{file.name}" end
Rewriting with a customer-managed Cloud KMS encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # KMS key ring must use the same location as the bucket. kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" # Old customer-supplied key was stored securely for later use. old_key = "y\x03\"\x0E\xB6\xD3\x9B\x0E\xAB*\x19\xFAv\xDEY\xBEI..." file = bucket.file "path/to/my-file.ext", encryption_key: old_key file.rewrite "new-destination-bucket", "path/to/destination/file.ext", encryption_key: old_key, new_kms_key: kms_key_name do |f| f.metadata["rewritten_from"] = "#{file.bucket}/#{file.name}" end
#rotate
def rotate(encryption_key: nil, new_encryption_key: nil, new_kms_key: nil) -> Google::Cloud::Storage::File
Rewrites the file to the same #bucket and #name with a new customer-supplied encryption key.
If a new key is provided to this method, the new key must be used to subsequently download or copy the file. You must securely manage your keys and ensure that they are not lost. Also, please note that file metadata is not encrypted, with the exception of the CRC32C checksum and MD5 hash. The names of files and buckets are also not encrypted, and you can read or update the metadata of an encrypted file without providing the encryption key.
- encryption_key (String, nil) (defaults to: nil) — Optional. The last customer-supplied, AES-256 encryption key used to encrypt the file, if one was used.
- new_encryption_key (String, nil) (defaults to: nil) — Optional. The new customer-supplied, AES-256 encryption key with which to encrypt the file. If not provided, the rewritten file will be encrypted using the default server-side encryption, or the new_kms_key if one is provided. Do not provide if new_kms_key is used.
- new_kms_key (String) (defaults to: nil) — Optional. Resource name of the Cloud KMS key, of the form projects/my-prj/locations/kr-loc/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the file. The KMS key ring must use the same location as the bucket. The Service Account associated with your project requires access to this encryption key. Do not provide if new_encryption_key is used.
Rotating to a new customer-supplied encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # Old key was stored securely for later use. old_key = "y\x03\"\x0E\xB6\xD3\x9B\x0E\xAB*\x19\xFAv\xDEY\xBEI..." file = bucket.file "path/to/my-file.ext", encryption_key: old_key # Key generation shown for example purposes only. Write your own. cipher = OpenSSL::Cipher.new "aes-256-cfb" cipher.encrypt new_key = cipher.random_key file.rotate encryption_key: old_key, new_encryption_key: new_key
Rotating to a customer-managed Cloud KMS encryption key:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" # KMS key ring must use the same location as the bucket. kms_key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" # Old key was stored securely for later use. old_key = "y\x03\"\x0E\xB6\xD3\x9B\x0E\xAB*\x19\xFAv\xDEY\xBEI..." file = bucket.file "path/to/my-file.ext", encryption_key: old_key file.rotate encryption_key: old_key, new_kms_key: kms_key_name
#set_event_based_hold!
def set_event_based_hold!()
Sets the event-based hold property of the file to true. This property enforces an event-based hold on the file as long as it is true, even if the bucket-level retention policy would normally allow deletion. The default value is configured at the bucket level (which defaults to false), and is assigned to newly-created objects.
See #event_based_hold?, #release_event_based_hold!, Bucket#default_event_based_hold? and Bucket#default_event_based_hold=.
If a bucket's retention policy duration is modified after the event-based hold is removed, the updated retention duration applies retroactively to objects that previously had event-based holds. For example:
- If the bucket's [unlocked] retention policy is removed, objects with event-based holds may be deleted immediately after the hold is removed (the duration of a nonexistent policy for the purpose of event-based holds is considered to be zero).
- If the bucket's [unlocked] policy is reduced, objects with previously released event-based holds will have their retention expiration dates reduced accordingly.
- If the bucket's policy is extended, objects with previously released event-based holds will have their retention expiration dates extended accordingly. However, note that objects with event-based holds released prior to the effective date of the new policy may have already been deleted by the user.
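The interaction between the hold release time and the retention period can be sketched with simple time arithmetic. This is illustrative only; the times and period below are made up, and the actual retention_expires_at value is computed server-side:

```ruby
# A bucket-level retention period of 30 days, expressed in seconds,
# matching the retention_period value used in the example below.
retention_period = 30 * 24 * 60 * 60 #=> 2592000

# Suppose the event-based hold was released at this instant.
hold_released_at = Time.utc 2024, 1, 1

# The retention clock starts at release, so the object may not be
# deleted before hold_released_at + retention_period.
retention_expires_at = hold_released_at + retention_period
retention_expires_at #=> 2024-01-31 00:00:00 UTC
```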
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" bucket.retention_period = 2592000 # 30 days in seconds file = bucket.file "path/to/my-file.ext" file.event_based_hold? #=> false file.set_event_based_hold! file.delete # raises Google::Cloud::PermissionDeniedError file.release_event_based_hold! # The end of the retention period is calculated from the time that # the event-based hold was released. file.retention_expires_at
#set_temporary_hold!
def set_temporary_hold!()
Sets the temporary hold property of the file to true. This property is used to enforce a temporary hold on a file. While it is set to true, the file is protected against deletion and overwrites. Once removed, the file's retention_expires_at date is not changed. The default value is false.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.temporary_hold? #=> false file.set_temporary_hold! file.delete # raises Google::Cloud::PermissionDeniedError
#signed_url
def signed_url(method: "GET", expires: nil, content_type: nil, content_md5: nil, headers: nil, issuer: nil, client_email: nil, signing_key: nil, private_key: nil, signer: nil, query: nil, scheme: "HTTPS", virtual_hosted_style: nil, bucket_bound_hostname: nil, version: nil) -> String
Generates a signed URL for the file. See Signed URLs for more information.
Generating a signed URL requires service account credentials, either by connecting with a service account when calling Google::Cloud.storage, or by passing in the service account issuer and signing_key values. Although the private key can be passed as a string for convenience, creating and storing an instance of OpenSSL::PKey::RSA is more efficient when making multiple calls to signed_url.
A SignedUrlUnavailable is raised if the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
- method (String) (defaults to: "GET") — The HTTP verb to be used with the signed URL. Signed URLs can be used with GET, HEAD, PUT, and DELETE requests. Default is GET.
- expires (Integer) (defaults to: nil) — The number of seconds until the URL expires. If the version is :v2, the default is 300 (5 minutes). If the version is :v4, the default is 604800 (7 days).
- content_type (String) (defaults to: nil) — When provided, the client (browser) must send this value in the HTTP header, e.g. text/plain. This param is not used if the version is :v4.
- content_md5 (String) (defaults to: nil) — The MD5 digest value in base64. If you provide this in the string, the client (usually a browser) must provide this HTTP header with this same value in its request. This param is not used if the version is :v4.
- headers (Hash) (defaults to: nil) — Google extension headers (custom HTTP headers that begin with x-goog-) that must be included in requests that use the signed URL.
- issuer (String) (defaults to: nil) — Service Account's Client Email.
- client_email (String) (defaults to: nil) — Service Account's Client Email.
- signing_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- private_key (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key.
- signer (OpenSSL::PKey::RSA, String, Proc) (defaults to: nil) — Service Account's Private Key or a Proc that accepts a single String parameter and returns a RSA SHA256 signature using a valid Google Service Account Private Key. When using this method in environments such as GAE Flexible Environment, GKE, or Cloud Functions where the private key is unavailable, it may be necessary to provide a Proc (or lambda) via the signer parameter. This Proc should return a signature created using a RPC call to the Service Account Credentials signBlob method as shown in the example below.
- query (Hash) (defaults to: nil) — Query string parameters to include in the signed URL. The given parameters are not verified by the signature. Parameters such as response-content-disposition and response-content-type can alter the behavior of the response when using the URL, but only when the file resource is missing the corresponding values. (These values can be permanently set using #content_disposition= and #content_type=.)
- scheme (String) (defaults to: "HTTPS") — The URL scheme. The default value is HTTPS.
- virtual_hosted_style (Boolean) (defaults to: nil) — Whether to use a virtual hosted-style hostname, which adds the bucket into the host portion of the URI rather than the path, e.g. https://mybucket.storage.googleapis.com/.... For V4 signing, this also sets the host header in the canonicalized extension headers to the virtual hosted-style host, unless that header is supplied via the headers param. The default value of false uses the form of https://storage.googleapis.com/mybucket.
- bucket_bound_hostname (String) (defaults to: nil) — Use a bucket-bound hostname, which replaces the storage.googleapis.com host with the name of a CNAME bucket, e.g. a bucket named gcs-subdomain.my.domain.tld, or a Google Cloud Load Balancer which routes to a bucket you own, e.g. my-load-balancer-domain.tld.
- version (Symbol, String) (defaults to: nil) — The version of the signed credential to create. Must be one of :v2 or :v4. The default value is :v2.
- (String) — The signed URL.
- (SignedUrlUnavailable) — If the service account credentials are missing. Service account credentials are acquired by following the steps in Service Account Authentication.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png" shared_url = file.signed_url
Using the expires and version options:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png" shared_url = file.signed_url expires: 300, # 5 minutes from now version: :v4
Using the issuer and signing_key options:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png" key = OpenSSL::PKey::RSA.new "-----BEGIN PRIVATE KEY-----\n..." shared_url = file.signed_url issuer: "service-account@gcloud.com", signing_key: key
Using the headers option:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png" shared_url = file.signed_url method: "GET", headers: { "x-goog-acl" => "public-read", "x-goog-meta-foo" => "bar,baz" }
Generating a signed URL for resumable upload:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png", skip_lookup: true url = file.signed_url method: "POST", content_type: "image/png", headers: { "x-goog-resumable" => "start" } # Send the `x-goog-resumable:start` header and the content type # with the resumable upload POST request.
Using Cloud IAMCredentials signBlob to create the signature:
require "google/cloud/storage" require "google/apis/iamcredentials_v1" require "googleauth" # Issuer is the service account email that the Signed URL will be signed with # and any permission granted in the Signed URL must be granted to the # Google Service Account. issuer = "service-account@project-id.iam.gserviceaccount.com" # Create a lambda that accepts the string_to_sign signer = lambda do |string_to_sign| IAMCredentials = Google::Apis::IamcredentialsV1 iam_client = IAMCredentials::IAMCredentialsService.new # Get the environment configured authorization scopes = ["https://www.googleapis.com/auth/iam"] iam_client.authorization = Google::Auth.get_application_default scopes request = Google::Apis::IamcredentialsV1::SignBlobRequest.new( payload: string_to_sign ) resource = "projects/-/serviceAccounts/#{issuer}" response = iam_client.sign_service_account_blob resource, request response.signed_blob end storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png", skip_lookup: true url = file.signed_url method: "GET", issuer: issuer, signer: signer
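The signer Proc contract described above (a String in, an RSA SHA256 signature out) can also be satisfied with a local key, which is easier to experiment with than the signBlob RPC. This sketch uses a throwaway key generated on the spot; in practice you would load your service account's actual private key:

```ruby
require "openssl"

# Throwaway key for demonstration only; use your service account's
# private key in real code.
key = OpenSSL::PKey::RSA.new 2048

# A signer Proc, as accepted by the signer parameter: it takes the
# string to sign and returns the raw RSA SHA256 signature bytes.
signer = lambda do |string_to_sign|
  key.sign OpenSSL::Digest::SHA256.new, string_to_sign
end

signature = signer.call "canonical-request"

# The corresponding public key can verify what the Proc produced.
key.public_key.verify OpenSSL::Digest::SHA256.new, signature,
                      "canonical-request" #=> true
```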
#size
def size() -> Integer
Content-Length of the data in bytes.
- (Integer)
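Since #size returns a raw byte count, a small formatting helper can make the value easier to read, e.g. when logging. The helper below is hypothetical and not part of this library:

```ruby
# Hypothetical helper: render a byte count (such as the Integer
# returned by File#size) as a human-readable string.
def human_size(bytes)
  units = %w[B KiB MiB GiB TiB]
  return "0 B" if bytes.zero?
  exp = [(Math.log(bytes) / Math.log(1024)).floor, units.size - 1].min
  format "%.1f %s", bytes.to_f / 1024**exp, units[exp]
end

human_size 512        #=> "512.0 B"
human_size 1_048_576  #=> "1.0 MiB"
```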
#soft_delete_time
def soft_delete_time() -> DateTime, nil
The time at which the object became soft-deleted.
- (DateTime, nil) — A DateTime representing the time at which the object became soft-deleted, or nil if the file is not soft-deleted.
#storage_class
def storage_class() -> String
The file's storage class. This defines how the file is stored and determines the SLA and the cost of storage. For more information, see Storage Classes and Per-Object Storage Class.
- (String)
#storage_class=
def storage_class=(storage_class)
Rewrites the file with a new storage class, which determines the SLA and the cost of storage. Accepted values include:
:standard
:nearline
:coldline
:archive
as well as the equivalent strings returned by #storage_class or Bucket#storage_class. For more information, see Storage Classes and Per-Object Storage Class. The default value is the default storage class for the bucket. See Bucket#storage_class.
To pass generation and/or metageneration preconditions, call this method within a block passed to #update.
- storage_class (Symbol, String) — Storage class of the file.
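For reference, the accepted symbols correspond to these API storage class strings. The mapping and helper below are illustrative only; the library performs the equivalent normalization internally:

```ruby
# Illustrative mapping of accepted symbols to the API's string form.
API_STORAGE_CLASSES = {
  standard: "STANDARD",
  nearline: "NEARLINE",
  coldline: "COLDLINE",
  archive:  "ARCHIVE"
}.freeze

# Accept either a symbol from the table above or an API string as-is.
def api_storage_class(value)
  return value if value.is_a?(String)
  API_STORAGE_CLASSES.fetch value
end

api_storage_class :nearline #=> "NEARLINE"
api_storage_class "ARCHIVE" #=> "ARCHIVE"
```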
#temporary_hold?
def temporary_hold?() -> Boolean
Whether there is a temporary hold on the file. A temporary hold will be enforced on the file as long as this property is true, even if the bucket-level retention policy would normally allow deletion. When the temporary hold is removed, the normal bucket-level policy rules once again apply. The default value is false.
- (Boolean) — Returns true if there is a temporary hold on the file, otherwise false.
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.temporary_hold? #=> false file.set_temporary_hold! file.delete # raises Google::Cloud::PermissionDeniedError
#update
def update(generation: nil, if_generation_match: nil, if_generation_not_match: nil, if_metageneration_match: nil, if_metageneration_not_match: nil, override_unlocked_retention: nil)
Updates the file with changes made in the given block in a single PATCH request. The following attributes may be set: #cache_control=, #content_disposition=, #content_encoding=, #content_language=, #content_type=, #custom_time= and #metadata=. The #metadata hash accessible in the block is completely mutable and will be included in the request.
- generation (Integer) (defaults to: nil) — Select a specific revision of the file to update. The default is the latest version.
- if_generation_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the file.
- if_generation_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current generation does not match the given value. If no live file exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the file.
- if_metageneration_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration matches the given value.
- if_metageneration_not_match (Integer) (defaults to: nil) — Makes the operation conditional on whether the file's current metageneration does not match the given value.
- override_unlocked_retention (Boolean) (defaults to: nil) — Must be true to remove the retention configuration, reduce its unlocked retention period, or change its mode from unlocked to locked.
- (file) — a block yielding a delegate object for updating the file
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.update do |f| f.cache_control = "private, max-age=0, no-cache" f.content_disposition = "inline; filename=filename.ext" f.content_encoding = "deflate" f.content_language = "de" f.content_type = "application/json" f.custom_time = DateTime.new 2025, 12, 31 f.metadata["player"] = "Bob" f.metadata["score"] = "10" end
With an if_generation_match precondition:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-bucket" file = bucket.file "path/to/my-file.ext" file.update if_generation_match: 1602263125261858 do |f| f.cache_control = "private, max-age=0, no-cache" end
#updated_at
def updated_at() -> DateTime
The creation or modification time of the file. For buckets with versioning enabled, changing an object's metadata does not change this property.
- (DateTime)
#url
def url(protocol: :https) -> String
Public URL to access the file. If the file is not public, requests to the URL will return an error. (See Acl#public! and Bucket::DefaultAcl#public!) To share a file that is not public, see #signed_url.
- protocol (String) (defaults to: :https) — The protocol to use for the URL. Default is HTTPS.
- (String)
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png" public_url = file.public_url
Generate the URL with a protocol other than HTTPS:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "my-todo-app" file = bucket.file "avatars/heidi/400x400.png" public_url = file.public_url protocol: "http"
#user_project
def user_project()
If this attribute is set to true, transit costs for operations on the file will be billed to the current project for this client. (See Project#project for the ID of the current project.) If this attribute is set to a project ID, and that project is authorized for the currently authenticated service account, transit costs will be billed to that project. This attribute is required with requester pays-enabled buckets. The default is nil.
In general, this attribute should be set when first retrieving the owning bucket by providing the user_project option to Project#bucket or Project#buckets.
See also Bucket#requester_pays= and Bucket#requester_pays.
Setting a non-default project:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "other-project-bucket", user_project: true file = bucket.file "path/to/file.ext" # Billed to current project file.user_project = "my-other-project" file.download "file.ext" # Billed to "my-other-project"
#user_project=
def user_project=(value)
If this attribute is set to true, transit costs for operations on the file will be billed to the current project for this client. (See Project#project for the ID of the current project.) If this attribute is set to a project ID, and that project is authorized for the currently authenticated service account, transit costs will be billed to that project. This attribute is required with requester pays-enabled buckets. The default is nil.
In general, this attribute should be set when first retrieving the owning bucket by providing the user_project option to Project#bucket or Project#buckets.
See also Bucket#requester_pays= and Bucket#requester_pays.
Setting a non-default project:
require "google/cloud/storage" storage = Google::Cloud::Storage.new bucket = storage.bucket "other-project-bucket", user_project: true file = bucket.file "path/to/file.ext" # Billed to current project file.user_project = "my-other-project" file.download "file.ext" # Billed to "my-other-project"