Batches
Create / interact with a batch of updates / deletes.
Batches provide the ability to execute multiple operations in a single request to the Cloud Datastore API.
See https://cloud.google.com/datastore/docs/concepts/entities#batch_operations
class google.cloud.datastore.batch.Batch(client)
Bases: object
An abstraction representing a collected group of updates / deletes.
Used to build up a bulk mutation.
For example, the following snippet of code will put the two save operations and the delete operation into the same mutation, and send them to the server in a single API request:
>>> from google.cloud import datastore
>>> client = datastore.Client()
>>> batch = client.batch()
>>> batch.begin()
>>> batch.put(entity1)
>>> batch.put(entity2)
>>> batch.delete(key3)
>>> batch.commit()
You can also use a batch as a context manager, in which case commit() will be called automatically if its block exits without raising an exception:
>>> with batch:
... batch.put(entity1)
... batch.put(entity2)
... batch.delete(key3)
By default, no updates will be sent if the block exits with an error:
>>> with batch:
... do_some_work(batch)
... raise Exception() # rolls back
Parameters
client (google.cloud.datastore.client.Client) – The client used to connect to datastore.
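As an illustration (a sketch assuming the google-cloud-datastore package is installed and default credentials are available), a batch bound to a client can be obtained either from the client factory, as in the snippet above, or by constructing the class directly:
>>> from google.cloud import datastore
>>> from google.cloud.datastore.batch import Batch
>>> client = datastore.Client()
>>> batch = client.batch()      # the usual factory
>>> same_thing = Batch(client)  # direct construction, equivalent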
begin()
Begins a batch.
This method is called automatically when entering a with statement; however, it can be called explicitly if you don’t want to use a context manager.
Overridden by google.cloud.datastore.transaction.Transaction.
Raises
ValueError – if the batch has already begun.
commit()
Commits the batch.
This is called automatically upon exiting a with statement; however, it can be called explicitly if you don’t want to use a context manager.
Raises
ValueError – if the batch is not in progress.
current()
Return the topmost batch / transaction, or None.
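For illustration (a sketch reusing the client from the snippets above, and assuming no other batch or transaction is already active), current() returns None while nothing is active and the topmost batch once one has been entered:
>>> batch = client.batch()
>>> batch.current() is None   # nothing active yet
True
>>> with batch:
...     batch.current() is batch
True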
delete(key)
Remember a key to be deleted during commit().
Parameters
key (google.cloud.datastore.key.Key) – the key to be deleted.
Raises
ValueError – if the batch is not in progress, if key is not complete, or if the key’s project does not match ours.
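A brief sketch (the 'Task' kind and numeric ID are made-up values): the key passed to delete() must be complete and belong to the same project as the batch:
>>> key = client.key('Task', 1234)   # complete key: kind and ID
>>> with client.batch() as batch:
...     batch.delete(key)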
property mutations()
Getter for the changes accumulated by this batch.
Every batch is committed with a single commit request containing all the work to be done as mutations. Inside a batch, calling put() with an entity, or delete() with a key, builds up the request by adding a new mutation. This getter returns the protobufs that have been built up so far.
Return type
iterable
Returns
The list of datastore_pb2.Mutation protobufs to be sent in the commit request.
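As an illustration (reusing the hypothetical entity1 and key3 from the first snippet), each put() or delete() adds one mutation to the accumulated list:
>>> batch = client.batch()
>>> batch.begin()
>>> batch.put(entity1)
>>> batch.delete(key3)
>>> len(batch.mutations)
2
>>> batch.rollback()   # discard the accumulated mutations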
property namespace()
Getter for namespace in which the batch will run.
Return type
str
Returns
The namespace in which the batch will run.
property project()
Getter for project in which the batch will run.
Return type
str
Returns
The project in which the batch will run.
put(entity)
Remember an entity’s state to be saved during commit().
NOTE: Any existing properties for the entity will be replaced by those currently set on this instance. Already-stored properties which do not correspond to keys set on this instance will be removed from the datastore.
NOTE: Property values which are “text” (‘unicode’ in Python2, ‘str’ in Python3) map to ‘string_value’ in the datastore; values which are “bytes” (‘str’ in Python2, ‘bytes’ in Python3) map to ‘blob_value’.
When an entity has a partial key, calling commit() sends it as an insert mutation and the key is completed. On return, the key for the entity passed in is updated to match the key ID assigned by the server.
Parameters
entity (google.cloud.datastore.entity.Entity) – the entity to be saved.
Raises
ValueError – if the batch is not in progress, if entity has no key assigned, or if the key’s project does not match ours.
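For example (a sketch; the 'Task' kind and 'description' property are hypothetical), an entity saved with a partial key receives a server-assigned ID once the batch is committed:
>>> entity = datastore.Entity(key=client.key('Task'))   # partial key, no ID yet
>>> entity['description'] = 'Learn about batches'
>>> with client.batch() as batch:
...     batch.put(entity)
>>> entity.key.id is not None   # completed by the server on commit
True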
rollback()
Rolls back the current batch.
Marks the batch as aborted (can’t be used again).
Overridden by google.cloud.datastore.transaction.Transaction.
Raises
ValueError – if the batch is not in progress.
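A minimal sketch of manual error handling (do_some_work() is the same hypothetical helper used above); this mirrors what the context manager does automatically:
>>> batch = client.batch()
>>> batch.begin()
>>> try:
...     do_some_work(batch)
...     batch.commit()
... except Exception:
...     batch.rollback()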