Client(project=None, namespace=None, credentials=None, client_info=<google.api_core.gapic_v1.client_info.ClientInfo object>, client_options=None, database=None, _http=None, _use_grpc=None)
Convenience wrapper for invoking APIs/factories with a project.
.. doctest::

   >>> from google.cloud import datastore
   >>> client = datastore.Client()
Parameters

| Name | Description |
|---|---|
| `project` | str. (Optional) The project to pass to proxied API methods. |
| `namespace` | str. (Optional) Namespace to pass to proxied API methods. |
| `credentials` | (Optional) The OAuth2 Credentials to use for this client. If not passed (and if no `_http` object is passed), falls back to the default inferred from the environment. |
| `client_info` | (Optional) The client info used to send a user-agent string along with API requests. If `None`, then default info will be used. |
| `client_options` | (Optional) Client options used to set user options on the client. The API endpoint should be set through `client_options`. |
| `_http` | (Optional) HTTP object to make requests. Can be any object that defines `request()` with the same interface as `requests.Session.request`. If not passed, an `_http` object is created that is bound to the `credentials` for the current object. |
| `_use_grpc` | bool. (Optional) Explicitly specifies whether to use the gRPC transport (via GAX) or HTTP. If unset, falls back to the `GOOGLE_CLOUD_DISABLE_GRPC` environment variable. |
| `database` | str. (Optional) Database to pass to proxied API methods. |
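For illustration, a client can also be pointed at an explicit project, namespace, or database. A minimal sketch; the values below are hypothetical placeholders, not defaults:

.. doctest::

   >>> from google.cloud import datastore
   >>> # 'my-project', 'my-namespace', and 'my-database' are example values;
   >>> # omit them to fall back to what is inferred from the environment.
   >>> client = datastore.Client(
   ...     project='my-project',
   ...     namespace='my-namespace',
   ...     database='my-database',
   ... )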
Properties
base_url
Getter for API base URL.
current_batch

Currently-active batch.

Returns

| Type | Description |
|---|---|
| `Batch`, or an object implementing its API, or `NoneType` (if the stack is empty) | The batch/transaction at the top of the batch stack. |
current_transaction

Currently-active transaction.

Returns

| Type | Description |
|---|---|
| `Transaction`, or an object implementing its API, or `NoneType` (if the stack is empty) | The transaction at the top of the batch stack. |
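A minimal sketch of how these properties behave around a transaction block (assuming a `client` as above):

.. doctest::

   >>> client.current_transaction is None
   True
   >>> with client.transaction() as xact:
   ...     # Inside the block, the active transaction sits at the top of the
   ...     # batch stack and is visible through both properties.
   ...     assert client.current_transaction is xact
   ...     assert client.current_batch is xact
   >>> client.current_transaction is None
   True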
database
Getter for database.
Methods
aggregation_query
aggregation_query(query)
Proxy to AggregationQuery.
Using aggregation_query to count over a query:
.. testsetup:: aggregation_query

   import uuid

   from google.cloud import datastore
   from google.cloud.datastore.aggregation import CountAggregation

   unique = str(uuid.uuid4())[0:8]
   client = datastore.Client(namespace='ns{}'.format(unique))

   def do_something_with(entity):
       pass
.. doctest:: aggregation_query

   >>> query = client.query(kind='MyKind')
   >>> aggregation_query = client.aggregation_query(query)
   >>> aggregation_query.count(alias='total')
   <google.cloud.datastore.aggregation.AggregationQuery object at ...>
   >>> aggregation_query.fetch()
   <google.cloud.datastore.aggregation.AggregationResultIterator object at ...>
Adding an aggregation to the aggregation_query:
.. doctest:: aggregation_query

   >>> query = client.query(kind='MyKind')
   >>> aggregation_query.add_aggregation(CountAggregation(alias='total'))
   >>> aggregation_query.fetch()
   <google.cloud.datastore.aggregation.AggregationResultIterator object at ...>
Adding multiple aggregations to the aggregation_query:
.. doctest:: aggregation_query

   >>> query = client.query(kind='MyKind')
   >>> total_count = CountAggregation(alias='total')
   >>> all_count = CountAggregation(alias='all')
   >>> aggregation_query.add_aggregations([total_count, all_count])
   >>> aggregation_query.fetch()
   <google.cloud.datastore.aggregation.AggregationResultIterator object at ...>
Using the aggregation_query iterator:
.. doctest:: aggregation_query

   >>> query = client.query(kind='MyKind')
   >>> aggregation_query = client.aggregation_query(query)
   >>> aggregation_query.count(alias='total')
   <google.cloud.datastore.aggregation.AggregationQuery object at ...>
   >>> aggregation_query_iter = aggregation_query.fetch()
   >>> for aggregation_result in aggregation_query_iter:
   ...     do_something_with(aggregation_result)
or manually page through results:
.. doctest:: aggregation_query

   >>> aggregation_query_iter = aggregation_query.fetch()
   >>> pages = aggregation_query_iter.pages
   >>>
   >>> first_page = next(pages)
   >>> first_page_entities = list(first_page)
   >>> aggregation_query_iter.next_page_token is None
   True
Returns

| Type | Description |
|---|---|
| `AggregationQuery` | An AggregationQuery object. |
allocate_ids
allocate_ids(incomplete_key, num_ids, retry=None, timeout=None)
Allocate a list of IDs from a partial key.
Parameters

| Name | Description |
|---|---|
| `incomplete_key` | Key. Partial key to use as base for allocated IDs. |
| `num_ids` | int. The number of IDs to allocate. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |

Exceptions

| Type | Description |
|---|---|
| `ValueError` | if `incomplete_key` is not a partial key. |

Returns

| Type | Description |
|---|---|
| list of `Key` | The (complete) keys allocated with `incomplete_key` as root. |
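A short sketch (assuming a `client` as above; the kind `'Task'` is a hypothetical example):

.. doctest::

   >>> # A key with no ID or name is partial and can seed an allocation.
   >>> incomplete_key = client.key('Task')
   >>> keys = client.allocate_ids(incomplete_key, 3)
   >>> len(keys)
   3
   >>> all(key.id is not None for key in keys)
   True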
batch
batch()
Proxy to Batch.
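Batches collect mutations and send them in a single commit. A minimal sketch (hypothetical kind `'Task'`):

.. doctest::

   >>> with client.batch() as batch:
   ...     # Mutations queued on the batch are committed together when the
   ...     # `with` block exits without an exception.
   ...     task = client.entity(client.key('Task', 1000))
   ...     task['done'] = False
   ...     batch.put(task)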
delete
delete(key, retry=None, timeout=None)
Delete the key in the Cloud Datastore.
Parameters

| Name | Description |
|---|---|
| `key` | Key or Entity. The key to be deleted from the datastore. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |
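A one-line sketch (hypothetical kind and ID):

.. doctest::

   >>> key = client.key('Task', 1234)
   >>> client.delete(key)  # removes the entity stored under `key`, if any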
delete_multi
delete_multi(keys, retry=None, timeout=None)
Delete keys from the Cloud Datastore.
Parameters

| Name | Description |
|---|---|
| `keys` | list of Key or Entity. The keys to be deleted from the Datastore. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |
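The multi-key variant issues a single commit for all keys. A sketch with hypothetical IDs:

.. doctest::

   >>> keys = [client.key('Task', 1), client.key('Task', 2)]
   >>> client.delete_multi(keys)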
entity
entity(key=None, exclude_from_indexes=())
Proxy to Entity.
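A sketch showing the `exclude_from_indexes` option, which keeps large properties out of the built-in indexes (`'description'` is a hypothetical property name):

.. doctest::

   >>> task = client.entity(
   ...     client.key('Task'), exclude_from_indexes=('description',))
   >>> task['description'] = 'A long, unindexed description...'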
get
get(
    key,
    missing=None,
    deferred=None,
    transaction=None,
    eventual=False,
    retry=None,
    timeout=None,
    read_time=None,
)
Retrieve an entity from a single key (if it exists).
Parameters

| Name | Description |
|---|---|
| `key` | Key. The key to be retrieved from the datastore. |
| `missing` | list. (Optional) If a list is passed, the key-only entities returned by the backend as "missing" will be copied into it. |
| `deferred` | list. (Optional) If a list is passed, the keys returned by the backend as "deferred" will be copied into it. |
| `transaction` | Transaction. (Optional) Transaction to use for read consistency. If not passed, uses the current transaction, if set. |
| `eventual` | bool. (Optional) Defaults to strongly consistent (False). Setting True will use eventual consistency, but cannot be used inside a transaction or with `read_time`; otherwise a ValueError is raised. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |
| `read_time` | datetime. (Optional) Read the entity from the specified time (may be None). Cannot be used with eventual consistency or inside a transaction; otherwise a ValueError is raised. This feature is in private preview. |

Exceptions

| Type | Description |
|---|---|
| `ValueError` | if more than one of `eventual==True`, `transaction`, and `read_time` is specified. |

Returns

| Type | Description |
|---|---|
| `Entity` or `NoneType` | The requested entity if it exists. |
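A sketch of a single-key lookup, including the optional `missing` list (hypothetical kind and ID):

.. doctest::

   >>> missing = []
   >>> task = client.get(client.key('Task', 1234), missing=missing)
   >>> # When no entity exists under the key, `task` is None and a key-only
   >>> # stub for the absent entity is appended to `missing`.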
get_multi
get_multi(
    keys,
    missing=None,
    deferred=None,
    transaction=None,
    eventual=False,
    retry=None,
    timeout=None,
    read_time=None,
)
Retrieve entities, along with their attributes.
Parameters

| Name | Description |
|---|---|
| `keys` | list of Key. The keys to be retrieved from the datastore. |
| `missing` | list. (Optional) If a list is passed, the key-only entities returned by the backend as "missing" will be copied into it. If the list is not empty, an error will occur. |
| `deferred` | list. (Optional) If a list is passed, the keys returned by the backend as "deferred" will be copied into it. If the list is not empty, an error will occur. |
| `transaction` | Transaction. (Optional) Transaction to use for read consistency. If not passed, uses the current transaction, if set. |
| `eventual` | bool. (Optional) Defaults to strongly consistent (False). Setting True will use eventual consistency, but cannot be used inside a transaction; otherwise a ValueError is raised. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |
| `read_time` | datetime. (Optional) Read time to use for read consistency. This feature is in private preview. |

Exceptions

| Type | Description |
|---|---|
| `ValueError` | if one or more of `keys` has a project which does not match our project; or if more than one of `eventual==True`, `transaction`, and `read_time` is specified. |

Returns

| Type | Description |
|---|---|
| list of `Entity` | The requested entities. |
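A sketch fetching several keys in one request (hypothetical IDs):

.. doctest::

   >>> keys = [client.key('Task', 1), client.key('Task', 2)]
   >>> missing = []
   >>> tasks = client.get_multi(keys, missing=missing)
   >>> # Entities that exist come back in `tasks`; key-only stubs for the
   >>> # rest are copied into `missing`.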
key
key(*path_args, **kwargs)
Proxy to Key. Passes our project and our database.
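A sketch of building complete and partial keys through the proxy (hypothetical kinds):

.. doctest::

   >>> client.key('Task', 1234)               # complete key: kind + numeric ID
   <Key('Task', 1234), project=...>
   >>> client.key('Parent', 'alice', 'Task')  # partial key under an ancestor
   <Key('Parent', 'alice', 'Task'), project=...>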
put
put(entity, retry=None, timeout=None)
Save an entity in the Cloud Datastore.
Parameters

| Name | Description |
|---|---|
| `entity` | Entity. The entity to be saved to the datastore. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |
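put is an upsert: it creates the entity if absent and overwrites it if present. A minimal sketch (hypothetical kind and property):

.. doctest::

   >>> task = client.entity(client.key('Task', 1234))
   >>> task['done'] = True
   >>> client.put(task)  # creates or overwrites the stored entity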
put_multi
put_multi(entities, retry=None, timeout=None)
Save entities in the Cloud Datastore.
Parameters

| Name | Description |
|---|---|
| `entities` | list of Entity. The entities to be saved to the datastore. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |

Exceptions

| Type | Description |
|---|---|
| `ValueError` | if `entities` is a single entity. |
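The multi-entity variant commits all entities in one request; note that passing a bare entity instead of a list raises ValueError. A sketch with hypothetical data:

.. doctest::

   >>> tasks = []
   >>> for task_id in (1, 2, 3):
   ...     task = client.entity(client.key('Task', task_id))
   ...     task['done'] = False
   ...     tasks.append(task)
   >>> client.put_multi(tasks)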
query
query(**kwargs)
Proxy to Query. Passes our project.
Using query to search a datastore:
.. testsetup:: query

   import uuid

   from google.cloud import datastore

   unique = str(uuid.uuid4())[0:8]
   client = datastore.Client(namespace='ns{}'.format(unique))

   def do_something_with(entity):
       pass
.. doctest:: query

   >>> query = client.query(kind='MyKind')
   >>> query.add_filter('property', '=', 'val')
   <google.cloud.datastore.query.Query object at ...>
Using the query iterator:
.. doctest:: query

   >>> filters = [('property', '=', 'val')]
   >>> query = client.query(kind='MyKind', filters=filters)
   >>> query_iter = query.fetch()
   >>> for entity in query_iter:
   ...     do_something_with(entity)
or manually page through results:
.. doctest:: query

   >>> query_iter = query.fetch()
   >>> pages = query_iter.pages
   >>>
   >>> first_page = next(pages)
   >>> first_page_entities = list(first_page)
   >>> query_iter.next_page_token is None
   True
Returns

| Type | Description |
|---|---|
| `Query` | A query object. |
reserve_ids
reserve_ids(complete_key, num_ids, retry=None, timeout=None)
Reserve a list of IDs sequentially from a complete key.
DEPRECATED. Alias for reserve_ids_sequential. Please use either reserve_ids_multi (recommended) or reserve_ids_sequential.
reserve_ids_multi
reserve_ids_multi(complete_keys, retry=None, timeout=None)
Reserve IDs from a list of complete keys.
Parameters

| Name | Description |
|---|---|
| `complete_keys` | Complete keys for which to reserve IDs. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |

Exceptions

| Type | Description |
|---|---|
| `ValueError` | if any of `complete_keys` is not a complete key. |

Returns

| Type | Description |
|---|---|
| `NoneType` | None |
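A sketch reserving specific numeric IDs so the backend will not auto-allocate them (hypothetical IDs):

.. doctest::

   >>> complete_keys = [client.key('Task', 1000), client.key('Task', 2000)]
   >>> client.reserve_ids_multi(complete_keys)  # returns None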
reserve_ids_sequential
reserve_ids_sequential(complete_key, num_ids, retry=None, timeout=None)
Reserve a list of IDs sequentially from a complete key.
This will reserve the key passed as `complete_key` as well as additional keys derived by incrementing the last ID in the path of `complete_key` sequentially to obtain the number of keys specified in `num_ids`.
Parameters

| Name | Description |
|---|---|
| `complete_key` | Key. Complete key to use as base for reserved IDs. Key must use a numeric ID and not a string name. |
| `num_ids` | int. The number of IDs to reserve. |
| `retry` | A retry object used to retry requests. If `None` is specified, requests will be retried using a default configuration. |
| `timeout` | float. Time, in seconds, to wait for the request to complete. Note that if `retry` is specified, the timeout applies to each individual attempt. |

Exceptions

| Type | Description |
|---|---|
| `ValueError` | if `complete_key` is not a complete key. |

Returns

| Type | Description |
|---|---|
| `NoneType` | None |
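A sketch reserving a contiguous run of IDs starting at the given key's numeric ID (hypothetical values):

.. doctest::

   >>> # Reserves IDs 1000 through 1004 under kind 'Task'.
   >>> client.reserve_ids_sequential(client.key('Task', 1000), 5)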
transaction
transaction(**kwargs)
Proxy to Transaction.
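A minimal read-modify-write sketch; inside the block, mutations are buffered and committed atomically when the block exits (hypothetical kind and property):

.. doctest::

   >>> with client.transaction():
   ...     task = client.get(client.key('Task', 1234))
   ...     if task is not None:
   ...         task['done'] = True
   ...         client.put(task)  # buffered; committed atomically on exit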