Batch ingestion lets you import feature values in bulk from a valid data source. In a batch ingestion request, you can import values for up to 100 features for an entity type. Note that you can have only one batch ingestion job running per entity type to avoid collisions.
In a batch ingestion request, specify the location of your source data and how it maps to features in your featurestore. Because each batch ingestion request is for a single entity type, your source data must also be for a single entity type.
After the import has successfully completed, feature values are available to subsequent read operations.
- For information about source data requirements, see Source data requirements.
- For information about how long Vertex AI Feature Store retains your data in the offline store, see Vertex AI Feature Store in Quotas and limits.
- For information about the oldest feature value timestamp that you can ingest, see Vertex AI Feature Store in Quotas and limits.
Ingestion job performance
Vertex AI Feature Store provides high-throughput ingestion, but the minimum latency is a few minutes. Each request to Vertex AI Feature Store starts a job to complete the work, so an ingestion job takes a few minutes to complete even if you're ingesting a single record.
To adjust how an ingestion job performs, change the following two variables:
- The number of featurestore online serving nodes.
- The number of workers used for the ingestion job. Workers process and write data into the featurestore.
The recommended number of workers is one worker for every 10 online serving nodes on the featurestore. You can go higher if the online serving load is low. You can specify a maximum of 100 workers. For more guidance, see Monitor and tune resources to optimize batch ingestion.
If the online serving cluster is under-provisioned, the ingestion job might fail. In the event of a failure, retry the import request when the online serving load is low, or increase the node count of your featurestore and then retry the request.
If the featurestore doesn't have an online store (zero online serving nodes), the ingestion job writes only to the offline store, and the performance of the job depends solely on the number of ingestion workers.
Data consistency
Inconsistencies can be introduced if the source data is modified during import. Ensure that any source data modifications are complete before you start an ingestion job. Also, duplicate feature values can result in different values being served between online and batch requests. Ensure that you have one feature value for each entity ID and timestamp pair.
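For example, if you stage your source data with pandas before ingestion, a minimal sketch like the following (with hypothetical column names) keeps one value per entity ID and timestamp pair:

```python
import pandas as pd

# Hypothetical source data with a duplicate (entity_id, feature_time) pair.
df = pd.DataFrame(
    {
        "entity_id": ["a", "a", "b"],
        "feature_time": pd.to_datetime(
            ["2021-03-01", "2021-03-01", "2021-03-01"]
        ),
        "temperature": [20.1, 20.3, 18.7],
    }
)

# Keep one row per (entity_id, feature_time) so online and batch
# serving return the same value.
deduped = df.drop_duplicates(subset=["entity_id", "feature_time"], keep="last")
```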
If an import operation fails, the featurestore might only have partial data, which can lead to inconsistent values being returned between online and batch serving requests. To avoid this inconsistency, retry the same import request again and wait until the request successfully completes.
Null values and empty arrays
During ingestion, Vertex AI Feature Store considers null scalar values or empty arrays as empty values. These include empty values in a CSV column. Vertex AI Feature Store doesn't support non-scalar null values, such as a null value in an array.

During online serving and batch serving, Vertex AI Feature Store returns the latest non-null or non-empty value of the feature. If a historical value of the feature isn't available, then Vertex AI Feature Store returns null.
NaN values
Vertex AI Feature Store supports NaN (Not a Number) values in Double and DoubleArray. During ingestion, you can enter NaN in the serving input CSV file to represent a NaN value. During online serving and batch serving, Vertex AI Feature Store returns NaN for NaN values.
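For example, a hypothetical CSV source with an entity_id column, a feature_time column, and a Double feature column named temperature could combine these cases:

```
entity_id,feature_time,temperature
a,2021-03-01T00:00:00Z,20.5
b,2021-03-01T00:00:00Z,NaN
c,2021-03-01T00:00:00Z,
```

The value for entity b is ingested as NaN, while the empty column for entity c is treated as an empty value, so serving returns the latest earlier non-empty value, or null if none exists.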
Batch ingestion
Import values in bulk into a featurestore for one or more features of a single entity type.
Web UI
- In the Vertex AI section of the Google Cloud console, go to the Features page.
- Select a region from the Region drop-down list.
- In the features table, view the Entity type column and find the entity type that contains the features that you want to ingest values for.
- Click the name of the entity type.
- From the action bar, click Ingest values.
- For Data source, select one of the following:
- Cloud Storage CSV file: Select this option to ingest data from multiple CSV files from Cloud Storage. Specify the path and name of the CSV file. To specify additional files, click Add another file.
- Cloud Storage AVRO file: Select this option to ingest data from an AVRO file from Cloud Storage. Specify the path and name of the AVRO file.
- BigQuery table: Select this option to ingest data from a BigQuery table or BigQuery view. Browse and select a table or view to use, which is in the following format:
PROJECT_ID.DATASET_ID.TABLE_ID
- Click Continue.
- For Map column to features, specify which columns in your source data map to entities and features in your featurestore.
- Specify the column name in your source data that contains the entity IDs.
- For the timestamp, specify a timestamp column in your source data or specify a single timestamp associated with all feature values that you ingest.
- In the list of features, enter the source data column name that maps to each feature. By default, Vertex AI Feature Store assumes that the feature name and column name match.
- Click Ingest.
REST
To ingest feature values for existing features, send a POST request by using the featurestores.entityTypes.importFeatureValues method. Note that if the names of the source data columns and the destination feature IDs are different, include the sourceField parameter.
Before using any of the request data, make the following replacements:
- LOCATION_ID: Region where the featurestore is created. For example, us-central1.
- PROJECT_ID: Your project ID.
- FEATURESTORE_ID: ID of the featurestore.
- ENTITY_TYPE_ID: ID of the entity type.
- ENTITY_SOURCE_COLUMN_ID: ID of the source column that contains entity IDs.
- FEATURE_TIME_ID: ID of the source column that contains the feature timestamps for the feature values.
- FEATURE_ID: ID of an existing feature in the featurestore to import values for.
- FEATURE_SOURCE_COLUMN_ID: ID of the source column that contains feature values for the entities.
- SOURCE_DATA_DETAILS: The source data location, which also indicates the format, such as "bigquerySource": { "inputUri": "bq://test.dataset.sourcetable" } for a BigQuery table or BigQuery view.
- WORKER_COUNT: The number of workers to use to write data to the featurestore.
HTTP method and URL:
POST https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID:importFeatureValues
Request JSON body:
{ "entityIdField": "ENTITY_SOURCE_COLUMN_ID", "featureTimeField": "FEATURE_TIME_ID", SOURCE_DATA_DETAILS, "featureSpecs": [{ "id": "FEATURE_ID", "sourceField": "FEATURE_SOURCE_COLUMN_ID" }], "workerCount": WORKER_COUNT }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID:importFeatureValues"
```
PowerShell
Save the request body in a file named request.json, and execute the following command:
```
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID:importFeatureValues" | Select-Object -Expand Content
```
You should see output similar to the following. You can use the OPERATION_ID in the response to get the status of the operation.
{ "name": "projects/PROJECT_NUMBER/locations/LOCATION_ID/featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID/operations/OPERATION_ID", "metadata": { "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ImportFeatureValuesOperationMetadata", "genericMetadata": { "createTime": "2021-03-02T00:04:13.039166Z", "updateTime": "2021-03-02T00:04:13.039166Z" } } }
Vertex AI SDK for Python
To learn how to install the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.
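As a minimal sketch, assuming an existing featurestore and entity type and using placeholder resource and column IDs, a batch ingestion from BigQuery with the high-level SDK can look like this:

```python
from google.cloud import aiplatform

aiplatform.init(project="PROJECT_ID", location="us-central1")

# Look up the existing entity type in the featurestore (placeholder IDs).
entity_type = aiplatform.featurestore.EntityType(
    entity_type_name="ENTITY_TYPE_ID", featurestore_id="FEATURESTORE_ID"
)

# Start a batch ingestion job from a BigQuery table; sync=True blocks
# until the job completes.
entity_type.ingest_from_bq(
    feature_ids=["FEATURE_ID"],
    feature_time="FEATURE_TIME_ID",  # source column with feature timestamps
    bq_source_uri="bq://test.dataset.sourcetable",
    feature_source_fields={"FEATURE_ID": "FEATURE_SOURCE_COLUMN_ID"},
    entity_id_field="ENTITY_SOURCE_COLUMN_ID",
    worker_count=1,
    sync=True,
)
```

The SDK also provides ingest_from_gcs and ingest_from_df for Cloud Storage files and pandas DataFrames, respectively.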
Python
The client library for Vertex AI is included when you install the Vertex AI SDK for Python. To learn how to install the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.
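As a minimal sketch with the lower-level aiplatform_v1 client, again using placeholder IDs, the same import request can be expressed directly:

```python
from google.cloud import aiplatform_v1

# The API endpoint must match the featurestore's region.
client = aiplatform_v1.FeaturestoreServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

request = aiplatform_v1.ImportFeatureValuesRequest(
    entity_type=(
        "projects/PROJECT_ID/locations/us-central1/"
        "featurestores/FEATURESTORE_ID/entityTypes/ENTITY_TYPE_ID"
    ),
    bigquery_source=aiplatform_v1.BigQuerySource(
        input_uri="bq://test.dataset.sourcetable"
    ),
    entity_id_field="ENTITY_SOURCE_COLUMN_ID",
    feature_time_field="FEATURE_TIME_ID",
    feature_specs=[
        aiplatform_v1.ImportFeatureValuesRequest.FeatureSpec(
            id="FEATURE_ID", source_field="FEATURE_SOURCE_COLUMN_ID"
        )
    ],
    worker_count=1,
)

# import_feature_values returns a long-running operation.
lro = client.import_feature_values(request=request)
print(lro.result())  # blocks until the ingestion job completes
```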
Java
To learn how to install and use the client library for Vertex AI, see Vertex AI client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
To learn how to install and use the client library for Vertex AI, see Vertex AI client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
View ingestion jobs
Use the Google Cloud console to view batch ingestion jobs in a Google Cloud project.
Web UI
- In the Vertex AI section of the Google Cloud console, go to the Features page.
- Select a region from the Region drop-down list.
- From the action bar, click View ingestion jobs to list ingestion jobs for all featurestores.
- Click the ID of an ingestion job to view its details such as its data source, number of ingested entities, and number of ingested feature values.
Overwrite existing data in a featurestore
You can overwrite existing feature values by re-importing values that have the same timestamps; you don't need to delete the existing feature values first. For example, you might rely on underlying source data that was recently changed. To keep your featurestore consistent with that underlying data, import your feature values again. If the timestamps don't match, the imported values are considered unique and the old values continue to exist (they aren't overwritten).
To ensure consistency between online and batch serving requests, wait until the ingestion job is complete before making any serving requests.
Backfill historical data
If you're backfilling data, that is, ingesting historical feature values, disable online serving for your ingestion job. Online serving is for serving only the latest feature values, which a backfill doesn't include. Disabling online serving eliminates any load on your online serving nodes and increases throughput for your ingestion job, which can decrease its completion time.
You can disable online serving for ingestion jobs when you use the API or client libraries. For more information, see the disableOnlineServing field of the importFeatureValues method.
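As a minimal sketch with the Vertex AI SDK for Python, assuming an existing featurestore and entity type and a hypothetical BigQuery table of historical values:

```python
from google.cloud import aiplatform

aiplatform.init(project="PROJECT_ID", location="us-central1")

entity_type = aiplatform.featurestore.EntityType(
    entity_type_name="ENTITY_TYPE_ID", featurestore_id="FEATURESTORE_ID"
)

# Backfill: write only to the offline store. Disabling online serving
# removes load from online serving nodes and can shorten the job.
entity_type.ingest_from_bq(
    feature_ids=["FEATURE_ID"],
    feature_time="FEATURE_TIME_ID",
    bq_source_uri="bq://test.dataset.historical_values",  # hypothetical table
    entity_id_field="ENTITY_SOURCE_COLUMN_ID",
    disable_online_serving=True,
)
```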
What's next
- Learn how to serve features through online serving or batch serving.
- Learn how to monitor ingested feature values over time.
- View the Vertex AI Feature Store concurrent batch job quota.
- Troubleshoot common Vertex AI Feature Store issues.