This page shows you how to train an AutoML sentiment analysis model from a text dataset using either the Google Cloud console or the Vertex AI API.
Before you begin
Before you can train a text sentiment analysis model, you must create a text sentiment analysis dataset and import your training data into it.
Train an AutoML model
Google Cloud console
In the Google Cloud console, in the Vertex AI section, go to the Datasets page.
Click the name of the dataset you want to use to train your model to open its details page.
Click Train new model.
For the training method, select AutoML. Click Continue.
Enter a name for the model.
If you want to manually set how your training data is split, expand Advanced options and select a data split option. Learn more.
Click Start Training.
Model training can take many hours, depending on the size and complexity of your data and your training budget, if you specified one. You can close this tab and return to it later. You will receive an email when your model has completed training.
API
Select a tab for your language or environment:
REST
Create a TrainingPipeline object to train a model.
Before using any of the request data, make the following replacements:
- LOCATION: The region where the model will be created, such as us-central1
- PROJECT: Your project ID
- MODEL_DISPLAY_NAME: Name for the model as it appears in the user interface
- SENTIMENT_MAX: The maximum sentiment score in your training dataset
- DATASET_ID: The ID for the dataset
- PROJECT_NUMBER: Your project's automatically generated project number
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines
Request JSON body:
{ "displayName": "MODEL_DISPLAY_NAME", "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_sentiment_1.0.0.yaml", "trainingTaskInputs": { "sentimentMax": SENTIMENT_MAX }, "modelToUpload": { "displayName": "MODEL_DISPLAY_NAME" }, "inputDataConfig": { "datasetId": "DATASET_ID" } }
To send your request, save the request body in a file and send it with a command-line tool such as curl.
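For example, a curl invocation along the following lines should work, assuming the request body above is saved as request.json and the gcloud CLI is installed to supply an access token:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines"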
You should receive a JSON response similar to the following:
{ "name": "projects/PROJECT_NUMBER/locations/us-central1/trainingPipelines/PIPELINE_ID", "displayName": "MODEL_DISPLAY_NAME", "inputDataConfig": { "datasetId": "DATASET_ID" }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_sentiment_1.0.0.yaml", "trainingTaskInputs": { "sentimentMax": SENTIMENT_MAX }, "modelToUpload": { "displayName": "MODEL_DISPLAY_NAME" }, "state": "PIPELINE_STATE_PENDING", "createTime": "2020-04-18T01:22:57.479336Z", "updateTime": "2020-04-18T01:22:57.479336Z" }
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
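If you use the Vertex AI SDK for Python instead of the raw REST call, the training flow looks roughly like the sketch below. The placeholder values (project, region, dataset ID, display name, sentiment max) are illustrative and must be replaced with your own; check the SDK reference for the exact parameters available in your SDK version.

from google.cloud import aiplatform

# Replace these placeholder values with your own.
PROJECT = "my-project"            # Your project ID
LOCATION = "us-central1"          # Region where the model is created
DATASET_ID = "1234567890"         # ID of your existing text dataset
MODEL_DISPLAY_NAME = "sentiment-model"
SENTIMENT_MAX = 10                # Maximum sentiment score in your training data

aiplatform.init(project=PROJECT, location=LOCATION)

# Reference the existing text dataset by ID.
dataset = aiplatform.TextDataset(dataset_name=DATASET_ID)

# Define an AutoML text training job for sentiment analysis.
job = aiplatform.AutoMLTextTrainingJob(
    display_name=MODEL_DISPLAY_NAME,
    prediction_type="sentiment",
    sentiment_max=SENTIMENT_MAX,
)

# Run the job; this call blocks until training completes and returns the model.
model = job.run(
    dataset=dataset,
    model_display_name=MODEL_DISPLAY_NAME,
)

print(model.resource_name)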
Control the data split using REST
You can control how your training data is split between the training, validation, and test sets. When using the Vertex AI API, use the Split object to determine your data split. The Split object can be included in the InputConfig object as one of several object types, each of which provides a different way to split the training data. You can select one method only.
- FractionSplit:
  - TRAINING_FRACTION: The fraction of the training data to be used for the training set.
  - VALIDATION_FRACTION: The fraction of the training data to be used for the validation set. Not used for video data.
  - TEST_FRACTION: The fraction of the training data to be used for the test set.

  If any of the fractions are specified, all must be specified. The fractions must add up to 1.0. The default values for the fractions differ depending on your data type. Learn more.

  "fractionSplit": {
    "trainingFraction": TRAINING_FRACTION,
    "validationFraction": VALIDATION_FRACTION,
    "testFraction": TEST_FRACTION
  },
- FilterSplit:
  - TRAINING_FILTER: Data items that match this filter are used for the training set.
  - VALIDATION_FILTER: Data items that match this filter are used for the validation set. Must be "-" for video data.
  - TEST_FILTER: Data items that match this filter are used for the test set.

  These filters can be used with the ml_use label, or with any labels you apply to your data. Learn more about using the ml-use label and other labels to filter your data.

  The following example shows how to use the filterSplit object with the ml_use label, with the validation set included:

  "filterSplit": {
    "trainingFilter": "labels.aiplatform.googleapis.com/ml_use=training",
    "validationFilter": "labels.aiplatform.googleapis.com/ml_use=validation",
    "testFilter": "labels.aiplatform.googleapis.com/ml_use=test"
  }