Managing datasets

A dataset contains representative samples of the type of content you want to analyze, each labeled with a value that indicates how positive its sentiment is. The dataset serves as the input for training a model.

The main steps for building a dataset are:

  1. Create a dataset.
  2. Import data items into the dataset.
  3. Label the items.

A project can have multiple datasets, each used to train a separate model. You can get a list of the available datasets and can delete datasets you no longer need.

Creating a dataset

The first step in creating a custom model is to create an empty dataset that will eventually hold the training data for the model. The newly created dataset doesn't contain any data until you import items into it.

When you create a dataset for sentiment analysis, you specify the maximum positive sentiment score so that AutoML Natural Language Sentiment Analysis knows the proper range. The sentiment score is an integer ranging from 0 (relatively negative) to a maximum value of your choice (positive). For example, if you want to identify whether the sentiment is negative, positive, or neutral, you would label the training data with sentiment scores of 0 (negative), 1 (neutral), and 2 (positive); the Maximum sentiment score (sentiment_max) for the dataset is 2. If you want to capture more granularity with five levels of sentiment, you still label items with the most negative sentiment as 0 and use 4 for the most positive sentiment, so the Maximum sentiment score (sentiment_max) for the dataset is 4.
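For example, a few hypothetical rows from a training CSV for the five-level dataset described above might look like the following (assuming the content,sentiment layout covered in Preparing your training data; the text and scores here are purely illustrative):

  "The product broke after two days and support never replied.",0
  "Delivery was on time and the packaging was fine.",2
  "Absolutely love it; best purchase I've made all year.",4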

Web UI

The AutoML Natural Language Sentiment Analysis UI enables you to create a new dataset and import items into it from the same page. If you would rather import items later, select Import text items later at step 4 below.

  1. Visit the AutoML Natural Language Sentiment Analysis page and select the Launch app link in the AutoML Sentiment Analysis box.

    The Datasets page shows the status of previously created datasets for the current project.

    To add a dataset for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the New Dataset button in the title bar.

  3. On the Create dataset page, enter a name for the dataset and select Sentiment analysis as the objective.

  4. Under Import text items, specify where to find the labeled text items to use for training the model.

    You can:

    • Upload a .csv file that contains the training text items and their associated sentiment values from your local computer or from Google Cloud Storage.

    • Upload a collection of .txt or .zip files that contain the training text items from your local computer.

    • Postpone uploading text items and labels until later.

  5. Under Sentiment score, choose the Maximum sentiment score used in the training data.

    The sentiment value is an integer ranging from 0 (relatively negative) to a maximum value of your choice (positive).

  6. Click Create dataset.

    You're returned to the Datasets page; your dataset shows an in-progress animation while your documents are being imported. Importing takes approximately 10 minutes per 1000 documents, though the actual time can vary.

    After the dataset is successfully created, you will receive a message at the email address you used to sign up for the program.

Command-line

In the command below, replace project-id with the ID for your project.

  curl \
    -X POST \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type: application/json" \
    https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/datasets \
    -d '{
      "displayName": "test_dataset",
      "textSentimentDatasetMetadata": {
        "sentimentMax": 4
      }
    }'

You should see output similar to the following:

name: "projects/000000000000/locations/us-central1/datasets/TST8962998974766436002"
display_name: "test_dataset"
create_time {
  seconds: 1538855662
  nanos: 51542000
}
text_sentiment_dataset_metadata {
  sentiment_max: 4
}

Note the name and ID of the new dataset, which you will need for other operations such as importing items into your dataset and training a model. The dataset name has the format projects/{project-id}/locations/us-central1/datasets/{dataset-id}; the dataset ID is the element that appears after datasets/ in the "name" value of the response.
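If you're scripting these calls, you can extract the dataset ID from the full name with a quick shell one-liner (a convenience sketch; the name below is the placeholder value from the sample output above):

  DATASET_ID=$(echo "projects/000000000000/locations/us-central1/datasets/TST8962998974766436002" \
    | awk -F/ '{print $NF}')
  echo "$DATASET_ID"  # TST8962998974766436002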

Java

import com.google.cloud.automl.v1beta1.AutoMlClient;
import com.google.cloud.automl.v1beta1.Dataset;
import com.google.cloud.automl.v1beta1.LocationName;
import com.google.cloud.automl.v1beta1.TextSentimentDatasetMetadata;
import java.io.IOException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

class CreateDataset {

  // Create an empty dataset
  static void createDataset(
      String projectId, String computeRegion, String datasetName, String sentimentMax)
      throws IOException {
    // String projectId = "YOUR_PROJECT_ID";
    // String computeRegion = "us-central1";
    // String datasetName = "YOUR_DATASET_DISPLAY_NAME";
    // Integer value representing the maximum positive sentiment score. The range would be from 0
    // to sentimentMax.
    // For additional details, please view the documentation at:
    // https://cloud.google.com/natural-language/automl/sentiment/docs/datasets
    // String sentimentMax = "YOUR_SENTIMENT_VALUE";

    // Instantiates a client.
    try (AutoMlClient client = AutoMlClient.create()) {

      // A resource that represents Google Cloud Platform location.
      LocationName projectLocation = LocationName.of(projectId, computeRegion);

      // Specify the text sentiment dataset metadata for the dataset.
      TextSentimentDatasetMetadata textSentimentDatasetMetadata =
          TextSentimentDatasetMetadata.newBuilder()
              .setSentimentMax(Integer.valueOf(sentimentMax))
              .build();

      // Set dataset name and dataset metadata.
      Dataset myDataset =
          Dataset.newBuilder()
              .setDisplayName(datasetName)
              .setTextSentimentDatasetMetadata(textSentimentDatasetMetadata)
              .build();

      // Create a dataset with the dataset metadata in the region.
      Dataset dataset = client.createDataset(projectLocation, myDataset);

      // Display the dataset information.
      System.out.println(String.format("Dataset name: %s", dataset.getName()));
      System.out.println(
          String.format(
              "Dataset Id: %s",
              dataset.getName().split("/")[dataset.getName().split("/").length - 1]));
      System.out.println(String.format("Dataset display name: %s", dataset.getDisplayName()));
      System.out.println("Text sentiment dataset metadata:");
      System.out.print(String.format("\t%s", dataset.getTextSentimentDatasetMetadata()));
      System.out.println(String.format("Dataset example count: %d", dataset.getExampleCount()));
      DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
      String createTime =
          dateFormat.format(new java.util.Date(dataset.getCreateTime().getSeconds() * 1000));
      System.out.println(String.format("Dataset create time: %s", createTime));
    }
  }
}

Node.js

const automl = require(`@google-cloud/automl`);
const util = require(`util`);
const client = new automl.v1beta1.AutoMlClient();

/**
 * Demonstrates using the AutoML client to create a dataset
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = '[PROJECT_ID]' e.g., "my-gcloud-project";
// const computeRegion = '[REGION_NAME]' e.g., "us-central1";
// const datasetName = '[DATASET_NAME]' e.g., "myDataset";
// const sentimentMax = [SENTIMENT_MAX] e.g., 4;

// A resource that represents Google Cloud Platform location.
const projectLocation = client.locationPath(projectId, computeRegion);

const datasetMetadata = {sentimentMax: sentimentMax};

// Set dataset name and metadata.
const myDataset = {
  displayName: datasetName,
  textSentimentDatasetMetadata: datasetMetadata,
};

// Create a dataset with the dataset metadata in the region.
client
  .createDataset({parent: projectLocation, dataset: myDataset})
  .then(responses => {
    const dataset = responses[0];

    // Display the dataset information.
    console.log(`Dataset name: ${dataset.name}`);
    console.log(`Dataset Id: ${dataset.name.split(`/`).pop()}`);
    console.log(`Dataset display name: ${dataset.displayName}`);
    console.log(`Text sentiment dataset metadata:`);
    console.log(
      `\t${util.inspect(dataset.textSentimentDatasetMetadata, false, null)}`
    );
    console.log(`Dataset example count: ${dataset.exampleCount}`);
  })
  .catch(err => {
    console.error(err);
  });

Python

    # TODO(developer): Uncomment and set the following variables
    # project_id = '[PROJECT_ID]'
    # compute_region = '[COMPUTE_REGION]'
    # dataset_name = '[DATASET_NAME]'
    # sentiment_max = 4  # maximum sentiment score; an integer with a maximum of 10

    from datetime import datetime

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    # A resource that represents Google Cloud Platform location.
    project_location = client.location_path(project_id, compute_region)

    # Specify the sentiment score for the dataset.
    dataset_metadata = {"sentiment_max": sentiment_max}

    # Set dataset name and metadata.
    my_dataset = {
        "display_name": dataset_name,
        "text_sentiment_dataset_metadata": dataset_metadata,
    }

    # Create a dataset with the dataset metadata in the region.
    dataset = client.create_dataset(project_location, my_dataset)

    # Display the dataset information.
    print("Dataset name: {}".format(dataset.name))
    print("Dataset id: {}".format(dataset.name.split("/")[-1]))
    print("Dataset display name: {}".format(dataset.display_name))
    print("Text sentiment dataset metadata:")
    print("\t{}".format(dataset.text_sentiment_dataset_metadata))
    print("Dataset example count: {}".format(dataset.example_count))
    print("Model create time: {}".format(datetime.fromtimestamp(dataset.create_time.seconds).strftime("%Y-%m-%dT%H:%M:%SZ")))

Importing items into a dataset

After you have created a dataset, you can import training items from a CSV file stored in a Google Cloud Storage bucket. For details on preparing your data and creating a CSV file for import, see Preparing your training data.

You can import items into an empty dataset or import additional items into an existing dataset.

Web UI

The AutoML Natural Language Sentiment Analysis UI enables you to create a new dataset and import items into it from the same page; see Creating a dataset. The steps below import items into an existing dataset.

  1. Open the AutoML Natural Language Sentiment Analysis UI, select the Launch app link in the AutoML Sentiment Analysis box, and select the dataset from the Datasets page.

  2. On the Text items page, click Add items in the title bar and select the import method from the drop-down list.

    You can:

    • Upload a .csv file that contains the training text items and their associated sentiment values from your local computer or from Google Cloud Storage.

    • Upload .txt or .zip files that contain the training text items from your local computer.

  3. Select the file(s) to import.

Command-line

In the command below, replace project-id and dataset-id with your IDs, and replace csv-file-URI with the path to dataset.csv in your Google Cloud Storage bucket.

  curl \
    -X POST \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type: application/json" \
    https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/datasets/dataset-id:importData \
    -d '{
      "inputConfig": {
        "gcsSource": {
          "inputUris": ["csv-file-URI"]
        }
      }
    }'

You should see output similar to the following. You can use the operation ID to get the status of the task. For an example, see Getting the status of an operation.

{
  "name": "projects/000000000000/locations/us-central1/operations/1979469554520650937",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2018-04-27T01:28:36.128120Z",
    "updateTime": "2018-04-27T01:28:36.128150Z",
    "cancellable": true
  }
}
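
To check the status from the command line, issue a GET request against the operation name returned above (a minimal sketch; replace operation-name with the full "name" value from the response, such as projects/000000000000/locations/us-central1/operations/1979469554520650937):

  curl \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type: application/json" \
    https://automl.googleapis.com/v1beta1/operation-name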

Java

import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.automl.v1beta1.AutoMlClient;
import com.google.cloud.automl.v1beta1.DatasetName;
import com.google.cloud.automl.v1beta1.GcsSource;
import com.google.cloud.automl.v1beta1.InputConfig;
import com.google.cloud.automl.v1beta1.OperationMetadata;
import com.google.protobuf.Empty;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;

class ImportData {

  // Import data from Google Cloud Storage into a dataset
  static void importData(String projectId, String computeRegion, String datasetId, String[] gcsUris)
      throws InterruptedException, ExecutionException, IOException, TimeoutException {
    // String projectId = "YOUR_PROJECT_ID";
    // String computeRegion = "us-central1";
    // String datasetId = "YOUR_DATASET_ID";
    // String[] gcsUris = {"PATH_TO_YOUR_DATA_FILE1","PATH_TO_YOUR_DATA_FILE2"};

    // Instantiates a client
    try (AutoMlClient client = AutoMlClient.create()) {

      // Get the complete path of the dataset.
      DatasetName datasetFullId = DatasetName.of(projectId, computeRegion, datasetId);

      GcsSource.Builder gcsSource = GcsSource.newBuilder();

      // Get multiple training data files to be imported from gcsSource.
      for (String inputUri : gcsUris) {
        gcsSource.addInputUris(inputUri);
      }

      // Import data from the input URI
      InputConfig inputConfig = InputConfig.newBuilder().setGcsSource(gcsSource).build();

      OperationFuture<Empty, OperationMetadata> response =
          client.importDataAsync(datasetFullId, inputConfig);

      System.out.format(
          "Import data operation name: %s\n", response.getInitialFuture().get().getName());
      System.out.println("Processing import...");
      // Cancel the operation to prevent charges when testing.
      client.getOperationsClient().cancelOperation(response.getInitialFuture().get().getName());
    }
  }
}

Node.js

const automl = require(`@google-cloud/automl`);
const client = new automl.v1beta1.AutoMlClient();

/**
 * Demonstrates using the AutoML client to import labeled items.
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = '[PROJECT_ID]' e.g., "my-gcloud-project";
// const computeRegion = '[REGION_NAME]' e.g., "us-central1";
// const datasetId = '[DATASET_ID]' e.g., "TST8051890775971069952";
// const gcsPath = '[GCS_PATH]' e.g., "gs://<bucket-name>/<csv file>",
// a comma-separated list of .csv paths in AutoML Natural Language Sentiment CSV format;

// Get the full path of the dataset.
const datasetFullId = client.datasetPath(projectId, computeRegion, datasetId);

// Get the multiple Google Cloud Storage URIs.
const inputUris = gcsPath.split(`,`);
const inputConfig = {
  gcsSource: {
    inputUris: inputUris,
  },
};

// Import the data from the input URI.
client
  .importData({name: datasetFullId, inputConfig: inputConfig})
  .then(responses => {
    const operation = responses[0];
    console.log(`Processing import...`);
    return operation.promise();
  })
  .then(responses => {
    // The final result of the operation.
    const operationDetails = responses[2];

    // Get the data import details.
    console.log('Data import details:');
    console.log(`\tOperation details:`);
    console.log(`\t\tName: ${operationDetails.name}`);
    console.log(`\t\tDone: ${operationDetails.done}`);
  })
  .catch(err => {
    console.error(err);
  });

Python

    # TODO(developer): Uncomment and set the following variables
    # project_id = '[PROJECT_ID]'
    # compute_region = '[COMPUTE_REGION]'
    # dataset_id = '[DATASET_ID]'
    # path = 'gs://path/to/file.csv'

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    # Get the full path of the dataset.
    dataset_full_id = client.dataset_path(
        project_id, compute_region, dataset_id
    )

    # Get the multiple Google Cloud Storage URIs.
    input_uris = path.split(",")
    input_config = {"gcs_source": {"input_uris": input_uris}}

    # Import the dataset from the input URI.
    response = client.import_data(dataset_full_id, input_config)

    print("Processing import...")
    # synchronous check of operation status.
    print("Data imported. {}".format(response.result()))

Labeling training items

To be useful for training a model, each item in a dataset must have a sentiment value assigned to it. AutoML Natural Language Sentiment Analysis ignores items without a sentiment value. You can provide sentiment values for your training items in three ways:

  • Include the values in your .csv file
  • Set the values for your items in the AutoML Natural Language Sentiment Analysis UI
  • Request labeling from human labelers using the AI Platform Data Labeling Service

The AutoML API does not include methods for labeling.

For details about assigning sentiment values in your .csv file, see Preparing your training data.

To label items in the AutoML Natural Language Sentiment Analysis UI, select the dataset from the dataset listing page to see its details. The display name of the selected dataset appears in the title bar, and the page lists the individual items in the dataset along with their assigned sentiment values. The navigation bar along the left summarizes the number of labeled and unlabeled items and enables you to filter the item list by sentiment value.

[Screenshot: Text items page]

To assign sentiment values to unlabeled items, or to change existing values, click the check box next to each item you want to update, then select the sentiment value to apply from the Label drop-down list that appears at the top of the item list.

Listing datasets

A project can include numerous datasets. This section describes how to retrieve a list of the available datasets for a project.

Web UI

To see a list of the available datasets using the AutoML Natural Language Sentiment Analysis UI, click the Datasets link at the top of the left navigation menu.

To see the datasets for a different project, select the project from the drop-down list in the upper right of the title bar.

Command-line

In the command below, replace project-id with the ID for your project.

curl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/datasets

You should see output similar to the following:

{
  "datasets": [
    {
      "name": "projects/000000000000/locations/us-central1/datasets/356587829854924648",
      "displayName": "test_dataset",
      "createTime": "2018-04-26T18:02:59.825060Z",
      "textSentimentDatasetMetadata": {
        "sentimentMax": 4
      }
    },
    {
      "name": "projects/000000000000/locations/us-central1/datasets/3104518874390609379",
      "displayName": "test",
      "createTime": "2017-12-16T01:10:38.328280Z",
      "textSentimentDatasetMetadata": {
        "sentimentMax": 4
      }
    }
  ]
}
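
To narrow the results, you can append a filter expression as a query parameter (a minimal sketch, using the same filter syntax shown in the client-library samples below):

  curl \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type: application/json" \
    "https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/datasets?filter=textSentimentDatasetMetadata:*"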

Java

import com.google.cloud.automl.v1beta1.AutoMlClient;
import com.google.cloud.automl.v1beta1.Dataset;
import com.google.cloud.automl.v1beta1.ListDatasetsRequest;
import com.google.cloud.automl.v1beta1.LocationName;
import java.io.IOException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

class ListDatasets {

  // List all datasets for a given project based on the filter expression
  static void listDatasets(String projectId, String computeRegion, String filter)
      throws IOException {
    // String projectId = "YOUR_PROJECT_ID";
    // String computeRegion = "us-central1";
    // String filter = "YOUR_FILTER_EXPRESSION";

    // Instantiates a client
    try (AutoMlClient client = AutoMlClient.create()) {

      // A resource that represents Google Cloud Platform location.
      LocationName projectLocation = LocationName.of(projectId, computeRegion);

      // Build the List datasets request
      ListDatasetsRequest request =
          ListDatasetsRequest.newBuilder()
              .setParent(projectLocation.toString())
              .setFilter(filter)
              .build();

      // List all the datasets available in the region by applying filter.
      System.out.println("List of datasets:");
      for (Dataset dataset : client.listDatasets(request).iterateAll()) {
        // Display the dataset information.
        System.out.println(String.format("\nDataset name: %s", dataset.getName()));
        System.out.println(
            String.format(
                "Dataset Id: %s",
                dataset.getName().split("/")[dataset.getName().split("/").length - 1]));
        System.out.println(String.format("Dataset display name: %s", dataset.getDisplayName()));
        System.out.println("Text sentiment dataset metadata:");
        System.out.print(String.format("\t%s", dataset.getTextSentimentDatasetMetadata()));
        System.out.println(String.format("Dataset example count: %d", dataset.getExampleCount()));
        DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        String createTime =
            dateFormat.format(new java.util.Date(dataset.getCreateTime().getSeconds() * 1000));
        System.out.println(String.format("Dataset create time: %s", createTime));
      }
    }
  }
}

Node.js

const automl = require(`@google-cloud/automl`);
const util = require(`util`);
const client = new automl.v1beta1.AutoMlClient();

/**
 * Demonstrates using the AutoML client to list all datasets.
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = '[PROJECT_ID]' e.g., "my-gcloud-project";
// const computeRegion = '[REGION_NAME]' e.g., "us-central1";
// const filter = '[FILTER_EXPRESSIONS]'
// e.g., "textSentimentDatasetMetadata:*";

// A resource that represents Google Cloud Platform location.
const projectLocation = client.locationPath(projectId, computeRegion);

// List all the datasets available in the region by applying filter.
client
  .listDatasets({parent: projectLocation, filter: filter})
  .then(responses => {
    const datasets = responses[0];

    // Display the dataset information.
    console.log(`List of datasets:`);
    for (const dataset of datasets) {
      console.log(`\nDataset name: ${dataset.name}`);
      console.log(`Dataset Id: ${dataset.name.split(`/`).pop()}`);
      console.log(`Dataset display name: ${dataset.displayName}`);
      console.log(`Text sentiment dataset metadata:`);
      console.log(
        `\t${util.inspect(dataset.textSentimentDatasetMetadata, false, null)}`
      );
      console.log(`Dataset example count: ${dataset.exampleCount}`);
    }
  })
  .catch(err => {
    console.error(err);
  });

Python

    # TODO(developer): Uncomment and set the following variables
    # project_id = '[PROJECT_ID]'
    # compute_region = '[COMPUTE_REGION]'
    # filter_ = 'filter expression here'

    from datetime import datetime

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    # A resource that represents Google Cloud Platform location.
    project_location = client.location_path(project_id, compute_region)

    # List all the datasets available in the region by applying filter.
    response = client.list_datasets(project_location, filter_)

    print("List of datasets:")
    for dataset in response:
        # Display the dataset information.
        print("Dataset name: {}".format(dataset.name))
        print("Dataset id: {}".format(dataset.name.split("/")[-1]))
        print("Dataset display name: {}".format(dataset.display_name))
        print("Text sentiment dataset metadata:")
        print("\t{}".format(dataset.text_sentiment_dataset_metadata))
        print("Dataset example count: {}".format(dataset.example_count))
        print("Model create time: {}".format(datetime.fromtimestamp(dataset.create_time.seconds).strftime("%Y-%m-%dT%H:%M:%SZ")))

Deleting a dataset

Web UI

  1. In the AutoML Natural Language Sentiment Analysis UI, click the Datasets link at the top of the left navigation menu to display the list of available datasets.

  2. Click the three-dot menu at the far right of the row you want to delete and select Delete dataset.

  3. Click Delete in the confirmation dialog box.

Command-line

Replace dataset-name with the full name of your dataset, as returned in the response when you created the dataset. The full name has the format: projects/{project-id}/locations/us-central1/datasets/{dataset-id}

curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://automl.googleapis.com/v1beta1/dataset-name

You should see output similar to the following:

{
  "name": "projects/000000000000/locations/us-central1/operations/3512013641657611176",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2018-05-04T01:45:16.735340Z",
    "updateTime": "2018-05-04T01:45:16.735360Z",
    "cancellable": true
  }
}

Java

import com.google.cloud.automl.v1beta1.AutoMlClient;
import com.google.cloud.automl.v1beta1.DatasetName;
import com.google.protobuf.Empty;
import java.io.IOException;
import java.util.concurrent.ExecutionException;

class DeleteDataset {

  // Delete a dataset
  static void deleteDataset(String projectId, String computeRegion, String datasetId)
      throws InterruptedException, ExecutionException, IOException {
    // String projectId = "YOUR_PROJECT_ID";
    // String computeRegion = "us-central1";
    // String datasetId = "YOUR_DATASET_ID";

    // Instantiates a client
    try (AutoMlClient client = AutoMlClient.create()) {

      // Get the complete path of the dataset.
      DatasetName datasetFullId = DatasetName.of(projectId, computeRegion, datasetId);

      // Delete a dataset.
      Empty response = client.deleteDatasetAsync(datasetFullId).get();
      System.out.println(String.format("Dataset deleted. %s", response));

    }
  }
}

Node.js

const automl = require(`@google-cloud/automl`);
const client = new automl.v1beta1.AutoMlClient();

/**
 * Demonstrates using the AutoML client to delete a dataset.
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = '[PROJECT_ID]' e.g., "my-gcloud-project";
// const computeRegion = '[REGION_NAME]' e.g., "us-central1";
// const datasetId = '[DATASET_ID]' e.g., "TST8051890775971069952";

// Get the full path of the dataset.
const datasetFullId = client.datasetPath(projectId, computeRegion, datasetId);

// Delete a dataset.
client
  .deleteDataset({name: datasetFullId})
  .then(responses => {
    const operation = responses[0];
    return operation.promise();
  })
  .then(responses => {
    // The final result of the operation.
    const operationDetails = responses[2];

    // Get the Dataset delete details.
    console.log('Dataset delete details:');
    console.log(`\tOperation details:`);
    console.log(`\t\tName: ${operationDetails.name}`);
    console.log(`\t\tDone: ${operationDetails.done}`);
  })
  .catch(err => {
    console.error(err);
  });

Python

    # TODO(developer): Uncomment and set the following variables
    # project_id = '[PROJECT_ID]'
    # compute_region = '[COMPUTE_REGION]'
    # dataset_id = '[DATASET_ID]'

    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()

    # Get the full path of the dataset.
    dataset_full_id = client.dataset_path(
        project_id, compute_region, dataset_id
    )

    # Delete a dataset.
    response = client.delete_dataset(dataset_full_id)

    # synchronous check of operation status.
    print("Dataset deleted. {}".format(response.result()))
