Labeling Images with Semi-Supervised Learning

Submit a few labeled images (as few as 5 per class) and a large number of unlabeled images (hundreds of thousands, or more), and receive back predicted labels for all of the unlabeled images.

Intended use

Problem types: This Experiment adds predicted labels to unlabeled data by training image classification models on a few labeled images and a large number of unlabeled images. For each unlabeled image, the Experiment returns a label and its confidence.

Inputs and outputs:

  • Users provide: images, some of which contain an object of interest. A small subset of the images (as few as 5 per class) are labeled with class labels.
  • Users receive: a tuple of (class label, confidence) for each unlabeled image.
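
The exact delivery format of these predictions is not described here; purely as an illustration, the Python sketch below assumes the results arrive as a CSV file with one row per unlabeled image containing the image path, the predicted class label, and the confidence. The file name and column names are assumptions, not part of the Experiment's documented interface.

```python
import csv
from typing import List, NamedTuple


class Prediction(NamedTuple):
    """One (class label, confidence) prediction returned for an unlabeled image."""
    image: str
    label: str
    confidence: float


def load_predictions(path: str) -> List[Prediction]:
    # Assumed layout: columns named image, label, confidence — one row per image.
    with open(path, newline="") as f:
        return [
            Prediction(row["image"], row["label"], float(row["confidence"]))
            for row in csv.DictReader(f)
        ]


predictions = load_predictions("predictions.csv")  # hypothetical file name
for p in predictions[:5]:
    print(f"{p.image}: {p.label} (confidence {p.confidence:.2f})")
```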

What data do I need?

Data and label types: The Labeling Images with Semi-Supervised Learning Experiment is likely to be effective with a wide range of image types, including real-world objects, abstract objects, fine-grained categories, and so on. It is likely to be most effective when the images are object-centered. The Experiment supports single-label prediction only; it does not support images that contain multiple objects.

Specifications:

  • The number of labeled images must be at least 5 per class.
  • The number of unlabeled images should ideally exceed 1,000; the more unlabeled images, the better.
  • Each image may be assigned to only one class.
  • The data distributions of the labeled and unlabeled sets may differ.
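
As a quick sanity check against these specifications, the sketch below counts labeled images per class and unlabeled images. It assumes a hypothetical local layout in which labeled images live in per-class folders (labeled/<class>/...) and unlabeled images sit in a single flat folder (unlabeled/...); the layout itself is an assumption for illustration, not a requirement of the Experiment.

```python
from pathlib import Path

LABELED_DIR = Path("labeled")      # assumed layout: labeled/<class>/<image files>
UNLABELED_DIR = Path("unlabeled")  # assumed layout: unlabeled/<image files>

# Specification: at least 5 labeled images per class.
for class_dir in sorted(p for p in LABELED_DIR.iterdir() if p.is_dir()):
    n = sum(1 for f in class_dir.iterdir() if f.is_file())
    status = "ok" if n >= 5 else "needs more (minimum 5 per class)"
    print(f"class '{class_dir.name}': {n} labeled images -> {status}")

# Recommendation: more than 1,000 unlabeled images; the more, the better.
n_unlabeled = sum(1 for f in UNLABELED_DIR.iterdir() if f.is_file())
note = "" if n_unlabeled > 1000 else " (below the recommended 1,000)"
print(f"unlabeled images: {n_unlabeled}{note}")
```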

What skills do I need?

As with all AI Workshop Experiments, successful users are likely to be savvy with core AI concepts and skills in order to both deploy the experiment technology and interact with our AI researchers and engineers.

In particular, users of this experiment should:

  • Be familiar with using Google Cloud Projects and Google Cloud Storage
  • Have a preliminary understanding of image classification in order to make use of the returned labeled data.
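
For the Cloud Storage part of that workflow, the sketch below uses the standard google-cloud-storage client to upload a local folder of images to a bucket in your Google Cloud Project. The bucket name, local folder, and destination prefix are placeholders, and the exact input layout the Experiment expects may differ.

```python
from pathlib import Path

from google.cloud import storage  # pip install google-cloud-storage

BUCKET_NAME = "your-experiment-bucket"     # placeholder bucket name
LOCAL_DIR = Path("labeled")                # hypothetical local folder to upload
DEST_PREFIX = "experiment-input/labeled"   # hypothetical destination prefix

client = storage.Client()  # uses your active Google Cloud Project credentials
bucket = client.bucket(BUCKET_NAME)

for path in LOCAL_DIR.rglob("*"):
    if path.is_file():
        blob = bucket.blob(f"{DEST_PREFIX}/{path.relative_to(LOCAL_DIR)}")
        blob.upload_from_filename(str(path))
        print(f"uploaded gs://{BUCKET_NAME}/{blob.name}")
```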