This documentation is for AutoML Vision, which is different from Vertex AI. If you are using Vertex AI, see the Vertex AI documentation.

Base64 Encoding

You can provide image data to the AutoML Vision API by sending the image data as base64-encoded text.

Most development environments include a native `base64` utility that encodes a binary image into ASCII text data. To encode an image file:

Linux

    base64 input.jpg > output.txt

macOS

    base64 -i input.jpg -o output.txt

Windows (using a third-party Base64.exe utility; base64 encoding is not built into the Command Prompt)

    C:> Base64.exe -e input.jpg > output.txt

PowerShell

    [Convert]::ToBase64String([IO.File]::ReadAllBytes("./input.jpg")) > output.txt

You can then include this base64-encoded output directly in the JSON request:

{
  "payload": {
        "image": {
            "imageBytes": "/9j/4QAYRXhpZgAA...base64-encoded-image...9tAVx/zDQDlGxn//2Q=="
        }
  },
  "params": {
    string: string,
    ...
  }
}
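The request body above can be assembled programmatically. As a minimal sketch in Python (the placeholder bytes below stand in for real image data, and the `params` values are hypothetical):

```python
import base64
import json

# Raw image bytes; in practice, read these from a file opened in binary mode.
image_content = b"\xff\xd8\xff\xe0fake-jpeg-bytes"

# Build the request body shown above. b64encode returns bytes, so decode
# the result to a UTF-8 string before placing it in the JSON structure.
request_body = {
    "payload": {
        "image": {
            "imageBytes": base64.b64encode(image_content).decode("utf-8")
        }
    }
}

print(json.dumps(request_body, indent=2))
```

Decoding the `imageBytes` field on the receiving side recovers the original binary data exactly.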

Each programming language has its own way of base64 encoding image files:

Python

In Python, you can base64 encode image files as follows:

# Import the base64 encoding library.
import base64

# Pass a file object opened in binary mode ('rb') to an encoding function.
def encode_image(image):
  image_content = image.read()
  # b64encode returns bytes; call .decode('utf-8') on the result if you
  # need a string for a JSON request body.
  return base64.b64encode(image_content)
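A usage sketch of a function like the one above. An in-memory file object stands in for a real image here; with an actual file you would pass `open('input.jpg', 'rb')` instead:

```python
import base64
import io

def encode_image(image):
  # Read the binary contents and base64 encode them (returns bytes).
  image_content = image.read()
  return base64.b64encode(image_content)

# Placeholder bytes standing in for a real image file.
fake_image = io.BytesIO(b"\x89PNGfake-bytes")

encoded = encode_image(fake_image)
# Decode to str before embedding in a JSON request body.
image_bytes_field = encoded.decode("utf-8")
print(image_bytes_field)
```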

Node.js

In Node.js, you can base64 encode image files as follows:

// Read the file into memory; readFileSync returns a Buffer.
const fs = require('fs');
const imageFile = fs.readFileSync('/path/to/file');

// Base64 encode the Buffer's contents.
const encoded = imageFile.toString('base64');

Java

In Java, you can base64 encode image files as follows:

// Import the Base64 encoding library (Apache Commons Codec) and file utilities.
import org.apache.commons.codec.binary.Base64;
import java.nio.file.Files;
import java.nio.file.Paths;

// Read the image file's bytes and base64 encode them.
byte[] fileBytes = Files.readAllBytes(Paths.get("/path/to/file"));
byte[] imageData = Base64.encodeBase64(fileBytes);