Optical Character Recognition (OCR) Tutorial

Learn how to perform optical character recognition (OCR) on Google Cloud Platform. This tutorial demonstrates how to upload image files to Google Cloud Storage, extract text from the images using the Google Cloud Vision API, translate the text using the Google Cloud Translation API, and save your translations back to Cloud Storage. Google Cloud Pub/Sub is used to queue various tasks and trigger the right Cloud Functions to carry them out.

Objectives

  • Write and deploy several Background Cloud Functions.
  • Upload images to Cloud Storage.
  • Extract, translate, and save text contained in uploaded images.

Costs

This tutorial uses billable components of Cloud Platform, including:

  • Google Cloud Functions
  • Google Cloud Pub/Sub
  • Google Cloud Storage
  • Google Cloud Translation API
  • Google Cloud Vision API

Use the Pricing Calculator to generate a cost estimate based on your projected usage.

New Cloud Platform users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the project selector page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Cloud Functions, Cloud Pub/Sub, Cloud Storage, Cloud Translation, and Cloud Vision APIs.

    Enable the APIs

  5. Update gcloud components:
    gcloud components update
  6. Prepare your development environment.

Visualizing the flow of data

The flow of data in the OCR tutorial application involves several steps:

  1. An image that contains text in any language is uploaded to Cloud Storage.
  2. A Cloud Function is triggered, which uses the Vision API to extract the text, and queues the text to be translated into the configured translation languages (by publishing a message to a Pub/Sub topic, which in turn triggers another function).
  3. For each queued translation, a Cloud Function is triggered which uses the Translation API to translate the text and queue it to be saved to Cloud Storage (again, by publishing a message to a Pub/Sub topic).
  4. For each translated text, a Cloud Function is triggered which saves the translated text to Cloud Storage.

It may help to visualize the steps:
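(The following Python-style sketch is added for illustration; the field names are the ones used in the sample code, while the bucket name, file name, and text values are placeholders.)

# A single image traced through the pipeline (illustration only).

# 1. Cloud Storage triggers the ocr-extract function with the object metadata:
gcs_event = {'bucket': 'YOUR_IMAGE_BUCKET_NAME', 'name': 'menu.jpg'}

# 2. ocr-extract runs the Vision API, detects the source language, and publishes
#    one message per target language to TRANSLATE_TOPIC (or straight to
#    RESULT_TOPIC when the text is already in that language):
translate_message = {
    'text': 'Extracted text from the image',
    'filename': 'menu.jpg',
    'lang': 'fr',        # target language
    'src_lang': 'en',    # detected source language (Python and Go samples)
}

# 3. ocr-translate calls the Translation API and publishes the result to RESULT_TOPIC:
result_message = {
    'text': "Texte extrait de l'image",
    'filename': 'menu.jpg',
    'lang': 'fr',
}

# 4. ocr-save writes the text to the results bucket, for example as menu.jpg_fr.txt.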

Preparing the application

  1. Create a Cloud Storage bucket to upload your images, where YOUR_IMAGE_BUCKET_NAME is a globally unique bucket name:

    gsutil mb gs://YOUR_IMAGE_BUCKET_NAME
    
  2. Create a Cloud Storage bucket to save the translations, where YOUR_TEXT_BUCKET_NAME is a globally unique bucket name:

    gsutil mb gs://YOUR_TEXT_BUCKET_NAME
    
  3. Clone the sample app repository to your local machine:

    Node.js

    git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git

    Alternatively, you can download the sample as a zip file and extract it.

    Python

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git

    Alternatively, you can download the sample as a zip file and extract it.

    Go

    git clone https://github.com/GoogleCloudPlatform/golang-samples.git

    Alternatively, you can download the sample as a zip file and extract it.

  4. Change to the directory that contains the Cloud Functions sample code:

    Node.js

    cd nodejs-docs-samples/functions/ocr/app/

    Python

    cd python-docs-samples/functions/ocr/app/

    Go

    cd golang-samples/functions/ocr/app/

  5. Configure the app:

    Node.js

    Using the config.default.json file as a template, create a config.json file in the app directory with the following contents:
    {
      "RESULT_TOPIC": "YOUR_RESULT_TOPIC_NAME",
      "RESULT_BUCKET": "YOUR_TEXT_BUCKET_NAME",
      "TRANSLATE_TOPIC": "YOUR_TRANSLATE_TOPIC_NAME",
      "TRANSLATE": true,
      "TO_LANG": ["en", "fr", "es", "ja", "ru"]
    }
    • Replace YOUR_RESULT_TOPIC_NAME with a Pub/Sub topic name to be used for saving results once the translation is complete.
    • Replace YOUR_TEXT_BUCKET_NAME with a bucket name used for saving results.
    • Replace YOUR_TRANSLATE_TOPIC_NAME with a Pub/Sub topic name to be used for translating results.

    Python

    Edit the config.json file in the app directory to have the following contents:
    {
      "RESULT_TOPIC": "YOUR_RESULT_TOPIC_NAME",
      "RESULT_BUCKET": "YOUR_TEXT_BUCKET_NAME",
      "TRANSLATE_TOPIC": "YOUR_TRANSLATE_TOPIC_NAME",
      "TRANSLATE": true,
      "TO_LANG": ["en", "fr", "es", "ja", "ru"]
    }
    • Replace YOUR_RESULT_TOPIC_NAME with a Pub/Sub topic name to be used for saving results once the translation is complete.
    • Replace YOUR_TEXT_BUCKET_NAME with a bucket name used for saving results.
    • Replace YOUR_TRANSLATE_TOPIC_NAME with a Pub/Sub topic name to be used for translating results.

    Go

    Edit the config.json file in the app directory to have the following contents:
    {
      "PROJECT_ID": "YOUR_GCP_PROJECT_ID",
      "RESULT_TOPIC": "YOUR_RESULT_TOPIC_NAME",
      "RESULT_BUCKET": "YOUR_TEXT_BUCKET_NAME",
      "TRANSLATE_TOPIC": "YOUR_TRANSLATE_TOPIC_NAME",
      "TO_LANG": ["en", "fr", "es", "ja", "ru"]
    }
    • Replace YOUR_GCP_PROJECT_ID with your GCP project ID; the Go sample uses it to create the Pub/Sub client.
    • Replace YOUR_RESULT_TOPIC_NAME with a Pub/Sub topic name to be used for saving results once the translation is complete.
    • Replace YOUR_TEXT_BUCKET_NAME with a bucket name used for saving results.
    • Replace YOUR_TRANSLATE_TOPIC_NAME with a Pub/Sub topic name to be used for translating results.
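
All three functions load config.json when they start, so a missing or malformed file causes them to fail. As an optional sanity check (a Python sketch added here, not part of the sample), you can verify that the file parses and contains the keys shared by all three configurations:

import json

# Optional sanity check for config.json (a sketch; not part of the sample code).
with open('config.json') as f:
    config = json.load(f)

for key in ('RESULT_TOPIC', 'RESULT_BUCKET', 'TRANSLATE_TOPIC', 'TO_LANG'):
    assert key in config, 'config.json is missing the {} key'.format(key)

print('config.json looks complete:', config)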

Understanding the code

Importing dependencies

The application must import several dependencies in order to communicate with Google Cloud Platform services:

Node.js

const config = require('./config.json');

// Get a reference to the Pub/Sub component
const {PubSub} = require('@google-cloud/pubsub');
const pubsub = new PubSub();
// Get a reference to the Cloud Storage component
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

// Get a reference to the Cloud Vision API component
const Vision = require('@google-cloud/vision');
const vision = new Vision.ImageAnnotatorClient();

// Get a reference to the Translate API component
const {Translate} = require('@google-cloud/translate');
const translate = new Translate();

const Buffer = require('safe-buffer').Buffer;

Python

import base64
import json
import os

from google.cloud import pubsub_v1
from google.cloud import storage
from google.cloud import translate
from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()
translate_client = translate.Client()
publisher = pubsub_v1.PublisherClient()
storage_client = storage.Client()

project_id = os.environ['GCP_PROJECT']

with open('config.json') as f:
    data = f.read()
config = json.loads(data)

Go


// Package ocr contains Go samples for creating OCR
// (Optical Character Recognition) Cloud functions.
package ocr

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"time"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/storage"
	"cloud.google.com/go/translate"
	vision "cloud.google.com/go/vision/apiv1"
	"golang.org/x/text/language"
)

type configuration struct {
	ProjectID      string   `json:"PROJECT_ID"`
	ResultTopic    string   `json:"RESULT_TOPIC"`
	ResultBucket   string   `json:"RESULT_BUCKET"`
	TranslateTopic string   `json:"TRANSLATE_TOPIC"`
	ToLang         []string `json:"TO_LANG"`
}

type ocrMessage struct {
	Text     string       `json:"text"`
	FileName string       `json:"fileName"`
	Lang     language.Tag `json:"lang"`
	SrcLang  language.Tag `json:"srcLang"`
}

// GCSEvent is the payload of a GCS event.
type GCSEvent struct {
	Bucket         string    `json:"bucket"`
	Name           string    `json:"name"`
	Metageneration string    `json:"metageneration"`
	ResourceState  string    `json:"resourceState"`
	TimeCreated    time.Time `json:"timeCreated"`
	Updated        time.Time `json:"updated"`
}

// PubSubMessage is the payload of a Pub/Sub event.
type PubSubMessage struct {
	Data []byte `json:"data"`
}

var (
	visionClient    *vision.ImageAnnotatorClient
	translateClient *translate.Client
	pubsubClient    *pubsub.Client
	storageClient   *storage.Client
	config          *configuration
)

func setup(ctx context.Context) error {
	if config == nil {
		cfgFile, err := os.Open("config.json")
		if err != nil {
			return fmt.Errorf("os.Open: %v", err)
		}

		d := json.NewDecoder(cfgFile)
		config = &configuration{}
		if err = d.Decode(config); err != nil {
			return fmt.Errorf("Decode: %v", err)
		}
	}

	var err error // Prevent shadowing clients with :=.

	if visionClient == nil {
		visionClient, err = vision.NewImageAnnotatorClient(ctx)
		if err != nil {
			return fmt.Errorf("vision.NewImageAnnotatorClient: %v", err)
		}
	}

	if translateClient == nil {
		translateClient, err = translate.NewClient(ctx)
		if err != nil {
			return fmt.Errorf("translate.NewClient: %v", err)
		}
	}

	if pubsubClient == nil {
		pubsubClient, err = pubsub.NewClient(ctx, config.ProjectID)
		if err != nil {
			return fmt.Errorf("translate.NewClient: %v", err)
		}
	}

	if storageClient == nil {
		storageClient, err = storage.NewClient(ctx)
		if err != nil {
			return fmt.Errorf("storage.NewClient: %v", err)
		}
	}
	return nil
}

Processing images

The following function reads an uploaded image file from Cloud Storage and calls a function to detect whether the image contains text:

Node.js

/**
 * This function is exported by index.js, and is executed when
 * a file is uploaded to the Cloud Storage bucket you created
 * for uploading images.
 *
 * @param {object} event.data (Node 6) A Google Cloud Storage File object.
 * @param {object} event (Node 8+) A Google Cloud Storage File object.
 */
exports.processImage = event => {
  const file = event.data || event;

  return Promise.resolve()
    .then(() => {
      if (file.resourceState === 'not_exists') {
        // This was a deletion event, we don't want to process this
        return;
      }

      if (!file.bucket) {
        throw new Error(
          'Bucket not provided. Make sure you have a "bucket" property in your request'
        );
      }
      if (!file.name) {
        throw new Error(
          'Filename not provided. Make sure you have a "name" property in your request'
        );
      }

      return detectText(file.bucket, file.name);
    })
    .then(() => {
      console.log(`File ${file.name} processed.`);
    });
};

Python

def process_image(file, context):
    """Cloud Function triggered by Cloud Storage when a file is changed.
    Args:
        file (dict): Metadata of the changed file, provided by the triggering
                                 Cloud Storage event.
        context (google.cloud.functions.Context): Metadata of triggering event.
    Returns:
        None; the output is written to stdout and Stackdriver Logging
    """
    bucket = validate_message(file, 'bucket')
    name = validate_message(file, 'name')

    detect_text(bucket, name)

    print('File {} processed.'.format(file['name']))

Go


package ocr

import (
	"context"
	"fmt"
	"log"
)

// ProcessImage is executed when a file is uploaded to the Cloud Storage bucket you
// created for uploading images. It runs detectText, which processes the image for text.
func ProcessImage(ctx context.Context, event GCSEvent) error {
	if err := setup(ctx); err != nil {
		return fmt.Errorf("ProcessImage: %v", err)
	}
	if event.Bucket == "" {
		return fmt.Errorf("empty file.Bucket")
	}
	if event.Name == "" {
		return fmt.Errorf("empty file.Name")
	}
	if err := detectText(ctx, event.Bucket, event.Name); err != nil {
		return fmt.Errorf("detectText: %v", err)
	}
	log.Printf("File %s processed.", event.Name)
	return nil
}
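
If you want to sanity-check the Python version before deploying it, you can call process_image directly with a dictionary that mimics the Cloud Storage event payload. This is a rough sketch only: it assumes the sample's entry file is main.py, that GOOGLE_APPLICATION_CREDENTIALS and GCP_PROJECT are set in your environment, and that the referenced object already exists in your image bucket.

import main  # the Python sample's entry file (assumed to be main.py)

fake_gcs_event = {
    'bucket': 'YOUR_IMAGE_BUCKET_NAME',  # placeholder bucket name
    'name': 'sign.png',                  # placeholder object name
}
main.process_image(fake_gcs_event, None)  # the context argument is not used by the sample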

The following function extracts text from the image using the Cloud Vision API and queues the text for translation:

Node.js

/**
 * Detects the text in an image using the Google Vision API.
 *
 * @param {string} bucketName Cloud Storage bucket name.
 * @param {string} filename Cloud Storage file name.
 * @returns {Promise}
 */
function detectText(bucketName, filename) {
  let text;

  console.log(`Looking for text in image ${filename}`);
  return vision
    .textDetection(`gs://${bucketName}/${filename}`)
    .then(([detections]) => {
      const annotation = detections.textAnnotations[0];
      text = annotation ? annotation.description : '';
      console.log(`Extracted text from image (${text.length} chars)`);
      return translate.detect(text);
    })
    .then(([detection]) => {
      if (Array.isArray(detection)) {
        detection = detection[0];
      }
      console.log(`Detected language "${detection.language}" for ${filename}`);

      // Submit a message to the bus for each language we're going to translate to
      const tasks = config.TO_LANG.map(lang => {
        let topicName = config.TRANSLATE_TOPIC;
        if (detection.language === lang) {
          topicName = config.RESULT_TOPIC;
        }
        const messageData = {
          text: text,
          filename: filename,
          lang: lang,
        };

        return publishResult(topicName, messageData);
      });

      return Promise.all(tasks);
    });
}

Python

def detect_text(bucket, filename):
    print('Looking for text in image {}'.format(filename))

    futures = []

    text_detection_response = vision_client.text_detection({
        'source': {'image_uri': 'gs://{}/{}'.format(bucket, filename)}
    })
    annotations = text_detection_response.text_annotations
    if len(annotations) > 0:
        text = annotations[0].description
    else:
        text = ''
    print('Extracted text {} from image ({} chars).'.format(text, len(text)))

    detect_language_response = translate_client.detect_language(text)
    src_lang = detect_language_response['language']
    print('Detected language {} for text {}.'.format(src_lang, text))

    # Submit a message to the bus for each target language
    for target_lang in config.get('TO_LANG', []):
        topic_name = config['TRANSLATE_TOPIC']
        if src_lang == target_lang or src_lang == 'und':
            topic_name = config['RESULT_TOPIC']
        message = {
            'text': text,
            'filename': filename,
            'lang': target_lang,
            'src_lang': src_lang
        }
        message_data = json.dumps(message).encode('utf-8')
        topic_path = publisher.topic_path(project_id, topic_name)
        future = publisher.publish(topic_path, data=message_data)
        futures.append(future)
    for future in futures:
        future.result()

Go


package ocr

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
	"golang.org/x/text/language"
	visionpb "google.golang.org/genproto/googleapis/cloud/vision/v1"
)

// detectText detects the text in an image using the Google Vision API.
func detectText(ctx context.Context, bucketName, fileName string) error {
	log.Printf("Looking for text in image %v", fileName)
	maxResults := 1
	image := &visionpb.Image{
		Source: &visionpb.ImageSource{
			GcsImageUri: fmt.Sprintf("gs://%s/%s", bucketName, fileName),
		},
	}
	annotations, err := visionClient.DetectTexts(ctx, image, &visionpb.ImageContext{}, maxResults)
	if err != nil {
		return fmt.Errorf("DetectTexts: %v", err)
	}
	text := ""
	if len(annotations) > 0 {
		text = annotations[0].Description
	}
	if len(annotations) == 0 || len(text) == 0 {
		log.Printf("No text detected in image %q. Returning early.", fileName)
		return nil
	}
	log.Printf("Extracted text %q from image (%d chars).", text, len(text))

	detectResponse, err := translateClient.DetectLanguage(ctx, []string{text})
	if err != nil {
		return fmt.Errorf("DetectLanguage: %v", err)
	}
	if len(detectResponse) == 0 || len(detectResponse[0]) == 0 {
		return fmt.Errorf("DetectLanguage gave empty response")
	}
	srcLang := detectResponse[0][0].Language.String()
	log.Printf("Detected language %q for text %q.", srcLang, text)

	// Submit a message to the bus for each target language
	for _, targetLang := range config.ToLang {
		topicName := config.TranslateTopic
		if srcLang == targetLang || srcLang == "und" { // detection returns "und" for undefined language
			topicName = config.ResultTopic
		}
		targetTag, err := language.Parse(targetLang)
		if err != nil {
			return fmt.Errorf("language.Parse: %v", err)
		}
		srcTag, err := language.Parse(srcLang)
		if err != nil {
			return fmt.Errorf("language.Parse: %v", err)
		}
		message, err := json.Marshal(ocrMessage{
			Text:     text,
			FileName: fileName,
			Lang:     targetTag,
			SrcLang:  srcTag,
		})
		if err != nil {
			return fmt.Errorf("json.Marshal: %v", err)
		}
		topic := pubsubClient.Topic(topicName)
		ok, err := topic.Exists(ctx)
		if err != nil {
			return fmt.Errorf("Exists: %v", err)
		}
		if !ok {
			topic, err = pubsubClient.CreateTopic(ctx, topicName)
			if err != nil {
				return fmt.Errorf("CreateTopic: %v", err)
			}
		}
		msg := &pubsub.Message{
			Data: []byte(message),
		}
		if _, err = topic.Publish(ctx, msg).Get(ctx); err != nil {
			return fmt.Errorf("Get: %v", err)
		}
	}
	return nil
}

Translating text

The following function translates the extracted text and queues the translated text to be saved back to Cloud Storage:

Node.js

/**
 * This function is exported by index.js, and is executed when
 * a message is published to the Cloud Pub/Sub topic specified
 * by the TRANSLATE_TOPIC value in the config.json file. The
 * function translates text using the Google Translate API.
 *
 * @param {object} event.data (Node 6) The Cloud Pub/Sub Message object.
 * @param {object} event (Node 8+) The Cloud Pub/Sub Message object.
 * @param {string} {messageObject}.data The "data" property of the Cloud Pub/Sub
 * Message. This property will be a base64-encoded string that you must decode.
 */
exports.translateText = event => {
  const pubsubData = event.data.data || event.data;
  const jsonStr = Buffer.from(pubsubData, 'base64').toString();
  const payload = JSON.parse(jsonStr);

  return Promise.resolve()
    .then(() => {
      if (!payload.text) {
        throw new Error(
          'Text not provided. Make sure you have a "text" property in your request'
        );
      }
      if (!payload.filename) {
        throw new Error(
          'Filename not provided. Make sure you have a "filename" property in your request'
        );
      }
      if (!payload.lang) {
        throw new Error(
          'Language not provided. Make sure you have a "lang" property in your request'
        );
      }

      console.log(`Translating text into ${payload.lang}`);
      return translate.translate(payload.text, payload.lang);
    })
    .then(([translation]) => {
      const messageData = {
        text: translation,
        filename: payload.filename,
        lang: payload.lang,
      };

      return publishResult(config.RESULT_TOPIC, messageData);
    })
    .then(() => {
      console.log(`Text translated to ${payload.lang}`);
    });
};

Python

def translate_text(event, context):
    if event.get('data'):
        message_data = base64.b64decode(event['data']).decode('utf-8')
        message = json.loads(message_data)
    else:
        raise ValueError('Data field is missing in the Pub/Sub message.')

    text = validate_message(message, 'text')
    filename = validate_message(message, 'filename')
    target_lang = validate_message(message, 'lang')
    src_lang = validate_message(message, 'src_lang')

    print('Translating text into {}.'.format(target_lang))
    translated_text = translate_client.translate(text,
                                                 target_language=target_lang,
                                                 source_language=src_lang)
    topic_name = config['RESULT_TOPIC']
    message = {
        'text': translated_text['translatedText'],
        'filename': filename,
        'lang': target_lang,
    }
    message_data = json.dumps(message).encode('utf-8')
    topic_path = publisher.topic_path(project_id, topic_name)
    future = publisher.publish(topic_path, data=message_data)
    future.result()

Go


package ocr

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/translate"
)

// TranslateText is executed when a message is published to the Cloud Pub/Sub topic specified
// by TRANSLATE_TOPIC in config.json, and translates the text using the Google Translate API.
func TranslateText(ctx context.Context, event PubSubMessage) error {
	if err := setup(ctx); err != nil {
		return fmt.Errorf("setup: %v", err)
	}
	if event.Data == nil {
		return fmt.Errorf("empty data")
	}
	var message ocrMessage
	if err := json.Unmarshal(event.Data, &message); err != nil {
		return fmt.Errorf("json.Unmarshal: %v", err)
	}

	log.Printf("Translating text into %s.", message.Lang.String())
	opts := translate.Options{
		Source: message.SrcLang,
	}
	translateResponse, err := translateClient.Translate(ctx, []string{message.Text}, message.Lang, &opts)
	if err != nil {
		return fmt.Errorf("Translate: %v", err)
	}
	if len(translateResponse) == 0 {
		return fmt.Errorf("Empty Translate response")
	}
	translatedText := translateResponse[0]

	messageData, err := json.Marshal(ocrMessage{
		Text:     translatedText.Text,
		FileName: message.FileName,
		Lang:     message.Lang,
		SrcLang:  message.SrcLang,
	})
	if err != nil {
		return fmt.Errorf("json.Marshal: %v", err)
	}

	topic := pubsubClient.Topic(config.ResultTopic)
	ok, err := topic.Exists(ctx)
	if err != nil {
		return fmt.Errorf("Exists: %v", err)
	}
	if !ok {
		topic, err = pubsubClient.CreateTopic(ctx, config.ResultTopic)
		if err != nil {
			return fmt.Errorf("CreateTopic: %v", err)
		}
	}
	msg := &pubsub.Message{
		Data: messageData,
	}
	if _, err = topic.Publish(ctx, msg).Get(ctx); err != nil {
		return fmt.Errorf("Get: %v", err)
	}
	log.Printf("Sent translation: %q", translatedText.Text)
	return nil
}
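
After the functions are deployed (described in Deploying the functions below), you can exercise the translation stage on its own by publishing a hand-built message to the translate topic. The Python sketch below is illustrative only; the project ID, topic name, and message values are placeholders, and the field names match the Python and Go samples.

import json

from google.cloud import pubsub_v1

# Publish a hand-built message to the translate topic (a sketch; replace the placeholders).
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('YOUR_PROJECT_ID', 'YOUR_TRANSLATE_TOPIC_NAME')

message = {
    'text': 'Hello, world',  # text to translate
    'filename': 'sign.png',  # used later to name the saved result
    'lang': 'fr',            # target language
    'src_lang': 'en',        # source language
}
future = publisher.publish(topic_path, data=json.dumps(message).encode('utf-8'))
print('Published message with ID {}'.format(future.result()))

If the pipeline is wired up correctly, the translated text is then saved to the bucket specified by RESULT_BUCKET; the Python and Go samples name the result object {filename}_{lang}.txt, here sign.png_fr.txt.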

Saving the translations

Finally, the following function receives the translated text and saves it back to Cloud Storage:

Node.js

/**
 * This function is exported by index.js, and is executed when
 * a message is published to the Cloud Pub/Sub topic specified
 * by the RESULT_TOPIC value in the config.json file. The
 * function saves the data packet to a file in GCS.
 *
 * @param {object} event.data (Node 6) The Cloud Pub/Sub Message object.
 * @param {object} event (Node 8+) The Cloud Pub/Sub Message object.
 * @param {string} {messageObject}.data The "data" property of the Cloud Pub/Sub
 * Message. This property will be a base64-encoded string that you must decode.
 */
exports.saveResult = event => {
  const pubsubData = event.data.data || event.data;
  const jsonStr = Buffer.from(pubsubData, 'base64').toString();
  const payload = JSON.parse(jsonStr);

  return Promise.resolve()
    .then(() => {
      if (!payload.text) {
        throw new Error(
          'Text not provided. Make sure you have a "text" property in your request'
        );
      }
      if (!payload.filename) {
        throw new Error(
          'Filename not provided. Make sure you have a "filename" property in your request'
        );
      }
      if (!payload.lang) {
        throw new Error(
          'Language not provided. Make sure you have a "lang" property in your request'
        );
      }

      console.log(`Received request to save file ${payload.filename}`);

      const bucketName = config.RESULT_BUCKET;
      const filename = renameImageForSave(payload.filename, payload.lang);
      const file = storage.bucket(bucketName).file(filename);

      console.log(`Saving result to ${filename} in bucket ${bucketName}`);

      return file.save(payload.text);
    })
    .then(() => {
      console.log(`File saved.`);
    });
};

Python

def save_result(event, context):
    if event.get('data'):
        message_data = base64.b64decode(event['data']).decode('utf-8')
        message = json.loads(message_data)
    else:
        raise ValueError('Data field is missing in the Pub/Sub message.')

    text = validate_message(message, 'text')
    filename = validate_message(message, 'filename')
    lang = validate_message(message, 'lang')

    print('Received request to save file {}.'.format(filename))

    bucket_name = config['RESULT_BUCKET']
    result_filename = '{}_{}.txt'.format(filename, lang)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(result_filename)

    print('Saving result to {} in bucket {}.'.format(result_filename,
                                                     bucket_name))

    blob.upload_from_string(text)

    print('File saved.')

Go


package ocr

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
)

// SaveResult is executed when a message is published to the Cloud Pub/Sub topic specified by
// RESULT_TOPIC in config.json file, and saves the data packet to a file in GCS.
func SaveResult(ctx context.Context, event PubSubMessage) error {
	if err := setup(ctx); err != nil {
		return fmt.Errorf("ProcessImage: %v", err)
	}
	var message ocrMessage
	if event.Data == nil {
		return fmt.Errorf("Empty data")
	}
	if err := json.Unmarshal(event.Data, &message); err != nil {
		return fmt.Errorf("json.Unmarshal: %v", err)
	}
	log.Printf("Received request to save file %q.", message.FileName)

	bucketName := config.ResultBucket
	resultFilename := fmt.Sprintf("%s_%s.txt", message.FileName, message.Lang)
	bucket := storageClient.Bucket(bucketName)

	log.Printf("Saving result to %q in bucket %q.", resultFilename, bucketName)

	w := bucket.Object(resultFilename).NewWriter(ctx)
	if _, err := fmt.Fprint(w, message.Text); err != nil {
		return fmt.Errorf("Fprint: %v", err)
	}
	if err := w.Close(); err != nil {
		return fmt.Errorf("Close: %v", err)
	}

	log.Printf("File saved.")
	return nil
}

Deploying the functions

This section describes how to deploy your functions.

  1. To deploy the image processing function with a Cloud Storage trigger, run the following command in the app directory:

    Node.js 8

    gcloud functions deploy ocr-extract --runtime nodejs8 --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point processImage

    Node.js 10 (Beta)

    gcloud functions deploy ocr-extract --runtime nodejs10 --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point processImage

    Node.js 6 (Deprecated)

    gcloud functions deploy ocr-extract --runtime nodejs6 --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point processImage

    Python

    gcloud functions deploy ocr-extract --runtime python37 --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point process_image

    Go

    gcloud functions deploy ocr-extract --runtime go111 --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point ProcessImage

    where YOUR_IMAGE_BUCKET_NAME is the name of the Cloud Storage bucket to which you will upload images.

  2. To deploy the text translation function with a Cloud Pub/Sub trigger, run the following command in the app directory:

    Node.js 8

    gcloud functions deploy ocr-translate --runtime nodejs8 --trigger-topic YOUR_TRANSLATE_TOPIC_NAME --entry-point translateText

    Node.js 10 (Beta)

    gcloud functions deploy ocr-translate --runtime nodejs10 --trigger-topic YOUR_TRANSLATE_TOPIC_NAME --entry-point translateText

    Node.js 6 (Deprecated)

    gcloud functions deploy ocr-translate --runtime nodejs6 --trigger-topic YOUR_TRANSLATE_TOPIC_NAME --entry-point translateText

    Python

    gcloud functions deploy ocr-translate --runtime python37 --trigger-topic YOUR_TRANSLATE_TOPIC_NAME --entry-point translate_text

    Go

    gcloud functions deploy ocr-translate --runtime go111 --trigger-topic YOUR_TRANSLATE_TOPIC_NAME --entry-point TranslateText

    where YOUR_TRANSLATE_TOPIC_NAME is the name of the Cloud Pub/Sub topic whose messages trigger the translation function.

  3. To deploy the function that saves results to Cloud Storage with a Cloud Pub/Sub trigger, run the following command in the app directory:

    Node.js 8

    gcloud functions deploy ocr-save --runtime nodejs8 --trigger-topic YOUR_RESULT_TOPIC_NAME --entry-point saveResult

    Node.js 10 (Beta)

    gcloud functions deploy ocr-save --runtime nodejs10 --trigger-topic YOUR_RESULT_TOPIC_NAME --entry-point saveResult

    Node.js 6 (Deprecated)

    gcloud functions deploy ocr-save --runtime nodejs6 --trigger-topic YOUR_RESULT_TOPIC_NAME --entry-point saveResult

    Python

    gcloud functions deploy ocr-save --runtime python37 --trigger-topic YOUR_RESULT_TOPIC_NAME --entry-point save_result

    Go

    gcloud functions deploy ocr-save --runtime go111 --trigger-topic YOUR_RESULT_TOPIC_NAME --entry-point SaveResult

    where YOUR_RESULT_TOPIC_NAME is the name of the Cloud Pub/Sub topic whose messages trigger the function that saves results.

Uploading an image

  1. Upload an image to your image Cloud Storage bucket:

    gsutil cp PATH_TO_IMAGE gs://YOUR_IMAGE_BUCKET_NAME
    

    where

    • PATH_TO_IMAGE is a path to an image file (that contains text) on your local system.
    • YOUR_IMAGE_BUCKET_NAME is the name of the bucket where you are uploading images.

    You can download one of the images from the sample project.

  2. Watch the logs to be sure the executions have completed:

    gcloud functions logs read --limit 100
    
  3. You can view the saved translations in the Cloud Storage bucket specified by the RESULT_BUCKET value in your configuration file.
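
    You can also list and print the saved translations with a short Python script that uses the same Cloud Storage client library as the samples (a sketch; replace the bucket name with your YOUR_TEXT_BUCKET_NAME value):

    from google.cloud import storage

    # Print every saved translation in the results bucket (a sketch).
    client = storage.Client()
    bucket = client.get_bucket('YOUR_TEXT_BUCKET_NAME')
    for blob in bucket.list_blobs():
        print('--- {} ---'.format(blob.name))
        print(blob.download_as_string().decode('utf-8'))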

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the GCP Console, go to the Manage resources page.

    Go to the Manage resources page

  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Deleting the Cloud Functions

Deleting Cloud Functions does not remove any resources stored in Cloud Storage.

To delete a Cloud Function, run the following command:

gcloud functions delete NAME_OF_FUNCTION

where NAME_OF_FUNCTION is the name of the function to delete.

You can also delete Cloud Functions from the Google Cloud Platform Console.
