This page collects a set of Cloud Vision samples, and we will add more as they are created. Samples are organized by language and mobile platform, and together they aim to cover the Vision API's feature set.
Label Tagging Using Kubernetes
Making Text Within Images Searchable
This sample uses TEXT_DETECTION Vision API requests to build an inverted index from the stemmed words found in the images, and stores that index in a Redis database. The example uses the nltk (Natural Language Toolkit) library for finding stopwords and doing stemming. The resulting index can be queried to find images that match a given set of words, and to list the text that was found in each matching image. A rough sketch of this flow follows.
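The sketch below is not the sample's actual code; the Redis key scheme ("index:&lt;stem&gt;" and "text:&lt;path&gt;"), helper names, and file handling are assumptions. It shows the general shape of the approach: run TEXT_DETECTION with the google-cloud-vision Python client, drop stopwords, stem the remaining words with nltk, and store stem-to-image-path sets in Redis that can then be intersected at query time.

```python
# Hedged sketch of the indexing flow described above; key names and helpers
# are illustrative assumptions, not the sample's actual code.
import re

import redis
from google.cloud import vision
from nltk.corpus import stopwords          # requires nltk.download('stopwords')
from nltk.stem.porter import PorterStemmer

STOPWORDS = set(stopwords.words('english'))
stemmer = PorterStemmer()
r = redis.Redis()                          # assumes a local Redis instance
client = vision.ImageAnnotatorClient()

def index_image(path):
    """Run TEXT_DETECTION on one image and add its stems to the inverted index."""
    with open(path, 'rb') as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    if not response.text_annotations:
        return
    full_text = response.text_annotations[0].description  # first entry holds the full text
    r.set('text:' + path, full_text)                       # keep raw text for display
    for word in re.findall(r"[a-zA-Z']+", full_text.lower()):
        if word not in STOPWORDS:
            r.sadd('index:' + stemmer.stem(word), path)     # stem -> set of image paths

def lookup(query):
    """Return images whose detected text contains stems for all query words."""
    keys = ['index:' + stemmer.stem(w) for w in query.lower().split()
            if w not in STOPWORDS]
    return r.sinter(keys) if keys else set()
```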
Mobile Platform Examples
Cloud Vision and More With ML Kit for Firebase
These sample apps show how you can easily use the Cloud Vision label detection, landmark detection, and text recognition APIs from your mobile apps with ML Kit for Firebase. ML Kit also provides APIs to perform face detection, barcode scanning, inference using custom ML models, and more, all on the device, without requiring a network call.
Image Detection Using Android Device Photos
This simple single-activity sample shows you how to make a call to the Vision API with an image picked from your device's gallery.
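The sample itself is an Android app, but the annotate request it builds is just a JSON body sent to the Vision API's REST endpoint. The minimal Python sketch below shows what an equivalent request looks like; the API key placeholder and image path are assumptions.

```python
# Hedged sketch of the images:annotate REST call the Android sample performs;
# API key and file path are placeholders.
import base64
import json
import urllib.request

API_KEY = 'YOUR_API_KEY'  # assumption: supply a real Cloud Vision API key
ENDPOINT = 'https://vision.googleapis.com/v1/images:annotate?key=' + API_KEY

with open('photo.jpg', 'rb') as f:
    content = base64.b64encode(f.read()).decode('utf-8')

body = json.dumps({
    'requests': [{
        'image': {'content': content},
        'features': [{'type': 'LABEL_DETECTION', 'maxResults': 10}],
    }]
}).encode('utf-8')

request = urllib.request.Request(
    ENDPOINT, data=body, headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(request) as response:
    for label in json.load(response)['responses'][0].get('labelAnnotations', []):
        print(label['description'], label['score'])
```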
Image Detection Using iOS Device Photos
The Swift and Objective-C versions of this app use the Vision API to run label and face detection on an image from the device's photo library. The resulting labels and face metadata from the API response are displayed in the UI.
Check out the Swift or Objective-C READMEs for specific getting-started instructions.