[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-19。"],[],[],null,["# Sample applications\n\nThis page lists a set of Vision API samples. Samples are organized by\nlanguage and mobile platform.\n\nProduct Search examples\n-----------------------\n\n### Search for matching products with Cloud Vision Product Search\n\nUsing Cloud Vision Product Search you can create a product set (catalog)\nwith corresponding reference images of select\n[product categories](/vision/product-search/docs/product-categories). You can\nthen use the service to\ntake a new image of a product and search for matching products in your\nproduct set. See the [official documentation](/vision/product-search/docs)\nand [tutorial](/vision/product-search/docs/tutorial) for more information.\n\nLanguage examples\n-----------------\n\n### Label tagging using Kubernetes\n\n*Awwvision* is a [Kubernetes](https://github.com/kubernetes/kubernetes/) and\n[Cloud Vision API](https://cloud.google.com/vision/) sample that uses the\nVision API to classify (label) images from Reddit's\n[/r/aww](https://reddit.com/r/aww) subreddit, and display the labeled results\nin a web application.\n\n[Documentation and Python code](https://github.com/GoogleCloudPlatform/cloud-vision/tree/master/python/awwvision)\n\nMobile platform examples\n------------------------\n\n### Vision and more with ML Kit\n\nThese sample apps show how you can easily use the Cloud Vision label detection,\nlandmark detection, and text recognition APIs from your mobile apps with\n[ML Kit](https://developers.google.com/ml-kit). ML Kit also\nprovides APIs to perform face detection, barcode scanning, inference using\ncustom ML models, and more, all on the device, without requiring a network call.\n\n[Android code samples](https://github.com/googlesamples/mlkit/tree/master/android)\n\n[iOS code samples](https://github.com/googlesamples/mlkit/tree/master/ios)\n\n### Image detection using Android device photos\n\nThis simple single-activity sample shows you how to make a call to the\nVision API with an image picked from your device's gallery.\n\n[Documentation](https://github.com/GoogleCloudPlatform/cloud-vision/blob/master/android/README.md)\n\n[Android code](https://github.com/GoogleCloudPlatform/cloud-vision/tree/master/android/CloudVision)\n\n### Image detection using iOS device photos\n\nThe Swift and Objective-C versions of this app use the Vision API\nto run label and face detection on an image from the device's photo library. The\nresulting labels and face metadata from the API response are displayed in the UI.\n\nCheck out the Swift or Objective-C READMEs for specific getting started\ninstructions.\n\n[Documentation (Objective-C)](https://github.com/GoogleCloudPlatform/cloud-vision/blob/master/ios/Objective-C/README.md)\n\n[Documentation (Swift)](https://github.com/GoogleCloudPlatform/cloud-vision/blob/master/ios/Swift/README.md)\n\n[iOS Sample code](https://github.com/GoogleCloudPlatform/cloud-vision/tree/master/ios/)"]]