Use Cloud Vision API to determine if image is safe
This tutorial demonstrates using Cloud Run, Cloud Vision API, and ImageMagick to detect and blur offensive images uploaded to a Cloud Storage bucket.
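The samples on this page are triggered by a Cloud Storage event that describes the uploaded object. As an illustrative sketch (hypothetical values; real event payloads carry additional metadata), the two fields the samples actually read look like this:

    # Hypothetical minimal event payload; real Cloud Storage events
    # include more metadata than the two keys the samples read.
    event = {"bucket": "my-input-bucket", "name": "uploads/photo.jpg"}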
Explore further
For detailed documentation that includes this code sample, see the following:

- Process images from Cloud Storage tutorial (/run/docs/tutorials/image-processing)
- Processing images asynchronously (/anthos/run/archive/docs/tutorials/image-processing)
Code sample
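All of the following samples assume Application Default Credentials are available at runtime. For local development, one common way to provide them (an assumption about your environment, not a step from this page) is to run `gcloud auth application-default login` with the gcloud CLI.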
Go

To authenticate to Cloud Run, set up Application Default Credentials. For more information, see "Set up authentication for a local development environment" (/docs/authentication/set-up-adc-local-dev-environment).

    // GCSEvent is the payload of a GCS event.
    type GCSEvent struct {
        Bucket string `json:"bucket"`
        Name   string `json:"name"`
    }

    // BlurOffensiveImages blurs offensive images uploaded to GCS.
    func BlurOffensiveImages(ctx context.Context, e GCSEvent) error {
        outputBucket := os.Getenv("BLURRED_BUCKET_NAME")
        if outputBucket == "" {
            return errors.New("BLURRED_BUCKET_NAME must be set")
        }

        img := vision.NewImageFromURI(fmt.Sprintf("gs://%s/%s", e.Bucket, e.Name))

        resp, err := visionClient.DetectSafeSearch(ctx, img, nil)
        if err != nil {
            return fmt.Errorf("AnnotateImage: %w", err)
        }

        // Blur only when SafeSearch is most confident (VERY_LIKELY).
        if resp.GetAdult() == visionpb.Likelihood_VERY_LIKELY ||
            resp.GetViolence() == visionpb.Likelihood_VERY_LIKELY {
            return blur(ctx, e.Bucket, outputBucket, e.Name)
        }
        log.Printf("The image %q was detected as OK.", e.Name)
        return nil
    }

Java

To authenticate to Cloud Run, set up Application Default Credentials. For more information, see "Set up authentication for a local development environment" (/docs/authentication/set-up-adc-local-dev-environment).

    // Blurs uploaded images that are flagged as Adult or Violence.
    public static void blurOffensiveImages(JsonObject data) {
      String fileName = data.get("name").getAsString();
      String bucketName = data.get("bucket").getAsString();
      BlobInfo blobInfo = BlobInfo.newBuilder(bucketName, fileName).build();
      // Construct URI to GCS bucket and file.
      String gcsPath = String.format("gs://%s/%s", bucketName, fileName);
      System.out.println(String.format("Analyzing %s", fileName));

      // Construct request.
      List<AnnotateImageRequest> requests = new ArrayList<>();
      ImageSource imgSource = ImageSource.newBuilder().setImageUri(gcsPath).build();
      Image img = Image.newBuilder().setSource(imgSource).build();
      Feature feature = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
      AnnotateImageRequest request =
          AnnotateImageRequest.newBuilder().addFeatures(feature).setImage(img).build();
      requests.add(request);

      // Send request to the Vision API.
      try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        for (AnnotateImageResponse res : responses) {
          if (res.hasError()) {
            System.out.println(String.format("Error: %s\n", res.getError().getMessage()));
            return;
          }
          // Get Safe Search Annotations.
          SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
          // A value of 5 corresponds to Likelihood.VERY_LIKELY.
          if (annotation.getAdultValue() == 5 || annotation.getViolenceValue() == 5) {
            System.out.println(String.format("Detected %s as inappropriate.", fileName));
            blur(blobInfo);
          } else {
            System.out.println(String.format("Detected %s as OK.", fileName));
          }
        }
      } catch (Exception e) {
        System.out.println(String.format("Error with Vision API: %s", e.getMessage()));
      }
    }

Node.js

To authenticate to Cloud Run, set up Application Default Credentials. For more information, see "Set up authentication for a local development environment" (/docs/authentication/set-up-adc-local-dev-environment).

    // Blurs uploaded images that are flagged as Adult or Violence.
    exports.blurOffensiveImages = async event => {
      // This event represents the triggering Cloud Storage object.
      const object = event;

      const file = storage.bucket(object.bucket).file(object.name);
      const filePath = `gs://${object.bucket}/${object.name}`;

      console.log(`Analyzing ${file.name}.`);

      try {
        const [result] = await client.safeSearchDetection(filePath);
        const detections = result.safeSearchAnnotation || {};

        if (
          // Levels are defined in https://cloud.google.com/vision/docs/reference/rest/v1/AnnotateImageResponse#likelihood
          detections.adult === 'VERY_LIKELY' ||
          detections.violence === 'VERY_LIKELY'
        ) {
          console.log(`Detected ${file.name} as inappropriate.`);
          return blurImage(file, BLURRED_BUCKET_NAME);
        } else {
          console.log(`Detected ${file.name} as OK.`);
        }
      } catch (err) {
        console.error(`Failed to analyze ${file.name}.`, err);
        throw err;
      }
    };

Python

To authenticate to Cloud Run, set up Application Default Credentials. For more information, see "Set up authentication for a local development environment" (/docs/authentication/set-up-adc-local-dev-environment).

    def blur_offensive_images(data):
        """Blurs uploaded images that are flagged as Adult or Violence.

        Args:
            data: Pub/Sub message data
        """
        file_data = data

        file_name = file_data["name"]
        bucket_name = file_data["bucket"]

        blob = storage_client.bucket(bucket_name).get_blob(file_name)
        blob_uri = f"gs://{bucket_name}/{file_name}"
        blob_source = vision.Image(source=vision.ImageSource(image_uri=blob_uri))

        # Ignore already-blurred files
        if file_name.startswith("blurred-"):
            print(f"The image {file_name} is already blurred.")
            return

        print(f"Analyzing {file_name}.")

        result = vision_client.safe_search_detection(image=blob_source)
        detected = result.safe_search_annotation

        # Process image; a value of 5 corresponds to Likelihood.VERY_LIKELY
        if detected.adult == 5 or detected.violence == 5:
            print(f"The image {file_name} was detected as inappropriate.")
            return __blur_image(blob)
        else:
            print(f"The image {file_name} was detected as OK.")
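All four samples delegate the actual blurring to a helper (`blur`, `blurImage`, `__blur_image`) that is not shown on this page. The following is a minimal Python sketch of what such a helper could look like, assuming ImageMagick's `convert` binary is installed in the container image and that `BLURRED_BUCKET_NAME` names the output bucket; it is illustrative, not the tutorial's actual implementation.

    # Minimal sketch of a blur helper; NOT the tutorial's actual code.
    # Assumptions: ImageMagick's `convert` binary is available, and
    # BLURRED_BUCKET_NAME is set to the output bucket's name.
    import os
    import subprocess
    import tempfile

    from google.cloud import storage

    storage_client = storage.Client()

    def __blur_image(blob):
        """Downloads a blob, blurs it with ImageMagick, and uploads the result."""
        output_bucket_name = os.environ["BLURRED_BUCKET_NAME"]
        base_name = os.path.basename(blob.name)

        with tempfile.TemporaryDirectory() as tmp_dir:
            local_path = os.path.join(tmp_dir, base_name)
            blob.download_to_filename(local_path)

            # Blur in place; the 0x8 sigma is an arbitrary illustrative choice.
            subprocess.run(["convert", local_path, "-blur", "0x8", local_path], check=True)

            # The "blurred-" prefix matches the guard in blur_offensive_images,
            # so events triggered by the output object are ignored.
            out_blob = storage_client.bucket(output_bucket_name).blob(f"blurred-{base_name}")
            out_blob.upload_from_filename(local_path)

        print(f"Blurred image uploaded to gs://{output_bucket_name}/blurred-{base_name}")

At deploy time the code above expects `BLURRED_BUCKET_NAME` in its environment; with Cloud Run that can be supplied with, for example, `gcloud run deploy --set-env-vars BLURRED_BUCKET_NAME=<bucket>` (again, an assumption about your deploy flow rather than a step from this page).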
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],[],[],[],null,["# Use Cloud Vision API to determine if image is safe\n\nThis tutorial demonstrates using Cloud Run, Cloud Vision API, and ImageMagick to detect and blur offensive images uploaded to a Cloud Storage bucket.\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Process images from Cloud Storage tutorial](/run/docs/tutorials/image-processing)\n- [Processing images asynchronously](/anthos/run/archive/docs/tutorials/image-processing)\n\nCode sample\n-----------\n\n### Go\n\n\nTo authenticate to Cloud Run, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n\n // GCSEvent is the payload of a GCS event.\n type GCSEvent struct {\n \tBucket string `json:\"bucket\"`\n \tName string `json:\"name\"`\n }\n\n // BlurOffensiveImages blurs offensive images uploaded to GCS.\n func BlurOffensiveImages(ctx context.Context, e GCSEvent) error {\n \toutputBucket := os.Getenv(\"BLURRED_BUCKET_NAME\")\n \tif outputBucket == \"\" {\n \t\treturn errors.New(\"BLURRED_BUCKET_NAME must be set\")\n \t}\n\n \timg := vision.NewImageFromURI(fmt.Sprintf(\"gs://%s/%s\", e.Bucket, e.Name))\n\n \tresp, err := visionClient.DetectSafeSearch(ctx, img, nil)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"AnnotateImage: %w\", err)\n \t}\n\n \tif resp.GetAdult() == visionpb.Likelihood_VERY_LIKELY ||\n \t\tresp.GetViolence() == visionpb.Likelihood_VERY_LIKELY {\n \t\treturn blur(ctx, e.Bucket, outputBucket, e.Name)\n \t}\n \tlog.Printf(\"The image %q was detected as OK.\", e.Name)\n \treturn nil\n }\n\n### Java\n\n\nTo authenticate to Cloud Run, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n // Blurs uploaded images that are flagged as Adult or Violence.\n public static void blurOffensiveImages(JsonObject data) {\n String fileName = data.get(\"name\").getAsString();\n String bucketName = data.get(\"bucket\").getAsString();\n BlobInfo blobInfo = BlobInfo.newBuilder(bucketName, fileName).build();\n // Construct URI to GCS bucket and file.\n String gcsPath = String.format(\"gs://%s/%s\", bucketName, fileName);\n System.out.println(String.format(\"Analyzing %s\", fileName));\n\n // Construct request.\n List\u003cAnnotateImageRequest\u003e requests = new ArrayList\u003c\u003e();\n ImageSource imgSource = ImageSource.newBuilder().setImageUri(gcsPath).build();\n Image img = Image.newBuilder().setSource(imgSource).build();\n Feature feature = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();\n AnnotateImageRequest request =\n AnnotateImageRequest.newBuilder().addFeatures(feature).setImage(img).build();\n requests.add(request);\n\n // Send request to the Vision API.\n try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {\n BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);\n List\u003cAnnotateImageResponse\u003e responses = response.getResponsesList();\n for 
(AnnotateImageResponse res : responses) {\n if (res.hasError()) {\n System.out.println(String.format(\"Error: %s\\n\", res.getError().getMessage()));\n return;\n }\n // Get Safe Search Annotations\n SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();\n if (annotation.getAdultValue() == 5 || annotation.getViolenceValue() == 5) {\n System.out.println(String.format(\"Detected %s as inappropriate.\", fileName));\n blur(blobInfo);\n } else {\n System.out.println(String.format(\"Detected %s as OK.\", fileName));\n }\n }\n } catch (Exception e) {\n System.out.println(String.format(\"Error with Vision API: %s\", e.getMessage()));\n }\n }\n\n### Node.js\n\n\nTo authenticate to Cloud Run, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n // Blurs uploaded images that are flagged as Adult or Violence.\n exports.blurOffensiveImages = async event =\u003e {\n // This event represents the triggering Cloud Storage object.\n const object = event;\n\n const file = storage.bucket(object.bucket).file(object.name);\n const filePath = `gs://${object.bucket}/${object.name}`;\n\n console.log(`Analyzing ${file.name}.`);\n\n try {\n const [result] = await client.safeSearchDetection(filePath);\n const detections = result.safeSearchAnnotation || {};\n\n if (\n // Levels are defined in https://cloud.google.com/vision/docs/reference/rest/v1/AnnotateImageResponse#likelihood\n detections.adult === 'VERY_LIKELY' ||\n detections.violence === 'VERY_LIKELY'\n ) {\n console.log(`Detected ${file.name} as inappropriate.`);\n return blurImage(file, BLURRED_BUCKET_NAME);\n } else {\n console.log(`Detected ${file.name} as OK.`);\n }\n } catch (err) {\n console.error(`Failed to analyze ${file.name}.`, err);\n throw err;\n }\n };\n\n### Python\n\n\nTo authenticate to Cloud Run, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n def blur_offensive_images(data):\n \"\"\"Blurs uploaded images that are flagged as Adult or Violence.\n\n Args:\n data: Pub/Sub message data\n \"\"\"\n file_data = data\n\n file_name = file_data[\"name\"]\n bucket_name = file_data[\"bucket\"]\n\n blob = storage_client.bucket(bucket_name).get_blob(file_name)\n blob_uri = f\"gs://{bucket_name}/{file_name}\"\n blob_source = vision.Image(source=vision.ImageSource(image_uri=blob_uri))\n\n # Ignore already-blurred files\n if file_name.startswith(\"blurred-\"):\n print(f\"The image {file_name} is already blurred.\")\n return\n\n print(f\"Analyzing {file_name}.\")\n\n result = vision_client.safe_search_detection(image=blob_source)\n detected = result.safe_search_annotation\n\n # Process image\n if detected.adult == 5 or detected.violence == 5:\n print(f\"The image {file_name} was detected as inappropriate.\")\n return __blur_image(blob)\n else:\n print(f\"The image {file_name} was detected as OK.\")\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=cloudrun)."]]