Tutorial: Debug event routing to Cloud Run

This tutorial shows you how to troubleshoot runtime errors that occur when you use Eventarc to route events from Cloud Storage, through Cloud Audit Logs, to an unauthenticated Cloud Run service.

Objectives

This tutorial shows you how to complete the following tasks:

  1. Create an Artifact Registry standard repository to store your container image.
  2. Create a Cloud Storage bucket to use as the event source.
  3. Build a container image, upload it, and deploy it to Cloud Run.
  4. Create an Eventarc trigger.
  5. Upload a file to the Cloud Storage bucket.
  6. Troubleshoot and fix the runtime errors.

Costs

In this document, you use billable components of Google Cloud.

To generate a cost estimate based on your projected usage, use the Pricing Calculator.

New Google Cloud users might be eligible for a free trial.

Before you begin

Security constraints defined by your organization might prevent you from completing the following steps. For troubleshooting information, see "Develop applications in a constrained Google Cloud environment".

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. Install the Google Cloud CLI.

  3. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  4. To initialize the gcloud CLI, run the following command:

    gcloud init
  5. Create or select a Google Cloud project.

    • Create a Google Cloud project:

      gcloud projects create PROJECT_ID

      Replace PROJECT_ID with a name for the Google Cloud project you are creating.

    • Select the Google Cloud project that you created:

      gcloud config set project PROJECT_ID

      Replace PROJECT_ID with your Google Cloud project name.

  6. Verify that billing is enabled for your Google Cloud project.

  7. Enable the Artifact Registry, Cloud Build, Cloud Logging, Cloud Run, Cloud Storage, Eventarc, and Pub/Sub APIs:

    gcloud services enable artifactregistry.googleapis.com cloudbuild.googleapis.com eventarc.googleapis.com logging.googleapis.com pubsub.googleapis.com run.googleapis.com storage.googleapis.com
  8. If you created the project, you are granted the basic Owner role (roles/owner). By default, this Identity and Access Management (IAM) role includes the permissions that are needed for full access to most Google Cloud resources, and you can skip this step.

    If you are not the project creator, the required permissions must be granted on the project to the appropriate principal. For example, a principal can be a Google Account (for end users) or a service account (for applications and compute workloads). For more information, see the "Roles and permissions" page for your event destination.

    Note that by default, Cloud Build permissions include permissions to upload and download Artifact Registry artifacts.

    Required permissions

    To get the permissions that you need to complete this tutorial, ask your administrator to grant you the required IAM roles on your project.

    For more information about granting roles, see "Manage access to projects, folders, and organizations".

    You might also be able to get the required permissions through custom roles or other predefined roles.

  9. For Cloud Storage, enable audit logging for the ADMIN_READ, DATA_WRITE, and DATA_READ data access types.

    1. Read the Identity and Access Management (IAM) policy associated with your Google Cloud project, folder, or organization, and store it in a temporary file:
      gcloud projects get-iam-policy PROJECT_ID > /tmp/policy.yaml
    2. In a text editor, open /tmp/policy.yaml, and add or change the audit log configuration in the auditConfigs section:

      auditConfigs:
      - auditLogConfigs:
        - logType: ADMIN_READ
        - logType: DATA_WRITE
        - logType: DATA_READ
        service: storage.googleapis.com
      bindings:
      - members:
      [...]
      etag: BwW_bHKTV5U=
      version: 1
    3. Write the new IAM policy:
      gcloud projects set-iam-policy PROJECT_ID /tmp/policy.yaml

      If the preceding command reports a conflict with another change, repeat these steps, starting with reading the IAM policy. For more information, see "Configure Data Access audit logs with the API".

  10. Grant the eventarc.eventReceiver role to the Compute Engine service account:

    export PROJECT_NUMBER="$(gcloud projects describe $(gcloud config get-value project) --format='value(projectNumber)')"
    
    gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
        --member=serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
        --role='roles/eventarc.eventReceiver'

  11. If you enabled the Pub/Sub service account on or before April 8, 2021, grant the iam.serviceAccountTokenCreator role to the Pub/Sub service account:

    gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
        --member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com" \
        --role='roles/iam.serviceAccountTokenCreator'

  12. Set the default values used in this tutorial:
    export REGION=us-central1
    gcloud config set run/region ${REGION}
    gcloud config set run/platform managed
    gcloud config set eventarc/location ${REGION}
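
    Optionally, you can confirm the defaults you just set by listing the active gcloud CLI configuration; this quick check is an addition and not part of the original steps:

      gcloud config list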

Create an Artifact Registry standard repository

    Create an Artifact Registry standard repository to store your container image:

    gcloud artifacts repositories create REPOSITORY \
        --repository-format=docker \
        --location=$REGION

    Replace REPOSITORY with a unique name for the repository.
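
    If you want to confirm that the repository was created before you continue, you can describe it; this verification is an addition and not part of the original tutorial:

    gcloud artifacts repositories describe REPOSITORY \
        --location=$REGION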

Create Cloud Storage buckets

    Create a Cloud Storage bucket in each of two regions as the event source for the Cloud Run service:

    1. Create a bucket in us-east1:

      export BUCKET1="troubleshoot-bucket1-PROJECT_ID"
      gcloud storage buckets create gs://${BUCKET1} --location=us-east1
    2. Create a bucket in us-west1:

      export BUCKET2="troubleshoot-bucket2-PROJECT_ID"
      gcloud storage buckets create gs://${BUCKET2} --location=us-west1
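
    Before moving on, you can optionally list your buckets to confirm that both were created; this check is an addition to the tutorial:

      gcloud storage ls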

    After you create the event sources, deploy the event receiver service on Cloud Run.

Deploy the event receiver

    Deploy a Cloud Run service that receives and logs events.

    1. Retrieve the code sample by cloning the GitHub repository:

      Go

      git clone https://github.com/GoogleCloudPlatform/golang-samples.git
      cd golang-samples/eventarc/audit_storage
      

      Java

      git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git
      cd java-docs-samples/eventarc/audit-storage
      

      .NET

      git clone https://github.com/GoogleCloudPlatform/dotnet-docs-samples.git
      cd dotnet-docs-samples/eventarc/audit-storage
      

      Node.js

      git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
      cd nodejs-docs-samples/eventarc/audit-storage
      

      Python

      git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
      cd python-docs-samples/eventarc/audit-storage
      
    2. Review the code for this tutorial, which includes the following:

      • An event handler that receives the incoming event as a CloudEvent within an HTTP POST request:

        Go

        
        // Processes CloudEvents containing Cloud Audit Logs for Cloud Storage
        package main
        
        import (
        	"fmt"
        	"log"
        	"net/http"
        	"os"
        
        	cloudevent "github.com/cloudevents/sdk-go/v2"
        )
        
        // HelloEventsStorage receives and processes a Cloud Audit Log event with Cloud Storage data.
        func HelloEventsStorage(w http.ResponseWriter, r *http.Request) {
        	if r.Method != http.MethodPost {
        		http.Error(w, "Expected HTTP POST request with CloudEvent payload", http.StatusMethodNotAllowed)
        		return
        	}
        
        	event, err := cloudevent.NewEventFromHTTPRequest(r)
        	if err != nil {
        		log.Printf("cloudevent.NewEventFromHTTPRequest: %v", err)
        		http.Error(w, "Failed to create CloudEvent from request.", http.StatusBadRequest)
        		return
        	}
        	s := fmt.Sprintf("Detected change in Cloud Storage bucket: %s", event.Subject())
        	fmt.Fprintln(w, s)
        }
        

        Java

        import io.cloudevents.CloudEvent;
        import io.cloudevents.rw.CloudEventRWException;
        import io.cloudevents.spring.http.CloudEventHttpUtils;
        import org.springframework.http.HttpHeaders;
        import org.springframework.http.HttpStatus;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.RequestBody;
        import org.springframework.web.bind.annotation.RequestHeader;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;
        import org.springframework.web.bind.annotation.RestController;
        
        @RestController
        public class EventController {
        
          @RequestMapping(value = "/", method = RequestMethod.POST, consumes = "application/json")
          public ResponseEntity<String> receiveMessage(
              @RequestBody String body, @RequestHeader HttpHeaders headers) {
            CloudEvent event;
            try {
              event =
                  CloudEventHttpUtils.fromHttp(headers)
                      .withData(headers.getContentType().toString(), body.getBytes())
                      .build();
            } catch (CloudEventRWException e) {
              return new ResponseEntity<>(e.getMessage(), HttpStatus.BAD_REQUEST);
            }
        
            String ceSubject = event.getSubject();
            String msg = "Detected change in Cloud Storage bucket: " + ceSubject;
            System.out.println(msg);
            return new ResponseEntity<>(msg, HttpStatus.OK);
          }
        }

        .NET

        
        using Microsoft.AspNetCore.Builder;
        using Microsoft.AspNetCore.Hosting;
        using Microsoft.AspNetCore.Http;
        using Microsoft.Extensions.DependencyInjection;
        using Microsoft.Extensions.Hosting;
        using Microsoft.Extensions.Logging;
        
        public class Startup
        {
            public void ConfigureServices(IServiceCollection services)
            {
            }
        
            public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILogger<Startup> logger)
            {
                if (env.IsDevelopment())
                {
                    app.UseDeveloperExceptionPage();
                }
        
                logger.LogInformation("Service is starting...");
        
                app.UseRouting();
        
                app.UseEndpoints(endpoints =>
                {
                    endpoints.MapPost("/", async context =>
                    {
                        logger.LogInformation("Handling HTTP POST");
        
                        var ceSubject = context.Request.Headers["ce-subject"];
                        logger.LogInformation($"ce-subject: {ceSubject}");
        
                        if (string.IsNullOrEmpty(ceSubject))
                        {
                            context.Response.StatusCode = 400;
                            await context.Response.WriteAsync("Bad Request: expected header Ce-Subject");
                            return;
                        }
        
                        await context.Response.WriteAsync($"GCS CloudEvent type: {ceSubject}");
                    });
                });
            }
        }
        

        Node.js

        const express = require('express');
        const app = express();
        
        app.use(express.json());
        app.post('/', (req, res) => {
          if (!req.header('ce-subject')) {
            return res
              .status(400)
              .send('Bad Request: missing required header: ce-subject');
          }
        
          console.log(
            `Detected change in Cloud Storage bucket: ${req.header('ce-subject')}`
          );
          return res
            .status(200)
            .send(
              `Detected change in Cloud Storage bucket: ${req.header('ce-subject')}`
            );
        });
        
        module.exports = app;

        Python

        @app.route("/", methods=["POST"])
        def index():
            # Create a CloudEvent object from the incoming request
            event = from_http(request.headers, request.data)
            # Gets the GCS bucket name from the CloudEvent
            # Example: "storage.googleapis.com/projects/_/buckets/my-bucket"
            bucket = event.get("subject")
        
            print(f"Detected change in Cloud Storage bucket: {bucket}")
            return (f"Detected change in Cloud Storage bucket: {bucket}", 200)
        
        
      • A server that uses the event handler:

        Go

        
        func main() {
        	http.HandleFunc("/", HelloEventsStorage)
        	// Determine port for HTTP service.
        	port := os.Getenv("PORT")
        	if port == "" {
        		port = "8080"
        	}
        	// Start HTTP server.
        	log.Printf("Listening on port %s", port)
        	if err := http.ListenAndServe(":"+port, nil); err != nil {
        		log.Fatal(err)
        	}
        }
        

        Java

        
        import org.springframework.boot.SpringApplication;
        import org.springframework.boot.autoconfigure.SpringBootApplication;
        
        @SpringBootApplication
        public class Application {
          public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
          }
        }

        .NET

            public static void Main(string[] args)
            {
                CreateHostBuilder(args).Build().Run();
            }
            public static IHostBuilder CreateHostBuilder(string[] args)
            {
                var port = Environment.GetEnvironmentVariable("PORT") ?? "8080";
                var url = $"http://0.0.0.0:{port}";
        
                return Host.CreateDefaultBuilder(args)
                    .ConfigureWebHostDefaults(webBuilder =>
                    {
                        webBuilder.UseStartup<Startup>().UseUrls(url);
                    });
            }
        

        Node.js

        const app = require('./app.js');
        const PORT = parseInt(process.env.PORT) || 8080;
        
        app.listen(PORT, () =>
          console.log(`nodejs-events-storage listening on port ${PORT}`)
        );

        Python

        import os
        
        from cloudevents.http import from_http
        
        from flask import Flask, request
        
        app = Flask(__name__)
        if __name__ == "__main__":
            app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
      • A Dockerfile that defines the operating environment for the service. The contents of the Dockerfile vary by programming language:

        Go

        
        # Use the official Go image to create a binary.
        # This is based on Debian and sets the GOPATH to /go.
        # https://hub.docker.com/_/golang
        FROM golang:1.23-bookworm as builder
        
        # Create and change to the app directory.
        WORKDIR /app
        
        # Retrieve application dependencies.
        # This allows the container build to reuse cached dependencies.
        # Expecting to copy go.mod and if present go.sum.
        COPY go.* ./
        RUN go mod download
        
        # Copy local code to the container image.
        COPY . ./
        
        # Build the binary.
        RUN go build -v -o server
        
        # Use the official Debian slim image for a lean production container.
        # https://hub.docker.com/_/debian
        # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
        FROM debian:bookworm-slim
        RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
            ca-certificates && \
            rm -rf /var/lib/apt/lists/*
        
        # Copy the binary to the production image from the builder stage.
        COPY --from=builder /app/server /server
        
        # Run the web service on container startup.
        CMD ["/server"]
        

        Java

        
        # Use the official maven image to create a build artifact.
        # https://hub.docker.com/_/maven
        FROM maven:3-eclipse-temurin-17-alpine as builder
        
        # Copy local code to the container image.
        WORKDIR /app
        COPY pom.xml .
        COPY src ./src
        
        # Build a release artifact.
        RUN mvn package -DskipTests
        
        # Use Eclipse Temurin for base image.
        # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
        FROM eclipse-temurin:17.0.16_8-jre-alpine
        
        # Copy the jar to the production image from the builder stage.
        COPY --from=builder /app/target/audit-storage-*.jar /audit-storage.jar
        
        # Run the web service on container startup.
        CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/audit-storage.jar"]
        

        .NET

        
        # Use Microsoft's official build .NET image.
        # https://hub.docker.com/_/microsoft-dotnet-core-sdk/
        FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
        WORKDIR /app
        
        # Install production dependencies.
        # Copy csproj and restore as distinct layers.
        COPY *.csproj ./
        RUN dotnet restore
        
        # Copy local code to the container image.
        COPY . ./
        WORKDIR /app
        
        # Build a release artifact.
        RUN dotnet publish -c Release -o out
        
        
        # Use Microsoft's official runtime .NET image.
        # https://hub.docker.com/_/microsoft-dotnet-core-aspnet/
        FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
        WORKDIR /app
        COPY --from=build /app/out ./
        
        # Run the web service on container startup.
        ENTRYPOINT ["dotnet", "AuditStorage.dll"]

        Node.js

        
        # Use the official lightweight Node.js image.
        # https://hub.docker.com/_/node
        FROM node:20-slim
        # Create and change to the app directory.
        WORKDIR /usr/src/app
        
        # Copy application dependency manifests to the container image.
        # A wildcard is used to ensure both package.json AND package-lock.json are copied.
        # Copying this separately prevents re-running npm install on every code change.
        COPY package*.json ./
        
        # Install dependencies.
        # if you need a deterministic and repeatable build create a 
        # package-lock.json file and use npm ci:
        # RUN npm ci --omit=dev
        # if you need to include development dependencies during development
        # of your application, use:
        # RUN npm install --dev
        
        RUN npm install --omit=dev
        
        # Copy local code to the container image.
        COPY . .
        
        # Run the web service on container startup.
        CMD [ "npm", "start" ]
        

        Python

        
        # Use the official Python image.
        # https://hub.docker.com/_/python
        FROM python:3.11-slim
        
        # Allow statements and log messages to immediately appear in the Cloud Run logs
        ENV PYTHONUNBUFFERED True
        
        # Copy application dependency manifests to the container image.
        # Copying this separately prevents re-running pip install on every code change.
        COPY requirements.txt ./
        
        # Install production dependencies.
        RUN pip install -r requirements.txt
        
        # Copy local code to the container image.
        ENV APP_HOME /app
        WORKDIR $APP_HOME
        COPY . ./
        
        # Run the web service on container startup. 
        # Use gunicorn webserver with one worker process and 8 threads.
        # For environments with multiple CPU cores, increase the number of workers
        # to be equal to the cores available.
        CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app

    3. Build the container image with Cloud Build and upload the image to Artifact Registry:

      export PROJECT_ID=$(gcloud config get-value project)
      export SERVICE_NAME=troubleshoot-service
      gcloud builds submit --tag $REGION-docker.pkg.dev/${PROJECT_ID}/REPOSITORY/${SERVICE_NAME}:v1
    4. Deploy the container image to Cloud Run:

      gcloud run deploy ${SERVICE_NAME} \
          --image $REGION-docker.pkg.dev/${PROJECT_ID}/REPOSITORY/${SERVICE_NAME}:v1 \
          --allow-unauthenticated

      When the deployment succeeds, the command line displays the service URL.
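
    As an optional smoke test that is not part of the original tutorial, you can send a hand-crafted CloudEvent to the service and confirm that the handler responds. Replace SERVICE_URL with the URL shown by the deploy command; the header values here are illustrative only:

      curl -X POST SERVICE_URL \
          -H "ce-id: 1234" \
          -H "ce-specversion: 1.0" \
          -H "ce-type: google.cloud.audit.log.v1.written" \
          -H "ce-source: //cloudaudit.googleapis.com/projects/test" \
          -H "ce-subject: storage.googleapis.com/projects/_/buckets/test-bucket" \
          -H "Content-Type: application/json" \
          -d '{}'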

Create a trigger

    After you deploy the Cloud Run service, set up a trigger to listen for events from Cloud Storage through audit logs.

    1. Create an Eventarc trigger to listen for Cloud Storage events that are routed through Cloud Audit Logs:

      gcloud eventarc triggers create troubleshoot-trigger \
          --destination-run-service=troubleshoot-service \
          --event-filters="type=google.cloud.audit.log.v1.written" \
          --event-filters="serviceName=storage.googleapis.com" \
          --event-filters="methodName=storage.objects.create" \
          --service-account=${PROJECT_NUMBER}-compute@developer.gserviceaccount.com
      

      This creates a trigger called troubleshoot-trigger.

    2. To confirm that troubleshoot-trigger was created, run:

      gcloud eventarc triggers list
      

      The output should be similar to the following:

      NAME: troubleshoot-trigger
      TYPE: google.cloud.audit.log.v1.written
      DESTINATION: Cloud Run service: troubleshoot-service
      ACTIVE: By 20:03:37
      LOCATION: us-central1
      

Generate and view an event

    Confirm that you have successfully deployed the service and that it can receive events from Cloud Storage.

    1. Create a file and upload it to the BUCKET1 storage bucket:

       echo "Hello World" > random.txt
       gcloud storage cp random.txt gs://${BUCKET1}/random.txt
      
    2. Monitor the logs to check whether the service received an event. To view the log entries, complete the following steps:

      1. Filter the log entries and return the output in JSON format:

        gcloud logging read "resource.labels.service_name=troubleshoot-service \
            AND textPayload:random.txt" \
            --format=json
      2. Look for a log entry similar to the following:

        "textPayload": "Detected change in Cloud Storage bucket: ..."
        

    Note that, at first, no log entries are returned. This indicates that there is a problem with the setup, which you must investigate.
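
    As an additional check that is not part of the original steps, you can broaden the log filter to see whether any requests reached the service at all, which helps distinguish "no events delivered" from "events delivered but logged differently":

      gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=troubleshoot-service" \
          --limit=10 \
          --freshness=1h \
          --format=json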

Investigate the issue

    Step through the possible reasons why the service is not receiving events.

    Initialization time

    Although the trigger is created immediately, it can take up to two minutes for a trigger to propagate and filter events. Run the following command to confirm that the trigger is active:

    gcloud eventarc triggers list
    

    The output shows the status of the trigger. In the following example, troubleshoot-trigger becomes active by 14:16:56:

    NAME                  TYPE                               DESTINATION_RUN_SERVICE  ACTIVE
    troubleshoot-trigger  google.cloud.audit.log.v1.written  troubleshoot-service     By 14:16:56
    

    After the trigger becomes active, upload a file to the storage bucket again. The events are written to the Cloud Run service logs. If the service doesn't receive events, the issue might be related to the audit logs.

    Audit logs

    In this tutorial, Cloud Storage events are routed by using Cloud Audit Logs and sent to Cloud Run. Confirm that audit logs are enabled for Cloud Storage.

    1. In the Google Cloud console, go to the Audit Logs page.

      Go to Audit Logs

    2. Select the Google Cloud Storage checkbox.
    3. Make sure that the Admin Read, Data Read, and Data Write log types are selected.

    After you enable Cloud Audit Logs, upload a file to the storage bucket again and check the logs. If the service still doesn't receive events, the issue might be related to the trigger location.
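
    If you prefer the command line, you can check the same audit log configuration with gcloud; this optional check is an addition to the console steps above:

      gcloud projects get-iam-policy PROJECT_ID --format="yaml(auditConfigs)"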

    Trigger location

    There can be multiple resources in different locations, and you must filter for events from sources that are in the same region as the Cloud Run destination. For more information, see "Locations supported by Eventarc" and "Understand Eventarc locations".

    In this tutorial, you deployed the Cloud Run service to us-central1. Because you set eventarc/location to us-central1, you also created the trigger in the same location.

    However, you created the two Cloud Storage buckets in the us-east1 and us-west1 locations. To receive events from those locations, you must create Eventarc triggers in those locations.
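
    To double-check which location each bucket is in, you can query it directly; this quick check is an addition to the tutorial:

      gcloud storage buckets describe gs://${BUCKET1} --format="value(location)"
      gcloud storage buckets describe gs://${BUCKET2} --format="value(location)"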

    To create an Eventarc trigger in us-east1:

    1. Confirm the location of the existing trigger:

      gcloud eventarc triggers describe troubleshoot-trigger
      
    2. Set the location and region to us-east1:

      gcloud config set eventarc/location us-east1
      gcloud config set run/region us-east1
      
    3. Deploy the event receiver again by building the container image and deploying it to Cloud Run.

    4. Create a new trigger in us-east1:

      gcloud eventarc triggers create troubleshoot-trigger-new \
        --destination-run-service=troubleshoot-service \
        --event-filters="type=google.cloud.audit.log.v1.written" \
        --event-filters="serviceName=storage.googleapis.com" \
        --event-filters="methodName=storage.objects.create" \
        --service-account=${PROJECT_NUMBER}-compute@developer.gserviceaccount.com
      
    5. Confirm that the trigger was created:

      gcloud eventarc triggers list
      

      The trigger can take up to two minutes to finish initializing before it starts routing events.

    6. To verify that the trigger is now deployed correctly, generate and view an event.
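
    As an optional check that is not part of the original steps, you can confirm that the new trigger was created in us-east1 rather than in the default location:

      gcloud eventarc triggers describe troubleshoot-trigger-new --location=us-east1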

Other issues you might encounter

    You might encounter other issues when using Eventarc.

    Event size

    The events you send must not exceed the event size limits.

    A trigger that previously delivered events has stopped working

    1. Verify that the source is generating events. Check Cloud Audit Logs and make sure that the monitored service is emitting logs. If logs are recorded but events are not delivered, contact support.

    2. Verify that a Pub/Sub topic with the same trigger name exists. Eventarc uses Pub/Sub as its transport layer and either uses an existing Pub/Sub topic or automatically creates and manages one. (A gcloud-based way to look up a trigger's transport topic is sketched after this list.)

      1. To list the triggers, run gcloud eventarc triggers list.
      2. To list the Pub/Sub topics, run the following command:

        gcloud pubsub topics list
        
      3. Verify that the Pub/Sub topic name includes the name of the trigger you created. For example:

        name: projects/PROJECT_ID/topics/eventarc-us-east1-troubleshoot-trigger-new-123

      If the Pub/Sub topic is missing, recreate the trigger for the specific provider, event type, and Cloud Run destination.

    3. Verify that the trigger is configured for the service.

      1. In the Google Cloud console, go to the Services page.

        Go to Services

      2. Click the name of the service to open its Service details page.

      3. Click the Triggers tab.

        The Eventarc trigger associated with the service should be listed.

    4. Use the Pub/Sub metric types to verify the health of the Pub/Sub topic and subscription.

      • You can monitor forwarded undeliverable messages by using the subscription/dead_letter_message_count metric. This metric shows the number of undeliverable messages that Pub/Sub forwards from a subscription.

        If messages are not published to the topic, check Cloud Audit Logs and make sure that the monitored service is emitting logs. If logs are recorded but events are not delivered, contact support.

      • You can monitor push subscriptions by using the subscription/push_request_count metric, grouping the metric by response_code and subscription_id.

        If push errors are reported, check the Cloud Run service logs. If the receiving endpoint returns a non-OK status code, it indicates that the Cloud Run code is not working as expected, and you should contact support.

      For more information, see "Create metric-threshold alerting policies".
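
    As a sketch of the Pub/Sub check in step 2, and assuming the trigger name and location used earlier in this tutorial, you can read the trigger's transport topic directly and then list its subscriptions; these commands are an addition to the tutorial:

      gcloud eventarc triggers describe troubleshoot-trigger-new \
          --location=us-east1 \
          --format="value(transport.pubsub.topic)"

      # Pass the topic name printed by the previous command:
      gcloud pubsub topics list-subscriptions TOPIC_NAME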

Clean up

    If you created a new project for this tutorial, delete the project. If you used an existing project and want to keep it without the changes added in this tutorial, delete the resources created for the tutorial.

    Delete the project

    The easiest way to eliminate billing is to delete the project that you created for the tutorial.

    To delete the project:

    1. In the Google Cloud console, go to the Manage resources page.

      Go to Manage resources

    2. In the project list, select the project that you want to delete, and then click Delete.
    3. In the dialog, type the project ID, and then click Shut down to delete the project.

    Delete tutorial resources

    1. Delete the Cloud Run service you deployed in this tutorial:

      gcloud run services delete SERVICE_NAME

      Where SERVICE_NAME is the service name you chose.

      You can also delete Cloud Run services from the Google Cloud console.

    2. Remove any gcloud CLI default configurations that you added during the tutorial setup.

      For example:

      gcloud config unset run/region

      gcloud config unset project

    3. Delete the other Google Cloud resources created in this tutorial:

      • Delete the Eventarc trigger:
        gcloud eventarc triggers delete TRIGGER_NAME
        
        Replace TRIGGER_NAME with the name of the trigger.
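
      • The tutorial also created Cloud Storage buckets and an Artifact Registry repository. A sketch of deleting them, assuming the names used earlier in this tutorial (these commands are an addition, not part of the original cleanup steps):

        gcloud storage rm --recursive gs://${BUCKET1} gs://${BUCKET2}

        gcloud artifacts repositories delete REPOSITORY --location=$REGION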

What's next