Receive events using Cloud Audit Logs (gcloud CLI)
This quickstart shows you how to set up a Google Kubernetes Engine (GKE) service as a destination to receive events from Cloud Storage using Eventarc.
In this quickstart, you will:
- Create a Cloud Storage bucket to be the event source.
- Create a GKE cluster.
- Set up a service account that the event forwarder component uses to pull events from Pub/Sub and forward them to the target.
- Initialize GKE destinations in Eventarc.
- Deploy a GKE service that receives events.
- Create an Eventarc trigger that sends events from Cloud Storage to the GKE service.
- Upload a file to the Cloud Storage bucket to generate an event and view the event in the GKE pod logs.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
- Install and initialize the Google Cloud CLI. If prompted to configure a default compute region, enter n.
- Update gcloud components:
gcloud components update
- Enable the Google Cloud, Cloud Build, Resource Manager, Google Kubernetes Engine, Container Registry, and Eventarc APIs:
gcloud services enable cloudapis.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable eventarc.googleapis.com
- Set the configuration variables used in this quickstart:
gcloud config set project PROJECT_ID
gcloud config set run/cluster events-cluster
gcloud config set run/cluster_location us-central1
gcloud config set run/platform gke
gcloud config set eventarc/location us-central1
where PROJECT_ID is your Google Cloud project ID.
- Optional: You can check the configuration settings using the Google Cloud CLI by typing:
gcloud config list
The output should be similar to the following:
[eventarc]
location = us-central1
[run]
cluster = events-cluster
cluster_location = us-central1
platform = gke
- Enable the Cloud Audit Logs Admin Read, Data Read, and Data Write log types for Cloud Storage:
- Read your project's IAM policy and store it in a file:
gcloud projects get-iam-policy PROJECT_ID > /tmp/policy.yaml
- Edit your policy in /tmp/policy.yaml, adding or changing only the Data Access audit logs configuration:
auditConfigs:
- auditLogConfigs:
  - logType: ADMIN_READ
  - logType: DATA_WRITE
  - logType: DATA_READ
  service: storage.googleapis.com
bindings:
- members:
  - user:EMAIL_ADDRESS
  role: roles/owner
etag: BwW_bHKTV5U=
version: 1
Replace EMAIL_ADDRESS with your email address.
- Write your new IAM policy:
gcloud projects set-iam-policy PROJECT_ID /tmp/policy.yaml
If the preceding command reports a conflict with another change, then repeat these steps, starting with reading the project's IAM policy. You can optionally verify the configuration using the check shown after this list.
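Optionally, you can confirm that the APIs are enabled and that the Data Access audit log configuration was applied. These commands are a suggested check rather than part of the original quickstart; replace PROJECT_ID with your project ID:
gcloud services list --enabled | grep -E "eventarc|container"
gcloud projects get-iam-policy PROJECT_ID --format="yaml(auditConfigs)"
The first command should list the Eventarc, GKE, and Container Registry services; the second should show the ADMIN_READ, DATA_WRITE, and DATA_READ log types for storage.googleapis.com.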
Create a Cloud Storage bucket
This quickstart uses Cloud Storage as the event source. To create a storage bucket:
gsutil mb -l us-central1 gs://events-quickstart-$(gcloud config get-value project)/
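Optionally, you can confirm that the bucket exists; this check is not part of the original steps:
gsutil ls -b gs://events-quickstart-$(gcloud config get-value project)/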
After the event source is created, you can deploy the event receiver service on GKE.
Create a GKE cluster
Create a GKE cluster with the HttpLoadBalancing add-on. Enable Workload Identity to access Google Cloud services from applications running within GKE.
PROJECT_ID=$(gcloud config get-value project)

gcloud beta container clusters create events-cluster \
    --addons=HttpLoadBalancing \
    --machine-type=n1-standard-4 \
    --enable-autoscaling --min-nodes=2 --max-nodes=10 \
    --no-issue-client-certificate --num-nodes=2 \
    --logging=SYSTEM,WORKLOAD \
    --monitoring=SYSTEM \
    --scopes=cloud-platform,logging-write,monitoring-write,pubsub \
    --zone us-central1 \
    --release-channel=rapid \
    --workload-pool=$PROJECT_ID.svc.id.goog
Wait for the cluster creation to complete. You can ignore the warnings during the creation process. Once the cluster is created, the output should be similar to the following:
Creating cluster events-cluster...done.
Created [https://container.googleapis.com/v1beta1/projects/my-project/zones/us-central1/clusters/events-cluster].
where my-project is your Google Cloud project ID. This creates a GKE cluster named events-cluster in my-project.
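The kubectl commands later in this quickstart assume that your kubeconfig points at this cluster. If it does not, you can fetch credentials and check node status; this is an assumed convenience step rather than part of the original quickstart, and it reuses the zone value from the creation command:
gcloud container clusters get-credentials events-cluster --zone us-central1

kubectl get nodes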
Set up a Google service account
Set up a user-provided service account and grant it specific roles so that the event forwarder component can pull events from Pub/Sub and forward them to the target.
Create a service account called TRIGGER_GSA that is used to create triggers:
TRIGGER_GSA=eventarc-gke-triggers
gcloud iam service-accounts create $TRIGGER_GSA
Grant the pubsub.subscriber, monitoring.metricWriter, and eventarc.eventReceiver roles to the service account:
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member "serviceAccount:$TRIGGER_GSA@$PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/pubsub.subscriber"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member "serviceAccount:$TRIGGER_GSA@$PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/monitoring.metricWriter"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member "serviceAccount:$TRIGGER_GSA@$PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/eventarc.eventReceiver"
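Optionally, you can list the roles bound to the trigger service account; this check is not part of the original quickstart and uses standard gcloud policy filtering:
gcloud projects get-iam-policy $PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:$TRIGGER_GSA@$PROJECT_ID.iam.gserviceaccount.com" \
    --format="table(bindings.role)"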
Enable GKE destinations
For each trigger that targets a GKE service, Eventarc creates an event forwarder component that pulls events from Pub/Sub and forwards them to the target. To create the component and manage resources in the GKE cluster, grant permissions to an Eventarc service account:
Enable GKE destinations for Eventarc:
gcloud eventarc gke-destinations init
At the prompt to bind the required roles, enter y.
The following roles are bound to the service account:
- compute.viewer
- container.developer
- iam.serviceAccountAdmin
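If you want to confirm these bindings, the following optional check (not part of the original steps) lists the roles granted to the Eventarc service agent, whose name typically has the form service-PROJECT_NUMBER@gcp-sa-eventarc.iam.gserviceaccount.com:
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")

gcloud projects get-iam-policy $PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:service-$PROJECT_NUMBER@gcp-sa-eventarc.iam.gserviceaccount.com" \
    --format="table(bindings.role)"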
Create a GKE service destination
Using a prebuilt image, gcr.io/cloudrun/hello, deploy a GKE service that will receive and log events:
Create a Kubernetes deployment:
SERVICE_NAME=hello-gke
kubectl create deployment $SERVICE_NAME \
    --image=gcr.io/cloudrun/hello
Expose it as a Kubernetes service:
kubectl expose deployment $SERVICE_NAME \
    --type LoadBalancer --port 80 --target-port 8080
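Provisioning the load balancer can take a minute or two. As an optional check that is not part of the original steps, you can confirm that the service has an external IP address:
kubectl get service $SERVICE_NAME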
Create a Cloud Audit Logs trigger
When you upload a file to Cloud Storage, the Eventarc trigger sends events from Cloud Storage to the hello-gke GKE service.
Create a Cloud Audit Logs trigger:
gcloud eventarc triggers create my-gke-trigger \
    --location=us-central1 \
    --destination-gke-cluster="events-cluster" \
    --destination-gke-location="us-central1" \
    --destination-gke-namespace="default" \
    --destination-gke-service="hello-gke" \
    --destination-gke-path="/" \
    --event-filters="type=google.cloud.audit.log.v1.written" \
    --event-filters="serviceName=storage.googleapis.com" \
    --event-filters="methodName=storage.objects.create" \
    --service-account=$TRIGGER_GSA@$PROJECT_ID.iam.gserviceaccount.com
This creates a trigger called my-gke-trigger.
Confirm the trigger was successfully created:
gcloud eventarc triggers list
The output should be similar to the following:
NAME            TYPE                               DESTINATION     ACTIVE
my-gke-trigger  google.cloud.audit.log.v1.written  GKE: hello-gke  Yes
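For more detail, you can optionally describe the trigger; this command is a suggested check rather than part of the original quickstart, and the location matches the one used when creating the trigger:
gcloud eventarc triggers describe my-gke-trigger --location=us-central1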
Generate and view an event
Upload a text file to Cloud Storage to generate an event and trigger the GKE service. You can then view the event's message in the pod logs.
Upload a text file to Cloud Storage:
echo "Hello World" > random.txt gsutil cp random.txt gs://events-quickstart-$(gcloud config get-value project)/random.txt
The upload generates an event and the GKE pod logs the event's message.
To view the event message:
- Find the pod ID:
kubectl get pods
The output should be similar to the following:
NAME                         READY   STATUS    RESTARTS   AGE
hello-gke-645964f578-2mjjt   1/1     Running   0          35s
where NAME is the name of the pod. Copy the NAME to use in the next step.
- Check the logs of the pod:
kubectl logs NAME
Replace NAME with the name of the pod you copied.
- Look for a log entry similar to:
2022/02/24 22:23:49 Hello from Cloud Run! The container started successfully and is listening for HTTP requests on $PORT {"severity":"INFO","eventType":"google.cloud.audit.log.v1.written","message":"Received event of type google.cloud.audit.log.v1.written. [...]}
Clean up
Even while the service is not receiving events, you might still be charged for storing the container image in Container Registry, for Eventarc resources, for Pub/Sub messages, and for the GKE cluster.
You can delete your image, delete your storage bucket, and delete the GKE cluster.
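As an illustrative sketch, and not part of the original steps, deleting the storage bucket and the GKE cluster created earlier could look like the following; adjust the names if you changed them:
gsutil rm -r gs://events-quickstart-$(gcloud config get-value project)/

gcloud container clusters delete events-cluster --zone us-central1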
To delete the Eventarc trigger:
gcloud eventarc triggers delete my-gke-trigger
Alternatively, you can delete your Google Cloud project to avoid incurring charges. Deleting your Cloud project stops billing for all the resources used within that project.
gcloud projects delete PROJECT_ID_OR_NUMBER
Replace PROJECT_ID_OR_NUMBER with the project ID or number.