Serverless eventing with Cloud Run on Anthos, Knative Eventing, and Apache Kafka
Contributed by Google employees.
Cloud Run is a Google Cloud offering built on the Knative Serving APIs, which bring serverless practices to Kubernetes, allowing developers to focus on code while operators focus on the infrastructure.
Developers can containerize their applications and deploy them to the cloud without worrying about configuring networking, machine types, and so on. By default, Cloud Run listens to HTTP and HTTPS traffic. In a world of event streaming, we need to concern ourselves with other types of event sources, too.
For this demonstration, we use Confluent Cloud and AlphaVantage. Both offer a free-tier solution, so using them for this demonstration won't incur additional charges. We recommend deleting your cluster after you have finished with this tutorial, to prevent incurring charges.
Before you begin
Run the setup script
The kafka-cr-eventing/scripts folder contains the setup.sh script for this demonstration.
We recommend that you run this script in Cloud Shell, rather than running it on your local machine.
The setup script installs Go 1.13.1, the Google Cloud SDK, and the Confluent Cloud CLI if they aren't already installed. It then enables some Google Cloud APIs, creates a GKE cluster, prepares that cluster for Cloud Run, and deploys an app that collects currency exchange information.
We encourage you to read through the script for the details of what it does.
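If you want a quick overview before running it, you can list the infrastructure commands that the script contains, for example (assuming that you are in the kafka-cr-eventing/scripts folder; the pattern below is just illustrative):

```shell
# Skim the infrastructure commands in setup.sh without running it.
# Adjust the pattern to whatever you want to inspect.
if [ -f setup.sh ]; then
  grep -nE 'gcloud|kubectl|ccloud' setup.sh
fi
```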
To execute the script, run the following commands (replacing
[your_AlphaVantage_API_key] with the API key value):
export CLUSTER_NAME="kafka-events" # You can change this to whatever you want.
export AV_KEY="[your_AlphaVantage_API_key]"
chmod +x setup.sh
./setup.sh
Wait a couple of minutes for all of the steps in the script to finish.
Use the simple streaming event serverless application
After the script has finished, open a tab in your terminal and run this command:
ccloud kafka topic consume cloudevents
Note: You may need to log in to Confluent Cloud first with the ccloud login command.
Leave this command running; it shows events as they are written to your Kafka topic.
Run this command:
kubectl get pods
You should see something like this:
currency-app-l6g6g-deployment-5d679fcf5c-mnvjh    2/2   Running   0   62s
event-display-hbrjj-deployment-79f85796d9-n4ftd   2/2   Running   0   5s
kafka-source-9mmvw-78cf98d4c4-g4pmm               1/1   Running   0   10s
Check the logs for the event-display pod:
$ kubectl logs event-display-hbrjj-deployment-79f85796d9-n4ftd -c user-container
☁️ cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 0.2
  type: dev.knative.kafka.event
  source: /apis/v1/namespaces/default/kafkasources/kafka-source#cloudevents
  id: partition:1/offset:11
  time: 2019-11-04T21:11:55.111Z
  contenttype: application/json
Extensions,
  key:
Data,
  "MTA4LjU5MDAwMDAw"
☁️ cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 0.2
  type: dev.knative.kafka.event
  source: /apis/v1/namespaces/default/kafkasources/kafka-source#cloudevents
  id: partition:3/offset:7
  time: 2019-11-04T21:12:02.879Z
  contenttype: application/json
Extensions,
  key:
Data,
  "MTA4LjU5MDAwMDAw"
In the Data field, you should see something like MTA4LjU5MDAwMDAw. Yours may look a little different.
The event data is base64-encoded, so let's decode it:
$ echo `echo MTA4LjU5MDAwMDAw | base64 --decode`
108.59000000
This is your exchange rate. Look at the ccloud tab in your terminal to check that it matches the consumed message.
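To convince yourself about the encoding, here's a quick round trip in the shell, using an example rate value:

```shell
# Encode an example exchange rate, then decode it back.
rate="108.59000000"
encoded=$(printf '%s' "$rate" | base64)
echo "$encoded"                               # MTA4LjU5MDAwMDAw
printf '%s' "$encoded" | base64 --decode      # 108.59000000
echo
```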
Congratulations, you have now created a simple streaming event serverless application.
Now, it's time to clean up what you built using the cleanup script.
Running this script deletes whatever cluster is assigned to the Bash variable $CLUSTER_NAME, so be sure to assign only the cluster that you created for this demonstration.
export CLUSTER_NAME="YOUR CLUSTER"
export CONFLUENT_ID=$(ccloud kafka cluster list | grep "*" | cut -c4-14)
cd kafka-cr-eventing/scripts
chmod +x cleanup.sh
./cleanup.sh
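As an extra safeguard (not part of the provided script), you could check that CLUSTER_NAME is non-empty before invoking cleanup.sh, so an unset variable never triggers an unintended deletion:

```shell
# Illustrative guard: refuse to clean up if CLUSTER_NAME is unset or empty.
if [ -z "${CLUSTER_NAME:-}" ]; then
  echo "CLUSTER_NAME is not set; refusing to run cleanup." >&2
else
  echo "Cleaning up cluster: ${CLUSTER_NAME}"
  # ./cleanup.sh   # uncomment to actually run the cleanup
fi
```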
You are now good to go!