ClearDATA: Running Forseti the serverless way
Ross Vandegrift, Principal Engineer, ClearDATA
[Editor’s note: ClearDATA is a cloud services provider in the heavily regulated healthcare and life sciences industry, and recently began using the open-source Forseti Security to automate security and compliance testing in its environment. Along the way, ClearDATA developed a novel way to deploy Forseti using containers and Cloud Pub/Sub, for a lightweight, serverless alternative to the usual way of deploying Forseti in a dedicated VM. In the following post, ClearDATA’s Ross Vandegrift tells us more.]
ClearDATA provides cloud services and DevOps expertise to help healthcare and life science companies realize the benefits of the cloud. Our mission is to make healthcare better by helping conservative, heavily regulated healthcare entities innovate safely in the cloud. To be successful, we enable healthcare and life science organizations to adopt modern cloud software technologies, like Forseti, that drive agility, while ensuring that sensitive data is protected and compliance obligations are met through automated tooling.
Automated security and compliance tools require a few familiar components: an inventory of resources, a rules engine to evaluate the compliance of resources, and a violation tracking/reporting mechanism. Building such a tool could be a cool project, but on its own, wouldn't help ClearDATA move the needle on healthcare. So when one of the engineers on our team came across the Forseti Security project, we were thrilled.
Forseti is a Google-led project that helps GCP customers ensure the continuous security and compliance of their workloads. It periodically builds an inventory of cloud resources and provides a framework for scanning those resources for compliance. It records violations in a database and can optionally send them out via email. Finally, for some resource types, Forseti can act on those violations to bring them back into compliance.
Even better, the tool is free software, released under the Apache license, with a friendly community we could learn from and share with. This means we'd be able to extend an existing tool instead of building our own from scratch.
Forseti's architecture does have one downside for ClearDATA: it's built as a traditional server application that runs in a dedicated VM. Most of our services are built on serverless design patterns. Our tools are often driven by events in customer projects, which in turn use Cloud Pub/Sub to trigger actions on a Kubernetes cluster. We're also excited by our preliminary work with Knative for scalable serverless web endpoints. So Forseti's monolithic VM-based design raised two concerns.
The first is operational complexity: managing and scaling traditional server-based applications can be difficult. Much has been written about the operational advantages of serverless architectures, so we won't go into detail on this issue here.
The second is controlling the scope of access: on GCP, a server-based application typically runs as a single service account. In order to inventory all of our customers' projects, Forseti's service account would require access to all of those projects. This was an unacceptable access requirement. Our policy is for no single credential to have simultaneous access to multiple customer projects. This is really a breach prevention strategy—no bug, no matter how minor, should ever have the necessary access to be able to copy data between customer projects. The easiest way to ensure this is to accept an absolute prohibition on any shared credential.
When it comes to working with such a constraint, serverless architectures provide a unique advantage over persistent VMs. A persistent VM-based application has a long-lived identity, and lends itself to thinking in terms of granting access to that application. This encourages reusing that single identity, violating our goal of strict separation between credentials that can access customer projects. In fact, our original Forseti proof of concept ran into this exact issue.
Serverless models require a conceptual shift, from thinking about the lifecycle of a server to the lifecycle of a single execution context, whether that is one event, request, or job. Here at ClearDATA, we’ve found that this shift forces developers to get comfortable with restricted credentials that only work in one execution context. Moreover, since typical execution contexts are shorter than VM lifetimes, this also helps enable the use of aggressively rotated keys—at ClearDATA, we generate service account keys that don't last longer than one hour.
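To make the rotation policy concrete, here is a minimal standard-library sketch of a one-hour TTL check on a key's issue time. The constant and helper name are our illustration, not part of Forseti or our production tooling, which issues and retires real service account keys through the IAM API:

```python
from datetime import datetime, timedelta, timezone

# ClearDATA's policy: no service account key outlives one hour.
KEY_TTL = timedelta(hours=1)

def key_expired(issued_at, now=None):
    """Return True if a key issued at `issued_at` is past its one-hour TTL."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at >= KEY_TTL
```

A worker holding such a key can call this check before each operation and refuse to proceed on a stale credential.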
The rest of this post will provide an overview of how we converted Forseti to run in a serverless fashion.
How we did it
Our deployment runs Forseti in a producer-consumer pattern. A cron job periodically pulls a list of projects that need to be inventoried and scanned from one of our internal management services. Each project results in a job that's published to a Cloud Pub/Sub topic. Meanwhile, we have a Kubernetes cluster with worker pods waiting for messages on a subscription to this topic. When a job arrives, the worker extracts the information about the target project and uses it to run Forseti.
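The pattern can be sketched locally with a standard-library queue standing in for the Pub/Sub topic; the job fields and function names here are illustrative, not our production code:

```python
import json
import queue
import threading

# Stand-in for the Cloud Pub/Sub topic; in production the producer
# publishes to a real topic and workers pull from a subscription.
jobs = queue.Queue()

def produce(project_ids):
    """Cron-driven producer: publish one job message per project."""
    for project_id in project_ids:
        jobs.put(json.dumps({"project_id": project_id}))

def consume(results):
    """Worker loop: pull a job and run the inventory (stubbed here)."""
    while True:
        try:
            msg = jobs.get_nowait()
        except queue.Empty:
            return
        job = json.loads(msg)
        results.append("inventoried " + job["project_id"])  # run Forseti here
        jobs.task_done()

results = []
produce(["customer-a", "customer-b"])
worker = threading.Thread(target=consume, args=(results,))
worker.start()
worker.join()
```

In the real deployment the producer and consumers are separate processes that only share the Pub/Sub topic, so workers can scale independently of the cron job.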
The main steps required to implement this are:
- Create your Pub/Sub topic and subscription.
- Set up Forseti. We recommend putting Forseti in its own dedicated project, though it's not strictly necessary.
Here’s how to do this from gcloud. You can also do this using Cloud Deployment Manager or Terraform. Note that you’ll need to substitute your own organization's info in these commands.
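With names of our own choosing for the topic and subscription (substitute your own project and naming), the gcloud commands look roughly like this:

```shell
# Create the topic the producer publishes inventory jobs to
gcloud pubsub topics create forseti-jobs --project=YOUR_FORSETI_PROJECT

# Create the pull subscription the workers consume from; a generous
# ack deadline gives an inventory run time to finish before redelivery
gcloud pubsub subscriptions create forseti-workers \
    --topic=forseti-jobs \
    --project=YOUR_FORSETI_PROJECT \
    --ack-deadline=600
```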
Next, you’ll need to prepare Forseti’s cloud resources. If you've already set up Forseti following the usual instructions you’re good to go. Either delete, disable, or ignore the cron jobs running on the compute instances. If you haven't already set up Forseti, these setup instructions are still your best bet, as they help you deploy the database. In particular, you'll get a Cloud SQL database and a service account for Forseti. Once you're more familiar with Forseti, you can redeploy these differently as you require.
Now, you’re ready to prepare a Forseti job producer. We use a Docker image that’s run as a Kubernetes cron job, though we’d probably use Cloud Scheduler if we started today. The container just produces the jobs, so it doesn’t need to have Forseti installed. You can find our producer here—it pulls a list of projects from an internal service, issues a credential for each job, and publishes that data to the Cloud Pub/Sub topic for each project that needs to be inventoried.
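As a sketch of what each published job might carry, the payload below bundles a project ID with that job's short-lived credential. The field names and the base64 encoding are our illustration; the producer linked above defines the actual schema:

```python
import base64
import json

def make_job_message(project_id, credential_json):
    """Build the Pub/Sub message body for one inventory job.

    Pub/Sub payloads are bytes, so the per-job credential (a service
    account key JSON string) is embedded as base64 inside a JSON envelope.
    """
    payload = {
        "project_id": project_id,
        "credential": base64.b64encode(credential_json.encode()).decode(),
    }
    return json.dumps(payload).encode("utf-8")

msg = make_job_message("customer-a", '{"type": "service_account"}')
```

Because every message carries its own restricted credential, no worker ever holds access to more than the one project it is currently inventorying.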
Next, you’ll need to prepare a container image with Forseti and the small shim to consume jobs from Pub/Sub. This container does require Forseti to be installed. The Forseti source code has a nice setup for building basic container images here. We build those images and use them as the base for our worker image, which can be found here.
In that repository, the app subdirectory has three files:
- server.yaml, which contains our Forseti server config. Currently, we use a single configuration for every project. In the future, we’ll replace this with a dynamically loaded config to support per-customer and per-project Forseti configurations.
- __init__.py, which just serves as a simple namespace config.
- worker.py, which is where the actual work happens.
worker.py uses the Pub/Sub client from the Google Cloud Client Library for Python. It works asynchronously: the code sets up a subscription and provides a callback, and the library invokes the callback for each message it pulls from the subscription. The callback function unpacks the job info, sets up the application default credentials, and calls into the Forseti code to trigger inventory collection. When it's done, it acknowledges the message so Pub/Sub knows it's been handled.
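The callback's shape can be sketched as follows. `FakeMessage` is a stand-in for the Pub/Sub client's message object, and `run_inventory` is a hypothetical stub for the credential setup and call into Forseti; see worker.py in the linked repo for the real thing:

```python
import json

class FakeMessage:
    """Minimal stand-in for the Pub/Sub message object (data + ack())."""
    def __init__(self, data):
        self.data = data
        self.acked = False

    def ack(self):
        self.acked = True

def run_inventory(project_id, credential):
    # In the real worker this writes the credential to disk, points the
    # application default credentials at it, and calls into Forseti's
    # inventory code for the target project.
    pass

def callback(message):
    """Invoked by the subscriber client for each pulled message."""
    job = json.loads(message.data.decode("utf-8"))
    run_inventory(job["project_id"], job["credential"])
    message.ack()  # only ack after the inventory run completes
```

Acknowledging only after the run completes means a crashed worker's job is redelivered to another pod rather than silently lost.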
Now, you need to create the Kubernetes objects required to deploy the producer and worker.
Example Kubernetes manifests can be found here and here. You’ll need to configure some service accounts; see README.md in the repo for notes on what needs to be filled in.
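For orientation, a worker Deployment looks roughly like the fragment below; every name and the image path are placeholders, and the linked manifests are authoritative:

```yaml
# Illustrative worker Deployment; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forseti-worker
spec:
  replicas: 2
  selector:
    matchLabels: {app: forseti-worker}
  template:
    metadata:
      labels: {app: forseti-worker}
    spec:
      containers:
      - name: worker
        image: gcr.io/YOUR_PROJECT/forseti-worker:latest
        env:
        - name: PUBSUB_SUBSCRIPTION
          value: forseti-workers
```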
And there you have it—a fully operational Forseti deployment, operating serverlessly from within a container! With a bit of planning and a simple framework to run Forseti, we were able to trigger a containerized version of the app running on Kubernetes, provide a short-lived key to a restricted service account, and successfully perform inventory operations. Compared to the typical persistent VM-based Forseti deployment, this should provide better security and improved operational characteristics.
Note that you can use this pattern to convert any traditional application's periodic processes into a serverless workflow. That said, Forseti does have a few properties that make it a good fit for serverless:
- Forseti starts up quickly compared to its run interval. This ensures that startup overhead isn't a significant portion of the running time. If your app needs several minutes before it’s ready to handle a request, this strategy may not be feasible.
- Forseti’s inventory collections are independent of one another. This means that there are no coordination concerns between parallel jobs, and no ordering problems to worry about. If your app cannot be so easily parallelized, this strategy may not be feasible.
- Forseti is free software, which means we have access to the source code and can modify or extend it. If your app is proprietary, there may be no way to learn how to make calls directly into the application.
This blog post has presented a pretty simple consumer framework that only supports a fixed number of inventory workers. There's lots of room for improvement: the ability to request other Forseti actions and autoscaling the number of consumer Pods immediately jump to mind. Nevertheless, engineering teams at ClearDATA and Google have begun discussions about how (and if!) Forseti should evolve towards a serverless design. In the meantime, if running Forseti (or another application, for that matter) serverlessly makes sense for your organization, I hope this blog provides signposts about how to go about it.