DevOps on Google Cloud: tools to speed up software development velocity
Warren Strange
Engineering Director, ForgeRock
Editor’s note: Today we hear from ForgeRock, a multinational identity and access management software company with more than 1,100 enterprise customers, including a major public broadcaster. In total, customers use the ForgeRock Identity Platform to authenticate and log in over 45 million users daily, helping them manage identity, governance, and access across all platforms, including on-premises and multicloud environments.
Operating at that kind of scale isn’t easy. In this blog post, ForgeRock Engineering Director Warren Strange discusses the three things that help make their developers efficient and productive, and the Google Cloud tools they use along the way.
At ForgeRock, we were early adopters of Kubernetes, viewing it as a strategic platform. Running on Kubernetes allows us to drive multicloud support across Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). So no matter which cloud our customers are running on, we are able to seamlessly integrate our products into their environments.
Making it easier for ForgeRock's developers and operators to build, deploy, and manage applications has been crucial to our ability to continually provide high-quality solutions for our customers. We’re always looking for tools to improve productivity and keep our developers focused on coding instead of configuration. Google Cloud’s suite of DevOps tools has streamlined three specific practices that help keep our developers productive:
1. Make developers productive within IDEs
Developer productivity is core to the success of any organization, including ForgeRock. Since developers spend most of their time within their IDE of choice, our goal at ForgeRock has been to make it easier for our developers to write Kubernetes applications within the IDEs they know and love. Cloud Code helps us precisely with that: it makes the process of building, deploying, scaling, and managing Kubernetes infrastructure and applications a breeze.
In particular, working with the Kubernetes YAML syntax and schema takes time and a lot of trial and error to master. Thanks to YAML authoring support within Cloud Code, we at ForgeRock avoid much of the complicated, time-consuming work of hand-writing YAML files. With YAML authoring support, developers spend less time chasing down syntax and schema mistakes. Cloud Code’s inline snippets, completions, and schema validation, a.k.a. “linting,” further streamline working with YAML files.
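As an illustration, here is a minimal Kubernetes Deployment manifest of the kind Cloud Code helps author; the names and image are placeholders rather than ForgeRock configuration. Completions suggest valid fields as you type, and schema validation flags mistakes such as a misspelled field before the manifest ever reaches a cluster.

```yaml
# Illustrative Deployment manifest (all names are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # schema validation flags typos such as "replica" here
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:latest   # completions offer valid container fields
```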
The benefits of Cloud Code extend to local development as well. Iterating locally on Kubernetes applications often requires multiple manual steps, including building container images, updating Kubernetes manifests, and redeploying applications. Doing these steps over and over again can be a chore. Cloud Code uses Skaffold under the hood, which watches for changes and automatically rebuilds and redeploys, reducing repetitive development tasks.
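A minimal skaffold.yaml for this loop might look like the sketch below; the image name and manifest path are assumptions for illustration. With a file like this in place, running skaffold dev watches the source tree, rebuilds the image on every change, and reapplies the manifests to the development cluster.

```yaml
# Illustrative skaffold.yaml (image name and manifest path are placeholders).
apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
  - image: gcr.io/my-project/my-app   # rebuilt automatically whenever source files change
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml                      # reapplied to the cluster after each rebuild
```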
Finally, developing for Kubernetes usually involves jumping between the IDE, documentation, samples, and so on. Cloud Code reduces this context switching with Kubernetes code samples. With samples, we can get new developers up and running quickly. They spend less time learning about configuration and management of the application, and more time writing and evolving the code.
2. Drive end-to-end automation
To further improve developer productivity, we’ve focused on end-to-end automation: from writing code within IDEs, to automatically triggering CI/CD pipelines and running the code in production. In particular, Tekton, Cloud Build, Container Registry, and GKE have been critical to ForgeRock as we streamline the flow of code, feedback, and remediation through the build and deployment processes. The process looks something like this:
We begin by developing Kubernetes manifests and Dockerfiles using Cloud Code. Then we use Skaffold to build containers locally, while Cloud Build handles continuous integration (CI). The Cloud Build GitHub app allows us to automate builds and tests as part of our GitHub workflow. Cloud Build is differentiated from other continuous integration tools because it is fully serverless: it scales up and down in response to load, with no need for us to pre-provision servers or pay in advance for additional capacity. We pay for the exact resources we use.
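A sketch of what such a Cloud Build configuration might look like is below; the image name and test command are illustrative assumptions rather than our actual pipeline, and the build is assumed to be triggered by the Cloud Build GitHub app on each push.

```yaml
# Illustrative cloudbuild.yaml run by a Cloud Build trigger on every push.
steps:
# Build the container image from the repository's Dockerfile.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
# Run tests inside the freshly built image
# (assumes the image includes a shell and test tooling; the command is a placeholder).
- name: 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
  entrypoint: 'sh'
  args: ['-c', 'make test']
# Images listed here are pushed to Container Registry when the build succeeds.
images:
- 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
```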
Once the image is built by Cloud Build, it is stored, managed, and secured in Google’s Container Registry. Just like Cloud Build, Container Registry is serverless, so we only pay for what we use. Additionally, Container Registry’s automatic vulnerability scanning checks every new image we upload for known vulnerabilities.
Next, a Tekton pipeline is triggered, which deploys the Docker images stored in Container Registry, along with the Kubernetes manifests, to a running GKE cluster. Together with Cloud Build, Tekton is a critical part of our CI/CD process at ForgeRock. Most importantly, since Tekton provides standardized, Kubernetes-native primitives, we can create continuous delivery workflows very quickly.
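A minimal Tekton Task in that spirit is sketched below; the Task name, parameter, deployment name, and kubectl image are assumptions for illustration, not our production pipeline. A Pipeline then chains Tasks like this together via taskRef, and a trigger supplies the image built in the previous stage.

```yaml
# Illustrative Tekton Task that rolls a newly built image out to the cluster it runs in.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-to-gke
spec:
  params:
  - name: image                            # e.g. gcr.io/my-project/my-app:abc123
    type: string
  steps:
  - name: kubectl-set-image
    image: gcr.io/cloud-builders/kubectl   # any image with kubectl on the PATH would do
    script: |
      kubectl set image deployment/my-app my-app=$(params.image)
```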
After deployment, Tekton triggers a functional test suite to ensure that the applications we deploy perform as expected. The test results are posted to our team Slack channel, so all developers have instant insight into the state of each cluster. From there, we can deliver the finished product our customers requested.
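As a sketch, that post-deployment stage could be expressed as another Tekton Task like the one below; the test image, test command, and Slack webhook are hypothetical placeholders. In practice the notification step would more likely live in a Pipeline finally block so that failed runs are reported too.

```yaml
# Illustrative Tekton Task: run the functional test suite, then notify Slack.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: functional-tests
spec:
  params:
  - name: slack-webhook                   # hypothetical incoming-webhook URL (keep it in a Secret)
    type: string
  steps:
  - name: run-tests
    image: gcr.io/my-project/test-runner  # placeholder image containing the test suite
    script: |
      make functional-tests               # placeholder test command
  - name: notify-slack
    image: curlimages/curl                # small public image that ships with curl
    script: |
      curl -X POST -H 'Content-type: application/json' \
        --data '{"text":"Functional tests finished for my-app"}' \
        "$(params.slack-webhook)"
```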
3. Leverage multicloud patterns and practices
The industry has seen a shift towards multicloud. Organizations have adopted multicloud strategies to minimize vendor lock-in, take advantage of best-in-class solutions, improve cost-efficiencies, and increase flexibility through choice.
At ForgeRock, we’re big proponents of multicloud. Part of that comes from the fact that our identity and access management product works across Google Cloud, AWS, and Azure. Developing products using open-source technologies such as Kubernetes has been particularly helpful in driving this interoperability.
Tekton has been another critical project that has allowed us to prevent vendor lock-in. Thanks to Tekton, our continuous delivery pipelines can deploy across any Kubernetes cluster. Most importantly, since Tekton pipelines run on Kubernetes, they are decoupled from any one vendor's CI/CD runtime. Like Tekton and Kubernetes, both Cloud Build and Container Registry are based on open technologies. Community-contributed builders and official builder images allow us to connect to a variety of tools as part of the build process. And finally, with support for open technologies like Google Cloud buildpacks within Cloud Build, we can build container images without even writing a Dockerfile.
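For example, a Cloud Build configuration that builds with buildpacks could look like the sketch below; the pack builder step and image names are assumptions for illustration, with the pack CLI supplied by a community-maintained builder image.

```yaml
# Illustrative cloudbuild.yaml that builds an image with buildpacks, no Dockerfile required.
steps:
- name: 'gcr.io/k8s-skaffold/pack'               # community-maintained image providing the pack CLI
  entrypoint: 'pack'
  args:
  - 'build'
  - 'gcr.io/$PROJECT_ID/my-app'
  - '--builder=gcr.io/buildpacks/builder:v1'     # Google Cloud buildpacks builder
images:
- 'gcr.io/$PROJECT_ID/my-app'
```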
Making it easier for developers and operators to build, deploy, and manage applications is critical for the success of any organization. Driving developer productivity within IDEs, leveraging end-to-end automation, and supporting multicloud patterns and practices are just some of the ways we are trying to achieve this at ForgeRock. To learn more about ForgeRock, and to deploy the ForgeRock Identity Platform into your Kubernetes cluster, check out our open-source ForgeOps repository on GitHub.