Developers & Practitioners
Intro to Kf: Cloud Foundry apps on Kubernetes
While many companies are writing brand-new Kubernetes-based applications, it’s still quite common to find companies that want to migrate existing workloads. A common source platform for these applications is Cloud Foundry. However, getting an existing Cloud Foundry application running on Kubernetes can be non-trivial, especially if you want to avoid making code changes in your applications or imposing big process changes across your teams. That is, unless you’re using Kf to do a lot of that heavy lifting for you.
Kf is a Google Cloud service that allows you to easily move existing Cloud Foundry workloads to Kubernetes with minimal disruption to your existing processes.
Kf features a command-line interface (CLI), also named kf, that replaces the existing Cloud Foundry cf command-line utility. The kf CLI implements the most commonly used cf functionality, including the ability to manage apps, routes, services, bindings, and more.
For example, to deploy an existing application you would simply issue the kf push command.
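As a minimal sketch of that workflow (the space and app names here are hypothetical):

```shell
# Target the Kf space you want to deploy into
# (the space name is hypothetical).
kf target -s my-space

# Push the app from the current source directory,
# just as you would with `cf push`.
kf push my-app
```

Like cf push, kf push builds and deploys the app from source, so developers don’t need to learn a new deployment flow.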
On the server side, Kf is built on several open source technologies, in some cases consumed through their managed Google Cloud implementations. For instance, GKE, our managed Kubernetes offering, provides the platform for managing and running the applications. Routing and ingress are handled by Anthos Service Mesh, Google Cloud’s managed Istio-based service mesh. Finally, Tekton provides on-cluster build functionality for Kf. Developers don’t have to worry about any of those technologies, as Kf abstracts them away.
Kf primitives such as spaces, bindings and services are implemented as custom Kubernetes resources and controllers. The custom resources effectively serve as the Kf API and are used by the kf CLI to interact with the system. The controllers use Kf's CRDs to orchestrate the other components in the system.
The beauty of this approach is that developers who are familiar with existing workflows can largely replicate those workflows with the kf CLI. On the other hand, platform operators who are more familiar with Kubernetes can use kubectl to interact with the CRDs and controllers.
For instance, if you wanted to list the apps running in a Kf space, you could issue either of the following commands:
kf apps
kubectl get apps -n space-name
Notice that Cloud Foundry and Kf spaces map one-to-one to Kubernetes namespaces.
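Because of that one-to-one mapping, listing spaces with the kf CLI and listing namespaces with kubectl show the same underlying objects:

```shell
# List spaces as Kf sees them...
kf spaces

# ...and list the Kubernetes namespaces that back them.
kubectl get namespaces
```

Note that kubectl get namespaces will also include system namespaces (kube-system and the like) that aren’t Kf spaces.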
To get a list of all the custom resources you can examine the api-resources in the kf.dev API group.
kubectl api-resources --api-group=kf.dev
NAME                      SHORTNAMES   APIGROUP   NAMESPACED   KIND
apps                                   kf.dev     true         App
builds                                 kf.dev     true         Build
clusterservicebrokers                  kf.dev     false        ClusterServiceBroker
routes                                 kf.dev     true         Route
servicebrokers                         kf.dev     true         ServiceBroker
serviceinstancebindings                kf.dev     true         ServiceInstanceBinding
serviceinstances                       kf.dev     true         ServiceInstance
spaces                                 kf.dev     false        Space
With Kf, developers can continue to work with a familiar interface, while platform operators can use declarative Kubernetes practices and tooling such as Anthos Config Management to manage the cluster. It’s really the best of both worlds if you’re looking to manage your existing Cloud Foundry applications on Kubernetes.
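As a sketch of that declarative workflow, a space could be described in a manifest stored in Git and synced to the cluster by Anthos Config Management. A minimal Space resource might look like the following (the name is hypothetical; check the Kf reference docs before relying on any fields beyond those shown):

```yaml
# Minimal sketch of a Kf Space custom resource.
# The name is hypothetical; consult the Kf docs for the full schema.
apiVersion: kf.dev/v1alpha1
kind: Space
metadata:
  name: development
```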
If you’d like to learn more about Kf, check out the video I just released on YouTube. It reviews some of the concepts discussed here and includes a short demo. If you’d like to get hands-on, try the quickstart. And, of course, you can always read the documentation.