This page describes the requirements for packaging your Kubernetes application, and guidelines for meeting those requirements.
Your application package is a bundle of container images and configuration files that are deployed to users' Kubernetes clusters. To support deploying your app to Google Kubernetes Engine from the Google Cloud Platform Console, the application package must include a deployment container, which pushes the configuration files and display metadata to the Kubernetes API.
Your application package enables GCP users to:
- Discover your app in the GCP Marketplace catalog
- Deploy the application to their GKE cluster, using the Google Cloud Platform UI
- Interact with the running app using Google Cloud Platform Console
In addition to supporting a UI-based deployment from GCP Marketplace, your package must include steps for users to deploy your app from a command-line interface (CLI), using tools such as kubectl and Helm.
For examples of app packages, see the GitHub repository for Google Click to Deploy Solutions. The repository contains packages for popular open source applications such as WordPress and Elasticsearch.
Before you begin
- Make sure that you have set up your GCP environment.
- Create a public Git repository for the configuration files, user guide, and other resources to run your app. You can host the repository with a provider such as GitHub, Cloud Source Repositories, or on your own server. We recommend a dedicated repository for each product that you are distributing.
The application must meet the following requirements:
Your repository must contain a LICENSE file with the open source license for the repository.
Your repository must contain a configuration file, which deploys your app. The configuration file can be a Kubernetes YAML manifest, or a Helm chart.
The configuration must include an Application custom resource, which describes the application.
Optionally, if you want your application to be compatible with GKE On-Prem, modify your application images for compatibility.
For information on GKE On-Prem, request access.
Optionally, if you want your application to be compatible with Istio, review the limitations on clusters that run Istio.
If you are selling a commercial application, your app must report its usage to Google, so that your customers are billed accurately. We recommend integrating your app with the Usage-based Billing Agent, which sends usage reports to Google.
You must upload all the container images for your application to your registry in Container Registry. Your registry must also include a deployer image, which pushes the app's configuration to the Google Kubernetes Engine API when users deploy the app from Google Cloud Platform Console.
You must include integration tests for the app.
You must include a user guide, with steps to deploy the application from the command line, configure the app, and use the app.
Requirements for your configuration
You can provide your configuration either as a Kubernetes manifest, or as a Helm chart.
Your configuration must meet the following requirements:
To protect users from unstable APIs, use only beta or generally available Kubernetes resources.
Your configuration must deploy an Application custom resource. The Application resource contains the information that users see when they deploy their app through the GCP Marketplace UI.
The Application resource is a custom resource, which aggregates the Kubernetes components associated with your app, and lets users manage these resources as a group.
For information on creating an Application resource, and for examples, see the Application GitHub repository.
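As an illustration of the shape of such a resource, the following is a minimal sketch; the names, labels, and component kinds are placeholders, not values required by GCP Marketplace:

```yaml
# Illustrative Application resource; all names and label values here
# are placeholders for your own application's values.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: example-app-1            # the app instance name (parameterized)
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example-app-1
  componentKinds:                # the Kubernetes kinds aggregated by this app
    - group: apps
      kind: Deployment
    - group: ""
      kind: Service
  descriptor:
    type: example-app
    version: "1.0"
```

The selector determines which resources the Application aggregates, which is how users can manage the app's components as a group.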
Your configuration must use parameters that can be substituted, either using
envsubst, or as flags.
The following parameters must be capable of being substituted:
Cluster and namespace: Kubernetes client tools typically provide a mechanism to select cluster and namespace, which is sufficient to meet this requirement. Namespaces sometimes also appear inside of resource definitions, such as when referring to a
RoleBinding. Any such reference must be parameterized.
App name: The instance name of the application resource. The same string must also be included in the name of every resource belonging to the application, to avoid name collisions in the event of multiple deployments of the app within one namespace.
Container Images: Every image reference, such as in a
PodSpec, must be substitutable. This allows GCP Marketplace to override these references to point to images published in our container registry. It also allows customers to easily substitute modified images.
License secret: If your application is performing commercial metering, it must accept the name of a Secret resource as a parameter. The secret contains the usage reporting credentials, which your app uses to send usage data to Google.
All the resources in your configuration must belong to a single namespace. GCP Marketplace users configure this namespace when they deploy your app. Avoid hard-coding namespaces in your manifests, so that the resources you specify are created in the namespace that users select.
It must be possible to deploy your application using client-side tools. For example, if you are using Helm, your chart must be deployable with client-side tooling only.
Requirements to support GKE On-Prem
If you want your application to run on GKE On-Prem, note that customers' GKE On-Prem clusters might be configured differently than standard GKE clusters, so the following resources must be manually configured by your customers:
Storage
GCP Marketplace cannot predict which Storage Classes exist in a customer's GKE On-Prem cluster, so we recommend the following changes to your application:
If you are creating Persistent Volume Claims (PVCs), the claim must be provisioned without explicitly referencing a Storage Class.
If your application uses the
x-google-marketplace STORAGE_CLASS property, customers must choose a Storage Class when they deploy your app from the Google Cloud Platform Console. We recommend adding documentation that guides users to choose an appropriate Storage Class.
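For example, a claim that leaves the Storage Class unspecified might look like the following sketch; the claim name and size are illustrative:

```yaml
# PVC sketch: no storageClassName is set, so the cluster's default
# Storage Class (or one the customer chooses) is used.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-app-1-data       # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```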
Services and Ingress
GCP Marketplace cannot predict the network topology and networking controllers in a customer's GKE On-Prem cluster, so customers must set up networking that works with their specific configuration.
Depending on the client that your application uses for the deployment, you must modify your application in the following ways:
If you are using
kubectl and environment variables for your app's configuration, we recommend removing all Ingress resources from your manifests.
If you are using Helm for your configuration, use the
x-google-marketplace INGRESS_AVAILABLE property in the schema for your deployment image. If this property is
false, your deployer must not create any Ingress resources.
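A schema entry that uses this property might look like the following sketch; the property name ingressAvailable is illustrative:

```yaml
# Sketch of a deployer schema entry for the INGRESS_AVAILABLE property.
# The deployer reads this value and skips creating Ingress resources
# when it is false.
properties:
  ingressAvailable:
    type: boolean
    x-google-marketplace:
      type: INGRESS_AVAILABLE
```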
For high-level information on the deployment container, see the requirements for the deployment container. For detailed steps to build a deployment container, including information about the deployer's schema, see the GCP Marketplace tools GitHub repository.
In the post-deployment section of your documentation, add steps for your customers to set up networking after they have deployed the application.
Kubernetes Service Accounts
To ensure that customers' GKE On-Prem clusters can access the application images in your Container Registry repository, your application must use explicitly declared Kubernetes Service Accounts. The Service Accounts must be defined in your deployer image's schema.
This means that your application must not rely on the namespace's default Service Account, nor should it create new Service Accounts. To remove the dependency on the namespace's default Service Account, explicitly define the service account to use for all workloads, such as by setting the
serviceAccountName property for all Pod specs.
For an application that uses an explicit Service Account, see the following examples from the Prometheus app in the Google Click to Deploy GitHub repository:
- Example: Declaring the Service Account in the deployer's schema
- Example: Using the Service Account in the spec
For information on configuring Service Accounts, see the Kubernetes documentation for Service Accounts.
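The following sketch shows a workload that sets an explicit Service Account instead of relying on the namespace's default; all names are illustrative:

```yaml
# Deployment sketch with an explicitly declared Service Account.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app-1-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app-1
  template:
    metadata:
      labels:
        app: example-app-1
    spec:
      serviceAccountName: example-app-1-sa   # declared in the deployer schema
      containers:
        - name: server
          image: gcr.io/exampleproject/exampleapp:6.3   # illustrative path
```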
Limitations on clusters with Istio
If you want your application to be compatible with Istio, review the limitations described in this section. For an overview of Istio, see What is Istio?.
If your customers run Istio in their clusters, Istio controls the network traffic between your application's Pods by default. This means that network communication might be blocked in the following scenarios:
If your Pods run
kubectl commands that use the cluster's IP address, the commands might fail.
If your application uses Pod-to-Pod or IP-based communication, the cluster formation step might fail.
External connections to third-party services, such as OS package repositories, are blocked. Customers must configure Istio egress traffic to enable access to external services.
We recommend configuring incoming connections using Istio Gateway, instead of LoadBalancer or Ingress resources.
If you are using Helm for your configuration, use the
x-google-marketplace ISTIO_ENABLED property in the schema for your
deployment image. If this property is
true, your deployer must modify
the deployment, such as waiting for the Istio sidecar to be ready.
To help your customers set up communication between application Pods, we recommend adding steps to the post-deployment sections of your documentation.
Requirements to integrate the billing agent
If you are selling a commercial application, we recommend integrating your application with the Usage-based Billing (UBB) Agent.
The agent handles authentication and reporting to Google's usage reporting endpoint: Service Control. When you have submitted your pricing model, the GCP Marketplace team creates a service for your application to report against, and the billing metrics to measure usage.
The agent also manages local aggregation, crash recovery, and retries. For the usage-hour metric, the agent can be configured to automatically report heartbeats.
The agent also exposes reporting status, allowing your application to detect whether the agent is successfully reporting usage data.
Customers purchase the application in GCP Marketplace to acquire a license, which is attached to the application at deployment time.
When you are integrating with the billing agent, consider how your application behaves when usage reports fail, which might indicate one of the following:
The customer has cancelled their subscription.
The customer might have accidentally disabled the reporting channel. For example, customers might inadvertently remove or misconfigure the agent, or the network might prevent the agent from accessing Google's reporting endpoint.
In these cases, Google does not receive usage data and customers are not billed.
In these scenarios, your application can self-terminate, or disable functionality. If usage reports fail during the application's startup, we recommend that your app self-terminate, so that your customers get immediate feedback, and can resolve the issue.
Integrating the billing agent
You can integrate the agent as a sidecar container, which runs in the same Pod as your application, or using the SDK.
In the sidecar approach, the agent runs in its own container, in the same Kubernetes Pod as the application container. Your application communicates with the agent's local REST interface.
If you use the SDK approach, the agent must be compiled or linked into your application binary. The SDK is implemented natively for Go, with bindings for Python.
In general, the sidecar approach requires less integration effort, while the SDK is more robust against being accidentally disabled.
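The sidecar approach might be sketched as follows; the agent image path, configuration file location, and ConfigMap name below are assumptions for illustration, not values defined by this page:

```yaml
# Sketch of the sidecar approach: the billing agent runs in its own
# container, in the same Pod as the application, and the app talks to
# the agent's local REST interface.
apiVersion: v1
kind: Pod
metadata:
  name: example-app-1
spec:
  containers:
    - name: app
      image: gcr.io/exampleproject/exampleapp:6.3          # illustrative
    - name: billing-agent
      image: gcr.io/exampleproject/exampleapp/ubbagent:6.3 # illustrative
      env:
        - name: AGENT_CONFIG_FILE          # assumed agent configuration knob
          value: /etc/ubbagent/config.yaml
      volumeMounts:
        - name: ubbagent-config
          mountPath: /etc/ubbagent
  volumes:
    - name: ubbagent-config
      configMap:
        name: example-app-1-ubbagent-config # illustrative ConfigMap name
```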
Credentials to report usage
The billing agent requires credentials that allow it to send usage reports to
Google. GCP Marketplace generates these credentials when users deploy
the app from GCP Marketplace, and ensures that they exist as a Secret
in the target Kubernetes namespace before your application is deployed.
The name of this Secret is passed to your app as a parameter.
For an example manifest that uses the reporting secret, see the WordPress app example in GitHub.
The Secret contains the following fields:
- An identifier representing the customer's agreement to purchase and use the software.
- An identifier associated with the Entitlement that is passed to Google Service Control along with usage reports.
- The Google Cloud Service Account JSON key used to authenticate to Google Service Control.
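An application can surface one of these fields to the billing agent as in the following sketch; the Secret name, key name, and environment variable are placeholders, so substitute the actual field names from your reporting Secret:

```yaml
# Sketch of consuming the reporting Secret in a container spec.
env:
  - name: AGENT_ENCODED_KEY            # illustrative variable name
    valueFrom:
      secretKeyRef:
        name: example-app-1-license    # the Secret name passed as a parameter
        key: reporting-key             # placeholder key name
```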
If your solution provides a SaaS component in addition to the app, you can optionally have that SaaS component periodically check the validity of entitlement IDs, using the GCP Marketplace Procurement service. To get access to the Procurement service, contact your Partner Engineer.
For information on additional parameters that are passed to the app, see Custom Parameters, later in this section.
Requirements for building your container images
Your application consists of one or more application container images. In addition, your repository must include a deployment container, which is used when customers deploy the app from the GCP Marketplace UI.
Container images are typically built using a
Dockerfile and the docker build command-line tool. We recommend that you publish the Dockerfile and
container build instructions in the public repository for your application.
This enables customers to modify or rebuild the images, which is sometimes
necessary to certify images for enterprise production environments.
If your application image depends on a base image such as Debian, or a language runtime image such as Python or OpenJDK, we strongly recommend that you use one of GCP Marketplace's certified container images. Doing so ensures timely updates to your base image, especially for security patches. This approach also simplifies your open source license review step, because you do not need to account for packages that are present in the base image.
After building your application images, push them to the staging registry that you created in Container Registry when you set up your environment.
Your Container Registry repository must have the following structure:
Your application's main image must be in the root of the repository. For example, if your Container Registry repository is
gcr.io/exampleproject/exampleapp, the application's image should be in gcr.io/exampleproject/exampleapp.
The image for your deployment container must be in a folder called
deployer. In the above example, the deployment image must be in gcr.io/exampleproject/exampleapp/deployer.
If your application uses additional container images, each additional image must be in its own folder. For example, if your application requires an Ubuntu 16.04 image, add the image to its own folder in the repository.
All the images in your application must be tagged with the minor version using semantic versioning for your application.
For example, the Container Registry repository for the Elasticsearch Cluster Kubernetes application contains the main application image, the deployer image in its own folder, and the Ubuntu 16.04 image in a folder of its own. All the images in the repository are tagged with the same minor version number, 6.3, using semantic versioning.
Use the container image identifiers that you selected when you chose your product identifiers.
To upload your images to Container Registry, tag each image with the registry name, and then push the image using gcloud. For example, use the following commands to push the images for an app named example-pro:

```
docker build -t gcr.io/my-project/example-pro:4 .
gcloud docker -- push gcr.io/my-project/example-pro:4
```
For detailed steps to tag and push images to your registry, see the Container Registry documentation.
Requirements for the deployment container
The deployment container, or deployer, is used when customers deploy your
solution from the GCP Marketplace catalog. The deployer image
packages your application's Kubernetes configuration and the
client tools such as
kubectl or Helm, which push the configuration to the
Kubernetes API. The deployer should typically use the same set of
command-line commands that a user would run to deploy your application.
To create your deployer, use one of the base images for deployment containers from the tools repository marketplace directory:
- If you are using
kubectl for your configuration, use the kubectl base image.
- If you are using a Helm chart for your configuration, use the Helm base image.
The base images have built-in utilities for tasks such as assigning owner references, password generation, and post-deploy cleanup.
For detailed steps to build your deployer image, see Building the Application Deployer.
Parameters passed to your app
Your deployment container must declare parameters that need to be collected from customers when they select your app. These parameters are then supplied to the deployment container when users deploy the app.
To configure these parameters, your deployment container image must include a JSON Schema, in YAML format. To learn how to craft schema.yaml, see Deployer schema.
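As an illustrative sketch, a schema.yaml might declare the required substitutable parameters with x-google-marketplace property types such as NAME, NAMESPACE, and IMAGE; the property names and default image path here are placeholders:

```yaml
# Sketch of a deployer schema.yaml; property names are illustrative.
properties:
  name:
    type: string
    x-google-marketplace:
      type: NAME          # the app instance name
  namespace:
    type: string
    x-google-marketplace:
      type: NAMESPACE     # the namespace users select at deploy time
  image:
    type: string
    default: gcr.io/exampleproject/exampleapp:6.3   # illustrative
    x-google-marketplace:
      type: IMAGE         # substitutable image reference
required:
  - name
  - namespace
  - image
```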
Requirements for integration tests
Your application should include integration tests that:
- Test the basic functionality of the software
- Validate that the app reports usage as expected, and that usage amounts are correct
- Generate usage reports to Google Service Control, to enable GCP Marketplace's billing regression tests
The primary image of your application must contain a data-test/ folder, with the configuration and manifests that are applied when the app
is started in test mode. Depending on your configuration method, your folder
structure can be similar to the following examples:
For Helm-based deployments:
```
data-test/
  chart/
    nginx.tar.gz
  schema.yaml
```
For kubectl-based deployers:
```
data-test/
  manifest/
    tester.yaml.template
  schema.yaml
```
You might need to specify additional Kubernetes resources or even modify some of them for integration tests. When you use a test configuration, the configuration defined in /data-test overrides the default configuration in /data according to the following rules:
- values.yaml files are merged. Values defined in /data-test override the values in /data.
- Schema properties are merged. Schema entries defined in /data-test/schema.yaml override entries defined in /data/schema.yaml.
- All other files in /data-test override files in /data.
When your tests run, if you are using a Helm configuration, the tester Pods are annotated with helm.sh/hook: test-success. For deployments that use kubectl, the tester Pods are marked with an equivalent test annotation.
The deployer starts the tester Pod after all the Kubernetes resources are healthy. If the test fails, the deployer must exit with a non-zero exit status. The test is considered successful if the container in the tester Pod exits successfully.
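A Helm tester Pod might be declared as in the following sketch; the Pod name, image path, and test command are illustrative:

```yaml
# Sketch of a tester Pod for a Helm-based deployer. The container runs
# the integration test; its exit status decides the test result.
apiVersion: v1
kind: Pod
metadata:
  name: example-app-1-tester       # illustrative name
  annotations:
    helm.sh/hook: test-success     # marks this Pod as a test hook
spec:
  restartPolicy: Never
  containers:
    - name: tester
      image: gcr.io/exampleproject/exampleapp/tester:6.3   # illustrative
      command: ["/bin/sh", "-c", "exit 0"]  # replace with real test commands
```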
The base images for the deployer contain a script called
/bin/deploy_with_tests.sh, which executes the deployer in test mode.
The script is a modified version of
/bin/deploy.sh, which is the default
deployment script. To start the deployer in test mode, run start.sh, replacing the entrypoint:

```
start.sh \
  --deployer="$deployer" \
  --parameters="$parameters" \
  --entrypoint="/bin/deploy_with_tests.sh"
```
In test mode, the deployer executes the following steps:
- Merges the test configuration in /data-test with the default configuration in /data, to prepare the resources such as the tester Pod.
- Applies the manifest for the app.
- Validates the Kubernetes resources, to ensure that the app is ready for testing.
- Starts the tester Pods. If a tester Pod fails, the deployer Job fails.
Ensure that your integration tests generate usage reports to Google Service Control. GCP Marketplace periodically uses the generated reports as part of regression tests for billing, to ensure that reporting and billing work as expected.
Requirements for your user guide
Your user guide must include the following information:
- A general application overview, covering basic functions and configuration options. This section must also link to the published solution on GCP Marketplace.
- Configuring client tools such as
kubectl or Helm, as applicable.
- Installing the Application CustomResourceDefinition (CRD), so that your cluster can manage the Application resource.
- If you are selling a commercial app, steps for acquiring and deploying a license Secret from GCP Marketplace.
- Commands for deploying the application.
- Passing the parameters that are available in the UI configuration.
- Pinning image references to immutable digests.
If you add custom input fields to your deployer schema, add information about the expected values, if applicable.
- Connecting to an admin console (if applicable).
- Connecting a client tool and running a sample command (if applicable).
- Modifying usernames and passwords.
- Enabling ingress and installing TLS certs (if applicable).
Backup and restore
- Backing up application state.
- Restoring application state from a backup.
- Updating the application images for patches or minor updates.
- Scaling the application (if applicable).
- Deleting the app.
- Cleaning up any resources that might be intentionally orphaned, such as PersistentVolumeClaims.