This topic discusses how to add a second Apigee hybrid organization (org) to an existing Kubernetes cluster. In this multi-org configuration, both orgs use and share the same Cassandra ring. Each org can have multiple environments and environment groups configured.
Limitations
A multi-org per cluster configuration is supported with the following limitations. Until these limitations are mitigated, we do not recommend that you use this configuration:
- If you are going to have multiple Apigee hybrid instances, each instance should have its own cluster. Multiple Apigee hybrid instances running on the same Kubernetes cluster can lead to instability and potentially cause downtime.
- All logging from the pods is sent to the first Google Cloud project that was configured. This limitation is most apparent in the Cloud Logging tool: logs for the other Apigee orgs are not sent to their matching Google Cloud projects. Logs are still captured at the pod level and can be retrieved with kubectl commands (see the example after this list); however, they are not sent to the correct Cloud project through Cloud Logging.
- You cannot delete org data in the Cassandra database for just one org, which means you cannot remove orgs selectively. Any modification to the database configuration affects all orgs that are deployed to that cluster.
- The hybrid upgrade procedure upgrades the entire cluster all at once.
- Backup and restore is done as a cluster, and cannot be done for a specific org.
- The Apigee API Monitoring feature (Timeline, Recent, Investigate) only works for the first org that was configured and deployed. It will not work for the other orgs in a multi-org cluster.
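For example, to pull logs for a pod directly from the cluster while Cloud Logging routes them to the first org's project, you can use standard kubectl commands. This is a minimal sketch; the pod name is a placeholder, and the apigee namespace assumes the default installation namespace:

# List the pods in the Apigee namespace to find the one you need.
kubectl get pods -n apigee

# Stream the logs for a specific pod (replace the placeholder with a real pod name).
kubectl logs -n apigee RUNTIME_POD_NAME --follow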
Multi-org options
This section describes how Apigee Support handles existing multi-org clusters and recommendations for future deployments:
- If you have existing multi-org Kubernetes clusters deployed in non-production and production contexts, Apigee Support will continue to support them. However, note the technical limitations outlined in the Limitations section above. We recommend that you change any future production deployments to use one Apigee org per cluster.
- If you have existing multi-org clusters in non-production contexts, Apigee Support will continue to support them. We recommend that you migrate any production clusters to a new configuration that uses one Apigee org per cluster.
Prerequisites
Before continuing, note the following:
- You must have an existing hybrid org with one or more environments installed and configured in an existing Kubernetes cluster. See the hybrid installation instructions.
- When combining multiple orgs in a single cluster, the hybrid versions must all match. Before adding a second org to a cluster, upgrade the existing hybrid installation, if necessary. See Upgrading Apigee hybrid.
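For example, one way to confirm the hybrid version of the existing installation before adding a second org (a sketch only; use whichever tool chain your installation was built with, and the apigee namespace is assumed):

# Helm installations: list the installed Apigee charts and their versions.
helm list -n apigee

# apigeectl installations: print the apigeectl and hybrid version.
$APIGEECTL_HOME/apigeectl version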
Create an org to add to the existing cluster
To create the additional org, follow the steps in Part 1: Project and org setup.
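If you prefer to script the org creation, the underlying call is the Apigee organizations API. The following is a minimal sketch with placeholder values (NEW_PROJECT_ID, ANALYTICS_REGION); Part 1: Project and org setup remains the authoritative procedure:

# Create a hybrid org in the new Google Cloud project.
TOKEN=$(gcloud auth print-access-token)

curl -X POST "https://apigee.googleapis.com/v1/organizations?parent=projects/NEW_PROJECT_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "NEW_PROJECT_ID",
    "analyticsRegion": "ANALYTICS_REGION",
    "runtimeType": "HYBRID"
  }'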
Configure the new org
In the following steps, you will create a new overrides file and configure it for the new org. An overrides.yaml file can only support one org's information. Therefore, you must create a new overrides.yaml file and apply it to the existing Kubernetes cluster.
- Create service accounts for use with the new org. See Create service accounts.
- Make note of the TLS certificate files (.key and .pem) in your certs directory. If you need to create them again, you can follow the instructions in Create TLS certificates.
- Copy your existing overrides.yaml to a new file to use as a starting point for configuring your new org, for example: new-overrides.yaml.
- Edit the new overrides file with the following configuration:
org: "new-org-name"

instanceID: "instance-id" ## Must match the instanceID of your existing org.

multiOrgCluster: true ## Enables exporting metrics for this org to the Google Cloud project named in gcp.projectID

k8sCluster:
  name: "existing-cluster-name"
  region: "existing-cluster-analytics-region"

gcp:
  projectID: "new-project-id"
  name: "new-project-id"
  region: "new-project-default-location"

namespace: namespace ## Must be the same for both the new and existing orgs.

virtualhosts:
  - name: new-environment-group-name
    sslCertPath: ./certs/cert-file-name # .crt or .pem
    sslKeyPath: ./certs/key-file-name # .key

envs:
  - name: new-environment-name
    serviceAccountPaths:
      runtime: ./new-service-accounts-directory/new-project-id-apigee-runtime.json
      synchronizer: ./new-service-accounts-directory/new-project-id-apigee-synchronizer.json
      udca: ./new-service-accounts-directory/new-project-id-apigee-udca.json

connectAgent:
  serviceAccountPath: ./new-service-accounts-directory/new-project-id-apigee-mart.json

mart:
  serviceAccountPath: ./new-service-accounts-directory/new-project-id-apigee-mart.json

metrics:
  serviceAccountPath: ./new-service-accounts-directory/new-project-id-apigee-metrics.json

watcher:
  serviceAccountPath: ./new-service-accounts-directory/new-project-id-apigee-watcher.json
The following table describes each of the property values that you must provide in the overrides file. For more information, see Configuration property reference.
| Variable | Description |
| --- | --- |
| new-org-name | The name of your new org. |
| instance-id | All orgs in this cluster must have the same instance ID. Therefore, this must match the instanceID entry in the overrides file for your original org. |
| existing-cluster-name | The name of the cluster you are adding this org to. It must match the k8sCluster.name entry in the overrides file for your original cluster. |
| existing-cluster-analytics-region | The region where the original cluster is provisioned. It must match the k8sCluster.region entry in the overrides file for your original cluster. |
| new-project-id | The project ID of your new project. The project ID and org name are the same. |
| new-project-default-location | The analytics region you specified when you created the new org. It does not have to be the same as the region for the existing org. |
| namespace | All orgs in the cluster must share the same namespace. Be sure to use the same namespace that was used for the original org. The namespace for most installations is apigee. |
| new-environment-group-name | The new environment group you created for the new org. |
| cert-file-name and key-file-name | The TLS cert and key files for the cluster that you checked or created earlier in this section. |
| new-environment-name | The name of the environment you created for the new org. |
| new-service-accounts-directory | The directory where the service account key files you created for the new org are located. |
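Because the instance ID, namespace, and cluster name and region must match the original org's overrides, it can help to compare the two files before applying anything. A quick sketch, assuming the file names used in this example (overrides.yaml and new-overrides.yaml):

# Confirm that the values that must match between the two orgs are identical
# (instanceID, namespace, and the k8sCluster block).
grep -A 2 -E 'instanceID|namespace|k8sCluster' overrides.yaml new-overrides.yaml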
Apply the configuration
Apply the new org configuration to your cluster:
- Do a dry run installation to check for any problems:
Helm
helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace apigee \
  --atomic \
  -f NEW_OVERRIDES_FILE.yaml \
  --dry-run
apigeectl
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --org --dry-run=client
- If there are no issues, apply the org-level components. This step installs the Cassandra jobs (user and schema), Apigee Connect, Apigee Watcher, and MART services:
Helm
helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace apigee \
  --atomic \
  -f NEW_OVERRIDES_FILE.yaml
apigeectl
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --org
- Install the environment. This step installs the apigee-runtime, synchronizer, and UDCA components, per environment:
Helm
helm upgrade ENV_NAME apigee-env/ \
  --install \
  --namespace apigee \
  --atomic \
  --set env=ENV_NAME \
  -f NEW_OVERRIDES_FILE.yaml \
  --dry-run
helm upgrade ENV_NAME apigee-env/ \
  --install \
  --namespace apigee \
  --atomic \
  --set env=ENV_NAME \
  -f NEW_OVERRIDES_FILE.yaml
apigeectl
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --env $ENV_NAME --dry-run=client
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --env $ENV_NAME
- Apply the load balancer changes. This step configures the ingress to listen to the new virtual host(s) for the second org. (A verification sketch appears after these steps.)
Helm
helm upgrade NEW_ENV_GROUP_NAME apigee-virtualhost/ \
  --install \
  --namespace apigee \
  --atomic \
  --set envgroup=NEW_ENV_GROUP_NAME \
  -f NEW_OVERRIDES_FILE.yaml \
  --dry-run
helm upgrade NEW_ENV_GROUP_NAME apigee-virtualhost/ \
  --install \
  --namespace apigee \
  --atomic \
  --set envgroup=NEW_ENV_GROUP_NAME \
  -f NEW_OVERRIDES_FILE.yaml
apigeectl
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --settings virtualhosts --dry-run=client
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --settings virtualhosts
- Enable synchronizer access for your new org following the steps in Enable Synchronizer access. (A minimal sketch of the underlying API call appears after these steps.)
- By default, when you first install the Apigee hybrid runtime, the Telemetry component is configured with multiOrgCluster disabled. Use the following steps to enable multi-org telemetry for each org in your cluster:
  - Delete the existing Telemetry component with the following commands:
Helm
helm delete telemetry -n apigee
apigeectl
Perform a dry-run first:
$APIGEECTL_HOME/apigeectl delete -f FIRST_OVERRIDES_FILE.yaml --telemetry --dry-run=client
If the dry-run is successful, delete the Telemetry component:
$APIGEECTL_HOME/apigeectl delete -f FIRST_OVERRIDES_FILE.yaml --telemetry
  - Add the following line to the overrides.yaml file for your existing org:
multiOrgCluster: true
  - Apply the changes to install the Telemetry component for the org.
Perform a dry-run first:
Helm
helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace apigee \
  --atomic \
  -f FIRST_OVERRIDES_FILE.yaml \
  --dry-run
apigeectl
$APIGEECTL_HOME/apigeectl apply -f FIRST_OVERRIDES_FILE.yaml --telemetry --dry-run=client
If the dry-run is successful, apply the changes and install the Telemetry component:
Helm
helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace apigee \
  --atomic \
  -f FIRST_OVERRIDES_FILE.yaml
apigeectl
$APIGEECTL_HOME/apigeectl apply -f FIRST_OVERRIDES_FILE.yaml --telemetry
  - Make sure the following line is in the overrides.yaml file for each new org:
multiOrgCluster: true
  - Apply the changes to install the Telemetry component for each new org. Repeat this for every new org in your multi-org cluster.
Perform a dry-run first:
Helm
helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace apigee \
  --atomic \
  -f NEW_OVERRIDES_FILE.yaml \
  --dry-run
apigeectl
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --telemetry --dry-run=client
If the dry-run is successful, apply the changes and install the Telemetry component:
Helm
helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace apigee \
  --atomic \
  -f NEW_OVERRIDES_FILE.yaml
apigeectl
$APIGEECTL_HOME/apigeectl apply -f NEW_OVERRIDES_FILE.yaml --telemetry
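Once the org, environment, and virtual host steps above have been applied, a quick way to verify that the second org is serving traffic is to check the new pods and call a proxy through the new environment group's hostname. This is a sketch, not part of the official procedure; NEW_ORG_NAME, NEW_HOSTNAME, INGRESS_IP, and proxy-base-path are placeholders, the apigee namespace assumes the default installation namespace, and the --cacert path reuses the certificate file from the overrides example (it applies when you are using that self-signed certificate):

# Check that runtime, synchronizer, and UDCA pods for the new org are Running.
kubectl get pods -n apigee | grep NEW_ORG_NAME

# Call a deployed proxy through the new environment group's hostname.
curl -v "https://NEW_HOSTNAME/proxy-base-path" \
  --resolve NEW_HOSTNAME:443:INGRESS_IP \
  --cacert ./certs/cert-file-name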
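For reference, the synchronizer-access step above boils down to a call to the setSyncAuthorization API for the new org. The following is a minimal sketch; NEW_ORG_NAME and NEW_PROJECT_ID are placeholders, the service account name assumes the default naming used when you created the synchronizer service account, and the Enable Synchronizer access page remains the authoritative procedure:

# Grant the new org's synchronizer service account permission to download contracts.
TOKEN=$(gcloud auth print-access-token)

curl -X POST "https://apigee.googleapis.com/v1/organizations/NEW_ORG_NAME:setSyncAuthorization" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "identities": [
      "serviceAccount:apigee-synchronizer@NEW_PROJECT_ID.iam.gserviceaccount.com"
    ]
  }'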