This topic explains how to set up a multi-region deployment for Apigee hybrid on Microsoft® Azure Kubernetes Service (AKS).
Topologies for multi-region deployment include the following:
- Active-Active: When you have applications deployed in multiple geographic locations and you require low-latency API responses. You have the option to deploy hybrid in the geographic locations nearest to your clients, for example: US West Coast, US East Coast, Europe, APAC.
- Active-Passive: When you have a primary region and a failover or disaster recovery region.
The regions in a multi-region hybrid deployment communicate with one another through Cassandra.
Prerequisites
Before configuring hybrid for multiple regions, you must complete the following prerequisites:
- Follow the hybrid installation guide to complete prerequisites such as Google Cloud and organization configuration before moving on to the cluster setup steps.
- Cassandra multi-region requirements:
  - If the pod network namespace does not have connectivity between pods in different clusters (that is, the clusters run in "island network mode", the default case in AKS installations), enable the Kubernetes hostNetwork feature by setting cassandra.hostNetwork: true in the overrides file for all of the regions in your Apigee hybrid multi-region installation (see the example after this list).
  - Enable hostNetwork on existing clusters before expanding your multi-region configuration to new regions.
  - When hostNetwork is enabled, make sure worker nodes can perform reverse DNS lookups. Apigee Cassandra uses both forward and reverse DNS lookups to obtain the host IP at startup.
  - Open Cassandra ports 7000 and 7001 between the Kubernetes clusters in all regions so that worker nodes across regions and data centers can communicate. See Configure ports.
For detailed information about the hostNetwork setting, see the Kubernetes documentation.
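For example, the setting is a single property under the cassandra key in each region's overrides file:
cassandra:
  hostNetwork: true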
Create a virtual network in each region
Follow the Azure Kubernetes Service (AKS) documentation to:
- Create a virtual network in each region.
- Establish network peering between the regions.
- Verify the network peering.
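As a sketch, the Azure CLI commands below create and peer two virtual networks. The resource group, network names, regions, and address spaces are illustrative assumptions; substitute your own values:
# Create a virtual network in each region (names and CIDRs are examples).
az network vnet create --resource-group my-rg --name vnet-westus --location westus --address-prefixes 10.10.0.0/16
az network vnet create --resource-group my-rg --name vnet-eastus --location eastus --address-prefixes 10.20.0.0/16
# Peer the networks in both directions.
az network vnet peering create --resource-group my-rg --vnet-name vnet-westus --name west-to-east --remote-vnet vnet-eastus --allow-vnet-access
az network vnet peering create --resource-group my-rg --vnet-name vnet-eastus --name east-to-west --remote-vnet vnet-westus --allow-vnet-access
# Verify that the peering state is Connected.
az network vnet peering list --resource-group my-rg --vnet-name vnet-westus -o table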
Create multi-regional clusters
Set up Kubernetes clusters in multiple regions with different CIDR blocks, using the locations and virtual network names you created previously. See also Step 1: Create a cluster.
Open Cassandra ports 7000 and 7001 between the Kubernetes clusters in all regions (port 7000 may be used as a backup option during troubleshooting).
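For the cluster-creation step, an Azure CLI sketch like the following creates one cluster per region, each with a distinct pod CIDR. The cluster names, CIDR ranges, and subnet ID variables are assumptions to adapt:
# Example only: one AKS cluster per region, each with its own pod CIDR.
# Names, CIDR ranges, and subnet IDs are placeholders; substitute your values.
az aks create --resource-group my-rg --name aks-westus --location westus \
  --network-plugin kubenet --pod-cidr 10.244.0.0/16 --vnet-subnet-id "$WESTUS_SUBNET_ID"
az aks create --resource-group my-rg --name aks-eastus --location eastus \
  --network-plugin kubenet --pod-cidr 10.245.0.0/16 --vnet-subnet-id "$EASTUS_SUBNET_ID"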
Configure the multi-region seed host
This section describes how to expand the existing Cassandra cluster to a new region. This configuration allows the new region to bootstrap from the seed host and join the existing cluster as a new data center. Without it, the multi-region Kubernetes clusters would not know about each other.
- Set the kubectl context to the original cluster before retrieving the seed name:
kubectl config use-context original-cluster-name
- Run the following kubectl command to identify a seed host address for Cassandra in the current region. A seed host address allows a new regional instance to find the original cluster on its very first startup and learn the topology of the cluster. The seed host address is designated as the contact point in the cluster.
kubectl get pods -o wide -n apigee | grep apigee-cassandra
apigee-cassandra-default-0 1/1 Running 0 4d17h 120.38.1.9 aks-agentpool-21207753-vmss000000
- Decide which of the IPs returned by the previous command will be the multi-region seed host. In this example, where only a single-node Cassandra cluster is running, the seed host is 120.38.1.9.
- In data center 2, copy your overrides file to a new file whose name includes the cluster name. For example, overrides_your_cluster_name.yaml.
- In data center 2, configure cassandra.multiRegionSeedHost and cassandra.datacenter in overrides_your_cluster_name.yaml, where multiRegionSeedHost is one of the IPs returned by the previous command:
cassandra:
  multiRegionSeedHost: seed_host_IP
  datacenter: data_center_name
  rack: rack_name
For example:
cassandra:
  multiRegionSeedHost: 120.38.1.9
  datacenter: "dc-2"
  rack: "ra-1"
- In the new data center/region, before you install hybrid, set the same TLS certificates and credentials in overrides_your_cluster_name.yaml as you set in the first region. A consolidated example of the Cassandra settings follows this list.
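Putting these settings together, the cassandra section of the new region's overrides file might look like the following sketch. The data center and rack names are illustrative, and hostNetwork is needed only when the clusters run in island network mode, as described in the prerequisites:
cassandra:
  hostNetwork: true          # only for island-mode networks (the AKS default)
  multiRegionSeedHost: 120.38.1.9
  datacenter: "dc-2"
  rack: "ra-1"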
Set up the new region
After you configure the seed host, you can set up the new region.
To set up the new region:
- Copy your certificate from the existing cluster to the new cluster. The CA root is used by Cassandra and other hybrid components for mTLS, so it is essential to have consistent certificates across the cluster. (A sketch for verifying the copied certificate appears at the end of this section.)
- Set the context to the original namespace:
kubectl config use-context original-cluster-name
- Export the current namespace configuration to a file:
kubectl get namespace apigee -o yaml > apigee-namespace.yaml
apigee is the default namespace.
- Export the apigee-ca secret to a file:
kubectl -n cert-manager get secret apigee-ca -o yaml > apigee-ca.yaml
- Set the context to the new region's cluster name:
kubectl config use-context new-cluster-name
- Import the namespace configuration to the new cluster. Be sure to update the namespace in the file if you're using a different namespace in the new region:
kubectl apply -f apigee-namespace.yaml
- Import the secret to the new cluster:
kubectl -n cert-manager apply -f apigee-ca.yaml
- Install hybrid in the new region. Be sure that the overrides_your_cluster_name.yaml file includes the same TLS certificates that are configured in the first region, as explained in the previous section. Execute the following two commands to install hybrid in the new region:
apigeectl init -f overrides_your_cluster_name.yaml
apigeectl apply -f overrides_your_cluster_name.yaml
- Run nodetool rebuild sequentially on all the nodes in the new data center. This may take a few minutes to a few hours, depending on the data size. (A loop sketch for running this across all pods appears at the end of this section.)
kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u JMX_user -pw JMX_password rebuild -- dc-1
Where JMX_user and JMX_password are the username and password for the Cassandra JMX user.
- Verify the rebuild process from the logs. Also, verify the data size using the nodetool status command:
kubectl logs apigee-cassandra-default-0 -f -n apigee
kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u JMX_user -pw JMX_password status
The following example shows typical log entries:
INFO 01:42:24 rebuild from dc: dc-1, (All keyspaces), (All tokens)
INFO 01:42:24 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Executing streaming plan for Rebuild
INFO 01:42:24 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Starting streaming to /10.12.1.45
INFO 01:42:25 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889, ID#0] Beginning stream session with /10.12.1.45
INFO 01:42:25 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Starting streaming to /10.12.4.36
INFO 01:42:25 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889 ID#0] Prepare completed. Receiving 1 files(0.432KiB), sending 0 files(0.000KiB)
INFO 01:42:25 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Session with /10.12.1.45 is complete
INFO 01:42:25 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889, ID#0] Beginning stream session with /10.12.4.36
INFO 01:42:25 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Starting streaming to /10.12.5.22
INFO 01:42:26 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889 ID#0] Prepare completed. Receiving 1 files(0.693KiB), sending 0 files(0.000KiB)
INFO 01:42:26 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Session with /10.12.4.36 is complete
INFO 01:42:26 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889, ID#0] Beginning stream session with /10.12.5.22
INFO 01:42:26 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889 ID#0] Prepare completed. Receiving 3 files(0.720KiB), sending 0 files(0.000KiB)
INFO 01:42:26 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] Session with /10.12.5.22 is complete
INFO 01:42:26 [Stream #3a04e810-580d-11e9-a5aa-67071bf82889] All sessions completed
- Update the seed hosts. Remove multiRegionSeedHost: 120.38.1.9 from overrides_your_cluster_name.yaml and reapply the configuration:
apigeectl apply -f overrides_your_cluster_name.yaml
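As referenced in the certificate-copy step above, here is a quick way to confirm that both clusters hold the same apigee-ca secret. This is a sketch: it assumes the secret is a standard TLS secret whose certificate lives under the tls.crt key, so adjust the key name if your secret differs:
# Compare a hash of the CA certificate in both clusters; the digests should match.
# Assumes the apigee-ca secret stores its certificate under the tls.crt key.
kubectl --context original-cluster-name -n cert-manager get secret apigee-ca -o jsonpath='{.data.tls\.crt}' | sha256sum
kubectl --context new-cluster-name -n cert-manager get secret apigee-ca -o jsonpath='{.data.tls\.crt}' | sha256sum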
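And as noted in the rebuild step, nodetool rebuild must run on every node in the new data center, one node at a time. A small shell loop can do this; the app=apigee-cassandra label selector is an assumption based on the pod names shown earlier, so verify it with kubectl get pods --show-labels before relying on it:
# Run nodetool rebuild sequentially on each Cassandra pod in the new region.
# The label selector below is an assumption; confirm it matches your pods.
for pod in $(kubectl get pods -n apigee -l app=apigee-cassandra -o name); do
  kubectl exec "$pod" -n apigee -- nodetool -u JMX_user -pw JMX_password rebuild -- dc-1
done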