This tutorial walks you through the process of deploying a highly available IBM MQ queue manager cluster using Compute Engine on Google Cloud. IBM MQ transports data between applications and services through a queuing system that ensures delivery even if the network or an application fails.
This tutorial is useful if you are an enterprise architect, system administrator, developer, or DevOps engineer who wants to deploy a highly available IBM MQ queue manager cluster on Google Cloud.
The tutorial assumes that you are familiar with the following:
- Ubuntu Server 16.04 LTS
- IBM MQ
- Compute Engine
- Internal TCP load balancing
Objectives
- Create the Compute Engine instances.
- Create the firewall rules.
- Create the internal load balancers.
- Configure the GlusterFS shared file system.
- Configure each node of the cluster.
- Configure the cluster for high availability.
- Test the cluster.
Costs
This tutorial uses billable components of Google Cloud, including:
- Compute Engine
- Networking
Use the Pricing Calculator to generate a cost estimate based on your projected usage.
Before you begin
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
- Enable the Compute Engine API.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. See Cleaning up for more detail.
Architecture
To create a highly available and scalable deployment of IBM MQ, this solution combines queue manager clusters with multi-instance queue managers. Multi-instance queue managers run in an active/standby configuration, using a shared volume to share configuration and state data. Clustered queue managers share configuration information using a network channel and can perform load balancing of incoming messages. However, message state is not shared between the two queue managers.
By using both deployment models, you can achieve redundancy at the queue manager level and then scale by distributing the load across one or more queue managers.
In this tutorial, you create two queue managers, referred to as A and B. For each queue manager, you create a primary node and a standby node (mq-1 and mq-2 for queue manager A, and mq-3 and mq-4 for queue manager B). To route traffic to the primary instances, you use internal load balancers, one for each queue manager. Consumers and publishers point to the load balancer addresses as if those were the queue managers' direct addresses. The queue managers also communicate with each other through the internal load balancers.
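As an illustration of this design (not a step in the tutorial), a client running on a VM in the same VPC network could point at the load balancer address for queue manager A instead of a specific node. The sketch below is an assumption for illustration only: it presumes a standard IBM MQ client installation and uses the APP.SVRCONN channel, the app user, and port 1414 that you define later in this tutorial.

# Hedged sketch: connect a client through the internal load balancer for
# queue manager A (10.128.0.100) rather than directly to mq-1 or mq-2.
# APP.SVRCONN, the app user, and port 1414 are defined later in this tutorial.
export MQSERVER='APP.SVRCONN/TCP/10.128.0.100(1414)'
export MQSAMP_USER_ID=app
/opt/mqm/samp/bin/amqsputc APP.QUEUE.1

Because the client only knows the load balancer address, a failover from the primary to the standby node is transparent to it apart from a reconnect.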
IBM MQ multi-instance queue managers require shared storage. In this tutorial, you use GlusterFS, a distributed, scalable file system, as a shared file system between the nodes of each multi-instance queue manager.
Creating the Compute Engine instances
You now create the compute resources required for this tutorial.
Compute Engine instances mq-1 and mq-2 will be the primary and standby nodes for queue manager A, and mq-3 and mq-4 will be the primary and standby nodes for queue manager B. Because the instances are in different zones, you also create an unmanaged instance group for each instance. You will attach the unmanaged instance groups to the load balancers later.
Open Cloud Shell.
Create a startup script for the MQ compute instances:
cat << 'EOF' > mqstartup.sh
#!/bin/bash

if [ -f /root/INSTALLATION_DONE ]; then
  echo "Skipping because installation completed."
  exit 0
fi

# Docker installation
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce

# GlusterFS installation
apt-get install -y glusterfs-server

# Format and mount the persistent disk
mkdir -p /data
mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
mount -o discard,defaults /dev/sdb /data

touch /root/INSTALLATION_DONE
EOF
Create a Compute Engine instance for the primary node for queue manager A:
gcloud compute instances create mq-1 \
    --zone=us-central1-c \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=10GB \
    --boot-disk-type=pd-standard \
    --boot-disk-device-name=mq-1 \
    --create-disk=mode=rw,size=10,type=projects/${DEVSHELL_PROJECT_ID}/zones/us-central1-c/diskTypes/pd-ssd,name=gluster-disk-1 \
    --tags=ibmmq \
    --metadata-from-file startup-script=mqstartup.sh
Create an unmanaged instance group and add the instance:
gcloud compute instance-groups unmanaged create mq-group-1 \
    --zone=us-central1-c

gcloud compute instance-groups unmanaged add-instances mq-group-1 \
    --zone=us-central1-c \
    --instances=mq-1
Create a Compute Engine instance for the standby node of queue manager A:
gcloud compute instances create mq-2 \
    --zone=us-central1-b \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=10GB \
    --boot-disk-type=pd-standard \
    --boot-disk-device-name=mq-2 \
    --create-disk=mode=rw,size=10,type=projects/${DEVSHELL_PROJECT_ID}/zones/us-central1-b/diskTypes/pd-ssd,name=gluster-disk-2 \
    --tags=ibmmq \
    --metadata-from-file startup-script=mqstartup.sh
Create an unmanaged instance group and add the instance:
gcloud compute instance-groups unmanaged create mq-group-2 \
    --zone=us-central1-b

gcloud compute instance-groups unmanaged add-instances mq-group-2 \
    --zone=us-central1-b \
    --instances=mq-2
Create a Compute Engine instance for the primary node of queue manager B:
gcloud compute instances create mq-3 \
    --zone=us-central1-a \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=10GB \
    --boot-disk-type=pd-standard \
    --boot-disk-device-name=mq-3 \
    --create-disk=mode=rw,size=10,type=projects/${DEVSHELL_PROJECT_ID}/zones/us-central1-a/diskTypes/pd-ssd,name=gluster-disk-3 \
    --tags=ibmmq \
    --metadata-from-file startup-script=mqstartup.sh
Create an unmanaged instance group and add the instance:
gcloud compute instance-groups unmanaged create mq-group-3 \
    --zone=us-central1-a

gcloud compute instance-groups unmanaged add-instances mq-group-3 \
    --zone=us-central1-a \
    --instances=mq-3
Create a Compute Engine instance for the standby node of queue manager B:
gcloud compute instances create mq-4 \
    --zone=us-central1-f \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=10GB \
    --boot-disk-type=pd-standard \
    --boot-disk-device-name=mq-4 \
    --create-disk=mode=rw,size=10,type=projects/${DEVSHELL_PROJECT_ID}/zones/us-central1-f/diskTypes/pd-ssd,name=gluster-disk-4 \
    --tags=ibmmq \
    --metadata-from-file startup-script=mqstartup.sh
Create an unmanaged instance group and add the instance:
gcloud compute instance-groups unmanaged create mq-group-4 \
    --zone=us-central1-f

gcloud compute instance-groups unmanaged add-instances mq-group-4 \
    --zone=us-central1-f \
    --instances=mq-4
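Before you continue, you can optionally confirm that all four instances exist and that the startup script has finished on each one (it can take a few minutes). This verification is an addition to the original procedure:

gcloud compute instances list --filter="name~'^mq-[0-9]$'"

# On each instance, check that Docker and GlusterFS are installed and the
# data disk is mounted. Repeat for mq-2, mq-3, and mq-4 with their zones.
gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command='docker --version && gluster --version && df -h /data && sudo ls /root/INSTALLATION_DONE'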
Creating the firewall rules
To allow the nodes in the cluster to communicate with each other and to receive traffic from the load balancers, you must create the appropriate firewall rules.
Create a firewall rule to allow traffic between nodes of the cluster:
gcloud compute firewall-rules create ibmmq-transport \
    --allow=tcp:1414 \
    --target-tags ibmmq \
    --source-tags ibmmq
Create a firewall rule to allow traffic to the queue managers from the health checkers:
gcloud compute firewall-rules create fw-allow-health-checks \
    --allow=tcp:1414 \
    --target-tags ibmmq \
    --source-ranges 35.191.0.0/16,130.211.0.0/22
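You can optionally confirm that both rules exist; this verification is an addition to the original procedure:

gcloud compute firewall-rules list \
    --filter="name~'ibmmq-transport|fw-allow-health-checks'"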
Setting up the internal TCP load balancers
You now create two internal load balancers, one for each queue manager. Each load balancer monitors its two instances to determine which one is the healthy primary instance and routes traffic to it.
Create the health check:
gcloud compute health-checks create tcp hc-tcp-1414 \
    --description="Health check: TCP 1414" \
    --check-interval=2s \
    --timeout=2s \
    --healthy-threshold=2 \
    --unhealthy-threshold=2 \
    --port=1414
Create the backend services:
gcloud compute backend-services create mqm-svc-a \
    --load-balancing-scheme internal \
    --region us-central1 \
    --health-checks hc-tcp-1414 \
    --protocol tcp

gcloud compute backend-services create mqm-svc-b \
    --load-balancing-scheme internal \
    --region us-central1 \
    --health-checks hc-tcp-1414 \
    --protocol tcp
Add the unmanaged instance groups to the backend services:
gcloud compute backend-services add-backend mqm-svc-a \
    --instance-group mq-group-1 \
    --instance-group-zone us-central1-c \
    --region us-central1

gcloud compute backend-services add-backend mqm-svc-a \
    --instance-group mq-group-2 \
    --instance-group-zone us-central1-b \
    --region us-central1

gcloud compute backend-services add-backend mqm-svc-b \
    --instance-group mq-group-3 \
    --instance-group-zone us-central1-a \
    --region us-central1

gcloud compute backend-services add-backend mqm-svc-b \
    --instance-group mq-group-4 \
    --instance-group-zone us-central1-f \
    --region us-central1
Create the forwarding rule:
gcloud compute forwarding-rules create mqm-svc-a-forwarding-rule \
    --load-balancing-scheme internal \
    --ports 1414 \
    --network default \
    --address 10.128.0.100 \
    --region us-central1 \
    --backend-service mqm-svc-a

gcloud compute forwarding-rules create mqm-svc-b-forwarding-rule \
    --load-balancing-scheme internal \
    --ports 1414 \
    --network default \
    --address 10.128.0.101 \
    --region us-central1 \
    --backend-service mqm-svc-b
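To confirm that both forwarding rules were created with the expected internal addresses, you can optionally list them; this verification is an addition to the original procedure:

gcloud compute forwarding-rules list \
    --filter="name~'mqm-svc'"

The output should show mqm-svc-a-forwarding-rule on 10.128.0.100 and mqm-svc-b-forwarding-rule on 10.128.0.101.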
Creating and mounting the GlusterFS volume for queue manager A
In this tutorial, you use the GlusterFS distributed file system as shared storage between the nodes of a queue manager. You now set up GlusterFS for queue manager A by deploying it on the mq-1 and mq-2 instances.
Initialize the GlusterFS trusted pool on mq-1 by probing mq-2:

gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command='sudo mkdir -p /data/gv0 && sudo gluster peer probe mq-2' \
    -- -t
Initialize the GlusterFS trusted pool on mq-2 by probing mq-1:

gcloud compute ssh mq-2 \
    --zone=us-central1-b \
    --command='sudo mkdir -p /data/gv0 && sudo gluster peer probe mq-1' \
    -- -t
Start GlusterFS replication:
gcloud compute ssh mq-2 \
    --zone=us-central1-b \
    --command='sudo gluster volume create mq-data replica 2 mq-1:/data/gv0 mq-2:/data/gv0 && sudo gluster volume start mq-data' \
    -- -t
Mount the shared volume on mq-2:

gcloud compute ssh mq-2 \
    --zone=us-central1-b \
    --command='sudo mkdir -p /mnt/mqm_glusterfs && sudo mount -t glusterfs mq-1:/mq-data /mnt/mqm_glusterfs' \
    -- -t
Mount the shared volume on mq-1:

gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command='sudo mkdir -p /mnt/mqm_glusterfs && sudo mount -t glusterfs mq-1:/mq-data /mnt/mqm_glusterfs' \
    -- -t
Verify the shared volume status:
gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command='sudo gluster volume info'
Except for the volume ID, the output looks like the following:
Volume Name: mq-data
Type: Replicate
Volume ID: ad63f6df-8469-4f30-9282-5a285d1a2b87
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: mq-1:/data/gv0
Brick2: mq-2:/data/gv0
Options Reconfigured:
performance.readdir-ahead: on
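The mount commands above do not persist across reboots. If you want the GlusterFS volume to be remounted automatically when a node restarts, you can optionally add an /etc/fstab entry on each node. This is a suggested addition, not part of the original steps; for example, on mq-1:

gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command="echo 'mq-1:/mq-data /mnt/mqm_glusterfs glusterfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab" \
    -- -t

Repeat on mq-2, and later on mq-3 and mq-4 using mq-3:/mq-data as the source.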
Creating and mounting the GlusterFS volume for queue manager B
You now set up GlusterFS for queue manager B by deploying it on the mq-3 and mq-4 instances.
Initialize the GlusterFS trusted pool on mq-3 by probing mq-4:

gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command='sudo mkdir -p /data/gv0 && sudo gluster peer probe mq-4' \
    -- -t
Initialize the GlusterFS trusted pool on mq-4 by probing mq-3:

gcloud compute ssh mq-4 \
    --zone=us-central1-f \
    --command='sudo mkdir -p /data/gv0 && sudo gluster peer probe mq-3' \
    -- -t
Start GlusterFS replication:
gcloud compute ssh mq-4 \
    --zone=us-central1-f \
    --command='sudo gluster volume create mq-data replica 2 mq-3:/data/gv0 mq-4:/data/gv0 && sudo gluster volume start mq-data' \
    -- -t
Mount the shared volume on mq-4:

gcloud compute ssh mq-4 \
    --zone=us-central1-f \
    --command='sudo mkdir -p /mnt/mqm_glusterfs && sudo mount -t glusterfs mq-3:/mq-data /mnt/mqm_glusterfs' \
    -- -t
Mount the shared volume on mq-3:

gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command='sudo mkdir -p /mnt/mqm_glusterfs && sudo mount -t glusterfs mq-3:/mq-data /mnt/mqm_glusterfs' \
    -- -t
Verify the shared volume status:
gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command='sudo gluster volume info'
Except for the volume ID, you see the following output:
Volume Name: mq-data
Type: Replicate
Volume ID: ad63f6df-8469-4f30-9282-5a285d1a2b87
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: mq-3:/data/gv0
Brick2: mq-4:/data/gv0
Options Reconfigured:
performance.readdir-ahead: on
Initializing queue manager A
You now run a series of commands on mq-1 that define the name of the queue manager, the shared storage, the incoming and outgoing communication channels, authentication, and the other queue manager in the queue manager cluster.
In Cloud Shell, run a temporary MQ container on mq-1 and connect to it:

gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command='sudo docker run -it --entrypoint=/bin/bash --env LICENSE=accept --env MQ_QMGR_NAME=A --volume /mnt/mqm_glusterfs:/mnt/mqm --network host --name=ibmmq-init --rm=true ibmcom/mq:latest' \
    -- -t
Create and start queue manager A:
mkdir -p /mnt/mqm/data
chown mqm:mqm /mnt/mqm/data
/opt/mqm/bin/crtmqdir -f -s
su mqm -c "cp /etc/mqm/web/installations/Installation1/servers/mqweb/*.xml /var/mqm/web/installations/Installation1/servers/mqweb/"
su mqm -c "crtmqm -q -p 1414 $MQ_QMGR_NAME"
su mqm -c "strmqm -x"
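At this point, queue manager A should be running inside the temporary container. If you want to confirm this before you continue, you can optionally check its status; this check is an addition to the original steps:

# Still inside the temporary container on mq-1
su mqm -c "dspmq -x -m $MQ_QMGR_NAME"

The output should show queue manager A with a status of Running.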
Configure queue manager A for clustering:
echo " * Define the full repository for the cluster ALTER QMGR REPOS(GCP) * Define backstop rule for channel auth SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) DESCR('Back-stop rule - Blocks everyone') ACTION(REPLACE) * Clustering channels and listeners DEFINE LISTENER(A_LS) TRPTYPE(TCP) CONTROL(QMGR) DEFINE CHANNEL(GCP.A) CHLTYPE(CLUSRCVR) CONNAME('10.128.0.100') CLUSTER(GCP) REPLACE DEFINE CHANNEL(GCP.B) CHLTYPE(CLUSSDR) CONNAME('10.128.0.101') CLUSTER(GCP) REPLACE SET CHLAUTH('GCP.A') TYPE (QMGRMAP) QMNAME(B) USERSRC(CHANNEL) ADDRESS('*')" | runmqsc $MQ_QMGR_NAME
Configure the application queue, channel, and authorization:
echo " * Application queues DEFINE QLOCAL('APP.QUEUE.1') DEFBIND(NOTFIXED) CLWLUSEQ(ANY) CLUSTER(GCP) REPLACE * Application topics DEFINE TOPIC('APP.BASE.TOPIC') TOPICSTR('app/') REPLACE * Application connection authentication DEFINE AUTHINFO('APP.AUTHINFO') AUTHTYPE(IDPWOS) CHCKCLNT(REQDADM) CHCKLOCL(OPTIONAL) ADOPTCTX(YES) REPLACE ALTER QMGR CONNAUTH('APP.AUTHINFO') REFRESH SECURITY(*) TYPE(CONNAUTH) * Application channels DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('app') REPLACE SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED) DESCR('Allows connection via APP channel') ACTION(REPLACE) * Application auth records SET AUTHREC GROUP('mqclient') OBJTYPE(QMGR) AUTHADD(CONNECT,INQ) SET AUTHREC PROFILE('APP.**') GROUP('mqclient') OBJTYPE(QUEUE) AUTHADD(BROWSE,GET,INQ,PUT) SET AUTHREC PROFILE('APP.**') GROUP('mqclient') OBJTYPE(TOPIC) AUTHADD(PUB,SUB)" | runmqsc $MQ_QMGR_NAME exit
Initializing queue manager B
You now run a similar series of commands on mq-3 that define the same information for the second queue manager: the name of the queue manager, the shared storage, the incoming and outgoing communication channels, authentication, and the other queue manager in the queue manager cluster.
In Cloud Shell, run a temporary MQ container on mq-3 and connect to it:

gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command='sudo docker run -it --entrypoint=/bin/bash --env LICENSE=accept --env MQ_QMGR_NAME=B --volume /mnt/mqm_glusterfs:/mnt/mqm --network host --name=ibmmq-init --rm=true ibmcom/mq:latest' \
    -- -t
Create and start queue manager B, and then enter the cluster and queue definitions. (This is a consolidated version of steps 2 through 4 from the procedure you ran for queue manager A.)
mkdir -p /mnt/mqm/data
chown mqm:mqm /mnt/mqm/data
/opt/mqm/bin/crtmqdir -f -s
su mqm -c "cp /etc/mqm/web/installations/Installation1/servers/mqweb/*.xml /var/mqm/web/installations/Installation1/servers/mqweb/"
su mqm -c "crtmqm -q -p 1414 $MQ_QMGR_NAME"
su mqm -c "strmqm -x"

echo "
* Define the full repository for the cluster
ALTER QMGR REPOS(GCP)

* Define backstop rule for channel auth
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) DESCR('Back-stop rule - Blocks everyone') ACTION(REPLACE)

* Clustering channels and listeners
DEFINE LISTENER(B_LS) TRPTYPE(TCP) CONTROL(QMGR)
DEFINE CHANNEL(GCP.B) CHLTYPE(CLUSRCVR) CONNAME('10.128.0.101') CLUSTER(GCP) REPLACE
DEFINE CHANNEL(GCP.A) CHLTYPE(CLUSSDR) CONNAME('10.128.0.100') CLUSTER(GCP) REPLACE
SET CHLAUTH('GCP.B') TYPE(QMGRMAP) QMNAME(A) USERSRC(CHANNEL) ADDRESS('*')

* Application queues
DEFINE QLOCAL('APP.QUEUE.1') DEFBIND(NOTFIXED) CLWLUSEQ(ANY) CLUSTER(GCP) REPLACE

* Application topics
DEFINE TOPIC('APP.BASE.TOPIC') TOPICSTR('app/') REPLACE

* Application connection authentication
DEFINE AUTHINFO('APP.AUTHINFO') AUTHTYPE(IDPWOS) CHCKCLNT(REQDADM) CHCKLOCL(OPTIONAL) ADOPTCTX(YES) REPLACE
ALTER QMGR CONNAUTH('APP.AUTHINFO')
REFRESH SECURITY(*) TYPE(CONNAUTH)

* Application channels
DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('app') REPLACE
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED) DESCR('Allows connection via APP channel') ACTION(REPLACE)

* Application auth records
SET AUTHREC GROUP('mqclient') OBJTYPE(QMGR) AUTHADD(CONNECT,INQ)
SET AUTHREC PROFILE('APP.**') GROUP('mqclient') OBJTYPE(QUEUE) AUTHADD(BROWSE,GET,INQ,PUT)
SET AUTHREC PROFILE('APP.**') GROUP('mqclient') OBJTYPE(TOPIC) AUTHADD(PUB,SUB)" | runmqsc $MQ_QMGR_NAME

exit
Starting IBM MQ in HA mode
Now that the queue managers are initialized, you can start the IBM MQ cluster.
In Cloud Shell, start queue manager A on mq-1:

gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command="sudo docker run -it --entrypoint=/bin/bash --env LICENSE=accept --env MQ_QMGR_NAME=A --volume /mnt/mqm_glusterfs:/mnt/mqm --publish 1414:1414 --network host --name=ibmmq-node --rm=true -d ibmcom/mq:latest -c 'echo app:APPPass1! | chpasswd && su mqm -c \"strmqm -x\"; tail -f /dev/null'"
For the purposes of this tutorial, in this command you set APPPass1! as the password for the user app.

Start queue manager A on mq-2:

gcloud compute ssh mq-2 \
    --zone=us-central1-b \
    --command="sudo docker run -it --entrypoint=/bin/bash --env LICENSE=accept --env MQ_QMGR_NAME=A --volume /mnt/mqm_glusterfs:/mnt/mqm --publish 1414:1414 --network host --name=ibmmq-node --rm=true -d ibmcom/mq:latest -c 'echo app:APPPass1! | chpasswd && su mqm -c \"strmqm -x\"; tail -f /dev/null'"
In this command, you also set APPPass1! as the password for the user app.

To check the status of your load balancer, go to the Load balancing page in the Cloud Console and select the load balancer mqm-svc-a from the list. When the load balancer mqm-svc-a shows a healthy instance group, you're ready to go to the next step.
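If you prefer to check from Cloud Shell instead of the Cloud Console, you can run the same kind of health check that is used later in this tutorial for queue manager B:

gcloud compute backend-services get-health mqm-svc-a \
    --region=us-central1

When one of the two backends reports healthState: HEALTHY, the primary instance for queue manager A is ready to serve traffic.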
In Cloud Shell, start queue manager B on mq-3:

gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command="sudo docker run -it --entrypoint=/bin/bash --env LICENSE=accept --env MQ_QMGR_NAME=B --volume /mnt/mqm_glusterfs:/mnt/mqm --publish 1414:1414 --network host --name=ibmmq-node --rm=true -d ibmcom/mq:latest -c 'echo app:APPPass1! | chpasswd && su mqm -c \"strmqm -x\"; tail -f /dev/null'"
Start queue manager B on mq-4:

gcloud compute ssh mq-4 \
    --zone=us-central1-f \
    --command="sudo docker run -it --entrypoint=/bin/bash --env LICENSE=accept --env MQ_QMGR_NAME=B --volume /mnt/mqm_glusterfs:/mnt/mqm --publish 1414:1414 --network host --name=ibmmq-node --rm=true -d ibmcom/mq:latest -c 'echo app:APPPass1! | chpasswd && su mqm -c \"strmqm -x\"; tail -f /dev/null'"
Verifying the cluster
Before you test your deployment, you need to verify that queue manager A and queue manager B are communicating correctly.
In Cloud Shell, connect to the IBM MQ container on mq-3:

gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command="sudo docker exec -it ibmmq-node /bin/bash" \
    -- -t
Check the status of the cluster communication channels:
echo "DISPLAY CHSTATUS(*)" | runmqsc $MQ_QMGR_NAME && exit
If everything is working correctly, the output looks like the following:
5724-H72 (C) Copyright IBM Corp. 1994, 2018.
Starting MQSC for queue manager B.

     1 : display chstatus(*)
AMQ8417I: Display Channel Status details.
   CHANNEL(GCP.B)                CHLTYPE(CLUSRCVR)
   CONNAME(10.128.0.2)           CURRENT
   RQMNAME(A)                    STATUS(RUNNING)
   SUBSTATE(RECEIVE)
AMQ8417I: Display Channel Status details.
   CHANNEL(GCP.A)                CHLTYPE(CLUSSDR)
   CONNAME(10.128.0.100(1414))   CURRENT
   RQMNAME(A)                    STATUS(RUNNING)
   SUBSTATE(MQGET)
   XMITQ(SYSTEM.CLUSTER.TRANSMIT.QUEUE)
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
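As an additional, optional check of cluster membership (not part of the original procedure), you can reconnect to the container as in the previous step and display the queue managers that are known to the cluster:

echo "DISPLAY CLUSQMGR(*)" | runmqsc $MQ_QMGR_NAME && exit

If clustering is configured correctly, the output should include entries for both queue manager A and queue manager B.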
Sending messages to the cluster
You now test the cluster by sending two messages to a queue named APP.QUEUE.1, using a sample application provided by IBM in the Docker image.
Send test messages to queue manager B
In Cloud Shell, connect to the IBM MQ container on mq-3:

gcloud compute ssh mq-3 \
    --zone=us-central1-a \
    --command="sudo docker exec -it ibmmq-node /bin/bash" \
    -- -t
In the container shell, launch the message sender application, connecting to queue manager B:
MQSAMP_USER_ID=app /opt/mqm/samp/bin/amqsput APP.QUEUE.1 B
When prompted, authenticate by entering the app password you created earlier (in this tutorial, it's APPPass1!).
When you see target queue is APP.QUEUE.1, send a test message to the queue by entering abcdef.
Send another test message by entering 123456.
Press Enter to quit the message sender application.
The output looks like the following:
Sample AMQSPUT0 start
Enter password: *********
target queue is APP.QUEUE.1
abcdef
123456

Sample AMQSPUT0 end
Verify that there is one message in flight in queue manager B:

echo "display qstatus(APP.QUEUE.1)" | runmqsc $MQ_QMGR_NAME | grep CURDEPTH

You see the following output:

CURDEPTH(1) IPPROCS(0)

Queue manager B has only one message in flight even though you sent two messages in the previous step. IBM MQ load-balances messages between the clustered queue managers, so the second message was routed to queue manager A.

Close the SSH connection:

exit
Verify messages in flight in queue manager A
In Cloud Shell, connect to the IBM MQ container on mq-1:

gcloud compute ssh mq-1 \
    --zone=us-central1-c \
    --command="sudo docker exec -it ibmmq-node /bin/bash" \
    -- -t
In the terminal window, verify that there is one message in flight in queue manager A:
echo "display qstatus(APP.QUEUE.1)" | runmqsc $MQ_QMGR_NAME | grep CURDEPTH
You see the following output:
CURDEPTH(1) IPPROCS(0)
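If you also want to confirm that the message can be consumed, you can optionally retrieve it with the amqsget sample from the same container. This step is an addition to the original procedure, and it removes the message from the queue:

MQSAMP_USER_ID=app /opt/mqm/samp/bin/amqsget APP.QUEUE.1 A

When prompted, enter the app password (APPPass1!). The sample displays whichever test message was routed to queue manager A, waits a few seconds for further messages, and then ends.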
Testing queue manager high availability
You now shut down instance mq-3 and verify that mq-4 becomes the primary node for queue manager B.
In Cloud Shell, stop instance mq-3:

gcloud compute instances stop mq-3 \
    --zone=us-central1-a
Verify that mq-4 has become the primary instance by checking that it is healthy from the load balancer's perspective:

gcloud compute backend-services get-health mqm-svc-b \
    --region=us-central1
You see output like the following:
backend: https://www.googleapis.com/compute/v1/projects/ibmmq1/zones/us-central1-a/instanceGroups/mq-group-3
status:
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/ibmmq1/zones/us-central1-f/instanceGroups/mq-group-4
status:
  healthStatus:
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/ibmmq1/zones/us-central1-f/instances/mq-4
    ipAddress: 10.128.0.5
    port: 80
  kind: compute#backendServiceGroupHealth
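When you have finished testing failover, you can optionally bring mq-3 back online so that it rejoins as the standby node for queue manager B. This step is an addition to the original procedure. Because the ibmmq-node container was started with --rm, after the instance boots you need to run the same docker run command from the "Starting IBM MQ in HA mode" section on mq-3 again.

gcloud compute instances start mq-3 \
    --zone=us-central1-a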
Moving to production
In this tutorial, you used IBM MQ Advanced for Developers. Before you move to production, review the other available licensing options and feature sets, and review IBM's documentation on running IBM MQ in a container.
Cleaning up
After you've finished the tutorial, you can clean up the resources that you created on Google Cloud so they won't take up quota and you won't be billed for them in the future. The following sections describe how to delete or turn off these resources.
Delete the project
- In the Cloud Console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- Read more about IBM MQ.
- Read about IBM MQ high availability configurations.
- Read more about internal load balancers.
- Learn more about GlusterFS.
- Try out other Google Cloud features for yourself. Have a look at our tutorials.