IoT & Devices
HTTP vs. MQTT: A tale of two IoT protocols
When it comes to making software design decisions for Internet of Things (IoT) devices, it’s important to fit your functional requirements within the capabilities of resource-constrained devices. More specifically, you often need to efficiently offload data from the devices into the cloud, which eventually requires you to evaluate different communication protocols.
Google Cloud IoT Core currently supports device-to-cloud communication through two protocols: HTTP and MQTT. By examining the performance characteristics of each of these two protocols, you can make an informed decision on which is more helpful to your particular use case.
This post takes an experimental approach: we collected metrics on response time and packet size when sending identical payloads over MQTT and HTTP, while varying both the payload size and the number of messages sent over one connection session. Doing it this way highlights some of the characteristics of the two protocols, and the differences between them.
Setting up the experiment
We set up our experiment by using a single registry in Cloud IoT Core that accepts both HTTP and MQTT connections. The registry routes device messages to a single Pub/Sub topic that has one Cloud Functions endpoint as the subscriber: the Cloud Function simply writes the payload to a log.
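As a sketch of the subscriber side, a Pub/Sub-triggered Cloud Function that just logs the payload might look like the following. This uses the classic background-function signature, where `event["data"]` carries the base64-encoded message body; the function name `log_payload` is our own choice, not from the original test code.

```python
import base64

def log_payload(event, context):
    """Pub/Sub-triggered Cloud Function (hypothetical name) that logs the
    device payload routed from Cloud IoT Core.

    `event["data"]` holds the base64-encoded message published by the device.
    """
    payload = base64.b64decode(event["data"]).decode("utf-8")
    print(payload)  # Cloud Functions forwards stdout to Cloud Logging
    return payload
```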
The end device is simulated on our laptop, which runs both an MQTT client and an HTTP client, measures the response time, and traces the packets sent over the wire.
Properties of the protocols
Before we go into the implementation details, let's take a look at the differences between MQTT and HTTP that influence how the tests are set up.
MQTT (Message Queuing Telemetry Transport), as the name suggests, follows a publish-subscribe pattern, in which clients connect to a broker and remote devices publish messages to a shared topic. The protocol is optimized for small message size, for efficiency.
HTTP adheres to the standard request-response model.
To make a fair comparison between the two protocols, all the steps in the authentication process (handshake) need to be taken into account. For the MQTT case, this means that the connect and disconnect messages are measured sequentially with the actual data messages. Since the MQTT case carries this connection overhead, we vary the number of data messages sent between one connect-disconnect cycle and the next.
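The measurement shape described above can be sketched as follows. This is a minimal illustration with a stub client (names are ours, not from the test source): the connect and disconnect are timed together with the data messages, so the per-message figure includes a share of the connection overhead, which shrinks as more messages share one cycle.

```python
import time

class StubMQTTClient:
    """Stand-in for a real MQTT client, just to show the measurement shape."""
    def connect(self): pass
    def publish(self, topic, payload): pass
    def disconnect(self): pass

def avg_response_time(client, topic, payload, n_msgs):
    """Time one connect-publish*n-disconnect cycle; return seconds/message."""
    start = time.monotonic()
    client.connect()
    for _ in range(n_msgs):
        client.publish(topic, payload)
    client.disconnect()
    # Connection setup/teardown cost is amortized over n_msgs messages.
    return (time.monotonic() - start) / n_msgs
```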
Tracing packets sent over the wire
To get a detailed view of the packet size being transmitted for both protocols, we used Wireshark.
Locust client implementation
We used Locust.io to perform load tests and to compile the metrics. Locust.io provides a simple HTTP client from which to collect timing data; for the MQTT profiling, we tested with the Eclipse Paho MQTT client package, authenticated via JWT with Cloud IoT Core. The source code for the test is available here.
Let's take a closer look at the MQTT Locust client. You'll likely notice several things. First, an initial connect and disconnect is issued in the `on_start` function to preload the MQTT client with all the credentials it needs to connect with Cloud IoT Core, so that credentials can be reused in each measurement cycle.
When publishing messages, we use the qos=1 flag to ensure that the message was delivered by waiting for a PUBACK from Cloud IoT Core, which is comparable to the request-response cycle of the HTTP protocol. Also, the Paho MQTT client publishes all messages asynchronously, which forces us to call the wait_for_publish() function on the MQTTMessageInfo object to block execution until a PUBACK response is received for each message.
for i in range(1, numberOfMsg + 1):
    msgInfo = self.client.publish(mqtt_topic, payload, qos=1)
    msgInfo.wait_for_publish()
Varying the number of messages: We measured the response time for sending 1, 100, and 1000 messages over a single connection cycle each, and also captured the packet sizes that were sent over the wire.
Varying the size of messages: Here we measured the response time for sending a single message with 1, 10, and 100 property fields over a single connection cycle each, and then captured the packet sizes sent.
MQTT response time
Below are the results of running both the HTTP and MQTT cases with only one simulated Locust user. The message transmitted is a simple JSON object containing a single key-value pair.
HTTP response time
Packet size capturing results
To get a more accurate view of what packets are actually being sent over the wire, we used Wireshark to capture all packets transferred to and from the TCP port used by Locust.io. The size of each packet was also captured to give a precise measure of the data overhead of both protocols.
The wire log shows the handshake process that sets up a TLS tunnel for MQTT communication. The main part of this process consists of the exchange and verification of both the certificates and the shared secret.
The wire log over a single message-publishing cycle shows an MQTT publish message from the client to the server, an MQTT publish ACK message back to the client, and finally a TCP ACK from the client for the MQTT ACK it received.
The initialization procedure for setting up the TLS tunnel is the same for the HTTP case as it is for the MQTT case, and the now established secure tunnel is re-used by all subsequent requests.
The HTTP protocol is stateless, meaning the JWT token is sent in the header of every publish event request, and the Cloud IoT Core HTTP bridge responds to every request.
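To make the per-request overhead concrete, here is a sketch of how such a publish request could be constructed with the standard library. The endpoint path and the `binary_data` field name reflect the Cloud IoT Core HTTP bridge's publishEvent method as we understand it, but treat them as assumptions and check the API reference; the resource path and token below are placeholders.

```python
import base64
import json
import urllib.request

def build_publish_request(device_path, jwt_token, payload: bytes):
    """Build (but do not send) a publishEvent request to the Cloud IoT Core
    HTTP bridge. `device_path` is a full device resource path such as
    projects/P/locations/L/registries/R/devices/D (placeholder values)."""
    url = f"https://cloudiotcore.googleapis.com/v1/{device_path}:publishEvent"
    body = json.dumps(
        {"binary_data": base64.b64encode(payload).decode("ascii")}
    ).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            # The JWT rides along on every single request; there is no
            # session state to amortize it over, unlike an MQTT connect.
            "authorization": f"Bearer {jwt_token}",
            "content-type": "application/json",
        },
        method="POST",
    )
```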
The following table summarizes the packet sizes sent during each of the transfer states for both MQTT and HTTP:
Looking at the results comparing response time over one connection cycle for MQTT, we can clearly see that the initial connection setup raises the response time for a single message to roughly the level of sending a single message over HTTP, which in our case rounds up to 120 ms per message. The difference in the amount of data sent over the wire is even more significant: the MQTT case sends around 6,300 bytes for a single message, which is larger than the roughly 5,600 bytes for HTTP. The packet traffic log shows that the dominant part, more than 90% of the data transmitted, goes to setting up and tearing down the connection.
The real advantage of MQTT over HTTP occurs when we reuse a single connection for sending multiple messages: the average response time per message converges to around 40 ms and the data amount per message to around 400 bytes. Note that in the case of HTTP, these reductions simply aren't possible.
From the results of the payload-size test, we observed that response times stayed roughly constant as the payload size went up. The explanation is that the payloads being sent are small, so the full network capacity isn't utilized; as the payload size increases, more of that capacity is used. Another observation from the network packet log is that even as the amount of information packed into the payload increased by 10x and 100x, the amount of data actually transferred increased only by 1.8x and 9.8x respectively for MQTT, and by 1.2x and 3.4x for HTTP, which shows the effect of the protocol overhead when publishing messages.
The conclusion we can draw is that when choosing MQTT over HTTP, it’s really important to reuse the same connection as much as possible. If connections are set up and torn down frequently just to send individual messages, the efficiency gains are not significant compared to HTTP.
The greatest efficiency gains come from increasing the information density of each payload message that MQTT carries.
The most straightforward approach is to reduce the payload size so that more data can be transmitted in each payload, which can be achieved by choosing proper compression and packaging methods based on the type of data being generated. For instance, protobuf is an efficient way to serialize structured data.
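The effect of packaging choices is easy to demonstrate with standard-library tools. Protobuf needs a compiled schema, so as a stand-in this sketch compares plain JSON against fixed-width binary packing and zlib compression for a batch of hypothetical (timestamp, temperature) readings; the field names and values are illustrative only.

```python
import json
import struct
import zlib

# Ten hypothetical sensor readings: (unix timestamp, temperature) pairs.
readings = [(1650000000 + i, 20.0 + i * 0.1) for i in range(10)]

# Plain JSON: human-readable, but field names repeat in every reading.
as_json = json.dumps(
    [{"ts": ts, "temp": t} for ts, t in readings]
).encode("utf-8")

# Fixed-width binary packing (protobuf goes further with varints and a
# schema): one unsigned 32-bit timestamp plus one 32-bit float per reading.
as_packed = b"".join(struct.pack("<If", ts, t) for ts, t in readings)

# Compression also helps when the payload structure is repetitive.
as_compressed = zlib.compress(as_json)

print(len(as_json), len(as_packed), len(as_compressed))
```

With the 400-byte-per-message floor we measured for MQTT, shrinking the payload itself is what keeps the protocol overhead from dominating.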
For streaming applications, time-window bundling can increase the number of data points sent in each message. By choosing the window length wisely in relation to the data generation pace and the available network bandwidth, you can transmit more information with lower latency.
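A minimal sketch of such time-window bundling might look like the following; the class name and interface are hypothetical, and the window length is the tuning knob that trades latency against per-message overhead.

```python
import time

class WindowBundler:
    """Collect data points within a fixed time window and emit them as one
    bundle, to be published as a single MQTT payload (hypothetical helper)."""

    def __init__(self, window_seconds, clock=time.monotonic):
        self.window_seconds = window_seconds
        self.clock = clock  # injectable for testing
        self._points = []
        self._window_start = None

    def add(self, point):
        """Add a point; return the bundled list when the window closes,
        otherwise None."""
        now = self.clock()
        if self._window_start is None:
            self._window_start = now
        self._points.append(point)
        if now - self._window_start >= self.window_seconds:
            bundle, self._points = self._points, []
            self._window_start = None
            return bundle  # caller publishes this as one message
        return None
```

A longer window packs more points per message (better amortizing the ~400-byte MQTT overhead we measured), at the cost of delaying the oldest point in each bundle.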
In many IoT applications, the methods mentioned above cannot easily be applied because of the hardware constraints of the IoT devices. Depending on the functional requirements in each case, a viable solution is to use gateway devices with greater processing power and memory. The payload data is first delivered from the end device to the gateway, where the various optimizations can be applied before further delivery to Google Cloud.
Note: While this post intends to make comparisons between the two protocols, the actual response times depend on client network connectivity, the distance to the closest GCP edge node, where the IoT Core service is terminated, and the size of the transmitted message.