Building Real-Time Inventory Systems for Retail

This solution provides an in-depth review of the architecture needed to deploy a real-time inventory infrastructure on Google Cloud Platform. Real-time inventory systems are designed to update data-storage systems as soon as anything changes in the inventory. For example, in a retail store, inventory data can be updated automatically as soon as an item is sold, or even when stock is moved from one part of the store to another.

As retailers increasingly adopt strategies that combine online, offline, and multi-channel sales, they need technical solutions that address the challenges these approaches introduce. Real-time inventory sits at the intersection of customer experience improvements, operational efficiency gains, and reduced time to market for innovative solutions.

With real-time inventory, retailers can improve the customer experience by enabling store associates to spend more time with buyers and by helping customers find the products they’re looking for faster. Retailers can also make significant gains in operational efficiency by accurately knowing where product inventory is, and how much product is available, at any time. Actionable insights into inventory accuracy and movement also provide critical information that retailers need to fine-tune how they take products to market and bring customers into every sales channel.

For real-time inventory to work, retailers need to deploy the right infrastructure and make it available at a moment’s notice. That means leveraging the characteristics of a cloud-based architecture, including elastically scaling computing infrastructure, managed software systems to reduce administrative complexity, and best-in-class data processing and warehousing.

The following diagram shows the architecture of a real-time inventory system that uses Google Cloud Platform on the backend. The remainder of the article describes the components and their interactions.

Figure: A real-time inventory system that uses multiple Cloud Platform services.

Inventory tracking

Tracking systems for inventory in retail stores typically use a combination of tagged merchandise, tag readers, and a gateway responsible for transmitting inventory data. Such systems employ RFID tags because they are lightweight and easy to attach to merchandise, similar to a price tag. Tag readers are positioned in locations throughout a store, ensuring that the entire physical space can be scanned for RFID tags. Depending on the size and physical layout of the retail space, one or more gateways might be deployed. These components collect data from each reader’s passive and active scans, and transmit the data to centralized infrastructure.

Depending on business and technical requirements, data collection can range from capturing all events to aggregating events by time or other useful metadata. For example, all inventory arrival and departure events could be captured and sent immediately, while inventory repositioning within the store could be aggregated into a single daily transmission. After data is collected, the gateway is responsible for authenticating against the inventory data ingestion service, establishing a secure link, and transmitting the data.
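The exact payload format varies by vendor, but conceptually the gateway performs an authenticated HTTPS POST. The following minimal Java sketch shows what that transmission step might look like; the endpoint URL, bearer-token scheme, and JSON batch format are illustrative assumptions rather than part of this solution.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GatewayTransmitter {

  // Hypothetical ingestion endpoint; the real URL depends on your deployment.
  private static final String INGEST_URL = "https://inventory-ingest.example.com/v1/events";

  public static int sendBatch(String jsonBatch, String authToken) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(INGEST_URL).openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    // The gateway authenticates itself; a bearer token is one possible scheme.
    conn.setRequestProperty("Authorization", "Bearer " + authToken);

    // Transmit the batched inventory events over the secure link.
    try (OutputStream out = conn.getOutputStream()) {
      out.write(jsonBatch.getBytes(StandardCharsets.UTF_8));
    }
    return conn.getResponseCode(); // e.g., 200 on successful ingestion
  }
}
```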

Depending on the layout and limitations of the physical space, as well as on technical requirements, deploying inventory tracking systems can be fairly complex. It might make sense to seek help from ecosystem partners to deploy and configure the inventory tracking infrastructure. For more information about working with partners, see Google Cloud Platform Partner Community.

Data ingestion

In real-time inventory systems and infrastructure, the ingestion service periodically receives inventory data from each store and passes it along to downstream services. This service must satisfy several functional requirements: it must be highly available, scale elastically as the influx of inventory data grows or shrinks, and provide low-latency responses to store-level gateways. As part of inventory data ingestion, the service must also authenticate and authorize individual store-level gateways to ensure that only known, properly provisioned devices are sending data.

Given those requirements, Google App Engine is an appropriate solution. App Engine is a platform for quickly building and deploying scalable web applications without having to provision or configure individual servers. App Engine has the ability to scale up or down, automatically and in real time, based on incoming traffic patterns. With built-in load balancing, traffic splitting, versioning, and security scanning, App Engine is a platform well suited to the needs of real-time inventory data ingestion.
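As a sketch of what this ingestion tier might look like, the following hypothetical App Engine servlet authenticates a gateway and reads an uploaded event batch before handing it to the messaging layer described in the next section. The class, header, and helper names are illustrative assumptions, not part of this solution.

```java
import java.io.BufferedReader;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical App Engine handler for gateway uploads.
public class IngestServlet extends HttpServlet {

  @Override
  public void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws java.io.IOException {
    // Reject gateways that don't present a known credential.
    String token = req.getHeader("Authorization");
    if (token == null || !isKnownGateway(token)) {
      resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
      return;
    }

    // Read the event batch from the request body.
    StringBuilder body = new StringBuilder();
    try (BufferedReader reader = req.getReader()) {
      String line;
      while ((line = reader.readLine()) != null) {
        body.append(line);
      }
    }

    // Hand the batch to the messaging layer (sketched in the next section).
    publishToPubSub(body.toString());
    resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
  }

  private boolean isKnownGateway(String token) {
    // Placeholder: look the token up in a device registry.
    return true;
  }

  private void publishToPubSub(String payload) {
    // Placeholder: forward the batch to Cloud Pub/Sub.
  }
}
```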

Messaging services

For cloud infrastructure deployments, messaging is a critical component for applications and services to communicate in real time.

Google Cloud Pub/Sub is a fully managed, real-time messaging service that enables applications and services to send and receive messages by using push or pull delivery schemes. Cloud Pub/Sub supports one-to-one, one-to-many, and many-to-many message delivery semantics, enabling multiple, independent applications and services to communicate in real time.

In this solution, the messaging component delivers data from the inventory ingestion application to several downstream components. First, it pushes inventory data streams into Google Cloud Dataflow for in-flight data processing, as described in the next section. Second, it provides a notification mechanism, alerting multiple applications when inventory data has changed, as described in Integrating with applications and services.
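To make the hand-off concrete, the following sketch publishes a single inventory event by using the Cloud Pub/Sub Java client library; the project and topic names are hypothetical.

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class InventoryPublisher {

  public static void publishEvent(String jsonEvent) throws Exception {
    // Hypothetical project and topic names.
    TopicName topic = TopicName.of("my-retail-project", "inventory-events");
    Publisher publisher = Publisher.newBuilder(topic).build();
    try {
      PubsubMessage message = PubsubMessage.newBuilder()
          .setData(ByteString.copyFromUtf8(jsonEvent))
          .build();
      publisher.publish(message).get(); // block until the message is accepted
    } finally {
      publisher.shutdown(); // flush and release client resources
    }
  }
}
```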

Processing and persistence

Stream processing is the core service in a real-time inventory system. This service receives inventory data streams that are delivered through the messaging component, examines each incoming update of inventory data, and applies in-flight rules and outbound actions, such as persistence. In-flight rules could consist of time-based aggregations or roll-ups centered around individual items, stores, or regions. The service must be a general-purpose, stream-processing mechanism, capable of elastically scaling up or down depending on incoming data throughput, computing aggregations across sliding temporal windows, and pushing raw or aggregated data to multiple persistence services.

Cloud Dataflow handles these types of workloads. Cloud Dataflow consists of two major components: a set of SDKs that can be used to define a pipeline of data processing tasks and a managed service to dynamically execute data processing pipelines. The Cloud Dataflow managed service is capable of distributing data processing workloads across multiple Compute Engine VM instances, sharding and rebalancing the workload in flight.

In real-time inventory scenarios, Cloud Dataflow pulls streaming data from Cloud Pub/Sub and creates sliding, time-based views of the incoming data. Within each data window, you can apply a series of operations or transformations by using the Cloud Dataflow SDK. These transforms apply functions such as counting, filtering, or aggregating the data contained within each window.
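The following minimal pipeline sketch illustrates this pattern by using the Apache Beam Java SDK, the successor to the Cloud Dataflow SDK. It reads events from Cloud Pub/Sub, applies a sliding window, and counts events per UPC; the subscription name and windowing parameters are illustrative assumptions.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.windowing.SlidingWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

public class InventoryPipeline {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    pipeline
        // Hypothetical subscription carrying one UPC string per inventory event.
        .apply("ReadEvents", PubsubIO.readStrings()
            .fromSubscription("projects/my-retail-project/subscriptions/inventory-events"))
        // Ten-minute sliding windows, recomputed every minute.
        .apply("SlidingWindows", Window.<String>into(
            SlidingWindows.of(Duration.standardMinutes(10))
                .every(Duration.standardMinutes(1))))
        // Count events per UPC within each window.
        .apply("CountPerUpc", Count.perElement());
    // Further transforms would write raw events to Cloud Bigtable,
    // aggregates to BigQuery, and count deltas to Cloud SQL.

    pipeline.run();
  }
}
```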

The following diagram illustrates how you can process real-time inventory workloads. Each phase in the processing pipeline ends with persisting the output data into a specific data storage system: Google Cloud Bigtable, Google BigQuery, and Google Cloud SQL.

Figure: Data flows from Cloud Pub/Sub through a Cloud Dataflow pipeline to storage.

Time-series events

As shown in the diagram, each element in the incoming inventory data stream is written as an individual time-series event to Cloud Bigtable, a fully managed, NoSQL, big-data database service. Cloud Bigtable is based on Google’s internal Bigtable database infrastructure, which powers Google Search, Google Analytics, Google Maps, and Gmail. The service provides consistent, low-latency, high-throughput storage, making it ideal for recording each inventory event. Storing individual events supports offline predictive analytics and related big-data analysis, such as supply chain optimization, using tools such as managed Apache Spark or Apache Hadoop deployments with Google Cloud Dataproc, or Cloud Dataflow.

Cloud Bigtable schemas use a single row key associated with a series of columns. For real-time inventory workloads, event data streams are stored as time-series data, meaning that data models for tables should be “tall and narrow”. The downstream use cases for such datasets play a significant part in determining the data structure, which must support fast queries with minimal outbound filtering and processing. It can help to use a strategy that incorporates data denormalization: writing the same data elements multiple times by using different key-value schemas.

For example, an inventory-item-centric schema uses the following table format:

Row key: UPC + Timestamp
Columns: Store ID, Location, Reader, Style

Another schema uses a structure more specific to stores:

Row key: Store ID + Timestamp
Columns: UPCs, Location, Reader

And finally, a combined schema uses a structure similar to the following example:

Row key: UPC + Store ID + Timestamp
Columns: Location, Reader, Style

When using Cloud Bigtable, it’s important to consider data locality and the distribution of reads and writes in order to maximize performance. Given that consideration, the last schema listed is the most appropriate choice, because it supports the widest distribution of writes and avoids hotspots. Because the Store ID is concatenated with the product UPC, a single high-volume product can’t degrade performance: its writes are spread across rows for many stores.
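To make the combined schema concrete, the following sketch writes one event by using the Cloud Bigtable Java data client; the project, instance, table, and column-family names are hypothetical, as is the use of “#” as a key delimiter.

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.RowMutation;

public class InventoryEventWriter {

  public static void writeEvent(String upc, String storeId, long eventTimeMillis,
      String location, String readerId) throws Exception {
    // Hypothetical project and instance names.
    try (BigtableDataClient client =
        BigtableDataClient.create("my-retail-project", "inventory-instance")) {
      // Combined row key: UPC + Store ID + Timestamp, matching the third schema above.
      String rowKey = upc + "#" + storeId + "#" + eventTimeMillis;
      RowMutation mutation = RowMutation.create("inventory_events", rowKey)
          .setCell("event", "location", location)
          .setCell("event", "reader", readerId);
      client.mutateRow(mutation);
    }
  }
}
```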

Ultimately, the correct structure, or structures if you denormalize, depends on how you intend to use the data. For more information about best practices for schema design, see Cloud Bigtable Schema Design for Time Series Data.

Aggregated events

For business analysis or business intelligence purposes, you can load incoming inventory streams into BigQuery, a fully managed data warehouse solution for ad hoc SQL workloads across inventory data. BigQuery separates storage and computation, enabling each to scale independently, which means that queries across gigabytes or terabytes of data can execute in a matter of seconds.

Based on the Cloud Dataflow pipeline previously described, the output to BigQuery could be denormalized into multiple tables. Each table could contain a different view of the incoming inventory streams. For example, tables could be organized by aggregation levels, such as hourly, daily, and weekly, and by multiple dimensions, such as UPC and Store ID.
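For example, a rollup like the following could be computed with the BigQuery Java client library. The dataset, table, and column names are hypothetical, and the query assumes standard SQL.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class InventoryRollupQuery {

  public static void main(String[] args) throws Exception {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Hypothetical dataset and table; rolls raw events up to hourly counts.
    String sql =
        "SELECT store_id, upc, TIMESTAMP_TRUNC(event_time, HOUR) AS hour, "
      + "COUNT(*) AS events "
      + "FROM `my-retail-project.inventory.events` "
      + "GROUP BY store_id, upc, hour "
      + "ORDER BY hour DESC";

    TableResult result = bigquery.query(QueryJobConfiguration.newBuilder(sql).build());
    result.iterateAll().forEach(row ->
        System.out.printf("%s %s %s %d%n",
            row.get("store_id").getStringValue(),
            row.get("upc").getStringValue(),
            row.get("hour").getStringValue(),
            row.get("events").getLongValue()));
  }
}
```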

Similar to relational databases, BigQuery schemas and data models support rows and columns, and they also support JSON columns with nested data. This means that an individual row can store nested key-value pairs. Support for JSON data embedded in individual rows allows for more complex schema designs that match typical data warehouse use cases, such as star, snowflake, or fact constellation schemas.

Beyond inventory data storage, BigQuery is an appropriate system for broader retail-data-warehouse use cases. For example, by importing customer purchase or supply chain data, you can create complex schemas and data representations that leverage all available data points.

Transactional counts

In real-time inventory workloads, a source of truth is required to store and manage accurate inventory counts at all times. Based on the Cloud Dataflow pipeline described previously, inventory data streams are closely examined and particular events, such as new inventory shipments and outgoing inventory through purchase or shipping, are extracted. This information is then used to update inventory counts on a per-store basis. For a low-latency, transactional workload such as this one, Google Cloud SQL is the most appropriate choice for data storage.

Cloud SQL is a fully managed, cloud-native deployment of MySQL with built-in support for replication. Because it is based on MySQL, Cloud SQL supports many applications and frameworks. Cloud SQL offers built-in backup and restore, high-availability deployments, and read replication. These features are helpful when multiple applications and services need to frequently query inventory counts.

For inventory count workloads, the following table shows a sample schema that you can use to store this data in an easy-to-query-and-update format:

Column          Type
Store ID        VARCHAR
UPC             VARCHAR
Count           INT
Last Updated    TIMESTAMP

Depending on what other metadata might be needed by downstream systems, such as in-store or back-office applications and third-party integrations, you could add columns to the schema. For reasons of performance and data integrity, it might also be appropriate to create a compound index using the Store ID and UPC fields for fast lookups, and to set the Last Updated timestamp to automatically update whenever a particular row is modified.
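The following sketch shows how such a table might be created and updated over JDBC, including the compound unique key and the self-maintaining timestamp; the connection URL, table name, and delta-upsert pattern are illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class InventoryCountStore {

  // Hypothetical JDBC URL for a Cloud SQL MySQL instance
  // (requires the MySQL JDBC driver on the classpath).
  private static final String JDBC_URL =
      "jdbc:mysql://127.0.0.1:3306/inventory?user=app&password=secret";

  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(JDBC_URL)) {
      try (Statement stmt = conn.createStatement()) {
        // Compound unique key on (store_id, upc) for fast lookups;
        // last_updated maintains itself on every modification.
        stmt.executeUpdate(
            "CREATE TABLE IF NOT EXISTS inventory_counts ("
          + "  store_id VARCHAR(32) NOT NULL,"
          + "  upc VARCHAR(32) NOT NULL,"
          + "  count INT NOT NULL,"
          + "  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP"
          + "    ON UPDATE CURRENT_TIMESTAMP,"
          + "  UNIQUE KEY store_upc (store_id, upc))");
      }

      // Apply a count delta (+1 arrival, -1 sale) as an upsert.
      try (PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO inventory_counts (store_id, upc, count) VALUES (?, ?, ?) "
        + "ON DUPLICATE KEY UPDATE count = count + VALUES(count)")) {
        ps.setString(1, "store-001");
        ps.setString(2, "012345678905");
        ps.setInt(3, 1);
        ps.executeUpdate();
      }
    }
  }
}
```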

Integrating with applications and services

The goal of real-time inventory systems is to provide greater accuracy, visibility, and analytics for inventory movement across the supply chain. One of the key requirements for these systems is to integrate easily, in a scalable and secure way, with applications and services that require this data. A variety of business requirements dictate the types of applications and services that must be integrated.

Examples of service integrations include:

  • Publishing data feeds of inventory information, such as brand, style, and count, to drive display or search advertising campaigns.
  • Joining online and offline shopping analytics and shopper profiles.

Examples of application integrations include:

  • Store-associate-facing applications for managing local inventory tasks, which can be delivered through point-of-sale systems or tablet devices.
  • Providing standards-based APIs to integrate with back-office systems and applications, including enterprise resource planning, business process management, supply chain, and financial apps.

Many packaged enterprise software systems support integration with external systems by using HTTP-based APIs for consuming data, which simplifies the integration process.

App Engine is well suited to provide these types of applications and services with highly available, low-latency access to critical inventory data. App Engine applications can leverage Google Cloud Endpoints to quickly create and deploy REST APIs, which are easily accessible from web or mobile applications by using generated client libraries. This approach can reduce overall development time. App Engine applications also support multiple authentication approaches, making it straightforward to control and meter access for specific clients.
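As an illustration, a minimal Endpoints Frameworks API for reading per-store counts might look like the following sketch; the API, path, and bean names are hypothetical.

```java
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.Named;

// Hypothetical Endpoints API exposing per-store inventory counts.
@Api(name = "inventory", version = "v1")
public class InventoryApi {

  /** Returns the current count of a UPC at a store. */
  @ApiMethod(name = "counts.get", httpMethod = ApiMethod.HttpMethod.GET,
      path = "counts/{storeId}/{upc}")
  public CountResponse getCount(@Named("storeId") String storeId,
      @Named("upc") String upc) {
    CountResponse response = new CountResponse();
    response.storeId = storeId;
    response.upc = upc;
    response.count = lookupCount(storeId, upc); // placeholder: query Cloud SQL
    return response;
  }

  private long lookupCount(String storeId, String upc) {
    return 0; // placeholder
  }

  /** Simple response bean serialized to JSON by Endpoints. */
  public static class CountResponse {
    public String storeId;
    public String upc;
    public long count;
  }
}
```

From an API like this, Endpoints can generate client libraries that web, mobile, or back-office applications consume directly.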
