Event-Driven Architecture: Design Patterns — Part 02

Deep Banerjee
5 min read · Jul 30, 2022


This post is a continuation of my previous post; if you haven't read that one yet, I suggest starting there.

Claim Check Pattern:

With the claim check pattern, instead of the complete representation of the transformed data being passed through the event bus, the message body is stored independently, while a message header containing a pointer to where the data is stored (a claim check) is sent to the subscribers. The main benefits of this pattern are lower data volumes being sent through the event bus and an increased likelihood that messages will fit within the size limitations of the subscribing systems.

Figure: the claim check pattern
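The flow described above can be sketched in a few lines. This is a minimal, in-memory illustration only: the dicts standing in for the blob store and event bus, and names like `publish_with_claim_check`, are my own stand-ins for real services (e.g. S3 or Postgres for storage, Kafka for the bus).

```python
import json
import uuid

blob_store = {}   # stands in for durable storage (S3, Postgres, etc.)
event_bus = []    # stands in for the message broker

def publish_with_claim_check(payload: dict) -> str:
    """Store the full payload, then publish only a lightweight claim check."""
    claim_id = str(uuid.uuid4())
    blob_store[claim_id] = json.dumps(payload)    # store the large body
    event_bus.append({"claim_check": claim_id})   # send only the pointer
    return claim_id

def consume(message: dict) -> dict:
    """A subscriber redeems the claim check to fetch the full payload."""
    return json.loads(blob_store[message["claim_check"]])

claim_id = publish_with_claim_check({"order_id": 42, "items": ["a"] * 1000})
msg = event_bus.pop(0)            # what actually travels over the bus
full_payload = consume(msg)       # subscriber recovers the complete data
```

Note that only the tiny claim-check message crosses the bus; the large payload never does.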

Overview:

Incorporating a streaming message platform like Apache Kafka on Heroku or AWS Kinesis is important to consider for three reasons:

  1. There’s a potential for spikes in volume where the data flowing into the integration can temporarily exceed the capacity of the synchronization and transformation processing.
  2. There may be multiple services that need to respond to the data changes.
  3. Data transformation can be complex and may need to be segmented to better support scaling and maintainability of the pipeline.

Here a monolithic transformation app could read a message from Kafka, transform the data, and then write it back out to Postgres. Transformation apps from multiple external services could subscribe to this message stream and handle data in whatever ways may be needed.
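The read-transform-write loop of that monolithic app can be sketched as below. In production the source would be a Kafka consumer and the sink a Postgres table; here both are in-memory stand-ins (and the sample record is invented) so the flow itself is runnable.

```python
raw_records = {"rec-1": {"name": " Ada Lovelace ", "age": "36"}}
kafka_topic = [{"claim_check": "rec-1"}]   # messages carry only pointers
postgres_sink = []                         # stands in for an INSERT target

def transform(record: dict) -> dict:
    """Normalize fields -- the kind of cleanup a pipeline stage might do."""
    return {"name": record["name"].strip(), "age": int(record["age"])}

for message in kafka_topic:
    raw = raw_records[message["claim_check"]]   # redeem the claim check
    postgres_sink.append(transform(raw))        # "write it back out to Postgres"
```

Because the topic holds only claim checks, any number of additional transformation apps could subscribe to the same stream and apply their own `transform` logic independently.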

Advantages to This Approach

There are two key advantages to this approach. First, it reduces the size of the messages delivered through Kafka. Kafka performance is optimized for messages of around 1 KB, and the default message size limit in Kafka on Heroku is 1 MB. Smaller messages consume less of Kafka's storage capacity, which can enable longer message retention if needed.

The second advantage is that this approach provides a more robust pipeline. If any part of the transformation pipeline fails, we don't lose the data that was in transit. If we see data corruption happening, we can review the data as it was at each stage of the pipeline and isolate the errors causing it.

Business Use Case Examples

  • Large volumes of messages. Because the claim check pattern reduces the size of delivered messages to just the header information, messages can be processed more quickly and efficiently. This is helpful for integrations involving medical devices that collect and send data at short intervals, such as heart rate monitors, and also for retail or similar consumer-facing organizations that have to send and receive large numbers of messages.
  • Need for longer message retention. Because messages are sent to their own data store, they can be retained longer than the standard retention times of event buses. This is helpful for organizations that have regulatory mandates that require messages to be stored for a longer period of time. It also helps lower the likelihood of data loss since messages will still be accessible from the message store even if the original delivery attempt fails.
  • Message size reduction for legacy applications. When an organization relies on legacy applications that have limitations on the size of messages they’re able to process, this pattern can help with integrations since it keeps message sizes to a minimum by nature.

Considerations for the Claim Check Pattern

  • Make sure the message store is designed to handle the necessary throughput along with a potentially large volume of messages and message sizes.
  • Since the message store will have complete representations of the data, ensure it aligns with your organization’s security policies, including encryption and access controls.

Streaming:

While the event-driven architecture patterns covered thus far involve publishing single-purpose events that are consumed by subscribers, event streaming services publish streams of events. Subscribers access each event stream and process the events in the exact order in which they were received. Unique copies of each message stream are sent to each subscriber, which makes it possible to guarantee delivery and identify which subscribers received which streams.

Business Use Case Examples

  • Transportation management. Logistics organizations that need to monitor their fleets can use this pattern to view the routes that each vehicle is taking in near real-time and ensure that drivers are being as efficient as possible.
  • IoT devices. Manufacturers often use systems that generate rapid streams of data, and these streams can have downstream effects on additional systems. This pattern can be used to identify sequences of events that require human intervention before catastrophic failures spanning multiple systems occur.
  • Streaming media. Images and sounds in streaming video and audio need to be processed in order for the stream to make sense to the people consuming it.

Considerations for the Streaming Pattern

For a stream to make sense, all of its events and their associated messages need to be in the correct order. In some cases, you may want to source the data in a stream from different systems, which means that you’ll need to incorporate additional ordering logic as part of the design process.
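One simple form of that additional ordering logic is to merge the per-system streams on a shared sequence number (timestamps would work the same way). This is an illustrative sketch with made-up fleet events, not a prescription for any particular streaming service:

```python
import heapq

# Two source systems, each already ordered by a shared sequence number.
system_a = [(1, "engine_on"), (4, "route_started")]
system_b = [(2, "gps_fix"), (3, "speed_report")]

# heapq.merge lazily merges already-sorted inputs, comparing tuples by
# their first element -- here, the sequence number.
ordered_stream = [event for _, event in heapq.merge(system_a, system_b)]
```

If the sources can't guarantee their own internal ordering, you would need a buffering window and a sort instead of a straight merge.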

Queuing:

In this pattern, producers send messages to queues, which hold the messages until subscribers retrieve them. Most message queues follow first-in, first-out (FIFO) ordering and delete every message after it is retrieved. Each subscriber has a unique queue, which requires additional setup steps but makes it possible to guarantee delivery and identify which subscribers received which messages.

Figure: queuing
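The mechanics described above fit in a short sketch: one FIFO queue per subscriber, fan-out on publish, and deletion on retrieval. The subscriber names are invented for illustration.

```python
from collections import deque

queues = {"billing": deque(), "mobile_app": deque()}

def publish(message: str) -> None:
    """Fan out: each subscriber's queue gets its own copy."""
    for q in queues.values():
        q.append(message)

def retrieve(subscriber: str):
    """FIFO retrieval; the message is removed from the queue once read."""
    q = queues[subscriber]
    return q.popleft() if q else None

publish("order-created")
publish("order-shipped")

first = retrieve("billing")    # "order-created" -- oldest message first
second = retrieve("billing")   # "order-shipped"
# mobile_app hasn't connected yet: its copies are still buffered.
```

Because each subscriber drains its own queue, an offline device (the low-connectivity case below) simply finds its messages waiting when it reconnects.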

Business Use Case Examples

  • Low-quality internet connections. Field service organizations or other organizations where teams with mobile devices need to work in areas with low quality or intermittent internet access will benefit from queuing since the applications on these devices can connect to their queues and retrieve any relevant messages when connectivity is restored.
  • Message buffering. For organizations that occasionally experience surges where the volume of messages being produced exceeds the subscribers’ ability to process them and where longer time periods between messages being published and received won’t create additional issues, queues can be used as buffers to store excess messages and prevent data loss.

Hope you enjoyed reading about these patterns.

Next, I will discuss implementing the patterns above for real business use cases. Coming soon…
