An emerging pattern in server-side event-driven programming formalizes the data that an event source may generate, so that a consumer of that source can register for very specific events.
A declarative eventing system establishes a contract between the producer (event source) and the consumer (a specific action), and allows a source and an action to be bound together without modifying either.
Compared with how traditional APIs are constructed, we can think of this as a kind of reverse query: instead of the usual request-response, we register a query and then get called back every time there is a new answer. This new model establishes a specific operational contract for registering these queries, which are commonly called event triggers.
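As a rough illustration of that contract, here is a minimal, in-process sketch in Python. The names (register_trigger, dispatch, the filter format) are invented for illustration and are not any particular platform’s API; a real system registers the query with a managed service rather than an in-memory list.

```python
from typing import Callable, Dict, List, Tuple

Event = Dict[str, object]
Handler = Callable[[Event], None]

_triggers: List[Tuple[str, Dict[str, object], Handler]] = []

def register_trigger(source: str, event_filter: Dict[str, object], action: Handler) -> None:
    """Bind an action (consumer) to an event source without modifying either."""
    _triggers.append((source, event_filter, action))

def dispatch(source: str, event: Event) -> None:
    """Called on the producer side; invokes every action whose filter matches."""
    for src, flt, action in _triggers:
        if src == source and all(event.get(key) == value for key, value in flt.items()):
            action(event)

# Consumer: a specific action, unaware of how or where events are produced.
def on_order_created(event: Event) -> None:
    print(f"new order: {event['id']}")

# The "reverse query": registered once, answered by a callback per matching event.
register_trigger("orders", {"type": "order.created"}, on_order_created)

# Producer: the event source simply emits events.
dispatch("orders", {"type": "order.created", "id": 42})
dispatch("orders", {"type": "order.cancelled", "id": 7})  # no match, no callback
```

The binding is the only place where the source and the action are mentioned together; neither needs to change when a new trigger is registered.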
This pattern requires a transport for event delivery. Systems typically support HTTP and RPC mechanisms for local events, which may be connected point-to-point in a mesh network, and they also often connect to messaging or streaming data systems such as Apache Kafka and RabbitMQ, as well as proprietary offerings.
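As a hedged sketch of that transport independence, the same consumer action can be fed from an HTTP endpoint or from a Kafka topic. The route, topic name, and broker address below are placeholders, and the snippet assumes the Flask and kafka-python packages are available.

```python
# Sketch: one transport-agnostic action, two possible delivery paths.
import json

def handle_event(event: dict) -> None:
    """Consumer action; it does not care how the event arrived."""
    print(f"got event of type {event.get('type')}")

# --- HTTP delivery (assumes Flask is installed; route is a placeholder) ---
from flask import Flask, request

app = Flask(__name__)

@app.post("/events")
def receive_http_event():
    handle_event(request.get_json())
    return ("", 204)

# --- Kafka delivery (assumes kafka-python and a reachable broker) ---
def consume_from_kafka() -> None:
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        "orders",                               # placeholder topic
        bootstrap_servers="localhost:9092",     # placeholder broker address
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:
        handle_event(message.value)
```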
This declarative eventing pattern can be seen in a number of serverless platforms, and is typically coupled with Functions-as-a-Service offerings, such as AWS Lambda and Google Cloud Functions.
An old pattern applied in a new way
Binding events to actions is nothing new. We have seen this pattern in GUI programming environments for decades, and on the server side in many Service-Oriented Architecture (SOA) frameworks. What’s new is that server-side code can now be connected to managed services in a way that is almost as simple to set up as an onClick handler in HyperCard. However, the problems we can solve with this pattern are today’s challenges: integrating data from disparate systems, often at high volume, along with custom analysis, business logic, machine learning, and human interaction.
Distributed systems programming is no longer solely the domain of specialized systems engineers who build infrastructure; most applications we use every day integrate data from multiple systems across many providers. Distributed systems programming has become ubiquitous, creating an opportunity for interoperable systems at a much higher level.
Interesting. When you say “very specific events”, are you referring to the schema of the data included in the event, or to some filter conditions that must be met for the event to reach the consumer, or both? What is getting more specific?
I’m reminded of two things: percolators in Elasticsearch, and separately, something we’re doing now. Event streams are flowing through stages of normalization and enrichment, but also through lightweight functions that act as intermediary consumers, carving out events of interest for specialized consumers. We don’t quite have it automated to the point of being declarative, but the intermediary consumers are extremely lightweight, taking only a few minutes to create and deploy.
Maybe there’s an opportunity to refine PaaS offerings in this space…?
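An intermediary consumer of the kind described above can be little more than a predicate and a forwarding call. The sketch below is generic Python; the predicate, the sample events, and the forwarding target are all invented for illustration.

```python
# Sketch of an intermediary consumer: carve out events of interest and
# forward them to a specialized consumer. All names here are illustrative.
from typing import Callable, Dict, Iterable

Event = Dict[str, object]

def carve_out(events: Iterable[Event],
              of_interest: Callable[[Event], bool],
              forward: Callable[[Event], None]) -> None:
    """Pass along only the events the specialized consumer cares about."""
    for event in events:
        if of_interest(event):
            forward(event)

# Example: forward only enriched payment failures to a downstream pipeline.
stream = [
    {"type": "payment.failed", "enriched": True, "amount": 250},
    {"type": "payment.succeeded", "enriched": True, "amount": 90},
]
carve_out(
    stream,
    of_interest=lambda e: e["type"] == "payment.failed" and e.get("enriched"),
    forward=lambda e: print(f"-> specialized consumer: {e}"),
)
```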
I’ve written a bit more detail on what I mean by very specific events, in terms of the conditions that trigger the events: /2018/02/listening-to-very-specific-events/
The CNCF Serverless WG is facilitating a process to formalize event types with the CloudEvents specification: https://github.com/cloudevents/spec
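For reference, a CloudEvents-style event is a small envelope of standard attributes around the payload. The sketch below hand-builds the JSON rather than using an SDK; the attribute names come from the CloudEvents specification, while all of the values are placeholders.

```python
# Sketch of a CloudEvents-style envelope built by hand (values are placeholders).
# Attribute names (specversion, id, source, type, time, datacontenttype, data)
# follow the CloudEvents specification; see the spec for full details.
import json
import uuid
from datetime import datetime, timezone

event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),                      # unique per event
    "source": "/example/orders",                  # placeholder source URI
    "type": "com.example.order.created",          # placeholder event type
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"orderId": 42, "total": 1250},
}

print(json.dumps(event, indent=2))
```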
There is indeed an opportunity to refine PaaS offerings. There’s a lot of discussion about events in the context of Functions-as-a-Service (FaaS), yet I think these kinds of events have a place in the broader ecosystem as well.
GUIs did not typically support or encourage event isolation, where every event carries all the context required to process it. We did this in OpenUI in 1990, and it made for a revolutionary UI development experience. HP’s SoftBench also used a patented pub/sub messaging architecture for CASE tool integration, with pattern matching for subscriptions.
However, the real devils in these systems are the methods for discovering, subscribing to, and routing messages reliably. If every message is "very specific" and you have to know where to subscribe, then at scale you cannot manage faults, and you end up with a massive tangle of spaghetti that hinders rearchitecting.