Applications based on event streams have more demanding architectural qualities than ever, and traditional approaches to storing, querying, and reacting to patterns are coming apart at the seams. Business requirements mandate that our systems record everything that’s ever happened while also summarizing the entirety of that history at increasingly low latency. Reconciling these attributes, and others, into a new, unified architecture benefits from a change in how we perceive the problem.

Event streaming is a natural fit for many domains

Event streaming covers a category of applications in which units of related data are continuously consumed by one or more processes, and in which that data must be transformed and summarized in a variety of ways. Event streaming is a relatively generic creature compared to other system architectures. Effective implementations reflect careful consideration of what constitutes an event, plus thoughtful preparation for the inevitability that the problem domain will evolve.

We’ll take a look at the pitfalls inherent in a common event streaming implementation pattern against the backdrop of a fictional gaming startup, Aubergine Games, makers of Vegetable Towers.

The Data Loss by Design Architecture

Most event streaming applications follow what we’ll call the Data Loss by Design approach: write and condense all events into a centralized database, and issue a query whenever new information needs to be learned.

Vegetable Towers had the markings of a success from the start. As word spread in the indie gaming community, its developers watched usage grow organically as more users built taller towers and launched healthier assaults on their enemies. The technical implementation was easy enough to reason about. A table per user kept track of the player’s towers, with respective rows containing columns for attack strength and hit points. Given the outcome of a battle between two towers, hit points were recalculated and updated along with the users’ total scores.

Things were going well. Our friends at Aubergine Games knew it was time to figure out how to make money. The team crossed its fingers when it launched new features, blending freemium aspects into the gaming model: Vegetable Towers was free to play, and if you played hard enough, you could earn the right to use exotic materiel, including tuber trebuchets and active-armor gourds. Or, short on time, you could simply exchange currency for the freshest stuff.

And people ate it up.

Behind the scenes, however, technical challenges began to mount. The system strained under heavy load. It wasn’t nearly enough to keep the servers online. Players wanted to know why they weren’t winning sure-bet bouts. Paying customers yelled at the support team over billing discrepancies as the legal team worried about compliance. The new data science team was hampered by limited access to data. Increasing capacity would have to come at the cost of downtime and data loss.

An internal memo summed up the growing pains:

New business requirement → Drawback of data loss by design
- Custom analytics dashboards for every team member → Expensive à la carte queries are computed from scratch every time.
- Data science team wants to annotate user event behavior for cohort analysis → When there’s only one schema, only the prix fixe menu can be offered.
- Read-heavy mobile version launched → Reads cannot be scaled independently from writes. Too many mouths to feed.
- Read-only mode when database writes are unavailable → When you can’t write you can’t read. Can’t talk when your mouth’s full.
- Live customer support team issuing queries for user data → Limited concurrent readers, and too many cooks in the kitchen.

The centralized database, with its update-in-place semantics, had been an easy choice in the game’s early days. Now the team realized data loss by design also meant suppressed opportunities for growth.

Setting aside our fictional world, let’s unpack in more generic detail the drawbacks of this architectural pattern. Later we’ll examine an alternative approach.

Queries are computed from scratch every time

Each time a query is issued, its answer needs to be computed from scratch. As the data set grows, the time to answer the query, irrespective of any contention issues, will grow at least proportionally. Large data sets result in long query times, and often tempt engineers into elaborate caching strategies or lazy-read approaches. Both of these techniques quickly become more expensive to maintain than the value they produce. Modern applications require low query latency.
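
To make the cost concrete, here’s a minimal Python sketch of a from-scratch aggregate over a single events table. The table, event shape, and query are hypothetical stand-ins, with a list playing the role of the database:

```python
# A sketch of the from-scratch cost: events live in one big "table"
# (here a list), and every query walks the entire data set again.

events = [{"player": "alice", "points": p} for p in range(1000)]

def total_score(player):
    # O(n) in the size of the whole history, on every single call.
    return sum(e["points"] for e in events if e["player"] == player)

total_score("alice")  # scans 1000 rows
events.append({"player": "alice", "points": 5})
total_score("alice")  # scans 1001 rows; all prior work is repeated
```

Every call repeats all prior work, which is exactly the linear growth in query time described above.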

There can only be one schema

A relational database table can only have one schema. This constraint is critical for the kinds of queries you issue over your event data: to compute results efficiently, a query needs a schema that’s friendly to the type of question being answered. As projects progress, data will arrive in increasingly variable formats, and both the number and the types of queries asked of the data set will grow. Schemaless databases may end up leaving you worse off because there’s no schema to leverage for query efficiency.

When you can’t write, you can’t read

When a database is the mechanism supporting both reads (issuing queries) and writes (submitting new data), your architecture is positioned such that a database that’s having trouble receiving writes cannot service reads. Because the medium for both of these processes is the same, you get downtime across all your applications. When you can’t write new events to your data set, you should at least be able to read existing events.

Reads can’t be scaled independently from writes

Most systems that process event streams are heavily skewed in one direction: they either do a low amount of writes and a high amount of reads, or a high amount of writes and a low amount of reads. When you use a central database to process your events, you must scale both of these attributes together with larger hardware. Even synchronous replication isn’t much of a cure, because reads scale only at the expense of slower writes.

There can only be a limited number of concurrent readers

The reality of software is that it’s executed on physical hardware that has limited capacity to pass electricity through it. At some point, the audience interested in your data set will grow large enough that the machine housing your database will bottleneck either on its disk IOPS or network traffic, and readers will need to contend with one another to issue queries to the database. Query contention on a central resource produces a diminished user experience.

A fresh take: databases as first-class event streams

The fundamental adjustment in thinking is broader than you might expect. We need to conceive of a database not as a place where we read and write state, but as streams: a stream of writes, and a stream of reads.

When we consider the way that a centralized database is used for processing a stream of events, it’s helpful to consider the queries that we ask of those events as a stream in their own right. User applications learn information by issuing a sequence of queries over tables of data. When a query is passed over its data, the query captures a view of the data that satisfies that query. The query is returned with that view to the calling application. The stream of queries is ephemeral since once the application that issued the query gets a result, that query and its view of the data are essentially discarded from the perspective of the database. On the other hand, the corpus of data is enduring. It remains in place no matter how many queries are asked of it.

As it turns out, by inverting our thinking from a stream of queries over data to a stream of writes over queries, we’re able to overcome the challenges that plague database-centric architectures. In this model, queries are enduring. We might say that we “install” a set of queries, and then pass a stream of write events over those queries. Each query incrementally selects a view of the event stream as it passes through. With this approach, the stream of writes is also enduring. We preserve the sequence of writes so that we can later install new queries that have access to the entire history of events. The primary difference with this architecture is that queries are answered in constant time, because each query maintains a view of the event stream that is incrementally updated.

A better architecture: a log-centric approach

There’s a better way to design an architecture that handles event data: one that uses a log. A log is an append-only sequence of immutable data records. Any number of processes can read from the log, and they will all read the same events in the same order that they were appended. Logs are incredibly simple. They form the basis for many database designs. In some respects, it shouldn’t be surprising that as the problem domains we face become more complicated, we fall back on more elegant designs that make use of the simple components on which our previous tools were built.

With a log design, events are appended to the tail. Each query you issue becomes a consumer from the log, reading from the beginning and continually applying every new piece of data to its incremental results. At any point in time, you can ask the query for its current answer. We call this value a materialized view.
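
Here’s a minimal Python sketch of that idea, assuming an in-memory list stands in for a durable, replicated log; the event shape and the query itself are hypothetical:

```python
log = []  # append-only sequence of immutable events

def append(event):
    """Writers only ever add to the tail of the log."""
    log.append(event)

class TotalScoreQuery:
    """A consumer that incrementally materializes each player's total score."""

    def __init__(self):
        self.offset = 0   # position in the log this consumer has read up to
        self.scores = {}  # the materialized view

    def catch_up(self):
        # Only entries appended since the last read are processed.
        for event in log[self.offset:]:
            player = event["player"]
            self.scores[player] = self.scores.get(player, 0) + event["points"]
        self.offset = len(log)

    def answer(self):
        self.catch_up()
        return dict(self.scores)

append({"player": "alice", "points": 10})
append({"player": "bob", "points": 7})
append({"player": "alice", "points": 5})

query = TotalScoreQuery()
print(query.answer())  # {'alice': 15, 'bob': 7}
```

The `scores` dictionary is the materialized view: asking for the answer is a constant-time lookup, and catching up only touches entries appended since the last read.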

This simple inversion elegantly handles the challenges that a database-centric approach imposes.

Queries are incrementally computed

The sharpest contrast is in how a query computes its answer. With a database, a query needs to rebuild its answer from scratch every time it’s issued. With a log architecture, on the other hand, each consumer independently reads every entry from the log as it’s appended, applying that entry to its local state. Using this approach, any application can ask any consumer for its query result and get an answer instantly. That answer will be true “as of a particular point in the log”. Consumers can read at different rates. When new data is appended to the log, the consumer need only read the new data to update its query result, thus doing the minimum amount of work possible to catch up.

Each query gets its own schema

With a database design, each table of data is only ever allowed exactly one schema. A log offers the same set of data to every consumer, meaning that each consumer can apply its own isolated schema to its local state. Moreover, you can take a consumer offline, update its code, and bring it back online. When log contents are streamed to the consumer, it can apply a schema adjusted from the one it had before. Adding another consumer has no effect on existing consumers.
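
A short sketch of one log serving two consumers with different schemas; the event fields and both consumers are hypothetical:

```python
# The same immutable log, read by two independent consumers.
log = [
    {"attacker": "alice", "defender": "bob", "damage": 12},
    {"attacker": "bob", "defender": "alice", "damage": 4},
    {"attacker": "alice", "defender": "bob", "damage": 9},
]

# Consumer 1: support team keys its view by player, summing damage dealt.
damage_dealt = {}
for event in log:
    attacker = event["attacker"]
    damage_dealt[attacker] = damage_dealt.get(attacker, 0) + event["damage"]

# Consumer 2: data science keys the same events by (attacker, defender)
# pair for cohort analysis -- a shape the first consumer never needed.
matchups = {}
for event in log:
    key = (event["attacker"], event["defender"])
    matchups[key] = matchups.get(key, 0) + 1
```

Neither consumer knows the other exists; each derives the local shape it wants from the shared, unchanged event history.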

Reads are fully decoupled from writes

When all events are stored sequentially in a log, reads are fully decoupled from writes. When an application wants to record a new event, it appends the data to the tail of the log. Conversely, when an application wants the answer to a query, it communicates with the consumer to obtain its local value. The log infrastructure and the consumer are independent processes, presumably on different nodes in the network, meaning that if the log is unavailable, you can continue to make queries of the consumer. The reverse is also true: writes can proceed without reads being available.

Reads scale independently from writes

Reads and writes are fully decoupled in their ability to store and process data. This means that you can scale your reads separately from your writes. If your system is biased toward more activity in one of these camps, you can either grow the cluster of log machines or add replicas of your reading consumer. Targeting exactly where new resources go results in direct cost savings.

Readers scale without replication locking

A major problem with achieving higher scale while using a database is the set of tradeoffs involved in choosing a replication scheme. Systems designed to be immutable and append-only have very performant replication properties. Since no data in the past can be changed, data placed at a specific physical location on disk can be targeted for transfer, and replication can achieve better mechanical efficiency from the hardware. With more replicas of the log, more readers can join and perform a significantly higher number of queries without resource contention.

Never forget an event

Logs are append-only. Entries can’t be deleted, and they especially can’t be modified once written. How, then, do we deal with the problem of correcting bad data? Assuming that each event has a unique identifier, a new event with the same identifier is simply appended to the tail of the log, and consumers of the log can make independent choices about what to do with new events whose identifier is already in their local state. Logs are designed to never forget. Even when you want to forget a bad value, you get a superpower: the ability to remember that you forgot. The ability to make adjustments while preserving the entire data set intact is a requirement for sane debugging and auditing.
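
A sketch of the correction pattern, assuming events carry an `id` field and this particular consumer chooses a last-write-wins policy (the field names are hypothetical):

```python
log = []  # append-only; nothing is ever deleted or modified

def append(event):
    log.append(event)

append({"id": "battle-17", "winner": "bob"})    # bad value
append({"id": "battle-17", "winner": "alice"})  # correction: same id, new event

# Consumer: last-write-wins view, keyed by event id. The full history,
# including the bad value, stays in the log for auditing.
view = {}
for event in log:
    view[event["id"]] = event

print(view["battle-17"]["winner"])  # alice
print(len(log))                     # 2 -- we remember that we forgot
```

Another consumer could make a different choice, such as keeping every version of `battle-17` to show an audit trail.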

Maintaining the entire event history also means that when a code mistake is made, you only need to replay the log from the index where things went awry, meaning you have the opportunity to do the minimum amount of work possible to catch up. In the worst case, you can recompute queries from scratch by starting at the beginning of the log. The best case for a database-centric design is the worst case for a log architecture.
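
A sketch of replay from a known-good offset, under the assumption that the consumer checkpoints its state alongside its log position (the bug and all names are hypothetical):

```python
# The event at index 2 was mishandled by the old code.
log = [{"points": 1}, {"points": 2}, {"points": -2}, {"points": 4}]

def fold(state, event):
    # Corrected logic: the fixed code ignores negative point values.
    return state + max(event["points"], 0)

# State was checkpointed as known-good through offset 2 (events 0 and 1
# applied, total 3), so we replay only from there with the fixed fold.
known_good_state, known_good_offset = 3, 2

state = known_good_state
for event in log[known_good_offset:]:
    state = fold(state, event)

print(state)  # 7
```

Only the suffix of the log after the checkpoint is reprocessed; the earlier, correct work is never repeated.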

Wither not, Vegetable Towers

Finally, let’s return to the challenges of Aubergine Games, where the log-centric approach can help our strawman friends cultivate success.

New business requirement → Log architecture advantage
- Custom analytics dashboards for every team member → Incremental queries deliver results instantly.
- Data science team wants to annotate user event behavior for cohort analysis → Each query gets its own schema.
- Read-heavy mobile version launched → Reads and writes scale independently.
- Read-only mode when database writes are unavailable → Reads are fully decoupled from writes, so queries proceed even when writes are down.
- Live customer support team issuing queries for user data → Readers scale without replication locking.

Log-centric architectures with Pyroclast

Log-based architectures are a great fit across a broad landscape of applications. And we love opportunities to help solve interesting technical challenges. That’s why we built Pyroclast.

Pyroclast is a Platform as a Service for building real-time event processing applications. With Pyroclast, you write events into our hosted logs, quickly build services representing complex queries that process your logs, and launch those services to deliver query results. Pyroclast makes it trivial to transfer state across services, replay historical events, and independently scale the aspects that have more demanding performance profiles. Pyroclast makes it easy to build log-oriented services that integrate seamlessly into your architecture.