Requirements for the storage of events
As Event Sourcing gains traction as an application persistence pattern, moving from niche use cases into mainstream line-of-business applications, there are some questions that need to be answered:
- How do we choose a dedicated database?
- What kind of API does it need to offer?
- Do I need to build my own event store?
These questions are harder to answer than they initially seem. The main assumption I often hear is that it is simple to build a store that allows appending events and reading them back: it takes "just one or two tables in a relational database".
Unfortunately, those two operations are not enough in real-life scenarios as they are far too limiting for the type of systems we would want to build.
We want to be able to build systems that integrate nicely with message- and event-driven architectures, that embrace the reactive nature of such systems, and that can be used in the serverless cloud world as well as on-premises.
The other aspect to consider is education. How do we help large numbers of newcomers to easily ingest all the information needed to be successful at building event-sourced systems? How do we simplify learning? By establishing boundaries in which event sourcing and event stream databases are enclosed. The boundary around the database provides access to the complexity through a well-defined external-facing API.
The criteria set for this attempt are:
- What are the data structures needed by applications?
- What are the high-level API needs of applications?
We need to define what is stored, how it is organized, and all the operations and capabilities needed to fully support a truly event-sourced system.
The utopian answer would be to create an ANSI SQL-like standard for event store databases. This article is a draft of such an attempt; no definitive or formal definitions are provided.
The goals of this article are to:
- Provide a minimal list of structures and operations that should be available in any event store you consider using,
- Gather critical feedback,
- Ensure application developers can focus on adding business value, knowing the event store of their choice provides all the features that will allow their application design to grow.
Take this list lightly; it is certainly not The Definitive List. It exists to help broaden the adoption of event stream databases, to compare them, and, first and foremost, to trigger a discussion inside the community.
Data and structure
The primary data structures are Events and Streams.
Events
Events are the fine-grained data units we append to the database. In event-sourced systems, these are traditionally business events. Nevertheless, from the point of view of the database, they are just data units and might be interpreted as events, commands, or documents by the application.
They have the following attributes:
- Type: a string, defined by the application.
- Stream: the stream the event belongs to, controlled by the application.
- Id: a string, unique over the store, either supplied by the application or the server, most of the time this is some sort of UUID.
- Revision: a number representing the position in the specific instance stream this event is part of.
- Positions: Numbers representing the global position of the event in the different levels of the stream structure.
- Data: the payload of the event; no assumptions should be made about the data format: it could be JSON, byte arrays, XML, ...
- Metadata: should have a clear distinction between the system reserved metadata and client-supplied metadata.
- System Metadata
- Timestamp: the date and time when the event was appended to the store, controlled by the database.
- CorrelationId: supplied by the application.
- CausationId: supplied by the application.
- Application Metadata: any application-level metadata, no assumptions should be made on the data format.
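To make the attribute list concrete, here is a minimal sketch of how it could map to a type definition. TypeScript is used purely for illustration; all names (RecordedEvent, SystemMetadata, and the field names) are assumptions, not a prescribed API.

```typescript
// A minimal sketch of the event attributes described above.
// All names are illustrative assumptions, not a prescribed API.

type Uuid = string;

interface SystemMetadata {
  timestamp: Date;      // set by the database on append
  correlationId?: Uuid; // supplied by the application
  causationId?: Uuid;   // supplied by the application
}

interface RecordedEvent {
  type: string;                      // application-defined
  stream: string;                    // the instance stream the event belongs to
  id: Uuid;                          // unique across the store
  revision: number;                  // position within the instance stream
  positions: Record<string, number>; // global positions per stream level
  data: Uint8Array;                  // opaque payload: JSON, XML, byte array, ...
  systemMetadata: SystemMetadata;
  metadata?: Uint8Array;             // opaque application-supplied metadata
}
```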
The Id of the event can be used for deduplication when appending data to the store. The Revision is typically used for optimistic locking. The Timestamp in the system metadata should never be used for application-level purposes.
Revision & Positions are strictly increasing in their respective streams. They do not necessarily need to be gapless, but that would be an added bonus. These numbers are managed by the event store server only.
While this requirement might seem trivial at first glance, this needs to be enforced at various levels of load.
This should hold true when appending events at a sustained rate of 1,000 events/second, across multiple clients, on a database containing 10 million streams and half a billion events (50 events per stream across 10 million streams).
Combine this with optimistic locking and idempotent appends, as well as immediate read-your-own-writes semantics, and this becomes a hard problem to solve.
The CorrelationId and CausationId are considered system metadata in order to allow the database engine to provide different ordering and query capabilities than just reading a stream in sequence order. For instance: get all events with a given CorrelationId, or get all events caused by a given EventId.
Streams
In most usages and event stores out there, a stream represents a given entity, and streams are just named strings. The interpretation of the value is then considered an application-level concern. I suggest adding some structure, similar to what exists in the relational database world, where a table is part of a database and a schema inside it.
Streams could have the following attributes:
- Schema: a string, whose purpose is similar to a table schema in the relational world.
- Category: a string, that identifies the entity type in the event store.
- Id: a string, that uniquely identifies a stream instance.
- Metadata: should have a clear distinction between the system reserved metadata and client-supplied metadata.
- System Metadata
- Time To Live: the maximum age in seconds of events in the stream.
- Maximum Count: the maximum number of events in the stream.
- Application Metadata: any application-level metadata; no assumptions are or should be made about the data format.
The fully qualified name of an instance stream is then [Schema].[Category].[Id]. This is similar to the fully qualified name of a table in a relational database.
We have other levels of streams as well: [Schema].[Category], [Schema], and All.
Note that using [Schema].[Category].[Id] is just a sample to illustrate the need for a tree-like structure of streams.
Time To Live and Maximum Count are used for transient data that can be automatically deleted(*) by the database.
When Timestamp + Time To Live < the current timestamp, or the number of events in the stream exceeds Maximum Count, the events are eligible for deletion.
When and how these deletions occur is left to the specific database engine implementation.
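To make the rule concrete, here is a minimal sketch of the eligibility check, under the assumption of hypothetical attribute names (timeToLiveSeconds, maximumCount) and sequential revisions; real engines track this differently.

```typescript
// A sketch of the deletion-eligibility rule described above; names are assumed.
function isEligibleForDeletion(
  eventTimestamp: Date,
  eventRevision: number,
  streamHeadRevision: number,
  timeToLiveSeconds?: number,
  maximumCount?: number
): boolean {
  // Expired: the event is older than the stream's Time To Live.
  const expired =
    timeToLiveSeconds !== undefined &&
    eventTimestamp.getTime() + timeToLiveSeconds * 1000 < Date.now();
  // Over count: with Maximum Count N, only the N most recent events are kept.
  const overCount =
    maximumCount !== undefined &&
    streamHeadRevision - eventRevision >= maximumCount;
  return expired || overCount;
}
```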
Having a tree-like structure of streams allows:
- Grouping certain categories of streams together into a coherent set from a modeling perspective.
- Reusing Category names when it makes sense from a modeling perspective. For example, having a Customer.User.1 stream and a PersonalInformation.User.1 stream.
(*) Yes, you can and should delete data in an event store. Deleting data from the active store should be part of the archiving strategy.
Operations
There are two broad groups of operations: non-streamed and streamed. The streamed operations exist to enable reactive (sub)systems.
Appending events
Append(Stream, ExpectedRevision, Event[]) -> Result
Appends are transactional and have ACID behavior. This means that appending events to the database is not eventually consistent: you can read your own writes. Events are appended to one and only one instance stream, and one or more events can be appended in a single operation. The ExpectedRevision is used for optimistic locking: if the revision supplied does not match the target stream revision, a concurrency error must be raised.
- Event structure:
  - Type
  - Id
  - Data
  - Metadata
- ExpectedRevision:
  - Any: no concurrency check is performed; the stream might already exist or not.
  - NoStream: the stream should not exist; appending to an existing stream, even an empty one, will result in a concurrency error.
  - StreamExists: the stream exists, and might be empty.
  - Some number: the stream Revision should match.
- Result:
  - Revision: the new Revision of the stream.
  - Positions: the positions, in the stream structure, of the last event appended in this operation.
Returning the Revision is necessary for the next append operation when using optimistic concurrency. Returning the Positions and Revision also allows the caller to wait for any reactive components (projections, for instance) to have processed that Revision or those Positions.
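As a rough illustration of this contract, here is a hedged TypeScript sketch of what such an append API could look like. All names (EventStore, EventData, the stream name Sales.Order.42) are assumptions for illustration, not a definitive design.

```typescript
// A hedged sketch of the append contract; all names are assumptions.
type ExpectedRevision = 'Any' | 'NoStream' | 'StreamExists' | number;

interface EventData {
  type: string;
  id: string; // used for deduplication
  data: Uint8Array;
  metadata?: Uint8Array;
}

interface AppendResult {
  revision: number;                  // the new revision of the stream
  positions: Record<string, number>; // positions of the last appended event
}

interface EventStore {
  append(
    stream: string,
    expectedRevision: ExpectedRevision,
    events: EventData[]
  ): Promise<AppendResult>;
}

// Usage: optimistic concurrency across two successive appends.
async function placeAndPayOrder(store: EventStore): Promise<void> {
  const placed = await store.append('Sales.Order.42', 'NoStream', [
    { type: 'OrderPlaced', id: 'evt-1', data: new Uint8Array() },
  ]);
  // Feed the returned revision back in; a mismatch raises a concurrency error.
  await store.append('Sales.Order.42', placed.revision, [
    { type: 'OrderPaid', id: 'evt-2', data: new Uint8Array() },
  ]);
}
```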
Idempotency requirements
Idempotency of append operations is one of those less talked about requirements. We wouldn't want to add the same events multiple times because some client application has gone rogue, would we? Unfortunately, that behavior is rarely documented.
Idempotency checks should take the ExpectedRevision and EventId into account. If it is Any, no idempotency check is performed; this will result in events being duplicated.
All other optimistic concurrency check levels need well-defined idempotency behavior, and this gets complicated rapidly; the sketch below illustrates one scenario:
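This is a sketch of one scenario under the hypothetical EventStore interface from the append example above, not a prescribed behavior:

```typescript
// Scenario sketch: a client retries an append after a network timeout.
// With the same ExpectedRevision and the same event ids, the retry should be
// detected and acknowledged without writing duplicates.
async function retriedAppend(store: EventStore): Promise<void> {
  const events = [{ type: 'OrderPlaced', id: 'evt-1', data: new Uint8Array() }];
  const first = await store.append('Sales.Order.42', 'NoStream', events);
  // Identical retry: a well-behaved store returns the original result.
  const retry = await store.append('Sales.Order.42', 'NoStream', events);
  console.assert(retry.revision === first.revision);
  // Harder cases: what if the retry contains only part of the original batch,
  // or the original batch plus new events? That behavior must be specified too.
}
```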
Appending metadata to streams
Streams have metadata as well, and it can be set at any level of the stream structure. Stream metadata could be implemented as system streams. Appending metadata to a stream is allowed even if the target stream does not exist yet in the database. A strict separation of system and application-level metadata should be enforced.
AppendMetadata(Stream, ExpectedRevision, Metadata) -> Result
- Stream: any level in the structure.
- ExpectedRevision:
  - Any: no concurrency check is performed; the metadata stream might already exist or not.
  - NoStream: the metadata stream should not exist; appending to an existing metadata stream, even an empty one, will result in a concurrency error.
  - StreamExists: the metadata stream exists, and might be empty.
  - Some number: the metadata stream revision should match.
- Result:
  - Revision: the new revision of the stream.
  - Position: the global position of the last metadata appended in this operation.
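For example, here is a hedged sketch of setting retention metadata on a category-level stream, reusing the assumed types from the append example; StreamMetadata and its fields are illustrative names.

```typescript
// Sketch: set a 30-day Time To Live at the category level; names are assumed.
interface StreamMetadata {
  timeToLiveSeconds?: number; // system metadata
  maximumCount?: number;      // system metadata
  application?: Record<string, unknown>; // kept apart from system metadata
}

interface MetadataStore {
  appendMetadata(
    stream: string,
    expectedRevision: ExpectedRevision,
    metadata: StreamMetadata
  ): Promise<AppendResult>;
}

async function setRetention(store: MetadataStore): Promise<void> {
  // The Sales.Order category does not need to exist yet.
  await store.appendMetadata('Sales.Order', 'Any', {
    timeToLiveSeconds: 30 * 24 * 3600, // events older than 30 days become deletable
  });
}
```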
Reading data and metadata
It should be possible to read streams either forward or backward. It should be possible to read a stream from a given inclusive Revision, or to read all Events from a Position.
Read(Stream, Direction, Revision) -> Results
Read(Stream, Direction, Position) -> Results
Read(Direction, Position) -> Results
- Stream: any level in the structure.
- Direction:
  - Forward
  - Backward
- Revision can be:
  - Start: read from the start of the stream (the very first event).
  - End: read from the end (the very last event in the stream).
  - Some number: read from that Revision.
- Results: a list of Events that can be iterated upon.
Behavior:
Reading events from:
- a Schema yields, in order, all events from all categories and streams in the Schema.
- a Category yields, in order, all events from all streams in the Category.
- a Schema.Category.Id yields, in order, all events of that particular stream.
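A short sketch of how such a read might look from the client side, under the assumed names from the earlier examples (Reader and the stream name are illustrative):

```typescript
// A sketch of the read contract; names are assumed.
type Direction = 'Forward' | 'Backward';
type RevisionSpec = 'Start' | 'End' | number;

interface Reader {
  read(
    stream: string, // any level: Schema, Schema.Category, or instance stream
    direction: Direction,
    from: RevisionSpec
  ): AsyncIterable<RecordedEvent>;
}

// Usage: fold a single instance stream into state, oldest event first.
async function load(reader: Reader): Promise<void> {
  for await (const event of reader.read('Sales.Order.42', 'Forward', 'Start')) {
    // apply(event) to rebuild the current state...
  }
}
```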
What happens when there are two streams with the same Category and Id but in different Schemas, e.g. Customer.User.1 & PersonalInformation.User.1, and the operation is Read("User.1", Start)? We should probably always use fully qualified stream names to avoid any confusion.
Truncating and deleting streams
It should be possible to truncate an instance Stream before a certain Revision. This is especially useful for implementing a "Closing The Book" pattern.
Deleting a stream will allow it to be recreated at a later point. Some stores have the concept of a Tombstone, where a Stream can never be recreated again.
Truncate(Stream, Revision) -> Results
Delete(Stream) -> Results
- Revision: all events before the supplied value will be deleted.
- Results: whether the operation was successful or not.
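A small usage sketch of the truncation call in a "Closing The Book" flow, again under the assumed append API; Truncatable, the stream name, and the event type are illustrative.

```typescript
// Sketch of a "Closing The Book" flow using Truncate; names are assumed.
interface Truncatable extends EventStore {
  truncate(stream: string, beforeRevision: number): Promise<boolean>;
}

async function closeTheBook(store: Truncatable): Promise<void> {
  // Write a closing event that carries the opening balance for the new period.
  const closed = await store.append('Accounting.Ledger.1', 'StreamExists', [
    { type: 'BookClosed', id: 'evt-close-2023', data: new Uint8Array() },
  ]);
  // Everything before the closing event becomes deletable; reads now start there.
  await store.truncate('Accounting.Ledger.1', closed.revision);
}
```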
Streaming operations
There are two broad types of streaming operations needed.
The first type is typically used by long-lived processes. The client starts a subscription, eventually catches up, and receives new events as they are appended to the store. Support for single and competing consumers falls into this type of operation.
The second type is more like a push notification and is useful in serverless scenarios: when a new event is appended, the server sends a notification to a predefined consumer, for example SQS, EventGrid, or other integration components.
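A sketch of the first type, a catch-up subscription, under assumed names (Subscriber, the category stream, the checkpoint shape are all illustrative):

```typescript
// A sketch of a catch-up subscription; names are assumed.
interface Subscriber {
  subscribe(
    stream: string,         // any level: All, Schema, Category, or instance stream
    from: number | 'Start', // a position checkpoint to resume from
    onEvent: (event: RecordedEvent) => Promise<void>
  ): { unsubscribe(): void };
}

function startProjection(sub: Subscriber, checkpoint: number | 'Start') {
  return sub.subscribe('Sales.Order', checkpoint, async (event) => {
    // Update the read model, then persist event.positions as the new checkpoint.
  });
}
```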
Other operations
Check if a stream exists: StreamExist(Stream) -> Result.
- Result:
  - Deleted.
  - Exists.
  - NotFound.
Getting the last Revision & Position of a stream: StreamHead(Stream) -> Result.
- Result:
  - Revision: the last known event revision of this stream.
  - Position: the position of the last known event in this stream.
Get the last known Position in the store: HeadPosition() -> Position.
Get all Schemas, Categories, Ids, and Event types. These are similar to what we can do with information_schema.tables in the relational world.
Streams(Filter) -> string[]
- Filter: filter on Schema, Category, Id.
Get the count of events between two positions/revisions. This count is not simply the difference between the two numbers, as some events might have been deleted.
Count(Stream, Revision, Revision) -> Number
Count(Position, Position) -> Number
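Taken together, these utility operations could look like the following sketch; every name here is an assumption for illustration, not a definitive interface.

```typescript
// A sketch gathering the utility operations above; names are assumed.
type StreamStatus = 'Deleted' | 'Exists' | 'NotFound';

interface StoreInfo {
  streamExist(stream: string): Promise<StreamStatus>;
  streamHead(stream: string): Promise<{ revision: number; position: number }>;
  headPosition(): Promise<number>;
  streams(filter: {
    schema?: string;
    category?: string;
    id?: string;
  }): Promise<string[]>;
  count(stream: string, fromRevision: number, toRevision: number): Promise<number>;
}
```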
General questions
Revision seems like an unnecessary concept, as it is tied to a specific stream instance; Position could be used everywhere instead. Or should it be kept separate, since Revision denotes that the stream owns the event, while Position denotes the event's location in the hierarchy of streams?
Notes
The initial idea for this list has been around for some time already, well before I joined EventStore. It has been heavily inspired by EventStoreDB, since this is my preferred purpose-built store, and by other stores as well.
Feedback wanted!
If you have comments, additions, or any other thoughts on this article, don't hesitate to use our Discuss forum: https://discuss.eventstore.com.