Recently, we have been focused on making our NodeJS gRPC client even more robust and stable. To achieve this, we had to introduce a few breaking changes. Read on to learn about the enhancements and necessary migration steps.
The recommended way of dealing with streams in Event Sourcing is to keep them short-lived. You can find examples of such life cycles in many domains: completing the books in accounting, a cashier shift change, the end of day in the hospitality industry, etc. However, depending on your case, streams can get long. In previous versions, we materialised the whole stream into an array while reading the events. That was sufficient for short streams, but could result in timeouts or an out-of-memory exception in edge cases.
To enable a performant way of reading long streams, we're now returning a readable stream from the readStream and readAll methods rather than a promise of an array. This is a breaking change.
In version 1.x, you had to await the complete read of the stream and then act on the constructed array:
type OrderEvent = JSONEventType<"noodles-ordered", { noodlesCount: number }>;

const order = {
  totalNoodles: 0,
};

const events = await client.readStream<OrderEvent>('order-123');

for (const { event } of events) {
  if (event?.type !== 'noodles-ordered') continue;
  order.totalNoodles += event.data.noodlesCount;
}
With version 2.x, you can act directly on the events as they are read, allowing you to build entity state more easily and efficiently, with a reduced memory footprint:
const order = {
  totalNoodles: 0,
};

const eventStream = client.readStream('order-123');

for await (const { event } of eventStream) {
  if (event?.type !== 'noodles-ordered') continue;
  order.totalNoodles += event.data.noodlesCount;
}
You can also use native NodeJS stream events:
const getOrder = (orderStream: string) =>
  new Promise((resolve, reject) => {
    const order = {
      totalNoodles: 0,
    };

    client
      .readStream(orderStream)
      .on("data", ({ event }) => {
        if (event?.type !== "noodles-ordered") return;
        order.totalNoodles += event.data.noodlesCount;
      })
      .on("error", (err) => {
        reject(err);
      })
      .on("end", () => {
        resolve(order);
      });
  });
or create custom transformers and pipe them:
client
  .readStream(orderStream)
  .pipe(new NoodleTransformer())
  .pipe(new NoodleMaker());
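NoodleTransformer and NoodleMaker above are placeholder names. A minimal sketch of what such a transformer could look like, assuming events shaped like the noodles-ordered events from the earlier examples (the class name and event shape are illustrative, not part of the client):

```typescript
import { Transform, TransformCallback } from "stream";

// Illustrative transformer: passes through only the "noodles-ordered"
// events and drops everything else. Names and event shape are assumptions.
class NoodleTransformer extends Transform {
  constructor() {
    // The client's readable stream yields objects, so run in object mode.
    super({ objectMode: true });
  }

  _transform(
    resolved: { event?: { type: string; data: { noodlesCount: number } } },
    _encoding: BufferEncoding,
    callback: TransformCallback
  ): void {
    if (resolved.event?.type === "noodles-ordered") {
      this.push(resolved.event);
    }
    callback();
  }
}
```

Piping the client's stream through such a transformer yields only the order events, ready for the next stage of the pipeline.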
For those who prefer a functional style, there are also useful external NPM packages that help with stream transformations, e.g. iter-tools:
import { asyncTakeLast } from 'iter-tools';
const eventStream = client.readStream('order-123');
const lastEvent = await asyncTakeLast(eventStream);
or RxJS:
import { from } from "rxjs";
import { map, scan, filter } from "rxjs/operators";

const eventStream = client.readStream('order-123');

from(eventStream)
  .pipe(
    filter(({ event }) => event?.type === "noodles-ordered"),
    map(({ event }) => event.data.noodlesCount),
    scan((totalNoodles, noodlesCount) => totalNoodles + noodlesCount, 0)
  )
  .subscribe((totalNoodles) => console.log(totalNoodles));
There is also an open proposal to add Iterator Helpers to the ECMAScript standard. Once it's accepted, iterating a stream should become even more ergonomic. You can track its implementation in Node.
We added a built-in reconnection mechanism to improve the developer experience and make transient error handling more straightforward. Before, you had to implement such logic on your own; since v2.0.0, the client reconnects for you. We still recommend defining a retry policy tuned to your use case's needs, e.g. for a failover scenario.
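As an illustration, a retry policy could be sketched as a small wrapper with exponential backoff. The helper name, attempt count and delays below are assumptions to tune, not part of the client API:

```typescript
// Sketch of a generic retry helper with exponential backoff. The helper
// name, attempt count and delays are illustrative assumptions; tune them
// to your failover needs.
const retry = async <T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 100
): Promise<T> => {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= attempts) throw err;
      // Back off before the next attempt, doubling the delay each time.
      await new Promise((res) => setTimeout(res, delayMs * 2 ** (attempt - 1)));
    }
  }
};

// Usage sketch: wrap a read that may hit a transient error during failover,
// e.g. retry(() => buildOrderState(client.readStream("order-123"))).
```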
We added support for preferring a read-only replica when discovering the cluster node to connect to. We also enhanced and standardised the discovery process.
Node preference can be specified using the nodePreference connection string parameter. You can use the following options:

leader - Connect to a node in the leader state. If there is no leader node, select the first from the list of allowed nodes.
follower - Connect to a node in the follower state. If there is no follower node, then try to select a leader. Otherwise, select the first allowed node.
read_only_replica - Connect to a node in one of the ReadOnlyReplica states. Otherwise, try to connect to the leader, followed by the first allowed node.
random - Connect to a random allowed node.

We recommend using the leader preference. The other options can be used, e.g., to offload traffic from the leader node. Until you observe performance issues, the default should work correctly.
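For example, the node preference goes directly into the connection string (the cluster host names below are hypothetical placeholders):

```typescript
// Build a cluster connection string with an explicit node preference.
// The host names are hypothetical placeholders for your cluster nodes.
const hosts = ["node1.db.local:2113", "node2.db.local:2113", "node3.db.local:2113"];
const connectionString = `esdb://${hosts.join(",")}?nodePreference=read_only_replica`;

console.log(connectionString);
// → esdb://node1.db.local:2113,node2.db.local:2113,node3.db.local:2113?nodePreference=read_only_replica

// The client would then be created as usual:
// const client = EventStoreDBClient.connectionString(connectionString);
```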
V2 also brings an optimisation to the JSON event encoding, which improves the encoding speed by up to 4x. This results in, at best, an extra ten events written per second.
ConnectionTypeOptions

In a previous version, we exported the individual connection types (DNSClusterOptions, GossipClusterOptions, SingleNodeOptions) to allow functions wrapping the client constructor to be correctly typed. With this change, we also made ConnectionTypeOptions obsolete. Version 2.0.0 removes this obsolete type; use the specific options instead.
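If one of your wrappers was typed against ConnectionTypeOptions, re-type it against the specific options it actually accepts. A sketch using a local stand-in type (in real code, import SingleNodeOptions and friends from @eventstore/db-client instead):

```typescript
// Local stand-in for illustration only; the real SingleNodeOptions ships
// with @eventstore/db-client.
interface SingleNodeOptions {
  endpoint: { address: string; port: number } | string;
}

// Before (1.x): const wrap = (options: ConnectionTypeOptions) => ...
// After (2.x): type the wrapper against the specific options it supports.
const toConnectionString = (options: SingleNodeOptions): string => {
  const endpoint =
    typeof options.endpoint === "string"
      ? options.endpoint
      : `${options.endpoint.address}:${options.endpoint.port}`;
  return `esdb://${endpoint}`;
};

console.log(toConnectionString({ endpoint: "localhost:2113" }));
// → esdb://localhost:2113
```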
To use the gRPC client package, you need to install it with either NPM:
npm install --save @eventstore/db-client@2.x.x
or Yarn:
yarn add @eventstore/db-client@2.x.x
Connecting to the DB server

You also need to have EventStoreDB running. The easiest way is to run it via Docker:
docker run --name esdb-node -it -p 2113:2113 -p 1113:1113 \
eventstore/eventstore:latest --insecure --run-projections=All
Note that we're using insecure mode here to speed up the setup. EventStoreDB is secure by default. For detailed instructions, check the installation guide and security recommendations.
Having EventStoreDB running, you can connect:
import { EventStoreDBClient } from "@eventstore/db-client";
const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");
We officially support the Active LTS version of Node. At the moment of writing this post, that's v14. The client should also work with at least v12, but we recommend always using the Active LTS.
The NodeJS gRPC client is open-sourced and available under the Apache 2.0 License in the GitHub repository. You can find detailed documentation and samples in our documentation. We value the open-source community, so feel free to send us pull requests, issues or other forms of contribution.
If you have more questions, we're available and happy to help on our Discuss forum.