Event Store on Kubernetes

Zachary Schneider  |  28 February 2020

With the recent rise in popularity of Kubernetes, and its excellence at hosting and orchestrating stateless workloads, we regularly receive requests to explain Event Store’s suitability for deployment on the purported king of cluster schedulers.

To summarize: for the reasons described below, we recommend deploying Event Store via native packages to bare-metal systems or isolated virtual machines to get the most out of your Event Store database deployment.

Factors to consider

We understand that Kubernetes is being sold as a one-size-fits-all solution to orchestration problems, and StatefulSets would seem to be the answer. However, Kubernetes on its own does not satisfy several requirements for orchestrating distributed databases. There are also several technical properties to consider that may affect the availability, performance, and reliability of your database.

Suitability questions:

  • Is my workload a production workload?
    • While Kubernetes is fine for development setups, depending on the environment it may not be suitable for a production database workload.
  • How important is my data?
    • Kubernetes may be fine for hosting databases behind lossy data systems, for example those that store telemetry or monitoring data. For more important data, the risks should be weighed carefully.
  • How much downtime during upgrades is acceptable?
    • Upgrades utilizing the Kubernetes deployment resource may not ensure zero downtime in all environments.

Distributed databases on Kubernetes

Process isolation and page cache issues

Linux control groups (cgroups) do not provide perfect isolation for processes that use the Linux page cache. Databases require consistent page cache availability to ensure read and write performance. As such, database processes should not be co-located with other processes that make heavy use of the page cache. A more in-depth description of the issue can be found here: https://engineering.linkedin.com/blog/2016/08/don_t-let-linux-control-groups-uncontrolled

Network latency risks

A stable, low-latency network is a requirement for a well-performing distributed database. Cluster membership and consensus for reads and writes will affect performance and availability if the related operations must be retried or time out. Overlay Container Network Interface (CNI) implementations that rely on encapsulation can add latency to network operations, affecting performance. Additionally, CNI upgrades may interrupt network connectivity, affecting the availability of the database.

Storage risks

The Container Storage Interface (CSI) specification does not include a way for storage classes to declare an input/output operations per second (IOPS) constraint to ensure that requested volumes will perform as expected. This is less of a concern for CSIs that target cloud-based persistent volumes, as the IOPS a volume provides is generally tied to its size. For on-premises CSIs, one must ensure that a provisioned volume will deliver the IOPS your workload requires. Occasionally, volume re-attachment during process migration may hang or fail, impacting the availability of the database.
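As an illustration, a cloud storage class can be asked for volumes whose IOPS are provisioned explicitly rather than left implicit. The sketch below assumes the AWS EBS CSI driver and an io1 volume type; the driver name and parameter keys are assumptions to verify against your provisioner’s documentation.

```yaml
# Hypothetical StorageClass requesting provisioned IOPS from a cloud CSI driver.
# The driver name and parameter keys are assumptions; check your provisioner's docs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: eventstore-io1
provisioner: ebs.csi.aws.com            # assumed: AWS EBS CSI driver
parameters:
  type: io1                             # provisioned-IOPS volume type
  iopsPerGB: "50"                       # IOPS scale with the requested size
  fsType: ext4
reclaimPolicy: Retain                   # keep the volume (and data) if the claim is deleted
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```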

Upgrade risks

The Kubernetes Deployment controller does not, on its own, provide the means to perform a rolling update of a distributed database without incurring some downtime. Knowledge of cluster state is required to ensure that quorum requirements are met and the cluster remains in service during an upgrade.
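The controller cannot be made quorum-aware by configuration alone, but a PodDisruptionBudget at least prevents voluntary disruptions, such as node drains during a Kubernetes upgrade, from taking more than one member offline at a time. This minimal sketch assumes a three-node Event Store cluster whose pods carry the label app: eventstore (a label name chosen for illustration).

```yaml
# Minimal sketch: keep at least two of an assumed three Event Store members
# available so that quorum survives any single voluntary disruption.
apiVersion: policy/v1beta1              # policy/v1 on newer Kubernetes releases
kind: PodDisruptionBudget
metadata:
  name: eventstore-pdb
spec:
  minAvailable: 2                       # two of three members must stay up
  selector:
    matchLabels:
      app: eventstore                   # assumed pod label
```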

Recommendations for deployment on Kubernetes

  • Do not co-locate Event Store processes with other running components
    • Utilize labels to ensure that Event Store runs on dedicated nodes.
    • Utilize anti-affinity to ensure that only a single Event Store process runs per node (see the example manifest after this list).
    • Ideally ensure that there are dedicated labeled spares for Event Store should a node fail.
  • Do not run Event Store on Kubernetes clusters that utilize overlay CNIs
    • Prefer layer 2 or native CNIs, or those that do not rely on userland components.
  • The Event Store deployment will only be as reliable as the combination of the Kubernetes deployment, the chosen CNI, and the chosen CSI.
    • Google’s GKE currently satisfies the CNI and CSI requirements, while also providing for the most seamless upgrade experience.
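To make the co-location recommendations concrete, here is a minimal sketch of the scheduling section of a pod template that pins Event Store to dedicated, labeled nodes and spreads members one per node. The label keys, the taint, and the pod label are illustrative assumptions, not fixed names.

```yaml
# Sketch of scheduling constraints for an Event Store pod template.
# All label and taint names are illustrative; adapt them to your cluster.
spec:
  nodeSelector:
    dedicated: eventstore                      # assumed label on dedicated nodes
  tolerations:
    - key: dedicated                           # assumed taint keeping other workloads off
      operator: Equal
      value: eventstore
      effect: NoSchedule
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: eventstore                  # assumed pod label
          topologyKey: kubernetes.io/hostname  # at most one member per node
```

Dedicated nodes keep other page-cache-heavy workloads away from the database, and the required anti-affinity rule guarantees that losing a single node can take down at most one cluster member.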

Upcoming improvements in Event Store

We are committed to ensuring that Event Store is deployable and runs well on Kubernetes. To achieve this goal, we are working on the following features:

  • Version 20.02
    • A liveness endpoint has been added, which can be used for liveness checks and startup probes (a hedged probe configuration is sketched after this list). This feature should help ensure minimal downtime for rollouts using the Kubernetes Deployment resource.
  • Future releases
    • A plugin for cluster member discovery via the Kubernetes API.
    • An operator that will improve the operability, reliability, and upgrade process of Event Store deployments on Kubernetes.
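As a rough illustration of how the new liveness endpoint could be wired up, the container fragment below assumes it is served on the default HTTP port 2113 at /health/live; the endpoint path, image tag, and probe timings are assumptions to verify against the documentation for your version.

```yaml
# Sketch of startup and liveness probes against the assumed /health/live endpoint.
containers:
  - name: eventstore
    image: eventstore/eventstore         # illustrative; pin a specific 20.02+ tag
    ports:
      - containerPort: 2113              # default HTTP port
    startupProbe:
      httpGet:
        path: /health/live               # assumed endpoint path; verify for your version
        port: 2113
      periodSeconds: 10
      failureThreshold: 30               # allow up to ~5 minutes for startup
    livenessProbe:
      httpGet:
        path: /health/live
        port: 2113
      periodSeconds: 15
      timeoutSeconds: 5
```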

Zach joined Event Store as Head of Cloud in early 2020 and is a Cloud SaaS, IaaS, and PaaS industry veteran. Formerly he designed, built, and operated cloud products and streaming data systems at Rackspace and Instana. He loves distributed systems, especially when applied to streaming data problems. He lives in Austin, TX, where he enjoys nature, as well as the craft food, coffee, and cocktail culture. On the weekends he can be found running or cycling, or enjoying brunch with his son and wife. He has now left Event Store.