Monitoring Cassandra with Seastat

Cassandra comes with a very large array of metrics and stats, which can be queried over JMX.

There are a handful of projects which expose these stats as Prometheus metrics, providing a scrape target which you can then ingest into Prometheus:

  • JMX Exporter is a plug-and-play JAR provided by the Prometheus project. It is a collector which you can run standalone or as an agent. It’s pretty configurable and you can use patterns to generate nicer metric names. JMX Exporter isn’t limited to Cassandra; you can use it with any Java application which exposes JMX stats.
  • Cassandra Exporter is a Cassandra-specific Prometheus exporter. It is recommended to run it as a Java agent alongside your main Cassandra process so it has direct access to metrics, rather than querying over JMX, which carries a significant performance hit.

Both projects give metrics broken down by keyspace and table. This is super useful for splitting your client query counts by keyspace and table, or for debugging a keyspace or table that is consuming too many resources.

On a normal Cassandra cluster with tens of keyspaces and tables, either of these projects will likely suffice just fine.

But what happens when you have a large cluster with thousands or even tens of thousands of keyspaces and tables?

My experience with both of these projects on a large cluster was heavy resource consumption and very long scrape times (on the order of minutes, sometimes tens of minutes). The best we could do, even with a lot of culling of metrics, was scraping every 5 minutes.

To add to our woes, running these exporters as agents on a large cluster led to amplified heap pressure (thus noticeably slowing down client traffic). In certain cases, they also leaked file descriptors and used a lot of memory.1

The core issue stems from extracting everything via JMX at once. Every keyspace and table adds its own set of stats, so the total grows linearly with their number, and collating all of that in a single request meant a very large memory buffer and a lot of computation to collect and format the stats.

Hello Seastat 🏎️

Seastat is a project I built in a weekend to make extracting Cassandra stats from large clusters faster and less resource intensive. It’s a completely standalone exporter written in Go.

The main power behind Seastat is Jolokia, the agent we embed in our Cassandra processes to provide access to JMX over HTTP with JSON.

The superpower of Jolokia is bulk requests. Bulk requests allow Seastat to ask for a batch of stats at once and to filter to specific MBeans and attributes within Jolokia, without relying on filtering within Seastat. This means we can be hyper-specific about the data we request, resulting in less computation and memory usage on the JMX server within Cassandra and less CPU usage within Seastat itself.
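To make this concrete, here is a minimal Go sketch (not Seastat’s actual code) of what a Jolokia bulk read looks like: several MBean reads batched into a single HTTP request. The Jolokia port, MBean names and attributes are illustrative assumptions; adjust them for your own cluster.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// jolokiaRequest mirrors the shape of a single Jolokia "read" request.
type jolokiaRequest struct {
	Type      string   `json:"type"`
	MBean     string   `json:"mbean"`
	Attribute []string `json:"attribute,omitempty"`
}

func main() {
	// One bulk request: a single HTTP round trip that reads several MBeans,
	// already filtered down to the attributes we care about.
	reqs := []jolokiaRequest{
		{
			Type:      "read",
			MBean:     "org.apache.cassandra.metrics:type=Table,keyspace=my_ks,scope=my_table,name=ReadLatency",
			Attribute: []string{"Count", "Mean"},
		},
		{
			Type:      "read",
			MBean:     "org.apache.cassandra.metrics:type=Table,keyspace=my_ks,scope=my_table,name=WriteLatency",
			Attribute: []string{"Count", "Mean"},
		},
	}

	body, err := json.Marshal(reqs)
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:8778/jolokia/", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Jolokia answers a bulk request with a JSON array: one result per
	// request, in the same order.
	var results []map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&results); err != nil {
		panic(err)
	}
	for i, r := range results {
		fmt.Printf("request %d: status=%v value=%v\n", i, r["status"], r["value"])
	}
}
```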

Seastat decouples collecting metrics from exposing them to the Prometheus scrapers. This means your Prometheus scrapers are not left waiting whilst all the metrics are being gathered in real time (a minimal sketch of this pattern follows the list of trade-offs below).

There are a few downsides compared to other projects:

  • Seastat does not export every single Cassandra stat. The more stats we add to be scraped, the longer each gathering pass takes.
  • Seastat is always scraping metrics in the background and caching the most recent results. This does increase the risk of exposing metrics that are a few seconds stale. You can use the seastat_last_scrape_timestamp metric (the Unix timestamp of the last scrape) to calculate the skew, e.g. time() - seastat_last_scrape_timestamp in PromQL.

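Here is the decoupled pattern as a small, hypothetical Go sketch, not Seastat’s actual implementation: a background goroutine gathers and caches a rendered payload on its own interval, and the /metrics handler only ever serves the cached copy along with the timestamp of the last collection. The port, interval and example metric output are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

// metricsCache holds the most recently gathered metrics, already rendered
// in the Prometheus text exposition format.
type metricsCache struct {
	mu        sync.RWMutex
	payload   string
	scrapedAt time.Time
}

func (c *metricsCache) set(p string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.payload = p
	c.scrapedAt = time.Now()
}

func (c *metricsCache) get() (string, time.Time) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.payload, c.scrapedAt
}

// gather stands in for the slow part: bulk-querying Jolokia and rendering
// the results as Prometheus metrics.
func gather() string {
	return "cassandra_table_example_metric 1\n"
}

func main() {
	cache := &metricsCache{}

	// Background loop: collect on its own schedule, independently of
	// whenever Prometheus happens to scrape. The interval is a placeholder.
	go func() {
		for {
			cache.set(gather())
			time.Sleep(30 * time.Second)
		}
	}()

	// The /metrics handler only ever serves the cached payload, plus the
	// timestamp of the last collection so scrapers can measure staleness.
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		payload, at := cache.get()
		fmt.Fprint(w, payload)
		fmt.Fprintf(w, "seastat_last_scrape_timestamp %d\n", at.Unix())
	})
	log.Fatal(http.ListenAndServe(":9191", nil))
}
```
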
The big benefit, though, is that the same stats which originally took multiple minutes to export can now be exported in seconds. All of this means you can visualize cluster usage per keyspace and table at a very high resolution (such as every 15 seconds). Having metrics at this resolution makes it much easier to spot micro-spikes and latency jitter.

If you are operating a large Cassandra cluster with a lot of keyspaces and tables, give Seastat a try.


  1. Both of these issues may have been fixed in the respective projects by the time you read this. ↩︎

