Hi HN,
I built Actionbase at Kakao (KakaoTalk, ~50M MAU).
It started because the same features—likes, views, follows—were being rebuilt across teams, each hitting similar scaling walls.
It's been in production for years, serving Kakao services at over 1M requests per minute.
Our approach: precompute everything at write time. Reads are just lookups—no aggregation, predictable latency.
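To make that concrete, here is a minimal sketch of the write-time fan-out idea in plain Python. The names and in-memory structures are illustrative only, not Actionbase's actual API; they just show why reads stay flat.

```python
# Conceptual sketch of write-time precomputation: every derived view a reader
# might ask for (forward list, reverse list, count) is updated on the write
# path, so reads never aggregate. Illustrative names, not the Actionbase API.
from collections import defaultdict

forward = defaultdict(set)   # user -> set of items they liked
reverse = defaultdict(set)   # item -> set of users who liked it
counts = defaultdict(int)    # item -> like count

def write_like(user_id: str, item_id: str) -> None:
    """Apply a 'like' and update every precomputed index in the same write path."""
    if item_id in forward[user_id]:
        return                       # already liked; keeps indexes and count consistent
    forward[user_id].add(item_id)
    reverse[item_id].add(user_id)
    counts[item_id] += 1

def read_like_count(item_id: str) -> int:
    """Read path is a plain lookup: no scan, no aggregation."""
    return counts[item_id]

write_like("alice", "post:42")
write_like("alice", "post:42")       # a retried write is a no-op, count stays 1
print(read_like_count("post:42"))    # -> 1
```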
Currently backed by HBase. Lighter backends (e.g., SlateDB) on the roadmap.
Try it — just Docker; the Quick Start and production stories are in the README.
Genuinely curious: are there existing systems for high-volume interaction data (likes, follows, views) that I missed?
Happy to answer questions.
Looks a little over-engineered — couldn't Kafka and any key-value DB do the same job, with Redis if required?
You might be right! Kafka + KV store + Redis can definitely work for this.
Our teams typically started with MySQL, then added Kafka and Redis as they scaled. The pain wasn't the initial build—it was what happened after: every team implemented forward/reverse lists, counts, and indexes slightly differently. Retries and out-of-order events caused subtle drift. Migrations became error-prone because the "rules" for derived data lived in application code scattered across services.
That's what Actionbase does: versioned state transitions, pre-computed indexes, idempotent mutations—all in one place. If your current setup is working and correctness isn't drifting, you probably don't need this.
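For readers who want the idea rather than the API, here is a toy sketch of what versioned, idempotent mutations buy you, assuming each event carries a version number. The names and structures are hypothetical, not Actionbase's data model.

```python
# Toy illustration of versioned, idempotent toggles: each (user, item) pair keeps
# the highest version seen, so retries and out-of-order deliveries cannot drift
# the count. Purely illustrative; not the Actionbase implementation.
from dataclasses import dataclass

@dataclass
class LikeState:
    version: int
    liked: bool

states: dict[tuple[str, str], LikeState] = {}
counts: dict[str, int] = {}

def apply_like_event(user_id: str, item_id: str, version: int, liked: bool) -> None:
    key = (user_id, item_id)
    current = states.get(key)
    if current is not None and version <= current.version:
        return                                            # stale or duplicate event: ignore
    if liked and (current is None or not current.liked):
        counts[item_id] = counts.get(item_id, 0) + 1      # first like, or re-like after unlike
    elif not liked and current is not None and current.liked:
        counts[item_id] = counts.get(item_id, 0) - 1      # genuine unlike
    states[key] = LikeState(version, liked)

apply_like_event("alice", "post:42", version=2, liked=True)
apply_like_event("alice", "post:42", version=2, liked=True)    # retry -> no-op
apply_like_event("alice", "post:42", version=1, liked=False)   # out-of-order -> ignored
print(counts["post:42"])                                       # -> 1
```

The point of the sketch: retries and late arrivals become no-ops instead of silent count drift, and the rule lives in one place rather than in each service's application code.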
I wrote more about this trade-off in https://github.com/kakao/actionbase/discussions/32 — genuinely asking whether we overbuilt.
Wouldn't something like ScyllaDB be a better choice for such workloads? As long as you're fine with eventual consistency, of course.
You might be right! ScyllaDB is a solid choice—eventual consistency is often fine for interactions.
The friction we hit was less about storage and more about fragmentation: teams kept rebuilding the same features (likes, views, follows) with slightly different implementations. Counters drifted, toggle logic varied, indexes duplicated.
If you have one team and one use case, ScyllaDB could work well. Our problem was multiple teams hitting the same walls repeatedly.
That said, HBase is just the storage backend—Actionbase is the interaction layer on top. We'd consider ScyllaDB as a backend too. Currently HBase is battle-tested in production, while SlateDB would need dev effort. We'd love community input on direction: https://github.com/kakao/actionbase/discussions/144
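To clarify what "HBase is just the storage backend" means in practice, here is a hypothetical sketch of that kind of seam in Python. The interface and names are assumptions for illustration, not Actionbase's real abstraction; the point is only that the interaction layer can sit behind a narrow store interface.

```python
# Hypothetical sketch of a storage-backend seam: the interaction layer talks to a
# narrow row/column interface, so HBase vs. ScyllaDB vs. SlateDB becomes an
# implementation detail behind it. Not Actionbase's actual abstraction.
from typing import Iterable, Optional, Protocol

class InteractionStore(Protocol):
    def put(self, table: str, row_key: bytes, column: bytes, value: bytes) -> None: ...
    def get(self, table: str, row_key: bytes, column: bytes) -> Optional[bytes]: ...
    def scan_prefix(self, table: str, prefix: bytes) -> Iterable[tuple[bytes, bytes]]: ...

class InMemoryStore:
    """Stand-in backend used only to show the shape of the seam."""
    def __init__(self) -> None:
        self._tables: dict[str, dict[tuple[bytes, bytes], bytes]] = {}

    def put(self, table: str, row_key: bytes, column: bytes, value: bytes) -> None:
        self._tables.setdefault(table, {})[(row_key, column)] = value

    def get(self, table: str, row_key: bytes, column: bytes) -> Optional[bytes]:
        return self._tables.get(table, {}).get((row_key, column))

    def scan_prefix(self, table: str, prefix: bytes) -> Iterable[tuple[bytes, bytes]]:
        for (row_key, column), value in sorted(self._tables.get(table, {}).items()):
            if row_key.startswith(prefix):
                yield row_key, value

store: InteractionStore = InMemoryStore()
store.put("likes", b"post:42|alice", b"v", b"1")
print(list(store.scan_prefix("likes", b"post:42|")))   # -> [(b'post:42|alice', b'1')]
```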