1 comments

  • Displayusername 2 days ago

    The part I'm most interested in feedback on is the query cache design.

    Compound predicates like where.and(where.eq('status', 'active'), where.gt('signal', 0)) look simple but missed the cache on every call in early versions. Each call constructs a new function object, so the cache had no way to recognise a query it had already seen, even when the predicate was semantically identical to one it had just run.

    The fix was tagging each where.* predicate with a stable string key at construction time (eq:status:active, gt:signal:0) and composing those keys recursively for and/or (and(eq:status:active,gt:signal:0)). Two separate calls to where.and(where.eq('status', 'active'), where.gt('signal', 0)) now produce the same cache key even though they're different function objects.

    Inline predicates (e => e.signal > 0) fall through to reference-identity keying, which is correct: two closures that look the same but close over different variables shouldn't share a cache entry.

    That one change is what flipped the mixed workload benchmark from LokiJS leading by ~20% to tinyop leading by ~32%. LokiJS has a native B-tree index on every field; tinyop was losing specifically because compound queries couldn't be cached and had to scan the full type set on every call.
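    A minimal sketch of the keying scheme described above. The helper names here (tag, queryCache, cacheKeyOf) are illustrative, not tinyop's actual internals; the assumption is simply that each builder attaches a stable string key at construction time, and that composition propagates keys only when every child has one.

    ```javascript
    // Attach a stable string key to a predicate function at construction time.
    function tag(fn, key) {
      fn.key = key;
      return fn;
    }

    const where = {
      eq: (field, value) => tag(e => e[field] === value, `eq:${field}:${value}`),
      gt: (field, value) => tag(e => e[field] > value, `gt:${field}:${value}`),
      // Compose child keys recursively. If any child is an untagged inline
      // closure, leave the key undefined so the cache falls back to
      // reference identity for the whole compound.
      and: (...preds) => tag(
        e => preds.every(p => p(e)),
        preds.every(p => p.key) ? `and(${preds.map(p => p.key).join(',')})` : undefined
      ),
    };

    // Cache keyed by the stable string when present, else by the function
    // object itself, so lookalike closures never collide.
    const queryCache = new Map();
    const cacheKeyOf = pred => pred.key ?? pred;
    ```

    With this in place, two independently constructed but semantically identical compounds map to the same cache slot, while an inline closure only ever hits its own entry.
    
    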

    Once they could be cached, the hot tier returns them in under 0.01ms. For comparison, LokiJS's native indexed path measures 0.09ms for simple queries and 0.72ms for compound ones.