1 comment

  • thebasedcapital 11 hours ago

    Hi HN! I built BrainBox to give AI coding agents procedural memory using Hebbian learning — "neurons that fire together wire together."

    The problem: every time you start a new Claude Code session, the agent has zero memory of your codebase. It greps, reads, greps again — burning tokens rediscovering files it already found yesterday.

    BrainBox learns passively from every file read, edit, and search. Files accessed together form synaptic connections that strengthen with repetition. After a few sessions, the agent recalls auth.ts directly instead of searching for it — like muscle memory.
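
    To make that mechanism concrete, here is a minimal sketch of a Hebbian co-access update over an in-memory graph. The names (Synapse, recordAccess, WINDOW, LEARNING_RATE) and constants are illustrative assumptions, not BrainBox's actual API:

    ```typescript
    // Hedged sketch of Hebbian co-access learning (not BrainBox's actual code).
    // Files accessed within the same sequential window strengthen their shared synapse.

    type Synapse = { weight: number; lastUpdated: number };

    const WINDOW = 5;           // co-access window measured in events, not wall-clock time
    const LEARNING_RATE = 0.1;  // how fast "fire together" wires together

    const synapses = new Map<string, Synapse>(); // key: "fileA|fileB"
    const recentAccesses: string[] = [];         // sliding window of recent file paths

    function pairKey(a: string, b: string): string {
      return a < b ? `${a}|${b}` : `${b}|${a}`;
    }

    export function recordAccess(file: string, now = Date.now()): void {
      // Hebbian rule: every file still inside the window co-fires with the new one.
      for (const prev of recentAccesses) {
        if (prev === file) continue;
        const k = pairKey(prev, file);
        const s = synapses.get(k) ?? { weight: 0, lastUpdated: now };
        s.weight += LEARNING_RATE * (1 - s.weight); // saturating update, stays in [0, 1)
        s.lastUpdated = now;
        synapses.set(k, s);
      }
      recentAccesses.push(file);
      if (recentAccesses.length > WINDOW) recentAccesses.shift();
    }
    ```

    A saturating update like this creeps toward but never reaches 1.0, which is consistent with the 0.996 Grep→Read pathway reported below.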

    Key ideas:

    - Synapses form between co-accessed files (sequential window, not time-based)
    - Myelination gives frequently-used paths instant recall
    - Multi-hop spreading activation discovers files through indirect connections (A→B→C)
    - Multiplicative confidence scoring prevents hub nodes from dominating (fan effect from Anderson 1983)
    - Exponential decay keeps the network clean
    - Error→fix pair learning: remembers which files fix which errors
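
    A rough sketch of how the recall side could combine those ideas, with multi-hop spreading activation, confidence multiplied along the path and divided by each node's fan-out, and exponential decay. All names and constants here are assumptions, not the library's implementation:

    ```typescript
    // Hedged sketch of multi-hop spreading activation with a fan effect and decay.

    type Edge = { to: string; weight: number; lastUpdated: number };
    type Graph = Map<string, Edge[]>; // adjacency list: file -> outgoing synapses

    const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000; // assumed one-week half-life

    // Exponential decay: stale synapses fade instead of accumulating forever.
    function decayed(weight: number, lastUpdated: number, now = Date.now()): number {
      return weight * Math.pow(0.5, (now - lastUpdated) / HALF_LIFE_MS);
    }

    // Confidence multiplies along the path and is divided by the source's fan-out,
    // so hub files with hundreds of connections can't dominate recall
    // (the fan effect, Anderson 1983).
    function recall(graph: Graph, seed: string, maxHops = 3): Map<string, number> {
      const activation = new Map<string, number>([[seed, 1]]);
      let frontier = new Map<string, number>([[seed, 1]]);

      for (let hop = 0; hop < maxHops; hop++) {
        const next = new Map<string, number>();
        for (const [node, conf] of frontier) {
          const edges = graph.get(node) ?? [];
          const fan = Math.max(1, edges.length);
          for (const e of edges) {
            const passed = conf * (decayed(e.weight, e.lastUpdated) / fan);
            if (passed <= (activation.get(e.to) ?? 0)) continue;
            activation.set(e.to, passed);
            next.set(e.to, passed);
          }
        }
        frontier = next;
      }
      activation.delete(seed);
      return activation; // file -> confidence that it is relevant right now
    }
    ```

    Ranking by that confidence is what would let the agent jump to an A→B→C target it has not touched in the current session; the same graph, keyed by error signatures instead of file paths, would cover the error→fix pairs.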

    Production results after real usage: 79 neurons, 3,554 synapses, ~9% token savings. The strongest learned pathway is Grep→Read (weight 0.996) — the universal "search then read" pattern.

    The research gap that motivated this: published Hebbian learning papers exist at L1 (model internals) but nobody has applied it to L3 (agent behavioral patterns — file access, tool chains, error→fix pairs). The closest historical precedent is Fido (1991) — associative memory for hardware cache prefetching.

    Install: `npm install brainbox-hebbian` — hooks into Claude Code automatically via PostToolUse + UserPromptSubmit. TypeScript, MIT, ~2K LOC.
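
    For context on the wiring: Claude Code hooks receive a JSON payload on stdin, so a PostToolUse handler only needs the tool name and file path to feed the learner. This is an illustrative handler, not the package's actual entry point; it assumes the payload exposes tool_name and tool_input.file_path and reuses the recordAccess sketch above:

    ```typescript
    // Hedged sketch of a PostToolUse hook handler (not brainbox-hebbian's real entry point).
    import { recordAccess } from "./hebbian"; // hypothetical module from the earlier sketch

    async function main(): Promise<void> {
      const chunks: Buffer[] = [];
      for await (const chunk of process.stdin) chunks.push(chunk as Buffer);
      const event = JSON.parse(Buffer.concat(chunks).toString("utf8"));

      const tool: string = event.tool_name ?? "";
      const filePath: string | undefined = event.tool_input?.file_path;

      // Only file-touching tools carry a file_path here; search results would need
      // their own parsing.
      if (filePath && ["Read", "Edit", "Write"].includes(tool)) {
        recordAccess(filePath);
      }
    }

    main();
    ```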

    Happy to answer questions about the neuroscience mapping, the math, or anything else.