I built a tool that sends the same question to two Claude instances simultaneously. One has a vanilla system prompt, the other has 3 mathematical axioms from a dimensional ontology paper (~2K tokens) embedded in it.
The enhanced version never uses jargon from the paper. It just reasons differently — more structural, fewer lists, deeper "why" answers.
Example: "Why can't humans know their own lifespan?"
Vanilla: lists factors (genetics, complexity, unpredictable events)
Enhanced: "Like a point walking on a line — to see the whole line you'd need to step outside it, but as a point that's impossible. Not knowing your lifespan isn't a flaw, it's a structural condition of living inside time."
Same model (Sonnet). Same question. Only the system prompt differs.
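The setup above can be sketched in a few lines. This is a minimal, hypothetical version assuming the `anthropic` Python SDK; `AXIOMS` is a placeholder for the ~2K-token excerpt from the paper, and the model id is illustrative.

```python
# Minimal sketch of the two-prompt comparison tool.
# Assumes the `anthropic` package; AXIOMS is a placeholder for the
# ~2K-token axioms excerpt, and the model id is illustrative.
from concurrent.futures import ThreadPoolExecutor

AXIOMS = "..."  # paste the three axioms from the paper here

VANILLA_SYSTEM = "You are a helpful assistant."
ENHANCED_SYSTEM = VANILLA_SYSTEM + "\n\n" + AXIOMS


def ask(client, system, question, model="claude-sonnet-4-20250514"):
    """Send one question to Claude with the given system prompt."""
    resp = client.messages.create(
        model=model,
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text


def compare(client, question):
    """Fire both requests in parallel; return (vanilla, enhanced) answers."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        vanilla = pool.submit(ask, client, VANILLA_SYSTEM, question)
        enhanced = pool.submit(ask, client, ENHANCED_SYSTEM, question)
        return vanilla.result(), enhanced.result()


# Usage (requires ANTHROPIC_API_KEY in the environment):
# import anthropic
# client = anthropic.Anthropic()
# v, e = compare(client, "Why can't humans know their own lifespan?")
```

Injecting the client keeps the comparison logic testable without network access; the only variable between the two calls is the system prompt, which is the whole point of the experiment.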