> When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs.
I think I experienced this when I learned about LLMs: chain of thought, thinking tokens, and the idea of context as short-term and long-term memory. I began applying these concepts to everyday life, reasoning about our brains as if the concepts described how they actually function. But maybe this is more akin to the Tetris effect?
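Concretely, the metaphor I was running with looks something like the sketch below. This is purely illustrative Python, assuming a fixed-size window; `WINDOW`, `remember`, and the `long_term` dict are made-up names, not any real LLM API: a bounded context plays "short-term memory," and whatever falls out of it gets stashed in a "long-term" store.

```python
# Purely illustrative sketch of the "context as memory" metaphor.
# WINDOW, remember(), and long_term are made-up names, not a real LLM API.

WINDOW = 4  # max messages kept in the "short-term" context

short_term: list[str] = []      # rolling conversation window
long_term: dict[str, str] = {}  # evicted messages, stashed for later retrieval

def remember(message: str) -> None:
    """Append a message; once the window overflows, evict the oldest."""
    short_term.append(message)
    if len(short_term) > WINDOW:
        evicted = short_term.pop(0)
        # A real system might summarize or embed evicted text instead.
        long_term[f"note-{len(long_term)}"] = evicted

for i in range(6):
    remember(f"message {i}")

print(short_term)  # ['message 2', 'message 3', 'message 4', 'message 5']
print(long_term)   # {'note-0': 'message 0', 'note-1': 'message 1'}
```

The unsettling part is how naturally that eviction-and-retrieval loop maps onto how forgetting and recall *feel* from the inside, whether or not it describes the brain at all.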
I think it's meaningless anyway. A calculator doesn't multiply numbers the way a human does. The important part is developing systems that can do many human tasks.
Agreed. I think we, as humans, like to think in metaphors when it comes to how we perceive ourselves in the world (for example, "I am not some sort of automaton/robot" when objecting to a boss way back when).
> are [we] beginning to attribute too little mind to humans.
I don't think this way of thinking started with LLMs. Does systems-based thinking also attribute too little mind to humans?
It's true we don't know for sure that humans work like LLMs, but do we know that they don't?
Highly relevant: Reading Doesn't Fill a Database, It Trains Your Internal LLM <https://tidbits.com/2026/02/28/reading-doesnt-fill-a-databas...>