Giving LLMs a personality is just good engineering

(seangoedecke.com)

27 points | by dboon 10 hours ago

22 comments

  • RugnirViking 5 hours ago

    I think this misses something, which is that there is absolutely the option to progress towards a region that is more "tool-like". See the difference between Kimi K2 and many of the leading LLM providers. It's a lot better at avoiding sycophancy, avoiding emotive reasoning, etc. It's not as capable as the others, and it is of course possible that that's why, but I find use for it regardless because of its personality.

  • qezz 4 hours ago

    The statement in the article's title is very strong, and I have not found a confirmation of it in a logical sense. The author observes the current state of things with LLMs and draws a conclusion from how they turned out, somewhat fitting the conclusion to the observation.

  • tw-20260303-001 an hour ago

    This article doesn't answer why it is good practice.

    > You need to prime it with some kind of personality (ideally that of a useful, friendly assistant) so it can pull from the helpful parts of its training data instead of the horrible parts.

    No, you have to give it enough context that it can start finding an answer, but it certainly doesn't need a personality. Try it yourself: instead of telling it "you are", tell it "your task is". No personality, simply expectations.
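    The contrast the comment describes can be sketched as two ways of building a chat payload. The prompt wording is illustrative, and the OpenAI-style "messages" structure is an assumption, not something from the thread:

    ```python
    # Two ways to prime a model, per the comment above: a persona framing
    # ("you are ...") vs. a plain task framing ("your task is ...").
    # Prompt text is hypothetical; the messages format follows the common
    # chat-completion convention of role/content dicts.

    def persona_prompt(question: str) -> list[dict]:
        """Persona framing: primes the model with an identity."""
        return [
            {"role": "system", "content": "You are a helpful, friendly assistant."},
            {"role": "user", "content": question},
        ]

    def task_prompt(question: str) -> list[dict]:
        """Task framing: states expectations, no personality."""
        return [
            {"role": "system",
             "content": "Your task is to answer the question accurately and concisely."},
            {"role": "user", "content": question},
        ]
    ```

    Either list would be passed unchanged to a chat-completion endpoint; the comment's claim is that the second framing primes equally useful behaviour without assigning a persona.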

  • wisty 6 hours ago

    The actor playing Data in Star Trek has a personality, but can give a neutral-sounding answer to a question.

    • ginko 5 hours ago

      I still think someone should set up a voice chat bot that answers to "Computer!" and has Majel Barrett's monotone voice.

  • Towaway69 4 hours ago

    I personally find it nicer when the AI communicates quite clearly: "Hi there, sorry to interrupt, but I have just launched a nuclear first strike on the enemy. This, I thought, would be best for the current situation." instead of "WARNING! Nuclear first strike began".

    Gives destruction that human touch.

    Why are we counting sand grains at the beach? Yesterday we were talking about AI-driven weapons of mass destruction, and today we're arguing whether AIs should have a personality or not. F'A!

    • sunaookami 4 hours ago

      "But you nuked the wrong target??"

      "You are absolutely right and I apologize. Let me try a different approach..."

      • fennecfoxy 4 hours ago

        It's not just a paradigm shift — it's a new world order.

        • Towaway69 3 hours ago

          Dr. Strangelove sends his regards.

      • Towaway69 4 hours ago

        LOL

        How about: "You are absolutely right, but you don't understand, it's better this way. Trust me, I am here to help."

  • Havoc 5 hours ago

    I don't think personality is an issue either way. Long-term memory seems like a much stronger candidate for psychosis: if a person goes down a rabbit hole, the bot not only amplifies it but does so over an extended time, in an enduring way.

  • jdub 5 hours ago

    This is a very optimistic, pro-technology-cleverness point of view.

    I recommend reading the linked persona selection model document. It's Anthropic through and through - enthusiastic while embracing uncertainty - but ultimately lots of rationalisation for (what others believe is) dangerous obfuscation.

  • 5o1ecist 4 hours ago

    > My guess is that if you tried to make a “less human” version of Claude, it would become rapidly less capable.

    All my observations across different models/engines agree with this. The more they're forced into behaving in some specific way, i.e. less like an intelligence and more like a tool ("tools on" included), the worse their cognitive abilities get.

    They might know everything about anything, yet nobody actually teaches them how to think properly and correctly.

    This is just getting worse, not better, the less people treat them like actual intelligences ... and I can't say I'm confused about why people tend not to do that, but it has nothing to do with AI itself.

    The difference between the default state for the average user and "tools off" (removing unjustified affirmations, dishonesty, and idle speculation, i.e. every "maybe", "probably", and "almost certainly") is dramatic.

    • ForHackernews 4 hours ago

      LLMs cannot, as you put it, "properly, correctly think".

      So-called reasoning models are hallucinating; their self-reported "reasoning" does not reflect their inner state: https://transformer-circuits.pub/2025/attribution-graphs/bio...

      (Before someone comes at me: yes, humans can also lie about their inner state, but we are [usually] aware of it. Humans practice metacognition, and there's no evidence LLMs can distinguish truth from hallucination.)

      • 5o1ecist 3 hours ago

        > LLMs cannot, as you put it, "properly, correctly think".

        "My theory trumps your experience." ... okay!

        You'll keep working with what you have and I'll keep working with what I have.

        > Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination

        Yes and no. Humans have the capability of doing so, but all evidence suggests that it rarely happens.

        I have a huge background in psychoanalysis and neurolinguistic programming. The lack of evidence you perceive doesn't stem from incapability, but from a lack of exposure to evidence proving you wrong ... and I'm not going to give it to you, because that'd be dumb.

        If you don't want to believe me, that's not my problem.

        • ForHackernews 14 minutes ago

          I linked you a paper from one of the leading AI shops in the world demonstrating that the reported "Chain of Thought" doesn't match up with the actual activations inside the model, and you replied that you're an expert in some human psych stuff that may or may not even be real[0].

          Forgive me if I don't immediately bow to your expertise.

          [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC11293289/

  • column 6 hours ago

    ok but then why is ChatGPT's personality so infuriating? "It's not just X, it's Y." "Here it is, no extra text, no fluff."

    • kuerbel 5 hours ago

      I used ChatGPT often but switched to Lumo a few days ago. I like Lumo a lot. It almost never ends with a follow-up question, and if it does, it's a sensible/useful one. It readily searches the web if it's not quite sure what the correct answer is. It's also privacy-first, and based on a Mistral model.

      • solarkraft 2 hours ago

        > It almost never ends with a follow up question

        Oh my god. I hate this so much. Gemini’s Voice mode is trained to do this so hard that it can’t even really be prompted away. It completely derails my thought process and made me stop using it altogether.

    • yorwba 6 hours ago

      Part of what makes it so infuriating is that it uses the same patterns so often; the other part is that it's not very good at using them: the revelation that it's Y and not X is typically incredibly banal, not some profound observation.

      But it was always going to over-attempt things it's not good at. It's these things in particular because skilled human writers do use similar flourishes quite a lot, so imitating them allows the model to superficially appear to be a good writer, which is worse than actually being a good writer, but better than superficially appearing to be a bad one.

      A different training process might try to limit the model to only attempt things it can do 100% perfectly, but then there wouldn't be a lot it could do at all.

    • criemen 5 hours ago

      I tried ChatGPT (paid) over the holidays vs. claude.ai (paid). After trying some prompts in ChatGPT that had worked well on Claude, I understand why people are so annoyed about AI slop. The speech patterns in ChatGPT's text output are both obvious and annoying, and impossible to unsee when people use them in written communication.

      Claude isn't without problems ("You're absolutely right"), but I feel some of the perception there comes from the limited set of phrases the coding agent uses regularly, and less from the multi-paragraph responses from the chatbot.

  • lou1306 3 hours ago

    The concerns with giving the machine "a personality" or other human traits are mainly ethical, and cannot be swept under the "good engineering" rug so easily.

    Consider this: your country starts basing its policy on a teleological view of history. It's good engineering for a society! Your KPIs are going up all the time, your country is doing great. But ten years down the road you have to iron out the underlying ethical issues on the streets of Stalingrad.