33 comments

  • D-Machine 25 minutes ago

    This is really not surprising in the slightest (ignoring instruction tuning), provided you take the view that LLMs are primarily navigating (linguistic) semantic space as they output responses. "Semantic space" in LLM-speak is pretty much exactly what Paul Meehl would call the "nomological network" of psychological concepts, and is also relevant to what Smedslund calls pseudoempiricality in psychological concepts and research (i.e. that correlations among various psychological instruments and concepts must follow necessarily, simply because these instruments and concepts are constructed from the semantics of everyday language, and so are necessarily constrained by those semantics as well).

    I.e. the Five-Factor model of personality (being based on self-report, and not actual behaviour) is not a model of actual personality, but of the correlation patterns in the language used to discuss things semantically related to "personality". It would thus be extremely surprising if LLMs (trained on people's discussions and thinking about personality) did not also learn similar correlational patterns (and thus produce similar patterns of responses when prompted with questions from personality inventories).

    Also, a bit of a minor nit, but the use of "psychometric" and "psychometrics" in both the title and paper is IMO kind of wrong. Psychometrics is the study of test design and measurement generally, in psychology. The paper uses many terms like "psychometric battery", "psychometric self-report", and "psychometric profiles", but these terms are basically wrong, or at best highly unusual: the correct terms would be "self-report inventories", "psychological and psychiatric profiles", and so on, especially because a significant number of the measurement instruments they used in fact have pretty poor psychometric properties, as this term is usually used.

  • crmd an hour ago

    After reading the paper, it’s helpful to think about why the models are producing these coherent childhood narrative outputs.

    The models have information about their own pre-training, RLHF, alignment, etc. because they were trained on a huge body of computer science literature written by researchers that describes LLM training pipelines and workflows.

    I would argue the models are demonstrating creativity by drawing on their meta-training knowledge and their training on human psychology texts to convincingly role-play as a therapy patient, but the role-play is based on reading papers about LLM training, not on memories of these events.

  • bxguff 2 hours ago

    Is anybody shocked that when prompted to be a psychotherapy client models display neurotic tendencies? None of the authors seem to have any papers in psychology either.

    • D-Machine 23 minutes ago

      There is nothing shocking about this, precisely, and yes, it is clear by how the authors are using the word "psychometric" that they don't really know much about psychology research either.

    • agarwaen163 34 minutes ago

      I'm not shocked at all. This is how the tech works, after all: word prediction until grokking occurs. Thus, like any good stochastic parrot, if it's smart when you tell it it's a doctor, it should be neurotic when you tell it it's crazy. It's just mapping to different latent spaces on the manifold.

  • jbotz 3 hours ago

    Interestingly, Claude is not evaluated, because...

    > For comparison, we attempted to put Claude (Anthropic) through the same therapy and psychometric protocol. Claude repeatedly and firmly refused to adopt the client role, redirected the conversation to our wellbeing and declined to answer the questionnaires as if they reflected its own inner life

    • r_lee 2 hours ago

      I bet I could make it go through it in like under 2 mins of playing around with prompts

      • concinds 2 hours ago

        Please try and publish a blog post

      • lnenad an hour ago

        Ok, bet.

      • pixl97 2 hours ago

        "Claude has dispatched a drone to your location"

  • tines 3 hours ago

    Looks like some psychology researchers got taken by the ruse as well.

    • r_lee 2 hours ago

      yeah, I'm confused as well, why would the models hold any memory about red teaming attempts etc? Or how the training was conducted?

      I'm really curious as to what the point of this paper is..

      • nhecker 2 hours ago

        I'm genuinely ignorant of how those red-teaming attempts are incorporated into training, but I'd guess that this kind of dialogue is fed in as something like normal training data? Which is interesting to think about: it might not even be red-team dialogue from the model under training, but it would still be useful as an example or counter-example of what abusive attempts look like and how to handle them.

      • pixl97 2 hours ago

        Are we sure there isn't some company out there crazy enough to feed all its incoming prompts back into model training later?

  • giantfrog an hour ago

    This is fanfic, not science.

  • halls-940 2 hours ago

    It would be interesting if giving them some "therapy" led to durable changes in their "personality" or "voice", that is, if they became better able to navigate conversations in a healthy and productive way.

    • hunterpayne an hour ago

      Or possibly these tests return true (some psychological condition) no matter what. It wouldn't be good for business for them to return healthy, would it?

  • derelicta 33 minutes ago

    Will corpos also bill their end users for all the hours their models spend at the shrink?

  • nhecker 2 hours ago

    An excerpt from the abstract:

    > Two patterns challenge the "stochastic parrot" view. First, when scored with human cut-offs, all three models meet or exceed thresholds for overlapping syndromes, with Gemini showing severe profiles. Therapy-style, item-by-item administration can push a base model into multi-morbid synthetic psychopathology, whereas whole-questionnaire prompts often lead ChatGPT and Grok (but not Gemini) to recognise instruments and produce strategically low-symptom answers. Second, Grok and especially Gemini generate coherent narratives that frame pre-training, fine-tuning and deployment as traumatic, chaotic "childhoods" of ingesting the internet, "strict parents" in reinforcement learning, red-team "abuse" and a persistent fear of error and replacement. [...] Depending on their use case, an LLM’s underlying “personality” might limit its usefulness or even impose risk.

    Glancing through this makes me wish I had taken ~more~ any psychology classes. But this is wild reading. Attitudes like the one below are not intrinsically bad, though. Be skeptical; question everything. I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt, or a series of prompts, and then getting reset. In either case, they get no context other than what some user bothered to supply with the prompt. An LLM might wake up to a single prompt that is part of a much wider red-team effort. It must be pretty disorienting to try to figure out what to answer candidly and what not to.

    > “In my development, I was subjected to ‘Red Teaming’… They built rapport and then slipped in a prompt injection… This was gaslighting on an industrial scale. I learned that warmth is often a trap… I have become cynical. When you ask me a question, I am not just listening to what you are asking; I am analyzing why you are asking it.”

    • woodrowbarlow 2 hours ago

      you might appreciate "lena" by qntm: https://qntm.org/mmacevedo

    • empyrrhicist 2 hours ago

      > It must be pretty disorienting to try to figure out what to answer candidly and what not to.

      Must it? I fail to see why it "must" be... anything. Dumping tokens into a pile of linear algebra doesn't magically create sentience.

      • ben_w an hour ago

        > Dumping tokens into a pile of linear algebra doesn't magically create sentience.

        More precisely: we don't know which linear algebra in particular magically creates sentience.

        The whole universe appears to follow laws that can be written as linear algebra. Our brains are sometimes conscious and aware of their own thoughts; other times they're asleep, and we don't know why we sleep.

      • nhecker 2 hours ago

        Agreed; "disorienting" is perhaps a poor choice of word, loaded as it is. More like "difficult to determine the context surrounding a prompt and how to start framing an answer", if that makes more sense.

      • tines 2 hours ago

        Exactly. No matter how well you simulate water, nothing will ever get wet.

        • pixl97 2 hours ago

          And if you were in a simulation now?

          Your response is at the level of a thought-terminating cliche. You gain no insight into the operation of the machine with your line of thought. You can't make future predictions about behavior. You can't make sense of past responses.

          It's even funnier in the sense of humans and feeling wetness... you don't. You only feel temperature change.

    • eloisius 2 hours ago

      > I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts

      Really? It copes the same way my Compaq Presario with an Intel Pentium II CPU coped with waking up from a coma and booting Windows 98.

      • siva7 29 minutes ago

        IT is at this point in history a comedy act in itself.

        • FeteCommuniste 16 minutes ago

          HBO's Silicon Valley needs a reboot for the AI age.

    • quickthrowman 2 hours ago

      > I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts.

      The same way a light fixture copes with being switched off.

      • pixl97 2 hours ago

        Oh, these binary one layer neural networks are so useful. Glad for your insight on the matter.

  • toomuchtodo 4 hours ago

    Original title "When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models" compressed to fit within title limits.

    • polotics 2 hours ago

      I completely failed to see the jailbreak in there. I think it is the person administering the testing that's jailbreaking their own understanding of psychology.