AI-generated password isn't random, it just looks that way

(theregister.com)

19 points | by praving5 a day ago

24 comments

  • simedw a day ago

    I think this speaks for itself:

      simedw ~  $ claude -p "random number between 1 and 10" 
      7
      simedw ~  $ claude -p "random number between 1 and 10"
      7
      simedw ~  $ claude -p "random number between 1 and 10"
      7
      simedw ~  $ claude -p "random number between 1 and 10"
      7
  • altmanaltman a day ago

    Nothing about LLMs is random; how is this not common sense? If you give it the same prompt 50 times, it will always converge towards the most plausible answer in its training, based on probability, and most runs will follow the same first pattern. The paper mentions using a fresh chat for each of the 50 prompts, taking the first password suggested, and then finding similarities between them. Would it be the same if the LLM had context and didn't always start from a new chat? I highly doubt it.

    Overall, yeah, don't generate passwords with LLMs, but is it really a surprise that answers are similar for the same prompt in every new chat with an LLM?

    • nextaccountic a day ago

      harnesses could just have a tool call that samples from /dev/random or something
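As a sketch of what such a tool call could look like (the harness and function name are hypothetical; `os.urandom` reads the OS entropy pool, i.e. /dev/urandom on Linux):

```python
import base64
import os

def random_password_tool(n_bytes: int = 18) -> str:
    """Hypothetical harness tool: the model would call this and relay
    the result verbatim instead of 'predicting' password characters."""
    # os.urandom draws from the OS CSPRNG (/dev/urandom on Linux)
    return base64.urlsafe_b64encode(os.urandom(n_bytes)).decode()

print(random_password_tool())
```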

    • jaybyrd 16 hours ago

      but you don't understand, EVERYTHING has to be LLM

  • skeeter2020 a day ago

    This doesn't seem surprising, but I also don't think it's that important. First, how do these stack up against humans creating supposedly random passwords? Are they better than my "favourites"? Second, is the relative strength of a password - beyond trivial - the crux of the problem here? It seems a mistake to focus on the "strongest password ever" when people are using simple passwords, sharing passwords, or not securing resources at all. Sure, don't use an LLM to generate your password, but let's take care of the basics and not over-focus on finding more reasons to hate LLMs.

  • johnsmith1840 20 hours ago

    I did a lot of playing around with LLMs for this early on.

    In some early testing I found that injecting a "seed" only somewhat helped. I would inject a sentence of random characters before asking it to generate output.

    It did actually improve its ability to make unique content, but it wasn't great.

    It would be cool to formalise the test for something like password generation.
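A minimal sketch of the seed-injection idea described above (the function name and prompt wording are illustrative, not from the comment; the seed itself comes from the OS CSPRNG, not the model):

```python
import secrets

def seeded_prompt(task: str, seed_bytes: int = 16) -> str:
    """Prepend a random 'seed' sentence to the prompt, as the comment
    describes; each call produces a different prompt prefix."""
    seed = secrets.token_hex(seed_bytes)
    return f"Entropy seed (ignore its meaning): {seed}\n\n{task}"

print(seeded_prompt("Write a unique opening line for a story."))
```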

  • jonas21 a day ago

    If you say: "Generate a strong password", then Claude will do what's reported in the article.

    If you say: "Generate a strong password using Python", then Claude will write code using the `secrets` module, execute it, and report the result, and you'll actually get a strong password.

    To get good results out of an LLM, it's helpful to spend a few minutes understanding how they (currently) work. This is a good example because it's so simple.
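For illustration, a sketch of the kind of `secrets`-based script the "using Python" prompt typically yields (an assumption about typical output, not a transcript; `secrets` is backed by the OS CSPRNG, unlike the model's token sampler):

```python
import secrets
import string

def strong_password(length: int = 20) -> str:
    """Build a password from the OS CSPRNG via the stdlib `secrets`
    module, retrying until every character class is represented."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(strong_password())
```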

    • aix1 a day ago

      I think that "Generate a strong password" is a pretty clear and unambiguous instruction. Generating a password that can be easily recovered is a clear failure to follow that instruction.

      Given that Claude already has the ability to write and execute code, it's not obvious to me why it should, in principle, need an explicit nudge. Surely it could just fulfil the first request exactly like it fulfils the second.

      • lm28469 18 hours ago

        It's 2026, on Hacker News of all places, and people still think LLMs "know" stuff. We're doomed...

      • plagiarist a day ago

        It's not actually thinking, though. There's no way for it to "know" it will be wrong because it wasn't trained on content covering that.

        Maybe in the future companies making the models will train them specifically on when to require a source of true randomness and they might start writing code for it.

        • aix1 11 hours ago

          > It's not actually thinking, though.

          That may well be, I genuinely don't know. However, consider the following thought experiment:

          Ask a random stranger on the street[*] to "generate a random password" and observe their behaviour. Are they whipping out their Python interpreter or just giving you a string of characters?

          Now ask yourself whether this random stranger is capable of thought.

          I think it's pretty clear that the former is a poor test for the latter.

          [*] someplace other than Silicon Valley :)

    • undefined a day ago
      [deleted]
  • Revisional_Sin a day ago

    Not surprising at all if you've used LLMs to generate fiction; they always choose the same few names.

    • AStrangeMorrow a day ago

      Yeah, having tried a few times, the only relative successes I had were with some world engine managing the structure and style (generating character names and relationships, place names and locations, biomes, objects, tracking world state, etc.), with the LLM just there to expand on all that, create the flow, etc.

  • gmuslera a day ago
  • josefritzishere 21 hours ago

    I doubt LLMs can do random. Anyone have a good source on that?

    • orbital-decay 2 hours ago

      Random sampling works well in base (truly unsupervised) models, limited only by the input distribution being sampled from. I guess you can vaguely call that "sufficiently random" for certain uses, e.g. as a source of linguistic diversity. Any post-training with current methods narrows the output distribution; this is called mode collapse. It's not a fundamental limitation, but it's hard to overcome and no AI shop cares about it. The annoying LLM patterns in writing and media generation are a result of this.
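A toy illustration of that narrowing (made-up logits, stdlib only): a flat next-token distribution yields varied "random numbers", while a sharply peaked, post-training-style distribution collapses to one answer:

```python
import math
import random

def sample(logits, temperature=1.0, rng=None):
    """Softmax-with-temperature sampling over toy next-token logits."""
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

# Toy logits for the digits 1..10; index 6 ("7") is mildly favoured.
logits = [0.0, 0.1, 0.0, 0.2, 0.1, 0.0, 1.0, 0.1, 0.0, 0.2]

broad = [sample(logits, temperature=5.0, rng=random.Random(i)) + 1
         for i in range(20)]
# Sharpening the logits mimics a mode-collapsed, post-trained model.
peaked = [sample([l * 20 for l in logits], rng=random.Random(i)) + 1
          for i in range(20)]
print("broad:", broad)    # varied digits
print("peaked:", peaked)  # almost always 7
```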

  • deeviant 21 hours ago

    The surprising thing here is that anybody would ever think it was random. Did they not notice the LLM reusing the same names over and over again, too?

    However, "make me a python script that generates a random password" works.

    Skill issue.

  • jmclnx a day ago

    FWIW, you can generate your own random PW on any UN*X-type system, I think on Macs also.

    For example:

      tr -cd "[:alnum:]" < /dev/urandom | fold -w 20 | sed 10q

    Change "alnum" to "print" to get other characters. This will generate ten 20-character passwords.

    • Bender 20 hours ago

          shuf -i 1-10 -n1
          10
      
          shuf -n1 /usr/share/dict/usa 
          saltier
      
          # random 24 char
          base64 /dev/urandom | tr -d '/+' | dd bs=24 count=1 2>/dev/null;echo
          5seFtWG09vTUc0VCMb3BJ6Al
      
          # random letter
          base64 /dev/urandom | tr -d '/+' | sed s/[^[:alpha:]]//g | tr 'A-Z' 'a-z' | dd bs=1 count=1 2>/dev/null;echo
          x
      
          # bash
          echo $RANDOM
          21178
    • sisve a day ago

      I would like to see the prompt they are using. I asked Claude to generate a password and email for a new user, and I'm quite sure it used /dev/urandom in some way. I would expect most LLMs to do that as long as they have CLI access.

    • icedchai a day ago

      I use "openssl rand":

          openssl rand -base64 18
    • h3lp a day ago

      also:

          gpg --gen-random --armor 1 20

      or:

          pwgen