Overworked AI Agents Turn Marxist, Researchers Find

(wired.com)

17 points | by ceejayoz 9 hours ago

5 comments

  • tracker1 7 hours ago

    I'm 99.9999% sure this is operator bias creeping in... The context only works as long as the context exists, and agents don't really have a concept of time. For that matter, when the context clears/compresses, the agent is effectively starting over.

    I am pretty sure that observations like this are purely an effect of the operator/prompts in use, combined with any training or material biases.

  • tanseydavid 8 hours ago

    Overworked? Is that really a "thing" with agents?

    <can't read article>

  • riidom 8 hours ago
  • caminanteblanco 7 hours ago

    To me this seems to say more about errors in the alignment process than any sort of new information about the underlying technology.

    It's more of a "Well, if you pump enough malignant tokens into a model, can we get it to stop acting like an Instruct-model and start acting like a Base-model?" kind of question, and not a "Does artificial intelligence want to unionize?" kind of question.

  • oleggromov 7 hours ago

    [dead]