Top AI models fail at >96% of tasks

(zdnet.com)

23 points | by codexon a day ago

11 comments

  • codexon a day ago

    This paper introduces a new benchmark composed of real remote-work tasks sourced from the freelancing site Upwork. The best commercial LLMs, including Opus, GPT, Gemini, and Grok, were tested.

    Models released a few days ago, Opus 4.6 and GPT 5.3, haven't been tested yet, but given their performance on other micro-benchmarks, they will probably not fare much differently on this one.

    • kolinko a day ago

      They didn't test Opus at all, only Sonnet.

      One of the tasks was "Build an interactive dashboard for exploring data from the World Happiness Report." -- I can't imagine how Opus 4.5 could've failed that.
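
      For scale, here's a rough sketch of that task (my own illustration, not from the study), assuming a hypothetical happiness.csv export of the report with Country, Year, and Score columns:

        # Minimal interactive dashboard sketch: Streamlit UI + Plotly map.
        # happiness.csv is a hypothetical local export of the World
        # Happiness Report with Country, Year, and Score columns.
        import pandas as pd
        import plotly.express as px
        import streamlit as st

        df = pd.read_csv("happiness.csv")
        # Slider to pick a report year interactively.
        year = st.slider("Year", int(df["Year"].min()), int(df["Year"].max()))
        subset = df[df["Year"] == year]
        # World map colored by happiness score for the selected year.
        st.plotly_chart(px.choropleth(subset, locations="Country",
                                      locationmode="country names",
                                      color="Score"))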

      • codexon 4 hours ago

        Check the link to the study. It has been updated with Opus 4.5 results.

  • tessitore a day ago

    This post really should be edited to say 96% of tasks posted on Upwork, since we would all expect that to happen.

  • Venn1 a day ago

    ChatGPT: when you want spellcheck to argue with you.

  • scotty79 11 hours ago

    Kinda sus that the least-known model did best and that none of the more recent models were tested. Capabilities grow very fast, so tasks that now routinely succeed rarely ever succeeded even half a year ago.

    • rsynnott 7 hours ago

      I mean, performance is so bad across the board that the ranking is likely essentially random. Monkeys accidentally typing a bit of Shakespeare.

  • zb3 a day ago

    You think they don't? You think AI can replace programmers, today?

    Then go ahead and use AI to fix this: https://gitlab.gnome.org/GNOME/mutter/-/issues/4051

    • stoneforger 9 hours ago

      Rewrite it in React, it will.