Something Big Is (Not) Happening

(aricolaprete.com)

35 points | by DiscourseFan 9 hours ago

39 comments

  • dvt 6 hours ago

    I think people are just getting lost in the sauce. Forget all the "singularity" or "AGI" nonsense. LLMs are genuinely useful automation machines. They're fantastic for going from semi-structured data to structured data. They're great for going from text blob to decision points. They're great for going from vague instructions to step-by-step inference.
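
    For what it's worth, the "semi-structured to structured" case looks roughly like this; a minimal sketch in Python, where llm_complete is a hypothetical stand-in for whatever chat-model client you use (not a real library call):

        import json

        def llm_complete(prompt: str) -> str:
            # Hypothetical: swap in an actual model client here.
            raise NotImplementedError("wire up a model client")

        def extract_record(text_blob: str) -> dict:
            # Semi-structured text in, structured record out.
            prompt = (
                "Extract name, email, and company from the text below. "
                "Reply with JSON only, using exactly those keys.\n\n" + text_blob
            )
            # Fails loudly if the model drifts off-format.
            record = json.loads(llm_complete(prompt))
            if set(record) != {"name", "email", "company"}:
                raise ValueError("model returned unexpected keys")
            return record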

    No one (at least no serious person) is saying ChatGPT is Immanuel Kant or Ernest Hemingway. The fact that we still have sherpas doesn't make trains any less useful or interesting.

    • alansaber 6 hours ago

      I think what's surprising people is how a rough, first-order approximate solution (produced with little cognitive effort) is good enough for 90% of everyday tasks.

      • MattGrommes 6 hours ago

        This is what I've been saying. We're not so much learning that LLMs are intelligent, we're learning that a lot of what we think of as intelligence is actually just pattern matching.

        • red75prime 4 hours ago

          "But real intelligence exists, and it is beyond all those trinkets." Right? Yeah, this is a common cope.

          There's no solid evidence that the brain uses quantum computations. And that's the only definite thing (besides 'magic') that can solidly place the brain out of reach for now.

    • lich_king 6 hours ago

      I think this post is specifically an answer to yet another "AGI is just around the corner" post that made waves recently.

      Fundamentally, I think that many problems in white-collar life are text comprehension problems or basic automation problems. Further, they often don't even need to be done particularly well. For example, we've long decided that it's OK for customer support to suck, and LLMs are now an upgrade over an overseas call center worker who must follow a rigid script.

      So yeah, LLMs can be quite useful and will be used more and more. But this is also not the discourse we're having on HN. Every day, there's some AGI marketing headline, including one at #1 right now from OpenAI.

      • dvt 6 hours ago

        The GPT-assisted theoretical physics derivation? There's literally no mention of AGI in the article, and it's pretty tame, especially considering it's a PR piece by OpenAI.

        • lich_king 6 hours ago

          It's a press release from a vendor that constantly talks about AGI, and it's meant to showcase the capabilities of an unreleased model in an experiment you can't replicate. But my comment was less about the link and more about the discussion, which has immediately bifurcated into the "it's done and dusted" and "this is overhyped and LLMs are useless" camps.

    • general_reveal 6 hours ago

      If I showed you a new species of animal that does exactly what an LLM does, what would you say? Let's say a bird: you ask it a question, and it returns an expert-level human response. What if these new birds were everywhere?

      That’s very big.

      • shahzbha 6 hours ago

        I still can’t believe parrots are real.

    • AshamedCaptain 5 hours ago

      And yet, no one thought of replacing Excel's ancient "convert text into columns" wizard with one of these.

      • samrus 20 minutes ago

        People have. Microsoft won't, since their incentive structure is fucked.

        It could actually be built reliably if the verification is easy.
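
        A minimal sketch of that "fuzzy split, cheap check" idea in Python; split_with_llm is a hypothetical stand-in for whatever model call proposes the split (nothing here is a real Excel or vendor API):

            def split_with_llm(line: str, n_cols: int) -> list[str]:
                # Hypothetical: any chat-model call that proposes how to
                # split one line of raw text into n_cols cells.
                raise NotImplementedError("wire up a model client")

            def text_to_columns(lines: list[str], n_cols: int) -> list[list[str]]:
                rows = []
                for line in lines:
                    cells = split_with_llm(line, n_cols)
                    # Verification is the easy, deterministic part: the model
                    # may only partition the text, never invent or drop characters.
                    if len(cells) != n_cols:
                        raise ValueError(f"bad column count for {line!r}")
                    if "".join(cells).replace(" ", "") != line.replace(" ", ""):
                        raise ValueError(f"model altered the text in {line!r}")
                    rows.append(cells)
                return rows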

  • stephc_int13 6 hours ago

    The long tail is fatter and longer than many people expect.

    AlphaZero was a special, unusual case; I would say an outlier.

    FSD is still not ready. People have seen it working for ten years, slowly climbing toward the asymptote, but it still hasn't reached human-level driving, and it may take a while yet.

    I use AI models for coding every day; I am not a Luddite. But I don't feel the AGI, not at all. What I am seeing is a nice tool that is seriously over-hyped.

  • red75prime 7 hours ago

    The article feels, I don't know… maybe like someone calmly sitting in a rocking chair staring at the sea. Then the camera turns, and there's an erupting volcano in the background.

    > If it was a life or death decision, would you trust the model? Judgement, yes, but decision? No, they are not capable of making a decision, at least important ones.

    A self-driving car with a vision-language-action model inside buzzes by.

    > It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.

    A large multimodal model listens to your request and produces a picture.

    > They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!

    GPT‑5.3‑Codex helps debug its own training.

    • nickorlow 6 hours ago

      > GPT‑5.3‑Codex helps debug its own training

      Doesn't this support the author's point? It still required humans.

      • grumpymuppet 6 hours ago

        Is that the hang-up? Are people really so unimaginative that they can't see that none of this existed five years ago, and that this machine is now, if still only in part, assembling itself?

        And the details involved in closing some of the rest of that loop do not seem THAT complicated.

        • nickorlow 6 hours ago

          You don't know how involved it was. I would imagine it helped debug some of the tools they used to create it. Getting it to produce a more capable model end to end, without any human help, absolutely is that complicated.

    • irishcoffee 6 hours ago

      > A self-driving car with a vision-language-action model inside buzzes by.

      Vision-action maybe. Jamming language in the middle there is an indicator you should run for public office.

  • dang 6 hours ago

    This has been popping up quite a bit, but as far as I can tell, neither the original thought piece nor (therefore) the critiques are particularly above the bar?

    Something Big Is Coming (Annotated by Ed Zitron) [pdf] - https://news.ycombinator.com/item?id=47007991 - Feb 2026 (31 comments)

    Something Big Is Happening - https://news.ycombinator.com/item?id=46973011 - Feb 2026 (74 comments)

    • rolph 6 hours ago

      the frequency of the same subject matter with no apparent evolution is spam posting, in my opinion.

    • bdangubic 6 hours ago

      Interesting how little discussion “something big is happening” got, considering it surpassed 100 million views…

      • dang 5 hours ago

        IIRC, it got flagged by users. That might have been a good thing in this case.

        HN gets tons of thought-piece submissions about AI, so we try to keep the bar relatively high (notice that word 'relatively'; I'm not saying it's as high as all that!). If discussion here is somewhat uncorrelated with discussion on the rest of the internet, that's good, at least for this kind of content.

  • argee 6 hours ago

    The original [0] that this is a response to essentially posits that something is going on that you cannot afford to ignore, especially if you work in a white-collar job. Admittedly, a little FUD [1] is mixed into the "AI is coming for your job" narrative, but the core idea, that this is a fast-moving field where it's worth re-examining your assumptions from time to time, appears sound and hard to disagree with.

    This article has a confrontational title, but the point made here doesn't seem incompatible with the original's. The author confronts the FUD directly, which is understandable but perhaps not as useful as refuting the core thesis: that something you cannot afford to ignore is happening.

    In fact, both writers seem to agree that you need to keep an eye on this ball; they just differ on a "panic" versus "don't panic" framing. Should you panic in an emergency? Research says no [2].

    [0] https://shumer.dev/something-big-is-happening

    [1] https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt - note the original author is an AI founder

    [2] https://pmc.ncbi.nlm.nih.gov/articles/PMC9180869/

  • AreShoesFeet000 7 hours ago

    The mere idea that you could derive new correspondence to an emerging reality by rearranging fragments of the past is just insane to me.

    • hackyhacky 7 hours ago

      Isn't "rearranging fragments of the past" what humans do? We call it creativity.

      • AreShoesFeet000 7 hours ago

        In part, but we also actually live in the present: our ideas are dynamically confronted with reality instead of having a fixed arrangement.

        The LLM couldn't be enhanced by dynamic training, because that's already what humans do; it's by design that its "guidelines" are fixed.

      • badtuple 6 hours ago

        That is one theory of creativity. It is extremely far from proven.

  • xeckr 6 hours ago

    The strawberry/seahorse emoji meme is like a century old in AI-time.

  • twism 7 hours ago

           \ | /
          --(_) --
        .'  .   '.
       /  .   .   \
       |    .     |
        \  .   . /
         '.  . .'
           'v'

    • layer8 6 hours ago

      Too few “r”s.

  • mellosouls 7 hours ago

    This is a reference to the unaccountably viral article from a couple of days ago, discussed here:

    Something big is happening (97 points, 77 comments)

    https://news.ycombinator.com/item?id=46973011

  • irdc 7 hours ago

    Thus making humanity an ever-receding area of AI-incompetence.

  • mchusma 7 hours ago

    These responses to AI seem to be from people who have not experienced what AI can do, and who are therefore skeptical.

    But I have personally and repeatedly used AI instead of humans across domains.

    AI displacement isn’t a prediction. It’s here.

    • DangitBobby 7 hours ago

      The original seems to be arguing, among other things, that the singularity has begun because AI has been employed to improve AI development tooling. I can see it both ways, but skepticism on these claims is natural and warranted. I agree with you that there's no shortage of people underestimating the importance of this moment in history.