The Whole Anthropic Kerfuffle

(twitter.com)

61 points | by tosh 5 hours ago

81 comments

  • qaz_plm 4 hours ago
  • jadar 4 hours ago

    What's the subtext here? I don't live under a rock, but there have been so many Anthropic kerfuffles that I have lost track.

  • nickdothutton 4 hours ago

    I really wish Anthropic would consult some monetization experts. Their recent strategy has been all over the place and they are burning early goodwill.

    • Henchman21 24 minutes ago

      They asked Claude, what more can they do?!

    • ochronus 3 hours ago

      I don't think they are clueless, but rather, struggling. Being an AI provider is a money burner, and they probably don't have enough fuel for the fire anymore, so they are trying things to squeeze more $$$ and limit usage at the same time.

      • potsandpans 16 minutes ago

        I don't really expect a company run by radical effective altruists / ai doomers to have good business sense.

        They had a decent product and squandered a lot of dev goodwill almost overnight.

  • ochronus 4 hours ago

    Yeah, Claude Code is mostly unusable at this point. It's been in constant decline for a good 1-2 months. It's more like a scam now.

    • quietsegfault 4 hours ago

      What are the biggest problems you've seen? Is it mostly related to limits for the subscription plans?

      Within my circles (mostly big enterprises), I see more and more of my friends using Claude, and spending money on it, so they must be getting some sort of value out of it. For my uses, I've also been successful with Claude Code, though someone else is paying for my tokens.

      • ochronus 3 hours ago

        Yes, it's related to the limits of the subscription plans. 2ish months ago the same sub started hitting limits earlier and earlier on very similar tasks/codebases. Then they said, "Oh, sorry, it was a caching bug, now it's fixed." It wasn't. Then a couple more bugs. Still no fix. Recently they announced "doubling the limits" (made possible by the Grok deal) - still hitting limits super fast compared to how it used to be 2+ months ago. Moreover, the models got somewhat dumber and slower, too.

        • electric_mayhem 3 hours ago

          Also, they seem to ignore auto mode, seizing on any reason to stop, and sometimes just stopping despite saying they’re proceeding with something.

          Basically, ongoing enshittification.

  • frangonf 4 hours ago

    > Impacting devrel is just collateral damage, which is on par for a company which believes coding is going away any time now.

    This makes sense.

  • radu_floricica 3 hours ago

    Max x20 usage is so cheap, that it's pretty obviously subsidized. And the non-interactive usage is the easiest to explode. They could play games with "reasonable use" and whack a mole accounts that are obviously farming it, but their approach is ultimately more fair.

    And I say this as somebody who just discovered agent orchestration and would absolutely love their limits to remain as they are.

  • parliament32 4 hours ago

    It really is the doordash/uber playbook all over again eh? Sell at a massive loss, gain userbase, then gradually boil the frog by adding fees, removing features, and increasing prices. Except instead of doing this a few years down the line, they're speedrunning the tighten-the-noose phase.

    Unfortunately the competition is nipping at their heels so there's a good chance this blows up in their faces.

    • afavour 4 hours ago

      I still think/hope/pray the future will be on-device models that don't need constant retraining. That will blow up the existing business model but I think a company could still make good money with a "majority local/remote for the really challenging stuff" model.

      The problem is that today's AI companies have taken on so much funding that a reasonable, not crazy profit ratio isn't enough for them.

      • parliament32 3 hours ago

        The future is already pretty much here. Note the recent stories about Chrome adding a local model, not to mention the Googlebook demo (if it works as advertised, there's a 0% chance you could get that kind of latency with a non-local model).

      • davidw an hour ago

        If it continues to be a numbers game - the more resources you throw at it, the better it is - then on-device is always going to be not as good. I guess it might be good enough for some uses?

        I kind of loathe the move away from a world where we could control our own computers and run our own software on them.

    • doikor 4 hours ago

      It was very clear from the beginning purely from how much it costs to train and run the inference.

      Someone has to pay the 7 trillion (the current projections for the AI datacenter build up)

    • infecto 4 hours ago

      I think people are being too generous with these comparisons. Not defending Anthropic but at the same time they are releasing new features and adjusting cost at pretty record speed for a new industry. Uber/doordash were subsidizing cost for what felt like a decade. Anthropic and related companies are adjusting price within months.

      To me the bigger takeaway is that these businesses are seeing massive volume of use and figuring out how to price the products accordingly.

      • hdndjsbbs 4 hours ago

        They have to speedrun boiling the frog because the capital expenditure is insane. Remains to be seen just how fast you can boil a frog before the frog notices

        • infecto 4 hours ago

          Disagree. Most businesses of size are going to enterprise agreements, which are all on-demand rates. Those rates have not been changing, other than the underlying model API rates fluctuating. You could make the argument that they are secretly using that as the price lever.

          With volume enterprises can already negotiate lower token rates. I don’t see a boiling the frog situation.

          • parliament32 3 hours ago

            They will still need to increase costs for enterprise to be profitable, they're just going to be more greasy about it. Claude 5 will cost 20% more but not be 20% better, more shenanigans with "oh no we had a bug in our cache system :^)", or this gem from the current enterprise pricing page: "Opus 4.7 uses a new tokenizer... may use up to 35% more tokens for the same fixed text".
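            The compounding in that last "gem" is easy to underestimate; a quick sketch, using the comment's own (hypothetical, unconfirmed) numbers:

```python
# Sketch: how a price bump and a heavier tokenizer compound.
# Both figures are the hypothetical numbers quoted in the comment
# above, not confirmed Anthropic pricing.
price_increase = 1.20   # "costs 20% more"
token_inflation = 1.35  # "up to 35% more tokens for the same fixed text"

# A model that charges 20% more per token while consuming 35% more
# tokens multiplies out, not adds up.
effective = price_increase * token_inflation
print(f"Effective cost multiplier: {effective:.2f}x")  # 1.62x
```

So a "20% price increase" paired with a 35% heavier tokenizer would land closer to a 62% effective increase.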

    • intrasight 4 hours ago

      There is competition. And there is no moat or network effect. I don't think it'll blow up in their faces if they provide a product and service that people value, which demonstrably they have. But it may not be so lucrative for them or their shareholders.

    • ealready_value 4 hours ago

      As far as I can tell, it has seemed very clear that this was the playbook for about a year now. It's been regularly assumed they're selling plans as a major loss-leader, because people can "spend" thousands of dollars a month on a plan if they were charged at API rates. I think there's good evidence that even the API rates are sold at a loss.

      I think it's assumed in the LLM model business that the models themselves are not a good moat; the next model by another company is just as likely to be as good as the current one. So companies like Anthropic have to tighten the noose slowly to start recovering their costs. This appears to be one of those steps.

      • infecto 4 hours ago

        The simpler explanation is probably some mix of marketing and also an expected use from people paying for a plan. The money to be made is not from plans ever. It’s in everyone’s best interest for these companies to accurately oversubscribe plans. Enterprise is where the money is to be made and I don’t feel that pricing has changed much on that end.

        • ealready_value 4 hours ago

          I had never thought of it that way, but it seems very likely that Enterprise oversubscribing is in the mix. Which does tie in nicely with this change; if a few devs are using their Max plan to programmatically run parts of the business, that could break the oversubscription assumption.

          • infecto 4 hours ago

            There are no Enterprise plans though, only on-demand usage, which at worst is charged retail token rates, with volume-negotiated token rates available.

    • Aboutplants 4 hours ago

      “Unfortunately”?

      Uh, that’s a good thing

    • AlexandrB 4 hours ago

      How do we tell the doordash/uber playbook from the moviepass playbook? Because the latter would be awful to build your business on.

      • parliament32 4 hours ago

        Moviepass (afaik) was an attempt at the exact same playbook, it just failed.

        Anthropic will also fail when the competition is.. near-equivalent-capability DeepSeek/Qwen/Llama on a $1k GPU with a break-even of 5 months of subscription costs. The value is simply not there for what they would need to charge to become profitable.

        • gruez 3 hours ago

          >when the competition is.. near-equivalent-capability DeepSeek/Qwen/Llama on a $1k GPU with a break-even of 5 months of subscription costs

          Lol no. Chinese AIs are definitely not "near-equivalent-capability". The empirical proof is pretty obvious: how many people have you heard talking about using their codex/claude code subscription vs their z.ai or qwen subscription? Moreover even the Chinese models require epic amounts of GPUs to run the full version, eg. https://apxml.com/models/glm-51 needs 1515 GB to run, and that's with a measly 1024 token context. To get it to run on your "$1k GPU" you'd need to quantize it, making it even dumber.
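          The memory arithmetic behind that claim is straightforward; a rough sketch, with an assumed parameter count purely for illustration (not a confirmed model size):

```python
# Back-of-the-envelope VRAM needed just to hold an LLM's weights.
# Ignores KV cache and activations, which add substantially more.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """GB needed for the weights alone at a given quantization level."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assumed 355B-parameter model, illustrative only.
for bits in (16, 8, 4):
    print(f"355B @ {bits}-bit: {weight_memory_gb(355, bits):.0f} GB")
# 16-bit: 710 GB, 8-bit: 355 GB, 4-bit: ~178 GB
```

Even aggressive 4-bit quantization leaves hundreds of GB of weights, which is why a $1k consumer GPU with 24 GB of VRAM can only host much smaller (and weaker) distilled or quantized variants.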

          • parliament32 3 hours ago

            Today, sure. But we already see diminishing returns with Claude releases, and we know the open models are closing the gap (~6 months behind according to the benchmarks). And when the pitch is "our models are 5% better but cost $200/mo.. also here's a mountain of restrictions" it just won't make sense anymore. Give it a year or two.

            I could see the "avoid the hardship of running a local model for $20/mo" angle but Anthropic has shown they have little interest in those customers.

            • gruez 2 hours ago

              >and we know the open models are closing the gap (~6 months behind according to the benchmarks).

              Looking at just the benchmarks might be misleading: https://x.com/scaling01/status/2050616057191072161

              • parliament32 an hour ago

                Good article. But it concludes with "Open models may be only 4–5 months behind on coding-heavy, benchmark-visible tasks... the gap is likely much larger and closer to 8 months."

                So, fine. In 2024, being 8 months behind was massive. In 2025, pretty big. This year.. I guess CC has improved a bit between October and now? How much do you think it'll matter a few more years down the line?

                Even now.. I bet a non-trivial number of people would happily be 8 months behind just to avoid another rent-seeker. And this will only get worse over time, which makes it an unwinnable situation for Anthropic. Hence all the panicked flailing about with restricting tooling and trying to get something even resembling a moat.

    • pigpag 4 hours ago

      [dead]

  • undefined 4 hours ago
    [deleted]
  • jameskilton 5 hours ago

    > At Anthropic, we build AI to serve humanity’s long-term well-being.

    If Anthropic actually cared about humans, they would have the best customer support (staffed by humans, for humans) and communications team (again, staffed by humans, for humans).

    As both of these are actually on par with Silicon Valley standards (between mediocre and atrociously bad), Anthropic cannot and should not be trusted with anything to do with AI, because whatever they do will not benefit humanity.

    • alach11 4 hours ago

      > If Anthropic actually cared about humans, they would have the best customer support (staffed by humans, for humans)

      I know Anthropic support is slow from firsthand experience, but it has to be pretty difficult to scale support 10-80x per year. And even more so when you have a long-tail of very low revenue usage in the form of $20/month subscriptions.

      • tomashubelbauer 4 hours ago

        There is basically no support to speak of. Scaling a zero is not hard. You can be paying 200 USD a month with barely any chance of ever hearing back. Your best chance of getting support from Anthropic is the same as with any other big tech company: have a Twitter following or know someone who works there.

      • ausbah 4 hours ago

        their world-changing agents surely make this a non-problem?

    • ungovernableCat 4 hours ago

      Extremely cynical take, but they're probably being honest. They wanna serve humanity. But maybe they only consider a small part of the population to be relevant humans.

      • Applejinx 4 hours ago

        To whom?

        • ungovernableCat 2 hours ago

          Anyone for whom "paying salaries" is a problem they wanna solve.

    • adampunk 4 hours ago

      How would you staff a support line for a product with a billion users?

      • 0gs 4 hours ago

        it's hard, but not THAT hard, to find a few dozen people who can deal with large volumes of support tickets every day. so for a company like anthropic, you'd use a customized claude to triage and then those few dozen people spend all day actually caring about solving users' problems. a contract with fin fka intercom (lol) to offload this is a step in the wrong direction imo, but then nobody pays for support so it's hard to turn it into a revenue stream.

        • adampunk 4 hours ago

          I'm sorry but a few dozen people actually caring about the problems of a billion users is a fart in a windstorm. You might as well hire a half-dozen to care, or none, for all the work you'll do. You'd need a dozen people just to design a scheduler for handling tickets only to watch that catch fire too.

          I don't get it. None of the hyperscalers have human support teams at scale because it's obviously infeasible. Why, just because it would be nice, do we take leave of the requirement that something actually be possible before demanding it?

          • 0gs 40 minutes ago

            oh i think i agree, with the economics tech companies (all companies, really) and their users currently accept/demand.

            but if caring about and solving customer problems was an actual income driver for a company, it could be very different.

            i don't think that's going to happen, because i think most users (like Anthropic's) will continue to refuse to pay >$0 for support -- or to claim that their subscription payment should somehow also cover support, which is ridiculous, since they can see with their own "eyes" how little support their "compute subscription" gets them -- and thus companies will continue to invest ~$0, if not less, in meaningful support models.

            it still blows my mind that nobody is willing to try charging people an extra $20 a month for unlimited support calls. most customers are DESPERATE for people to talk to about their problems.

            instead, they all just try to winnow the cost down as low as possible, and then point to the expense to explain the degradation of service.

          • 0gs 36 minutes ago

            also, remember that MOST of those "billion users" generally don't have problems that require product support expertise every day. if each of them were still paying a retainer for access to high-touch support, all kinds of crazy fun stuff would be possible.

          • sfifs 4 hours ago

            Not infeasible, just allows lower net margins.

      • tonyedgecombe 4 hours ago

        Does Anthropic have a billion users?

      • troyvit 4 hours ago

        That's just it. If they were prioritizing humans they'd have a product with a measly million users, charge more, and offer great support. Their game isn't a good product though; their game is scale, because they think that's the only way to win, and winning is the only way to survive.

        • brookst 4 hours ago

          Wait, how would limiting a great tool to 0.1% of the TAM demonstrate caring for humans?

          Are you picturing them running a lottery for who’s allowed to use it, or an auction?

          And with the loss of scale economies, it would have to be much more expensive.

          So you end up charging, what, $10,000/month and only making it available to the very wealthy?

          I don’t see how this game plan is better for humans. And I’m honestly not being snarky. Have you thought through how your proposed limits would work? Am I missing something?

          • troyvit 3 hours ago

            I mean look at how Apple prices their computers and phones, or how WSJ charges for subscriptions, or how "Linux" keeps its market small by being awful at marketing. The point is there are plenty of ways to scale sustainably and support your customer base in a long-term way that keeps them, and it doesn't seem like Anthropic is doing that.

        • mock-possum 4 hours ago

          Love how you’re literally saying “instead of serving humanity they should serve the wealthiest 0.1%”

          Very humanitarian

          • troyvit 3 hours ago

            Honestly I never thought about it that way, but I do think that's an exaggeration. I don't see any believable sign that Anthropic's goal was ever to "serve humanity." That said, how do you serve humanity properly? Do you scale a mediocre product to a billion people and treat them like shit or do you build it deliberately and support what you make, even if that costs more?

            You sound like "AI" is something people deserve for free when clearly, if you look at the garbage energy footprint alone, it's going to have to cost. Supporting it is going to cost even more.

            P.S. How can you "serve humanity" if you literally don't support the humans who use your stuff?

      • maplethorpe 4 hours ago

        Don't have a billion users if you can't offer them support?

        • alehlopeh 4 hours ago

          Like, what? Since when can you control how many people want your product?

      • skinfaxi 4 hours ago

        How much budget have they allocated to support?

      • sfifs 4 hours ago

        With lower margins, of course. Walmart, Indian Railways, major airlines, etc. all support massive user bases comparable to or bigger than the paid tiers of these apps. But of course the cult of Big Shareholder value creation means the CEO that does this, especially in the US, will be fired.

      • Wowfunhappy 4 hours ago

        I mean, the simplistic answer is that if a billion people are paying you, you should be able to hire a proportional number of support staff, because you're getting additional revenue from each customer.

        I can imagine scaling may be difficult, but that should be a temporary problem.
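        That proportionality argument can be sketched with made-up numbers (none of these are real Anthropic figures):

```python
# Sketch: support headcount scales roughly linearly with paying users.
# Every input below is an illustrative assumption, not real data.
users = 20_000_000             # paying subscribers (assumed)
tickets_per_user_month = 0.05  # 1 in 20 users files a ticket monthly (assumed)
tickets_per_agent_month = 400  # ~20 tickets/workday per agent (assumed)

# If revenue per user is fixed, this cost scales with users too,
# so support spend stays a constant fraction of revenue.
agents = users * tickets_per_user_month / tickets_per_agent_month
print(f"Agents needed: {agents:,.0f}")  # 2,500
```

The point being: the headcount is large in absolute terms but constant per user, so "we can't scale support" is a margin choice, not an impossibility.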

      • cycomanic 4 hours ago

        It's funny how Silicon Valley bros always talk like making real-world things is essentially impossible. I mean, Walmart and Aldi serve >200 million customers a week. How do they manage that? I can tell you that's much harder than customer support for an online product.

        As a side note, how do you make up that billion user number? Claude has 10 million users.

      • poszlem 4 hours ago

        Imagine if they had access to a good AI! They don't even have bot support.

  • tommek4077 4 hours ago

    If we're being honest, everyone understood that this was a workaround that was not there to last.

  • davidw 4 hours ago

    LLMs represent a big shift of power towards capital until such time as local ones are 'good enough'.

    • dormento 4 hours ago

      That's true. But keep in mind capital can be trivially used to influence policy so that local ones are eventually disallowed.

      • XorNot 4 hours ago

        When has that ever happened though?

        LLMs are software; there's no plausible way to stop them running locally.

        • AlexandrB 4 hours ago

          California is trying to ban the sale of 3d printers that don't detect and block "gun parts" from being printed[1]. All Anthropic and friends need is some kind of safety rationale and we won't be able to buy computers that can run local models.

          The plausible way to do this is to force all software through some kind of signing process. This would be trivial for Apple to pull off and not much harder for Microsoft. On the Linux side, I expect the systemd folks would be happy to add some kind of signature checking to "head off the inevitable".

          [1] https://old.reddit.com/r/3Dprinting/comments/1rek7ky/new_cal...

        • pessimizer 3 hours ago

          They're already nearly there. Once everybody's computers are rooted for age verification, with government-approved OSes, it's barely even a step to only allow government-approved AI on them.

          For some reason, people assume that mandatory age/identity verification on people's machines is primarily geared toward reporting across the network. If you wanted to age-restrict across the network, there are any number of trivial ways to do it that they have never even entertained. The reason that the first victims of this legislation are operating systems is because they, like Apple, Google, and Microsoft already have, want to restrict the software you can run.

          They have to make sure that you can't have AI that can make pictures of naked children, or naked anything, or that might "underreport" the number of casualties in the Tiananmen Square protests, or could cast doubt on the vaccine (whichever vaccine), or say anything that could be interpreted as anti-Semitic (like that the Palestinians aren't savage animals), etc., etc....

          This is already easy to get public support on, because in the same way they whipped up bizarre mind control allegations against genuinely evil social media companies to throw the public off the scent, the public is being groomed with absolutely bizarre and incoherent predictions of the evils of generative AI in order to throw them off the actual evils of the people behind generative AI. The same way that the anti-social media agitprop just resulted in TikTok being sold to explicit propagandists during a genocide and age attestation (as the social media giants do business without interruption), the "AI scaremongering" is just going to result in physical restrictions on individuals running AI - it will be tracked like explosives or nuclear material. The giant AI companies will be sold as the solution, just like closed platform software "stores" from Apple and Google are sold as consumer protection.

          > When has that ever happened though?

          Microsoft has to sign Linux so it can be installed under Secure Boot. Encryption was regulated as arms export, and is fully under attack again.

    • adamors 4 hours ago

      Yes, this is the main point, employees will have less and less leverage (I'm even seeing AI doing interviews now, good luck). Soon we'll be explaining to an AI why we aren't as productive as two weeks ago.

      • jnovek 4 hours ago

        AI interviews are a hard “no”. If I’m going to invest my time in an interview, the company must as well.

  • ChrisArchitect 4 hours ago
  • undefined 4 hours ago
    [deleted]
  • siliconpotato 4 hours ago

    Find it surprising that there are people still under the delusion that they have an audience of humans on twitter worth speaking to

    • skrebbel 4 hours ago

      You think a bot cross-posted this tweet to HN?

    • Imustaskforhelp 4 hours ago

      José Valim is the creator of Elixir, for reference. He definitely has a fan following within the Elixir community. So it's not so much about Twitter as about the guy.

      I mean, even off the top of my head, I still remember when José replied to me, and it was a highlight for a few days as I told my friend about it: https://news.ycombinator.com/item?id=44234633

      > I have a fun anecdote. About 5-6 years ago, Elixir completely disappeared from the top 100 after spending some time in the top 50. People reached out to me and then I reached out to TIOBE to understand why and the reason given was "bad presence on Amazon".

      > After further investigation, the root cause seemed to be that we finally had enough published Elixir books. At the time, if you searched for "xyz programming" on Amazon and only found a few results, Amazon would pad those results with non-relevant entries. However, because Elixir reached about 20-30 books, we were no longer padded, so we suddenly got worse rankings than every other language with only a handful of books. This happened on every Amazon domain they searched on, so it compounded and effectively kicked us out of the top 100 altogether. This all happened at a time Elixir language activity had already reached top 25 on GitHub PRs/stars.

      So although my comment has gotten a little offtopic, people have literally written books about Elixir (the language he created).

      My point is, people like to listen to José, he's a really chill guy from what I know of him, and Elixir feels like a great language :-D

      • DiabloD3 4 hours ago

        That doesn't actually answer the original poster's commentary, however.

        Humans have left Twitter; it's all propaganda and spam bots spamming and propagandizing each other.

        José Valim needs to move to either Bluesky (if he prefers to stay within the corporate ecosystem) or Mastodon (which is where the entirety of the FOSS universe went).

        • skinfaxi 4 hours ago

          Humans are still on twitter. Maybe the people José Valim wants to reach are on twitter and not those other platforms.

        • Imustaskforhelp 4 hours ago

          > José Valim needs to move to either Bluesky (if he prefers to stay within the corporate ecosystem) or Mastodon (which is where the entirety of the FOSS universe went).

          Personally, I am not on X so much as I am on Bluesky, and I would really appreciate José joining Bluesky.

          But at the end of the day, I might take issue with the idea of "needs".

          Nobody needs to do anything. It's his freedom, and I just searched: José is literally on Bluesky (https://bsky.app/profile/did:plc:6h6jhmuogujxac24oilywd45), but his account has been inactive since his last message 11 months ago.

          So I think he's open to new platforms, but old habits die hard, perhaps. I don't wish to defend X, because I don't particularly like it, but being honest, it is what it is.

          > Humans have left Twitter, its all propaganda and spam bots just spamming and propagandizing each other.

          Can't say about all, but I can confirm that when I tried to make a new account and post something, I was literally recommended tweets basically saying "like this tweet/follow us to get 1000 followers or buy these followers" when I had posted a video for a product.

    • redsocksfan45 2 hours ago

      [dead]